\section{Introduction} A primary motivation for this study is Parkinson's disease, which can cause an involuntary shaking that typically affects the distal portion of the upper limbs, and difficulty initiating motion. For patients with advanced Parkinson's disease who do not respond to drug therapy, electrical deep brain stimulation (DBS), an FDA-approved therapeutic procedure, may offer relief \cite{bena91}. Here, a neurosurgeon guides a small electrode into the subthalamic nucleus or globus pallidus interna (GPi); the electrode is connected to a pacemaker implanted in the chest which sends periodic electrical pulses directly into the brain tissue. The efficacy of DBS for the treatment of Parkinson's disease has been found to depend on the frequency of stimulation, with high-frequency stimulation (70 to 1000 Hz and beyond) being therapeutically effective \cite{bena91,rizz01,moro02}. The generally accepted therapeutic range is 130--180 Hz \cite{volk02,kunc04}. Experimental evidence has suggested that motor symptoms of Parkinson's disease are associated with pathological synchronization of neurons in the basal ganglia, and that DBS desynchronizes the neural activity~\cite{uhlh06,chen07,hamm07,levy00,schn05}. DBS has also shown promising results in treating other neurological conditions, for which the stimulation electrode is implanted in the GPi (for dystonia) or the thalamus (for Tourette syndrome and essential tremor)~\cite{savi12,bena02}. While the precise mechanisms which underlie the effectiveness of DBS are not fully understood, theoretical studies have shown that DBS-like stimulation consisting of periodic pulses applied to neural oscillator populations can lead to chaotic desynchronization~\cite{wils11} or clustering behavior~\cite{wils15cluster}, in which subpopulations of the neurons are synchronized, but the subpopulations are desynchronized with respect to each other. 
Clustering has also been found in theoretical studies of coordinated reset, in which multiple electrodes deliver inputs which are separated by a time delay~\cite{luck13,lysy11,lysy13,tass03a}. These studies, along with clinical successes with coordinated reset~\cite{adam14}, point to clustering as an attractive objective for designing stimulation properties; this has motivated the design of single control inputs which promote clustering~\cite{matc18,mong19_physicad,wils20}, in contrast to methods which seek to fully desynchronize the neural activity~\cite{tass03,nabi13,wils14a,mong20}. Notably, clustering has at least two important differences from chaotic desynchronization: clustered states often exist over a much larger parameter range than chaotic desynchronization, a possible explanation for why effective DBS parameters are easier to find than chaotic desynchronization would suggest; and clustered states may induce plasticity changes more effectively than chaotic desynchronization, which may explain why benefits are more persistent for some kinds of stimulation mechanisms than others (cf.~\cite{adam14,mong19_physicad}). In this paper, we will focus on clustering which arises from a single stimulation electrode, unlike coordinated reset, which uses multiple electrodes. Despite substantial data backing the general efficacy of DBS, it can have side effects including disorientation, memory deficits, spatial delayed recall, response inhibition, episodes of mania, hallucinations, or mood swings, as well as impairment of social functions such as the ability to recognize the emotional tone of a face~\cite{cyro16,buhm17}. Our study develops tools which can help to identify different stimuli that result in the same clustering behavior; our hope is that the identification of these alternatives will allow neurologists to consider different stimuli in order to find those which are effective at treating neurological disorders while minimizing the severity of side effects. 
In this paper, we investigate how the details of clustering due to periodic pulses of the type used in DBS can be understood in terms of one-dimensional maps defined on the circle. As a first step, Section~\ref{section:phase_reduction} describes phase reduction, a powerful classical technique for the analysis of oscillators in which a single variable describes the phase of the oscillation with respect to some reference state. Section~\ref{section:simulations} shows results from simulations of populations of neural oscillators stimulated by periodic pulses of the type used for DBS; this illustrates the different types of clustering which can occur, and motivates the theoretical analysis. Section~\ref{section:identical_stimuli} derives and investigates the one-dimensional maps which can be used to understand the types of clusters which occur, their stability properties, and their basins of attraction. Section~\ref{section:alternating_stimuli} then demonstrates how this analysis in terms of maps can be generalized to consider stimuli that consist of pulses with alternating properties, which provide additional degrees of freedom for DBS stimulus design. Section~\ref{section:conclusion} summarizes the results. The models for the neurons considered in this paper are given in the Appendix. \section{Phase Reduction} \label{section:phase_reduction} A common way to describe the dynamics of neurons is to use conductance-based models such as the Hodgkin-Huxley equations~\cite{hodg52d}. Such models are typically high-dimensional and contain a large number of parameters, which can make them unwieldy for simulations of large neural populations. A powerful technique for the analysis of oscillatory neurons, whose dynamics are described by a stable periodic orbit, is the rigorous reduction of conductance-based models to phase models, with a single variable $\theta$ describing the phase of the oscillation with respect to some reference state~\cite{winf01,kura84,mong19}. 
Suppose that our conductance-based model is described by the $n$-dimensional dynamical system \begin{equation} \label{eq:Phase_reduction_1} \frac{d {\bf x}}{dt}= {\bf F}({\bf x}) , \qquad {\bf x} \in \mathbb{R}^n \quad (n \geq 2), \end{equation} with a stable periodic orbit $\gamma(t)$ with period $T$. For each point ${\bf x}^\ast$ in the basin of attraction of $\gamma(t)$ there exists a corresponding phase $\theta({\bf x}^\ast)$ such that~\cite{guck75,winf01} \begin{equation} \label{eq:Phase_reduction_2} \lim\limits_{t \to \infty} \left|{\bf x}(t) - \gamma \left( t+\frac{T}{2\pi}\theta({\bf x}^\ast) \right) \right| = 0, \end{equation} where ${\bf x}(t)$ is the trajectory of the initial point ${\bf x}^\ast$ under the given vector field. The asymptotic phase $\theta({\bf x})$ takes values in $[0,2\pi)$. In this paper, $\theta = 0$ will represent the phase at which the neuron fires an action potential. Isochrons are level sets of $\theta({\bf x})$, and we define isochrons such that the phase of a trajectory evolves linearly in time both on and off of the periodic orbit~\cite{winf67,winf01}. As a result, for the entire basin of attraction of the periodic orbit, \begin{equation} \label{eq:Phase_reduction_3} \frac{d\theta}{dt}= \frac{2\pi}{T} \equiv \omega. \end{equation} If we now consider the dynamical system \begin{equation} \label{eq:Phase_reduction_4} \frac{d{\bf x}}{dt}= {\bf F}({\bf x}) + {\bf U}(t), \qquad {\bf x} \in \mathbb{R}^n, \end{equation} where ${\bf U}(t) \in \mathbb{R}^n$ is an infinitesimal control input, phase reduction gives the one-dimensional system~\cite{kura84,brow04,mong19} \begin{equation} \label{eq:Phase_reduction_5} \frac{d\theta}{dt}= \omega + {\bf U}(t)^T {\bf Z}(\theta). 
\end{equation} In this equation, ${\bf Z}(\theta)$ is the gradient of $\theta$ evaluated on the periodic orbit, and is known as the phase response curve (PRC)~\cite{winf01,erme10,neto12}; it represents the change in phase that the control input will cause when applied at a given phase. In this paper, we consider electrical current inputs which only act in the voltage direction defined by the unit vector $\hat{V}$, i.e., ${\bf U}(t) = u(t) \hat{V}$, with the corresponding phase reduction \begin{equation} \label{eq:Phase_reduction_6} \frac{d\theta_{i}}{dt}= \omega + Z(\theta_{i}) u(t). \end{equation} Here, $\theta_i$ represents the phase of the $i^{\rm th}$ neuron, $\omega$ is the natural frequency of the neuron in radians per millisecond, $Z(\theta) = \frac{\partial \theta}{\partial V}$ is the component of the PRC in the voltage direction, and $u(t)$ is the input. For the populations considered in this paper, we assume that the neurons are identical, uncoupled, and noise-free, and that they all receive the same input; these assumptions allow a more detailed analysis to be performed. In the next section, we show simulation results for populations of neurons described by such phase models with periodic pulses of the type used for DBS. \section{Simulation Results for Identical Periodic DBS Pulses} \label{section:simulations} In this section, we show simulation results for populations of neurons stimulated by periodic pulses of the type used for DBS; these results will inspire the analysis in Section~\ref{section:identical_stimuli}. To illustrate a range of clustering behaviors, we show simulations for prototypical systems which represent two common types of neurons~\cite{rinz98}: as a Type I neuron model, we consider the model for thalamic neurons from~\cite{rubi04}, and as a Type II model we consider the Hodgkin-Huxley equations~\cite{hodg52d}. 
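The phase model~(\ref{eq:Phase_reduction_6}) is straightforward to integrate numerically. The sketch below uses forward Euler for a population of identical, uncoupled phase oscillators; the sinusoidal PRC and the zero input are placeholder assumptions for illustration only, not the PRCs or stimuli computed in this paper.

```python
import numpy as np

def simulate_phases(theta0, omega, Z, u, dt, t_end):
    """Forward-Euler integration of d(theta_i)/dt = omega + Z(theta_i) u(t)
    for a population of identical, uncoupled phase oscillators (time in ms)."""
    history = [np.mod(np.asarray(theta0, dtype=float), 2 * np.pi)]
    t = 0.0
    while t < t_end:
        th = history[-1]
        history.append(np.mod(th + dt * (omega + Z(th) * u(t)), 2 * np.pi))
        t += dt
    return np.array(history)  # shape: (num_steps + 1, num_neurons)

# Placeholder assumptions for illustration: a sinusoidal PRC and zero input.
Z = lambda th: np.sin(th)
u_zero = lambda t: 0.0
omega = 0.429  # natural frequency of the Hodgkin-Huxley phase model, rad/ms

theta0 = np.linspace(0, 2 * np.pi, 10, endpoint=False)
traj = simulate_phases(theta0, omega, Z, u_zero, dt=0.01, t_end=10.0)
# With zero input, each phase simply advances at its natural frequency.
```

Any stimulus waveform, such as the pulse train defined in the next section, can be passed in as `u`.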
These models are not meant to correspond to the neurons directly relevant to Parkinson's disease in human patients; rather, they are used to illustrate typical clustering behaviors for populations of neural oscillators under DBS-like stimuli. The full equations and parameters for these models are given in the Appendix. For our simulations, we use the corresponding phase models. For reference, for these parameters the thalamic neurons have $\omega = 0.748$ rad/ms, and the Hodgkin-Huxley neurons have $\omega = 0.429$ rad/ms. The PRC functions for these neurons are shown in Figure~\ref{Z}(a) and (b), respectively. Each PRC was calculated numerically using XPP~\cite{erme02}, and is approximated by a Fourier series. \begin{figure}[tb] \begin{center} \leavevmode \epsfxsize=3.2in \epsfbox{Z.pdf} \end{center} \caption{Panels (a) and (b) show the phase response curves $Z(\theta)$ of the thalamic (Type I) and Hodgkin-Huxley (Type II) neurons considered in this paper, respectively. \label{Z}} \end{figure} \begin{figure}[tb] \begin{center} \leavevmode \epsfxsize=3.2in \epsfbox{one_kick_bigger.pdf} \end{center} \caption{Periodic sequence of identical pulses. \label{one_kick}} \end{figure} The input $u(t)$ that we consider, shown in Figure~\ref{one_kick} and inspired by DBS stimuli~\cite{mont10}, is a periodic sequence of identical charge-balanced pulses parameterized by amplitude $u_{max}$, period $\tau$ (with corresponding frequency $1/\tau$), pulse width $p$, and multiplier $\lambda$ (the ratio of the time that the pulse is negative to the time that the pulse is positive). Mathematically, $u(t)$ is given by: \begin{equation} \label{eq:Pulsatile_Stimulus} u(t) = \left\{ \begin{array}{ll} u_{max} & \bmod(t,\tau) \leq p \\ u_{min} \equiv -\frac{u_{max}} {\lambda} & p < \bmod(t,\tau) \leq (\lambda+1)p\\ 0 & \mbox{otherwise}. \end{array} \right. 
\end{equation} Unless otherwise stated, we will use $u_{max}$ corresponding to a current density of $20 \mu A/cm^2$, $p = 0.5$ ms, and $\lambda = 3$ in our simulations. We consider different frequencies of stimulation between 70 and 300 Hz, which includes the typical therapeutic range of 130--180 Hz for DBS treatment of Parkinson's disease. We simulated 500 Hodgkin-Huxley neurons with initial phases evenly spaced between $0$ and $2 \pi$, corresponding to an initial uniform phase distribution. The stimulation frequency was varied from 70 Hz to 300 Hz in increments of 5 Hz. Figure~\ref{type2_phase_vs_freq} shows the final phases after 40 periods of stimulation, after transients have decayed away. The colors indicate the initial phases of the neurons. Not all colors are visible for most stimulation frequencies because the final phases of entire subpopulations of neurons are nearly identical, and only one representative initial phase can be seen. All of the neurons which have nearly the same final phase are part of the same cluster. Figure~\ref{phase_versus_time} shows the time series of the phases of a population of Hodgkin-Huxley neurons for selected frequencies, and helps us to interpret the results shown in Figure~\ref{type2_phase_vs_freq}. For example, Figure~\ref{phase_versus_time}(a) shows that for a $100$ Hz stimulus the neurons separate into three clusters, as is also the case for $250$ Hz as shown in Figure~\ref{phase_versus_time}(e). (Notice that Figure~\ref{type2_phase_vs_freq} shows three possible final phases for each of these frequencies, corresponding to these three clusters.) Figure~\ref{phase_versus_time}(b) shows that for a $150$ Hz stimulus they separate into two clusters. For a $180$ Hz stimulus, there is no clustering; see Figure~\ref{phase_versus_time}(c). 
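As a concrete reading of the stimulus definition~(\ref{eq:Pulsatile_Stimulus}), a minimal implementation follows (times in ms; the default parameter values are those used in the simulations, with $\tau$ set for 150 Hz):

```python
import numpy as np

def dbs_pulse(t, u_max=20.0, p=0.5, lam=3.0, tau=1000.0 / 150.0):
    """Charge-balanced periodic pulse train of Eq. (pulsatile stimulus):
    u_max for the first p ms of each period, then u_min = -u_max/lam for
    the next lam*p ms, then zero for the rest of the period tau."""
    s = np.mod(t, tau)  # position within the current period
    if s <= p:
        return u_max
    elif s <= (lam + 1.0) * p:
        return -u_max / lam
    return 0.0
```

The train is charge balanced: over one period the positive lobe contributes $u_{max}\, p$ and the negative lobe $-(u_{max}/\lambda)(\lambda p)$, which cancel.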
By carefully looking at Figure~\ref{phase_versus_time}(d), one sees that for a $185$ Hz stimulus there are five clusters, and from Figure~\ref{phase_versus_time}(f) that for a $295$ Hz stimulus there are seven clusters, as expected from the final states shown at these frequencies in Figure~\ref{type2_phase_vs_freq}. Such clustering behavior and non-clustering (chaotic) behavior has been seen in other studies, such as~\cite{wils11} and~\cite{wils15cluster}. \begin{figure}[!t] \begin{center} \includegraphics[width=0.45\textwidth]{type2_phase_vs_freq_colorbar_stretched_label.pdf} \end{center} \caption{The final phases $\theta$ of Hodgkin-Huxley neurons drawn from an initial uniform distribution as a function of stimulation frequency, after 40 periods of stimulation. Colors correspond to the neurons' initial phases. \label{type2_phase_vs_freq}} \end{figure} \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=3.2in \epsfbox{phase_versus_time_stacked.pdf} \end{center} \caption{Time series showing the phases of Hodgkin-Huxley neurons drawn from an initial uniform distribution for frequencies (a) 100 Hz, (b) 150 Hz, (c) 180 Hz, (d) 185 Hz, (e) 250 Hz, and (f) 290 Hz. The titles of these panels indicate the number of clusters found after transients have decayed away. For (c), clusters do not form. For this and subsequent time series figures, $t$ is measured in ms, and the colors indicate the initial phases of the neurons, with colorbar as in Figure~\ref{type2_phase_vs_freq}. \label{phase_versus_time}} \end{figure} Inspired by neural synchrony in Parkinson's patients, we also considered an initial partially synchronized neural population, with phases distributed according to a von Mises distribution~\cite{best79} centered at $\theta = 0$: \begin{equation} \rho_0(\theta) = \frac{e^{\kappa \cos \theta}}{2 \pi I_0 (\kappa)}, \end{equation} where $I_0(\kappa)$ is the modified Bessel function of the first kind of order 0. This distribution is similar to a Gaussian distribution, but on a circle. 
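Such a partially synchronized initial condition can be sampled directly with NumPy's built-in von Mises generator; the order-parameter check at the end is an added synchrony illustration, not a quantity used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Partially synchronized initial phases: von Mises, centered at theta = 0,
# with concentration kappa = 50 as in the text; shifted into [0, 2*pi).
kappa = 50.0
theta0 = np.mod(rng.vonmises(mu=0.0, kappa=kappa, size=500), 2 * np.pi)

# Kuramoto order parameter R = |<exp(i*theta)>|: close to 1 for large
# kappa (tight synchrony), near 0 for a uniform phase distribution.
R = np.abs(np.mean(np.exp(1j * theta0)))
```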
We simulated 500 Hodgkin-Huxley neurons with initial phases distributed according to the von Mises distribution with $\kappa = 50$. As for Figure~\ref{type2_phase_vs_freq}, the stimulation frequency was varied from 70 Hz to 300 Hz in increments of 5 Hz. Figure~\ref{type2_phase_vs_freq_von_mises} shows the final phases after 40 periods of stimulation, after transients have decayed away. We see that the final phases of the neurons from the initial von Mises distribution lie on a subset of the final phases of the neurons from the initial uniform distribution. For example, when the stimulation frequency is 100 Hz, the neurons from the initial von Mises distribution are concentrated in two of the three clusters which exist for the initial uniform distribution. \clearpage \begin{figure}[tb] \begin{center} \leavevmode \epsfxsize=3.2in \epsfbox{type2_phase_vs_freq_von_mises_colorbar_stretched_label.pdf} \end{center} \caption{As a function of stimulation frequency, the final phases $\theta$ of Hodgkin-Huxley neurons drawn from an initial von Mises distribution after 40 periods of stimulation are shown as black $*$'s, overlaid on the final phases of Hodgkin-Huxley neurons drawn from an initial uniform distribution (as was shown in Figure~\ref{type2_phase_vs_freq}). \label{type2_phase_vs_freq_von_mises}} \end{figure} We also designed an algorithm to detect the size of clusters in a population. The algorithm groups the phases of neurons in a population at each timestep into clusters by sorting the phases in ascending order and checking if the $i^{\rm th}$ phase is within $\epsilon$ of the $(i+1)^{\rm th}$ phase for an appropriate small value of $\epsilon$. If so, the size of the current cluster is increased by one. If not, the algorithm creates a new cluster. The process is repeated until all neurons have been grouped into clusters. 
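A sketch of this grouping algorithm follows; the tolerance $\epsilon$ and the merging of the first and last groups across the $2\pi$ wrap-around (left implicit in the description above) are choices of this sketch.

```python
import numpy as np

def cluster_sizes(phases, eps=0.05):
    """Group phases (radians) into clusters: sort them, start a new cluster
    whenever the gap to the next phase exceeds eps, then merge the first and
    last clusters if they are within eps across the 2*pi wrap-around."""
    srt = np.sort(np.mod(phases, 2 * np.pi))
    if srt.size == 0:
        return []
    sizes = [1]
    for a, b in zip(srt[:-1], srt[1:]):
        if b - a <= eps:
            sizes[-1] += 1   # same cluster as the previous phase
        else:
            sizes.append(1)  # gap exceeds eps: start a new cluster
    if len(sizes) > 1 and srt[0] + 2 * np.pi - srt[-1] <= eps:
        sizes[0] += sizes.pop()  # merge across the wrap-around
    return sizes
```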
Figure~\ref{cluster_sizes} shows the number of neurons in the different clusters over a range of frequencies for the initial uniform distribution (for which three clusters are populated) and von Mises distribution (for which only two clusters are populated). As we will see in Section~\ref{section:identical_stimuli}, this figure can be explained in terms of the basins of attraction of fixed points of iterates of a one-dimensional map defined on the circle. The initial phase of a given neuron will determine which cluster it ends up in. \begin{figure}[tb!] \begin{center} \leavevmode \epsfxsize=2.2in \epsfbox{cluster_sizes_stacked.pdf} \end{center} \caption{The number of Hodgkin-Huxley neurons in different clusters for a population size of 500, with initial (a) uniform and (b) von Mises distributions. \label{cluster_sizes}} \end{figure} We also considered populations of thalamic (Type I) neurons with the same stimuli~(\ref{eq:Pulsatile_Stimulus}) with $u_{max}$ corresponding to a current density of $20$ $\mu A/cm^2$, $p = 0.5$ ms, and $\lambda = 3$. We simulated 500 thalamic neurons with initial phases evenly spaced between $0$ and $2 \pi$, corresponding to a uniform distribution. The stimulation frequency was varied from 70 Hz to 300 Hz in increments of 5 Hz. Figure~\ref{type1_phase_vs_freq} shows the final phases after 40 periods of stimulation, after transients have decayed. Figure~\ref{phase_versus_time_type1} shows the time series of the phases of a population of such neurons for selected frequencies. Here we again see clustering for some frequencies (such as $250$ Hz), and non-clustering behavior for other frequencies (such as $200$ Hz). \begin{figure}[tb] \begin{center} \leavevmode \epsfxsize=3.2in \epsfbox{type1_phase_vs_freq_colorbar_stretched_label.pdf} \end{center} \caption{The final phases $\theta$ of thalamic neurons drawn from an initial uniform distribution as a function of stimulation frequency, after 40 periods of stimulation. 
Colors correspond to the neurons' initial phases. \label{type1_phase_vs_freq}} \end{figure} \begin{figure}[tb!] \begin{center} \leavevmode \epsfxsize=3.5in \epsfbox{phase_versus_time_type1_stacked.pdf} \end{center} \caption{Time series showing the phases of thalamic neurons drawn from an initial uniform distribution for frequencies (a) 200 Hz, and (b) 250 Hz. For (a), clusters do not form; for (b), there are two clusters after transients decay away. \label{phase_versus_time_type1}} \end{figure} In the next section, we derive and investigate one-dimensional maps which can be used to understand the types of clusters which occur in these simulations, along with their stability properties and their basins of attraction. \section{Analysis of Clusters due to Identical Pulses} \label{section:identical_stimuli} In this section, we show how the clustering behavior found in the simulations from Section~\ref{section:simulations} can be understood in terms of appropriate compositions of one-dimensional maps on the circle. We consider a system of neural oscillators subjected to a $\tau$-periodic sequence of pulses as shown in Figure~\ref{one_kick}, and described by the dynamics~\cite{wils15cluster} \begin{equation} \dot{\theta}_i = \omega + f(\theta_i) \delta({\rm mod}(t,\tau)), \qquad i = 1,\cdots,N. \end{equation} Here the response function $f(\theta)$ describes the change in phase due to a single pulse (including the positive current for time $p$, and the negative current for time $\lambda p$). If the pulse were a delta function with unit area, $f(\theta)$ would be equal to the infinitesimal PRC $Z(\theta)$; for more general pulses, it can be calculated using a direct method in which a pulse is applied at a known phase, and the change in phase is deduced from the change in timing of the next action potential~\cite{neto12}. 
We will think of the change in phase due to the pulse as occurring instantaneously, even though the pulse will typically have a finite duration; this will be a good approximation for pulses of short duration. Figure~\ref{f_type2} shows $f(\theta)$ for the Hodgkin-Huxley neurons considered in this paper for pulses as shown in Figure~\ref{one_kick} with $u_{max}$ corresponding to a current density of $20 \mu A/cm^2$, $p = 0.5$ ms, and $\lambda = 3$. \begin{figure}[h!] \begin{center} \leavevmode \epsfxsize=2.2in \epsfbox{f_type2_label.pdf} \end{center} \caption{Response function $f(\theta)$ which characterizes the phase response of Hodgkin-Huxley neurons to the stimulus, for $u_{max}$ corresponding to a current density of $20 \mu A/cm^2$, $p = 0.5$ ms, and $\lambda=3$. \label{f_type2}} \end{figure} To understand the clustering behavior, it will be useful to consider the map which takes the phase of a neuron to the phase exactly one forcing cycle later, cf.~\cite{wils15cluster}. To find this map, suppose that we start with $\theta(0^+) = \theta_0$, immediately after the start of a pulse, where we assume that we have already accounted for the effect of the pulse according to the function $f(\theta)$. The next pulse comes at time $\tau$. Up until time $\tau$, the phase evolves according to $\dot{\theta} = \omega$; therefore, \begin{equation} \theta(\tau^-) = \theta_0 + \omega \tau. \end{equation} Treating the change in phase due to the next pulse as occurring instantaneously, we have \begin{equation} \theta(\tau^+) = \theta_0 + \omega \tau + f(\theta_0 + \omega \tau). \end{equation} The system then evolves for a time $\tau$ without stimulus, giving \begin{equation} \theta(2 \tau^-) = \theta_0 + 2 \omega \tau + f(\theta_0 + \omega \tau); \end{equation} the next pulse at time $2 \tau$ gives \begin{equation} \theta(2 \tau^+) = \theta_0 + 2 \omega \tau + f(\theta_0 + \omega \tau) + f(\theta_0 + 2 \omega \tau + f(\theta_0 + \omega \tau)), \end{equation} and so on. 
It is useful to let~\cite{wils15cluster} \begin{equation} g(s) = s + \omega \tau + f(s + \omega \tau), \end{equation} which gives \begin{equation} \theta(n \tau^+) = g^{(n)} (\theta_0), \end{equation} where $g^{(n)}$ denotes the composition of $g$ with itself $n$ times, and $\theta_0$ is the initial state of the neuron. We look for fixed points of $g^{(n)}$, that is, solutions to $\theta^* = g^{(n)}(\theta^*)$; for such solutions, the phase has the same value after $n$ pulses as where it started. We are particularly interested in fixed points of $g^{(n)}$ which are not fixed points of $g^{(m)}$ for any positive integer $m$ satisfying $m < n$; then there will be $n$ fixed points of $g^{(n)}$ that correspond to points on a period-$n$ orbit of $g$. If \begin{equation} \left| \left. \frac{d}{d \theta} \right|_{\theta = \theta^*} (g^{(n)} (\theta)) \right|<1, \end{equation} then the fixed point $\theta^*$ of $g^{(n)}$ is stable, as is the corresponding period-$n$ orbit of $g$. Neurons which start with initial phases within the basin of attraction of a given fixed point of $g^{(n)}$ will asymptotically approach that fixed point under iterations of $g^{(n)}$. The $n$ different fixed points will each have a basin of attraction, so a uniform initial distribution of neurons will form $n$ clusters, one for each of these fixed points of $g^{(n)}$, cf.~\cite{wils15cluster}. We now illustrate how these maps can be used to understand the specific clustering behavior shown in Section~\ref{section:simulations}. As a first example, suppose that a population of Hodgkin-Huxley neurons is stimulated with frequency 150 Hz, corresponding to $\tau = 6.67$ ms. Figure~\ref{f150} shows $g(\theta)$ and $g^{(2)}(\theta)$. Fixed points of these maps correspond to intersections with the diagonal. We see that there are two stable fixed points for $g^{(2)}(\theta)$, at $\theta = 2.86$ and $\theta = 5.86$ (these fixed points are stable because the slope at the intersection is between $-1$ and $+1$). 
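The fixed points of $g^{(n)}$ and their stability can be located numerically by scanning for sign changes of $g^{(n)}(\theta) - \theta$ on the circle and refining by bisection. In the sketch below the response function $f$ is a hypothetical stand-in (the paper's $f(\theta)$ comes from the conductance-based models), so the particular roots it produces are illustrative only.

```python
import numpy as np

def make_g(f, omega, tau):
    """One-period map g(s) = s + omega*tau + f(s + omega*tau)."""
    return lambda s: s + omega * tau + f(s + omega * tau)

def iterate(g, n):
    """n-fold composition g^(n)."""
    def gn(s):
        for _ in range(n):
            s = g(s)
        return s
    return gn

def fixed_points(gn, num=20000, h=1e-6):
    """Fixed points of gn on the circle, found as sign changes of
    gn(s) - s (reduced mod 2*pi into [-pi, pi)), refined by bisection;
    stability judged from a finite-difference slope of gn."""
    d = lambda s: np.mod(gn(s) - s + np.pi, 2 * np.pi) - np.pi
    grid = np.linspace(0.0, 2 * np.pi, num, endpoint=False)
    roots = []
    for a, b in zip(grid, np.roll(grid, -1)):
        if b < a:
            b = 2 * np.pi  # last cell wraps around to 2*pi
        if d(a) * d(b) < 0:
            lo, hi = a, b
            for _ in range(60):  # bisection refinement
                mid = 0.5 * (lo + hi)
                if d(lo) * d(mid) > 0:
                    lo = mid
                else:
                    hi = mid
            s = 0.5 * (lo + hi)
            slope = (gn(s + h) - gn(s - h)) / (2 * h)
            roots.append((np.mod(s, 2 * np.pi), abs(slope) < 1.0))
    return roots  # list of (theta*, is_stable) pairs

# Hypothetical response function for illustration (not the paper's f):
f = lambda th: -0.5 * np.sin(th + 0.3)
g = make_g(f, omega=1.0, tau=np.pi)
fps = fixed_points(iterate(g, 2))  # fixed points of g^(2)
```

For this stand-in $f$, $g^{(2)}$ has two stable and two unstable fixed points, the analogue of the 2-cluster state discussed in the text.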
There are also two unstable fixed points for $g^{(2)}$ at $\theta = 1.305$ and $\theta = 4.685$, where the slope at the intersection is greater than 1. There are no fixed points for $g(\theta)$, but a cobweb analysis verifies that there is a period-2 orbit \[ \theta = 2.86 \rightarrow 5.86 \rightarrow 2.86 \rightarrow \cdots. \] These fixed points of $g^{(2)}$ correspond to a stable 2-cluster state for a population of oscillators, as shown in Figure~\ref{phase_versus_time}(b). We note that we can deduce the basin of attraction for the different stable fixed points of $g^{(2)}$; for example, the basin of attraction for the stable fixed point at $\theta = 2.86$ is the range $1.305 < \theta_0 < 4.685$, that is, between the two unstable fixed points. \begin{figure}[t!] \begin{center} \leavevmode \epsfxsize=2.2in \epsfbox{f150_highlight_stacked.pdf} \end{center} \caption{Maps $g(\theta)$ and $g^{(2)}(\theta)$ for Hodgkin-Huxley neuron with stimulation frequency $150$ Hz. Intersections with the diagonal dashed line indicate fixed points of the respective map. The dotted lines show $\theta$ values for the stable fixed points of the $g^{(2)}$ map. \label{f150}} \end{figure} As another example, suppose that a population of Hodgkin-Huxley neurons is stimulated with frequency 100 Hz, corresponding to $\tau = 10$ ms. Figure~\ref{f100} shows $g(\theta)$ and $g^{(3)}(\theta)$. We see that there are three stable fixed points for $g^{(3)}(\theta)$, at $\theta = 1.43$, $\theta = 3.37$, and $\theta = 5.86$ (these fixed points are stable because the slope at the intersection is between $-1$ and $+1$). There are no fixed points for $g(\theta)$, but a cobweb analysis verifies that there is a period-3 orbit \[ \theta = 1.43 \rightarrow 5.86 \rightarrow 3.37 \rightarrow 1.43 \rightarrow \cdots. \] These fixed points of $g^{(3)}$ correspond to a stable 3-cluster state for a population of oscillators, as shown in Figure~\ref{phase_versus_time}(a). \begin{figure}[b!] 
\begin{center} \leavevmode \epsfxsize=2.2in \epsfbox{f100_highlight_stacked.pdf} \end{center} \caption{Maps $g(\theta)$ and $g^{(3)}(\theta)$ for Hodgkin-Huxley neuron with stimulation frequency $100$ Hz. \label{f100}} \end{figure} We see that this map can also capture $n$-clusters for larger values of $n$. For example, for frequency 185 Hz, the $g$ and $g^{(5)}$ maps shown in Figure~\ref{f185} confirm that there is a stable period-5 orbit \[ \theta = 1.62 \rightarrow 3.38 \rightarrow 5.85 \rightarrow 2.05 \rightarrow 5.51 \rightarrow 1.62 \rightarrow \cdots, \] corresponding to the stable 5-cluster state shown in Figure~\ref{phase_versus_time}(d). \begin{figure}[t!] \begin{center} \leavevmode \epsfxsize=2.2in \epsfbox{f185_highlight_stacked.pdf} \end{center} \caption{Maps $g(\theta)$ and $g^{(5)}(\theta)$ for Hodgkin-Huxley neuron with stimulation frequency $185$ Hz. \label{f185}} \end{figure} Finally, if we apply these identical stimuli at $300$ Hz, we obtain a stable period-4 orbit for $g(\theta)$, \[ \theta = 0.82 \rightarrow 2.61 \rightarrow 3.32 \rightarrow 5.69 \rightarrow 0.82 \rightarrow \cdots, \] which is equivalent to two stable period-2 orbits for $g^{(2)}$, \[ \theta = 0.82 \rightarrow 3.32 \rightarrow 0.82 \rightarrow \cdots, \] \[ \theta = 2.61 \rightarrow 5.69 \rightarrow 2.61 \rightarrow \cdots, \] and to four stable fixed points for $g^{(4)}$, \[ \theta = 0.82, \qquad \theta = 2.61, \qquad \theta = 3.32, \qquad \theta = 5.69; \] see Figure~\ref{f300}. We show in the next section that it is possible to obtain similar dynamics with stimuli consisting of pulses with alternating properties. \begin{figure}[h!] 
\begin{center} \leavevmode \epsfxsize=2.2in \epsfbox{f300_highlight_stacked.pdf} \end{center} \caption{For the Hodgkin-Huxley neurons with stimulation frequency $300$ Hz, there is a stable period-4 orbit for $g$, which corresponds to two stable period-2 orbits for $g^{(2)}$, which in turn correspond to four stable fixed points for $g^{(4)}$. \label{f300}} \end{figure} We can understand the cluster sizes shown in Figure~\ref{cluster_sizes}(a) by looking at the basins of attraction of the different stable fixed points, as indicated in Figure~\ref{cluster_sizes_map} for $200$ Hz and $260$ Hz stimuli. The basin boundaries are at the phases of the appropriate unstable fixed points. When the initial phase distribution is uniform, the number of neurons which end up in each cluster is proportional to the size of the corresponding basin of attraction. For example, if there are 500 uniformly distributed neurons, this predicts that there will be 144, 173, and 183 neurons in Clusters I, II, and III, respectively, for a $200$ Hz stimulus, and 209, 133, and 159 neurons in Clusters I, II, and III, respectively, for a $260$ Hz stimulus. This is consistent with the results shown in Figure~\ref{cluster_sizes}(a). The number of neurons in each cluster for Figure~\ref{cluster_sizes}(b) would be determined by the number of neurons which are initially in the respective basin of attraction, as determined by the initial phase distribution; here, there were no neurons with initial phases that end up in Cluster III. \begin{figure}[tb] \begin{center} \leavevmode \epsfxsize=2.2in \epsfbox{cluster_sizes_map_stacked.pdf} \end{center} \caption{Basins of attraction for the different clusters for (a) $200$ Hz and (b) $260$ Hz stimuli. \label{cluster_sizes_map}} \end{figure} The same analysis techniques can also be used to understand the dynamics of thalamic neurons subjected to periodic pulses. 
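The proportionality argument above can be made concrete: for a uniform initial distribution, the predicted population of each cluster is $N$ times the arc length of its basin divided by $2\pi$, with the unstable fixed points as basin boundaries. A sketch with hypothetical boundary values (not those of Figure~\ref{cluster_sizes_map}):

```python
import numpy as np

def predicted_cluster_counts(unstable_fps, n_neurons):
    """Predicted cluster populations for a uniform initial phase distribution:
    the sorted unstable fixed points are the basin boundaries, and each
    cluster receives a share proportional to its basin's arc length."""
    b = np.sort(np.mod(unstable_fps, 2 * np.pi))
    arcs = np.diff(np.append(b, b[0] + 2 * np.pi))  # arc of each basin
    return np.round(n_neurons * arcs / (2 * np.pi)).astype(int)

# Hypothetical basin boundaries (illustrative, not the paper's values):
counts = predicted_cluster_counts([1.0, 3.0, 5.5], 500)
```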
Figure~\ref{type1}(a) shows the response function $f(\theta)$ for thalamic neurons with the stimulus given by (\ref{eq:Pulsatile_Stimulus}) with $u_{max}$ corresponding to a current density of $20 \mu A/cm^2$, $p = 0.5$ ms, and $\lambda = 3$; Figure~\ref{type1}(b) shows that there is a stable 2-cluster state for a stimulation frequency of $250$ Hz, as expected from Figure~\ref{type1_phase_vs_freq}. \begin{figure}[tb!] \begin{center} \leavevmode \epsfxsize=2.2in \epsfbox{type1_highlight_stacked.pdf} \end{center} \caption{(a) Response function $f(\theta)$ which characterizes the phase response of thalamic neurons to a pulse with $u_{max}$ corresponding to a current density of $20 \mu A/cm^2$, $p = 0.5$ ms, and $\lambda=3$. (b) Map $g^{(2)}(\theta)$ for the thalamic neuron with stimulation frequency $250$ Hz, showing two stable fixed points which correspond to a 2-cluster state. \label{type1}} \end{figure} \section{Analysis of Clusters due to Pulses with Alternating Properties} \label{section:alternating_stimuli} In this section, we consider more general stimuli, specifically pulses with alternating properties, as shown in Figure~\ref{two_kicks}. Here, the pulses from before, that is with $u_{max}$ corresponding to a current density of $20 \mu A/cm^2$, $p = 0.5$ ms, and $\lambda = 3$, will be assumed to occur at times $0, \tau, 2 \tau, \cdots$. But now additional pulses with $u_{2max}$ corresponding to a current density of $10 \mu A/cm^2$, $\lambda=3$, $u_{2min} = -u_{2max}/\lambda$ and $p = 0.5$ ms, will be assumed to occur at times $\tau_2$, $\tau + \tau_2$, $2 \tau + \tau_2, \cdots$. Figure~\ref{type2_phase_vs_freq_two_kicks} shows that the clustering behavior for such alternating pulses with $\tau_2 = \tau/2$ strongly resembles the clustering behavior found at twice the frequency for identical pulses, as shown in Figure~\ref{type2_phase_vs_freq}, although there are differences. 
The analysis in this section shows how the methods from Section~\ref{section:identical_stimuli} can be adapted to understand clustering behavior for such alternating pulses. \begin{figure}[h!] \begin{center} \leavevmode \epsfxsize=3.2in \epsfbox{two_kicks_bigger.pdf} \end{center} \caption{Sequence of alternating pulses. \label{two_kicks}} \end{figure} \begin{figure}[h!] \begin{center} \leavevmode \epsfxsize=3.2in \epsfbox{type2_phase_vs_freq_two_kicks_colorbar_stretched_label.pdf} \end{center} \caption{The final phases $\theta$ of Hodgkin-Huxley neurons drawn from an initial uniform distribution as a function of stimulation frequency, after 80 periods of pulses with alternating properties (to allow transients to decay), as described in the text. Colors correspond to the neurons' initial phases. \label{type2_phase_vs_freq_two_kicks}} \end{figure} \begin{figure}[ht] \begin{center} \leavevmode \epsfxsize=2.2in \epsfbox{f_type2_10_label.pdf} \end{center} \caption{Response function $f_2(\theta)$ which characterizes the phase response of a Hodgkin-Huxley neuron to a pulse with $u_{2max}$ corresponding to a current density of $10 \mu A/cm^2$, $u_{2min} = -u_{2max}/3$, and $p = 0.5$ ms. \label{f_type2_10}} \end{figure} It will again be useful to consider the map which takes the phase of a neuron to its phase at a time $\tau$ later. To formulate this map, we need the response curves for each type of pulse: the response curve $f(\theta)$ for the pulse with $u_{max}$ corresponding to $20 \mu A/cm^2$ was already shown in Figure~\ref{f_type2}; the response curve $f_2(\theta)$ for the pulse with $u_{2max}$ corresponding to $10 \mu A/cm^2$ is shown in Figure~\ref{f_type2_10}. To find this map, suppose that we start with $\theta(0^+) = \theta_0$, immediately after the start of a pulse, where we assume that we have already accounted for the effect of the pulse according to the function $f(\theta)$. The next pulse, of different type, comes at time $\tau_2$.
Up until time $\tau_2$, the phase evolves according to $\dot{\theta} = \omega$; therefore, \begin{equation} \theta(\tau_2^-) = \theta_0 + \omega \tau_2. \end{equation} Treating the change in phase due to the next pulse as occurring instantaneously, we have \begin{equation} \theta(\tau_2^+) = \theta_0 + \omega \tau_2 + f_2(\theta_0 + \omega \tau_2). \end{equation} The system then evolves for a time $\tau - \tau_2$ without stimulus, giving \begin{eqnarray*} \theta(\tau^-) &=& \theta_0 + \omega \tau_2 + \omega (\tau - \tau_2) + f_2 (\theta_0 + \omega \tau_2) \\ &=& \theta_0 + \omega \tau + f_2 (\theta_0 + \omega \tau_2). \end{eqnarray*} At time $\tau$, we have another pulse of the type that started at $t=0$, so \[ \theta(\tau^+) = \theta_0 + \omega \tau + f_2(\theta_0 + \omega \tau_2) + f(\theta_0 + \omega \tau + f_2(\theta_0 + \omega \tau_2)). \] Continuing in this fashion, we obtain \begin{eqnarray*} \theta(\tau + \tau_2^-) = \theta_0 &+& \omega (\tau + \tau_2) + f_2(\theta_0 + \omega \tau_2) \\ &+& f(\theta_0 + \omega \tau + f_2(\theta_0 + \omega \tau_2)), \end{eqnarray*} \begin{eqnarray*} \theta(\tau + \tau_2^+) &=& \theta_0 + \omega (\tau + \tau_2) + f_2(\theta_0 + \omega \tau_2) \\ && + f(\theta_0 + \omega \tau + f_2(\theta_0 + \omega \tau_2)) \\ && + f_2(\theta_0 + \omega (\tau + \tau_2) + f_2(\theta_0 + \omega \tau_2) \\ && \;\;\;\;\;\;\;\;\;\;\; + f(\theta_0 + \omega \tau + f_2(\theta_0 + \omega \tau_2))) \end{eqnarray*} \begin{eqnarray*} \theta(2 \tau^-) &=& \theta_0 + 2 \omega \tau + f_2(\theta_0 + \omega \tau_2) \\ && + f(\theta_0 + \omega \tau + f_2(\theta_0 + \omega \tau_2)) \\ && + f_2(\theta_0 + \omega (\tau + \tau_2) + f_2(\theta_0 + \omega \tau_2) \\ && \;\;\;\;\;\;\;\;\;\;\; + f(\theta_0 + \omega \tau + f_2(\theta_0 + \omega \tau_2))) \end{eqnarray*} \[ \theta(2 \tau^+) = \theta(2 \tau^-) + f(\theta(2 \tau^-)). 
\] A useful formulation is to let \begin{equation} G(s) = s + \omega \tau + f_2(s + \omega \tau_2) + f(s + \omega \tau + f_2(s + \omega \tau_2)), \end{equation} which gives \begin{equation} \theta(n \tau^+) = G^{(n)} (\theta_0). \end{equation} Alternatively, we can view this as a composition of two maps: \[ \theta(0^+) = \theta_0, \] \[ \theta(\tau_2^+) = \theta_0 + \omega \tau_2 + f_2 (\theta_0 + \omega \tau_2) \equiv h_2(\theta_0), \] \begin{eqnarray*} \theta(\tau^+) = \theta(\tau_2^+) + \omega (\tau-\tau_2) \!\! &+& \!\! f(\theta(\tau_2^+) + \omega (\tau- \tau_2)) \\ \equiv h_1(\theta(\tau_2^+)) &=& h_1(h_2(\theta_0)) = G(\theta_0). \end{eqnarray*} Note that we have written $G$, which is a map over the time interval $\tau$, as the composition of two maps $h_1$ and $h_2$, that is, \[ G = h_1 \circ h_2. \] Similar to before, we will look for fixed points of $G^{(n)}$, that is, solutions to $\theta^* = G^{(n)}(\theta^*)$. If \begin{equation} \left| \left. \frac{d}{d \theta} \right|_{\theta = \theta^*} (G^{(n)} (\theta)) \right|<1, \end{equation} then the fixed point of $G^{(n)}$ is stable. Note that the relationship between fixed points of $G^{(n)}$ and clusters is more subtle for pulses with alternating properties than the relationship between fixed points of $g^{(n)}$ and $n$-clusters for identical pulses, because each $\tau$-interval for the alternating case contains two pulses. This will be illustrated in the following examples. Figure~\ref{h1_h2_f100} shows $h_1(\theta)$ for $u_{max}$ corresponding to a current density of $20 \mu A/cm^2$ and $h_2(\theta)$ for $u_{2max}$ corresponding to a current density of $10 \mu A/cm^2$, for $\tau = 10$ ms and $\tau_2 = \tau/2$. We notice that these functions are quite similar to each other. Next, we show $G(\theta) = h_1(h_2(\theta))$ and $G^{(3)}(\theta)$ in Figure~\ref{alternating_20_10_f100}. We see that there is a stable period-3 orbit for $G$, corresponding to three stable fixed points for $G^{(3)}$. 
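The composed map $G = h_1 \circ h_2$ and its iterates are easy to explore numerically. The sketch below uses toy sinusoidal response curves in place of the measured $f$ and $f_2$, with an assumed natural frequency chosen so that $\omega\tau = 1$; under these assumptions the population collapses onto a single stable fixed point of $G$ (a 1-cluster state), whereas the measured Hodgkin-Huxley response curves produce the multi-cluster orbits discussed in the text:

```python
import numpy as np

# Toy response curves standing in for the measured f and f2 (assumed shapes
# and amplitudes; phase theta is measured in cycles, so theta is in [0, 1)).
A1, A2 = 0.08, 0.04
f  = lambda th: A1 * np.sin(2 * np.pi * th)   # "large" pulse
f2 = lambda th: A2 * np.sin(2 * np.pi * th)   # "small" pulse

omega = 0.1            # natural frequency in cycles/ms (assumed)
tau, tau2 = 10.0, 5.0  # pulse spacings in ms, tau2 = tau/2

def h2(th):            # drift for a time tau2, then the small pulse
    s = th + omega * tau2
    return s + f2(s)

def h1(th):            # drift for a time tau - tau2, then the large pulse
    s = th + omega * (tau - tau2)
    return s + f(s)

def G(th):             # map over one full period tau: G = h1 o h2
    return h1(h2(th))

# Iterate G from many initial phases; the phases collapse onto the stable
# periodic orbit(s) of G, i.e., onto the clusters.
th = (np.arange(50) + 0.5) / 50
for _ in range(500):
    th = G(th) % 1.0

# Stability check at the attractor: |dG/dtheta| < 1 (finite difference)
eps = 1e-7
slope = (G(th + eps) - G(th)) / eps
```

The same loop with $G^{(n)}$ in place of $G$, together with the criterion $|dG^{(n)}/d\theta| < 1$, identifies period-$n$ orbits, their stability, and their basins of attraction.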
These stable fixed points of $G^{(3)}$ correspond to a 3-cluster state, as expected from Figure~\ref{type2_phase_vs_freq_two_kicks} evaluated at 100 Hz. We note that Figure~\ref{alternating_20_10_f100} looks very similar to Figure~\ref{f100} for identical pulses with frequency 100 Hz; however, the sequence of pulses is different. For Figure~\ref{alternating_20_10_f100}, there is a ``large'' pulse at $t=0$, a ``small'' pulse at $t = 5$ ms, another large pulse at $t = 10$ ms, another small pulse at $t=15$ ms, another large pulse at $t = 20$ ms, etc. For Figure~\ref{f100}, there is a large pulse at $t = 0$, another large pulse at $t = 10$ ms, another large pulse at $t = 20$ ms, etc., with no small pulses. \begin{figure}[tb] \begin{center} \leavevmode \epsfxsize=2.2in \epsfbox{h1_h2_f100_stacked.pdf} \end{center} \caption{Functions $h_1(\theta)$ for $u_{max}$ corresponding to a current density of $20 \mu A/cm^2$ and $h_2(\theta)$ for $u_{2max}$ corresponding to a current density of $10 \mu A/cm^2$, for $\tau = 10$ ms and $\tau_2 = \tau/2$. \label{h1_h2_f100}} \end{figure} \begin{figure}[tb!] \begin{center} \leavevmode \epsfxsize=2.2in \epsfbox{f100_two_kicks_highlight_stacked.pdf} \end{center} \caption{Functions $G(\theta)$ and $G^{(3)}(\theta)$ for pulses with alternating properties with $u_{max}$ corresponding to a current density of $20 \mu A/cm^2$ and $u_{2max}$ corresponding to a current density of $10 \mu A / cm^2$, and $\tau = 10$ ms, $\tau_2 = \tau/2$. \label{alternating_20_10_f100}} \end{figure} Figure~\ref{h1_h2_f150} shows $h_1(\theta)$ for $u_{max}$ corresponding to a current density of $20 \mu A/cm^2$ and $h_2(\theta)$ for $u_{2max}$ corresponding to a current density of $10 \mu A/cm^2$, for $\tau = 6.67$ ms and $\tau_2 = \tau/2$. As before, $h_1$ and $h_2$ are quite similar to each other.
Figure~\ref{alternating_20_10} shows $G(\theta) = h_1(h_2(\theta))$ and $G^{(2)}(\theta)$. We see that there are four stable fixed points for $G^{(2)}$; these actually correspond to a 4-cluster state, as shown in Figure~\ref{type2_phase_vs_freq_two_kicks} evaluated at 150 Hz. While at first it might seem surprising that stable fixed points for $G^{(2)}$ correspond to a 4-cluster state, we note that these results are similar to what we found for identical stimuli for a $300$ Hz stimulus (or, equivalently, for alternating pulses with $\tau = 6.67$ ms and $u_{max} = u_{2max}$ corresponding to a current density of $20 \mu A/cm^2$, $\tau_2 = \tau/2$). The proper comparison is that $G$ for alternating pulses with a stimulation frequency of $150$ Hz is similar to $g^{(2)}$ for identical pulses with a stimulation frequency of $300$ Hz, and $G^{(2)}$ for alternating pulses with a stimulation frequency of $150$ Hz is similar to $g^{(4)}$ for identical pulses with a stimulation frequency of $300$ Hz. These results show that we can obtain 4-cluster solutions for a population of oscillators with these alternating pulses; see Figure~\ref{phase_versus_time_two}(a). \begin{figure}[tb] \begin{center} \leavevmode \epsfxsize=2.2in \epsfbox{h1_h2_label_stacked.pdf} \end{center} \caption{Functions $h_1(\theta)$ for $u_{max}$ corresponding to a current density of $20 \mu A/cm^2$ and $h_2(\theta)$ for $u_{2max}$ corresponding to a current density of $10 \mu A/cm^2$, for $\tau = 6.67$ ms and $\tau_2 = \tau/2$. \label{h1_h2_f150}} \end{figure} \begin{figure}[tb!] \begin{center} \leavevmode \epsfxsize=2.2in \epsfbox{alternating_20_10_highlight_stacked.pdf} \end{center} \caption{Functions $G(\theta)$ and $G^{(2)}(\theta)$ for pulses with alternating properties with $u_{max}$ corresponding to a current density of $20 \mu A/cm^2$ and $u_{2max}$ corresponding to a current density of $10 \mu A / cm^2$, and $\tau = 6.67$ ms, $\tau_2 = \tau/2$.
\label{alternating_20_10}} \end{figure} \begin{figure}[h!] \begin{center} \leavevmode \epsfxsize=3.5in \epsfbox{phase_versus_time_two_stacked.pdf} \end{center} \caption{Time series showing the phases of Hodgkin-Huxley neurons drawn from an initial uniform distribution with alternating pulses with $u_{max}$ corresponding to $20 \mu A/cm^2$ and $u_{2max}$ corresponding to $10 \mu A/cm^2$, for $\tau = 6.67$ ms and (a) $\tau_2 = 0.5 \tau$, (b) $\tau_2 = 0.4 \tau$, (c) $\tau_2 = 0.6 \tau$. Four clusters form for (a) and (b), while only two clusters form for (c). \label{phase_versus_time_two}} \end{figure} Our analytical formalism also allows one to consider alternating pulses for which $\tau_2 \neq \tau/2$. For example, Figure~\ref{alternating_20_10_tau0p4_tau0p6} shows results for $u_{max}$ corresponding to $20 \mu A/cm^2$, $u_{2max}$ corresponding to $10 \mu A/cm^2$, and $\tau_2 = 0.4 \tau$ and $\tau_2 = 0.6 \tau$. Interestingly, for $\tau_2 = 0.4 \tau$ there are four fixed points of the $G^{(2)}$ map, corresponding to a 4-cluster solution, but for $\tau_2 = 0.6 \tau$ there are only two fixed points of the $G^{(2)}$ map, corresponding to a 2-cluster solution. Figures~\ref{phase_versus_time_two}(b) and (c) show the corresponding time series for these cases. Comparing Figure~\ref{alternating_20_10_tau0p4_tau0p6} with the right panel of Figure~\ref{alternating_20_10}, we deduce that if $\tau_2$ is treated as a bifurcation parameter, there is a saddle-node bifurcation (this could also be called a tangent bifurcation for the $G^{(2)}$ map) for $\tau_2$ slightly larger than $0.5 \tau$. \begin{figure}[h!] \begin{center} \leavevmode \epsfxsize=2.2in \epsfbox{alternating_20_10_tau0p4_tau0p6_highlight_stacked.pdf} \end{center} \caption{Function $G^{(2)}(\theta)$ for alternating pulses with $u_{max}$ corresponding to $20 \mu A/cm^2$ and $u_{2max}$ corresponding to $10 \mu A/cm^2$, with $\tau = 6.67$ ms, and (left) $\tau_2 = 0.4 \tau$ and (right) $\tau_2 = 0.6 \tau$.
\label{alternating_20_10_tau0p4_tau0p6}} \end{figure} \section{Conclusion} \label{section:conclusion} Populations of neural oscillators subjected to periodic pulsatile stimuli can display interesting clustering behavior, in which subpopulations of the neurons are synchronized but the subpopulations are desynchronized with respect to each other. The details of the clustering behavior depend on the frequency and amplitude of the stimuli in a complicated way. Such clustering may be an important mechanism by which deep brain stimulation can lead to the alleviation of symptoms of Parkinson's disease and other disorders. In this paper, we illustrated how the details of clustering for phase models of neurons subjected to periodic pulsatile inputs can be understood in terms of one-dimensional maps defined on the circle. In particular, the analysis allows one to predict the number of clusters, their stability properties, and their basins of attraction. Moreover, we generalized our analysis to consider stimuli with alternating properties, which provide additional degrees of freedom in the design of DBS stimuli. As part of our study, we found multiple ways to get the same type of clustering behavior, for example by using identical pulses or pulses with alternating properties, or from stimuli with different parameters such as stimulation frequency or the time spacing between pulses with alternating properties. Such clustering occurs through the use of a single stimulation electrode, unlike coordinated reset which requires multiple electrodes. We expect that the same clustering behavior can also be obtained for different amplitudes of the pulses, cf.~\cite{wils15cluster}. 
We believe that the analysis techniques used in this paper can be useful for identifying a collection of stimuli which give the same desirable clustering dynamics for a population of neurons, which will make it easier to find stimuli which are effective while minimizing the severity of side effects for DBS treatments. Our analysis assumed certain properties of a neural population: all neurons are identical, they all receive the same input, they are uncoupled, and there is no noise. For real neural populations, none of these assumptions would be valid. We also assumed that the phase models accurately capture the dynamics of the neurons, which is only true for sufficiently small inputs; see, for example,~\cite{wils18}. However, we believe that the results presented here form an important baseline for the analysis of more realistic neural populations stimulated by periodic pulses. We note that the effect of noise on periodically forced neural populations has been considered in~\cite{wils15cluster}, which shows that for weak noise and long times, the number of neurons in each cluster is roughly the same. We expect that similar results will hold for neurons in the presence of weak noise subjected to alternating stimuli. Our hope is that the techniques in this paper will help to guide the design of stimuli for the treatment of Parkinson's disease and other disorders. We believe that the use of pulses with alternating properties is particularly worthy of further investigation, since it represents a larger class of stimuli than has been considered in previous studies. \section{Acknowledgements} This research grew out of the Research Mentorship Program at the University of California, Santa Barbara during summer 2018. We thank Dr.~Lina Kim for providing the opportunity for Daniel and Jacob to conduct this research as high school students, and Tim Matchen for guidance on the project.
\section*{Appendix: Neuron Models} In this Appendix, we give details of the neural models used in the main text. \vskip 0.1in \noindent {\it Thalamic neuron model} \vskip 0.1in The full thalamic neuron model is given by: \begin{eqnarray*} \dot V&=&\frac{-I_L-I_{Na}-I_K-I_T+I_b}{C_m}+u(t),\\ \dot h&=&\frac{h_{\infty}-h}{\tau_h},\\ \dot r&=&\frac{r_{\infty}-r}{\tau_r}, \end{eqnarray*} where \begin{eqnarray*} h_\infty &=& 1/(1+\exp((V+41)/4)),\\ r_\infty &=& 1/(1+\exp((V+84)/4)),\\ \alpha_h &=& 0.128\exp(-(V+46)/18),\\ \beta_h &=& 4/(1+\exp(-(V+23)/5)), \end{eqnarray*} \begin{eqnarray*} \tau_h &=& 1/(\alpha_h+\beta_h),\\ \tau_r &=& 28+\exp(-(V+25)/10.5), \end{eqnarray*} \begin{eqnarray*} m_\infty &=& 1/(1+\exp(-(V+37)/7)),\\ p_\infty &=& 1/(1+\exp(-(V+60)/6.2)), \end{eqnarray*} \begin{eqnarray*} I_L&=&g_L(V-e_L),\\ I_{Na}&=&g_{Na}({m_\infty}^3)h(V-e_{Na}),\\ I_K&=&g_K((0.75(1-h))^4)(V-e_K),\\ I_T&=&g_T(p_\infty^2)r(V-e_T). \end{eqnarray*} The parameters for this model are \[ C_m = 1 \; \mu F/cm^2 \;,\; g_L = 0.05 \; mS/cm^2 \;,\; e_L = -70 \; mV \;, \] \[ g_{Na} = 3 \; mS/cm^2 \;,\; e_{Na} = 50 \; mV \;,\; g_K = 5 \; mS/cm^2 \;, \] \[ e_K = -90 \; mV \;,\; g_T = 5 \; mS/cm^2 \;,\; e_T = 0 \; mV \;, \] \[ I_b = 5 \; \mu A/cm^2.
\] \vskip 0.1in \noindent {\it Hodgkin-Huxley neuron model} \vskip 0.1in The full Hodgkin-Huxley model is given by: \begin{eqnarray*} \dot{V}&=&(I_b -\bar{g}_{Na}h(V-V_{Na})m^3-\bar{g}_K(V-V_K)n^4\\ && -\bar{g}_L(V-V_L))/c + u(t) \; , \\ \dot{m}&=& a_m(V)(1-m)-b_m(V)m \; , \\ \dot{h}&=&a_h(V)(1-h)-b_h(V)h \; , \\ \dot{n}&=&a_n(V)(1-n)-b_n(V)n \; , \end{eqnarray*} where \begin{eqnarray*} a_m(V) &=& 0.1(V+40)/(1-\exp(-(V+40)/10)) \; , \\ b_m(V) &=& 4\exp(-(V+65)/18) \; , \\ a_h(V) &=& 0.07\exp(-(V+65)/20) \; , \\ b_h(V) &=& 1/(1+\exp(-(V+35)/10)) \; , \\ a_n(V) &=& 0.01(V+55)/(1-\exp(-(V+55)/10)) \; , \\ b_n(V) &=& 0.125\exp(-(V+65)/80) \; . \label{eq_HH} \end{eqnarray*} The parameters for this model are \[ V_{Na}=50 \; mV \;,\; V_K=-77 \; mV \;,\; V_L=-54.4 \; mV \;, \] \[ \bar{g}_{Na}=120 \; mS/cm^2 \; , \; \bar{g}_K=36 \; mS/cm^2 \;, \] \[ \bar{g}_L=0.3 \; mS/cm^2 \;,\; I_b=10 \; \mu A/cm^2 \;, \] \[ c=1 \; \mu F/cm^2 . \] \vspace{-0.2in} \bibliographystyle{spmpsci}
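The Hodgkin-Huxley equations above can be integrated directly to obtain the periodic firing that underlies the phase description. The following sketch is illustrative only: it uses SciPy's `solve_ivp` with the stimulus $u(t)\equiv 0$, so only the bias current $I_b$ drives the neuron, and the initial conditions are rough resting-state guesses:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters from the appendix
V_Na, V_K, V_L = 50.0, -77.0, -54.4      # mV
g_Na, g_K, g_L = 120.0, 36.0, 0.3        # mS/cm^2
I_b, c = 10.0, 1.0                       # uA/cm^2, uF/cm^2

def hh(t, y):
    V, m, h, n = y
    a_m = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    b_m = 4.0 * np.exp(-(V + 65) / 18)
    a_h = 0.07 * np.exp(-(V + 65) / 20)
    b_h = 1.0 / (1 + np.exp(-(V + 35) / 10))
    a_n = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    b_n = 0.125 * np.exp(-(V + 65) / 80)
    dV = (I_b - g_Na * h * (V - V_Na) * m**3
          - g_K * (V - V_K) * n**4 - g_L * (V - V_L)) / c  # u(t) = 0 here
    return [dV,
            a_m * (1 - m) - b_m * m,
            a_h * (1 - h) - b_h * h,
            a_n * (1 - n) - b_n * n]

# Integrate for 200 ms from an approximate resting state
sol = solve_ivp(hh, (0.0, 200.0), [-65.0, 0.05, 0.6, 0.32], max_step=0.05)
```

With $I_b = 10\ \mu A/cm^2$ the model fires repetitively; the spike times define the phase $\theta$, and perturbing the trajectory with brief pulses $u(t)$ yields response curves of the kind used in the main text.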
\section{Introduction} \label{intro} In this paper, we study the dynamical behavior of the solution $(u(t, x), v(t, x), g(t), h(t))$ to the following Lotka-Volterra type competition model with mixed dispersal and free boundaries in time-periodic environment \begin{align*} \left\{\begin{array}{l} \partial_{t}u=d_{1}\left(\int_{g(t)}^{h(t)}J(x-y)u(t,y)dy-u\right)+u(a(t)-u-b(t)v), \quad t>0,~g(t)<x<h(t),\\[5pt] \partial_{t}v=d_{2}\left[\tau \partial_{x}^{2}v+(1-\tau)\left(\int_{g(t)}^{h(t)}J(x-y)v(t,y)dy-v\right)\right] \\[5pt] \qquad\quad +v(c(t)-v-d(t)u),\quad t>0,~ g(t)<x<h(t),\\[5pt] u(t, g(t))=u(t,h(t))=v(t, g(t))=v(t,h(t))=0, \quad t\geq0,\\[5pt] h'(t)=-\mu v_{x}(t, h(t)) +\rho_{1}\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}J(x-y)u(t,x)dydx\\[5pt] \qquad\quad+\rho_{2}\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}J(x-y)v(t,x)dydx, \quad t\geq0,\\[5pt] g'(t)=-\mu v_{x}(t, g(t)) -\rho_{1}\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}J(x-y)u(t,x)dydx\\[5pt] \qquad\quad-\rho_{2}\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}J(x-y)v(t,x)dydx, \quad t\geq0,\\[5pt] u(0,x)=u_0(x),~v(0, x)=v_0(x), \quad |x|\leq h_{0},\\[5pt] h(0)=-g(0)=h_{0}. \end{array}\right. 
\tag{1.1} \end{align*} Here $u(t,x)$ and $v(t,x)$ represent the population densities of two competing species; the positive constants $d_1, d_2$ are dispersal rates of $u, v$ and the constant $0<\tau\leq 1$ measures the fraction of individuals adopting random dispersal; $h_{0}$, $\mu$ and $\rho_{i}$ $(i=1,2)$ are positive constants; the kernel function $J: \mathbb{R}\rightarrow \mathbb{R}$ satisfies that \begin{align*} (\textbf{J})\quad J~\mbox{is~Lipschitz~continuous},~ J(x)\geq 0, J(0)>0, \int_{\mathbb{R}}J(x)dx=1, J~\mbox{is~symmetric~and}~\sup_{\mathbb{R}}J<\infty; \end{align*} $a(t),c(t)$ represent the intrinsic growth rates of species, $b(t),d(t)$ represent competition between species and they satisfy that \begin{align*} &a(t),b(t),c(t),d(t)~\mbox{are~positive}~T\mbox{-periodic~functions~and}\\[3pt] &a,b\in C([0,T]), c,d\in C^{\frac{\alpha}{2}}([0,T])~\mbox{for}~0<\alpha<1; \end{align*} the initial functions $u_0$ and $v_0$ satisfy \begin{align*} \left\{\begin{array}{l} u_0\in C^{1-}([-h_{0},h_{0}]),\quad u_0(\pm h_0)=0,\quad u_0>0 \quad \mbox{in}~ (-h_{0}, h_0),\\[3pt] v_0\in C^{2}([-h_{0},h_{0}]), \quad v_0(\pm h_0)=0, \quad v_0>0 \quad \mbox{in}~ (-h_{0}, h_0), \end{array}\right. \tag{1.2} \end{align*} where $C^{1-}([-h_{0},h_{0}])$ is defined as the Lipschitz continuous function space. Ecologically, problem $(1.1)$ describes the dynamical process of two competing species which spread and invade into new environments with daily or seasonal changes via the same free boundaries. All the individuals in the population $u$ adopt nonlocal dispersal, while in the population $v$ a fraction of individuals adopt nonlocal dispersal and the remaining fraction assumes random dispersal. The latter strategy is called mixed dispersal, which was first proposed by Kao et al.~\cite{kls12}.
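To make the mixed dispersal operator in (1.1) concrete, the sketch below evaluates $d_{2}\big[\tau \partial_{x}^{2}v+(1-\tau)\big(\int_{g}^{h}J(x-y)v(t,y)dy-v\big)\big]$ on a uniform grid, with $v$ extended by zero outside $(g,h)$. The Gaussian kernel, the frozen interval, and the test density are assumptions for illustration only; any kernel satisfying $(\textbf{J})$ can be substituted:

```python
import numpy as np

d2, tau = 1.0, 0.5        # dispersal rate and mixing fraction (assumed)
g, h = -2.0, 2.0          # a frozen snapshot of the free boundaries (assumed)
n = 401
x = np.linspace(g, h, n)
dx = x[1] - x[0]

J = lambda r: np.exp(-r**2) / np.sqrt(np.pi)  # Gaussian kernel, int_R J = 1
v = np.cos(np.pi * x / (2 * h)) ** 2          # test density, zero at x = g, h

# Nonlocal part: trapezoid-rule approximation of int_g^h J(x_i - y) v(y) dy.
# The truncated kernel integrates to less than 1 on (g, h); the mass crossing
# the boundary is what enters the free boundary conditions of (1.1).
K = J(x[:, None] - x[None, :])
w = np.full(n, dx); w[0] = w[-1] = dx / 2
conv = K @ (w * v)

# Local (random dispersal) part: centered second difference, v = 0 outside
v_xx = np.zeros(n)
v_xx[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2

Lv = d2 * (tau * v_xx + (1 - tau) * (conv - v))
```

For this symmetric test density the computed operator is itself symmetric in $x$, and the nonlocal term is nonnegative wherever $v\geq 0$, mirroring the sign structure used in the maximum principle of Section 2.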
We assume that the spreading fronts expand at a speed that is proportional to the outward flux of the population of the two species at the front, which gives rise to the free boundary conditions in (1.1). Problem $(1.1)$ is a variation of the following two species competition system studied in \cite{kls12}: \begin{align*} \left\{\begin{array}{l} \partial_{t}u=d_{1}\left(\int_{\mathbb{R}^{N}}J(x-y)u(t,y)dy-u\right) +u(a(x)-u-v), \quad t>0,~x\in \mathbb{R}^{N},\\[5pt] \partial_{t}v=d_{2}\left[\tau \Delta v+(1-\tau)\left(\int_{\mathbb{R}^{N}}J(x-y)v(t,y)dy-v\right)\right] +v(a(x)-u-v), \quad t>0,~x\in \mathbb{R}^{N}. \end{array}\right. \end{align*} They investigated how the mixed dispersal affects the invasion of a single species and how the mixed dispersal strategies will evolve in spatially periodic but temporally constant environment. If $\tau=0$ and $a(t),b(t),c(t),d(t)$ are constants, (1.1) reduces to a two species nonlocal diffusion system with free boundaries studied by Du et al. \cite{dwz19}. They proved the model has a unique global solution, established a spreading-vanishing dichotomy and obtained criteria for spreading and vanishing. Moreover, for the weak competition case they determined the long-time asymptotic limit of the solution when spreading happens. If $\tau=1$ and $a(t),b(t),c(t),d(t)$ are constants, (1.1) becomes a free boundary problem for an ecological model with nonlocal and local diffusions considered in \cite{waw18}. They also obtained well-posedness of solutions and spreading-vanishing results. Moreover, Cao et al. \cite{clwan19} recently considered a nonlocal diffusion Lotka-Volterra type competition model with free boundaries in the homogeneous environment, which consists of a native species distributing in the whole space $\mathbb{R}$ and an invasive species. In the absence of the species $v$ (i.e.
$v\equiv 0$) and $a(t)$ is a constant, (1.1) reduces to the following nonlocal diffusion model with free boundaries \begin{align*} \left\{\begin{array}{l} \partial_{t}u=d_{1}\left(\int_{g(t)}^{h(t)}J(x-y)u(t,y)dy-u\right)+u(a-u), \quad t>0,~g(t)<x<h(t),\\[3pt] u(t, g(t))=u(t,h(t))=0, \quad t\geq0,\\[5pt] h'(t)= \rho_{1}\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}J(x-y)u(t,x)dydx, \quad t\geq0,\\[5pt] g'(t)= -\rho_{1}\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}J(x-y)u(t,x)dydx, \quad t\geq0,\\[5pt] u(0,x)=u_0(x),\quad |x|\leq h_{0},\\[5pt] h(0)=-g(0)=h_{0}, \end{array}\right. \tag{1.3} \end{align*} which has been studied in \cite{cdll19}. Problem (1.3) is a natural extension of the local diffusion model with free boundary in \cite{dl10}, and similar results, including the existence and uniqueness of global solutions for a more general growth function $f(t,x,u)$ and the spreading-vanishing results in the homogeneous environment, were obtained in \cite{cdll19}, from which one can see that the nonlocal diffusion brings many essential difficulties in analysis. Since the work of Du and Lin \cite{dl10}, the local diffusion models with free boundary(ies) have been studied extensively. For example, the model in \cite{dl10} has been extended to other situations for single species models, such as higher dimensional space, heterogeneous environment, time-periodic environment, other boundary conditions, general nonlinear terms, or advection terms; we refer the readers to \cite{bdk12,clw16,dg11,dgp13,dbl15,dpw17,dmz14,gll15,k14,llz14,llo15,rz19,w143,chw18,zx14} and references therein. Moreover, two-species Lotka-Volterra type competition problems and predator-prey problems with free boundary(ies) have also been considered in the homogeneous environment or heterogeneous time-periodic environment, e.g., \cite{clw17,dl13,dwz17,gw12,gw15,tir18,w14,waz16,wz18,wz15,chw15,zzl17}.
The epidemic models with free boundary(ies) have also been considered in \cite{clwy17,gklz15,lzh17}. The aim of this paper is to study the well-posedness and long-time behaviors of solutions to problem $(1.1)$. We first investigate the existence and uniqueness of solutions to (1.1) with more general growth functions. To achieve this, we shall establish the maximum principle for linear parabolic equations with mixed dispersal, and prove by an approximation method that the nonlinear parabolic equations with mixed dispersal (see (2.5)) admit a unique positive solution under the assumption that $g^{\prime}(t),h^{\prime}(t)$ and $u(t,x)$ are only continuous functions, which plays an important role in the process of using the fixed point theorem. Then we establish a spreading-vanishing dichotomy and criteria for spreading and vanishing. To discuss the spreading and vanishing, we need to consider the existence and properties of the principal eigenvalue of time-periodic parabolic-type eigenvalue problems with random/mixed dispersal. Since the intrinsic growth rates $a(t)$ and $c(t)$ are independent of the spatial variable, we can transform the parabolic-type eigenvalue problems into elliptic-type eigenvalue problems. This transformation is also used in discussing the asymptotic behavior of the solution (see Theorem 4.1). Finally, it is worth mentioning that, due to the effect of mixed dispersal, in Theorem 4.4 we only prove the vanishing result for two cases; whether the other situations still hold true is unknown, and we leave this for future research. The rest of the paper is organized as follows. In Section 2, we establish the global existence and uniqueness of solutions to problem $(1.1)$ with more general growth functions. The comparison principle in the moving domain and the discussions on eigenvalue problems are given in Section 3. In Section 4, we investigate spreading and vanishing of species.
\textbf{Notations.} Throughout the paper, we denote $\Omega_{T_{0}}^{g,h}=(0,T_{0}]\times (g(t), h(t))$, $D_{T_{0}}=(0,T_{0}]\times (-1, 1)$ and $a_{T}=\frac{1}{T}\int_{0}^{T}a(t)dt$. Under the transform $x(t,z)=\frac{(h(t)-g(t))z+h(t)+g(t)}{2}$, we always denote $\tilde{f}(t,z)=f(t,x(t,z))=f(t,\frac{(h(t)-g(t))z+h(t)+g(t)}{2})$. \section{Well-posedness} In this section, we give the global well-posedness of solutions to problem $(1.1)$ with more general growth functions. More precisely, we consider the following free boundary problem \begin{align*} \left\{\begin{array}{l} \partial_{t}u=d_{1}\left(\int_{g(t)}^{h(t)}J(x-y)u(t,y)dy-u\right)+f_{1}(t,x,u,v), \quad t>0,~g(t)<x<h(t),\\[5pt] \partial_{t}v=d_{2}\left[\tau \partial_{x}^{2}v+(1-\tau)\left(\int_{g(t)}^{h(t)}J(x-y)v(t,y)dy-v\right)\right] +f_{2}(t,x,u,v), \quad t>0,~ g(t)<x<h(t),\\[5pt] u(t, g(t))=u(t,h(t))=v(t, g(t))=v(t,h(t))=0, \quad t\geq0,\\[5pt] h'(t)=-\mu v_{x}(t, h(t)) +\rho_{1}\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}J(x-y)u(t,x)dydx\\[5pt] \qquad\quad+\rho_{2}\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}J(x-y)v(t,x)dydx, \quad t\geq0,\\[5pt] g'(t)=-\mu v_{x}(t, g(t)) -\rho_{1}\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}J(x-y)u(t,x)dydx\\[5pt] \qquad\quad-\rho_{2}\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}J(x-y)v(t,x)dydx, \quad t\geq0,\\[5pt] u(0,x)=u_0(x), v(0, x)=v_0(x), \quad |x|\leq h_{0},\\[5pt] h(0)=-g(0)=h_{0}, \end{array}\right. 
\tag{2.1} \end{align*} where the growth terms $f_{i}(t,x,u,v)$ $(i=1,2)$ satisfy the following assumptions: $(\textbf{f1})$ $f_{1}(t,x,0,v),f_{2}(t,x,u,0)\equiv0$, and there exists a constant $K>0$ such that $f_{1}(t,x,u,v)<0$ for all $u>K$, $v\geq 0$ and $(t,x)\in \mathbb{R}^{+}\times \mathbb{R}$, and $f_{2}(t,x,u,v)<0$ for all $u\geq 0$, $v>K$ and $(t,x)\in \mathbb{R}^{+}\times \mathbb{R}$; $(\textbf{f2})$ For any given $T, l, K_{1}, K_{2}>0$, there exists a constant $L=L(T,l,K_{1},K_{2})$ such that $$ \|f_{2}(\cdot,x,u,v)\|_{C^{\frac{\alpha}{2}}([0,T])}\leq L $$ for all $x\in [-l,l]$, $u\in [0,K_{1}]$ and $v\in [0,K_{2}]$; $(\textbf{f3})$ For any $K_{1},K_{2}>0$, there exists a constant $L^{*}=L^{*}(K_{1},K_{2})>0$ such that $$ |f_{i}(t,x,u,v)-f_{i}(t,y,u,v)|\leq L^{*}|x-y| $$ for all $u\in [0,K_{1}]$, $v\in [0,K_{2}]$ and all $(t,x,y)\in \mathbb{R}^{+}\times \mathbb{R}\times \mathbb{R}$; $(\textbf{f4})$ $f_{i}(t,x,u,v)$ is locally Lipschitz in $u,v\in \mathbb{R}^{+}$ uniformly for $(t,x)\in \mathbb{R}^{+}\times \mathbb{R}$, i.e., for any $K_{1},K_{2}>0$, there exists a constant $\hat{L}=\hat{L}(K_{1},K_{2})>0$ such that $$ |f_{i}(t,x,u_{1},v_{1})-f_{i}(t,x,u_{2},v_{2})|\leq \hat{L}(|u_{1}-u_{2}|+|v_{1}-v_{2}|) $$ for all $u_{1},u_{2}\in [0,K_{1}]$, $v_{1},v_{2}\in [0,K_{2}]$ and all $(t,x)\in \mathbb{R}^{+}\times \mathbb{R}$.\\ It is easy to check that the growth functions in (1.1) satisfy the conditions $(\textbf{f1})-(\textbf{f4})$. The main result of this section is stated in the following theorem. \\ \noindent\textbf{Theorem 2.1.} Assume that (\textbf{J}) and (\textbf{f1})-(\textbf{f4}) hold. 
For any given $(u_{0},v_{0})$ satisfying (1.2), the problem $(2.1)$ admits a unique global solution $(u, v, g, h)$ defined on $[0,T_{0}]$ for any $0<T_{0}<\infty$ and \begin{align*} \begin{array}{rl} &(u, v, g, h)\in C^{1,1-}(\overline{\Omega}_{T_{0}}^{g,h})\times C^{1+\frac{\alpha}{2},2+\alpha}(\Omega_{T_{0}}^{g,h})\times [C^{1+\frac{\alpha}{2}}([0, T_{0}])]^{2},\\[3pt] &0<u\leq K_{1}:=\max\{\|u_{0}\|_{L^{\infty}}, K\},\quad 0<v\leq K_{2}:=\max\{\|v_{0}\|_{L^{\infty}}, K\}, \quad \forall~ (t,x)\in\Omega_{T_{0}}^{g,h}, \\[3pt] &0<-v_{x}(t,h(t)),~v_{x}(t,g(t))\leq K_{3}:=2K_{2}\max\left\{\sqrt{\frac{\hat{L}+d_{2}(1-\tau)}{2d_{2}\tau}}, \frac{4\|v_{0}\|_{C^{1}([-h_{0},h_{0}])}}{3K_{2}}\right\},~0<t\leq T_{0}, \end{array} \tag{2.2} \end{align*} where $C^{1,1-}(\overline{\Omega}_{T_{0}}^{g,h})$ denotes the class of functions that are $C^{1}$ in $t$ and Lipschitz continuous in $x$, and $\hat{L}=\hat{L}(K_{1},K_{2})$ is the Lipschitz constant defined in $(\textbf{f4})$.\\ To prove Theorem 2.1, we first establish the maximum principle for linear parabolic equations with mixed dispersal. For some $h_{0}, T_{0}$, we define \begin{align*} &\mathbb{H}_{T_{0}}^{h_{0}} :=\{h\in C^{1}([0,T_{0}]): h(0)=h_{0}, ~0<h^{\prime}(t)\leq R(t)\}, \\[3pt] &\mathbb{G}_{T_{0}}^{h_{0}} :=\{g\in C^{1}([0,T_{0}]):~-g\in \mathbb{H}_{T_{0}}^{h_{0}}\} \end{align*} with \begin{align*} R(t):=\mu K_{3}+2(h_{0}\rho_{1}K_{1}+h_{0}\rho_{2}K_{2}+\mu K_{3})e^{(\rho_{1}K_{1}+\rho_{2}K_{2})t}. \end{align*} \noindent\textbf{Lemma 2.1.} (Maximum Principle) Assume that (\textbf{J}) holds and $(g,h)\in \mathbb{G}_{T_{0}}^{h_{0}}\times\mathbb{H}_{T_{0}}^{h_{0}}$.
If $v(t,x)\in C^{1,2}(\Omega_{T_{0}}^{g,h})\cap C(\overline{\Omega}_{T_{0}}^{g,h})$ satisfies, for some $c\in L^{\infty}(\Omega_{T_{0}}^{g,h})$, \begin{align*} \left\{\begin{array}{l} \partial_{t}v\geq d_{2}\left[\tau \partial_{x}^{2}v +(1-\tau)\left(\int_{g(t)}^{h(t)}J(x-y)v(t,y)dy-v\right)\right]+c(t,x)v, \quad (t,x)\in\Omega_{T_{0}}^{g,h},\\[5pt] v(t,g(t))\geq0,~v(t,h(t))\geq0,\quad t\in(0, T_{0}],\\[5pt] v(0, x)\geq 0, \quad x\in[-h_{0}, h_{0}], \end{array}\right. \tag{2.3} \end{align*} then $v(t,x)\geq 0$ for all $(t,x)\in\overline{\Omega}_{T_{0}}^{g,h}$. Moreover, if $v(0,x)\not\equiv 0$ in $[-h_{0}, h_{0}]$, then $v(t,x)>0$ in $\Omega_{T_{0}}^{g,h}$.\\ \noindent\textbf{Proof.} $(i)$ Let $\omega(t,x)=e^{-kt}v(t,x)$, where $k>0$ is a constant chosen large enough such that $-k+c(t,x)<0$ for all $(t,x)\in \Omega_{T_{0}}^{g,h}$. Then \begin{align*} \partial_{t}\omega\geq d_{2}\left[\tau \partial_{x}^{2}\omega +(1-\tau)\int_{g(t)}^{h(t)}J(x-y)\omega(t,y)dy\right] +[-k-d_{2}(1-\tau)+c(t,x)]\omega. \end{align*} We are now in a position to prove that $\omega\geq 0$ in $\overline{\Omega}_{T_{0}}^{g,h}$. Suppose that $\omega_{\inf}=\inf_{(t,x)\in \overline{\Omega}_{T_{0}}^{g,h}}\omega(t,x)<0$. By (2.3), $\omega\geq 0$ on the parabolic boundary of $\overline{\Omega}_{T_{0}}^{g,h}$, and hence there exists $(t_{*}, x_{*})\in \Omega_{T_{0}}^{g,h}$ such that $\omega_{\inf}=\omega(t_{*}, x_{*})<0$. Since $\partial_{t}\omega(t_{*}, x_{*})\leq 0$, $\partial_{x}^{2}\omega(t_{*}, x_{*})\geq 0$, then \begin{align*} &\partial_{t}\omega(t_{*}, x_{*})\\[5pt] &\geq d_{2}\left[\tau \partial_{x}^{2}\omega(t_{*}, x_{*}) +(1-\tau)\int_{g(t_{*})}^{h(t_{*})}J(x_{*}-y)\omega(t_{*},y)dy\right] +[-k-d_{2}(1-\tau)+c(t_{*},x_{*})]\omega(t_{*}, x_{*})\\[5pt] &\geq d_{2}\tau \partial_{x}^{2}\omega(t_{*}, x_{*}) +d_{2}(1-\tau)\omega_{\inf}\int_{\mathbb{R}}J(x_{*}-y)dy +[-k-d_{2}(1-\tau)+c(t_{*},x_{*})]\omega_{\inf}\\[5pt] &=d_{2}\tau \partial_{x}^{2}\omega(t_{*}, x_{*}) +[-k+c(t_{*},x_{*})]\omega_{\inf}.
\end{align*} Since $\partial_{x}^{2}\omega(t_{*}, x_{*})\geq 0$ and $[-k+c(t_{*},x_{*})]\omega_{\inf}>0$, this gives $\partial_{t}\omega(t_{*}, x_{*})>0$, contradicting $\partial_{t}\omega(t_{*}, x_{*})\leq 0$. Thus, $\omega(t,x)\geq 0$ in $\overline{\Omega}_{T_{0}}^{g,h}$, which implies that \begin{align*} v(t,x)\geq 0\quad \mbox{for~all}~(t,x)\in\overline{\Omega}_{T_{0}}^{g,h}. \tag{2.4} \end{align*} $(ii)$ Now assume that $v(0,x)\not\equiv 0$ in $[-h_{0},h_{0}]$. By (2.4) and the fact $J(x)\geq 0$, we have \begin{align*} \partial_{t}v &\geq d_{2}\left[\tau \partial_{x}^{2}v +(1-\tau)\left(\int_{g(t)}^{h(t)}J(x-y)v(t,y)dy-v\right)\right]+c(t,x)v\\[5pt] &\geq d_{2}[\tau\partial_{x}^{2}v-(1-\tau)v] +c(t,x)v\\[5pt] &=d_{2}\tau\partial_{x}^{2}v +[c(t,x)-d_{2}(1-\tau)]v. \end{align*} Define the transform \begin{align*} x(t,z)=\frac{(h(t)-g(t))z+h(t)+g(t)}{2}, \quad \mbox{that~is,}\quad z(t,x)=\frac{2x-g(t)-h(t)}{h(t)-g(t)}, \end{align*} and let $\tilde{v}(t,z)=v(t,x(t,z))$ and $\tilde{c}(t,z)=c(t,x(t,z))$, then $\tilde{v}(t,z)$ satisfies \begin{align*} \left\{\begin{array}{l} \partial_{t}\tilde{v}\geq d_{2}\tau\xi(t)\partial_{z}^{2}\tilde{v}+\eta(t,z)\partial_{z}\tilde{v} +[\tilde{c}(t,z)-d_{2}(1-\tau)]\tilde{v}, \quad (t,z)\in D_{T_{0}},\\[5pt] \tilde{v}(t,-1)\geq0,~\tilde{v}(t,1)\geq0,\quad t\in(0, T_{0}],\\[5pt] \tilde{v}(0, z)=v(0, h_{0}z)\geq 0, \quad z\in[-1, 1], \end{array}\right. \end{align*} where \begin{align*} \xi(t)=\frac{4}{(h(t)-g(t))^{2}},\quad \eta(t,z)=\frac{h^{\prime}(t)+g^{\prime}(t)}{h(t)-g(t)} +\frac{(h^{\prime}(t)-g^{\prime}(t))z}{h(t)-g(t)}. \end{align*} By the classical strong maximum principle for parabolic equations, we know \begin{align*} \tilde{v}(t,z)> 0,\quad \forall~(t,z)\in D_{T_{0}}. \end{align*} Thus, $v(t,x)>0$ in $\Omega_{T_{0}}^{g,h}$. This completes the proof. \hfill$\Box$\\ According to Lemma 2.1, we can derive the following comparison principle.\\ \noindent\textbf{Lemma 2.2.} (Comparison principle) Suppose that (\textbf{J}) holds, $(g,h)\in \mathbb{G}_{T_{0}}^{h_{0}}\times\mathbb{H}_{T_{0}}^{h_{0}}$ and $f(t,x,u,v)$ satisfies (\textbf{f4}).
Let $v_{1}(t,x), v_{2}(t,x)\in C^{1,2}(\Omega_{T_{0}}^{g,h})\cap C(\overline{\Omega}_{T_{0}}^{g,h})$ satisfy \begin{align*} \left\{\begin{array}{l} \partial_{t}v_{1}- d_{2}\left[\tau \partial_{x}^{2}v_{1} +(1-\tau)\left(\int_{g(t)}^{h(t)}J(x-y)v_{1}(t,y)dy-v_{1}\right)\right] -f(t,x,u,v_{1})\\[5pt] \quad \geq \partial_{t}v_{2}- d_{2}\left[\tau \partial_{x}^{2}v_{2} +(1-\tau)\left(\int_{g(t)}^{h(t)}J(x-y)v_{2}(t,y)dy-v_{2}\right)\right] -f(t,x,u,v_{2}), \quad (t,x)\in\Omega_{T_{0}}^{g,h},\\[5pt] v_{1}(t,x)\geq v_{2}(t,x), \quad t\in(0, T_{0}],~x=g(t)~\mbox{or}~x=h(t),\\[5pt] v_{1}(0, x)\geq v_{2}(0, x), \quad x\in[-h_{0},h_{0}]. \end{array}\right. \end{align*} If $u(t,x)\in [0,c_{1}]$, $v_{1}(t,x),v_{2}(t,x)\in [0, c_{2}]$ in $\overline{\Omega}_{T_{0}}^{g,h}$ for some constants $c_{1},c_{2}>0$, then we have \begin{align*} v_{1}(t,x)\geq v_{2}(t,x),\quad \forall(t,x)\in \Omega_{T_{0}}^{g,h}. \end{align*} If we further assume that $v_{1}(0,x)\not\equiv v_{2}(0,x)$ for $x\in[-h_{0},h_{0}]$, then \begin{align*} v_{1}(t,x)>v_{2}(t,x),\quad \forall(t,x)\in \Omega_{T_{0}}^{g,h}. \end{align*} \noindent\textbf{Proof.} Let $w=v_{1}-v_{2}$, then we have \begin{align*} \left\{\begin{array}{l} \partial_{t}w- d_{2}\left[\tau \partial_{x}^{2}w +(1-\tau)\left(\int_{g(t)}^{h(t)}J(x-y)w(t,y)dy-w\right)\right]\\[5pt] \quad\geq f(t,x,u,v_{1})-f(t,x,u,v_{2}), \quad (t,x)\in\Omega_{T_{0}}^{g,h},\\[5pt] w(t,x)\geq 0, \quad t\in(0, T_{0}],~x=g(t)~\mbox{or}~x=h(t),\\[5pt] w(0, x)\geq0, \quad x\in[-h_{0},h_{0}]. \end{array}\right. \end{align*} Since $f(t,x,u,v)$ satisfies (\textbf{f4}), we have \begin{align*} f(t,x,u,v_{1})-f(t,x,u,v_{2}) &=\int_{v_{2}}^{v_{1}}\frac{\partial f(t,x,u,\eta)}{\partial \eta}d\eta =\int_{0}^{1}\frac{\partial f(t,x,u,\eta)}{\partial \eta}\Big|_{\eta=v_{2}+s(v_{1}-v_{2})}ds\cdot(v_{1}-v_{2})\\[3pt] &=c(t,x)w. 
\end{align*} Denote by $\hat{L}(c_{1},c_{2})$ the Lipschitz constant of $f$ for $(t,x,u,v)\in \mathbb{R}^{+}\times \mathbb{R}\times[0,c_{1}]\times [0,c_{2}]$; then $\|c\|_{L^{\infty}}\leq \hat{L}(c_{1},c_{2})$. By applying Lemma 2.1, we can get the desired results. \hfill$\Box$\\ Next, by applying the classical upper and lower solutions method we shall prove that nonlinear parabolic equations with mixed dispersal (see (2.5)) admit a unique positive classical solution under the assumption that $g^{\prime}(t), h^{\prime}(t)$ and $u(t,x)$ are H\"{o}lder continuous. For some $h_{0}, T_{0}>0$, we define \begin{align*} &\widehat{\mathbb{H}}_{T_{0}}^{h_{0}} :=\{h\in C^{1+\frac{\alpha}{2}}([0,T_{0}]): h(0)=h_{0}, ~0<h^{\prime}(t)\leq R(t)\}, \\[3pt] &\widehat{\mathbb{G}}_{T_{0}}^{h_{0}} :=\{g\in C^{1+\frac{\alpha}{2}}([0,T_{0}]):~-g\in \widehat{\mathbb{H}}_{T_{0}}^{h_{0}}\}. \end{align*} \noindent\textbf{Lemma 2.3.} Suppose that (\textbf{J}) holds, $(g,h)\in \widehat{\mathbb{G}}_{T_{0}}^{h_{0}}\times\widehat{\mathbb{H}}_{T_{0}}^{h_{0}}$, $u\in C^{\frac{\alpha}{2},\alpha}(\overline{\Omega}_{T_{0}}^{g,h})$, $f_{2}$ satisfies (\textbf{f1})-(\textbf{f4}) and $v_{0}$ satisfies (1.2). Then for any $T_{0}>0$, the following problem \begin{align*} \left\{\begin{array}{l} \partial_{t}v= d_{2}\left[\tau \partial_{x}^{2}v +(1-\tau)\left(\int_{g(t)}^{h(t)}J(x-y)v(t,y)dy-v\right)\right]+f_{2}(t,x,u,v), \quad (t,x)\in\Omega_{T_{0}}^{g,h},\\[5pt] v(t,g(t))=v(t,h(t))=0,\quad t\in(0, T_{0}],\\[5pt] v(0, x)=v_{0}(x), \quad x\in[-h_{0}, h_{0}] \end{array}\right. \tag{2.5} \end{align*} admits a unique solution $v(t,x)\in C^{1+\frac{\alpha}{2},2+\alpha}(\Omega_{T_{0}}^{g,h})$. Moreover, $v(t,x)$ satisfies \begin{align*} \begin{array}{rl} &0<v(t,x)\leq K_{2} \quad\mbox{for}~(t,x)\in\Omega_{T_{0}}^{g,h},\\[3pt] &0<-v_{x}(t,h(t)), v_{x}(t,g(t))\leq K_{3} \quad\mbox{for}~t\in(0,T_{0}]. \end{array} \tag{2.6} \end{align*} \noindent\textbf{Proof.} We mainly adopt the classical upper and lower solutions method.
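Before turning to the proof, the comparison mechanism of Lemmas 2.1 and 2.2 can be observed numerically. The following minimal sketch freezes the boundaries at $\pm 1$ instead of letting them move, and uses a purely illustrative kernel $J$, coefficient $c$, and parameter values (none of these come from the model); it evolves the mixed-dispersal equation by explicit Euler and confirms that nonnegative initial data stay nonnegative.

```python
import math

# Numerical check of the positivity assertion of Lemma 2.1 on a frozen
# interval [-1, 1]; J, c and all parameters are illustrative choices.
N  = 81
xs = [-1 + 2*i/(N - 1) for i in range(N)]
dx = xs[1] - xs[0]

d2, tau = 1.0, 0.5
J = lambda r: max(0.0, 0.75*(1 - r*r))       # compactly supported, integral 1
c = lambda x: math.sin(x) - 0.5              # bounded zero-order coefficient

def step(v, dt):
    """One explicit Euler step of v_t = d2[tau v_xx + (1-tau)(J*v - v)] + c v."""
    out = [0.0]*N                            # homogeneous boundary values
    for i in range(1, N - 1):
        lap  = (v[i-1] - 2*v[i] + v[i+1])/dx**2
        conv = sum(J(xs[i] - xs[j])*v[j] for j in range(N))*dx
        out[i] = v[i] + dt*(d2*(tau*lap + (1 - tau)*(conv - v[i])) + c(xs[i])*v[i])
    return out

v  = [max(0.0, 1 - 4*x*x) for x in xs]       # nonnegative, not identically zero
dt = 0.2*dx*dx/(d2*tau)                      # monotonicity (CFL-type) restriction
for _ in range(200):
    v = step(v, dt)

print(min(v), max(v))                        # min stays >= 0
```

For $dt$ this small, each updated value is a nonnegative combination of nonnegative values, which is the discrete counterpart of the minimum-point argument in Lemma 2.1.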
Since mixed dispersal is involved, we give the details of the proof. A function $\bar{v}$ is called an upper solution of (2.5) if $\bar{v}\in C^{1,2}(\Omega_{T_{0}}^{g,h})\cap C(\overline{\Omega}_{T_{0}}^{g,h})$ satisfies \begin{align*} \left\{\begin{array}{l} \partial_{t}\bar{v}\geq d_{2}\left[\tau \partial_{x}^{2}\bar{v} +(1-\tau)\left(\int_{g(t)}^{h(t)}J(x-y)\bar{v}(t,y)dy-\bar{v}\right)\right]+f_{2}(t,x,u,\bar{v}), \quad (t,x)\in\Omega_{T_{0}}^{g,h},\\[5pt] \bar{v}(t,g(t))\geq 0,~\bar{v}(t,h(t))\geq0,\quad t\in(0, T_{0}],\\[5pt] \bar{v}(0, x)\geq v_{0}(x), \quad x\in[-h_{0}, h_{0}], \end{array}\right. \end{align*} and a function $\underline{v}$ is called a lower solution of (2.5) if all the above inequalities are reversed. \emph{Step 1.} We claim that, if $\bar{v}, \underline{v}$ are respectively nonnegative upper and lower solutions of (2.5), then (2.5) has a unique solution $v(t,x)$ satisfying \begin{align*} \underline{v}(t,x)\leq v(t,x)\leq \bar{v}(t,x), \quad \forall(t,x)\in \overline{\Omega}_{T_{0}}^{g,h}. \end{align*} Indeed, since $u\in C^{\frac{\alpha}{2},\alpha}(\overline{\Omega}_{T_{0}}^{g,h})$ and $\bar{v}, \underline{v}\in C(\overline{\Omega}_{T_{0}}^{g,h})$, there exists a constant $M>0$ such that $0\leq u,\bar{v}, \underline{v}\leq M$ for $(t,x)\in \overline{\Omega}_{T_{0}}^{g,h}$. By (\textbf{f4}), we have, for some constant $k>d_{2}(1-\tau)$, \begin{align*} |f_{2}(t,x,u,v_{1})-f_{2}(t,x,u,v_{2})|\leq [k-d_{2}(1-\tau)]|v_{1}-v_{2}| \quad \mbox{for~any~} (t,x)\in \overline{\Omega}_{T_{0}}^{g,h}~\mbox{and}~ u, v_{1}, v_{2}\in [0,M].
\end{align*} For any $\vartheta\in C(\overline{\Omega}_{T_{0}}^{g,h})$ satisfying $\vartheta\in[0,M]$, we define a mapping $\Phi$ by $v=\Phi \vartheta$, where $v\in C^{\frac{1+\alpha}{2},1+\alpha}(\overline{\Omega}_{T_{0}}^{g,h})$ is the unique solution of \begin{align*} \left\{\begin{array}{l} \partial_{t}v- d_{2}\tau \partial_{x}^{2}v+kv =d_{2}(1-\tau)\left(\int_{g(t)}^{h(t)}J(x-y)\vartheta(t,y)dy-\vartheta\right) +f_{2}(t,x,u,\vartheta)+k\vartheta, \quad (t,x)\in\Omega_{T_{0}}^{g,h},\\[5pt] v(t,g(t))=v(t,h(t))=0,\quad t\in(0, T_{0}],\\[5pt] v(0, x)=v_{0}(x), \quad x\in[-h_{0}, h_{0}]. \end{array}\right. \tag{2.7} \end{align*} The existence and uniqueness of $v\in C^{\frac{1+\alpha}{2},1+\alpha}(\overline{\Omega}_{T_{0}}^{g,h})$ is guaranteed by the $L^{p}$ theory for linear parabolic equations and the Sobolev imbedding theorem. More precisely, let $\tilde{v}(t,z)=v(t,x(t,z))$, $\tilde{u}(t,z)=u(t,x(t,z))$, $\tilde{\vartheta}(t,z)=\vartheta(t,x(t,z))$ and $\tilde{f}_{2}(t,z,\tilde{u},\tilde{\vartheta})=f_{2}(t,x(t,z),\tilde{u},\tilde{\vartheta})$, then (2.7) becomes \begin{align*} \left\{\begin{array}{l} \partial_{t}\tilde{v}- d_{2}\tau\xi(t)\partial_{z}^{2}\tilde{v}-\eta(t,z)\partial_{z}\tilde{v}+k\tilde{v} =d_{2}(1-\tau)\left(\frac{h(t)-g(t)}{2}\int_{-1}^{1}J(\frac{h(t)-g(t)}{2}(z-s)) \tilde{\vartheta}ds -\tilde{\vartheta}\right)\\[5pt] \qquad\qquad\qquad\qquad +\tilde{f}_{2}(t,z,\tilde{u},\tilde{\vartheta})+k\tilde{\vartheta}, \quad (t,z)\in D_{T_{0}},\\[5pt] \tilde{v}(t,-1)=\tilde{v}(t,1)=0,\quad t\in(0, T_{0}],\\[5pt] \tilde{v}(0, z)=v_{0}(h_{0}z), \quad z\in[-1, 1]. \end{array}\right. \tag{2.8} \end{align*} Note that the right-hand side of the equation in (2.8) is continuous in $\overline{D}_{T_{0}}$ and hence belongs to $L^{p}(D_{T_{0}})$ with any $p>3$, $\xi(t)\in C([0,T_{0}])$ with $\|\xi\|_{L^{\infty}((0,T_{0}))}\leq \frac{1}{h_{0}^{2}}$ and $\|\eta\|_{L^{\infty}((0,T_{0}))}\leq \frac{2R(T_{0})}{h_{0}}$.
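The change of variables behind (2.8) rests on the identities $\xi(t)\partial_{z}^{2}\tilde{v}=\partial_{x}^{2}v$ and $\partial_{t}\tilde{v}-\eta(t,z)\partial_{z}\tilde{v}=\partial_{t}v$ along $x=x(t,z)$. As a sanity check, the sketch below verifies both identities by central finite differences for one concrete, purely illustrative choice of $v$, $g$ and $h$ (none of these come from the model).

```python
import math

# Finite-difference check of the straightening transform
# x(t,z) = ((h-g)z + h + g)/2; g, h and v are illustrative choices only.
g,  h  = lambda t: -1.0 - t,  lambda t: 1.0 + 0.5*t
gp, hp = lambda t: -1.0,      lambda t: 0.5          # g'(t), h'(t)
v      = lambda t, x: math.exp(t)*math.sin(x)        # a smooth test function

x_of = lambda t, z: ((h(t) - g(t))*z + h(t) + g(t))/2
vtil = lambda t, z: v(t, x_of(t, z))                 # vtilde(t,z) = v(t, x(t,z))

t0, z0, e = 0.3, 0.2, 1e-4
xi  = 4.0/(h(t0) - g(t0))**2
eta = (hp(t0) + gp(t0))/(h(t0) - g(t0)) + (hp(t0) - gp(t0))*z0/(h(t0) - g(t0))

vt  = (vtil(t0 + e, z0) - vtil(t0 - e, z0))/(2*e)    # central differences
vz  = (vtil(t0, z0 + e) - vtil(t0, z0 - e))/(2*e)
vzz = (vtil(t0, z0 + e) - 2*vtil(t0, z0) + vtil(t0, z0 - e))/e**2

x0   = x_of(t0, z0)
# for this particular v: v_xx = -exp(t)sin(x) and v_t = exp(t)sin(x)
err1 = abs(xi*vzz - (-math.exp(t0)*math.sin(x0)))    # xi*vtilde_zz = v_xx
err2 = abs(vt - eta*vz - math.exp(t0)*math.sin(x0))  # vtilde_t - eta*vtilde_z = v_t
print(err1, err2)                                    # both at discretization level
```

Both errors are at the level of the finite-difference truncation, consistent with the formulas for $\xi$ and $\eta$.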
Applying the $L^{p}$ theory to (2.8) and the Sobolev imbedding theorem, we can obtain a unique solution $\tilde{v}\in W^{1,2}_{p}(D_{T_{0}})\hookrightarrow C^{\frac{1+\alpha}{2},1+\alpha}(\overline{D}_{T_{0}})$, and then get a unique solution $v\in C^{\frac{1+\alpha}{2},1+\alpha}(\overline{\Omega}_{T_{0}}^{g,h})$ to (2.7). We shall show that $\Phi$ is monotone in the sense that if any $\vartheta_{1}, \vartheta_{2}\in C(\overline{\Omega}_{T_{0}}^{g,h})$ satisfy $0\leq \vartheta_{1},\vartheta_{2}\leq M$ and $\vartheta_{2}\geq \vartheta_{1}$, then $\Phi \vartheta_{2}\geq \Phi \vartheta_{1}$. To see this, let $w=\Phi \vartheta_{2}-\Phi \vartheta_{1}$, then $w$ satisfies \begin{align*} \left\{\begin{array}{l} \partial_{t}w-d_{2}\tau \partial_{x}^{2}w+kw=d_{2}(1-\tau)\left(\int_{g(t)}^{h(t)}J(x-y)(\vartheta_{2}(t,y)-\vartheta_{1}(t,y))dy -(\vartheta_{2}-\vartheta_{1})\right)\\[5pt] \qquad\qquad\qquad\qquad\quad +f_{2}(t,x,u,\vartheta_{2})-f_{2}(t,x,u,\vartheta_{1}) +k(\vartheta_{2}-\vartheta_{1}), \quad (t,x)\in\Omega_{T_{0}}^{g,h},\\[5pt] w(t,g(t))=w(t,h(t))=0,\quad t\in(0, T_{0}],\\[5pt] w(0, x)=0, \quad x\in[-h_{0}, h_{0}]. \end{array}\right. \tag{2.9} \end{align*} Since the equation in (2.9) satisfies \begin{align*} \begin{array}{rl} \partial_{t}w- d_{2}\tau \partial_{x}^{2}w+kw &=d_{2}(1-\tau)\left(\int_{g(t)}^{h(t)}J(x-y)(\vartheta_{2}(t,y)-\vartheta_{1}(t,y))dy -(\vartheta_{2}-\vartheta_{1})\right)\\[5pt] &\quad+f_{2}(t,x,u,\vartheta_{2})-f_{2}(t,x,u,\vartheta_{1}) +k(\vartheta_{2}-\vartheta_{1})\\[5pt] &\geq -d_{2}(1-\tau)(\vartheta_{2}-\vartheta_{1}) +f_{2}(t,x,u,\vartheta_{2})-f_{2}(t,x,u,\vartheta_{1}) +k(\vartheta_{2}-\vartheta_{1})\\[5pt] &=f_{2}(t,x,u,\vartheta_{2})-f_{2}(t,x,u,\vartheta_{1}) +[k-d_{2}(1-\tau)](\vartheta_{2}-\vartheta_{1})\\[5pt] &\geq 0, \end{array} \end{align*} arguing as in the proof of Lemma 2.1 $(ii)$, we can get $\tilde{w}(t,z)=w(t,x(t,z))\geq 0$ in $\overline{D}_{T_{0}}$ by the maximum principle for linear parabolic equations.
Thus, we have $w(t,x)\geq 0$ in $\overline{\Omega}_{T_{0}}^{g,h}$ and then $\Phi \vartheta_{2}\geq \Phi \vartheta_{1}$. Next, we shall show that $\Phi \vartheta\leq \vartheta$ provided that $\vartheta$ is an upper solution. In fact, let $v=\Phi \vartheta$, then \begin{align*} \left\{\begin{array}{l} \partial_{t}(\vartheta-v)-d_{2}\tau \partial_{x}^{2}(\vartheta-v)+k(\vartheta-v)\geq 0, \quad (t,x)\in\Omega_{T_{0}}^{g,h},\\[5pt] (\vartheta-v)(t,g(t))\geq 0,~(\vartheta-v)(t,h(t))\geq0,\quad t\in(0, T_{0}],\\[5pt] (\vartheta-v)(0, x)\geq0, \quad x\in[-h_{0}, h_{0}]. \end{array}\right. \end{align*} Arguing as above, we have $\vartheta-v\geq 0$ in $\overline{\Omega}_{T_{0}}^{g,h}$, i.e., $\Phi \vartheta\leq \vartheta$. Similarly, we can also prove that $\Phi \vartheta\geq \vartheta$ provided that $\vartheta$ is a lower solution. We then construct two sequences $\{v^{(n)}\}$ and $\{w^{(n)}\}$ as follows \begin{align*} &v^{(1)}=\Phi\bar{v},~v^{(2)}=\Phi v^{(1)},~\cdots,~v^{(n)}=\Phi v^{(n-1)},~ \cdots,\\[3pt] &w^{(1)}=\Phi\underline{v},~w^{(2)}=\Phi w^{(1)},~\cdots,~w^{(n)}=\Phi w^{(n-1)},~ \cdots. \end{align*} Thus, \begin{align*} \underline{v}\leq w^{(1)}\leq w^{(2)}\leq \cdots \leq w^{(n)}\leq v^{(n)}\leq \cdots\leq v^{(2)} \leq v^{(1)}\leq \bar{v}. \end{align*} We conclude that the pointwise limits \begin{align*} w^{*}(t,x)=\lim_{n\rightarrow\infty}w^{(n)}(t,x),~ v^{*}(t,x)=\lim_{n\rightarrow\infty}v^{(n)}(t,x) \end{align*} exist at each point in $\Omega_{T_{0}}^{g,h}$ and \begin{align*} \underline{v}(t,x)\leq w^{*}(t,x)\leq v^{*}(t,x)\leq\bar{v}(t,x) \quad \mbox{in}~\Omega_{T_{0}}^{g,h}. \tag{2.10} \end{align*} Now we show that $w^{*}(t,x), v^{*}(t,x)$ are solutions of (2.5). We claim that the operator $\Phi: D\rightarrow C(\overline{\Omega}_{T_{0}}^{g,h})$ is compact, where $D:=\{v\in C(\overline{\Omega}_{T_{0}}^{g,h}); \underline{v}(t,x)\leq v(t,x)\leq \bar{v}(t,x), ~\forall (t,x)\in \overline{\Omega}_{T_{0}}^{g,h}\}$. We first prove that $\Phi$ is continuous.
For any $\vartheta_{1}, \vartheta_{2}\in D$, we still define $w=\Phi \vartheta_{2}-\Phi \vartheta_{1}$, then $w$ satisfies (2.9). Let $\tilde{w}(t,z)=w(t,x(t,z))$, $\tilde{u}(t,z)=u(t,x(t,z))$, $\tilde{\vartheta}_{i}(t,z)=\vartheta_{i}(t,x(t,z))$ and $\tilde{f}_{2}(t,z,\tilde{u},\tilde{\vartheta}_{i})=f_{2}(t,x(t,z),\tilde{u},\tilde{\vartheta}_{i})$ $(i=1,2)$, then (2.9) becomes \begin{align*} \left\{\begin{array}{l} \partial_{t}\tilde{w}- d_{2}\tau\xi(t)\partial_{z}^{2}\tilde{w}-\eta(t,z)\partial_{z}\tilde{w}+k\tilde{w}\\[5pt] =d_{2}(1-\tau)\left(\frac{h(t)-g(t)}{2}\int_{-1}^{1}J(\frac{h(t)-g(t)}{2}(z-s)) (\tilde{\vartheta}_{2}(t,s)-\tilde{\vartheta}_{1}(t,s))ds -(\tilde{\vartheta}_{2}-\tilde{\vartheta}_{1})\right)\\[5pt] \quad +\tilde{f}_{2}(t,z,\tilde{u},\tilde{\vartheta}_{2})-\tilde{f}_{2}(t,z,\tilde{u},\tilde{\vartheta}_{1}) +k(\tilde{\vartheta}_{2}-\tilde{\vartheta}_{1}), \quad (t,z)\in D_{T_{0}},\\[5pt] \tilde{w}(t,-1)=\tilde{w}(t,1)=0,\quad t\in(0, T_{0}],\\[5pt] \tilde{w}(0, z)=0, \quad z\in[-1, 1]. \end{array}\right. 
\tag{2.11} \end{align*} Applying the $L^{p}$ theory to (2.11), for any $p>1$, \begin{align*} \begin{array}{rl} \|\tilde{w}\|_{W^{1,2}_{p}(D_{T_{0}})} &\leq C_{1}\Big[\big\|\frac{h(t)-g(t)}{2}\int_{-1}^{1}J(\frac{h(t)-g(t)}{2}(z-s)) (\tilde{\vartheta}_{2}(t,s)-\tilde{\vartheta}_{1}(t,s))ds\big\|_{L^{p}(D_{T_{0}})}\\[5pt] &\qquad+\|\tilde{f}_{2}(t,z,\tilde{u},\tilde{\vartheta}_{2})-\tilde{f}_{2}(t,z,\tilde{u},\tilde{\vartheta}_{1})\|_{L^{p}(D_{T_{0}})} +\|\tilde{\vartheta}_{2}-\tilde{\vartheta}_{1}\|_{L^{p}(D_{T_{0}})}\Big]\\[5pt] &\leq C_{2}\|\tilde{\vartheta}_{2}-\tilde{\vartheta}_{1}\|_{C(\overline{D}_{T_{0}})}, \end{array} \end{align*} where we have used the estimates \begin{align*} \begin{array}{rl} &\left\|\frac{h(t)-g(t)}{2}\int_{-1}^{1}J(\frac{h(t)-g(t)}{2}(z-s)) (\tilde{\vartheta}_{2}(t,s)-\tilde{\vartheta}_{1}(t,s))ds\right\|_{L^{p}(D_{T_{0}})}\\[5pt] &\leq\|\tilde{\vartheta}_{2}-\tilde{\vartheta}_{1}\|_{C(\overline{D}_{T_{0}})} \left\|\frac{h(t)-g(t)}{2}\int_{-1}^{1}J(\frac{h(t)-g(t)}{2}(z-s))ds\right\|_{L^{p}(D_{T_{0}})}\\[5pt] &=\|\tilde{\vartheta}_{2}-\tilde{\vartheta}_{1}\|_{C(\overline{D}_{T_{0}})} \left\|\int_{\frac{h(t)-g(t)}{2}(z-1)}^{\frac{h(t)-g(t)}{2}(z+1)}J(y)dy\right\|_{L^{p}(D_{T_{0}})} \leq \|\tilde{\vartheta}_{2}-\tilde{\vartheta}_{1}\|_{C(\overline{D}_{T_{0}})} \|J\|_{L^{1}(\mathbb{R})}(2T_{0})^{\frac{1}{p}}\\[5pt] &=\|\tilde{\vartheta}_{2}-\tilde{\vartheta}_{1}\|_{C(\overline{D}_{T_{0}})} (2T_{0})^{\frac{1}{p}} \end{array} \end{align*} and \begin{align*} &\|\tilde{f}_{2}(t,z,\tilde{u},\tilde{\vartheta}_{2})-\tilde{f}_{2}(t,z,\tilde{u},\tilde{\vartheta}_{1})\|_{L^{p}(D_{T_{0}})} =\|f_{2}(t,x(t,z),\tilde{u},\tilde{\vartheta}_{2})-f_{2}(t,x(t,z),\tilde{u},\tilde{\vartheta}_{1})\|_{L^{p}(D_{T_{0}})}\\[5pt] &\leq \|\hat{L}(\tilde{\vartheta}_{2}-\tilde{\vartheta}_{1})\|_{L^{p}(D_{T_{0}})} \leq \hat{L}\|\tilde{\vartheta}_{2}-\tilde{\vartheta}_{1}\|_{C(\overline{D}_{T_{0}})} (2T_{0})^{\frac{1}{p}}. 
\end{align*} By the Sobolev imbedding theorem, we have \begin{align*} \|\tilde{w}\|_{C(\overline{D}_{T_{0}})} \leq \|\tilde{w}\|_{C^{\frac{\alpha}{2},\alpha}(\overline{D}_{T_{0}})} \leq C_{3}\|\tilde{w}\|_{W^{1,2}_{p}(D_{T_{0}})} \leq C_{4}\|\tilde{\vartheta}_{2}-\tilde{\vartheta}_{1}\|_{C(\overline{D}_{T_{0}})}, \end{align*} which is equivalent to \begin{align*} \|w\|_{C(\overline{\Omega}_{T_{0}}^{g,h})} \leq \|w\|_{C^{\frac{\alpha}{2},\alpha}(\overline{\Omega}_{T_{0}}^{g,h})} \leq C_{5}\|w\|_{W^{1,2}_{p}(\Omega_{T_{0}}^{g,h})} \leq C_{6}\|\vartheta_{2}-\vartheta_{1}\|_{C(\overline{\Omega}_{T_{0}}^{g,h})}. \end{align*} Thus, $\Phi: D\rightarrow C(\overline{\Omega}_{T_{0}}^{g,h})$ is continuous. Arguing as above, we can show that, for any given constant $M_{1}>0$, there exists a constant $M_{2}>0$ independent of $\vartheta$ such that $\|\Phi \vartheta\|_{C^{\frac{\alpha}{2},\alpha}(\overline{\Omega}_{T_{0}}^{g,h})}\leq M_{2}$ for any $\vartheta$ satisfying $\|\vartheta\|_{C(\overline{\Omega}_{T_{0}}^{g,h})}\leq M_{1}$, which implies that $\Phi: D\rightarrow C(\overline{\Omega}_{T_{0}}^{g,h})$ is a compact operator. Thus, from the fact $\|v^{(n)}\|_{C(\overline{\Omega}_{T_{0}}^{g,h})}\leq M$ we know $\{v^{(n)}\}=\{\Phi v^{(n-1)}\}$ has a convergent subsequence in $C(\overline{\Omega}_{T_{0}}^{g,h})$. By the monotonicity of $v^{(n)}$ in $n$, we have $v^{(n)}\rightarrow v^{*}$ in $C(\overline{\Omega}_{T_{0}}^{g,h})$. Therefore, $v^{*}=\Phi v^{*}$, which means $v^{*}\in C^{\frac{1+\alpha}{2},1+\alpha}(\overline{\Omega}_{T_{0}}^{g,h})$ is a solution of (2.5), and then $\tilde{v}^{*}\in C^{\frac{1+\alpha}{2},1+\alpha}(\overline{D}_{T_{0}})$ is a solution of (2.8) with $\tilde{v},\tilde{\vartheta}$ replaced by $\tilde{v}^{*}$. Since $h,g\in C^{1+\frac{\alpha}{2}}([0, T_{0}])$, we have $\xi\in C^{\frac{\alpha}{2}}([0, T_{0}])$ and $\eta\in C^{\frac{\alpha}{2},\alpha}(\overline{D}_{T_{0}})$.
Moreover, by the assumption of $f_{2}$, we know $f_{2}\in C^{\frac{\alpha}{2},\alpha}(\overline{D}_{T_{0}})$. By the Lipschitz continuity of $J$, we deduce \begin{align*} \begin{array}{rl} &\left|\frac{h(t_{1})-g(t_{1})}{2}\int_{-1}^{1}J(\frac{h(t_{1})-g(t_{1})}{2}(z_{1}-s)) \tilde{v}^{*}(t_{1},s)ds -\frac{h(t_{2})-g(t_{2})}{2}\int_{-1}^{1}J(\frac{h(t_{2})-g(t_{2})}{2}(z_{2}-s)) \tilde{v}^{*}(t_{2},s)ds\right|\\[5pt] &\leq \left|\frac{h(t_{1})-g(t_{1})}{2}\int_{-1}^{1}\left[J(\frac{h(t_{1})-g(t_{1})}{2}(z_{1}-s)) -J(\frac{h(t_{2})-g(t_{2})}{2}(z_{2}-s))\right] \tilde{v}^{*}(t_{1},s)ds\right|\\[5pt] &\quad+\left|\int_{-1}^{1}J(\frac{h(t_{2})-g(t_{2})}{2}(z_{2}-s)) \left[\frac{h(t_{1})-g(t_{1})}{2}\tilde{v}^{*}(t_{1},s) -\frac{h(t_{2})-g(t_{2})}{2} \tilde{v}^{*}(t_{2},s)\right]ds\right|\\[5pt] &\leq \frac{h(t_{1})-g(t_{1})}{2}\int_{-1}^{1}L\left|\frac{h(t_{1})-g(t_{1})}{2}(z_{1}-s) -\frac{h(t_{2})-g(t_{2})}{2}(z_{2}-s)\right| \tilde{v}^{*}(t_{1},s)ds\\[5pt] &\quad+\left|\int_{-1}^{1}J(\frac{h(t_{2})-g(t_{2})}{2}(z_{2}-s)) \left[\frac{h(t_{1})-g(t_{1})}{2}\tilde{v}^{*}(t_{1},s) -\frac{h(t_{2})-g(t_{2})}{2} \tilde{v}^{*}(t_{2},s)\right]ds\right|\\[5pt] &\leq C\int_{-1}^{1}\left|\frac{h(t_{1})-g(t_{1})}{2}(z_{1}-s) -\frac{h(t_{2})-g(t_{2})}{2}(z_{2}-s)\right|ds\\[5pt] &\quad+C\int_{-1}^{1} \left|\frac{h(t_{1})-g(t_{1})}{2}\tilde{v}^{*}(t_{1},s) -\frac{h(t_{2})-g(t_{2})}{2}\tilde{v}^{*}(t_{2},s)\right|ds\\[5pt] &\leq C\int_{-1}^{1}\left|\frac{h(t_{1})-g(t_{1})}{2}(z_{1}-z_{2})\right|ds +C\int_{-1}^{1}\left|(\frac{h(t_{1})-h(t_{2})}{2} -\frac{g(t_{1})-g(t_{2})}{2})(z_{2}-s)\right|ds\\[5pt] &\quad+C\int_{-1}^{1} \left|\frac{h(t_{1})-g(t_{1})}{2}(\tilde{v}^{*}(t_{1},s) -\tilde{v}^{*}(t_{2},s))\right|ds +C\int_{-1}^{1} \left|(\frac{h(t_{1})-h(t_{2})}{2} -\frac{g(t_{1})-g(t_{2})}{2})\tilde{v}^{*}(t_{2},s)\right|ds\\[5pt] &\leq C(|z_{1}-z_{2}|+|t_{1}-t_{2}|+\int_{-1}^{1}|\tilde{v}^{*}(t_{1},s) -\tilde{v}^{*}(t_{2},s)|ds)\\[5pt] &\leq 
C(|z_{1}-z_{2}|^{\alpha}+|t_{1}-t_{2}|^{\frac{\alpha}{2}}), \end{array} \tag{2.12} \end{align*} which means that $\frac{h(t)-g(t)}{2}\int_{-1}^{1}J(\frac{h(t)-g(t)}{2}(z-s)) \tilde{v}^{*}ds\in C^{\frac{\alpha}{2},\alpha}(\overline{D}_{T_{0}})$. Applying the Schauder regularity theory to (2.8) with $\tilde{v},\tilde{\vartheta}$ replaced by $\tilde{v}^{*}$, we can deduce that $\tilde{v}^{*}\in C^{1+\frac{\alpha}{2},2+\alpha}(D_{T_{0}})$, and then $v^{*}\in C^{1+\frac{\alpha}{2},2+\alpha}(\Omega_{T_{0}}^{g,h})$ is a classical solution to (2.5). Similarly, we can prove that $w^{*}$ is also a classical solution of (2.5). Now we prove the uniqueness of the solution in $[\underline{v},\bar{v}]$. In (2.10), we have obtained $w^{*}\leq v^{*}$. By Lemma 2.2, we also get $w^{*}\geq v^{*}$. Thus, $w^{*}=v^{*}$. If $v(t,x)$ is a solution of (2.5) and satisfies $\underline{v}\leq v\leq \bar{v}$, then $v=\Phi v$. From Step 1, we know \begin{align*} w_{n}=\Phi^{n}\underline{v}\leq \Phi^{n}v=v \leq \Phi^{n}\bar{v}=v_{n}. \end{align*} Since $\lim_{n\rightarrow \infty}w_{n}=w^{*}=v^{*}=\lim_{n\rightarrow \infty}v_{n}$, we have \begin{align*} w^{*}(t,x)=v(t,x)=v^{*}(t,x). \end{align*} \emph{Step 2.} It is easy to check that $\underline{v}=0$ and $\bar{v}=K_{2}$ are lower and upper solutions of (2.5), respectively. Then (2.5) has a unique solution $v$ with $0\leq v\leq K_{2}$, and Lemma 2.1 yields $v>0$ in $\Omega_{T_{0}}^{g,h}$. Note that $f_{2}(t,x,u,v)$ satisfies the assumption (\textbf{f4}); Lemma 2.2 implies that $v$ is the unique solution of (2.5). We define \begin{align*} \Omega:= \{(t,x):0<t\leq T_{0},~h(t)-M^{-1}<x<h(t)\} \end{align*} and construct an auxiliary function \begin{align*} \psi(t,x)=K_{2}[2M(h(t)-x)-M^{2}(h(t)-x)^{2}]. \end{align*} We will choose $M$ such that $\psi(t,x)\geq v(t,x)$ holds over $\Omega$. Direct calculations show that, for $(t,x)\in \Omega$, \begin{align*} &\partial_{t}\psi=2K_{2}Mh^{\prime}(t)(1-M(h(t)-x))\geq 0,\\[5pt] &-\partial_{xx}\psi=2K_{2}M^{2},~f_{2}(t,x,u,v)\leq \hat{L}v.
\end{align*} It follows that \begin{align*} \begin{array}{rl} &\partial_{t}\psi-d_{2}\left[\tau \partial_{xx}\psi +(1-\tau)\left(\int_{g(t)}^{h(t)}J(x-y)\psi(t,y)dy-\psi\right)\right]\\[5pt] &\geq 2d_{2}\tau K_{2}M^{2}-d_{2}(1-\tau)K_{2}\int_{g(t)}^{h(t)}J(x-y)dy\\[5pt] &\geq 2d_{2}\tau K_{2}M^{2}-d_{2}(1-\tau)K_{2}\geq \hat{L}K_{2}\\[5pt] &\geq \hat{L}v\geq \partial_{t}v-d_{2}\left[\tau \partial_{xx}v +(1-\tau)\left(\int_{g(t)}^{h(t)}J(x-y)v(t,y)dy-v\right)\right] \quad \mbox{in}~\Omega, \end{array} \end{align*} if $M^{2}\geq \frac{\hat{L}+d_{2}(1-\tau)}{2d_{2}\tau}$. On the other hand, \begin{align*} \psi(t,h(t)-M^{-1})=K_{2}\geq v(t,h(t)-M^{-1}),\quad \psi(t,h(t))=0=v(t,h(t)). \end{align*} Choosing \begin{align*} \begin{array}{rl} M:=\max\left\{\sqrt{\frac{\hat{L}+d_{2}(1-\tau)}{2d_{2}\tau}}, \frac{4\|v_{0}\|_{C^{1}([-h_{0},h_{0}])}}{3K_{2}}\right\}, \end{array} \end{align*} we can prove that $v_{0}(x)\leq \psi(0,x)$ for $x\in [h_{0}-M^{-1},h_{0}]$. Then we can apply Lemma 2.1 to $\psi-v$ over $\Omega$ to deduce that \begin{align*} v(t,x)\leq \psi(t,x)\quad\mbox{for}~(t,x)\in \Omega. \end{align*} It then follows that \begin{align*} v_{x}(t,h(t))\geq-2K_{2}M. \end{align*} Moreover, since $v(t,h(t))=0$ and $v>0$ in $\Omega_{T_{0}}^{g,h}$, we have $v_{x}(t,h(t))<0$. The estimates for $v_{x}(t,g(t))$ can be obtained similarly. \hfill $\Box$\\ Now, by an approximation method we obtain the unique strong solution of (2.5) when $g^{\prime}(t),h^{\prime}(t)$ and $u(t,x)$ are merely continuous; this plays an important role in the proof of Lemma 2.5 later.\\ \noindent\textbf{Lemma 2.4.} Suppose that (\textbf{J}) holds, $(g,h)\in \mathbb{G}_{T_{0}}^{h_{0}}\times\mathbb{H}_{T_{0}}^{h_{0}}$, $u\in C(\overline{\Omega}_{T_{0}}^{g,h})$, $f_{2}$ satisfies (\textbf{f1})-(\textbf{f4}) and $v_{0}$ satisfies (1.2).
Then the problem (2.5) admits a unique solution $v\in W_{p}^{1,2}(\Omega_{T_{0}}^{g,h})\cap C^{\frac{1+\alpha}{2},1+\alpha}(\overline{\Omega}_{T_{0}}^{g,h})$ with any $p>3$. Moreover, $v$ satisfies (2.6).\\ \noindent\textbf{Proof.} \emph{Step 1.} (Uniqueness) Let $\tilde{v}(t,z)=v(t,x(t,z))$, $\tilde{f}_{2}(t,z,\tilde{u}, \tilde{v})=f_{2}(t,x(t,z),u(t,x(t,z)),v(t,x(t,z)))$, then (2.5) becomes \begin{align*} \left\{\begin{array}{l} \partial_{t}\tilde{v}= d_{2}\tau\xi(t)\partial_{z}^{2}\tilde{v}+\eta(t,z)\partial_{z}\tilde{v}\\[5pt] \qquad+d_{2}(1-\tau)\left(\frac{h(t)-g(t)}{2}\int_{-1}^{1}J(\frac{h(t)-g(t)}{2}(z-s)) \tilde{v}(t,s)ds-\tilde{v}\right) +\tilde{f}_{2}(t,z,\tilde{u}, \tilde{v}), \quad (t,z)\in D_{T_{0}},\\[5pt] \tilde{v}(t,-1)=\tilde{v}(t,1)=0,\quad t\in(0, T_{0}],\\[5pt] \tilde{v}(0, z)=v_{0}(h_{0}z), \quad z\in[-1, 1]. \end{array}\right. \tag{2.13} \end{align*} Assume that $v_{i}(t,x)\in W_{p}^{1,2}(\Omega_{T_{0}}^{g,h})\cap C^{\frac{1+\alpha}{2},1+\alpha}(\overline{\Omega}_{T_{0}}^{g,h})$, $i=1,2$, are two solutions of (2.5), then $\tilde{v}_{i}(t,z)=v_{i}(t,x(t,z))\in W_{p}^{1,2}(D_{T_{0}})\cap C^{\frac{1+\alpha}{2},1+\alpha}(\overline{D}_{T_{0}})$ are two solutions of (2.13). Let $\tilde{w}=\tilde{v}_{1}-\tilde{v}_{2}$, then $\tilde{w}$ satisfies \begin{align*} \left\{\begin{array}{l} \partial_{t}\tilde{w}= d_{2}\tau\xi(t)\partial_{z}^{2}\tilde{w}+\eta(t,z)\partial_{z}\tilde{w} +d_{2}(1-\tau)\left(\frac{h(t)-g(t)}{2}\int_{-1}^{1}J(\frac{h(t)-g(t)}{2}(z-s)) \tilde{w}(t,s)ds-\tilde{w}\right)\\[5pt] \qquad +\tilde{f}_{2}(t,z,\tilde{u}, \tilde{v}_{1}) -\tilde{f}_{2}(t,z,\tilde{u}, \tilde{v}_{2}), \quad (t,z)\in D_{T_{0}},\\[5pt] \tilde{w}(t,-1)=\tilde{w}(t,1)=0,\quad t\in(0, T_{0}],\\[5pt] \tilde{w}(0, z)=0, \quad z\in[-1, 1]. \end{array}\right.
\tag{2.14} \end{align*} Multiplying the equation in (2.14) by $\tilde{w}\chi_{[0,t]}$, where $\chi_{[0,t]}$ is the characteristic function in $[0,t]$ with any $0<t\leq T_{0}$, and then integrating over $(0,T_{0}]\times [-1,1]$ gives \begin{align*} \begin{array}{rl} &\frac{1}{2}\int_{-1}^{1}\tilde{w}^{2}(t,z)\Big|_{0}^{t}dz\\[5pt] &=-d_{2}\tau\int_{0}^{t}\int_{-1}^{1}\xi(t)(\partial_{z}\tilde{w})^{2}dzdt +\int_{0}^{t}\int_{-1}^{1}\eta(t,z)\tilde{w}\partial_{z}\tilde{w}dzdt\\[5pt] &\quad+d_{2}(1-\tau)\int_{0}^{t}\int_{-1}^{1}\left(\frac{h(t)-g(t)}{2}\int_{-1}^{1}J(\frac{h(t)-g(t)}{2}(z-s)) \tilde{w}(t,s)ds-\tilde{w}(t,z)\right)\tilde{w}(t,z)dzdt\\[5pt] &\quad +\int_{0}^{t}\int_{-1}^{1}[\tilde{f}_{2}(t,z,\tilde{u},\tilde{v}_{1}) -\tilde{f}_{2}(t,z,\tilde{u},\tilde{v}_{2})]\tilde{w}(t,z)dzdt. \end{array} \end{align*} By Young's inequality with $0<\varepsilon<\frac{4d_{2}\tau}{(h(T_{0})-g(T_{0}))^{2}}$, \begin{align*} \begin{array}{rl} \int_{0}^{t}\int_{-1}^{1}\eta(t,z)\tilde{w}\partial_{z}\tilde{w}dzdt \leq \varepsilon \int_{0}^{t}\int_{-1}^{1}(\partial_{z}\tilde{w})^{2}dzdt +C(\varepsilon)\int_{0}^{t}\int_{-1}^{1}\tilde{w}^{2}dzdt. \end{array} \end{align*} By the continuity of $J$ and the H\"{o}lder inequality, \begin{align*} \begin{array}{rl} &d_{2}(1-\tau)\int_{0}^{t}\int_{-1}^{1}\left(\frac{h(t)-g(t)}{2}\int_{-1}^{1}J(\frac{h(t)-g(t)}{2}(z-s)) \tilde{w}(t,s)ds-\tilde{w}(t,z)\right)\tilde{w}(t,z)dzdt\\[5pt] &\leq d_{2}(1-\tau)C\int_{0}^{t}(\int_{-1}^{1}|\tilde{w}(t,z)|dz)^{2}dt -d_{2}(1-\tau)\int_{0}^{t}\int_{-1}^{1}\tilde{w}^{2}dzdt\\[5pt] &\leq d_{2}(1-\tau)C_{1}\int_{0}^{t}\int_{-1}^{1}\tilde{w}^{2}dzdt. \end{array} \end{align*} By the Lipschitz continuity of $f_{2}$ with respect to $\tilde{v}$, \begin{align*} \begin{array}{rl} \int_{0}^{t}\int_{-1}^{1}[\tilde{f}_{2}(t,z,\tilde{u},\tilde{v}_{1}) -\tilde{f}_{2}(t,z,\tilde{u},\tilde{v}_{2})]\tilde{w}(t,z)dzdt \leq \hat{L}\int_{0}^{t}\int_{-1}^{1}\tilde{w}^{2}dzdt.
\end{array} \end{align*} Combining the above estimates, we have \begin{align*} \begin{array}{rl} \int_{-1}^{1}\tilde{w}^{2}(t,z)dz \leq C\int_{0}^{t}\int_{-1}^{1}\tilde{w}^{2}dzdt. \end{array} \end{align*} By Gronwall's inequality, we know $\int_{0}^{t}\int_{-1}^{1}\tilde{w}^{2}dzdt=0$, which implies that $\tilde{w}=0$ a.e. in $(0,t]\times [-1,1]$. Since $t\in(0,T_{0}]$ is arbitrary and $\tilde{w}\in C(\overline{D}_{T_{0}})$, we can obtain $\tilde{w}=0$ for all $(t,z)$ in $[0,T_{0}]\times [-1,1]$, which implies the uniqueness of the solution. \emph{Step 2.} (Existence) For any $(g,h)\in \mathbb{G}_{T_{0}}^{h_{0}}\times\mathbb{H}_{T_{0}}^{h_{0}}$, we can find sequences $(g_{n},h_{n})\in \widehat{\mathbb{G}}_{T_{0}}^{h_{0}}\times\widehat{\mathbb{H}}_{T_{0}}^{h_{0}}$ such that $g_{n}\rightarrow g$ and $h_{n}\rightarrow h$ in $C^{1}([0,T_{0}])$. Moreover, for every $u(t,x)\in C(\overline{\Omega}_{T_{0}}^{g,h})$, we can obtain $\tilde{u}(t,z)=u(t,x(t,z))\in C(\overline{D}_{T_{0}})$ and a sequence $\tilde{u}_{n}\in C^{\frac{\alpha}{2},\alpha}(\overline{D}_{T_{0}})$ such that $\tilde{u}_{n}\rightarrow \tilde{u}$ in $C(\overline{D}_{T_{0}})$. Taking $u_{n}(t,x)=\tilde{u}_{n}(t,\frac{2x-g_{n}(t)-h_{n}(t)}{h_{n}(t)-g_{n}(t)})$, we know $u_{n}\in C^{\frac{\alpha}{2},\alpha}(\overline{\Omega}_{T_{0}}^{g_{n},h_{n}})$. Consider the approximate problem \begin{align*} \left\{\begin{array}{l} \partial_{t}v= d_{2}\left[\tau \partial_{x}^{2}v +(1-\tau)\left(\int_{g_{n}(t)}^{h_{n}(t)}J(x-y)v(t,y)dy-v\right)\right]+f_{2}(t,x,u_{n},v), \quad (t,x)\in\Omega_{T_{0}}^{g_{n},h_{n}},\\[5pt] v(t,g_{n}(t))=v(t,h_{n}(t))=0,\quad t\in(0, T_{0}],\\[5pt] v(0, x)=v_{0}(x), \quad x\in[-h_{0}, h_{0}]. \end{array}\right.
\tag{2.15} \end{align*} By Lemma 2.3, we know (2.15) has a unique classical solution $v_{n}\in C^{1+\frac{\alpha}{2},2+\alpha}(\Omega_{T_{0}}^{g_{n},h_{n}})$, which satisfies \begin{align*} &0<v_{n}\leq K_{2}\quad\mbox{for}~(t,x)\in\Omega_{T_{0}}^{g_{n},h_{n}},\\[3pt] &0<-\partial_{x}v_{n}(t,h_{n}(t)), \partial_{x}v_{n}(t,g_{n}(t))\leq K_{3} \quad\mbox{for}~t\in(0,T_{0}]. \end{align*} Let $\tilde{v}_{n}(t,z)=v_{n}(t,x_{n}(t,z))$ and $\tilde{f}_{2}(t,z,\tilde{u}_{n}, \tilde{v}_{n}) =f_{2}(t,x_{n}(t,z),u_{n}(t,x_{n}(t,z)),v_{n}(t,x_{n}(t,z)))$ with \begin{align*} x_{n}(t,z)=\frac{(h_{n}(t)-g_{n}(t))z+h_{n}(t)+g_{n}(t)}{2}, \end{align*} then $\tilde{v}_{n}(t,z)\in C^{1+\frac{\alpha}{2},2+\alpha}(D_{T_{0}})$ is the unique solution of \begin{align*} \left\{\begin{array}{l} \partial_{t}\tilde{v}_{n}= d_{2}\tau\xi_{n}(t)\partial_{z}^{2}\tilde{v}_{n}+\eta_{n}(t,z)\partial_{z}\tilde{v}_{n} +d_{2}(1-\tau)\left(\frac{h_{n}(t)-g_{n}(t)}{2}\int_{-1}^{1}J(\frac{h_{n}(t)-g_{n}(t)}{2}(z-s)) \tilde{v}_{n}(t,s)ds-\tilde{v}_{n}\right)\\[5pt] \qquad +\tilde{f}_{2}(t,z,\tilde{u}_{n}, \tilde{v}_{n}), \quad (t,z)\in D_{T_{0}},\\[5pt] \tilde{v}_{n}(t,-1)=\tilde{v}_{n}(t,1)=0,\quad t\in(0, T_{0}],\\[5pt] \tilde{v}_{n}(0, z)=v_{0}(h_{0}z), \quad z\in[-1, 1], \end{array}\right. \tag{2.16} \end{align*} and satisfies \begin{align*} \begin{array}{rl} &0<\tilde{v}_{n}\leq K_{2}\quad\mbox{in}~D_{T_{0}},\\[5pt] &0<-\frac{2}{h_{n}(t)-g_{n}(t)}\partial_{z}\tilde{v}_{n}(t,1), \frac{2}{h_{n}(t)-g_{n}(t)}\partial_{z}\tilde{v}_{n}(t,-1)\leq K_{3} \quad\mbox{for}~t\in(0,T_{0}]. \end{array} \tag{2.17} \end{align*} Setting \begin{align*} \begin{array}{rl} G(t,z):= d_{2}(1-\tau)\left(\frac{h_{n}(t)-g_{n}(t)}{2}\int_{-1}^{1}J(\frac{h_{n}(t)-g_{n}(t)}{2}(z-s)) \tilde{v}_{n}(t,s)ds\right) +\tilde{f}_{2}(t,z,\tilde{u}_{n}, \tilde{v}_{n}), \end{array} \end{align*} we see that $G\in L^{\infty}(D_{T_{0}})$.
Applying the $L^{p}$ theory for linear parabolic equations to (2.16), we find that the solution $\tilde{v}_{n}$ satisfies \begin{align*} \|\tilde{v}_{n}\|_{W_{p}^{1,2}(D_{T_{0}})}\leq C, \end{align*} where $C$ is independent of $n$. By the weak compactness of bounded sets in $W_{p}^{1,2}(D_{T_{0}})$ and $\mathring{W}_{p}^{1,1}(D_{T_{0}})$ and the compact imbedding theorem ($W_{p}^{1,1}(D_{T_{0}})\hookrightarrow\hookrightarrow L^{p}(D_{T_{0}})$), there exists a subsequence, still denoted by $\{\tilde{v}_{n}\}$, such that $\tilde{v}_{n}\rightharpoonup \tilde{v}$ in $W_{p}^{1,2}(D_{T_{0}})\cap\mathring{W}_{p}^{1,1}(D_{T_{0}})$, $\partial_{z}\tilde{v}_{n}\rightarrow \partial_{z}\tilde{v}$ in $L^{p}(D_{T_{0}})$ and $\tilde{v}_{n}\rightarrow \tilde{v}$ in $L^{p}(D_{T_{0}})$, which implies that $\tilde{v}\in W_{p}^{1,2}(D_{T_{0}})\cap \mathring{W}_{p}^{1,1}(D_{T_{0}})$ is the strong solution of (2.13). By the Sobolev imbedding theorem, $\tilde{v}\in C^{\frac{1+\alpha}{2},1+\alpha}(\overline{D}_{T_{0}})$. Note that $\tilde{v}_{n}$ satisfies (2.17). From the facts $\partial_{z}\tilde{v}_{n}\rightarrow \partial_{z}\tilde{v}$, $\tilde{v}_{n}\rightarrow \tilde{v}$ in $L^{p}(D_{T_{0}})$ (and then a.e. in $D_{T_{0}}$) and $\tilde{v}\in C^{\frac{1+\alpha}{2},1+\alpha}(\overline{D}_{T_{0}})$, we have $0<\tilde{v}\leq K_{2}$ in $D_{T_{0}}$ and $0<-\frac{2}{h(t)-g(t)}\partial_{z}\tilde{v}(t,1), \frac{2}{h(t)-g(t)}\partial_{z}\tilde{v}(t,-1)\leq K_{3}$ for $t\in(0,T_{0}]$. Then $v(t,x)=\tilde{v}(t,z(t,x))$ satisfies (2.6), which completes the proof. \hfill $\Box$\\ In the following lemma, we prove the well-posedness for (2.1) with any fixed $(g,h)\in \mathbb{G}_{T_{0}}^{h_{0}}\times \mathbb{H}_{T_{0}}^{h_{0}}$ by a fixed point argument.
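The fixed-point scheme used below — freeze $u^{*}$, solve for $v$, then solve for $u$, and iterate — can be imitated on a drastically simplified, spatially homogeneous (ODE) caricature of the coupled system. The reaction terms, horizon $s$, and data in this sketch are illustrative stand-ins only; the point is just that for a small horizon $s$ the map $u^{*}\mapsto u$ contracts, so the Picard iterates converge.

```python
# ODE caricature of the fixed-point map: freeze u*, integrate v' = f2(u*, v),
# then integrate u' = f1(u, v), and iterate u* <- u.  All choices below are
# illustrative and do not come from the model.
f1 = lambda u, v: u*(1 - u) - 0.3*u*v
f2 = lambda u, v: v*(1 - v) + 0.2*u*v
s, n = 0.2, 200                       # small horizon s, explicit Euler steps
dt = s/n
u0, v0 = 0.5, 0.5

def F(ustar):
    """One application of the map u* -> u on the time grid over [0, s]."""
    v = [v0]
    for i in range(n):
        v.append(v[-1] + dt*f2(ustar[i], v[-1]))
    u = [u0]
    for i in range(n):
        u.append(u[-1] + dt*f1(u[-1], v[i]))
    return u

u, gaps = [u0]*(n + 1), []            # constant initial guess for u*
for _ in range(8):
    un = F(u)
    gaps.append(max(abs(a - b) for a, b in zip(u, un)))
    u = un

print(gaps[0], gaps[-1])              # successive iterates collapse rapidly
```

The geometric decay of the gaps reflects the contraction constant being proportional to the horizon $s$, which is why the proof below first produces a solution on a short interval.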
Denote \begin{align*} &\mathbb{X}_{T_{0}}^{1} :=\Big\{u\in C(\overline{\Omega}_{T_{0}}^{g,h}):~0\leq u\leq K_{1}, u(0,x)=u_{0}(x), u(t,g(t))=u(t,h(t))=0\Big\}, \\[3pt] &\mathbb{X}_{T_{0}}^{2} :=\Big\{v\in C(\overline{\Omega}_{T_{0}}^{g,h}):~0\leq v\leq K_{2}, v(0,x)=v_{0}(x), v(t,g(t))=v(t,h(t))=0\Big\}, \\[3pt] &\mathbb{X}_{T_{0}}^{g,h}:=\mathbb{X}_{T_{0}}^{1}\times\mathbb{X}_{T_{0}}^{2}. \end{align*} \noindent\textbf{Lemma 2.5.} For any $T_{0}>0$ and $(g,h)\in \mathbb{G}_{T_{0}}^{h_{0}}\times \mathbb{H}_{T_{0}}^{h_{0}}$, the problem \begin{align*} \left\{\begin{array}{l} \partial_{t}u=d_{1}\left(\int_{g(t)}^{h(t)}J(x-y)u(t,y)dy-u\right)+f_{1}(t,x,u,v), \quad (t,x)\in \Omega_{T_{0}}^{g,h},\\[5pt] \partial_{t}v=d_{2}\left[\tau \partial_{x}^{2}v+(1-\tau)\left(\int_{g(t)}^{h(t)}J(x-y)v(t,y)dy-v\right)\right] +f_{2}(t,x,u,v), \quad (t,x)\in \Omega_{T_{0}}^{g,h},\\[5pt] u(t, g(t))=u(t,h(t))=v(t, g(t))=v(t,h(t))=0, \quad t\in [0,T_{0}],\\[5pt] u(0,x)=u_0(x), v(0, x)=v_0(x), \quad x\in[-h_{0},h_{0}] \end{array}\right. \tag{2.18} \end{align*} admits a unique solution $(u, v)\in \mathbb{X}_{T_{0}}^{g,h}$, and $(u, v)$ satisfies \begin{align*} \begin{array}{rl} &0<u\leq K_{1}, 0<v\leq K_{2}\quad \mbox{in}~\Omega_{T_{0}}^{g,h},\\[5pt] &0<-v_{x}(t,h(t)), v_{x}(t,g(t))\leq K_{3}\quad \mbox{in}~(0,T_{0}]. \end{array} \tag{2.19} \end{align*} Moreover, $v\in W_{p}^{1,2}(\Omega_{T_{0}}^{g,h})\cap C^{\frac{1+\alpha}{2},1+\alpha}(\overline{\Omega}_{T_{0}}^{g,h})$ with any $p>3$.\\ \noindent\textbf{Proof.} For $u^{*}\in \mathbb{X}_{s}^{1}$ with $0<s\leq T_{0}$, from Lemma 2.4 we know that the initial-boundary value problem (2.5) with $(u,T_{0})$ replaced by $(u^{*},s)$ admits a unique solution $v\in\mathbb{X}_{s}^{2}$. 
For such $v\in\mathbb{X}_{s}^{2}$, we consider \begin{align*} \left\{\begin{array}{l} \partial_{t}u=d_{1}\left(\int_{g(t)}^{h(t)}J(x-y)u(t,y)dy-u\right)+f_{1}(t,x,u,v), \quad (t,x)\in \Omega_{s}^{g,h},\\[5pt] u(t, g(t))=u(t,h(t))=0, \quad t\in [0,s],\\[5pt] u(0,x)=u_0(x), \quad x\in[-h_{0},h_{0}]. \end{array}\right. \end{align*} By Lemma 2.3 in \cite{cdll19}, it admits a unique solution $u\in \mathbb{X}_{s}^{1}$. We define a mapping $\mathcal{F}_{s}: \mathbb{X}_{s}^{1}\rightarrow\mathbb{X}_{s}^{1}$ by $\mathcal{F}_{s}u^{*}=u$. If $\mathcal{F}_{s}u^{*}=u^{*}$, then $(u^{*},v)$ solves (2.18) with $T_{0}$ replaced by $s$. Next, we shall prove that $\mathcal{F}_{s}$ has a fixed point in $\mathbb{X}_{s}^{1}$ provided that $s$ is small enough. For $i=1,2$, let $u_{i}^{*}\in \mathbb{X}_{s}^{1}$, $u_{i}=\mathcal{F}_{s}u_{i}^{*}$, and let $v_{i}$ be the unique solution of (2.5) with $(u,T_{0})$ replaced by $(u_{i}^{*},s)$. Denote $\theta^{*}=u_{1}^{*}-u_{2}^{*}$, $\theta=u_{1}-u_{2}$ and $w=v_{1}-v_{2}$. Note that $w$ satisfies \begin{align*} \left\{\begin{array}{l} \partial_{t}w=d_{2}\left[\tau \partial_{x}^{2}w+(1-\tau)\left(\int_{g(t)}^{h(t)}J(x-y)w(t,y)dy-w\right)\right] +a_{0}(t,x)w+b_{0}(t,x)\theta^{*},~ (t,x)\in \Omega_{s}^{g,h},\\[5pt] w(t, g(t))=w(t,h(t))=0, \quad t\in [0,s],\\[5pt] w(0, x)=0, \quad x\in[-h_{0},h_{0}], \end{array}\right. \end{align*} where \begin{align*} \begin{array}{rl} a_{0}(t,x)=\int_{0}^{1}f_{2,v}(t,x,u_{1}^{*},v_{2}+(v_{1}-v_{2})\sigma)d\sigma,\\[3pt] b_{0}(t,x)=\int_{0}^{1}f_{2,u}(t,x,u_{2}^{*}+(u_{1}^{*}-u_{2}^{*})\sigma,v_{2})d\sigma. \end{array} \end{align*} Let $\tilde{\theta}^{*}(t,z)=\theta^{*}(t,x(t,z)), \tilde{w}(t,z)=w(t,x(t,z)), \tilde{a}_{0}(t,z)=a_{0}(t,x(t,z)), \tilde{b}_{0}(t,z)=b_{0}(t,x(t,z))$. 
It is easy to see that $\tilde{w}$ satisfies \begin{align*} \left\{\begin{array}{l} \partial_{t}\tilde{w}=d_{2}\tau\xi(t)\partial_{z}^{2}\tilde{w} +\eta(t,z)\partial_{z}\tilde{w}+[\tilde{a}_{0}(t,z)-d_{2}(1-\tau)]\tilde{w}\\[5pt] \qquad +d_{2}(1-\tau)\frac{h(t)-g(t)}{2}\int_{-1}^{1}J(\frac{h(t)-g(t)}{2}(z-r)) \tilde{w}(t,r)dr +\tilde{b}_{0}(t,z)\tilde{\theta}^{*}, \quad (t,z)\in D_{s},\\[5pt] \tilde{w}(t, -1)=\tilde{w}(t,1)=0, \quad t\in [0,s],\\[5pt] \tilde{w}(0, z)=0, \quad z\in[-1,1]. \end{array}\right. \end{align*} By the $L^{p}$ theory for linear parabolic equations, we have \begin{align*} \begin{array}{rl} \|\tilde{w}\|_{W_{p}^{1,2}(D_{s})} &\leq C\left(\left\|\frac{h(t)-g(t)}{2}\int_{-1}^{1}J(\frac{h(t)-g(t)}{2}(z-r)) \tilde{w}(t,r)dr\right\|_{L^{p}(D_{s})} +\|\tilde{\theta}^{*}\|_{L^{p}(D_{s})}\right)\\[3pt] &\leq C(\|\tilde{w}\|_{C(\overline{D}_{s})} \left\|\int_{\frac{h(t)-g(t)}{2}(z-1)}^{\frac{h(t)-g(t)}{2}(z+1)}J(y)dy\right\|_{L^{p}(D_{s})} +\|\tilde{\theta}^{*}\|_{L^{p}(D_{s})})\\[3pt] &\leq C(\|\tilde{w}\|_{C(\overline{D}_{s})}(2s)^{\frac{1}{p}} +\|\tilde{\theta}^{*}\|_{L^{p}(D_{s})}). \end{array} \end{align*} From the proof of Theorem 1.1 in \cite{w19}, we know the H\"{o}lder semi-norm $[\tilde{w}]_{C^{\frac{\alpha}{2},\alpha}(\overline{D}_{s})} \leq C^{\prime}\|\tilde{w}\|_{W_{p}^{1,2}(D_{s})}$, where $C^{\prime}$ is independent of $\frac{1}{s}$. Thus, \begin{align*} |\tilde{w}(t,z)| =|\tilde{w}(t,z)-\tilde{w}(0,z)| \leq [\tilde{w}]_{C^{\frac{\alpha}{2},\alpha}(\overline{D}_{s})}t^{\frac{\alpha}{2}} \leq C^{\prime}\|\tilde{w}\|_{W_{p}^{1,2}(D_{s})}t^{\frac{\alpha}{2}}, \end{align*} which implies that \begin{align*} \|\tilde{w}\|_{C(\overline{D}_{s})} \leq C^{\prime}\|\tilde{w}\|_{W_{p}^{1,2}(D_{s})}s^{\frac{\alpha}{2}}. 
\tag{2.20} \end{align*} Substituting (2.20) into the previous estimate gives $\|\tilde{w}\|_{W_{p}^{1,2}(D_{s})} \leq CC^{\prime}(2s)^{\frac{1}{p}}s^{\frac{\alpha}{2}}\|\tilde{w}\|_{W_{p}^{1,2}(D_{s})} +C\|\tilde{\theta}^{*}\|_{L^{p}(D_{s})}$. Choosing $s$ small such that $CC^{\prime}(2s)^{\frac{1}{p}}s^{\frac{\alpha}{2}}<\frac{1}{2}$, the first term on the right can be absorbed into the left-hand side, and we have \begin{align*} \|\tilde{w}\|_{W_{p}^{1,2}(D_{s})} \leq 2C\|\tilde{\theta}^{*}\|_{L^{p}(D_{s})} \leq 2C(2s)^{\frac{1}{p}}\|\tilde{\theta}^{*}\|_{C(\overline{D}_{s})} =2C(2s)^{\frac{1}{p}}\|\theta^{*}\|_{C(\overline{\Omega}_{s}^{g,h})}. \end{align*} Similar to the proof of Lemma 2.3 (Step 3) in \cite{waw18}, we can choose $s$ small enough such that \begin{align*} \|\theta\|_{C(\overline{\Omega}_{s}^{g,h})} \leq\frac{1}{2}\|\theta^{*}\|_{C(\overline{\Omega}_{s}^{g,h})}. \end{align*} By the contraction mapping theorem, we know that $\mathcal{F}_{s}$ has a unique fixed point $u\in \mathbb{X}_{s}^{1}$. Following the arguments in the proof of Lemma 2.3 (Step 5) in \cite{waw18}, we can show that the unique solution $(u,v)$ of (2.18) can be extended to $\Omega_{T_{0}}^{g,h}$ and $(u,v)\in \mathbb{X}_{T_{0}}^{g,h}$. The estimates of $v_{x}(t,h(t)), v_{x}(t,g(t))$ and the regularity of $v$ have been established in Lemma 2.4. \hfill $\Box$\\ \noindent\textbf{Proof of Theorem 2.1.} By Lemma 2.5, for any $T_{0}>0$ and $(g,h)\in \mathbb{G}_{T_{0}}^{h_{0}}\times \mathbb{H}_{T_{0}}^{h_{0}}$, we can find a unique $(u,v)\in \mathbb{X}_{T_{0}}^{g,h}$ that solves (2.18), and (2.19) holds. 
For $0<t\leq T_{0}$, define the mapping \begin{align*} \begin{array}{rl} \mathcal{G}(g,h)=(\tilde{g},\tilde{h}) \end{array} \end{align*} by \begin{align*} \begin{array}{rl} \tilde{h}(t) &=h_{0}-\mu \int_{0}^{t}v_{x}(\tau, h(\tau))d\tau +\rho_{1}\int_{0}^{t}\int_{g(\tau)}^{h(\tau)}\int_{h(\tau)}^{\infty}J(x-y)u(\tau,x)dydxd\tau\\[3pt] &\quad+\rho_{2}\int_{0}^{t}\int_{g(\tau)}^{h(\tau)}\int_{h(\tau)}^{\infty}J(x-y)v(\tau,x)dydxd\tau,\\[3pt] \tilde{g}(t) &=-h_{0}-\mu \int_{0}^{t}v_{x}(\tau, g(\tau))d\tau -\rho_{1}\int_{0}^{t}\int_{g(\tau)}^{h(\tau)}\int_{-\infty}^{g(\tau)}J(x-y)u(\tau,x)dydxd\tau\\[3pt] &\quad-\rho_{2}\int_{0}^{t}\int_{g(\tau)}^{h(\tau)}\int_{-\infty}^{g(\tau)}J(x-y)v(\tau,x)dydxd\tau. \end{array} \end{align*} To prove this theorem, we will show that if $T_{0}$ is sufficiently small, then $\mathcal{G}$ maps a suitable closed subset $\Sigma_{T_{0}}$ of $\mathbb{G}_{T_{0}}^{h_{0}}\times \mathbb{H}_{T_{0}}^{h_{0}}$ into itself and is a contraction mapping. \emph{Step 1.} There exists a closed subset $\Sigma_{\tau}\subset\mathbb{G}_{T_{0}}^{h_{0}}\times \mathbb{H}_{T_{0}}^{h_{0}}$ such that $\mathcal{G}(\Sigma_{\tau})\subset\Sigma_{\tau}$. Let $(g,h)\in\mathbb{G}_{T_{0}}^{h_{0}}\times \mathbb{H}_{T_{0}}^{h_{0}}$. The definitions of $\tilde{h}(t)$ and $\tilde{g}(t)$ indicate that they belong to $C^{1}([0, T_{0}])$ and for $0<t\leq T_{0}$, \begin{align*} \begin{array}{rl} &\tilde{h}^{\prime}(t) =-\mu v_{x}(t, h(t)) +\rho_{1}\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}J(x-y)u(t,x)dydx +\rho_{2}\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}J(x-y)v(t,x)dydx,\\[3pt] &\tilde{g}^{\prime}(t) =-\mu v_{x}(t, g(t)) -\rho_{1}\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}J(x-y)u(t,x)dydx -\rho_{2}\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}J(x-y)v(t,x)dydx. 
\end{array} \end{align*} It follows that \begin{align*} \begin{array}{rl} [\tilde{h}(t)-\tilde{g}(t)]^{\prime} &=-\mu[v_{x}(t, h(t))-v_{x}(t, g(t))] +\rho_{1}\int_{g(t)}^{h(t)}[\int_{h(t)}^{\infty}+\int_{-\infty}^{g(t)}] J(x-y)u(t,x)dydx\\[3pt] &\quad+\rho_{2}\int_{g(t)}^{h(t)} [\int_{h(t)}^{\infty}+\int_{-\infty}^{g(t)}]J(x-y)v(t,x)dydx. \end{array} \tag{2.21} \end{align*} Note that from (\textbf{J}) we know there exist constants $\bar{\epsilon}\in (0,\frac{h_{0}}{4})$ and $\eta_{0}>0$ such that $J(x-y)>\eta_{0}$ if $|x-y|<\bar{\epsilon}$. Take \begin{align*} 0<\varepsilon_{0}<\min\Big\{\bar{\epsilon},\frac{8\mu K_{3}}{\rho_{1}K_{1}+\rho_{2}K_{2}}\Big\}, \quad M_{1}=2h_{0}+\frac{\varepsilon_{0}}{4},\quad 0<T_{1}\leq \frac{\varepsilon_{0}}{4[2\mu K_{3}+(\rho_{1}K_{1}+\rho_{2}K_{2})M_{1}]} \end{align*} such that $h(T_{1})-g(T_{1})\leq M_{1}$. Estimating the right-hand side of (2.21), we have \begin{align*} [\tilde{h}(t)-\tilde{g}(t)]^{\prime}\leq 2\mu K_{3}+\rho_{1}K_{1}[h(T_{1})-g(T_{1})] +\rho_{2}K_{2}[h(T_{1})-g(T_{1})]\leq 2\mu K_{3}+(\rho_{1}K_{1}+\rho_{2}K_{2})M_{1}. \end{align*} This implies \begin{align*} \begin{array}{rl} \tilde{h}(t)-\tilde{g}(t) \leq 2h_{0}+t[2\mu K_{3}+(\rho_{1}K_{1}+\rho_{2}K_{2})M_{1}] \leq M_{1}, \quad t\in[0,T_{1}]. \end{array} \end{align*} Similarly, we can show that \begin{align*} \begin{array}{rl} \tilde{h}^{\prime}(t),~-\tilde{g}^{\prime}(t) \leq \mu K_{3}+(\rho_{1}K_{1}+\rho_{2}K_{2})M_{1}=:\bar{R}\leq R(t), \quad t\in[0,T_{1}]. \end{array} \end{align*} It is easy to check that \begin{align*} \begin{array}{rl} h(t)\in [h_{0}, h_{0}+\frac{\varepsilon_{0}}{4}], \quad g(t)\in [-h_{0}-\frac{\varepsilon_{0}}{4}, -h_{0}],\quad t\in[0, T_{1}]. 
\end{array} \end{align*} Similar to (2.15) in \cite{waw18}, we can prove that, for $t\in(0, T_{1}]$, \begin{align*} \begin{array}{rl} \tilde{h}^{\prime}(t) \geq\rho_{1}\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}J(x-y)u(t,x)dydx \geq\frac{1}{4}\varepsilon_{0}\eta_{0} \rho_{1}e^{(-d_{1}+\hat{L})T_{1}} \int_{h_{0}-\frac{\varepsilon_{0}}{4}}^{h_{0}}u_{0}(x)dx =:\rho_{1}c_{0} \end{array} \end{align*} and \begin{align*} \begin{array}{rl} \tilde{g}^{\prime}(t) \leq-\rho_{1}\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}J(x-y)u(t,x)dydx \leq-\frac{1}{4}\varepsilon_{0}\eta_{0} \rho_{1}e^{(-d_{1}+\hat{L})T_{1}} \int_{-h_{0}}^{-h_{0}+\frac{\varepsilon_{0}}{4}}u_{0}(x)dx =:-\rho_{1}\bar{c}_{0}. \end{array} \end{align*} We now define, for $\tau\in (0, T_{1}]$, \begin{align*} \begin{array}{rl} \Sigma_{\tau} =\{(g,h)\in \mathbb{G}_{\tau}^{h_{0}}\times \mathbb{H}_{\tau}^{h_{0}}: \rho_{1}c_{0}\leq h^{\prime}(t)\leq \bar{R}, -\bar{R}\leq g^{\prime}(t)\leq-\rho_{1}\bar{c}_{0}, h(\tau)-g(\tau)\leq M_{1}\}. \end{array} \end{align*} Our analysis above shows that \begin{align*} \begin{array}{rl} \mathcal{G}(\Sigma_{\tau})\subset\Sigma_{\tau}\quad \mbox{for}~\tau\in(0, T_{1}]. \end{array} \end{align*} \emph{Step 2.} $\mathcal{G}$ is a contraction mapping on $\Sigma_{\tau}$ for sufficiently small $\tau>0$. For $(g_{i},h_{i})\in \Sigma_{\tau}$ with $0<\tau\leq \min\{T_{1}, 1\}$, let $\mathcal{G}(g_{i},h_{i})=(\tilde{g}_{i},\tilde{h}_{i})$ $(i=1,2)$, $g=g_{1}-g_{2}, h=h_{1}-h_{2}, \tilde{g}=\tilde{g}_{1}-\tilde{g}_{2}, \tilde{h}=\tilde{h}_{1}-\tilde{h}_{2}, u=u_{1}-u_{2}$ and $v=v_{1}-v_{2}$, where $(u_{i}, v_{i})\in \mathbb{X}_{\tau}^{g_{i},h_{i}}$ $(i=1,2)$ are solutions of (2.18) with $(g(t),h(t))$ replaced by $(g_{i}(t),h_{i}(t))$. By Lemma 2.5, $v_{i}\in W_{p}^{1,2}(\Omega_{\tau}^{g_{i},h_{i}})\cap C^{\frac{1+\alpha}{2},1+\alpha}(\overline{\Omega}_{\tau}^{g_{i},h_{i}})$ with any $p>3$. 
Extend $u_{i}, v_{i}$ by zero to $([0, \tau]\times \mathbb{R})\setminus\Omega_{\tau}^{g_{i},h_{i}}$. It is easy to see that \begin{align*} |\tilde{h}^{\prime}(t)| &\leq \mu\left|\partial_{x}v_{1}(t,h_{1}(t))-\partial_{x}v_{2}(t,h_{2}(t))\right|\\[3pt] &\quad+\rho_{1}\left|\int_{g_{1}(t)}^{h_{1}(t)}\int_{h_{1}(t)}^{\infty}J(x-y)u_{1}(t,x)dydx -\int_{g_{2}(t)}^{h_{2}(t)}\int_{h_{2}(t)}^{\infty}J(x-y)u_{2}(t,x)dydx\right|\\[3pt] &\quad+\rho_{2}\left|\int_{g_{1}(t)}^{h_{1}(t)}\int_{h_{1}(t)}^{\infty}J(x-y)v_{1}(t,x)dydx -\int_{g_{2}(t)}^{h_{2}(t)}\int_{h_{2}(t)}^{\infty}J(x-y)v_{2}(t,x)dydx\right|\\[3pt] &=:E_{1}+E_{2}+E_{3}. \end{align*} We first estimate $E_{1}$. It follows from (2.18) that, for $i=1,2$, \begin{align*} \left\{\begin{array}{l} \partial_{t}v_{i}= d_{2}\left[\tau \partial_{x}^{2}v_{i} +(1-\tau)\left(\int_{g_{i}(t)}^{h_{i}(t)}J(x-y)v_{i}(t,y)dy-v_{i}\right)\right]+f_{2}(t,x,u_{i},v_{i}), \quad (t,x)\in\Omega_{\tau}^{g_{i},h_{i}},\\[5pt] v_{i}(t,g_{i}(t))=v_{i}(t,h_{i}(t))=0,\quad t\in(0, \tau],\\[5pt] v_{i}(0, x)=v_{0}(x), \quad x\in[-h_{0}, h_{0}]. \end{array}\right. \tag{2.22} \end{align*} Let $\tilde{u}_{i}(t,z)=u_{i}(t,x_{i}(t,z))$ and $\tilde{v}_{i}(t,z)=v_{i}(t,x_{i}(t,z))$ with $x_{i}(t,z)=\frac{1}{2}[(h_{i}(t)-g_{i}(t))z+h_{i}(t)+g_{i}(t)]$ $(i=1,2)$, then (2.22) becomes \begin{align*} \left\{\begin{array}{l} \partial_{t}\tilde{v}_{i}- d_{2}\tau\xi_{i}(t)\partial_{z}^{2}\tilde{v}_{i}-\eta_{i}(t,z)\partial_{z}\tilde{v}_{i} =d_{2}(1-\tau)\left(\frac{h_{i}(t)-g_{i}(t)}{2}\int_{-1}^{1}J(\frac{h_{i}(t)-g_{i}(t)}{2}(z-s)) \tilde{v}_{i}(t,s)ds -\tilde{v}_{i}\right)\\[5pt] \qquad\qquad\qquad\qquad +f_{2}(t,x_{i}(t,z),\tilde{u}_{i},\tilde{v}_{i}), \quad (t,z)\in D_{\tau},\\[5pt] \tilde{v}_{i}(t,-1)=\tilde{v}_{i}(t,1)=0,\quad t\in(0, \tau],\\[5pt] \tilde{v}_{i}(0, z)=v_{0}(h_{0}z), \quad z\in[-1, 1], \end{array}\right. 
\tag{2.23} \end{align*} where $\xi_{i}$ and $\eta_{i}$ are defined as $\xi$ and $\eta$ with $(g,h)$ replaced by $(g_{i},h_{i})$. By Lemma 2.4, we know the unique solution $\tilde{v}_{i}\in W_{p}^{1,2}(D_{\tau})\cap C^{\frac{1+\alpha}{2},1+\alpha}(\overline{D}_{\tau})$ satisfies $0<\tilde{v}_{i}\leq K_{2}$; then the right-hand side of the equation in (2.23) is bounded. Applying the $L^{p}$ theory, we know $\|\tilde{v}_{i}\|_{W_{p}^{1,2}(D_{\tau})}\leq C$. From the proof of Theorem 1.1 in \cite{w19}, we have $[\partial_{z}\tilde{v}_{i}]_{C^{\frac{\alpha}{2},\alpha}(\overline{D}_{\tau})} \leq C^{\prime}\|\tilde{v}_{i}\|_{W_{p}^{1,2}(D_{\tau})}$, where $C^{\prime}$ is independent of $\frac{1}{\tau}$. Then we can deduce that $\|\partial_{z}\tilde{v}_{i}\|_{C(\overline{D}_{\tau})}\leq h_{0}\|v^{\prime}_{0}\|_{C([-h_{0},h_{0}])} +\tau^{\frac{\alpha}{2}}[\partial_{z}\tilde{v}_{i}]_{C^{\frac{\alpha}{2},\alpha}(\overline{D}_{\tau})} \leq h_{0}\|v^{\prime}_{0}\|_{C([-h_{0},h_{0}])}+CC^{\prime}$. Letting $\tilde{v}=\tilde{v}_{1}-\tilde{v}_{2}$ and $\tilde{u}=\tilde{u}_{1}-\tilde{u}_{2}$, we have \begin{align*} \left\{\begin{array}{l} \partial_{t}\tilde{v}- d_{2}\tau\xi_{1}(t)\partial_{z}^{2}\tilde{v}-\eta_{1}(t,z)\partial_{z}\tilde{v} +d_{2}(1-\tau)\tilde{v}-a_{1}(t,z)\tilde{v} =d_{2}\tau(\xi_{1}(t)-\xi_{2}(t))\partial_{z}^{2}\tilde{v}_{2}\\[5pt] \qquad\qquad +(\eta_{1}(t,z)-\eta_{2}(t,z))\partial_{z}\tilde{v}_{2} +b_{1}(t,z)+c_{1}(t,z)\tilde{u}+d_{2}(1-\tau)d_{1}(t,z), \quad (t,z)\in D_{\tau},\\[5pt] \tilde{v}(t,-1)=\tilde{v}(t,1)=0,\quad t\in(0, \tau],\\[5pt] \tilde{v}(0, z)=0, \quad z\in[-1, 1], \end{array}\right. 
\end{align*} where \begin{align*} &a_{1}(t,z)=\int_{0}^{1}\partial_{v}f_{2}(t,x_{1}(t,z),\tilde{u}_{1},\tilde{v}_{2}+\theta\tilde{v})d\theta,\\[3pt] &b_{1}(t,z)=f_{2}(t,x_{1}(t,z),\tilde{u}_{1},\tilde{v}_{2})- f_{2}(t,x_{2}(t,z),\tilde{u}_{1},\tilde{v}_{2}),\\[3pt] &c_{1}(t,z)=\int_{0}^{1}\partial_{u}f_{2}(t,x_{2}(t,z),\tilde{u}_{2}+\theta\tilde{u},\tilde{v}_{2})d\theta,\\[3pt] &d_{1}(t,z)=\frac{h_{1}(t)-g_{1}(t)}{2}\int_{-1}^{1}J(\frac{h_{1}(t)-g_{1}(t)}{2}(z-s)) \tilde{v}_{1}ds\\[3pt] &\quad\quad\quad\quad-\frac{h_{2}(t)-g_{2}(t)}{2}\int_{-1}^{1}J(\frac{h_{2}(t)-g_{2}(t)}{2}(z-s)) \tilde{v}_{2}ds. \end{align*} Similar to (2.12), by the Lipschitz continuity of $J$ and the boundedness of $h_{i}(t), g_{i}(t)$ and $\tilde{v}_{i}$, \begin{align*} \begin{array}{rl} |d_{1}(t,z)| \leq C\Big(\int_{-1}^{1}|\frac{h-g}{2}(z-s)\tilde{v}_{1}|ds +\int_{-1}^{1}|\frac{h_{1}-g_{1}}{2}\tilde{v}|ds +\int_{-1}^{1}|\frac{h-g}{2}\tilde{v}_{2}|ds\Big). \end{array} \end{align*} Note that \begin{align*} &\|\xi_{1}-\xi_{2}\|_{L^{\infty}((0,\tau))}\leq \frac{h_{0}+\frac{\varepsilon_{0}}{4}}{h_{0}^{4}}\|g,h\|_{C([0,\tau])},~ \|\eta_{1}-\eta_{2}\|_{L^{\infty}(D_{\tau})}\leq \frac{\bar{R}+h_{0}+\frac{\varepsilon_{0}}{4}}{h_{0}^{2}}\|g,h\|_{C^{1}([0,\tau])},\\[3pt] &\|a_{1},c_{1}\|_{L^{\infty}(D_{\tau})}\leq \hat{L},~\|b_{1}\|_{L^{\infty}(D_{\tau})}\leq L^{*}\|g,h\|_{C([0,\tau])}. \end{align*} Applying the $L^{p}$ theory, we get \begin{align*} \|\tilde{v}\|_{W_{p}^{1,2}(D_{\tau})} \leq C(\|g,h\|_{C^{1}([0,\tau])}+\|\tilde{u}\|_{C(\overline{D}_{\tau})}) +C_{1}\|\tilde{v}\|_{C(\overline{D}_{\tau})}(2\tau)^{\frac{1}{p}}. \end{align*} From (2.20), we know $\|\tilde{v}\|_{C(\overline{D}_{\tau})} \leq C^{\prime}\|\tilde{v}\|_{W_{p}^{1,2}(D_{\tau})}\tau^{\frac{\alpha}{2}}$. Choosing $\tau$ small, we have \begin{align*} \|\tilde{v}\|_{W_{p}^{1,2}(D_{\tau})} \leq C(\|g,h\|_{C^{1}([0,\tau])}+\|\tilde{u}\|_{C(\overline{D}_{\tau})}). 
\end{align*} Similar to the proof of Lemma 2.6 in \cite{waw18}, we can prove that, for $\tau$ small enough, \begin{align*} \|\tilde{u}\|_{C(\overline{D}_{\tau})} \leq C(\|u\|_{C(\overline{\Omega}_{\tau}^{*})}+\|g,h\|_{C([0,\tau])}), \end{align*} where $\Omega_{\tau}^{*}:=\Omega_{\tau}^{g_{1},h_{1}}\cup\Omega_{\tau}^{g_{2},h_{2}}$. By arguments similar to those in the proofs of Lemma 2.5 (Steps 1-3) in \cite{waw18} and Theorem 2.1 (Step 2) in \cite{dwz19}, we can also get \begin{align*} &E_{1}\leq C\tau^{\frac{\alpha}{2}}(\|g,h\|_{C^{1}([0,\tau])}+\|u\|_{C(\overline{\Omega}_{\tau}^{*})}),\\[3pt] &E_{2}\leq C(\|u\|_{C(\overline{\Omega}_{\tau}^{*})}+\tau\|g,h\|_{C^{1}([0,\tau])}),\\[3pt] &E_{3}\leq C(\|v\|_{C(\overline{\Omega}_{\tau}^{*})}+\tau\|g,h\|_{C^{1}([0,\tau])}),\\[3pt] &\|u\|_{C(\overline{\Omega}_{\tau}^{*})} \leq C\tau\|g,h\|_{C^{1}([0,\tau])}. \end{align*} Thus, for small $\tau>0$, we have \begin{align*} \|\tilde{g},\tilde{h}\|_{C^{1}([0,\tau])} \leq\frac{1}{2}\|g,h\|_{C^{1}([0,\tau])}, \end{align*} which implies that $\mathcal{G}$ is a contraction mapping on $\Sigma_{\tau}$. The rest of the proof can be obtained by arguments similar to those for Theorem 2.1 in \cite{dwz19,waw18}, so we omit the details. \hfill $\Box$ \section{Comparison principle and some eigenvalue problems} In this section, we first give a comparison principle for (1.1), and then investigate the existence and properties of the principal eigenvalues of some eigenvalue problems. These results will play an important role in later sections. \subsection{The comparison principle} In this subsection, we discuss the comparison principle for $(1.1)$. 
\\ \noindent\textbf{Lemma 3.1.} (The Comparison Principle) Suppose that $T_{0}\in(0, \infty), \bar{g}, \bar{h}\in C^1([0, T_{0}]), \bar{u}\in C(\overline{\Omega}_{T_{0}}^{\bar{g}, \bar{h}})$, $\bar{v}\in C^{1,2}(\Omega_{T_{0}}^{\bar{g},\bar{h}})\cap C(\overline{\Omega}_{T_{0}}^{\bar{g},\bar{h}})$, and $(\bar{u},\bar{v},\bar{g},\bar{h})$ satisfy \begin{align*} \left\{\begin{array}{l} \partial_{t}\bar{u}\geq d_{1}\left(\int_{\bar{g}(t)}^{\bar{h}(t)}J(x-y)\bar{u}(t,y)dy-\bar{u}\right) +\bar{u}(a(t)-\bar{u}), \quad (t,x)\in \Omega_{T_{0}}^{\bar{g},\bar{h}},\\[5pt] \partial_{t}\bar{v}\geq d_{2}\left[\tau \partial_{x}^{2}\bar{v}+(1-\tau)\left(\int_{\bar{g}(t)}^{\bar{h}(t)}J(x-y)\bar{v}(t,y)dy-\bar{v}\right)\right] +\bar{v}(c(t)-\bar{v}), \quad (t,x)\in \Omega_{T_{0}}^{\bar{g},\bar{h}},\\[5pt] \bar{u}(t, \bar{g}(t))\geq0, \bar{u}(t,\bar{h}(t))\geq0, \quad 0<t\leq T_{0},\\[5pt] \bar{v}(t, \bar{g}(t))=0, \bar{v}(t,\bar{h}(t))=0, \quad 0<t\leq T_{0},\\[5pt] \bar{h}'(t)\geq-\mu \bar{v}_{x}(t, \bar{h}(t)) +\rho_{1}\int_{\bar{g}(t)}^{\bar{h}(t)}\int_{\bar{h}(t)}^{\infty}J(x-y)\bar{u}(t,x)dydx +\rho_{2}\int_{\bar{g}(t)}^{\bar{h}(t)}\int_{\bar{h}(t)}^{\infty}J(x-y)\bar{v}(t,x)dydx, \\[3pt] \qquad\qquad\qquad\qquad 0<t\leq T_{0},\\[5pt] \bar{g}'(t)\leq-\mu \bar{v}_{x}(t, \bar{g}(t)) -\rho_{1}\int_{\bar{g}(t)}^{\bar{h}(t)}\int_{-\infty}^{\bar{g}(t)}J(x-y)\bar{u}(t,x)dydx -\rho_{2}\int_{\bar{g}(t)}^{\bar{h}(t)}\int_{-\infty}^{\bar{g}(t)}J(x-y)\bar{v}(t,x)dydx, \\[3pt] \qquad\qquad\qquad\qquad 0<t\leq T_{0},\\[5pt] \bar{u}(0,x)\geq u_0(x), \bar{v}(0, x)\geq v_0(x), \quad |x|\leq h_{0},\\[5pt] \bar{h}(0)\geq h_{0},~\bar{g}(0)\leq-h_{0}. \end{array}\right. \tag{3.1} \end{align*} Let $(u,v,g,h)$ be the unique solution of $(1.1)$, then \begin{align*} g(t)\geq \bar{g}(t),\quad h(t)\leq \bar{h}(t)\quad \mbox{in}~(0,T_{0}], \quad u(t,x)\leq \bar{u}(t,x), \quad v(t,x)\leq \bar{v}(t,x)\quad \mbox{for}~(t,x)\in \overline{\Omega}_{T_{0}}^{g,h}. 
\end{align*} \noindent\textbf{Proof.} Thanks to Lemma 2.2 in \cite{cdll19} and Lemma 2.1, one sees that $\bar{u},\bar{v}>0$ for $(t,x)\in \Omega_{T_{0}}^{\bar{g},\bar{h}}$. We first consider the case $\bar{h}(0)>h_{0},~\bar{g}(0)<-h_{0}$. Then $\bar{h}(t)>h(t),~\bar{g}(t)<g(t)$ hold true for small $t>0$. We claim that $\bar{h}(t)>h(t),~\bar{g}(t)<g(t)$ for all $t\in (0,T_{0}]$. In fact, if this is not true, there exists $t_{1}\leq T_{0}$ such that \begin{align*} \bar{h}(t)>h(t),~\bar{g}(t)<g(t) \quad \mbox{for}~t\in(0,t_{1})\quad \mbox{and}\quad [\bar{h}(t_{1})-h(t_{1})][\bar{g}(t_{1})-g(t_{1})]=0. \end{align*} Without loss of generality, we may assume that \begin{align*} \bar{g}(t_{1})\leq g(t_{1})\quad \mbox{and}\quad \bar{h}(t_{1})=h(t_{1}). \end{align*} Thus, $\bar{h}^{\prime}(t_{1})\leq h^{\prime}(t_{1})$. Since $\bar{v}(0,x)\geq v_0(x)$ for $x\in[-h_{0},h_{0}]$, $\bar{v}(t,g(t))\geq0=v(t,g(t))$ and $\bar{v}(t,h(t))\geq0=v(t,h(t))$ for $t\in (0,t_{1}]$, by applying Lemma 2.2, we have $\bar{v}>v$ in $\Omega_{t_{1}}^{\bar{g},\bar{h}}$. Moreover, by the fact that $\bar{v}(t_{1},h(t_{1}))=\bar{v}(t_{1},\bar{h}(t_{1}))=0=v(t_{1},h(t_{1}))$, we deduce that $\bar{v}_{x}(t_{1},h(t_{1}))<v_{x}(t_{1},h(t_{1}))$. Similarly, using Lemma 2.2 in \cite{cdll19}, we can obtain $\bar{u}>u$ in $\Omega_{t_{1}}^{\bar{g},\bar{h}}$. 
It follows that \begin{align*} &\bar{h}^{\prime}(t_{1})\\[3pt] &\geq -\mu \bar{v}_{x}(t_{1}, \bar{h}(t_{1})) +\rho_{1}\int_{\bar{g}(t_{1})}^{\bar{h}(t_{1})}\int_{\bar{h}(t_{1})}^{\infty}J(x-y)\bar{u}(t_{1},x)dydx +\rho_{2}\int_{\bar{g}(t_{1})}^{\bar{h}(t_{1})}\int_{\bar{h}(t_{1})}^{\infty}J(x-y)\bar{v}(t_{1},x)dydx\\[3pt] &\geq -\mu \bar{v}_{x}(t_{1}, h(t_{1})) +\rho_{1}\int_{g(t_{1})}^{h(t_{1})}\int_{h(t_{1})}^{\infty}J(x-y)\bar{u}(t_{1},x)dydx +\rho_{2}\int_{g(t_{1})}^{h(t_{1})}\int_{h(t_{1})}^{\infty}J(x-y)\bar{v}(t_{1},x)dydx\\[3pt] &>-\mu v_{x}(t_{1}, h(t_{1})) +\rho_{1}\int_{g(t_{1})}^{h(t_{1})}\int_{h(t_{1})}^{\infty}J(x-y)u(t_{1},x)dydx +\rho_{2}\int_{g(t_{1})}^{h(t_{1})}\int_{h(t_{1})}^{\infty}J(x-y)v(t_{1},x)dydx\\[3pt] &=h^{\prime}(t_{1}), \end{align*} which is a contradiction. Hence, $h(t)<\bar{h}(t)$, $g(t)>\bar{g}(t)$ for all $t\in (0,T_{0}]$, and $\bar{u}(t,x)>u(t,x)$, $\bar{v}(t,x)>v(t,x)$ in $\Omega_{T_{0}}^{g,h}$. For the general case that $\bar{h}(0)\geq h_{0},~\bar{g}(0)\leq-h_{0}$, we can adopt the same method as in the proof of Lemma 5.1 in \cite{gw12}. \hfill $\Box$\\ \noindent\textbf{Remark 3.1.} From the proof of Lemma 3.1, we can see that the conditions $\bar{v}(t, \bar{g}(t))=0, \bar{v}(t,\bar{h}(t))=0$ are necessary in deriving the contradiction from the relationship between $\bar{h}^{\prime}(t)$ and $h^{\prime}(t)$. If $\tau=0$, as considered in \cite{dwz19}, then the expressions of $h^{\prime}(t), g^{\prime}(t)$ in (1.1) and $\bar{h}^{\prime}(t), \bar{g}^{\prime}(t)$ in (3.1) do not include the terms $-\mu v_{x}(t,h(t)),-\mu v_{x}(t,g(t))$ and $-\mu \bar{v}_{x}(t,\bar{h}(t)),-\mu \bar{v}_{x}(t,\bar{g}(t))$, respectively; in that case the conditions $\bar{v}(t, \bar{g}(t))=0, \bar{v}(t,\bar{h}(t))=0$ can be weakened to $\bar{v}(t, \bar{g}(t))\geq0, \bar{v}(t,\bar{h}(t))\geq0$. \subsection{Some eigenvalue problems} In this subsection, we mainly study some eigenvalue problems and analyze the properties of their principal eigenvalues. 
Hereafter, we always assume that $\Omega$ is a bounded, connected open interval in $\mathbb{R}$ and denote by $|\Omega|$ its length.\\ Consider the following operator \begin{align*} -(L_{\Omega}+a(t))[\phi](t,x)= \phi_{t}(t,x)-d_{1}[\int_{\Omega}J(x-y)\phi(t,y)dy-\phi(t,x)]-a(t)\phi(t,x), \quad (t,x)\in \mathbb{R}\times \overline{\Omega}, \tag{3.2} \end{align*} where $a\in C_{T}(\mathbb{R}):=\{a\in C(\mathbb{R}): a(t+T)=a(t)>0, \forall t\in \mathbb{R}\}$. For convenience, we define the spaces $\mathcal{X}_{\Omega}, \mathcal{X}_{\Omega}^{+}, \mathcal{X}_{\Omega}^{++}$ as follows: \begin{align*} &\mathcal{X}_{\Omega}=\{\phi\in C^{1,0}(\mathbb{R}\times \overline{\Omega}): \phi(t+T,x)=\phi(t,x), ~(t,x)\in \mathbb{R}\times \overline{\Omega}\},\\[3pt] &\mathcal{X}_{\Omega}^{+}=\{\phi\in \mathcal{X}_{\Omega}: \phi(t,x)\geq 0, ~(t,x)\in \mathbb{R}\times \overline{\Omega}\},\\[3pt] &\mathcal{X}_{\Omega}^{++}=\{\phi\in \mathcal{X}_{\Omega}: \phi(t,x)> 0, ~(t,x)\in \mathbb{R}\times \overline{\Omega}\}, \end{align*} where $C^{1,0}(\mathbb{R}\times \overline{\Omega})$ denotes the class of functions that are $C^{1}$ in $t$ and continuous in $x$. We define \begin{align*} \lambda_{1}(-(L_{\Omega}+a(t)))=\inf\Big\{\mathfrak{R}\lambda:~\lambda\in\sigma(-(L_{\Omega}+a(t)))\Big\}, \end{align*} where $\sigma(-(L_{\Omega}+a(t)))$ is the spectrum of $-(L_{\Omega}+a(t))$. By Theorem A $(1)$ in \cite{svo19}, we know that $\lambda_{1}(-(L_{\Omega}+a(t)))$ is the principal eigenvalue of $-(L_{\Omega}+a(t))$, which means that there exists an eigenfunction $\phi\in \mathcal{X}_{\Omega}^{++}$ such that \begin{align*} -(L_{\Omega}+a(t))[\phi](t,x)=\lambda_{1}(-(L_{\Omega}+a(t)))\phi. \end{align*} \noindent\textbf{Lemma 3.2.} (see Theorem B in \cite{svo19}) Assume that $J$ satisfies (\textbf{J}) and $a\in C_{T}(\mathbb{R})$. 
Let $u(t,x;u_{0})$ be a solution of \begin{align*} \left\{\begin{array}{l} u_t=d_{1}[\int_{\Omega}J(x-y)u(t,y)dy-u(t,x)]+u(a(t)-u), \quad t>0, x\in \overline{\Omega},\\[3pt] u(0,x)=u_{0}(x), \quad x\in \overline{\Omega}, \end{array}\right. \end{align*} where $u_{0}\in C(\overline{\Omega})$ is non-negative and not identically zero. The following statements hold:\\ $(i)$ If $\lambda_{1}(-(L_{\Omega}+a(t)))<0$, then the equation \begin{align*} u_t=d_{1}[\int_{\Omega}J(x-y)u(t,y)dy-u(t,x)]+u(a(t)-u), \quad t\in \mathbb{R}, x\in \overline{\Omega} \tag{3.3} \end{align*} admits a unique solution $u^{*}\in \mathcal{X}_{\Omega}^{++}$, and there holds \begin{align*} \|u(t,\cdot;u_{0})-u^{*}(t,\cdot)\|_{C(\overline{\Omega})}\rightarrow 0 \quad \mbox{as}~t\rightarrow \infty. \end{align*} $(ii)$ If $\lambda_{1}(-(L_{\Omega}+a(t)))>0$, then the equation (3.3) admits no solution in $\mathcal{X}_{\Omega}^{+}\setminus \{0\}$ and there holds \begin{align*} \|u(t,\cdot;u_{0})\|_{C(\overline{\Omega})}\rightarrow 0 \quad \mbox{as}~t\rightarrow \infty. \end{align*} \noindent\textbf{Remark 3.2.} For the case $\lambda_{1}(-(L_{\Omega}+a(t)))=0$, (3.3) has been shown in \cite{svo19} to admit no solution in $\mathcal{X}_{\Omega}^{+}\setminus \{0\}$, but the global dynamics is not provided. Since $a(t)$ is independent of the spatial variable, we can also get $\|u(t,\cdot;u_{0})\|_{C(\overline{\Omega})}\rightarrow 0$; more details are given in the proof of Theorem 4.1.\\ In what follows, we present some further properties of $\lambda_1$. \\ \noindent\textbf{Lemma 3.3.} Let $J$ satisfy (\textbf{J}) and $a\in C_{T}(\mathbb{R})$. 
Then $(i)$ $\lambda_{1}(-(L_{\Omega}+a(t)))$ is strictly decreasing and continuous in $|\Omega|$; $(ii)$ $\lim_{|\Omega|\rightarrow +\infty}\lambda_{1}(-(L_{\Omega}+a(t)))=-a_{T}$, where $a_{T}=\frac{1}{T}\int_{0}^{T}a(t)dt$; $(iii)$ $\lim_{|\Omega|\rightarrow 0}\lambda_{1}(-(L_{\Omega}+a(t)))=d_{1}-a_{T}$.\\ \noindent\textbf{Proof.} Let $\phi\in \mathcal{X}_{\Omega}^{++}$ be an eigenfunction of $-(L_{\Omega}+a(t))$ associated with the principal eigenvalue $\lambda_{1}(-(L_{\Omega}+a(t)))$. We define \begin{align*} \psi(t,x)=e^{-\int_{0}^{t}(a(s)-a_{T})ds}\phi(t,x), \quad\forall (t,x)\in \mathbb{R}\times \overline{\Omega}. \end{align*} It is easy to check that $\psi\in \mathcal{X}_{\Omega}^{++}$. Multiplying the equation $-(L_{\Omega}+a(t))[\phi]=\lambda_{1}(-(L_{\Omega}+a(t)))\phi$ by the function $t\mapsto e^{-\int_{0}^{t}(a(s)-a_{T})ds}$, we have \begin{align*} -\psi_{t}(t,x)+d_{1}[\int_{\Omega}J(x-y)\psi(t,y)dy-\psi(t,x)] +a_{T}\psi(t,x)+\lambda_{1}(-(L_{\Omega}+a(t)))\psi(t,x)=0 \end{align*} for $(t,x)\in \mathbb{R}\times \overline{\Omega}$. Taking $\psi_{T}(x)=\frac{1}{T}\int_{0}^{T}\psi(t,x)dt$ for $x\in \overline{\Omega}$, and integrating the above equation over $[0,T]$ with respect to $t$, we have \begin{align*} d_{1}[\int_{\Omega}J(x-y)\psi_{T}(y)dy-\psi_{T}(x)] +a_{T}\psi_{T}(x)+\lambda_{1}(-(L_{\Omega}+a(t)))\psi_{T}(x)=0, \quad x\in \overline{\Omega}. \end{align*} That is, $-\lambda_{1}(-(L_{\Omega}+a(t)))$ is the principal eigenvalue of the following nonlocal operator $\mathcal{L}_{\Omega}+a_{T}: C(\overline{\Omega})\rightarrow C(\overline{\Omega})$ defined by \begin{align*} (\mathcal{L}_{\Omega}+a_{T})[\omega](x) :=d_{1}[\int_{\Omega}J(x-y)\omega(y)dy-\omega(x)] +a_{T}\omega(x) \tag{3.4} \end{align*} with an eigenfunction $\psi_{T}\in \mathcal{X}_{\Omega}^{++}$. 
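For completeness, the $T$-periodicity of $\psi$ used in the averaging step above can be checked directly: since $\int_{0}^{T}(a(s)-a_{T})ds=0$ and $a$ is $T$-periodic,
\begin{align*}
\psi(t+T,x)=e^{-\int_{0}^{t+T}(a(s)-a_{T})ds}\phi(t+T,x)
=e^{-\int_{0}^{t}(a(s)-a_{T})ds}\phi(t,x)=\psi(t,x),
\end{align*}
and consequently $\int_{0}^{T}\psi_{t}(t,x)dt=\psi(T,x)-\psi(0,x)=0$, so the $\psi_{t}$ term drops out after integration over $[0,T]$.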
Denote by $\lambda_{1}(\mathcal{L}_{\Omega}+a_{T})$ the principal eigenvalue of $\mathcal{L}_{\Omega}+a_{T}$, then we have \begin{align*} -\lambda_{1}(-(L_{\Omega}+a(t)))=\lambda_{1}(\mathcal{L}_{\Omega}+a_{T}). \tag{3.5} \end{align*} Without loss of generality, we assume that $\Omega=(l_{1},l_{2})$. According to Proposition 3.4 in \cite{cdll19}, we know the following results hold: $(i)$ $\lambda_{1}(\mathcal{L}_{\Omega}+a_{T})$ is strictly increasing and continuous in $|\Omega|=l_{2}-l_{1}$; $(ii)$ $\lim_{l_{2}-l_{1}\rightarrow +\infty}\lambda_{1}(\mathcal{L}_{\Omega}+a_{T})=a_{T}$; $(iii)$ $\lim_{l_{2}-l_{1}\rightarrow 0}\lambda_{1}(\mathcal{L}_{\Omega}+a_{T})=a_{T}-d_{1}$.\\ Combining the above conclusions and (3.5), we can get the desired results. \hfill $\Box$\\ Now, we consider another periodic-parabolic eigenvalue problem \begin{align*} \left\{\begin{array}{l} -(\tilde{L}_{\Omega}+c(t))[\varphi](t,x)\\[3pt] =\varphi_{t}-d_{2}[\tau\varphi_{xx}+(1-\tau)(\int_{\Omega}J(x-y)\varphi(t,y)dy-\varphi(t,x))]-c(t)\varphi=\lambda \varphi, \quad \mbox{in}~[0,T]\times\Omega,\\[3pt] \varphi(t,x)=0, \quad \mbox{on}~[0,T]\times\partial\Omega, \\[3pt] \varphi(0,x)=\varphi(T,x), \quad \mbox{in}~\Omega. \end{array}\right. \tag{3.6} \end{align*} Define a linear nonlocal operator $\mathcal{K}$ on $C([0,T]; L^{p}(\Omega))$ $(\forall p\geq 1)$ by \begin{align*} (\mathcal{K}\varphi)(t,x):= \int_{\Omega}J(x-y)\varphi(t,y)dy-\varphi(t,x). \end{align*} For any given $0<\tau\leq 1$, we can check that $\{\mathcal{A}(t): 0\leq t\leq T\}:=\{-d_{2}[\tau\partial_{x}^{2}+(1-\tau)\mathcal{K}]-c(t)I: 0\leq t\leq T\}$ satisfies the hypotheses (11.5) in \cite{hess91}. As shown in Section II.14 of \cite{hess91}, based on the Krein-Rutman theorem, we can prove that (3.6) admits a principal eigenvalue $\lambda_{1}(-(\tilde{L}_{\Omega}+c(t)))$ with principal eigenfunction $\varphi$.\\ For later applications, we give the following lemma. 
\\ \noindent\textbf{Lemma 3.4.} Let $J$ satisfy (\textbf{J}) and $c\in C_{T}(\mathbb{R})$. Then $(i)$ $\lambda_{1}(-(\tilde{L}_{\Omega}+c(t)))$ is a strictly decreasing continuous function in $|\Omega|$ and $\lambda_{1}(-(\tilde{L}_{\Omega}+c(t)))=0$ has a unique root $|\Omega|=h^{*}$; $(ii)$ if $\lambda_{1}(-(\tilde{L}_{\Omega}+c(t)))<0$, then the problem \begin{align*} \left\{\begin{array}{l} \varphi_{t}-d_{2}[\tau\varphi_{xx}+(1-\tau)(\int_{\Omega}J(x-y)\varphi(t,y)dy-\varphi(t,x))]=\varphi(c(t)-\varphi), \quad \mbox{in}~(0,\infty)\times\Omega,\\[3pt] \varphi(t,x)=0, \quad \mbox{on}~(0,\infty)\times\partial\Omega \end{array}\right. \end{align*} admits a unique positive $T$-periodic solution $\varphi^{*}$, and $\varphi^{*}$ is globally asymptotically stable.\\ \noindent\textbf{Proof.} $(i)$ Let $\varphi$ be an eigenfunction of $(3.6)$ associated with the principal eigenvalue $\lambda_{1}(-(\tilde{L}_{\Omega}+c(t)))$. Define \begin{align*} \psi(t,x)=e^{-\int_{0}^{t}(c(s)-c_{T})ds}\varphi(t,x), \quad\forall (t,x)\in \mathbb{R}\times \overline{\Omega}. \end{align*} Similar to the proof of Lemma 3.3, $\lambda_{1}(-(\tilde{L}_{\Omega}+c(t)))$ is the principal eigenvalue of the following elliptic-type problem \begin{align*} \left\{\begin{array}{l} -(\tilde{\mathcal{L}}_{\Omega}+c_{T})[\omega]=-d_{2}[\tau\omega_{xx}+(1-\tau)(\int_{\Omega}J(x-y)\omega(y)dy-\omega(x))]-c_{T}\omega=\lambda \omega, \quad \mbox{in}~\Omega,\\[3pt] \omega(x)=0, \quad \mbox{on}~\partial\Omega \end{array}\right. \tag{3.7} \end{align*} with an eigenfunction $\omega(x)=\frac{1}{T}\int_{0}^{T}\psi(t,x)dt$. Denote by $\lambda_{1}(-(\tilde{\mathcal{L}}_{\Omega}+c_{T}))$ the principal eigenvalue of (3.7), then we have \begin{align*} \lambda_{1}(-(\tilde{L}_{\Omega}+c(t)))=\lambda_{1}(-(\tilde{\mathcal{L}}_{\Omega}+c_{T})). 
\tag{3.8} \end{align*} The continuity of $\lambda_{1}(-(\tilde{\mathcal{L}}_{\Omega}+c_{T}))$ with respect to $|\Omega|$ can be obtained by a simple re-scaling argument in the spatial variable $x$. Note that $\lambda_{1}(-(\tilde{\mathcal{L}}_{\Omega}+c_{T}))$ admits the variational formulation \begin{align*} &\lambda_{1}(-(\tilde{\mathcal{L}}_{\Omega}+c_{T}))\\[3pt] &=\inf_{0\not\equiv \omega\in H_{0}^{1}(\Omega)} \frac{d_{2}\tau\int_{\Omega}\omega_{x}^{2}(x)dx-d_{2}(1-\tau)\int_{\Omega}\int_{\Omega}J(x-y)\omega(y)\omega(x)dydx} {\int_{\Omega}\omega^{2}(x)dx}+[d_{2}(1-\tau)-c_{T}]. \end{align*} Extending the principal eigenfunction by zero, we can deduce the monotonicity of $\lambda_{1}(-(\tilde{\mathcal{L}}_{\Omega}+c_{T}))$ in $|\Omega|$ from this variational formulation. Next, we prove that $\lambda_{1}(-(\tilde{L}_{\Omega}+c(t)))=0$ has a unique root. Without loss of generality, we may assume that $\Omega=(0,l)$. Since \begin{align*} \int_{0}^{l}\int_{0}^{l}J(x-y)\omega(y)\omega(x)dydx \leq \int_{0}^{l}\int_{0}^{l}J(x-y)\frac{\omega^{2}(y)+\omega^{2}(x)}{2}dydx \leq \int_{0}^{l}\omega^{2}(x)dx, \end{align*} we have \begin{align*} \lambda_{1}(-(\tilde{\mathcal{L}}_{(0,l)}+c_{T})) \geq \inf_{0\not\equiv \omega\in H_{0}^{1}((0,l))} \frac{d_{2}\tau\int_{0}^{l}\omega_{x}^{2}(x)dx} {\int_{0}^{l}\omega^{2}(x)dx}-c_{T}. \end{align*} By the fact that \begin{align*} \inf_{0\not\equiv \omega\in H_{0}^{1}((0,l))} \frac{\int_{0}^{l}\omega_{x}^{2}(x)dx} {\int_{0}^{l}\omega^{2}(x)dx} =\frac{\pi^{2}}{l^{2}}, \end{align*} we know \begin{align*} \lim_{l\rightarrow 0}\lambda_{1}(-(\tilde{\mathcal{L}}_{(0,l)}+c_{T}))=+\infty \tag{3.9} \end{align*} and \begin{align*} \liminf_{l\rightarrow +\infty}\lambda_{1}(-(\tilde{\mathcal{L}}_{(0,l)}+c_{T}))\geq -c_{T}. \tag{3.10} \end{align*} On the other hand, by $(\textbf{J})$, for any fixed $0<\varepsilon\ll 1$, there exists $L=L(\varepsilon)>0$ such that \begin{align*} \int_{-L}^{L}J(x)dx>1-\varepsilon. 
\end{align*} For any large $l>3L$, we choose the test function $\varphi_{\varepsilon}(x)$ defined as follows \begin{align*} \varphi_{\varepsilon}(x) =\left\{\begin{array}{l} \frac{x}{\varepsilon}, \quad x\in[0,\varepsilon],\\[3pt] 1, \quad x\in[\varepsilon, l-\varepsilon],\\[3pt] \frac{l-x}{\varepsilon}, \quad x\in[l-\varepsilon,l]. \end{array}\right. \end{align*} It is easy to check that $\varphi_{\varepsilon}\in H_{0}^{1}((0,l))$ and that it satisfies $\int_{0}^{l}\varphi_{\varepsilon}^{2}(x)dx=l-\frac{4}{3}\varepsilon$ and $\int_{0}^{l}(\partial_{x}\varphi_{\varepsilon})^{2}(x)dx=\frac{2}{\varepsilon}$ (each linear ramp contributes $\frac{\varepsilon}{3}$ to the first integral and $\frac{1}{\varepsilon}$ to the second). Thus, \begin{align*} &\lambda_{1}(-(\tilde{\mathcal{L}}_{(0,l)}+c_{T}))\\[3pt] &\leq \frac{d_{2}\tau\int_{0}^{l}(\partial_{x}\varphi_{\varepsilon})^{2}(x)dx -d_{2}(1-\tau)\int_{0}^{l}\int_{0}^{l}J(x-y)\varphi_{\varepsilon}(y)\varphi_{\varepsilon}(x)dydx} {\int_{0}^{l}\varphi_{\varepsilon}^{2}(x)dx}+[d_{2}(1-\tau)-c_{T}]\\[3pt] &\leq \frac{\frac{2d_{2}\tau}{\varepsilon} -d_{2}(1-\tau)\int_{L+\varepsilon}^{l-L-\varepsilon}\int_{\varepsilon}^{l-\varepsilon}J(x-y)dydx} {l-\frac{4}{3}\varepsilon}+[d_{2}(1-\tau)-c_{T}]\\[3pt] &\leq \frac{\frac{2d_{2}\tau}{\varepsilon} -d_{2}(1-\tau)\int_{L+\varepsilon}^{l-L-\varepsilon}\int_{-L}^{L}J(\xi)d\xi dx} {l-\frac{4}{3}\varepsilon}+[d_{2}(1-\tau)-c_{T}]\\[3pt] &\leq \frac{\frac{2d_{2}\tau}{\varepsilon} -d_{2}(1-\tau)(l-2L-2\varepsilon)(1-\varepsilon)} {l-\frac{4}{3}\varepsilon}+[d_{2}(1-\tau)-c_{T}]\\[3pt] &\rightarrow -d_{2}(1-\tau)(1-\varepsilon)+[d_{2}(1-\tau)-c_{T}] \quad \mbox{as}~l\rightarrow +\infty. \end{align*} Since $\varepsilon$ is arbitrary, it follows that \begin{align*} \limsup_{l\rightarrow +\infty}\lambda_{1}(-(\tilde{\mathcal{L}}_{(0,l)}+c_{T}))\leq -c_{T}, \end{align*} which together with (3.10) implies that \begin{align*} \lim_{l\rightarrow +\infty}\lambda_{1}(-(\tilde{\mathcal{L}}_{(0,l)}+c_{T}))=-c_{T}. 
\tag{3.11} \end{align*} From (3.8), (3.9) and (3.11), we know that $\lambda_{1}(-(\tilde{L}_{(0,l)}+c(t)))=0$ has a unique root. $(ii)$ The proof is similar to that of Theorem 28.1 in \cite{hess91}; we omit the details. \hfill $\Box$ \section{Spreading and vanishing for problem (1.1)} In this section, we investigate the dynamics of problem (1.1), including the spreading-vanishing dichotomy and some sufficient conditions for spreading and vanishing. In view of (2.2), we see that the free boundaries $h(t),-g(t)$ are strictly increasing functions with respect to time $t$. Thus, $h_{\infty}:=\lim_{t\rightarrow \infty}h(t)$ and $g_{\infty}:=\lim_{t\rightarrow \infty}g(t)$ are well-defined. Clearly, $h_{\infty},-g_{\infty}\leq +\infty$.\\ By a similar argument to the proof of Proposition 3.1 in \cite{w14}, with minor modifications, we have the following result.\\ \noindent\textbf{Lemma 4.1.} Let $d$, $\mu$ and $h^{0}$ be positive constants and $C\in \mathbb{R}$. Assume that $\varphi_{0}\in C^{2}([-h^{0}, h^{0}])$ satisfies $\varphi_{0}(-h^{0})=\varphi_{0}(h^{0})=0$ and $\varphi_{0}>0$ in $(-h^{0}, h^{0})$. Let $(g, h)\in [C^{1+\frac{\alpha}{2}}([0, \infty))]^{2}$, $\varphi\in C^{1+\frac{\alpha}{2}, 2+\alpha}((0, \infty)\times(g(t), h(t)))$ for some $\alpha\in (0,1)$ and satisfy $g(t)<0$, $h(t)>0$, $\varphi(t,x)>0$ for all $t\geq 0$ and $g(t)<x<h(t)$. We further suppose that $\lim_{t\rightarrow \infty}g(t)>-\infty$, $\lim_{t\rightarrow \infty}h(t)<\infty$, $\lim_{t\rightarrow \infty}g^{\prime}(t)=\lim_{t\rightarrow \infty}h^{\prime}(t)=0$ and there exists a constant $K>0$ such that $\|\varphi\|_{C^{1}([g(t), h(t)])}\leq K$ for $t>1$. 
If $(\varphi, g, h)$ satisfies \begin{align*} \left\{\begin{array}{l} \varphi_t-d\varphi_{xx}\geq C\varphi,\quad t>0,~ g(t)<x<h(t),\\[3pt] \varphi=0,\quad t\geq0, ~x=g(t)~\mbox{or}~ x=h(t),\\[3pt] g^{\prime}(t)\leq-\mu\varphi_{x}(t, g(t)),~h^{\prime}(t)\geq-\mu\varphi_{x}(t, h(t)), \quad t>0,\\[3pt] g(0)=-h^{0},~ h(0)=h^{0}, \\[3pt] \varphi(0,x)=\varphi_{0}(x), \quad -h^{0}<x<h^{0}, \end{array}\right. \end{align*} then $\lim_{t\rightarrow \infty}\max_{g(t)\leq x\leq h(t)}\varphi(t,x)=0$.\\ The next lemma provides an estimate for $v$. The proof is a simple modification of that for Lemma 4.2 in \cite{waw18}, so we omit it here.\\ \noindent\textbf{Lemma 4.2.} Let $(u,v,g,h)$ be the unique global solution of (1.1) and $h_{\infty}-g_{\infty}<\infty$. Then there exists $C>0$ such that \begin{align*} \|v\|_{C^{\frac{1+\alpha}{2},1+\alpha}(D_{\infty})}\leq C, \quad \mbox{where}~D_{\infty}:=\{(t,x): t\in[0,\infty),~x\in[g(t),h(t)]\}, \tag{4.1} \end{align*} and hence \begin{align*} \|v_{x}(t,g(t))\|_{C^{\frac{\alpha}{2}}(\overline{\mathbb{R}}_{+})} +\|v_{x}(t,h(t))\|_{C^{\frac{\alpha}{2}}(\overline{\mathbb{R}}_{+})} \leq C. \tag{4.2} \end{align*} \noindent\textbf{Lemma 4.3.} If $h_{\infty}-g_{\infty}<\infty$, then $\lim_{t\rightarrow \infty}g^{\prime}(t)=\lim_{t\rightarrow \infty}h^{\prime}(t)=0$.\\ \noindent\textbf{Proof.} It is easy to see that $-\infty<g_{\infty}<h_{\infty}<\infty$. From (2.2), we can deduce that $g^{\prime}(t)$ and $h^{\prime}(t)$ defined in (1.1) are bounded. Let \begin{align*} \varphi_{1}(t)=v_{x}(t,h(t)), ~\varphi_{2}(t)=\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}J(x-y)u(t,x)dydx, ~\varphi_{3}(t)=\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}J(x-y)v(t,x)dydx. \end{align*} By (4.2), we get $|\varphi_{1}(t)-\varphi_{1}(s)|\leq C_{1}|t-s|^{\frac{\alpha}{2}}$ for any $t,s>0$. 
For $\varphi_{2}$, assume $t>s$; then $h(t)>h(s)$, $g(t)<g(s)$ and \begin{align*} &\varphi_{2}(t)-\varphi_{2}(s)\\[3pt] &=\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}J(x-y)u(t,x)dydx -\int_{g(s)}^{h(s)}\int_{h(s)}^{\infty}J(x-y)u(s,x)dydx\\[3pt] &=\int_{g(s)}^{h(s)}\int_{h(t)}^{\infty}J(x-y)[u(t,x)-u(s,x)]dydx +\int_{g(t)}^{g(s)}\int_{h(t)}^{\infty}J(x-y)u(t,x)dydx\\[3pt] &\quad+\int_{h(s)}^{h(t)}\int_{h(t)}^{\infty}J(x-y)u(t,x)dydx -\int_{g(s)}^{h(s)}\int_{h(s)}^{h(t)}J(x-y)u(s,x)dydx\\[3pt] &\leq \|\partial_{t}u\|_{L^{\infty}(D_{\infty})}(t-s)(h(s)-g(s)) +\|u\|_{L^{\infty}(D_{\infty})}(g(s)-g(t)) +2\|u\|_{L^{\infty}(D_{\infty})}(h(t)-h(s))\\[3pt] &\leq C_{2}(t-s), \end{align*} where $\|\partial_{t}u\|_{L^{\infty}(D_{\infty})}$ is bounded by means of the first equation in (1.1) and the bound of $u$, and the last inequality also uses the boundedness of $g^{\prime}$ and $h^{\prime}$. Thus, \begin{align*} |\varphi_{2}(t)-\varphi_{2}(s)| \leq C_{2}|t-s|. \end{align*} For $\varphi_{3}$, it follows from (4.1) that $|v(t,x)-v(s,x)|\leq C|t-s|^{\frac{1+\alpha}{2}}$ for $x\in [g(t),h(t)]$. Similarly to $\varphi_{2}$, we can prove that \begin{align*} |\varphi_{3}(t)-\varphi_{3}(s)| \leq C_{3}|t-s|. \end{align*} Therefore, $h^{\prime}(t)=-\mu \varphi_{1}+\rho_{1}\varphi_{2}+\rho_{2}\varphi_{3}$ is uniformly continuous in $[0,\infty)$. From $\lim_{t\rightarrow \infty}h(t)=h_{\infty}<\infty$, we know $\lim_{t\rightarrow \infty}h^{\prime}(t)=0$. Similarly, we can show $\lim_{t\rightarrow \infty}g^{\prime}(t)=0$. \hfill $\Box$\\ \noindent\textbf{Theorem 4.1.} If $h_{\infty}-g_{\infty}<\infty$, then the solution $(u,v,g,h)$ of (1.1) satisfies \begin{align*} \lim_{t\rightarrow \infty}\|u(t,\cdot)\|_{C([g(t),h(t)])}= \lim_{t\rightarrow \infty}\|v(t,\cdot)\|_{C([g(t),h(t)])}=0. \end{align*} \noindent\textbf{Proof.} Since $J\geq 0$, $v>0$ and $v$ is bounded, from the second equation in (1.1), there exists a constant $C\in\mathbb{R}$ such that \begin{align*} \partial_{t}v-d_{2}\tau\partial_{x}^{2}v\geq Cv. 
\end{align*} According to Lemma 4.1, we get \begin{align*} \lim_{t\rightarrow \infty}\|v(t,\cdot)\|_{C([g(t),h(t)])}=0. \end{align*} We claim that \begin{align*} \lambda_{1}(-(L_{(g_{\infty},h_{\infty})}+a(t)))\geq 0, \tag{4.3} \end{align*} where $-(L_{(g_{\infty},h_{\infty})}+a(t))$ is defined in (3.2). Assume on the contrary that $\lambda_{1}(-(L_{(g_{\infty},h_{\infty})}+a(t)))<0$. For convenience, for any $\varepsilon>0$ we define $h_{\infty}^{\pm \varepsilon}:=h_{\infty}\pm \varepsilon$, $g_{\infty}^{\pm \varepsilon}:=g_{\infty}\pm \varepsilon$. Thus, there exists $\varepsilon_{1}>0$ such that $\lambda_{1}(-(L_{(g_{\infty}^{+\varepsilon},h_{\infty}^{-\varepsilon})}+a(t)-b(t)\varepsilon))<0$ for all $\varepsilon\in (0, \varepsilon_{1})$. For such $\varepsilon>0$, we can find $T_{\varepsilon}>0$ such that, for $t>T_{\varepsilon}$, \begin{align*} h(t)>h_{\infty}^{-\varepsilon},~g(t)<g_{\infty}^{+\varepsilon},~ \|v(t,\cdot)\|_{C([g(t),h(t)])}<\varepsilon. \end{align*} Then $u$ satisfies \begin{align*} \left\{\begin{array}{l} u_t\geq d_{1}\int_{g_{\infty}^{+\varepsilon}}^{h_{\infty}^{-\varepsilon}} J(x-y)u(t,y)dy-d_{1}u+u(a(t)-u-b(t)\varepsilon),\quad t>T_{\varepsilon},~ x\in [g_{\infty}^{+\varepsilon}, h_{\infty}^{-\varepsilon}]. \end{array}\right. \end{align*} Consider the following problem \begin{align*} \left\{\begin{array}{l} \phi_t= d_{1}\int_{g_{\infty}^{+\varepsilon}}^{h_{\infty}^{-\varepsilon}} J(x-y)\phi(t,y)dy-d_{1}\phi+\phi(a(t)-\phi-b(t)\varepsilon),\quad t>T_{\varepsilon},~ x\in [g_{\infty}^{+\varepsilon}, h_{\infty}^{-\varepsilon}],\\[5pt] \phi(T_{\varepsilon},x)=u(T_{\varepsilon},x), \quad x\in [g_{\infty}^{+\varepsilon}, h_{\infty}^{-\varepsilon}]. \end{array}\right. 
\tag{4.4} \end{align*} Since $\lambda_{1}(-(L_{(g_{\infty}^{+\varepsilon},h_{\infty}^{-\varepsilon})}+a(t)-b(t)\varepsilon))<0$, by Lemma 3.2 $(i)$ we know that the solution $\phi_{\varepsilon}(t,x)$ of problem (4.4) converges to $\phi^{*}_{\varepsilon}(t,x)$ uniformly in $[g_{\infty}^{+\varepsilon}, h_{\infty}^{-\varepsilon}]$ as $t\rightarrow \infty$, where $\phi^{*}_{\varepsilon}(t,x)\in \mathcal{X}_{\varepsilon}^{++}$ is the unique periodic solution of \begin{align*} \phi_t=d_{1}\int_{g_{\infty}^{+\varepsilon}}^{h_{\infty}^{-\varepsilon}} J(x-y)\phi(t,y)dy-d_{1}\phi+\phi(a(t)-\phi-b(t)\varepsilon),\quad t\in \mathbb{R},~ x\in [g_{\infty}^{+\varepsilon}, h_{\infty}^{-\varepsilon}]. \end{align*} By Lemma 2.2 in \cite{cdll19} and a simple comparison argument, we get \begin{align*} u(t,x)\geq \phi_{\varepsilon}(t,x), \quad \forall~t>T_{\varepsilon},~x\in [g_{\infty}^{+\varepsilon}, h_{\infty}^{-\varepsilon}]. \end{align*} Hence, there exist two constants $\tilde{T}_{\varepsilon}>T_{\varepsilon}$ and $C>0$ such that \begin{align*} u(t,x)\geq \frac{1}{2}\phi^{*}_{\varepsilon}(t,x)\geq C>0, \quad \forall~t>\tilde{T}_{\varepsilon},~x\in [g_{\infty}^{+\varepsilon}, h_{\infty}^{-\varepsilon}]. \end{align*} By (\textbf{J}), we can fix constants $\bar{\varepsilon},\delta_{0}>0$ such that $J\geq \delta_{0}$ on $[-\bar{\varepsilon},\bar{\varepsilon}]$. It follows that, for $0<\varepsilon<\min\{\varepsilon_{1}, \frac{\bar{\varepsilon}}{2}\}$ and $t>\tilde{T}_{\varepsilon}$, \begin{align*} h^{\prime}(t) &\geq \rho_{1}\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}J(x-y)u(t,x)dydx \geq \rho_{1}\int_{g_{\infty}^{+\varepsilon}}^{h_{\infty}^{-\varepsilon}}\int_{h_{\infty}}^{\infty}J(x-y)u(t,x)dydx\\[5pt] &\geq \rho_{1}\int_{h_{\infty}^{-\frac{\bar{\varepsilon}}{2}}}^{h_{\infty}^{-\varepsilon}}\int_{h_{\infty}}^{h_{\infty}^{+\frac{\bar{\varepsilon}}{2}}} \delta_{0}\frac{1}{2}\phi^{*}_{\varepsilon}(t,x)dydx \geq \rho_{1}\int_{h_{\infty}^{-\frac{\bar{\varepsilon}}{2}}}^{h_{\infty}^{-\varepsilon}}\int_{h_{\infty}}^{h_{\infty}^{+\frac{\bar{\varepsilon}}{2}}} \delta_{0}Cdydx>0, \end{align*} which implies that $h_{\infty}=\infty$. 
This is a contradiction, and thus (4.3) holds. Let $\bar{u}$ be the unique solution of \begin{align*} \left\{\begin{array}{l} \bar{u}_t=d_{1}\int_{g_{\infty}}^{h_{\infty}} J(x-y)\bar{u}(t,y)dy-d_{1}\bar{u}+\bar{u}(a(t)-\bar{u}),\quad t>0, x\in [g_{\infty}, h_{\infty}],\\[5pt] \bar{u}(0,x)=u_{0}(x),~ x\in [-h_{0},h_{0}]; \quad \bar{u}(0,x)=0, ~x\in [g_{\infty}, h_{\infty}]\setminus[-h_{0}, h_{0}]. \end{array}\right. \end{align*} Now we prove that $\lim_{t\rightarrow \infty}\bar{u}(t,x)=0$ uniformly in $[g_{\infty},h_{\infty}]$. Since (4.3) holds, we divide the discussion into two cases: (i) For the case $\lambda_{1}(-(L_{(g_{\infty},h_{\infty})}+a(t)))>0$, applying Lemma 3.2 $(ii)$ we can get the desired result. (ii) For the case $\lambda_{1}(-(L_{(g_{\infty},h_{\infty})}+a(t)))=0$, we define \begin{align*} w(t,x)=e^{-\int_{0}^{t}[a(s)-a_{T}]ds}\bar{u}(t,x), \end{align*} then $w(t,x)$ satisfies \begin{align*} \left\{\begin{array}{l} w_t=d_{1}\int_{g_{\infty}}^{h_{\infty}} J(x-y)w(t,y)dy-d_{1}w+w(a_{T}-e^{\int_{0}^{t}[a(s)-a_{T}]ds}w),\quad t>0, x\in [g_{\infty}, h_{\infty}],\\[5pt] w(0,x)=u_{0}(x),~x\in [-h_{0}, h_{0}]; \quad w(0,x)=0,~x\in [g_{\infty}, h_{\infty}]\setminus[-h_{0}, h_{0}]. \end{array}\right. \end{align*} For any $t>0$, we can write $t=nT+\tau$ with $\tau\in [0,T)$, and then \begin{align*} e^{\int_{0}^{t}[a(s)-a_{T}]ds} =e^{\int_{0}^{nT}[a(s)-a_{T}]ds}\,e^{\int_{nT}^{nT+\tau}[a(s)-a_{T}]ds} =e^{\int_{0}^{\tau}[a(s)-a_{T}]ds}, \end{align*} which together with the continuity of $a(t)$ implies that $M_{1}\leq e^{\int_{0}^{t}[a(s)-a_{T}]ds}\leq M_{2}$ for some positive constants $M_{1}$ and $M_{2}$. 
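For clarity, we record why the last identity above holds. Here we use that $a_{T}$ denotes the mean value $\frac{1}{T}\int_{0}^{T}a(s)ds$ of the $T$-periodic function $a$, so that \begin{align*} \int_{0}^{nT+\tau}[a(s)-a_{T}]ds =n\int_{0}^{T}[a(s)-a_{T}]ds+\int_{0}^{\tau}[a(s)-a_{T}]ds =\int_{0}^{\tau}[a(s)-a_{T}]ds, \end{align*} since $\int_{0}^{T}[a(s)-a_{T}]ds=Ta_{T}-Ta_{T}=0$; that is, the exponent $\int_{0}^{t}[a(s)-a_{T}]ds$ is $T$-periodic in $t$.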
By the comparison principle, we know $w(t,x)\leq \bar{w}(t,x)$, where $\bar{w}(t,x)$ is the unique solution of \begin{align*} \left\{\begin{array}{l} \bar{w}_t=d_{1}\int_{g_{\infty}}^{h_{\infty}} J(x-y)\bar{w}(t,y)dy-d_{1}\bar{w}+\bar{w}(a_{T}-M_{1}\bar{w}),\quad t>0, x\in [g_{\infty}, h_{\infty}],\\[5pt] \bar{w}(0,x)=u_{0}(x), ~x\in [-h_{0}, h_{0}]; \quad \bar{w}(0,x)=0, ~x\in [g_{\infty}, h_{\infty}]\setminus[-h_{0}, h_{0}]. \end{array}\right. \end{align*} Recall that by (3.5) we have $\lambda_{1}(\mathcal{L}_{(g_{\infty},h_{\infty})}+a_{T}) =-\lambda_{1}(-(L_{(g_{\infty},h_{\infty})}+a(t)))=0$, where $\mathcal{L}_{(g_{\infty},h_{\infty})}+a_{T}$ is defined in (3.4). By Proposition 3.5 in \cite{cdll19} (see also \cite{baz07,cov10}), we know that $\lim_{t\rightarrow \infty}\bar{w}(t,x)=0$ uniformly in $[g_{\infty},h_{\infty}]$. Thus, $w(t,x)$ and $\bar{u}(t,x)=e^{\int_{0}^{t}[a(s)-a_{T}]ds}w(t,x)$ converge to $0$ uniformly in $[g_{\infty},h_{\infty}]$ as $t\rightarrow +\infty$, which proves the claim for $\bar{u}$. On the other hand, it is easy to see that \begin{align*} \left\{\begin{array}{l} \bar{u}_t\geq d_{1}\int_{g(t)}^{h(t)} J(x-y)\bar{u}(t,y)dy-d_{1}\bar{u}+\bar{u}(a(t)-\bar{u}),\quad t>0, x\in (g(t), h(t)),\\[5pt] \bar{u}(t,g(t))\geq 0,~ \bar{u}(t,h(t))\geq 0,\\[5pt] \bar{u}(0,x)=u_{0}(x),~ x\in [-h_{0},h_{0}]. \end{array}\right. \end{align*} By the comparison principle, we know $u(t,x)\leq \bar{u}(t,x)$ for any $t>0$ and $x\in [g(t), h(t)]$. Thus, $\lim_{t\rightarrow \infty}\|u(t,\cdot)\|_{C([g(t),h(t)])}=0$. \hfill $\Box$\\ From Theorem 4.1, we can obtain the following spreading-vanishing dichotomy.\\ \noindent\textbf{Corollary 4.1.} (Spreading-vanishing dichotomy) Let $(u,v,g,h)$ be the unique solution of (1.1). 
Then, the following alternative holds:\\ \emph{Either} $(i)$ spreading: $\lim_{t\rightarrow \infty}(h(t)-g(t))=\infty$, \emph{or} $(ii)$ vanishing: $\lim_{t\rightarrow \infty}(g(t), h(t))=(g_{\infty}, h_{\infty})$ is a finite interval and $\lim_{t\rightarrow \infty}\max_{g(t)\leq x\leq h(t)}u(t,x)=\lim_{t\rightarrow \infty}\max_{g(t)\leq x\leq h(t)}v(t,x)=0$.\\ In what follows, we will provide some sufficient conditions for spreading and vanishing.\\ \noindent\textbf{Theorem 4.2.} If $h_{\infty}-g_{\infty}<\infty$, then \begin{align*} h_{\infty}-g_{\infty}\leq h^{*}, \end{align*} where $|\Omega|=h^{*}$ is the unique root of $\lambda_{1}(-(\tilde{L}_{\Omega}+c(t)))=0$ with $-(\tilde{L}_{\Omega}+c(t))$ defined in (3.6).\\ \noindent\textbf{Proof.} Recall that in Theorem 4.1 we have shown that $h_{\infty}-g_{\infty}<\infty$ implies \begin{align*} \lim_{t\rightarrow \infty}\|u(t,\cdot)\|_{C([g(t),h(t)])}= \lim_{t\rightarrow \infty}\|v(t,\cdot)\|_{C([g(t),h(t)])}=0. \tag{4.5} \end{align*} Assume on the contrary that $h_{\infty}-g_{\infty}>h^{*}$. Then there exist $0<\varepsilon\ll 1$ and $\mathcal{T}\gg 1$ such that \begin{align*} \begin{array}{l} h_{\infty}^{-\varepsilon}-g_{\infty}^{+\varepsilon} =h_{\infty}-g_{\infty}-2\varepsilon>h^{*}_{\varepsilon},\\[3pt] g(\mathcal{T})<g_{\infty}^{+\varepsilon},~h(\mathcal{T})>h_{\infty}^{-\varepsilon},\\[3pt] 0\leq u(t,x)<\varepsilon,~\forall t\geq \mathcal{T}, x\in [g_{\infty}^{+\varepsilon},h_{\infty}^{-\varepsilon}], \end{array} \end{align*} where $|\Omega|=h^{*}_{\varepsilon}$ is the unique root of $\lambda_{1}(-(\tilde{L}_{\Omega}+c(t)-d(t)\varepsilon))=0$. 
Then $v$ satisfies \begin{align*} \left\{\begin{array}{l} v_t\geq d_{2}\left[\tau v_{xx}+(1-\tau)\left(\int_{g_{\infty}^{+\varepsilon}}^{h_{\infty}^{-\varepsilon}}J(x-y)v(t,y)dy-v\right)\right]+v(c(t)-d(t)\varepsilon-v),\quad \\[3pt] \qquad\qquad t>\mathcal{T}, x\in (g_{\infty}^{+\varepsilon}, h_{\infty}^{-\varepsilon}),\\[5pt] v(t,g_{\infty}^{+\varepsilon})> 0,~ v(t,h_{\infty}^{-\varepsilon})> 0, ~ t\geq\mathcal{T},\\[5pt] v(\mathcal{T},x)>0,~ x\in (g_{\infty}^{+\varepsilon},h_{\infty}^{-\varepsilon}). \end{array}\right. \end{align*} Let $\psi$ be the unique positive solution of \begin{align*} \left\{\begin{array}{l} \psi_t=d_{2}\left[\tau \psi_{xx}+(1-\tau)\left(\int_{g_{\infty}^{+\varepsilon}}^{h_{\infty}^{-\varepsilon}}J(x-y)\psi(t,y)dy-\psi\right)\right] +\psi(c(t)-d(t)\varepsilon-\psi),\\[5pt] \qquad\qquad t>\mathcal{T}, x\in (g_{\infty}^{+\varepsilon}, h_{\infty}^{-\varepsilon}),\\[5pt] \psi(t,g_{\infty}^{+\varepsilon})=0,~ \psi(t,h_{\infty}^{-\varepsilon})=0, ~ t\geq\mathcal{T},\\[5pt] \psi(\mathcal{T},x)=v(\mathcal{T},x),~ x\in (g_{\infty}^{+\varepsilon},h_{\infty}^{-\varepsilon}). \end{array}\right. \end{align*} By Lemma 2.2, we have \begin{align*} \psi(t,x)\leq v(t,x),\quad t\geq \mathcal{T}, x\in [g_{\infty}^{+\varepsilon},h_{\infty}^{-\varepsilon}]. 
\end{align*} Since $h_{\infty}^{-\varepsilon}-g_{\infty}^{+\varepsilon}=h_{\infty}-g_{\infty}-2\varepsilon>h^{*}_{\varepsilon}$, we have $\lambda_{1}(-(\tilde{L}_{(g_{\infty}^{+\varepsilon}, h_{\infty}^{-\varepsilon})}+c(t)-d(t)\varepsilon))<0$, and then Lemma 3.4 implies that $\psi(t+nT,x)\rightarrow \omega(t,x)$ as $n\rightarrow\infty$ uniformly in any compact subset of $(g_{\infty}^{+\varepsilon},h_{\infty}^{-\varepsilon})$, where $\omega(t,x)$ is the unique positive periodic solution of \begin{align*} \left\{\begin{array}{l} \omega_t=d_{2}\left[\tau \omega_{xx}+(1-\tau)\left(\int_{g_{\infty}^{+\varepsilon}}^{h_{\infty}^{-\varepsilon}}J(x-y)\omega(t,y)dy-\omega\right)\right]+\omega(c(t)-d(t)\varepsilon-\omega),\quad \\[5pt] \qquad\qquad t\in [0, T], x\in (g_{\infty}^{+\varepsilon}, h_{\infty}^{-\varepsilon}),\\[5pt] \omega(t,g_{\infty}^{+\varepsilon})=0,~ \omega(t,h_{\infty}^{-\varepsilon})=0, \quad t\in [0, T],\\[5pt] \omega(0,x)=\omega(T,x), \quad x\in (g_{\infty}^{+\varepsilon},h_{\infty}^{-\varepsilon}). \end{array}\right. \end{align*} Therefore, $\liminf_{n\rightarrow \infty}v(t+nT,x)\geq \lim_{n\rightarrow \infty}\psi(t+nT,x)=\omega(t,x)>0$ for all $x\in (g_{\infty}^{+\varepsilon}, h_{\infty}^{-\varepsilon})$, which is a contradiction to (4.5). This completes the proof. \hfill $\Box$\\ \noindent\textbf{Corollary 4.2.} If $h_{0}\geq\frac{1}{2}h^{*}$, then spreading occurs, that is, $h_{\infty}-g_{\infty}=+\infty$.\\ If $a_{T}\geq d_{1}$, then Lemma 3.3 implies that $\lambda_{1}(-(L_{\Omega}+a(t)))<0$ for all $l:=|\Omega|>0$. Thus, vanishing cannot happen, by the proof of Theorem 4.1, which means that $h_{\infty}-g_{\infty}=+\infty$ always holds.\\ \noindent\textbf{Theorem 4.3.} If $a_{T}\geq d_{1}$, then spreading always happens.\\ On the other hand, if $a_{T}<d_{1}$, then Lemma 3.3 implies that $\lambda_{1}(-(L_{\Omega}+a(t)))>0$ for $0<|\Omega|\ll 1$, and $\lambda_{1}(-(L_{\Omega}+a(t)))<0$ for $|\Omega|\gg 1$. 
Since $\lambda_{1}(-(L_{\Omega}+a(t)))$ is strictly decreasing in $|\Omega|$, there exists an $l^{*}>0$ such that $\lambda_{1}(-(L_{\Omega}+a(t)))=0$ for $|\Omega|=l^{*}$, $\lambda_{1}(-(L_{\Omega}+a(t)))>0$ for $|\Omega|<l^{*}$ and $\lambda_{1}(-(L_{\Omega}+a(t)))<0$ for $|\Omega|>l^{*}$. From the proof of (4.3), we know that if $h_{\infty}-g_{\infty}<+\infty$ then $h_{\infty}-g_{\infty}\leq l^{*}$. Therefore, if $h_{0}\geq \frac{l^{*}}{2}$ then we have $h_{\infty}-g_{\infty}=+\infty$.\\ \noindent\textbf{Theorem 4.4.} Assume $a_{T}<d_{1}$ and $h_{0}<\frac{1}{2}\min\{h^{*}, l^{*}\}$. If one of the following conditions is satisfied:\\ $(i)$ $\tau=1$, $(ii)$ $J(x)$ is equal to a positive constant on $[-2h_{0}-2\delta_{0}, 2h_{0}+2\delta_{0}]$ for some small constant $\delta_{0}>0$,\\ then there exists $\Lambda_{0}>0$ such that $h_{\infty}-g_{\infty}<+\infty$ when $\mu+\rho_{1}+\rho_{2}\leq \Lambda_{0}$.\\ \noindent\textbf{Proof.} Since $\lambda_{1}(-(L_{(-h_{0}, h_{0})}+a(t)))>0$, we can choose $h_{0}<h_{1}<\frac{l^{*}}{2}$ such that $\lambda:=\lambda_{1}(-(L_{(-h_{1}, h_{1})}+a(t)))>0$. Let $\bar{u}$ be the unique solution of \begin{align*} \left\{\begin{array}{l} \bar{u}_t=d_{1}\int_{-h_{1}}^{h_{1}} J(x-y)\bar{u}(t,y)dy-d_{1}\bar{u}+a(t)\bar{u},\quad t>0, x\in [-h_{1}, h_{1}],\\[5pt] \bar{u}(0,x)=u_{0}(x),\quad |x|\leq h_{0};\quad \bar{u}(0,x)=0,\quad h_{0}<|x|\leq h_{1}. \end{array}\right. \end{align*} Let $\varphi(t,x)$ be the eigenfunction associated with $\lambda$, normalized so that $\|\varphi\|_{L^{\infty}([0, T]\times [-h_{1}, h_{1}])}=1$; that is, \begin{align*} -\left(L_{(-h_{1}, h_{1})}+a(t)\right)[\varphi] =\lambda \varphi. 
\end{align*} Let $\omega(t,x)=Ce^{-\frac{\lambda t}{2}}\varphi(t,x)$ for some $C>0$; it is easy to check that \begin{align*} &\omega_{t}-d_{1}\int_{-h_{1}}^{h_{1}} J(x-y)\omega(t,y)dy+d_{1}\omega-a(t)\omega\\[3pt] &=Ce^{-\frac{\lambda t}{2}}\left(\varphi_{t}-d_{1}\int_{-h_{1}}^{h_{1}} J(x-y)\varphi(t,y)dy+d_{1}\varphi-a(t)\varphi-\frac{\lambda}{2}\varphi\right)\\[3pt] &=\frac{1}{2}\lambda Ce^{-\frac{\lambda t}{2}}\varphi(t,x)>0, \end{align*} for all $t>0$ and $x\in [-h_{1}, h_{1}]$. Choose $C>0$ large enough that $\omega(0,x)=C\varphi(0,x)>u_{0}(x)$ on $[-h_{1}, h_{1}]$. Applying Lemma 3.3 in \cite{cdll19}, we have \begin{align*} \bar{u}(t,x)\leq \omega(t,x)=Ce^{-\frac{\lambda t}{2}}\varphi(t,x) \leq Ce^{-\frac{\lambda t}{2}}, \end{align*} for all $t>0$ and $x\in [-h_{1}, h_{1}]$. On the other hand, since $h_{0}<\frac{h^{*}}{2}$, we can choose a constant $h_{2}$ satisfying $h_{0}<h_{2}<\min\{\frac{h^{*}}{2},h_{1},h_{0}+\delta_{0}\}$ such that $\lambda_{1}(-(\tilde{L}_{(-h_{2}, h_{2})}+c(t)))>0$. Let $\psi(t,x)$ be the corresponding normalized eigenfunction of $-(\tilde{L}_{(-h_{2}, h_{2})}+c(t))$ associated with $\lambda_{1}(-(\tilde{L}_{(-h_{2}, h_{2})}+c(t)))$. Note that $\psi_{x}(t,h_{2})<0, \psi_{x}(t,-h_{2})>0$ in $[0,T]$. We claim that there exists a constant $\alpha>0$ such that \begin{align*} x\psi_{x}(t,x)\leq \alpha\psi(t,x), \quad \forall(t,x)\in [0,T]\times [-h_{2},h_{2}]. \end{align*} In fact, since $\pm h_{2}\psi_{x}(t,\pm h_{2})<0$, by the continuity of $x\psi_{x}(t,x)$, we have $x\psi_{x}(t,x)<0$ on $[0,T]\times ([-h_{2},-h_{2}+\delta_{1}]\cup [h_{2}-\delta_{2},h_{2}])$ for some $\delta_{1},\delta_{2}>0$. Moreover, from the positivity and continuity of $\psi(t,x)$, we know there exists a constant $m>0$ such that $\psi(t,x)\geq m$ for $(t,x)\in [0,T]\times[-h_{2}+\delta_{1}, h_{2}-\delta_{2}]$. 
Applying the continuity of $x\psi_{x}(t,x)$ again, we can choose a constant $\alpha>0$ large enough such that $x\psi_{x}(t,x)\leq \alpha m\leq \alpha\psi(t,x)$ on $[0,T]\times[-h_{2}+\delta_{1}, h_{2}-\delta_{2}]$. This shows the claim is true. Now we define \begin{align*} s(t)=h_{2}\varsigma(t),~ \varsigma(t)=1-\frac{\delta}{2}-\frac{\delta}{2}e^{-\sigma t},~ \bar{v}(t,x)=ke^{-\sigma t}\psi(\xi(t), \eta(t,x)),~\forall(t,x)\in [0,\infty)\times [-s(t),s(t)] \end{align*} with \begin{align*} \xi(t)=\int_{0}^{t}\frac{1}{\varsigma^{2}(\theta)}d\theta, \quad \eta(t,x)=\frac{h_{2}}{s(t)}x=\frac{x}{\varsigma(t)}, \end{align*} where $k>0$, $\sigma>0$ and $0<\delta<1-\frac{h_{0}}{h_{2}}$ are constants to be determined later. Writing $\lambda_{1}:=\lambda_{1}(-(\tilde{L}_{(-h_{2}, h_{2})}+c(t)))$ for short, $\bar{v}(t,x)$ satisfies \begin{align*} &\bar{v}_{t}(t,x)-d_{2}[\tau\bar{v}_{xx}+(1-\tau)(\int_{-s(t)}^{s(t)}J(x-y)\bar{v}(t,y)dy-\bar{v}(t,x))] -\bar{v}(t,x)(c(t)-\bar{v}(t,x))\\[3pt] &=ke^{-\sigma t}\Big[-\sigma\psi(\xi,\eta) -\frac{\varsigma^{\prime}(t)}{\varsigma(t)}\eta\psi_{\eta}(\xi,\eta)\\[3pt] &\quad+d_{2}(1-\tau)\Big(\frac{1}{\varsigma^{2}(t)}\int_{-h_{2}}^{h_{2}}J(\eta-\tilde{\eta})\psi(\xi,\tilde{\eta})d\tilde{\eta} -\varsigma(t)\int_{-h_{2}}^{h_{2}}J(\varsigma(t)\eta-\varsigma(t)\tilde{\eta})\psi(\xi,\tilde{\eta})d\tilde{\eta}\Big)\\[3pt] &\quad+d_{2}(1-\tau)(1-\frac{1}{\varsigma^{2}(t)})\psi(\xi,\eta)+(\frac{1}{\varsigma^{2}(t)}c(\xi)-c(t))\psi(\xi,\eta) +\frac{1}{\varsigma^{2}(t)}\lambda_{1}\psi(\xi,\eta)+ke^{-\sigma t}\psi^{2}(\xi,\eta)\Big]\\[3pt] &\geq ke^{-\sigma t}\Big[\left(-\sigma-\sigma \alpha +d_{2}(1-\tau)(1-\frac{1}{\varsigma^{2}(t)})+\frac{1}{\varsigma^{2}(t)}\lambda_{1} +(\frac{1}{\varsigma^{2}(t)}c(\xi)-c(t))\right)\psi(\xi,\eta)\\[3pt] &\quad+d_{2}(1-\tau)\left(\frac{1}{\varsigma^{2}(t)}\int_{-h_{2}}^{h_{2}}J(\eta-\tilde{\eta})\psi(\xi,\tilde{\eta})d\tilde{\eta} -\varsigma(t)\int_{-h_{2}}^{h_{2}}J(\varsigma(t)\eta-\varsigma(t)\tilde{\eta})\psi(\xi,\tilde{\eta})d\tilde{\eta}\right)\Big]. 
\end{align*} $(i)$ If $\tau=1$, by the fact that $\varsigma(t)\rightarrow 1$ as $\delta\rightarrow 0$, we can choose $0<\sigma, \delta\ll 1$ such that, for $(t,x)\in[0,\infty)\times (-s(t),s(t))$, \begin{align*} &\bar{v}_{t}(t,x)-d_{2}[\tau\bar{v}_{xx}+(1-\tau)(\int_{-s(t)}^{s(t)}J(x-y)\bar{v}(t,y)dy-\bar{v}(t,x))] -\bar{v}(t,x)(c(t)-\bar{v}(t,x))\\[3pt] &\geq ke^{-\sigma t}\left(-\sigma-\sigma \alpha +\frac{1}{\varsigma^{2}(t)}\lambda_{1} +(\frac{1}{\varsigma^{2}(t)}c(\xi)-c(t))\right)\psi(\xi,\eta)\\[3pt] &>0. \end{align*} $(ii)$ If $J(x)$ is equal to a positive constant $K$ for $x\in [-2h_{2},2h_{2}]\subset [-2h_{0}-2\delta_{0}, 2h_{0}+2\delta_{0}]$, then \begin{align*} \begin{array}{rl} &\frac{1}{\varsigma^{2}(t)}\int_{-h_{2}}^{h_{2}}J(\eta-\tilde{\eta})\psi(\xi,\tilde{\eta})d\tilde{\eta} -\varsigma(t)\int_{-h_{2}}^{h_{2}}J(\varsigma(t)\eta-\varsigma(t)\tilde{\eta})\psi(\xi,\tilde{\eta})d\tilde{\eta}\\[5pt] &=\frac{1}{\varsigma^{2}(t)}\int_{-h_{2}}^{h_{2}}K\psi(\xi,\tilde{\eta})d\tilde{\eta} -\varsigma(t)\int_{-h_{2}}^{h_{2}}K\psi(\xi,\tilde{\eta})d\tilde{\eta}\\[5pt] &=K\left(\frac{1}{\varsigma^{2}(t)}-\varsigma(t)\right) \int_{-h_{2}}^{h_{2}}\psi(\xi,\tilde{\eta})d\tilde{\eta} >0. \end{array} \tag{4.6} \end{align*} By the fact that $\varsigma(t)\rightarrow 1$ as $\delta\rightarrow 0$, we can also choose $0<\sigma, \delta\ll 1$ such that \begin{align*} \bar{v}_{t}(t,x)-d_{2}[\tau\bar{v}_{xx}+(1-\tau)(\int_{-s(t)}^{s(t)}J(x-y)\bar{v}(t,y)dy-\bar{v}(t,x))] -\bar{v}(t,x)(c(t)-\bar{v}(t,x))> 0 \end{align*} for $(t,x)\in[0,\infty)\times (-s(t),s(t))$. Moreover, we choose $k$ large enough that $\bar{v}(0,x)\geq v_{0}(x)$ for $x\in [-h_{0},h_{0}]$. Since $s(t)<h_{2}<h_{1}$, we know $\bar{u}$ satisfies \begin{align*} \bar{u}_t\geq d_{1}\int_{-s(t)}^{s(t)} J(x-y)\bar{u}(t,y)dy-d_{1}\bar{u}+\bar{u}(a(t)-\bar{u}),\quad t>0, x\in (-s(t), s(t)). 
\end{align*} Note that \begin{align*} &-\bar{v}_{x}(t,s(t)) =-\frac{k}{\varsigma(t)}e^{-\sigma t}\psi_{\eta}(\xi(t),h_{2}) \leq \frac{k}{1-\delta}e^{-\sigma t}\|\psi\|_{C^{1}([0,T]\times [-h_{2},h_{2}])},\\[3pt] &\int_{-s(t)}^{s(t)}\int_{s(t)}^{\infty}J(x-y)\bar{v}(t,x)dydx \leq 2kh_{2}e^{-\sigma t},\\[3pt] &\int_{-s(t)}^{s(t)}\int_{s(t)}^{\infty}J(x-y)\bar{u}(t,x)dydx \leq 2Ch_{2}e^{-\frac{\lambda t}{2}}. \end{align*} Since $0<\sigma\ll 1$, we may further assume that $\sigma<\frac{\lambda}{2}$. Suppose that \begin{align*} 0<\mu+\rho_{1}+\rho_{2}\leq \frac{h_{2}\delta\sigma}{2A} \quad \mbox{with}~ A:=\max\left\{\frac{k}{1-\delta}\|\psi\|_{C^{1}([0,T]\times [-h_{2},h_{2}])}, 2kh_{2}, 2Ch_{2}\right\}. \end{align*} Then we have \begin{align*} s^{\prime}(t) &=\frac{1}{2}h_{2}\delta\sigma e^{-\sigma t} \geq \frac{k}{1-\delta}\mu e^{-\sigma t}\|\psi\|_{C^{1}([0,T]\times [-h_{2},h_{2}])} +2kh_{2}\rho_{2}e^{-\sigma t}+2Ch_{2}\rho_{1}e^{-\sigma t}\\[3pt] &\geq \frac{k}{1-\delta}\mu e^{-\sigma t}\|\psi\|_{C^{1}([0,T]\times [-h_{2},h_{2}])} +2kh_{2}\rho_{2}e^{-\sigma t}+2Ch_{2}\rho_{1}e^{-\frac{\lambda t}{2}}\\[3pt] &\geq -\mu\bar{v}_{x}(t,s(t))+\rho_{1}\int_{-s(t)}^{s(t)}\int_{s(t)}^{\infty}J(x-y)\bar{u}(t,x)dydx +\rho_{2}\int_{-s(t)}^{s(t)}\int_{s(t)}^{\infty}J(x-y)\bar{v}(t,x)dydx. \end{align*} Similarly, we can get \begin{align*} &-s^{\prime}(t)\\[3pt] &\leq -\mu\bar{v}_{x}(t, -s(t)) -\rho_{1}\int_{-s(t)}^{s(t)}\int_{-\infty}^{-s(t)}J(x-y)\bar{u}(t,x)dydx -\rho_{2}\int_{-s(t)}^{s(t)}\int_{-\infty}^{-s(t)}J(x-y)\bar{v}(t,x)dydx. \end{align*} This shows that $(\bar{u}, \bar{v}, -s(t), s(t))$ is an upper solution of (1.1). Applying Lemma 3.1, we have $h(t)\leq s(t)$ and $g(t)\geq -s(t)$, which implies that $h_{\infty}-g_{\infty}\leq 2h_{2}<+\infty$. \hfill $\Box$\\ \noindent\textbf{Remark 4.1.} To ensure (4.6) holds, we choose $s(t)=h_{2}\varsigma(t)=h_{2}(1-\frac{\delta}{2}-\frac{\delta}{2}e^{-\sigma t})$ with $\varsigma(t)<1$, which is slightly different from \cite{dl10,dgp13}. 
Here we only prove the vanishing result in these two cases; whether it still holds in the other situations is unknown, and we leave this for future research.\\ \noindent\textbf{Theorem 4.5.} Assume $a_{T}<d_{1}$. \\ $(i)$ If $h_{0}\geq\frac{1}{2}\min\{h^{*}, l^{*}\}$, then spreading always occurs;\\ $(ii)$ If $h_{0}<\frac{1}{2}\min\{h^{*}, l^{*}\}$, and one of the following conditions is satisfied: $(ii.1)$ $\tau=1$, $(ii.2)$ $J(x)$ is equal to a positive constant on $[-2h_{0}-2\delta_{0}, 2h_{0}+2\delta_{0}]$ for some small constant $\delta_{0}>0$,\\ then there exist $\Lambda^{*}>\Lambda_{*}>0$ such that $h_{\infty}-g_{\infty}<+\infty$ when $\mu+\rho_{1}+\rho_{2}\leq \Lambda_{*}$ and $h_{\infty}-g_{\infty}=+\infty$ when $\mu+\rho_{1}+\rho_{2}\geq \Lambda^{*}$.\\ \noindent\textbf{Proof.} $(1)$ If $h_{0}\geq \frac{1}{2}h^{*}$, then Corollary 4.2 implies that spreading always occurs. For the case $h_{0}\geq \frac{1}{2}l^{*}$, if vanishing happens, then $(g_{\infty},h_{\infty})$ is a finite interval and its length is strictly larger than $2h_{0}\geq l^{*}$. Thus, $\lambda_{1}(-(L_{(g_{\infty},h_{\infty})}+a(t)))<0$, which is a contradiction to (4.3). $(2)$ From (2.2), we can deduce that \begin{align*} &h'(t)>-\mu v_{x}(t, h(t)), \quad h'(t)>\rho_{1}\int_{g(t)}^{h(t)}\int_{h(t)}^{\infty}J(x-y)u(t,x)dydx, \quad t\geq0,\\[5pt] &g'(t)<-\mu v_{x}(t, g(t)), \quad g'(t)<-\rho_{1}\int_{g(t)}^{h(t)}\int_{-\infty}^{g(t)}J(x-y)u(t,x)dydx, \quad t\geq0. \end{align*} Since $u,v$ are positive and bounded, we know that $\int_{g(t)}^{h(t)}J(x-y)v(t,y)dy>0$ and there exists a constant $C>0$ such that $f_{1}\geq -Cu$, $f_{2}\geq -Cv$. For any given constant $H>\frac{1}{2}\min\{h^{*},l^{*}\}$, by Lemmas 5.1-5.2 in \cite{waw18}, there exist $\mu^{0}$ and $\rho_{1}^{0}$ such that $h_{\infty}-g_{\infty}\geq 2H$ for any $\mu\geq \mu^{0}$ or $\rho_{1}\geq \rho_{1}^{0}$. Taking $\Lambda^{0}=\mu^{0}+\rho_{1}^{0}$, we know $h_{\infty}-g_{\infty}=+\infty$ for $\mu+\rho_{1}\geq\Lambda^{0}$. 
Applying the continuity method, we can obtain the desired results. \hfill $\Box$\\ Combining Theorems 4.2-4.5 and Corollary 4.2, we immediately obtain the following criteria for spreading and vanishing.\\ \noindent\textbf{Corollary 4.3.} (Criteria for spreading and vanishing) Let $(u,v,g,h)$ be the unique solution of (1.1), and let $|\Omega|=h^{*}$ and $|\Omega|=l^{*}$ be the unique roots of $\lambda_{1}(-(\tilde{L}_{\Omega}+c(t)))=0$ and $\lambda_{1}(-(L_{\Omega}+a(t)))=0$, respectively. \\ $(i)$ If one of the following conditions is satisfied: $(i.1)$ $a_{T}\geq d_{1}$, $(i.2)$ $h_{0}>\frac{1}{2}h^{*}$, $(i.3)$ $a_{T}<d_{1}$ and $h_{0}>\frac{1}{2}l^{*}$,\\ then spreading happens.\\ $(ii)$ If $a_{T}<d_{1}$, $h_{0}<\frac{1}{2}\min\{h^{*},l^{*}\}$ and one of the following conditions is satisfied: $(ii.1)$ $\tau=1$, $(ii.2)$ $J(x)$ is equal to a positive constant on $[-2h_{0}-2\delta_{0}, 2h_{0}+2\delta_{0}]$ for some small constant $\delta_{0}>0$,\\ then there exist $\Lambda^{*}>\Lambda_{*}>0$ such that vanishing happens when $\mu+\rho_{1}+\rho_{2}\leq \Lambda_{*}$ and spreading happens when $\mu+\rho_{1}+\rho_{2}\geq \Lambda^{*}$. \section*{Acknowledgments} Chen's work was supported by NSFC (No:11801432). Li's work was supported by NSFC (No:11571057). Tang's work was supported by NSFC (No:61772017). Wang's work was supported by NSFC (No:11801429) and the Natural Science Basic Research Plan in Shaanxi Province of China (No:2019JQ-136). \bibliographystyle{model3-num-names}
\section{Introduction} In July 2016, the $\it {Juno}$ spacecraft will enter a bound orbit around Jupiter, and then complete $\sim 30$ further low-periapse orbits over a period of approximately one year. Measurements of the spacecraft's accelerations may reach a precision of $\sim 1 \phn \mu$gal \citep{kas10}, allowing determination of Jupiter's external gravitational potential, $V$, to a relative precision approaching $\sim 10^{-9}$. In roughly the same time frame, the $\it {Cassini}$ spacecraft will execute $\sim$ 22 low-periapse orbits around Saturn, making similar measurements of Saturn's external gravity potential. The nonspherical components of $V$ provide information about a planet's interior mass distribution. In this paper, we construct static interior models intended to represent the present state of Jupiter, using a pressure-density relation $P(\rho)$ derived from DFT-MD theory for the equation of state of the primary constituent of Jupiter and Saturn, a mixture of hydrogen and helium; see \citet{MH2013} and \citet{Militzer2013}. This barotrope is used to calculate the zonal harmonic coefficients $J_{2n}$, making various assumptions about the interior temperature distribution and core mass. Physically-motivated adjustments of the barotrope are made to achieve agreement with the observed $J_2$ (Table 1) and discrepancies with currently-observed higher $J_{2n}$ are discussed. Lines 2-11 of Table 1 give calculated values from interior models discussed in Section 4. To obtain a barotrope, we start with the grid of {\it ab initio} adiabats derived in \citet{Militzer2013} and \citet{MH2013}. These adiabats were determined with density functional molecular dynamics (DFT-MD) simulations using the Perdew-Burke-Ernzerhof (PBE) functional~\citep{PBE} in combination with a thermodynamic integration (TDI) technique to determine the full, nonideal entropy. 
The simulation cells contained a mixture of $N_{\rm He}=$18 helium and $N_{\rm H}=220$ hydrogen atoms, corresponding to a helium mass fraction of $Y$=0.245, close to the solar value. As discussed in \citet{MH2013}, each adiabat is characterized by the value of its absolute entropy per electron, $S/k_B/N_e$, where $k_B$ is Boltzmann's constant and $N_e$ is the number of electrons. Hereafter we denote this quantity with the simpler symbol $S$. Recently \citet{becker2014} constructed Jupiter models based on equations of state that were also derived with DFT-MD simulations, but their approach differs from ours in two respects. Becker {\it et al.} performed simulations for hydrogen and helium separately and then invoked the ideal mixing assumption, while we simulated an interacting hydrogen-helium mixture directly. And while we computed the full, nonideal entropy with TDI, Becker {\it et al.} obtained the entropy indirectly by fitting the internal energy and pressure, which are available in standard DFT-MD simulations. \citet{becker2014} reported deviations between 4 and 9\% when they compared their EOS with \citet{MH2013}. Such deviations could have significant repercussions for the zonal harmonic values of interior models. In this paper, we use the term ``entropy'' and the symbol $S$ as a proxy for an adiabatic temperature $T$ vs. pressure $P$ relation for the fixed-composition mixture of H and He only (He mass fraction $Y_0=0.245$), as determined by our detailed DFT-MD simulations. The simulations give the absolute entropy and other dependent variables as a function of $T$ and $P$, for this specific composition. As discussed in Section \ref{comppert} below, for the purpose of calculating general pressure-density relations, the same $T(P)$ relation is taken to apply to adiabats with small, constant perturbations to the composition of the simulations.
Moreover, the $S$ of the outermost layers of the model is determined by requiring a match to the Galileo Probe measurements of $T(P)$; see Figure~\ref{TvsP_gap}. The corresponding adiabat from our simulations has $S = 7.08$. Now, if we perturb this composition by changing $Y$ and increasing $Z$, how might the adiabatic $T(P)$ change, for $P \textgreater 20$ bar, and how might this affect the barotrope? Let the Gr{\"u}neisen parameter $\gamma = (\rho/T) (\partial T /\partial \rho)_S$, where $\rho$ is the mass density. Suppose we have a compositional perturbation to $Y$ and/or $Z$, of the order of $\sim 0.01$. This might lead to a perturbation $\Delta \gamma \sim 0.01$. Over a density range of three orders of magnitude, roughly spanning the jovian mantle, this value of $\Delta \gamma$ would imply a cumulative temperature change of $\sim 7\%$, with respect to the baseline $T(P)$. According to Mie-Gr{\"u}neisen theory \citep{ZT1978}, the thermal pressure makes up only 10\% of the total pressure in the relevant Jupiter layers. Therefore, we expect the fractional change in density to be on the order of $\sim 0.1 \times 0.01 = 0.001$. This amount is so small that it is unlikely to affect any of our model predictions. It is certainly smaller than the previously-mentioned 4 to 9\% discrepancy with \citet{becker2014}. Our {\it ab initio} calculations show that under jovian interior conditions, there is no distinct phase transition from molecular (diatomic, insulating) hydrogen to metallic (monatomic, conducting) hydrogen~\citep{Vo07}. However, for convenience in this paper, the term ``molecular'' layer means layers at pressures below 1 Mbar, where the hydrogen is mostly diatomic. Likewise, the term ``metallic'' layer means layers at pressures above $\sim 2$ Mbar, but still external to a central dense core.
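The cumulative temperature change estimated above follows from integrating the adiabatic relation $d\ln T=\gamma\, d\ln\rho$ at fixed $S$: a constant offset $\Delta\gamma$ accumulated over three decades in density gives \begin{align*} \frac{\Delta T}{T}\approx\Delta\gamma\,\ln\frac{\rho_{\rm max}}{\rho_{\rm min}}\approx 0.01\times\ln 10^{3}\approx 0.07, \end{align*} i.e. the quoted $\sim 7\%$ change with respect to the baseline $T(P)$.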
By combining our {\it ab initio} calculations for Jupiter's interior adiabat \citep{Militzer2013} with the {\it ab initio} hydrogen-helium immiscibility calculations by~\citet{Mo2013}, we predict that helium rain occurs in Jupiter's interior. While the detailed physics and dynamics of helium rain are not yet understood, we make the assumption that this process introduces a superadiabatic temperature gradient and a compositional difference between the outer, molecular layer and inner, metallic layer. In our models, the $T(P)$ of the molecular layer is set by the measurements of the Galileo entry probe, while the $T(P)$ of the metallic layer is a free parameter that we can adjust between two limits. The value of $S$ labeling $T(P)$ for the metallic layer cannot be too high because otherwise no helium rain would have occurred in Jupiter according to DFT-MD simulations. The value of $S$ labeling $T(P)$ cannot be below the Galileo value because, we assume, the cooling of the metallic layer is less efficient. The assumption of reduced cooling of the metallic layer is consistent with specific models constructed by \citet{nettelmann2015}, who studied the evolution of jovian interior temperature profiles under the influence of H/He demixing and layered double diffusive convection (see upper left-hand panel of Figure 10 of that paper). For the molecular layer, we assume the helium abundance that was measured by the Galileo probe \citep{vonzahn-jgr-98, mahaffy-jgr-2000}. We derive the helium content in the metallic layer by assuming that the planet as a whole has a protosolar helium abundance \citep{Lodders03}. The distribution of heavier elements throughout the planet is not well understood. The capture of comets has enriched the envelope over time. Similarly, the erosion of the core may have added icy and rocky materials to the envelope~\citep{WilsonMilitzer2012,WilsonMilitzer2012b,Wahl2013,SiO2}.
Given these uncertainties, we introduce three model parameters: the mass of today's dense core, the heavy element (``metals'') mass fraction in the molecular layer and that in the metallic layer. We assume that both layers are homogeneous and interpolate between both compositions to derive an estimate for the structure of the helium rain layer. Model predictions are not sensitive to details of this procedure because, in Jupiter, the interpolation layer between 1 and 2 Mbar contains very little mass. This article is organized as follows. In Section 2, we describe how we deal with hydrogen-helium immiscibility. In Section 3, we discuss how one perturbs the helium abundance in a particular EOS and how heavy elements are introduced. In Section 4, we discuss the EOS of different planetary ices and present results from additional {\it ab initio} simulations. In Section 5, we introduce our reference Jupiter model and discuss variations from it. Before we conclude, we describe in Section 6 how the moment of inertia is derived from CMS theory. In the Appendix, we provide additional details about the CMS calculations. \section{Adiabats and Hydrogen-Helium Immiscibility} \label{adimm} \begin{figure} \epsscale{1.0} \plotone{Jupiteradiabat61} \caption{General view of effect of immiscibility on Jupiter (and Saturn) evolution. The top two curves are DFT-MD adiabats with relatively high entropy per electron. The adiabat for $S \approx$ 7.20 osculates the boundary of the region of H-He immiscibility, while the adiabat just below it has $S =$ 7.08, which yields a temperature vs. pressure relation in the Jovian troposphere that matches corresponding data from the Galileo Probe \citep{seiff-1998}. The lowest adiabat has $S =$ 6.84, which yields a temperature vs. pressure relation that roughly matches Saturn's tropospheric profile \citep{Lindal-1985}. The pressure at Jupiter's core-mantle boundary (about 40 Mbar) is not shown on this figure. 
\label{phase_bdry2}} \end{figure} \begin{figure} \epsscale{1.0} \plotone{jupiter27b} \caption{Diagram showing the location of the hydrogen-helium immiscibility layer in Jupiter. \label{rain}} \end{figure} \begin{figure} \epsscale{1.0} \plotone{phase_bdry4} \caption{Temperature-pressure relations used in the models. DFT-MD adiabats are labeled with their entropy per electron $S = $ 7.24 (top) to 6.75 (bottom). The two middle (unlabeled) adiabats have $S =$ 7.20 and 7.13. The preferred temperature-pressure relation of this paper is shown as a heavier curve following the Galileo Probe adiabat to the immiscibility boundary \citep{Mo2013} shown with a dashed curve. At pressures higher than 2.7 Mbar we assume a higher-entropy adiabat with $S =$ 7.13 (heavier curve). \label{phase_bdry4}} \end{figure} Figure~\ref{phase_bdry2} shows a plot of temperature, $T$, versus pressure, $P$, for a family of such adiabats as well as the hydrogen-helium immiscibility domain derived from {\it ab initio} simulations by~\citet{Mo2013}. These simulations also used the DFT-MD technique in combination with the PBE functional and TDI method to compute the entropy. They are thus fully compatible with the adiabats from \citet{MH2013}. As is evident in Figure~\ref{phase_bdry2}, the interiors of both Jupiter and Saturn enter a region at a pressure $\sim$ 1 Mbar where helium in solar proportion to hydrogen becomes immiscible. Both planets are thus likely to have layers with helium rain. Figure~\ref{rain} depicts the location of this layer in Jupiter. This prediction is a direct consequence of combining the {\it ab initio} immiscibility and adiabat calculations with measurements of the planets' tropospheric $T$ vs. $P$ profiles \citep{seiff-1998, Lindal-1985}. No temperature or pressure adjustments of the immiscibility domain were needed. Heavy elements were not considered in this analysis. Depending on the concentration, a small correction to the adiabatic profile would be plausible.
We also note that \citet{Mo2013} performed the immiscibility calculations for $Y=0.25$, which differs slightly from the protosolar value. However, this concentration difference does not change the immiscibility temperature to a significant degree. Based on the analysis in~\citet{nettelmann2015}, we estimate this correction to be of the order of 160~K only. The hydrogen-helium immiscibility hypothesis was first invoked to explain Saturn's luminosity excess~\citep{Stevenson77a,Stevenson77b}. In the immiscibility layer, helium droplets would form and rain down into the deeper interior, resulting in a gradual removal of helium from the planet's outer layer. The associated release of gravitational energy provides an energy source to explain Saturn's luminosity excess. Whether helium rain occurs on Jupiter is less certain. Its interior is hotter, and no helium rain is needed to explain its present luminosity~\citep{FH04}. The Galileo entry probe measured a small helium depletion in Jupiter's upper atmosphere (0.234 by mass compared to 0.274, the protosolar value~\citep{Lodders03}). Perhaps the strongest evidence that helium rain occurs on this planet comes from the depletion of neon. The Galileo measurements showed that there is ten times less neon in Jupiter's atmosphere compared to solar values. \citet{WilsonMilitzer2010} demonstrated with {\it ab initio} simulations that neon has a strong preference for dissolving in the forming helium droplets. This offered an explanation for the neon depletion and provided strong, though indirect, evidence that helium rain occurs on Jupiter. According to the more recent {\it ab initio} calculations, present-day Jupiter would encounter the immiscibility domain at pressures above $\sim 0.9$ Mbar (Figure~\ref{phase_bdry2}). In Saturn, the domain is entered at $P \sim 0.8$ Mbar.
If a cooling scenario for Jupiter or Saturn involves a steady decrease of entropy with time, then the onset of helium rain would occur when the interior adiabat first touches the boundary of the helium immiscibility domain. The curvature of the boundary is such that an H-He adiabat with $S \approx$ 7.20 osculates the boundary at $P \sim 2$ Mbar and $T \sim 6600$ K. We are thus faced with the task of deriving a barotrope $P(\rho)$ for present-day Jupiter which is consistent with the properties of dense, hot hydrogen-helium mixtures shown in Figure~\ref{phase_bdry2} and with Jupiter's presumed cooling history. A detailed, dynamical calculation of the process of helium rain and subsequent evolution of Jupiter's interior temperature profile is beyond the scope of the present paper, whose aim is to infer a jovian barotrope based on current knowledge of Jupiter's composition and thermal state, and on current results from {\it ab initio} simulations of hydrogen-helium mixtures at high pressure. The resulting barotrope is used here to predict Jupiter's higher zonal harmonic coefficients, whose values are to be measured by {\it Juno}. Thus, we make the simplifying assumption that the cooling of early Jupiter to an interior adiabat with $S \approx$ 7.20, corresponding to the onset of immiscibility, then leads to reduced heat transport in the region around $P \ge 2$ Mbar, effectively slowing interior temperature decline, while layers at lower pressures continue to transfer heat to Jupiter's atmosphere. In this scenario, the present-day Jupiter barotrope for pressures $\le 1$ Mbar lies on the Galileo Probe adiabat with reduced He abundance, but at somewhat higher pressures, temperatures follow a higher-entropy adiabat with a slightly-above protosolar helium abundance, $Y=0.28$. The interior adiabat would be expected to lie between $S \approx$ 7.20 (for no heat transport across the immiscibility region) and $S \approx$ 7.08 (for efficient heat transport across the immiscibility region).
In the study that we present here, our preferred model has an interior adiabat with $S =$ 7.13 (shown as a heavy line in Figure~\ref{phase_bdry4}). We refer to this model as Model DFT-MD 7.13; its parameters are shown in boldface in Table 1. The $P(\rho)$ barotrope for Model DFT-MD 7.13 is shown in Figure~\ref{Pvsrho_gap}. The corresponding $T(P)$ profile is shown in Figure~\ref{TvsP_gap}. \begin{figure} \epsscale{1.0} \plotone{Pvsrho_gap} \caption{The initial approximation for the present-Jupiter barotrope; the abscissa is $\rho_0(P)$. The gap corresponds to the region between the two plus symbols in Figure~\ref{phase_bdry4}. To the left of the gap the entropy is $S =$ 7.08, while to the right $S =$ 7.13. Both adiabats are for constant $Y_0$=0.245. Since we do not have DFT-MD simulation data at very low densities, we switch back to the SC model below 0.0670$\,$g$\,$cm$^{-3}$, where a small (and unimportant) density discontinuity $\sim 2\%$ can be seen. \label{Pvsrho_gap}} \end{figure} \begin{figure} \epsscale{1.0} \plotone{TvsP_gap} \caption{The $T$ vs. $P$ relation for the two adiabats shown in Figure~\ref{Pvsrho_gap}. Thick curve up to $P = 22$ bar shows Galileo Probe measurements. The $S =$ 7.08 adiabat's $T$ vs. $P$ relation matches Galileo Probe data. \label{TvsP_gap}} \end{figure} \section{Compositional Perturbations to Equation of State} \label{comppert} In order to derive general barotropes, we must now evaluate the effects of (1) varying He concentration, and (2) varying metallicity. The barotrope shown in Figures~\ref{Pvsrho_gap} and \ref{TvsP_gap} corresponds to an initial He mass fraction $Y_0=0.245$ and metals mass fraction $Z_0=0$. Since this composition is a good initial approximation to the Jupiter envelope, we use a perturbation approach to derive the effects of compositional changes. Let the reference barotrope for $Y_0=0.245$ and $Z_0=0$ be $\rho_0(P)$.
Although this barotrope is computed with detailed DFT-MD simulations not assuming an ideal mixture of H and He, to simplify this derivation we approximate it by an additive volume law, $V_{\rm H-He}(P,T)=V_{\rm H}(P,T)+V_{\rm He}(P,T)$, valid for a noninteracting mixture: \begin{equation} \label{add_vol0} {1 \over {\rho_0}} = {X_0 \over {\rho_H}} + {Y_0 \over {\rho_{He}}}, \end{equation} where $X_0 = 1 - Y_0 =$ 0.755. We now want to change the abundance of helium to $Y$ and metals to $Z$. We assume that the temperature-pressure relation $T(P)$ is unchanged under perturbations to the composition (i.e., the perturbing admixture is chemically and thermodynamically inert). With this assumption and the additive volumes approximation, $V_{\rm H-He-Z}=V_{\rm H}+V_{\rm He}+V_Z$, the perturbed density is given by \begin{equation} \label{add_vol1} {1 \over {\rho}} = {{1 - Y - Z} \over {\rho_H}} + {Y \over {\rho_{He}}} + {Z \over {\rho_{Z}}}, \end{equation} where $V_Z$ and $\rho_{Z}$ are the volume and density of the metals component. Rewriting Equations (\ref{add_vol0}) and (\ref{add_vol1}), we find \begin{equation} \label{newrho} {{\rho_0} \over {\rho}} = {{1-Y-Z} \over {1-Y_0}} + {{ZY_0+Y-Y_0} \over {1-Y_0}}{{\rho_0} \over {\rho_{He}}} + Z{{\rho_0} \over {\rho_{Z}}}, \end{equation} with all densities evaluated for the reference $T(P)$. The same equation is obtained if one starts from a fully interacting hydrogen-helium equation of state and then perturbs the helium and metals abundances. For the composition in Jupiter's outer layers, at $P \phn \textless \phn 1$ Mbar, we adopt abundances from Galileo Probe measurements \citep{Wong2004}. In this region the main contributors to $Z$ are the molecules ${\rm CH_4}$ and ${\rm NH_3}$, and for the ${\rm H_2O}$ abundance we adopt the largest value measured by the probe (rather than assuming a solar abundance for H$_2$O).
Neglecting other metals, we obtain $X =$ 0.7498, $Y =$ 0.2333, $Z \approx$ 0.0169, for the presumed jovian composition at layers with $P \phn \textless \phn$ 1 Mbar. Using a DFT-MD equation of state for pure He (Militzer 2008) along the $T(P)$ shown in Figure~\ref{TvsP_gap}, we obtain the density-pressure relation shown in Figure~\ref{dratioHe}. \begin{figure} \epsscale{1.0} \plotone{density_ratio_He} \caption{ Results for ${{\rho_0} / {\rho_{He}}}$ , evaluated along a Jupiter barotrope, to be inserted in Equation (\ref{newrho}). } \label{dratioHe} \end{figure} \section{Equation of State of H$_2$O, CH$_4$, and NH$_3$} \label{hydrides} Evaluation of the perturbation term $\rho_0 / \rho_Z$ in Eq.(\ref{newrho}) is somewhat more complex because of the presence of multiple molecular species, but need not be highly precise because the contribution of this term is comparatively small. We continue to assume, for pressures above and below the He-immiscibility gap, that the main contributors to the $Z$ mass fraction are the molecules ${\rm H_2O}$, ${\rm CH_4}$, and ${\rm NH_3}$, in solar proportions. Thus, to evaluate $\rho_0 / \rho_Z$, it is necessary to evaluate the density change of these molecular entities along the jovian $T(P)$. We thus performed a number of DFT-MD simulations of ${\rm H_2O}$, ${\rm CH_4}$, and ${\rm NH_3}$ under such conditions. All simulations were performed with the VASP code~\citep{vasp1} using the PBE functional. Pseudopotentials of the projector-augmented wave type~\citep{PAW} and a plane wave basis set cutoff of 1100~eV were employed. The zone-average point, $k=(\frac{1}{4},\frac{1}{4},\frac{1}{4})$, was used to sample the Brillouin zone. A time step of 0.2 fs was used. Density, temperature, and composition were prescribed in the simulations. After an initial equilibration period, the pressure was derived by averaging over the MD simulation. 
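The MD settings just described can be summarized in input-file form. The fragments below are an illustrative sketch only: the tags shown for the cutoff, MD mode, and time step correspond to the values quoted above, while the number of MD steps and the target temperature are placeholders we supply here, not values given in the text.

```
! INCAR -- illustrative sketch of the MD settings described above
ENCUT  = 1100    ! plane wave basis set cutoff (eV)
IBRION = 0       ! molecular dynamics
POTIM  = 0.2     ! MD time step (fs)
NSW    = 10000   ! number of MD steps (placeholder; not stated in the text)
TEBEG  = 5000    ! temperature in K (placeholder; varies per state point)

# KPOINTS -- the single zone-average point k = (1/4, 1/4, 1/4)
zone-average k-point
1
Reciprocal
0.25 0.25 0.25 1.0
```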
We first benchmarked our simulations by comparing our results with the shock wave measurements by~\citet{Nellis97} that compressed a mixture of water, ammonia, and isopropanol (C$_3$H$_8$O) to 200 GPa. This mixture, labeled ``synthetic Uranus'', was designed to resemble the different planetary ices in the outer solar system. The concentrations of the heavy nuclei (C:O=0.529, N:O=0.162) indeed closely resemble solar proportions. However, the mixture is somewhat depleted in hydrogen (H:O=3.54), while one would expect an H:O ratio of 4.60 if one mixes H$_2$O, CH$_4$, and NH$_3$ in the O:C:N proportions that were used in the experiments. This difference prompted us to perform two sets of simulations. First we studied a hydrogen-depleted mixture, H:O:C:N=87:25:13:4, that closely resembles the ``synthetic Uranus'' mixture within the size constraints of typical simulations that accommodate between 100 and 200 atoms. Our simulation results in Table~\ref{ice_table} show excellent agreement with the experimental findings. It should be noted that if we prescribe the central values of density and temperature that were measured in the experiments, then our computed pressures are, respectively, slightly higher and slightly lower than those reported in the experiments. However, if we adjust the density and temperature in our simulations within the experimental 1 $\sigma$ uncertainties, then our computed pressures fall within the experimental error bars of the two available measurements. This provides another example of DFT-MD simulations that closely reproduce experimental findings~\citep{Knudson2012}. In Table~\ref{ice_table}, we also report results from simulations of an H:O:C:N=99:21:12:3 mixture that exactly represents the hydrogen content of a solar H$_2$O, CH$_4$, and NH$_3$ mixture. Because of the higher hydrogen content, the density is lower than that of ``synthetic Uranus'' when compared for the same $P$ and $T$.
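The hydrogen bookkeeping behind these two mixtures is easy to verify. The short script below is a sanity check we add here (not part of the original analysis): it counts the hydrogen atoms of a fully hydrogenated H$_2$O/CH$_4$/NH$_3$ mixture, reproducing the quoted H:O ratio of 4.60 for the experimental O:C:N proportions and confirming that the 99:21:12:3 cell is exactly stoichiometric.

```python
# Hydrogen content of an H2O + CH4 + NH3 mixture: each O carries 2 H,
# each C carries 4 H, and each N carries 3 H when fully hydrogenated.
def h_per_o(c_to_o, n_to_o):
    """H:O ratio of a fully hydrogenated H2O/CH4/NH3 mixture."""
    return 2.0 + 4.0 * c_to_o + 3.0 * n_to_o

# "Synthetic Uranus" heavy-nuclei proportions (C:O = 0.529, N:O = 0.162):
# a fully hydrogenated mixture would have H:O = 4.60, whereas the
# experimental mixture had only H:O = 3.54 (hydrogen depleted).
print(round(h_per_o(0.529, 0.162), 2))  # 4.6

# The 99:21:12:3 simulation cell: 2*21 + 4*12 + 3*3 = 99 hydrogen atoms,
# so it represents the solar mixture's hydrogen content exactly.
print(2 * 21 + 4 * 12 + 3 * 3)  # 99
```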
The simulation results were incorporated into Figures~\ref{jupiter_hydrides} and \ref{jupiter_hydrides_hi}. Figure~\ref{jupiter_hydrides} shows calculations used to estimate $\rho_0 / \rho_Z$. In the low-pressure region of this figure, pressure-density values for ${\rm CH_4}$, ${\rm NH_3}$, and ${\rm H_2O}$ are combined assuming ideal mixing. In the lower left-hand part of this figure, orange dots show the ideal-gas partial pressure of an ideal mixture of the three molecules along the jovian $T(P)$, with virial corrections up to $P \sim 2$ kbar. Dash-dot curves at the top of the figure show zero-temperature $\rho(P)$ relations calculated from quantum-statistical models and tabulated in \citet{ZT1978}. The orange dashed curve shows the resulting zero-temperature $\rho(P)$ relation for a solar mixture of the three molecules. Dots in the upper right-hand corner of this figure show finite-temperature calculations for pressures greater than a megabar; Figure~\ref{jupiter_hydrides_hi} shows a zoom of this region, along with experimental data points for ``synthetic Uranus'' \citep{Nellis97}. To construct $\rho_Z(P)$ in the gap between low pressure and high pressure, we perform a linear interpolation in log-log space as indicated in Figure~\ref{jupiter_hydrides}. \begin{figure} \epsscale{1.0} \plotone{Jupiter_hydrides8} \caption{ Procedure for determining the compression of a solar-proportions mixture of ${\rm CH_4}$, ${\rm NH_3}$, and ${\rm H_2O}$ (the three most important jovian hydrides) along a jovian $T(P)$ curve. This relation is used to determine $\rho_Z(P)$. Van der Waals corrections for the three hydrides are computed using data from \citet{Weast72}. } \label{jupiter_hydrides} \end{figure} \begin{figure} \epsscale{1.0} \plotone{Jupiter_hydrides8hi} \caption{ Expanded view of the high-pressure region of Figure~\ref{jupiter_hydrides}.
Brown triangles show results of our DFT-MD simulations for a solar-proportions mixture of ${\rm CH_4}$, ${\rm NH_3}$, and ${\rm H_2O}$ at four points on the jovian $T(P)$ curve. These results overlap with results for a simple Mie-Gr{\"u}neisen thermal perturbation (with a Gr{\"u}neisen $\gamma = 1$) plus zero-temperature pressure, smaller red dots. Squares show double-shock compression points from Livermore gas gun experiments on ``synthetic Uranus'' carried out by \citet{Nellis97}. A temperature $T=4100 \pm 300$ K was measured for the data point at 1.1 Mbar, plotted as a yellow square. A separate DFT-MD simulation agrees with this data point to within the error bars, but is not used to calibrate our $\rho_Z(P)$ curve.} \label{jupiter_hydrides_hi} \end{figure} \begin{figure} \epsscale{1.0} \plotone{fig9-10merged} \caption{ Results for ${{\rho_0} / {\rho_{Z}}}$, evaluated along a Jupiter barotrope. The dashed curve shows results for low pressures, spanning Jupiter's ``molecular'' layer (corresponding to the lower pressure axis). The solid curve (corresponding to the upper pressure axis) shows results for pressures up to the core-mantle boundary and slightly different composition, spanning Jupiter's ``metallic'' layer. These relations are inserted into Equation (\ref{newrho}); see Section \ref{hydrides} for details.} \label{density_ratios_combined} \end{figure} Figure~\ref{density_ratios_combined} shows $\rho_0 / \rho_Z$ (dashed curve) for the assumed Galileo Probe composition (for pressures below 2 Mbar). As a hypothesis to be tested by our preliminary Jupiter model, we assume that all jovian layers at pressures less than $\sim 1$ Mbar have the composition measured by the Galileo Probe, with a corresponding correction to the density given by Equation (\ref{newrho}). In this pressure range, we find from Figure~\ref{dratioHe} and Figure~\ref{density_ratios_combined} that $\rho_0 / \rho_{He} \approx$ 0.48 and $\rho_0 / \rho_Z \approx$ 0.38, leading to $\rho_0 / \rho =$ 0.995.
The latter number is fortuitously close to unity because the slightly lower Galileo Probe He abundance (relative to the DFT-MD simulations) is almost compensated by the presence of metals. Note that ${\rm H_2O}$ is depleted relative to ${\rm CH_4}$ and ${\rm NH_3}$ in the Galileo Probe data. That is, Galileo Probe data show ${\rm H_2O}$ approaching a solar ratio to hydrogen-helium, while ${\rm CH_4}$ and ${\rm NH_3}$ are approximately three times their solar ratio to hydrogen-helium. In contrast, for solar proportions of ${\rm CH_4}$:${\rm NH_3}$:${\rm H_2O}$ and for $P \phn \textgreater \phn$ 1 Mbar, we have $\rho_0 / \rho_Z \phn \textgreater \phn$ 0.38, and $\rho_0 / \rho_Z \approx$ 0.42 through the bulk of the jovian envelope (Figure~\ref{density_ratios_combined}, solid curve). For layers at pressures greater than 2.7 Mbar, we take the He and metals abundances to be slightly higher than the protosolar values $Y =$ 0.2741 and $Z =$ 0.0149 \citep{Lodders03}. Our DFT-MD equation of state, combined with the constraints of Jupiter's total mass, volume, and $J_{2}$, and any reasonable interior temperature distribution, does not imply a large increase of $Z$ above its protosolar value, for otherwise the densities would be too large. Assuming $Z =$ 0.0246 in the deeper layers and taking into account a slight He enrichment caused by depletion in Jupiter's outer layers, we get $Y =$ 0.2788. The assumed value of $Z$ corresponds to abundances of ${\rm CH_4}$ and ${\rm NH_3}$, relative to H, that are 4 times protosolar. Because of the much larger protosolar value of ${\rm H_2O}$ relative to H, a similar 4 times enhancement of this molecule leads to a larger $Z$ and hence larger interior densities, and the resulting models would be outside the acceptable range. We get $Z =$ 0.0246 if we take the enhancement of ${\rm H_2O}$ to be 2.4 times protosolar. We insert this value in Equation (\ref{newrho}) for the presumed jovian composition at layers with $P \textgreater$ 2 Mbar.
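As a numerical cross-check (ours, not from the original analysis), inserting the molecular-layer composition from Section 3 and the metallic-layer composition above into Equation (\ref{newrho}), together with the density ratios read off Figures~\ref{dratioHe} and \ref{density_ratios_combined}, reproduces the values of $\rho_0/\rho$ quoted in this section:

```python
# Cross-check of Equation "newrho": rho0/rho for a perturbed composition
# (Y, Z), given the reference Y0 and the density ratios rho0/rho_He and
# rho0/rho_Z read off the figures.
def rho0_over_rho(Y, Z, r_He, r_Z, Y0=0.245):
    return (1 - Y - Z) / (1 - Y0) + (Z * Y0 + Y - Y0) / (1 - Y0) * r_He + Z * r_Z

# Molecular layer (Galileo Probe composition, P < 1 Mbar):
mol = rho0_over_rho(Y=0.2333, Z=0.0169, r_He=0.48, r_Z=0.38)
print(round(mol, 3))  # 0.995

# Metallic layer (P > 2 Mbar, slightly super-protosolar Y and Z):
met = rho0_over_rho(Y=0.2788, Z=0.0246, r_He=0.49, r_Z=0.42)
print(round(met, 3))  # 0.959
```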
Then, over a pressure range corresponding to the bulk of the jovian envelope, $2 \textless P \textless 40$ Mbar, we find from Figure~\ref{dratioHe} and Figure~\ref{density_ratios_combined} that $\rho_0 / \rho_{He} \approx$ 0.49 and $\rho_0 / \rho_Z \approx$ 0.42, leading to $\rho_0 / \rho =$ 0.959. As is obvious from these results, and has long been known, the presence of metals in Jupiter only affects the barotrope $\rho(P)$ at the level of a few percent. Thus, modeling the abundance and distribution of metals in Jupiter by matching the planet's gravity data necessarily requires very accurate (better than 1\%) knowledge of $\rho_0(P)$. \section{Jupiter Models} \subsection{Spheroid Parameters and Code Function} The version of the Concentric Maclaurin Spheroid (CMS) code that we use is designed to automatically calculate a mass distribution with a total mass equal to Jupiter's mass, $M_J = 1.8986 \times 10^{30}$ g, and an equatorial radius $a = 71492$ km. The latter is the observed equatorial radius of a layer at an average pressure of 1 bar, and the tabulated $J_{2n}$ are normalized to this radius. We assume that Jupiter rotates as a solid body with period \citep{Seid2007} $P_{\rm rot} = 9^{\rm h} 55^{\rm m} 29.7^{\rm s} = 2 \pi / \omega$. CMS theory is constructed to find a rotationally-distorted model for the dimensionless small parameter \begin{equation} \label{qdef} q = {{\omega^2 a^3} \over {G M_J}}, \end{equation} which to lowest order in $\omega^2$ is equivalent to $m$, see Equation (\ref{mdef}), but is more convenient as it can be directly computed from observed quantities. Models are calculated with $N+1=511$ spheroids. Using the notation of \citet{Hu2013}, the dimensionless equatorial radii of the spheroids $\lambda_i, i=0,...,N$ are specified as follows. By definition $\lambda_0 \equiv$ 1 for the outermost spheroid (its equatorial radius $\equiv a$).
The innermost spheroid surface is placed at $\lambda_{N} = 0.15$ (i.e., the core's equatorial radius $= 0.15 \phn a$). The choice of core radius is somewhat arbitrary: the external zonal harmonic coefficients are sensitive to the total core mass but insensitive to its density. Models have 170 spheroids equally spaced in $\lambda$ in the range $0.15 \le \lambda \le 0.5$ and another 339 spheroids in the range $0.5 \le \lambda \le 1 - \delta \lambda/2$ (where in this region the spacing is $\delta \lambda = 0.001477$, or 105.6 km). The outermost spheroid ($\lambda_0$) has zero density and is spaced $\delta \lambda/2$ (or 53 km) above the next spheroid ($\lambda_1$). We verified that zonal gravitational harmonic results were unaffected by the details of the spheroid spacing by carrying out subsidiary calculations with spheroids equally spaced from core to surface. As shown in Figure~\ref{wt_fns} for a typical Jupiter interior model, spheroids interior to $\lambda \approx 0.5$ make no significant contribution to the $J_n$. Therefore we chose a closer spacing of spheroids exterior to $\lambda = 0.5$, to improve accuracy. As outlined in \citet{Hu2013}, two nested iterations are required to obtain a converged rotationally-distorted model fitted to a given barotrope $P(\rho)$. Before the iterations begin, a provisional density distribution is specified, with each $i^{\rm th}$ spheroid having a constant density $\rho_i$. For the specified $q$, the shape and total potential of each $i^{\rm th}$ spheroid is then iteratively calculated until relative changes between iterations fall below a specified tolerance, usually $\sim 10^{-13}$. Typically, this requires $\sim 30$ iterations. After satisfactory convergence, the total mass of the configuration $M_{\rm conf}$ is obtained by summing over all spheroids. An outer iteration loop (typically $\sim 50$ iterations) is performed to converge the model to the specified barotrope $P(\rho)$. 
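The radial grid just described can be sketched explicitly. The endpoint bookkeeping of the published CMS code may differ slightly from the version below, so this is only a check of the quoted spacing, not a reproduction of the code.

```python
# Sketch of the CMS radial grid described above: 170 spheroids equally spaced
# in lambda over [0.15, 0.5], another 339 over [0.5, 1 - dlam/2], and the
# zero-density outermost spheroid at lambda_0 = 1.  Endpoint conventions are
# an assumption; only the spacing is checked against the quoted numbers.
a_km = 71492.0                               # 1-bar equatorial radius

n_up = 339
dlam = 0.5 / (n_up - 1 + 0.5)                # so that 0.5 + (n_up-1)*dlam = 1 - dlam/2
upper = [0.5 + j * dlam for j in range(n_up)]
lower = [0.15 + j * (0.5 - 0.15) / 169 for j in range(170)]
lam = [1.0] + upper[::-1] + lower[::-1][1:]  # lambda_0 = 1 down toward the core

print(round(dlam, 6), round(dlam * a_km, 1))  # 0.001477 and 105.6 km
```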
As described by \citet{Hu2013}, using the $\rho_i$ and equipotential shapes from the converged inner loop, the average pressure $P_i$ between the upper and lower surface of each spheroid is calculated. Then using the $P_i$, the barotrope relation is solved for each spheroid to obtain new density values, $\rho_i = \rho(P_i)$. The core spheroid is not included in this procedure as it is assumed to be an incompressible high-density region. See Section~\ref{improvement} for details on the convergence of the iterations. Define the renormalization constant $\beta = M_J / M_{\rm conf}$. After the latest outer iteration, we renormalize all the $\rho_i$ by multiplying each value (including the core) by the factor $\beta$. These new $\rho_i$ are then passed to the inner iteration loop, where the spheroid shapes and corresponding external $J_{2n}$ are computed, and then $M_{\rm conf}$ (which depends on the spheroid shapes) is computed. The resulting configuration is then passed back to the outer loop. The final result of the two iteration loops is a model with converged $J_{2n}$, a mass and rescaled density of the incompressible core, and spheroids $i=0,...,509$ fitted to the {\it scaled} prescribed barotrope $P = P(\beta \rho)$. The model conforms precisely to the prescribed values of $q$, $a$, and $M_J$. The scaled barotrope $P = P(\beta \rho)$ corresponding to this model is convenient for comparing with barotropes for various values of $Y$ and $Z$, e.g. of the form of Equation (\ref{newrho}), in which the initial DFT-MD simulations for $\rho_0(P)$ are rescaled by a (roughly constant) factor to account for new values of $Y$ and $Z$. Values of $\beta$ for each model are used to obtain results for the model's metals content $Z$, as entered in Table 1. 
Introduction of the renormalization constant $\beta$ provides a convenient method for efficiently exploring the parameter space of jovian models, because, as discussed in Section~\ref{hydrides}, to first approximation the density $\rho$ of a perturbed mixture of H, He, and metals is related to the reference-mixture density $\rho_0$ by the ratio $\rho_0 / \rho$, which is nearly constant over a broad range of pressures. Thus if $\beta \textless 1$, the overall metals content of the model is reduced with respect to the assumed starting barotrope, and {\it vice versa}. \subsection{Parameters of Barotrope and Core} For the reader's assistance, Table 3 briefly defines a number of relevant parameters. As discussed by \citet{MHVTB}, it is difficult to fit the pre-{\it Juno} values of Jupiter's $J_{2n}$, especially $J_4$, with a constant-entropy, constant-composition barotrope and uniform rotation. Although the H-He DFT-MD equation of state has been updated since 2008, see \citet{MH2013} and \citet{Militzer2013}, the difficulty remains. For comparison purposes, we include at the end of Table 1 two interior models (denoted as SC) that we computed using the same CMS procedure as the other models, but with the older equation of state of \citet{SC95}. These SC models are able to match the pre-{\it Juno} $J_{2}$ and $J_4$ with vanishingly-small cores and tens of Earth masses of metals in the envelope (see Table 1). Why are our DFT-MD models so different? Although central temperatures for DFT-MD and SC models are similar (see Table 1), it turns out that mid-envelope temperatures for adiabatic DFT-MD models are considerably cooler. This behavior is a consequence of the depression of the adiabatic temperature gradient associated with hydrogen metallization, as discussed by \citet{MH2013} and \citet{Militzer2013}. Such behavior is not exhibited by the SC EOS and may not be incorporated in the other recent Jupiter models. 
Cooler temperatures, as well as revisions to the pressure-density relation, result in somewhat higher mass densities in the middle envelope, with respect to the other models. It is this effect, in our models, that is primarily responsible for considerably reduced envelope metallicity, larger core mass, and increased $|J_4|$. In 2008 we attempted to reduce the absolute value of $J_4$ by hypothesizing a subrotating layer below Jupiter's observable atmosphere, but this assumption is not supported by any realistic circulation model. It is possible to obtain a model which fits the pre-{\it Juno} value of $J_4$ by instead introducing a chemical change and corresponding extra density increase at layers around $P \phn \sim \phn 1$ Mbar, but such models are not grounded in any fundamental calculations of the thermodynamics of dense hydrogen plus impurities, and are inconsistent with reasonable barotropes. In this paper we take a different approach. We use the \citet{Mo2013} prediction for the pressure-temperature conditions of H/He immiscibility. Then we assume helium rain also introduces a composition change. As discussed in Section 3, for $P \textless 1$ Mbar, we have $\rho_0 / \rho =$ 0.995, while for $P \textgreater 2.7$ Mbar, if one has four times solar (primordial) abundances of ${\rm CH_4}$ and ${\rm NH_3}$, and $\sim 2.4$ times solar ${\rm H_2O}$, and no other metals, as the composition at depth, one would have $\rho_0 / \rho =$ 0.959. These numbers suggest an expected extra $\sim 4\%$ density change resulting from the presence of a phase-separation region and an increase of metallicity and helium to approximately proto-solar values at deeper layers. As we discuss in more detail below, we need a much larger extra density change ($\sim 8\%$) to obtain a DFT-MD model with $|J_4|$ reduced enough to agree with the pre-{\it Juno} value. 
To treat the expected extra density change, in the pressure range between 1 and 2 Mbar we interpolate linearly in $\log P$ and $\log \rho$ between the low-pressure barotrope with $\rho = \rho_0(P) / 0.995$ and the high-pressure barotrope with $\rho = \rho_0(P) / 0.959$, noting that $\rho_0(P)$ at $P \phn \textgreater \phn$ 2.7 Mbar lies on a higher-entropy adiabat than the atmospheric adiabat. Results for gravitational harmonic coefficients of models are insensitive to the thickness of this narrow interpolation region. A CMS boundary could of course be placed at a discrete location to exactly treat an actual density discontinuity, but the resulting change to the gravitational harmonic coefficients would be negligible. The models presented in this paper are intended to correspond closely to the theoretical behavior of hydrogen-helium mixtures and to properties of the outer jovian layers as constrained by the Galileo Probe. The CMS method generates models that exactly fit the total jovian mass and 1-bar equatorial radius. We adjust the density (and thus the mass) of the schematic central core of all models to obtain a match to the pre-{\it Juno} observed value of $J_2$ given in Table 1, in the expectation that a more precise post-{\it Juno} value will not differ significantly from this number. The other parameter besides the core mass that is poorly constrained is the entropy of the deep adiabat, which we vary from the Galileo Probe value $S =$ 7.08 through the value that osculates the immiscibility boundary, $S =$ 7.20, on up to (as an extreme case) $S =$ 7.24. With increasing $S$, the thermal contribution to the deep pressure increases, yielding lower density for a given pressure, thus accommodating a slight increase in metallicity $Z$. 
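Written out, the interpolation step takes the following form. The power-law $\rho_0(P)$ below is a placeholder for the actual DFT-MD table (the exponent is arbitrary), so only the construction, not the numbers, is meaningful; the divisors 0.995 and 0.959 are the ones quoted above.

```python
import math

# Linear interpolation in (log P, log rho) across the He-immiscibility
# window 1-2 Mbar, connecting the molecular branch rho0(P)/0.995 to the
# metallic branch rho0(P)/0.959.  rho0 is a placeholder power law, not the
# actual DFT-MD table.
def rho0(P_Mbar):
    return 1.0 * P_Mbar**0.55              # illustrative only

P1, P2 = 1.0, 2.0                          # Mbar, interpolation window

def rho(P_Mbar):
    if P_Mbar <= P1:
        return rho0(P_Mbar) / 0.995
    if P_Mbar >= P2:
        return rho0(P_Mbar) / 0.959
    t = (math.log(P_Mbar) - math.log(P1)) / (math.log(P2) - math.log(P1))
    lo = math.log(rho0(P1) / 0.995)        # endpoint on the molecular branch
    hi = math.log(rho0(P2) / 0.959)        # endpoint on the metallic branch
    return math.exp((1.0 - t) * lo + t * hi)

# continuity at the window edges
print(rho(1.0), rho(2.0))
```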
As we see from Table 1, the predicted higher-order gravitational harmonic coefficients vary from one model to the next at the level of $\sim 10^{-5}$ for $J_4$ (readily measurable by {\it Juno}), to $\sim 10^{-6}$ for $J_6$, to $\sim 10^{-8}$ for $J_8$. The $J_{10}$ values appear to be less diagnostic of interior structure, but their near-constancy at a total level of $\sim 10^{-7}$ may be useful as a reference for discerning the signature of nonhydrostatic effects at a similar level, such as deep interior dynamics \citep{kas10}. By increasing the density by an additional amount in the vicinity of the He-immiscibility zone, it is possible to obtain a match to Jupiter's pre-{\it Juno} $J_2$ and $J_4$ with a suitable model. But, as noted by \citet{MHVTB}, one does not have free rein in this process because Jupiter's barotrope must correspond to a physically-plausible composition. Because the DFT-MD barotrope is generally denser than the corresponding barotrope that one would compute using the theory of \citet{SC95}, in our DFT-MD models very little enhancement of metals can be tolerated in Jupiter's envelope. Most of our models are calculated using (for $P \textless 1$ Mbar) the barotrope $\rho = \rho_0(P,S = 7.08) / 0.995$, corresponding to the Galileo Probe $T(P)$ and abundances, and the barotrope $\rho = \rho_0(P,S) / 0.959$ for $P \textgreater 2.7$ Mbar, corresponding to an adiabat with entropy $S \textgreater 7.08$, (enhanced) protosolar helium abundance $Y=0.28$, and $Z=0.025$, corresponding to Galileo-Probe enhancement of methane and ammonia and a lesser enhancement of water, but no presence of denser species such as magnesium-silicates. During the CMS calculations we linearly interpolate in $\log \rho$ vs. $\log P$ across the immiscibility region between 1 and 2.7 Mbar. All models in Table 1 labeled DFT-MD $S$ (with no parenthesis) have the indicated compositions in the molecular and metallic regions respectively. 
As the deep $S$ increases, such models show a modest increase in metallicity in the hydrogen-helium envelope exterior to the dense core, as characterized by the parameter $M_Z$, the total mass of metals in Earth masses. Model DFT-MD 7.13 has $\beta=1.0000$, meaning that the input barotrope yields a match to the total planetary mass without rescaling the densities. A characteristic of DFT-MD 7.13 warrants discussion. This model has Galileo Probe abundances of CH$_4$, NH$_3$, and H$_2$O {\it throughout} the molecular layer, and $4 \times$ solar abundances of CH$_4$ and NH$_3$ in the metallic layer. The metallic layer has $2.4 \times$ solar H$_2$O, more than in the molecular layer; a full $4 \times$ solar H$_2$O enhancement would yield total densities which are too large to fit the total mass of Jupiter. As discussed in Section \ref{hydrides}, the assumed composition and temperature profile yields a reasonable $\rho(P)$ relation, which in turn produces a reasonable planetary model. However, acceptable $\rho(P)$ relations only limit the possible range of temperature profiles and metallicities but do not uniquely constrain them. As alternatives, we investigated two variants of our preferred model, in which we imposed equal metallicities in the molecular and metallic layers. Model DFT-MD 7.13 (low-$Z$) has artificially low $Z=0.004$ in both layers (although the He abundance does increase from the Galileo Probe value to the protosolar value). This unrealistic model has the largest $|J_4|$ and core mass of the suite. At the opposite extreme, Model 7.24 (equal-$Z$) has the same metallicity $Z=0.027$ in both layers, and also has a relatively large $|J_4|$. All models shown in Table 1 have core mass adjusted to give agreement to seven significant figures with the observed value $J_2 = 14696.43 \times 10^{-6}$. 
Two of the models, DFT-MD 7.15(J4) and SC 7.15(J4), include an additional density (and metallicity) increase across the immiscibility region between 1 and 2 Mbar, adjusted to yield agreement with the pre-{\it Juno} observed values of $J_2$ and $J_4 = -587.14 \times 10^{-6}$. We note that uncertainties in observed values in Table 1 are formal error bars; none of our models would be ruled out by these pre-{\it Juno} measurements if the true error bars are $\sim 5$ times larger. All of our models are close to the pre-{\it Juno} observed value of $J_6$, but the agreement may be fortuitous. \subsection{Comparison of Barotropes with Models} Figure~\ref{rho_vs_r} shows a plot of polar and equatorial density profiles for our preferred model DFT-MD 7.13. Figure~\ref{P_vs_rho} plots the density vs. pressure profile for preferred model DFT-MD 7.13 (grey stairstep), along with the input barotrope. Figure~\ref{P_vs_rho_hi} is a close-up of the high-pressure region of Figure~\ref{P_vs_rho}. The weighting functions for contributions to the external zonal harmonic coefficients, for the preferred model, are shown in Figure~\ref{wt_fns}. \begin{figure} \epsscale{1.0} \plotone{rho_vs_r_713} \caption{ Equatorial (solid curve) and polar (dashed curve) density profiles.} \label{rho_vs_r} \end{figure} \begin{figure} \epsscale{1.0} \plotone{P_vs_rho_713} \caption{ The grey stairstep shows converged CMS model DFT-MD 7.13. The light grey rectangle shows the region where He immiscibility occurs and where the barotrope is interpolated to a higher-entropy barotrope at higher pressure. The red curve is the input barotrope for the assumed low-pressure and high-pressure compositions. } \label{P_vs_rho} \end{figure} \begin{figure} \epsscale{1.0} \plotone{P_vs_rho_713_hi} \caption{ A close-up of the barotrope interpolation region for preferred CMS model DFT-MD 7.13. The red curve is the input barotrope for the assumed compositions. 
} \label{P_vs_rho_hi} \end{figure} \begin{figure} \epsscale{1.0} \plotone{wt_fns_713} \caption{ Relative contribution of spheroids to external gravitational zonal harmonic coefficients, for model DFT-MD 7.13.} \label{wt_fns} \end{figure} Model DFT-MD 7.15(J4) reduces $|J_4|$ to the observed value by decreasing the barotrope's density at low pressures, and increasing the density at high pressures. However, densities in the outer region at pressures below 1 Mbar then correspond to unphysical negative metallicity. The entry for this model in Table 1 shows a total metals content $M_Z = 14.3 \phn M_E$ exterior to the dense core; this value is the sum of $14.9 \phn M_E$ in the H-He envelope at pressures greater than $\sim 1$ Mbar, and (unphysical) $-0.6 \phn M_E$ at lower pressures. We are unable to find a consistent DFT-MD Jupiter model that matches the observed $J_2$ and $J_4$ values in Table 1. \section{Moment of Inertia} Jupiter's normalized moment of inertia NMoI $=C/Ma^2$ (where $C$ is the moment of inertia about the rotation axis) is in principle separately measurable from the $J_{2n}$, and is a separate constraint on interior structure. \citet{Helled2011} investigate models with fixed values of $J_2$ and $J_4$ and conclude that a range of NMoI values between 0.2629 and 0.2645 can be found. \citet{nettelmann2012} calculate a moment of inertia but normalize it to the {\it mean} radius of the 1-bar equipotential surface, a model-dependent quantity with a precision limited to third order in their perturbative theory of figures. However, their result is in reasonable agreement with values that we calculate below. Since the nonperturbative approach of our present investigation virtually eliminates any uncertainty in the theoretical calculation of the $J_{2n}$, here we explore the subject further as a guide to measurement requirements for the {\it Juno} spacecraft. 
Once a converged interior model is obtained, the NMoI is given exactly by the expression \begin{equation} \label{NMoI} {C \over {Ma^2}} = {2 \over 5}{ {\Sigma_{j=0}^{N-1} \delta \rho_j \int_0^1 d\mu \xi_j(\mu)^5} \over {\Sigma_{j=0}^{N-1} \delta \rho_j \int_0^1 d\mu \xi_j(\mu)^3}} + {2 \over 3}J_2, \end{equation} in the notation of \citet{Hu2013}. Although Equation (\ref{NMoI}) resembles the Radau-Darwin relation in that it seemingly relates the NMoI to $J_2$, the resemblance is superficial: Equation (\ref{NMoI}) shows that for a fixed $J_2$, an infinity of different CMS density distributions could enter into the first term. On the other hand, since each of those CMS density distributions is required to yield the fixed $J_2$, the range of variation of NMoI is in actuality quite restricted. To illustrate the point, in Figure~\ref{coma2} we show the cumulative value of the NMoI as a function of the CMS radius $\lambda$, for preferred model DFT-MD 7.13. The cumulative value of ${C / {Ma^2}}$ is obtained by partially summing the expression in Equation (\ref{NMoI}) from the central CMS ($j=N-1$) out to a CMS with dimensionless equatorial radius $\lambda$. \begin{figure} \epsscale{1.0} \plotone{coma2_713} \caption{ Cumulative value of $C/Ma^2$ for the preferred Jupiter model. The final point at $\lambda=1$ is the total value, $C/Ma^2 = 0.26389$.} \label{coma2} \end{figure} To show how details of interior structure affect the total NMoI, Figure~\ref{Deltacoma2} shows the {\it difference} of the cumulative values of $C/Ma^2$, for the preferred model minus model SC 7.15. \begin{figure} \epsscale{1.0} \plotone{Delta_coma2} \caption{ Difference in cumulative values of $C/Ma^2$ for the preferred Jupiter model minus model with the SC equation of state. 
} \label{Deltacoma2} \end{figure} To truly discriminate between models with different barotropes, it will be necessary to measure the NMoI to about five significant figures, posing a difficult challenge to {\it Juno} or other future investigations. Figure~\ref{coma2vsJ4} illustrates the point. \begin{figure} \epsscale{1.0} \plotone{coma2vsJ4} \caption{ For the ten interior models of Table 1, all fixed to the observed $J_2$, we plot the NMoI vs. $J_4$. The open circle is the preferred model. The two diamonds to the right are the SC models.} \label{coma2vsJ4} \end{figure} We should point out that a measurement of Jupiter's NMoI would actually be obtained from a measurement of the planet's spin angular momentum, $J=C \omega$. Thus if Jupiter were to rotate differentially on cylinders with significant mass involved in the various rotation zones, the tightly constrained values of NMoI that we find here might be broadened to some extent. It remains to be determined whether measurement of NMoI will prove to be more of a constraint on the possibility of deep differential rotation, or on the range of possible interior barotropes. \section{Discussion and Conclusions} The combination of the DFT-MD equation of state and observed $J_{2n}$ already strongly limits the parameter space of acceptable pre-{\it Juno} models. Our study has the following new features: (a) We eliminate arbitrary density enhancements to fit the gravity field; instead we utilize the H-He immiscibility phase boundary computed by \citet{Mo2013} to bound the location and magnitude of a helium-related compositional change; (b) Our models incorporate the latest version of the DFT-MD equation of state, replacing the widely-used SC EOS theory \citep{SC95}; (c) We utilize CMS theory for the first time to calculate high-order zonal harmonic coefficients for realistic Jupiter models. 
It is important to note that for fixed $J_2$, the computed value of $|J_4|$ is sensitive to the density in the region of Jupiter's metallic-hydrogen envelope where He immiscibility is predicted. One may force an agreement with the pre-{\it Juno} value of $J_4$ given in Table 1 by imposing a density enhancement across the interpolation region which is much larger than the $\sim 4\%$ implied by an increase in He to the primordial value above $P \phn \textgreater \phn$ 2.7 Mbar. However, when this is done, conservation of mass leads to a model with (formally) negative metallicity in the low-pressure outer envelope. The new DFT-MD equation of state generally yields a very limited suite of interior models of relatively low metallicity. These models could be falsified by forthcoming {\it Juno} gravity data. In Jupiter model DFT-MD 7.13, about 0.83 of the total mass is between the He-immiscibility region near 1 Mbar pressure and the core-mantle boundary. So if $Z \sim 0.032$ in this region, the mass of metals outside the core would comprise $\sim 10 M_E$, to be added to a core mass $\sim 12 M_E$, for a total Jupiter metallicity $Z_{\rm global} \sim 0.07$. As shown in Table 1, most of the other DFT-MD models have similar total metallicities. In contrast, our models based on the SC EOS (last two lines in Table 1) have total metallicities that are about 60\% higher, in qualitative agreement with earlier results obtained by \citet{GGH97} and \citet{G99} that were also derived using the SC equation of state. The latter studies included the possibility that Jupiter's core mass might be zero, and our independent SC models also show very small core masses. The inferred large core masses of our DFT-MD models are consistent with a core-nucleated scenario for the formation of Jupiter \citep{Dangelo2014}. 
The overall metallicity of Jupiter implied by most of our models is roughly three times protosolar, meaning that about two-thirds of the volatile protosolar nebular complement to the $\sim 12 M_E$ refractory core was not incorporated in primordial Jupiter. In summary, we are able to derive Jupiter interior models that match the measured values of $J_2$ and $J_6$, and in some cases $J_4$, and are consistent with predictions from published {\it ab initio} simulations of hydrogen and helium, and additional results for different planetary ices, H$_2$O, CH$_4$, and NH$_3$ that we report here. In our preferred model, the heavy element abundance in the metallic layer is equivalent to a three-fold solar concentration of all three ices. The preferred value for the concentration in the molecular layer is slightly less but consistent with the Galileo measurements. Our preferred model has a massive core of 12 Earth masses which is very similar to our earlier model~\citep{MHVTB}. When one uses the semi-analytical equation of state (SC EOS) of \citet{SC95} instead of our {\it ab initio} DFT-MD EOS, a much smaller core of 4 Earth masses is predicted for the same model assumptions. This illustrates how sensitively some model predictions depend on the details of the hydrogen-helium EOS. Our Jupiter model is preliminary and intended for use as a reference for comparison with experimental results from the {\it Juno} orbiter and other data sources. New data will tell us how well the model works. \acknowledgments This work has been supported by NASA and NSF.
\section{Introduction} The structure of the proton is a result of complicated non-perturbative many-body interactions between its fundamental building blocks, the quarks and gluons. This structure is encoded in various distribution functions, the simplest being the collinear parton distribution functions, which describe the parton density as a function of longitudinal momentum fraction $x$ carried by the parton (measured at a given scale $Q^2$). These distributions have been measured with great precision by experiments at HERA~\cite{Aaron:2009aa} by studying total electron-proton cross sections. More detailed information can be extracted from more differential observables, which give access to correspondingly richer distribution functions. For example, in exclusive photon or vector meson production the total momentum transfer is measurable, and via Fourier transform provides access to the spatial distribution of partons known as generalized parton distributions (GPDs)~\cite{Diehl:2003ny,Belitsky:2005qn}. It is also possible to study the distribution of partons in the proton in transverse momentum space, described by transverse momentum dependent parton distribution functions (TMDs)~\cite{Collins:1981uw}. The most complete information about the proton structure is encoded in the Wigner distribution~\cite{Ji:2003ak,Belitsky:2003nz,Lorce:2011kd}, which depends on both transverse coordinate and transverse momentum, in addition to the longitudinal momentum fraction $x$. This quantum distribution is not positive definite and has a probabilistic interpretation only in certain semi-classical limits \cite{Moyal:1949sk,Hillery:1983ms,Polkovnikov:2009ys}. To access the Wigner distribution, more differential observables than single particle production or total cross sections are needed. 
In Ref.~\cite{Hatta:2016dxp}, it was shown that diffractive dijet production, where two jets are produced in a process where no net color charge is exchanged with the target, is sensitive to the gluon Wigner distribution at small $x$. A future Electron-Ion Collider in the US~\cite{Accardi:2012qut,Aschenauer:2017jsk} or LHeC~\cite{AbelleiraFernandez:2012cc} at CERN would be able to measure this process over a wide kinematical region at high center-of-mass energies. At high energies or small $x$ the convenient effective theory to describe high energy scattering processes is provided by the Color Glass Condensate (CGC), which describes Quantum Chromodynamics (QCD) in the high energy limit. In Ref.~\cite{Mantysaari:2019csc}, summarized here, we calculate both the diffractive dijet production cross section and the Wigner distribution in the CGC framework. \section{Dipole-proton interaction in the CGC} \label{sec:dipole} At high energies the convenient degrees of freedom are Wilson lines $U(\mathbf{x})$ that describe the color rotation that the parton encounters when propagating eikonally through the target. For a given target configuration, the Wilson lines are obtained by solving the Yang-Mills equations \begin{equation} U(\mathbf{x}) = P \exp \left( -ig \int \mathrm{d} x^- \frac{\rho (x^-,\mathbf{x})}{\nabla^2 + \tilde m^2 }\right). \end{equation} The color charge density $\rho$ is taken to be a local Gaussian random variable whose variance is set by the local density of the proton, which we assume to have a Gaussian spatial profile in this work, and $\tilde m^2$ is an infrared regulator. After the Wilson lines are determined at the initial Bjorken-$x$, their evolution to smaller $x$ is obtained by solving the perturbative JIMWLK evolution equations (see e.g.~\cite{JalilianMarian:1996xn}). All parameters that control e.g. 
the density at the initial $x_0=0.01$, the size of the proton and the values of the strong coupling and infrared regulators are constrained by the HERA structure function and diffractive vector meson production measurements~\cite{Mantysaari:2018zdd}. For a more detailed description of the setup, the reader is referred to~\cite{Mantysaari:2019csc}. When Wilson lines are sampled on the lattice and evolved to smaller $x$ with the JIMWLK equation, it becomes possible to construct the dipole-target scattering amplitude at any $x$ \begin{equation} N\left( \mathbf{r} = \mathbf{x} - \mathbf{y}, \mathbf{b} = \frac{\mathbf{x} + \mathbf{y}}{2} \right) = 1 - \frac{1}{N_c} \langle \mathrm{Tr} U(\mathbf{x}) U^\dagger(\mathbf{y}) \rangle, \end{equation} where the average is taken over different possible target configurations. When we consider exclusive dijet production, the Fourier conjugates to the dijet momentum and to the recoil momentum are the dipole size $\mathbf{r}$ and impact parameter $\mathbf{b}$. With this in mind, we study the angular modulation of the dipole-proton scattering amplitude $N(\mathbf{r},\mathbf{b})$ calculated from the CGC framework. The dipole amplitude $N$ as a function of the angle between $\mathbf{r}$ and $\mathbf{b}$ is shown in Fig.~\ref{fig:dipole}. Note that in widely used dipole amplitude parametrizations such as IPsat~\cite{Kowalski:2003hm} there would be no angular dependence. To quantify the evolution of the elliptic modulation of the dipole amplitude, we extract the Fourier harmonics $v_n$ by writing the dipole amplitude as $ N(\mathbf{r},\mathbf{b}) = v_0 [1 + 2 v_2 \cos 2\theta(\mathbf{r},\mathbf{b}) ]$. The extracted $v_0$ and $v_2$ coefficients at different rapidities are shown in Fig.~\ref{fig:dipole}. We find that the evolution suppresses the elliptic modulation (note that Bjorken-$x$ is related to the evolution rapidity as $x = x_0 e^{-y}$ with $x_0=0.01$). 
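The structure of such configuration averages can be made concrete in a drastically simplified abelian toy model: for a Gaussian charge density the average of a phase difference exponentiates exactly, $\langle e^{ig(A(\mathbf{x})-A(\mathbf{y}))}\rangle = \exp[-\tfrac{g^2}{2}\langle(A(\mathbf{x})-A(\mathbf{y}))^2\rangle]$, so a dipole-amplitude analogue follows from the screened propagator alone. SU(3) Wilson lines are replaced here by U(1) phases, and all lattice parameters are arbitrary illustrative choices, not the tuned values of the actual calculation.

```python
import math

# Abelian stand-in for the dipole amplitude: solve the screened Poisson
# equation (-laplacian + m^2) G_x = delta_x on a small lattice (Dirichlet
# boundaries), then use the exact Gaussian average
#   N(r) = 1 - exp( -(g^2 sigma^2 / 2) * sum_z (G_x(z) - G_y(z))^2 ),
# where sigma^2 is the variance of the (uncorrelated) charge per site.
L, m2, g, sigma = 21, 0.5, 1.0, 2.0

def green(src):
    """Gauss-Seidel solve of (-laplacian + m^2) G = delta_src."""
    G = [[0.0] * L for _ in range(L)]
    for _ in range(500):
        for i in range(1, L - 1):
            for j in range(1, L - 1):
                s = G[i - 1][j] + G[i + 1][j] + G[i][j - 1] + G[i][j + 1]
                rhs = 1.0 if (i, j) == src else 0.0
                G[i][j] = (rhs + s) / (4.0 + m2)
    return G

def dipole_N(r):
    """Dipole amplitude analogue for separation r along a lattice axis."""
    Gx = green((10 - r // 2, 10))
    Gy = green((10 - r // 2 + r, 10))
    var = sigma**2 * sum((Gx[i][j] - Gy[i][j])**2
                         for i in range(L) for j in range(L))
    return 1.0 - math.exp(-0.5 * g**2 * var)

N1, N2, N4 = (dipole_N(r) for r in (1, 2, 4))
print(round(N1, 3), round(N2, 3), round(N4, 3))  # grows with dipole size
```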
This is mainly due to the rapid growth of the proton density in the dilute region, resulting in a smoother and larger proton with smaller density gradients. \begin{figure}[tb] \centering \begin{minipage}{.48\textwidth} \centering \includegraphics[width=\textwidth]{dipole_theta.pdf} \caption{Dipole amplitude as a function of angle between dipole size and impact parameter at different rapidities. Figure from Ref.~\cite{Mantysaari:2019csc}.} \label{fig:dipole} \end{minipage} \quad \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=\textwidth]{husimi_Pdep_evolution} \caption{Rapidity evolution of the elliptic component of the Husimi distribution from Ref.~\cite{Mantysaari:2019csc}.} \label{fig:husimi} \end{minipage} \end{figure} \section{Wigner and Husimi distributions} As discussed in the Introduction, the gluon Wigner distribution $xW(\mathbf{P}, \mathbf{b}, x)$ contains the most complete information about the small-$x$ gluonic structure of the proton. In particular, it describes the gluon distribution as a function of both transverse coordinate $\mathbf{b}$ and transverse momentum $\mathbf{P}$. The disadvantage is that due to the uncertainty principle it cannot have a probabilistic interpretation, and we indeed show in Ref.~\cite{Mantysaari:2019csc} that when calculated from the CGC framework, the Wigner distribution becomes negative at small transverse momenta. If the Wigner distribution is smeared over both transverse coordinate and transverse momentum with the smearing parameters being inverse to each other, one obtains the so-called Husimi distribution \begin{equation} xH(\mathbf{P},\mathbf{b},x) = \frac{1}{\pi^2} \int \mathrm{d}^2 \mathbf{b}' \mathrm{d}^2 \mathbf{P}' e^{-\frac{1}{l^2}(\mathbf{b}-\mathbf{b}')^2 - l^2(\mathbf{P}-\mathbf{P}')^2} xW(\mathbf{P}',\mathbf{b}',x). 
\end{equation} In this work we choose $l=1\,\mathrm{GeV}^{-1}$, as it corresponds to a distance scale much smaller than the proton size, but does not result in too large smearing in momentum space that would wash out most of the transverse momentum dependence. In Ref.~\cite{Mantysaari:2019csc} it is shown that the Husimi and Wigner distributions agree at large $|\mathbf{P}| \gtrsim 1/l$, and that the Husimi distribution calculated from the CGC framework following~\cite{Hagiwara:2016kam} is positive definite. To study the elliptic modulation (dependence on the angle between $\mathbf{P}$ and $\mathbf{b}$), we write the Husimi distribution as \begin{equation} xH(\mathbf{P},\mathbf{b},x) = v_0^H [1 + 2 v_2^H \cos 2\theta(\mathbf{P},\mathbf{b})]. \end{equation} The rapidity evolution of the elliptic coefficient $v_2^H$ is shown in Fig.~\ref{fig:husimi}. Except at the smallest momenta, the evolution suppresses the elliptic modulation as expected based on the analysis of the dipole amplitude in Sec.~\ref{sec:dipole}. At the smallest momentum values the elliptic component first grows in the evolution. This can be understood as follows: as the proton grows, large dipoles $|\mathbf{r}| \sim |\mathbf{P}|^{-1}$ with large elliptic modulation begin to contribute, until the proton has grown enough that the decreasing density gradients are felt also at small $|\mathbf{P}|$. \section{Diffractive dijet production} As discussed in Ref.~\cite{Hatta:2016dxp}, diffractive dijet production is sensitive to the gluon Wigner distribution. In particular, it is interesting to study diffractive dijet production as a function of the two momentum vectors, the average momentum $\mathbf{P} = \frac{1}{2}(\mathbf{p}_1 - \mathbf{p}_2)$ and the recoil momentum $\boldsymbol{\Delta} = \mathbf{p}_1 + \mathbf{p}_2$, where $\mathbf{p}_1$ and $\mathbf{p}_2$ are the momenta of the individual jets. 
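The positivity of the Husimi smearing discussed in the previous section can be illustrated with a textbook single-particle example, independent of the CGC setting: the Wigner function of the first excited harmonic-oscillator state is negative at the origin, while the matched Gaussian smearing (the direct analogue of the $l \leftrightarrow 1/l$ smearing above, with $\hbar = 1$) yields the manifestly non-negative Husimi function. This toy is our own illustration, not the proton calculation.

```python
import math

# Wigner function of the n = 1 harmonic-oscillator state (hbar = 1):
#   W(x, p) = (1/pi) (2(x^2 + p^2) - 1) exp(-(x^2 + p^2)),
# negative at the origin.  Its coherent-state (Husimi) smearing,
#   Q(x, p) = (2/pi) * integral W(X, P) exp(-(X-x)^2 - (P-p)^2) dX dP,
# equals |alpha|^2 e^{-|alpha|^2} / pi with |alpha|^2 = (x^2 + p^2)/2 >= 0.
def W(x, p):
    r2 = x * x + p * p
    return (2.0 * r2 - 1.0) * math.exp(-r2) / math.pi

h, n = 0.15, 30                     # quadrature grid over [-4.5, 4.5]^2
grid = [k * h for k in range(-n, n + 1)]

def husimi(x, p):
    return 2.0 / math.pi * h * h * sum(
        W(X, P) * math.exp(-(X - x)**2 - (P - p)**2)
        for X in grid for P in grid)

Q0, Q1 = husimi(0, 0), husimi(1, 0)
print(W(0, 0), Q0, Q1)
# W(0,0) = -1/pi < 0, while Q(0,0) -> 0 and Q(1,0) -> 0.5 e^{-0.5}/pi
```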
The connection to the Wigner distribution holds in the correlation limit $|\mathbf{P}| \gg |\boldsymbol{\Delta}|$; thus we use $|\boldsymbol{\Delta}|=0.1$ GeV in this work. The diffractive dijet production cross section in the CGC framework is derived in Ref.~\cite{Altinoluk:2015dpi}. The cross section as a function of jet momentum $\mathbf{P}$ is shown in Fig.~\ref{fig:dijet_spectrum}, where results obtained with different infrared regulators (with the value of the coupling constant adjusted to describe the HERA structure function data) and using both fixed and running coupling evolution are shown; our results are found to be insensitive to the infrared regularization. It is worth noting that the Fourier conjugate to $\mathbf{P}$ is the dipole size, and thus this process can be seen as diffraction off the $q\bar q$ Fock state of the probing photon. Here we only consider charmed dijets, so the diffractive dip location can be estimated to be $|\mathbf{r}_\gamma|^{-1} \sim \sqrt{m_c^2 + Q^2} \approx 1.4\,\mathrm{GeV}$. To study the elliptic modulation we extract the Fourier harmonics of the dijet production cross section: \begin{equation} \mathrm{d}\sigma = v_0[1 + 2 v_2 \cos 2\theta(\mathbf{P},\boldsymbol{\Delta})]. \end{equation} The elliptic coefficient $v_2$ as a function of Bjorken-$x$ (denoted as $x_p$) is shown in Fig.~\ref{fig:dijet_v2}, where we find that the energy evolution reduces $v_2$ by almost a factor of $2$ in the EIC energy range. This is mostly due to the increasing proton size suppressing the density gradients. The modulation is relatively small and likely difficult to measure. However, at larger $|\boldsymbol{\Delta}|$ we expect a much larger signal~\cite{Salazar:2019ncp}. For comparison, we also show the result obtained in the case where we do not perform the JIMWLK evolution towards small-$x$, but just scale the overall proton density, in which case $v_2$ is independent of energy.
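The extraction of the harmonics $v_0$ and $v_2$ from an angular distribution can be sketched numerically; the distribution below is a synthetic toy, not the CGC result:

```python
import numpy as np

# Toy sketch of extracting v_0 and v_2 from an angular distribution
# decomposed as dsigma = v_0 [1 + 2 v_2 cos(2 theta)].
theta = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
v0_true, v2_true = 3.0, 0.02          # synthetic input values
dsigma = v0_true * (1.0 + 2.0 * v2_true * np.cos(2.0 * theta))

v0 = dsigma.mean()                               # isotropic component
v2 = (dsigma * np.cos(2.0 * theta)).mean() / v0  # elliptic coefficient
```

On a uniform angular grid the averages project out the harmonics exactly, recovering the input values.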
In models where there is no dependence on the angle between $\mathbf{r}$ and $\mathbf{b}$ in the dipole amplitude, one gets exactly $v_2=0$~\cite{Altinoluk:2015dpi}. \begin{figure}[tb] \centering \begin{minipage}{.48\textwidth} \centering \includegraphics[width=\textwidth]{PdepOverview_IPGlasma_charm_longitudinal.pdf} \caption{Charmed dijet photoproduction spectrum as a function of dijet momentum. Figure from Ref.~\cite{Mantysaari:2019csc}.} \label{fig:dijet_spectrum} \end{minipage} \quad \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=\textwidth]{xdep.pdf} \caption{Elliptic ($v_2$) modulation of dijet photoproduction as a function of momentum fraction $x_p$ of the target. The values on top refer to the center-of-mass energies $W$. Figure from Ref.~\cite{Mantysaari:2019csc}.} \label{fig:dijet_v2} \end{minipage} \end{figure} \section{Conclusions} We have calculated the Wigner and Husimi distributions from the CGC framework and the diffractive dijet production cross section, which in principle is sensitive to the gluon Wigner distribution at small $x$. By solving the perturbative JIMWLK evolution equations we find that in the gluon distributions the correlation between the momentum and coordinate space angles decreases when evolving towards small-$x$, and predict that the elliptic modulation in the dijet production cross section decreases by almost a factor of $2$ from the lowest to the highest center-of-mass energies at the EIC. \subsection*{Acknowledgements} HM is supported by the Academy of Finland, project 314764, and by the European Research Council, Grant ERC-2015-CoG-681707. NM and BS are supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under contract No.~DE-SC0012704. NM is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Project 404640738. \bibliographystyle{JHEP}
\section{Introduction} \label{intro} Observational evidence in the X-rays and the MIR indicates that the strong AGN continuum source must be absorbed by obscuring material over a wide solid angle (see e.g., \citealt{Antonucci85,Maiolino98,Risaliti02}). According to observed spectra of different AGN types, the obscuring structure has to block the emission of the subparsec-scale Broad-Line Region (BLR) where the broad lines are produced, but not that of the kiloparsec-scale Narrow-Line Region (NLR). The unified model for active galaxies \citep{Antonucci93,Urry95} is based on the existence of a dusty toroidal structure surrounding the central region of AGN. This toroidal geometry explains the biconical shapes observed in Hubble Space Telescope (HST) imaging of several AGNs \citep{Tadhunter89,Malkan98,Tadhunter99}, as well as the polarimetric observations \citep{Antonucci85,Packham97}. Thus, considering this geometry of the obscuring material, the central engines of Type-1 AGN can be seen directly, resulting in typical spectra with both narrow and broad emission lines, whereas in Type-2 AGN the BLR is obscured. Pioneering work in modelling the dusty torus \citep{Pier92,Pier93,Granato94,Efstathiou95,Granato97,Siebenmorgen04} assumed a uniform dust density distribution to simplify the modelling, although from the start, \citet{Krolik88} realized that smooth dust distributions cannot survive within the AGN vicinity. They proposed instead that the material in the torus must be distributed in a clumpy structure, in order to prevent the dust grains from being destroyed by the hot surrounding gas. The IR range (and particularly the MIR) is key to setting constraints on the torus models, since the reprocessed radiation from the dust in the torus is re-emitted in this range. However, to compare the predictions of any torus model with observations, the small-scale torus emission must first be isolated.
High angular resolution is then essential to separate torus emission from stellar emission and star-heated dust in the near-IR (NIR) and MIR, respectively. Indeed, starlight dominates the nuclear NIR emission of Seyfert 2 galaxies when using large aperture data (see e.g., \citealt{Alonso96}) and still has a significant contribution for Seyfert 1 galaxies \citep{Kotilainen92}. Similar contamination problems can be present in the MIR with the star-heated dust and dust in the ionization cones \citep{Alonso06,Mason06}. The typical dimensions of the torus are another controversial issue. \citet{Pier93} and \citet{Granato94} reproduced the infrared observations of nearby Seyfert galaxies with $\sim$100 pc scale tori. However, hard X-ray observations showed that about half of nearby Type-2 Seyferts are Compton-thick (i.e., they are obscured by a column density higher than $10^{24}$~cm$^{-2}$; \citealt{Risaliti99}). For these highly obscured sources the torus dimensions are expected to be of the order of a few parsecs, because otherwise the dynamical mass of the obscuring material would be too large to be realistic \citep{Risaliti99}. In addition, recent ground-based MIR observations of nearby Seyferts reveal that the torus size is likely restricted to a few parsecs. \citet{Packham05} and \citet{Radomski08} established upper limits of 2 and 1.6 pc for the outer radii of the Circinus galaxy and Centaurus A tori, respectively. Moreover, interferometric observations obtained with the MIR Interferometric Instrument (MIDI) at the Very Large Telescope Interferometer (VLTI) of Circinus, NGC 1068, and Centaurus A suggest a scenario where the torus emission would only extend out to R = 1 pc \citep{Tristram07}, R = 1.7 - 2 pc \citep{Jaffe04,Raban09}, and R = 0.3 pc \citep{Meisenheimer07}, respectively. In order to solve the discrepancies between observations and previous models, an intensive search for an alternative torus geometry has been carried out in the last decade.
The first results of radiative transfer calculations of a clumpy rather than a smooth medium were reported by \citet{Nenkova02} and \citet{Elitzur06}, and further work was done by \citet{Dullemond05}. The clumpy dusty torus models \citep{Nenkova02,Nenkova08a,Nenkova08b,Honig06,Schartmann08} propose that the dust is distributed in clumps, instead of homogeneously filling the torus volume. These models are making significant progress in accounting for the MIR emission of AGNs (\citealt{Mason06,Mason09,Mor09,Horst08,Horst09,Nikutta09}; Ramos Almeida et al.~2009a; \citealt{Honig10}). In our previous work (Ramos Almeida et al.~2009a; hereafter \citealt{Ramos09a}), we constructed subarcsecond resolution IR SEDs for eighteen Seyfert galaxies, mostly Type-2 Seyferts. From the comparison between our high angular resolution MIR fluxes and large aperture data, such as those from ISO, IRAS, or Spitzer, we confirmed that the former provide a spectral shape that is substantially different from that of the latter \citep{Rodriguez97}. Since our nuclear measurements allowed us to better characterize the torus emission, we modelled our SEDs with clumpy torus models. In general, we found that Type-2 views are more inclined than those of Type-1s, and more importantly, we derived larger covering factors for the Type-2 tori (i.e., more clumps and wider torus angular distributions). This would imply that the observed differences between Type-1 and Type-2 AGN would not be due to orientation effects only, but to intrinsic differences in their tori. However, the sample analyzed in \citealt{Ramos09a} was limited in size, in particular in Type-1 Seyferts compared with Type-2s. Our aim here is therefore to enlarge the sample studied in the previous work with new Seyfert 1 infrared data, in order to compare the properties of Type-1 and Type-2 Seyfert tori.
In this work, we report new subarcsecond MIR imaging data for the three nearby Type-1 Seyfert galaxies NGC 7469, NGC 6221, and NGC 6814, for which we estimate unresolved nuclear MIR fluxes. We enlarge the sample by including the galaxies NGC 1097, NGC 1566, NGC 3227, and NGC 4151, which have similar MIR data, and we compile NIR nuclear fluxes of similar resolution from the literature to construct nuclear SEDs for all the galaxies. We fit these SEDs with clumpy torus models which we interpolate from the \citet{Nenkova08a,Nenkova08b} database, and compare them with the larger sample studied by \citealt{Ramos09a}. Table \ref{sources} summarizes key observational properties of the sources in the sample. Section \ref{observations} describes the observations, data reduction, and compilation of NIR and MIR fluxes. Sections \ref{extended} and \ref{sed} present the main observational results, and in \S \ref{modelling} we report the modelling results. We discuss differences between Type-1 and Type-2 Seyferts and draw conclusions about the clumpy torus models and AGN obscuration in general in \S \ref{discussion}. Finally, Section \ref{final} summarizes the main conclusions of this work. \input{tab1.tex} \section{Observations and Data Reduction} \label{observations} \subsection{MIR Imaging Observations} \label{MIR} In order to enlarge the sample of 18 Seyfert galaxies presented in \citealt{Ramos09a}, which comprises twelve Seyfert 2 (Sy2), two Seyfert 1.9 (Sy1.9), one Seyfert 1.8 (Sy1.8), two Seyfert 1.5 (Sy1.5), and one Seyfert 1 galaxy (Sy1), we obtained new subarcsecond MIR observations of the Type-1 Seyferts NGC 6221, NGC 6814, and NGC 7469 (see Table \ref{sources}). The observations were performed with the MIR camera/spectrograph T-ReCS (Thermal-Region Camera Spectrograph; \citealt{Telesco98}) on the Gemini-South telescope during the summer of 2009.
T-ReCS uses a Raytheon 320$\times$240 pixel Si:As IBC array, providing a plate scale of 0.089\arcsec~pixel$^{-1}$, corresponding to a FOV of 28.5\arcsec$\times$21.4\arcsec. The filters employed for the observations were the narrow Si-2 filter ($\lambda_c$=8.74 \micron, $\Delta\lambda$=0.78 \micron, 50\% cut-on/off) and the broad Qa filter ($\lambda_c$=18.3 \micron, $\Delta\lambda$=1.5 \micron, 50\% cut-on/off). The resolutions obtained were 0.3\arcsec~at 8.7 \micron~and 0.5\arcsec~at 18.3 \micron, as measured from the observed Point Spread Function (PSF) stars. A summary of the observations is reported in Table \ref{log}. \input{tab2.tex} The standard chopping-nodding technique was used to remove the time-variable sky background, the telescope thermal emission, and the so-called 1/f detector noise. The chopping throw was 15\arcsec, and the telescope was nodded 15\arcsec~in the direction of the chop every 45 s. The difference for each chopped pair for each given nodding set was calculated, and the nod sets were then differenced and combined until a single image was created. The data were reduced using in-house-developed IDL routines. Observations of flux standard stars were made for the flux calibration of each galaxy through the same filters. The uncertainties in the flux calibration are typically $\sim$5-10\% at 8.7 \micron~and $\sim$15-20\% at 18.3 \micron. PSF star observations were also made immediately prior to or after each galaxy observation to accurately sample the image quality. These images were employed to determine the unresolved (i.e., nuclear) component of each galaxy. The PSF star, scaled to the peak of the galaxy emission, represents the maximum contribution of the unresolved source (100\%); we integrate its flux within an aperture of 2\arcsec. The residual of the total emission minus the scaled PSF represents the host galaxy contribution, which is analyzed in Section \ref{extended}.
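The scaling-and-subtraction step can be sketched on toy arrays as follows; this is an illustrative reimplementation, not the IDL routines used for the actual analysis:

```python
import numpy as np

def unresolved_flux(galaxy, psf, scale):
    """Scale the PSF to `scale` times the galaxy peak, subtract it, and
    return the unresolved flux plus the residual (host-galaxy) image."""
    psf_scaled = psf * (scale * galaxy.max() / psf.max())
    residual = galaxy - psf_scaled
    return psf_scaled.sum(), residual

# toy data: a point source of flux 10 on a flat host background of 2
psf = np.zeros((11, 11))
psf[5, 5] = 1.0
galaxy = 10.0 * psf + 2.0          # peak pixel value = 12

# scaling the PSF to 10/12 of the galaxy peak removes the point source,
# leaving a flat residual at the host level
flux, residual = unresolved_flux(galaxy, psf, scale=10.0 / 12.0)
```

In this idealized case the residual is exactly flat; with real images the scaling is reduced from 100\% until the residual profile looks like a plausible host galaxy.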
For the residual to represent a realistic galaxy profile, we require it to be flat over the central pixels. Therefore, when necessary, we reduce the PSF scaling below 100\% of the galaxy peak to obtain the unresolved fluxes reported in Table \ref{psf}. They include corresponding aperture corrections to take into account possible flux losses when integrating the scaled PSF flux in the 2\arcsec~aperture. Figure \ref{psf1} shows an example of PSF subtraction at various levels (in contours) for NGC 7469 in the Si-2 T-ReCS filter, following \citet{Radomski02,Radomski03}. The residual profiles obtained with the different scalings illustrate the best-fitting result. The uncertainty in the determination of the unresolved fluxes from PSF subtraction is $\sim$10-15\%. Thus, we estimated the errors in the flux densities reported in Table \ref{psf} by adding quadratically the flux calibration and PSF subtraction uncertainties, resulting in $\sim$15\% at 8.7 \micron~and $\sim$25\% at 18.3 \micron. \begin{figure*}[!ht] \centering \includegraphics[width=15cm]{f1.eps} \caption{\footnotesize{8.74 \micron~contour plots of NGC 7469 at the 3$\sigma$ level, the PSF star, and scaled subtraction of the PSF for this galaxy at various levels (100\%, 90\%, and 80\%). The residuals of the subtraction in the lower right panel show the host galaxy profile.} \label{psf1}} \end{figure*} \input{tab3.tex} \subsection{Compilation of NIR and MIR High Spatial Resolution Data} In addition to the new T-ReCS observations described in Section \ref{MIR}, we include here two more Sy1 galaxies with already published T-ReCS data: NGC 1097 and NGC 1566\footnote{NGC 1097 and NGC 1566 were not included in the analysis of the SEDs presented in \citealt{Ramos09a} because of the lack of high angular resolution NIR fluxes for them at the time of publication.}.
We reduced the images presented in \citet{Mason07} for NGC 1097 and obtained unresolved MIR fluxes by applying the same technique explained in Section \ref{MIR} (see Table \ref{psf}). For NGC 1566 we compiled the MIR nuclear fluxes reported in \citealt{Ramos09a}. Both galaxies were observed in September 2005 with T-ReCS: NGC 1566 in the Si-2 and Qa filters, and NGC 1097 in the Si-5 ($\lambda_c$=11.66 \micron, $\Delta\lambda$=1.13 \micron, 50\% cut-on/off) and Qa filters (see Table \ref{log}). The resolutions of the images are 0.3\arcsec~at 8.7 \micron, $\sim$0.4\arcsec~at 11.66 \micron, and 0.5\arcsec~at 18.3 \micron. NIR subarcsecond resolution nuclear fluxes compiled from the literature are reported in Table \ref{literature}. For NGC 1097, NGC 1566, and NGC 7469, \citet{Prieto10} reported diffraction-limited and near-diffraction-limited adaptive optics NACO/VLT fluxes. The three galaxies are unresolved in the NIR down to the highest resolution achieved (FWHM$\sim$0.15\arcsec~for NGC 1097 in the L'-band, $\sim$0.12\arcsec~for NGC 1566 in the K-band, and $\sim$0.08\arcsec~for NGC 7469 in the H-band). For NGC 7469, \citet{Prieto10} reported NACO J-, H-, and K-band nuclear fluxes obtained from observations in November 2002, as well as in the L' and NB-4.05 \micron~filters, observed in this case in December 2005. First, we discarded the narrow-band filter, since it is designed to collect the Br$\alpha$ emission, which in the case of this galaxy is important (as inferred from the Br$\gamma$ line detected with NIR spectroscopy; \citealt{Genzel95,Hicks08}). The L'-band filter also contains Br$\alpha$, which very likely compromises the flux measurement, since the L' and NB-4.05 fluxes do not match the SED shape of the remaining J, H, K, Si-2 and Qa data. Indeed, \citet{Prieto10} also reported a NICMOS/HST flux measurement in the filter F187N, obtained in 2007.
This data point lies in between the NACO H and K measurements in wavelength and flux, which gives us extra confidence in the reliability of the NACO J, H, and K measurements. Considering all of the above, we decided to treat the NACO L'-band flux as an upper limit\footnote{The nucleus of NGC 7469 has undergone different periods of activity, including an optical maximum between 1996 and 2000, followed by a relaxation epoch in the subsequent years \citep{Prieto10}. For this reason, here we only consider data obtained from 2002.}. For NGC 6221 and NGC 6814, which do not have any published subarcsecond resolution NIR data, we retrieved broad-band images from the Hubble Legacy Archive\footnote{http://archive.stsci.edu/} obtained with NICMOS on the HST. The two galaxies were observed in the F160W filter, using the NIC2 camera, as part of the program 7330 (PI: Mulchaey, J.). The typical FWHM for an unresolved PSF is $\sim$0.13\arcsec~using the F160W filter with NIC2. Details of the observations can be found in \citet{scoville00} and \citet{regan99}. For the analysis, we first separated the nuclear emission from the underlying host galaxy emission. We then applied the two-dimensional image decomposition program GALFIT \citep{peng02} to fit and subtract the unresolved component (PSF). PSF models were created using the TinyTim software package, which includes the optics of HST plus the specifics of the camera and filter system \citep{krist93}. We checked that no prominent emission lines are included in the wavelength range covered by the filter F160W. The resulting unresolved NIR fluxes for NGC 6221 and NGC 6814 are reported in Table \ref{literature}. We finally include the Sy1.5 galaxies NGC 3227 and NGC 4151 in this study, which were also part of the \citealt{Ramos09a} sample.
The MIR nuclear fluxes are the same as those reported in the latter work (see description of the observations in Section 2.1 in \citealt{Ramos09a}), whereas the NIR fluxes have been updated (see Table \ref{literature}). Using the NIR nuclear fluxes reported in Table \ref{literature} in combination with our MIR unresolved measurements (Table \ref{psf}) we construct AGN-dominated SEDs for the five Sy1 galaxies. \input{tab4.tex} \section{The MIR Extended Emission of NGC 7469 and NGC 6221} \label{extended} Our new MIR imaging data reveal complex extended emission for NGC 7469 and NGC 6221, which is intense and associated with dust emission heated by star formation. On the other hand, NGC 6814 lacks any extended emission. In this section we present the MIR images of NGC 7469 and NGC 6221, and compare them with published data in different wavelength ranges. \subsection{NGC 7469} \label{ngc7469extended} The most spectacular morphological feature of this Sy1 galaxy is a circumnuclear ring of powerful starbursts of $\sim$1.6 kpc diameter, which is deeply embedded in a large cloud of molecular gas and dust. This ring contains several super star clusters and regions of star formation, and accounts for two-thirds of the galaxy bolometric luminosity \citep{Genzel95}. The star-forming ring has been the subject of several studies at different wavelengths (see D\'iaz-Santos et al.~2007, hereafter \citealt{Diaz07}, and references therein), including the MIR \citep{Miles94,Soifer03,Horst09}. Based on VISIR/VLT MIR observations at 12.3 \micron, \citet{Horst09} report the detection of distinct knots of star formation around the nucleus located at a distance of $\sim$1.3\arcsec~($\sim$400 pc), although with low signal-to-noise. Using NIR HST data, \citealt{Diaz07} identified 30 clusters of star formation in the ring at 1.1 \micron.
By fitting the individual ultraviolet-to-NIR SEDs of the clusters, the authors reported the presence of a dominant intermediate-age population (8-20 Myr) and a younger and more extinguished one ($\sim$1-3 Myr). The latter does not coincide with the optical/NIR continuum-emitting regions, but seems to be traced by the MIR/radio emission, less affected by extinction than the optical/NIR. Figure \ref{ngc7469mid} shows the high resolution 8.74 and 18.3 \micron~T-ReCS images of NGC 7469. At both wavelengths we detect extended emission with high signal-to-noise coincident with the star-forming ring. Indeed, our flux maps appear very similar to the high resolution 11.7 \micron~contours presented by \citet{Miles94} and to the high resolution (0.2\arcsec) VLA 8.4 GHz radio map presented in \citet{Colina01}. We identified the brightest knots in our MIR images in Figure \ref{ngc7469mid} using the same notation as in \citet{Miles94}, where the AGN is labelled A. The B and C knots in our images correspond to the brightest regions in radio, according to the 5 GHz \citep{Wilson91} and 8.4 GHz radio maps \citep{Colina01}. We also identified the knots D, E, and F. In Table \ref{ring} we report the star clusters identified by \citealt{Diaz07} that best match the positions of the A to F MIR compact regions. The distances between these knots and the AGN range from 1.4\arcsec~to 1.8\arcsec~(median distance of $\sim$480 pc). All the knots appear more compact in the 18.3 \micron~image than in the 8.74 \micron~one, where the ring emission is more extended. This is expected since the 8.74 \micron~filter contains the 8.6 \micron~Polycyclic Aromatic Hydrocarbon (PAH) feature, whereas the 18.3 \micron~filter is mostly probing hot dust emission. As found by Spitzer, the $\sim$8 \micron~PAH emission of nearby galaxies appears to be more extended than at $\sim$24 \micron~(e.g., \citealt{Helou04,Calzetti05}).
In Table \ref{ring} we report aperture fluxes for the six identified knots in both the 8.74 and 18.3 \micron~images calculated using IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation (http://iraf.noao.edu/).}. The aperture radius was defined on the basis of the resolution element in the Qa band (0.55\arcsec), and corresponding aperture corrections were applied, since the knots are only barely resolved in both bands. Positions relative to the nucleus (A) in arcseconds are also given. \begin{figure*}[!ht] \centering \par{ \includegraphics[width=8cm]{f2a.eps} \includegraphics[width=8cm]{f2b.eps}\par} \caption{\footnotesize{8.74 \micron~(left) and 18.3 \micron~(right) T-ReCS images of NGC 7469 smoothed using a 3-pixel moving box. The images are 7.8\arcsec~on a side. North is up, and East to the left. Emitting regions identified in both the Si-2 and Qa images, and also coincident with the radio emission, are labelled from A to F, where A corresponds to the AGN.} \label{ngc7469mid}} \end{figure*} \input{tab5.tex} \subsection{NGC 6221} \label{ngc6221extended} The galaxy NGC 6221 is a prototypical example of the so-called ``X-ray--loud composite galaxies'' (see e.g., \citealt{Moran96}). This classification comes from the comparison between its optical spectrum, which is starburst-like, and its X-ray data, where the AGN is revealed \citep{Levenson01}. According to the X-ray data, and in particular the width of the Fe K$\alpha$ line, the orientation of the Seyfert nucleus is Type-1 \citep{Levenson01}. However, as mentioned above, the optical spectrum of the galaxy more closely resembles that of a starburst galaxy, implying that there must be a large amount of dust (likely associated with the starburst) along the line of sight (LOS) hiding the BLR.
A nuclear optical extinction of A$_V$=3.0 mag was measured from the optical spectrum by \citet{Levenson01}. They also presented HST images of the central region of NGC 6221 in the optical (F606W/WFPC2) and in the NIR (F160W/NICMOS) and identified the bright central NIR source as the AGN. At optical wavelengths, the nucleus is diffuse and weaker than other bright knots, identified as star clusters. Our 8.7 and 18.3 \micron~images of NGC 6221 are shown in Figure \ref{ngc6221mid}. They reveal spectacular extended emission with two bright knots. The AGN is centered in both images. The other bright knot, at $\sim$1.9\arcsec~SW from the nucleus, which is roughly coincident with the SW starburst region shown in the F606W optical image in Figure 3 of \citet{Levenson01}, reaches practically the same intensity as the AGN at 8.7 \micron~and appears brighter at 18.3 \micron. We measured aperture fluxes within a 0.7\arcsec~radius for the SW knot, selected to collect the bulk of its MIR emission, and obtained fluxes of 42 mJy at 8.7 \micron~and 422 mJy at 18.3 \micron. Surrounding the AGN there is more diffuse emission extending towards the North, which in this case is more intense in the 8.7 \micron~image than in the 18.3 \micron~one. As already mentioned in Section \ref{ngc7469extended}, the $\sim$8 \micron~PAH emission of nearby galaxies appears more extended than the $\sim$24 \micron~emission (e.g., \citealt{Helou04,Calzetti05}). \begin{figure*}[!ht] \centering \par{ \includegraphics[width=8cm]{f3a.eps} \includegraphics[width=8cm]{f3b.eps}\par} \caption{\footnotesize{8.74 \micron~(left) and 18.3 \micron~(right) T-ReCS images of NGC 6221 smoothed using a 3-pixel moving box. The images are 7.8\arcsec~on a side. North is up, and East to the left.
The AGN is centered in both images.} \label{ngc6221mid}} \end{figure*} \section{SED Observational Properties} \label{sed} \subsection{Average Seyfert 1 Spectral Energy Distribution} \label{averageSy1} Using the MIR and NIR data reported in Tables \ref{psf} and \ref{literature} we construct subarcsecond resolution nuclear SEDs in the wavelength range from $\sim$1 to 18 \micron~for the five Type-1 Seyferts analyzed here. Figure \ref{template} shows a comparison between their spectral shapes and the average Sy2 SED from \citealt{Ramos09a}. This mean template was constructed using individual Sy2 data of the same angular resolution as that achieved in this work ($\lesssim$0.55\arcsec). In the same way, we have constructed an average Type-1 Seyfert template using the IR nuclear SEDs of the seven galaxies studied here\footnote{We have not considered the NIR ground-based data reported in Table 4 of \citealt{Ramos09a} for NGC 3227 and NGC 4151 in the construction of the mean template to be consistent with the angular resolutions of the other SEDs.}. The spectral shapes of the Sy1 and Sy1.5 galaxies are practically identical (as can be seen from Figure \ref{template}). Based on this, and on the fact that both types of nuclei present broad lines in their optical spectra, in this work we consider them as Type-1 Seyferts. The Sy2 template defines the wavelength grid, and we performed a quadratic interpolation of nearby measurements of the individual Sy1 galaxies onto this grid (1.265, 1.60, 2.18, 3.80, 4.80, 8.74, and 18.3 \micron). We did not interpolate the sparse observations of NGC 6221 and NGC 6814, which only have NICMOS 1.6 \micron~data in addition to the MIR measurements. The interpolated fluxes were used solely for the purpose of deriving the average Sy1 template. The error bars correspond to the standard deviation of each averaged point, except for the 8.74 \micron~point (the wavelength chosen for the normalization).
In this case, we assigned a 15\% error, which is the nominal percentage considered for the N-band flux measurements (see Section \ref{observations}). \begin{figure*}[!ht] \centering \includegraphics[width=15cm]{f4.eps} \caption{\footnotesize{Observed IR SEDs for the seven Type-1 Seyfert galaxies (in color and with different symbols) used for the construction of the Sy1 average template (dashed pink). The Sy2 template from \citealt{Ramos09a} (solid black) is also plotted for comparison. The SEDs have been normalized at 8.74 \micron, and the average Sy1 and Sy2 SEDs have been shifted along the Y-axis for clarity.} \label{template}} \end{figure*} We measured the 1.265--18.3 \micron~IR slope ($f_{\nu} \propto \nu^{-\alpha_{IR}}$) of the Sy1 template, which is representative of the individual Sy1 SEDs: $\alpha_{IR} = 1.7\pm0.3$. We also calculated the NIR ($\alpha_{NIR} = 1.6\pm0.2$, from 1.265 to 8.74 \micron) and MIR spectral indices ($\alpha_{MIR} = 2.0\pm0.2$, using the 8.7 and 18.3 \micron~points). A flat NIR slope indicates an important contribution of hot dust emission (up to $\sim$1000-1200 K; \citealt{Rieke81,Barvainis87}) that comes from the immediate vicinity of the AGN. $\alpha_{IR}$, $\alpha_{NIR}$, and $\alpha_{MIR}$ values for the individual Type-1 Seyfert galaxies and for the mean Sy1 and Sy2 SEDs are shown in Table \ref{slopes}. The shape of the Sy2 mean SED is very steep ($\alpha_{IR} = 3.1\pm0.9$, $\alpha_{NIR}=3.6\pm0.8$, and $\alpha_{MIR}=2.0\pm0.2$) compared with those of the Type-1 Seyferts. In general, Sy2 have steeper 1--10 \micron~SEDs than Sy1 \citep{Rieke78,Edelson87,Ward87,Fadda98,Alonso01,Alonso03}. In contrast, the MIR slope turns out to be the same ($\alpha_{MIR}=2.0\pm0.2$) for both the Sy1 and Sy2 templates. \citet{Alonso03} also reported IR spectral indices measured from 1 to 16 \micron~for Sy1 and Sy1.5 galaxies ($\alpha_{IR}^{Alonso}$=1.5-1.6).
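The two-point spectral indices quoted here follow from the definition $f_{\nu} \propto \nu^{-\alpha}$ evaluated between two wavelengths; a minimal sketch (the fluxes are made-up numbers, not measurements from the sample):

```python
import numpy as np

def spectral_index(lam1_um, f1, lam2_um, f2):
    """Two-point index alpha with f_nu proportional to nu**(-alpha).
    Wavelengths in micron; only flux and frequency ratios matter."""
    nu_ratio = lam1_um / lam2_um          # nu2 / nu1 = lam1 / lam2
    return -np.log(f2 / f1) / np.log(nu_ratio)

# example: a flux rising as lambda^2 between 8.74 and 18.3 micron
# corresponds to f_nu ~ nu^-2, i.e. alpha = 2 (illustrative fluxes)
alpha_mir = spectral_index(8.74, 100.0, 18.3, 100.0 * (18.3 / 8.74) ** 2)
```

A steeper (redder) SED gives a larger alpha, which is why the Sy2 template, dominated by cooler dust, has the largest indices.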
On the other hand, the NIR slopes of the Sy1.8 and Sy1.9 in \citealt{Ramos09a} have intermediate values between those of Sy2 and Sy1 (mean slope $\alpha_{IR} = 2.0\pm0.4$), also in agreement with the IR slopes reported in \citet{Alonso03} for Sy1.8 and Sy1.9 ($\alpha_{IR}^{Alonso}$=1.8-2.6). In summary, the slope of the IR nuclear SED is generally correlated with Seyfert type: Type-2 nuclei show steeper SEDs, whereas those of Type-1 and intermediate Seyferts are flatter. The NIR excess responsible for flattening the SED of the Type-1 nuclei would come from the contribution of hot dust in the directly-illuminated faces of the clouds in the torus, as well as from the direct AGN emission (i.e., the tail of the optical power-law continuum). \input{tab6.tex} A similar comparison between Type-1 and Type-2 spectral shapes can be made using the H/N and N/Q flux ratios. H/N is larger for Type-1 Seyferts (0.07$\pm$0.03) than for Sy2 (0.003$\pm$0.002), as measured from the individual values of the Sy1 considered here and the Sy2 in \citealt{Ramos09a}. The difference in H/N between Sy1 and Sy2 galaxies is significant at the 100\% confidence level, according to the Kolmogorov-Smirnov (K-S) test. This ratio depends on both the torus inclination and covering factor, as we will discuss in Section \ref{nirflux}. On the other hand, N/Q is very similar for Type-1 and Type-2 Seyferts (mean values of 0.27$\pm$0.11 and 0.23$\pm$0.14, respectively). Values of H/N and N/Q for the individual Sy1 galaxies and the average templates are reported in Table \ref{slopes}. \section{SED Modelling} \label{modelling} \subsection{Clumpy Dusty Torus Models and Bayesian approach} The clumpy dusty torus models of \citet{Nenkova02} hold that the dust surrounding the central engine of an AGN is distributed in clumps, instead of homogeneously filling the torus volume.
These clumps are distributed with a radial extent $Y = R_{o}/R_{d}$, where $R_{o}$ and $R_{d}$ are the outer and inner radius of the toroidal distribution, respectively (see Figure \ref{clumpy_scheme}). The inner radius is defined by the dust sublimation temperature ($T_{sub} \approx 1500$ K), with $R_{d} = 0.4~(1500~K~T_{sub}^{-1})^{2.6}(L / 10^{45}\,\mathrm{erg ~s^{-1}})^{0.5}$ pc. Within this geometry, each clump has the same optical depth ($\tau_{V}$, defined at the $V$-band). The average number of clouds along a radial equatorial ray is $N_0$. The radial density profile is a power-law ($\propto r^{-q}$). A width parameter, $\sigma$, characterizes the angular distribution of the clouds, which has a smooth edge. The number of clouds along the LOS at an inclination angle $i$ is $N_{LOS}(i) = N_0~e^{-(i-90)^2/\sigma^2}$. Finally, the optical extinction produced by the torus along the LOS is computed as $A_{V}^{LOS} = 1.086~N_0~\tau_{V}~e^{-(i-90)^{2}/\sigma^{2}}$ mag. For a detailed description of the clumpy models see \citet{Nenkova02,Nenkova08a,Nenkova08b}. \begin{figure}[!ht] \centering \includegraphics[width=10cm]{f5.eps} \caption{\footnotesize{Scheme of the clumpy torus described in \citet{Nenkova08a,Nenkova08b}. The radial extent of the torus is defined by the outer radius ($R_o$) and the dust sublimation radius ($R_d$). All the clouds are assumed to have the same $\tau_{V}$, and $\sigma$ characterizes the width of the angular distribution. The number of cloud encounters is a function of the viewing angle, $i$.} \label{clumpy_scheme}} \end{figure} The clumpy database now contains 1.2$\times$10$^6$ models, calculated for a fine grid of model parameters. The inherent degeneracy between these parameters has to be taken into account when fitting the observables. To this end, we recently developed a Bayesian inference tool (BayesClumpy) that extracts as much information as possible from the observations.
Details on the interpolation methods and algorithms employed can be found in \citet{Asensio09}. Thus, for the following analysis of the Seyfert SEDs, we are not using the original set of models described in \citet{Nenkova08a,Nenkova08b}, but an interpolated version of them (see Figures 3 and 4 in \citealt{Asensio09} for a comparison between the original and interpolated models). In this work we use the most up-to-date version of the models, which are corrected for a mistake in the torus emission calculations that in principle only affects the AGN scaling factor (see erratum by \citealt{Nenkova10}). The present version of BayesClumpy has also been updated to use the Multinest algorithm of \citet{Feroz09} for sampling. This algorithm is very robust and efficient when sampling from complex posterior distributions. The prior distributions for the model parameters are assumed to be truncated uniform distributions in the intervals reported in Table \ref{parametros}. Therefore, we give the same weight to all the values in each interval. Apart from the six parameters that characterize the models, there is an additional parameter that accounts for the vertical displacement required to match the fluxes of a chosen model to an observed SED, which we allow to vary freely. This vertical shift scales with the AGN bolometric luminosity (see Section \ref{discussion}). In order to compare with the observations, BayesClumpy simulates the effect of the employed filters by integrating the product of the synthetic SED and the filter transmission curve. Observational errors are assumed to be Gaussian, and data points can also be treated as upper/lower limits. A detailed description of the Bayesian inference applied to the clumpy models can be found in \citet{Asensio09}. Additionally, for an example of clumpy model fitting of IR SEDs using BayesClumpy, see \citealt{Ramos09a}.
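For reference, the geometric quantities defined above ($N_{LOS}$, $A_V^{LOS}$, and the sublimation radius $R_d$) can be transcribed directly into code; this is an illustrative sketch of the formulas in the text, not code from BayesClumpy:

```python
import math

def n_los(i_deg, N0, sigma_deg):
    """Mean number of clouds along a line of sight at inclination i
    (i = 90 deg is edge-on), for the soft-edged angular distribution:
    N_LOS(i) = N0 * exp(-(i - 90)^2 / sigma^2)."""
    return N0 * math.exp(-((i_deg - 90.0) ** 2) / sigma_deg ** 2)

def av_los(i_deg, N0, sigma_deg, tau_v):
    """Optical extinction (mag) produced by the torus along the LOS:
    A_V^LOS = 1.086 * N0 * tau_V * exp(-(i - 90)^2 / sigma^2)."""
    return 1.086 * tau_v * n_los(i_deg, N0, sigma_deg)

def r_d_pc(l_bol, t_sub=1500.0):
    """Dust sublimation radius in pc for a bolometric luminosity l_bol
    in erg/s: R_d = 0.4 * (1500 / T_sub)^2.6 * (L / 1e45)^0.5 pc."""
    return 0.4 * (1500.0 / t_sub) ** 2.6 * (l_bol / 1e45) ** 0.5
```

An edge-on view ($i = 90\degr$) recovers the equatorial values $N_{LOS} = N_0$ and $A_V^{LOS} = 1.086\,N_0\,\tau_V$ mag.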
\input{tab7.tex} \subsection{Model Results} \subsubsection{Seyfert 1 Individual Fits} \label{individual} The results of the fitting process of the IR SEDs with the interpolated version of the clumpy models of \citet{Nenkova08a,Nenkova08b} are the posterior distributions for the six free parameters that describe the models and the vertical shift. These are indeed the probability distributions of each parameter, represented as histograms. When the observed data introduce sufficient information into the fit, the resulting posteriors will clearly differ from the input uniform priors, either showing trends or being centered at certain values within the intervals considered. For all the Sy1 fits, we considered uniform priors in the intervals shown in Table \ref{parametros}. The only exception is NGC 7469, for which we use a Gaussian prior of 85\degr$\pm$2\degr for the inclination angle of the torus, based on the value of the accretion disk viewing angle deduced from X-ray observations \citep{Nandra07}, assuming that the disk and the torus are coplanar. We fit the individual Sy1 SEDs with BayesClumpy, modelling the torus emission and the direct AGN contribution (the latter as a broken power law). We also consider the IR extinction curve of \citet{Chiar06} to take into account any possible foreground extinction from the host galaxy. The AGN scales self-consistently with the torus flux, and the foreground extinction (separate from the clumpy torus) is another free parameter, which we set as a uniform prior ranging from A$_V$=0 to 10 mag\footnote{Note that A$_V$ is the foreground extinction from the host galaxy, which is different from the A$_V^{LOS}$ value reported in Table \ref{clumpy_parameters}, corresponding to the extinction produced by the torus: $A_V^{LOS} = 1.086~N_0~\tau_{V}~e^{-(i-90)^{2}/\sigma^{2}}$ mag.}.
In addition to the Gemini MIR unresolved fluxes reported in Table \ref{psf}, we consider MIR nuclear fluxes from VISIR compiled from the literature when available (see Table \ref{literature1} in Appendix \ref{appendixA}). We find good agreement between the VISIR and T-ReCS unresolved fluxes. We did not include measurements in the VISIR PAH filters (8.59, 11.25 and 11.88 \micron) for the galaxies NGC 7469 and NGC 1097 because of their intense star formation and their already well-sampled SEDs. Although the solutions to the Bayesian inference problem are the probability distributions of each parameter, we can translate the results into corresponding spectra (Figures \ref{sy1_fits_a} and \ref{sy1_fits_b}). The solid lines correspond to the model described by the combination of parameters that maximizes their probability distributions (maximum-a-posteriori; MAP). Dashed lines represent the model computed with the median value of the probability distribution of each parameter. Shaded regions indicate the range of models compatible with the 68\% confidence interval for each parameter around the median. In Figure \ref{ngc1097} we show the posteriors of the six torus parameters (the vertical shift and the foreground extinction have been marginalized) for the galaxy NGC 1097. Those for the rest of the Sy1 galaxies are presented in Appendix \ref{appendixA} (Figures \ref{ngc1566} to \ref{ngc4151}). \begin{figure*}[!ht] \centering {\par \includegraphics[width=8cm]{f6a.eps} \includegraphics[width=8cm]{f6b.eps} \includegraphics[width=8cm]{f6c.eps} \includegraphics[width=8cm]{f6d.eps}\par} \caption{\footnotesize{High spatial resolution IR SEDs of the Sy1 galaxies NGC 1097, NGC 1566, NGC 6221, and NGC 6814. Solid and dashed lines correspond to the MAP and median models, respectively.
Shaded regions indicate the range of models compatible with the 68\% confidence interval for each parameter around the median.} \label{sy1_fits_a}} \end{figure*} \begin{figure*}[!ht] \centering {\par \includegraphics[width=8cm]{f7a.eps} \includegraphics[width=8cm]{f7b.eps} \includegraphics[width=8cm]{f7c.eps}\par} \caption{\footnotesize{Same as in Figure \ref{sy1_fits_a}, but for NGC 7469, NGC 3227, and NGC 4151.} \label{sy1_fits_b}} \end{figure*} \begin{figure*}[!ht] \centering {\par \includegraphics[width=5.3cm]{f8a.eps} \includegraphics[width=5.3cm]{f8b.eps} \includegraphics[width=5.3cm]{f8c.eps} \includegraphics[width=5.3cm]{f8d.eps} \includegraphics[width=5.3cm]{f8e.eps} \includegraphics[width=5.3cm]{f8f.eps}\par} \caption{\footnotesize{Probability distributions resulting from the fit of NGC 1097. Solid and dashed lines represent the mode and the median of each distribution, and dotted lines indicate the 68\% confidence level around the median. The histograms have been smoothed for presentation purposes, occasionally leading to small offsets between the solid line corresponding to the mode and the position of the maximum of the histograms.} \label{ngc1097}} \end{figure*} The more information the IR SEDs provide, the better the probability distributions are constrained. From the analysis performed here and in \citealt{Ramos09a}, it appears important to sample the SED in the wavelength range around 3-4 \micron~and also at $\sim$18 \micron~to constrain the model parameters. A detailed study of the influence of different filters/wavelengths in constraining the clumpy model parameter space will be the subject of a forthcoming paper (Asensio Ramos et al., in prep.). The posterior information for the seven Sy1 galaxies is summarized in Table \ref{clumpy_parameters}.
\input{tab8.tex} From the individual fits of the Sy1 galaxies in our sample with clumpy torus models we obtain the following results: \begin{enumerate} \item The average number of clouds along an equatorial ray is within the interval $N_0$=[1, 8]. \item Low values of $\sigma$ are preferred: $\sigma~\simeq$ [25\degr, 45\degr], and intermediate inclination angles of the torus are found: $i~\simeq$ [35\degr, 85\degr]. \item The radial extent of the torus ($Y$=$R_o$/$R_d$) is weakly constrained within the interval $Y~\simeq$ [15,20]. \item Values in the range $\tau_{V}~\simeq$ [40,140] are found for the optical depth of each cloud for all the galaxies. \item The radial density profile appears constrained within the interval $q$=[0.2,1.9], with the only exception of NGC 1097, for which $q$=2.7$\pm^{0.2}_{0.3}$. \item The 10 \micron~silicate feature appears in shallow emission or absent in the fitted models, with the exception of the NGC 6221 MAP model. The weak silicate feature arises in the clumpy models because both illuminated and dark cloud sides contribute to the observed spectrum (see Section 5.4 in \citealt{Ramos09a} for a detailed discussion on the silicate feature modelling). \item The optical extinction produced by the torus along the LOS results in $A_{V}^{LOS}<690$ mag for all the galaxies. \item The foreground extinction from the host galaxy, which obscures the AGN direct emission in our modelling, results in A$_V$$<$5 mag (see Figures \ref{sy1_fits_a} and \ref{sy1_fits_b}). The values derived are consistent with those published in the literature, e.g. the nuclear optical extinction of A$_V$=3 mag measured from the optical spectrum of NGC 6221 \citep{Levenson01}, A$_V\sim$1 mag determined from NACO/VLT colour maps \citep{Prieto05} for NGC 1097, and A$_V$=4.5-4.9 mag reported by \citet{Mundell95} for NGC 3227. All the above intervals or limits of the parameters correspond to median values.
We chose the medians instead of the MAPs because the former gives less biased information about the result, since it takes into account degeneracies, while the MAP does not. \end{enumerate} The clumpy models successfully reproduce the observed Sy1 SEDs studied here, with compatible results among them. This indicates that the NIR and MIR unresolved fluxes employed here are dominated by a combination of reprocessed emission from dust in the torus and direct AGN emission. Our modelling results for NGC 1097 somewhat contradict those presented in \citet{Mason07}. The latter authors unsuccessfully tried to reproduce the T-ReCS 11.66 and 18.3 \micron~aperture fluxes that they measured for this galaxy using the clumpy models of \citet{Nenkova02}. The difference with our result is probably due to i) the fact that we obtained unresolved fluxes using PSF subtraction over the same images presented in \citet{Mason07}, reducing the potential contamination from star formation; ii) they did not consider NIR data, but only the MIR fluxes; and finally iii) they faced the degeneracy problem of the clumpy models without using a sophisticated tool such as BayesClumpy. \subsubsection{Sy2 and intermediate-type Seyfert results} \label{comparison} After the publication of the erratum by \citet{Nenkova10}, where the authors report a mistake in the torus emission calculations, we repeated all the fits presented in \citealt{Ramos09a} using our updated version of BayesClumpy. In order to do a proper comparison with the results for the Sy1 galaxies presented here, we performed the fits of the Sy2 and intermediate-type Seyferts considering exactly the same priors as for the Sy1. As in \citealt{Ramos09a}, we did not consider the direct AGN contribution for either Sy1.8/1.9 or Sy2. For the new fits we considered MIR nuclear fluxes from VISIR when available, in addition to the NIR and MIR fluxes reported in \citealt{Ramos09a}.
We did not include measurements in the VISIR PAH filters (8.59, 11.25 and 11.88 \micron) for those galaxies with very intense star formation such as NGC 7582. Many of the SEDs in \citealt{Ramos09a} have also been updated with NIR data from recent publications. In Table \ref{literature2} (Appendix \ref{appendixB}) we report the NIR-to-MIR SEDs for all the sample. In general, the results from the fitting with the most up-to-date version of the interpolated clumpy models, which are reported in Table \ref{clumpy_parameters2} in Appendix \ref{appendixB}, are compatible with those presented in \citealt{Ramos09a} at the 1-sigma level. Indeed, if we compare the results for the five galaxies for which we fitted exactly the same SEDs as in the previous work (Circinus, Mrk 573, NGC 1386, NGC 1808, and NGC 1365) we find that they are practically identical. The only fits that are completely different from those presented in the previous work correspond to Centaurus A and NGC 3281. This is a consequence of adding new MIR data from VISIR and/or a priori information for the inclination angle of the torus (see Appendix \ref{appendixB}). In Table \ref{types} we show the ranges of the parameters found for the Sy1, intermediate-type Seyferts, and Sy2. We have excluded the unreliable fits of NGC 1808 and NGC 7582, as we did in \citealt{Ramos09a}. In the case of NGC 1808, the intense star formation taking place in its nuclear region and the lower spatial resolution of its IR SED (all from 3-4 m telescopes) make it likely contaminated with starlight, resulting in its peculiar shape. For NGC 7582, the intense circumnuclear star formation and the edge-on orientation of the galaxy make it difficult to isolate the torus emission from that of the host galaxy. In the fit, the silicate feature is predicted in emission, while MIR spectroscopy shows it in strong absorption \citep{Siebenmorgen04}.
\input{tab9.tex} We considered the Sy1.8 and Sy1.9 types as a separate group in between the Sy1 \& Sy1.5 and Sy2 because the IR slopes measured from their SEDs are intermediate between those of Sy1 and Sy2, as measured for our sample and also as reported in the literature (Section \ref{averageSy1} and references therein). By looking at the ranges of parameters for the Sy1.8 and Sy1.9 types, we find similarities with the Sy1 \& Sy1.5 group in $Y$, $N_0$, and $\tau_V$, whereas $\sigma$ and $q$ are more similar to those of Sy2. \section{Comparison between Type-1 and Type-2 Seyfert nuclei} \label{discussion} The main aim of this work is to enlarge the number of Type-1 Seyferts in the original sample of \citealt{Ramos09a} in order to better compare Type-1 and Type-2 tori under the assumption that the SEDs studied here are torus/AGN dominated. Despite the relatively low number of objects considered (7 Type-1 and 9 Type-2 Seyferts\footnote{Here we consider the nine Sy2 in \citealt{Ramos09a} with reliable fits (NGC 1808 and NGC 7582 are excluded).}), we find that some of the parameters are significantly different between Sy1 and Sy2. To take full advantage of the Bayesian approach employed here for the individual fits, the best way to compare the results for Sy1 and Sy2 galaxies is to derive joint posterior distributions for the full Type-1 and Type-2 datasets, respectively. If $D_i$ contains the observed data from the i-th SED, assuming that the different SEDs are statistically independent, we can use Bayes' theorem to write the posterior for all galaxies together as: \begin{equation} p(\mbox{\boldmath $\theta$}|\{D_i\}) \propto p(\{D_i\}|\mbox{\boldmath $\theta$}) p(\mbox{\boldmath $\theta$}) = \prod_{i=1}^N p(D_i|\mbox{\boldmath $\theta$}) p(\mbox{\boldmath $\theta$}), \end{equation} where $\mbox{\boldmath $\theta$}=(\sigma,Y,N_0,q,\tau_V,i)$.
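Schematically, the product above amounts to summing per-galaxy log-likelihoods on a shared parameter grid and adding the common log-prior once; a minimal numerical sketch (the grid and array names are ours, and this is unrelated to the actual BayesClumpy/Multinest implementation):

```python
import numpy as np

def joint_posterior(log_likes, log_prior):
    """Combine per-SED log-likelihoods (one row per galaxy, evaluated on a
    common parameter grid) into a normalized joint posterior, assuming the
    SEDs are statistically independent."""
    log_post = log_prior + np.sum(log_likes, axis=0)
    log_post -= log_post.max()        # guard against underflow
    post = np.exp(log_post)
    return post / post.sum()          # normalize on the grid
```

With a flat prior, two Gaussian likelihoods centred at different values combine into a narrower posterior between them, which is the sense in which the joint fit pools the individual SED constraints.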
Thus, we normalized all the Sy1 SEDs at 8.74 \micron~and fitted them together using BayesClumpy, and we did the same for the Sy2. For those galaxies without flux measurements in the Si-2 filter we performed a quadratic interpolation of the SED and used the interpolated values at 8.74 \micron~to normalize the real data. We did not use the interpolated values in the fits. We considered the mean redshift for the Sy1 (z=0.0061$\pm$0.0045) and for the Sy2 (0.0078$\pm$0.0051) in the fits\footnote{Indeed, since all the galaxies are local Seyferts, the results are the same if we consider that the SEDs are rest-frame.}. In Figure \ref{comparison1and2} we show the Sy1 (left panel) and Sy2 fits (right panel). Note that the MAP and median models predict a flat SED with the silicate feature in weak emission for the Sy1 galaxies, and a steeper SED with the silicate band in shallow absorption for the Sy2. \begin{figure*}[!ht] \centering \par{ \includegraphics[width=8.1cm]{f9a.eps} \includegraphics[width=8.1cm]{f9b.eps}\par} \caption{\footnotesize{Same as in Figure \ref{sy1_fits_a}, but for the Sy1 (left) and Sy2 SEDs (right) normalized at 8.74 \micron.} \label{comparison1and2}} \end{figure*} \begin{figure*}[!ht] \centering {\par \includegraphics[width=8cm]{f10a.eps} \includegraphics[width=8cm]{f10b.eps} \includegraphics[width=8cm]{f10c.eps} \includegraphics[width=8cm]{f10d.eps} \includegraphics[width=8cm]{f10e.eps} \includegraphics[width=8cm]{f10f.eps}\par} \caption{\footnotesize{Same as in Figure \ref{ngc1097}, but for the joint Sy1 and Sy2 SEDs. KLD values derived from the comparison between Sy1 and Sy2 for each parameter are labelled.} \label{divergence_plot}} \end{figure*} The comparison between the Sy1 and Sy2 posterior distributions is shown in Figure \ref{divergence_plot}. From a visual inspection it is clear that the joint posteriors of the parameters N$_0$, $q$, $\tau_V$, and $\sigma$ are completely different between Sy1 and Sy2.
There is no overlap between the 1-sigma intervals. In Table \ref{ks} we report the median and mode values of the histograms in Figure \ref{divergence_plot}. In order to quantify how different the probability distributions are, we calculated the Kullback-Leibler divergence (KLD; \citealt{Kullback51}) between the Sy1 and Sy2 posteriors. This divergence takes into account the full shape of the posterior; it is always non-negative, being equal to zero only when the two distributions are identical. Therefore, the larger the value of KLD, the more different the posteriors. We find KLD$>$1 for $\sigma$ (KLD=5.2), $N_0$ (KLD=25), $q$ (KLD=1.2), and $\tau_V$ (KLD=21). These are indeed the four parameters whose 1-sigma regions do not overlap (see Figure \ref{divergence_plot}) and thus we consider their differences significant between Sy1 and Sy2$\footnote{See further discussion on the Sy2 $q$ parameter results below.}$. For both $Y$ and $i$ we find KLD$<$1 and similar median values between Sy1 and Sy2. \input{tab10.tex} Sy1 tori are narrower and have fewer clouds ($\sigma$=44\degr$\pm^{8\degr}_{7\degr}$; N$_0$=4$\pm$1) than those of Sy2 ($\sigma$=63\degr$\pm^{4\degr}_{5\degr}$; N$_0$=11$\pm^{2}_{1}$). The radial density distribution of the clouds is also different between the two Seyfert types according to this analysis: in Sy2, the majority of the clumps are distributed very close to the nucleus (i.e. steep radial density distribution; $q$=2.3$\pm$0.1), whereas for Sy1 the cloud distribution is flatter ($q$=0.8$\pm$0.2). On the other hand, the optical depth of the clouds in Sy1 tori is larger ($\tau_V$=133$\pm^{8}_{9}$) than in Sy2 ($\tau_V$=30$\pm$1). Taking a closer look at the right panel of Figure \ref{comparison1and2}, which corresponds to the Sy2 fit, it is clear that some of the data points are underestimated by the model. This happens because the Circinus SED, which has the smallest error bars, has more weight in the fit than the rest of the SEDs.
In order to check that our Sy2 results are not completely biased by Circinus, we have repeated the fit excluding it, as well as NGC 1386, which is the least restricted SED in terms of data points, getting rid of the extremes. From the new fit, we find even larger differences with the Sy1 values of $\sigma$, $N_0$, and $\tau_V$. On the other hand, the $q$ joint posterior becomes comparable to that of the Sy1, probably as a consequence of getting rid of the two SEDs fitted with the largest $q$ values among the Sy2. Considering all of the above, we prefer to be cautious about the $q$ parameter, and only consider $\sigma$, $N_0$, and $\tau_V$ genuinely different between Sy1 and Sy2. Interestingly, we find high as well as low values of the inclination angle of the torus for Sy1 and Sy2 (see Table \ref{types}). This variety in the $i$ values translates into the similar median values found for the joint Sy1 and Sy2 posterior distributions (47\degr$\pm^{7\degr}_{6\degr}$ for Sy1 and 54\degr$\pm^{10\degr}_{11\degr}$ for Sy2), which are also intermediate within the considered prior ($i$=[0\degr,90\degr]). {\it This is telling us that, in the clumpy torus scenario, the classification of a Seyfert galaxy as a Type-1 or Type-2 depends more on the intrinsic properties of the torus than on its inclination.} Our results contradict those presented by \citet{Honig10}, who find similar average values of $N_0$ for both the Sy1 and Sy2 galaxies in their sample. However, it is worth mentioning that they fixed low values of the inclination angle of the torus for Sy1 and high values for Sy2, which has likely influenced their results. In Figure \ref{covering} we represent the median values of $\sigma$ and $N_0$ for the different Seyfert types over the covering factor contours\footnote{A similar plot showing values for Sy2 galaxies from \citealt{Ramos09a} and PG quasars from \citet{Mor09} was shown in the talk by M.
Elitzur at the Physics of Galactic Nuclei conference held 15-19 June, 2009 at Ringberg Castle. Proceedings published online at http://www.mpe.mpg.de/events/pgn09/online\_proceedings.html.}. The covering factor is defined as C$_T=1-\int e^{-N_{LOS}(i)}\,d\cos(i)$. Type-1 nuclei tend to be located within lower C$_T$ contours (C$_T\leq$0.6) than those of Type-2s, for which C$_T\geq$0.5, with the exceptions of Centaurus A and Mrk 573\footnote{ As discussed in \citealt{Ramos09a}, the fit of Centaurus A is complicated by the presence of a dust lane of A$_V\sim$7-8 mag that is likely affecting the NIR nuclear fluxes, as well as the possible synchrotron contamination of the MIR fluxes. Mrk~573 has been recently reclassified as an obscured Narrow-line Seyfert 1 (NLSy1) based on NIR spectroscopy \citep{Ramos08,Ramos09b}. However, here we considered it as a Sy2 because of the similarity in SED shape with the rest of the sample.}. We have represented with larger symbols the median values from the joint $\sigma$ and $N_0$ posteriors reported in Table \ref{ks}. Since the covering factor is a non-linear function of the torus parameters, we took full advantage of our Bayesian approach and generated joint posterior distributions for C$_T$ from those in Figure \ref{divergence_plot}, which are shown in the left panel of Figure \ref{comparison_plot}. The median values of the histograms are C$_T$(Sy2)=0.95$\pm$0.02 and C$_T$(Sy1)=0.5$\pm$0.1. The divergence between the Sy1 and Sy2 C$_T$ posteriors is KLD=28, indicating that the difference is significant (the 1-sigma regions do not overlap). Thus, Sy1 tori in our sample have lower C$_T$s than those of Sy2, implying that they are intrinsically different. \begin{figure*}[!ht] \centering \includegraphics[width=15cm]{f11.eps} \caption{\footnotesize{$\sigma$ versus N$_0$ for the individual galaxies. Either median values or upper/lower limits are taken from the fits presented here.
Dots correspond to Type-1 Seyferts, triangles to Sy1.8 and Sy1.9, and squares to Sy2. Error bars indicate the 68\% confidence level around the median. Note the segregation between Seyfert types, indicating the intrinsic difference between their tori in terms of covering factor (indicated in contours). The big dot and square correspond to the average $\sigma$ and N$_0$ values for Sy1 and Sy2 from Table \ref{ks}. \label{covering}}} \end{figure*} \begin{figure*}[!ht] \centering \includegraphics[width=15cm]{f12.eps} \caption{\footnotesize{Joint posterior distributions of the torus covering factor (left panels) and escape probability (right panels) for Sy1 (top) and Sy2 galaxies (bottom). The values of the Kullback-Leibler divergence obtained from the comparison between Sy1 and Sy2 are KLD=28 for C$_T$ and KLD=29 for P$_{esc}$. \label{comparison_plot}}} \end{figure*} \subsection{Near-infrared Emission and Torus Angular Width} \label{nirflux} As reported in Section \ref{averageSy1}, Type-1 Seyferts present characteristically higher H/N ratios and flatter SEDs than those of Sy2 nuclei. The enhancement of the NIR emission of Type-1 AGN is produced by the hot dust from the directly-illuminated faces of the clumps in the torus which are close to the central engine, and also by direct AGN emission (i.e., the tail of the optical/ultraviolet power-law continuum), which strongly flattens their IR SEDs \citep{Rieke81,Barvainis87}. Based on the IR SEDs presented here, the relative NIR contribution to the SED is generally correlated with the Seyfert type. Sy2 galaxies show lower NIR to MIR ratios (H/N=0.003$\pm$0.002) than Sy1 (0.06$\pm$0.03). In the context of the clumpy models, the presence of a cloud along the LOS, which may occur from any viewing angle, results in a Type-2 classification.
{\it Cloud encounters are more probable at large inclination angles ($i$), but there is always a finite probability for having an unobscured view of the AGN.} In fact, the likelihood of intercepting a cloud along a LOS depends on the combination of $i$, $N_0$, and $\sigma$. Thus, the preference for lower values of these parameters (especially $N_0$ and $\sigma$) in Type-1 Seyferts increases the likelihood of unimpeded views of some directly-illuminated cloud faces (i.e., those on the ``back'' side of the torus) and direct AGN emission, resulting in an increase of the NIR flux. The latter can be represented in terms of the escape probability P$_{esc}$ (see equation 4 in \citealt{Nenkova08a}). For a total number of clouds N$_{LOS}$ along a path, P$_{esc}\simeq e^{-N_{LOS}}$ when the clouds are optically thick ($\tau_\lambda>1$). In Figure \ref{slope} we show the dependence of the H/N ratio on the escape probability. All the Sy2 are in the bottom-left corner of the diagram, whereas the Sy1 have higher values of the H/N ratio and P$_{esc}$=[1\%,92\%]. Sy1.8 and Sy1.9 galaxies present intermediate values between those of Sy1 and Sy2. The derived joint posterior distributions of the escape probabilities for Sy1 and Sy2 (KLD=29 between them) are shown in Figure \ref{comparison_plot}, and the median values of the histograms are P$_{esc}$(Sy1) = 18$\pm$3\% and P$_{esc}$(Sy2) = 0.05$\pm^{0.08}_{0.03}$\%. Thus, while for tori with high values of $i$, $N_0$, and $\sigma$ the probability of having a direct view of the AGN is very small, it increases for objects with narrower, less inclined tori containing fewer clumps.
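The escape probability and the covering factor discussed above follow directly from $N_{LOS}$; a small numerical sketch using a simple trapezoidal integration (illustrative only, not the machinery used for the fits):

```python
import numpy as np

def n_los(i_deg, N0, sigma_deg):
    """Mean number of clouds along the LOS at inclination i (degrees)."""
    return N0 * np.exp(-((i_deg - 90.0) ** 2) / sigma_deg ** 2)

def p_esc(i_deg, N0, sigma_deg):
    """Probability of an unobscured view of the AGN for optically
    thick clouds: P_esc ~ exp(-N_LOS)."""
    return np.exp(-n_los(i_deg, N0, sigma_deg))

def covering_factor(N0, sigma_deg, n=20001):
    """C_T = 1 - integral_0^1 exp(-N_LOS(i)) d cos(i), trapezoid rule."""
    mu = np.linspace(0.0, 1.0, n)                 # mu = cos(i)
    f = p_esc(np.degrees(np.arccos(mu)), N0, sigma_deg)
    integral = (0.5 * (f[0] + f[-1]) + f[1:-1].sum()) * (mu[1] - mu[0])
    return 1.0 - integral
```

A torus with no clouds gives C$_T$ = 0, and P$_{esc}$ drops toward edge-on views, matching the qualitative behaviour described in the text.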
{\it In this work we show for the first time that, in the clumpy torus scenario, the classification as a Type-1 or Type-2 Seyfert depends more on the intrinsic properties of the torus than on the inclination angle itself.} The Sy1 galaxies in our sample have larger P$_{esc}$ than the Type-2 Seyferts, and as a consequence we detect the broad lines in their spectra. \begin{figure*}[!ht] \centering \includegraphics[width=13cm]{f13.eps} \caption{\footnotesize{H/N versus P$_{esc}$. Lower values of $N_0$, $\sigma$, and $i$ result in higher P$_{esc}$ and thus higher NIR-to-MIR ratios. The inset shows an enlargement of the region occupied by the Sy2. Symbols are the same as in Figure \ref{covering}. The big dot and square correspond to the median values of P$_{esc}$ for Sy1 and Sy2 from Figure \ref{comparison_plot}, and to the H/N values of the Sy1 and Sy2 average templates (Table \ref{slopes}).} \label{slope}} \end{figure*} \subsection{AGN Luminosities} \label{luminosidades} The clumpy model fits yield the intrinsic bolometric luminosity of the AGN (L$_{bol}^{AGN}$) by means of the vertical shift applied to match the observational data points. Combining this value with the torus luminosity (L$_{bol}^{tor}$), obtained by integrating the corresponding model torus emission (without the AGN contribution), we derive the reprocessing efficiency (RE) of the torus (L$_{bol}^{tor}$/$L_{bol}^{AGN}$). These values are calculated within the Bayesian framework, by combining the posterior distributions of the model parameters. Median values and 1-sigma intervals for Sy1 galaxies are reported in Table \ref{lum}, and those for Sy2 and intermediate-type Seyferts are shown in Table \ref{lum2} (Appendix \ref{appendixB}).
\input{tab11.tex} Sy2 tori in our sample are more efficient reprocessors than Sy1, absorbing and re-emitting the majority of the intrinsic AGN luminosity in the IR: RE(Sy2)=[0.4, 1.0], with a median value of 0.8, and RE(Sy1)=[0.2,0.7], with a median of 0.5. It makes no sense to compare the derived joint Sy1 and Sy2 posterior distributions of RE, because we normalized the SEDs to perform the fits, and consequently the derived L$_{bol}^{AGN}$ are meaningless. We considered a possible dependency of the RE (or alternatively the covering factor; C$_T$) on L$_{bol}^{AGN}$, since the amount of incoming radiation from the AGN could possibly have some influence on the reprocessed energy or even on the torus properties (e.g., receding torus scenario; \citealt{Lawrence91}). However, we find no relationship between the two quantities in the luminosity range considered. This means that the reprocessing efficiency depends primarily on the total number of clouds available to absorb the incident radiation, i.e. on the torus covering factor. However, the possible dependence of the torus properties on the AGN luminosity considering a broader luminosity range is further investigated in \citet{Alonso11}. Figure \ref{luminosity} shows that there is a correlation between RE and the torus covering factor for the galaxies in our sample. The larger the C$_T$, the more efficient the reprocessor. Type-2 tori have in general larger RE than those of Type-1, with the exceptions of Centaurus A and Mrk 573. Despite the relatively small sample, Figure \ref{luminosity} shows a segregation between Sy1 and Sy2 galaxies in terms of reprocessing efficiency and covering factor. {\it If all Seyfert nuclei are identical, as the unified model predicts, only the viewing angle should determine the classification, not the properties of the torus itself.
While our results are limited by the small sample analyzed, they suggest instead that the Type-1/Type-2 classification depends on the torus intrinsic properties rather than on the torus inclination alone.} \begin{figure*}[!ht] \centering \includegraphics[width=13cm]{f14.eps} \caption{\footnotesize{Reprocessing efficiency versus torus covering factor for the individual galaxies. Higher values of C$_T$ translate into more efficient reprocessors. In general, Type-2 tori are more efficient than Type-1 tori, with the exceptions of Centaurus A and Mrk 573. Symbols are the same as in Figure \ref{covering}. } \label{luminosity}} \end{figure*} The bolometric luminosity of the intrinsic AGN derived from the fits (L$_{bol}^{AGN}$; column 2 in Table \ref{lum}) can be directly compared with those from the absorption-corrected 2-10 keV luminosities compiled from the literature (L$_{X bol}^{AGN}$; column 6 in Table \ref{lum}), which is an effective proxy for the AGN bolometric luminosity \citep{Elvis94}. To obtain L$_{X bol}^{AGN}$ from the intrinsic 2-10 keV values we applied a bolometric correction factor of 20 \citep{Elvis94}. We find similar values of L$_{Xbol}^{AGN}$ and L$_{bol}^{AGN}$ for all the Sy1 (see Table \ref{lum}). In the case of NGC 1097, we expect to have some contamination from the nuclear starburst (see Section \ref{individual} and \citealt{Mason07}). Thus, if the L$_{Xbol}^{AGN}$ value represents the AGN contribution only, $L_{bol}^{AGN}$ may be overestimated because of the starburst contribution, thus resulting in a low ratio between the two measurements. The good agreement with the observational X-ray measurements reinforces the results from our SED modelling. The same comparison with X-ray data for Sy2 and intermediate-type Seyferts is shown in Table \ref{lum2} in Appendix \ref{appendixB}.
\subsection{Torus Size} In general, the IR SED fitting does not constrain the size of the torus ($Y$) as well as other model parameters (see Section 5.3 in \citealt{Ramos09a}). The NIR and MIR observations are sensitive to the warm dust (located within $\sim$10 pc of the nucleus), which depends on the combination of model parameters N$_0$, $q$, and $Y$ \citep{Thompson09}. FIR observations would probe the torus extent more directly. However, the fits performed here for Type-1 and Type-2 Seyferts are consistent with a small torus size, confined to scales of less than 6 pc (see below). Uniform density models require the dusty torus to extend over large dimensions, to provide the cool dust that produces the IR emission (e.g., \citealt{Granato94}). In contrast, in a clumpy distribution, different dust temperatures can coexist at the same distance, including cool dust at small radii \citep{Nenkova02}, so large tori, which are inconsistent with imaging and interferometry results, are not necessary. Indeed, for the Seyfert galaxies considered here, we found $Y$ ranging from 10 to 25 (see Table \ref{clumpy_parameters}), showing that small tori can account for the observed IR nuclear emission. The outer size of the torus scales with the AGN bolometric luminosity: $R_{o} = Y R_{d}$, so assuming a dust sublimation temperature of 1500 K, $R_o= 0.4~Y~(L_{bol}^{AGN}/10^{45})^{0.5}$ pc. We derived $R_o$ posterior distributions from those of L$_{bol}^{AGN}$ and $Y$ and find that all tori in our sample have outer radii smaller than 6 pc (Tables \ref{lum} and \ref{lum2}), in agreement with MIR direct imaging of nearby Seyferts \citep{Packham05,Radomski08} and also interferometric observations \citep{Jaffe04,Tristram07,Meisenheimer07,Raban09}. 
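The scaling relation for the torus outer radius can be sketched as follows (assuming, as above, a dust sublimation temperature of 1500 K; the function name is ours):

```python
import math

# R_o = Y * R_d, with the dust sublimation radius
# R_d = 0.4 (L_bol^AGN / 1e45 erg/s)^0.5 pc for T_sub = 1500 K.
def torus_outer_radius_pc(Y, l_bol_erg_s):
    """Torus outer radius in pc, from the clumpy-model parameter Y
    and the AGN bolometric luminosity in erg/s."""
    return 0.4 * Y * math.sqrt(l_bol_erg_s / 1e45)

# For Y = 10 and L_bol = 1e45 erg/s the outer radius is ~4 pc,
# consistent with the <6 pc tori found for this sample.
print(torus_outer_radius_pc(10, 1e45))
```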
For example, the estimated outer radius for NGC 1097 (R$_o$=0.4$^{+0.1}_{-0.2}$ pc) is in agreement with the value derived from NIR NACO/VLT observations \citep{Prieto05}, which constrained the radius of the central compact source to r$<$5 pc. \citet{Mason07} also derived an upper limit of 19 pc for the size of the unresolved component from the MIR images employed in this work for NGC 1097. For the case of NGC 7469, for which we derive R$_o$=5$^{+1}_{-2}$ pc, \citet{Tristram09} reported an estimated size of the dust distribution of 10 pc, based on MIDI/VLT interferometric observations, although compromised by the high level of noise and the lack of fringes. \section{Conclusions} \label{final} We present new subarcsecond resolution MIR imaging data at 8.7 and 18.3 $\micron$ for the three Type-1 Seyfert galaxies NGC 6221, NGC 6814, and NGC 7469. NGC 7469 and NGC 6221 appear extended, with part of this extended emission associated with emitting dust heated by star formation. In contrast, NGC 6814 lacks any extended emission. Nuclear MIR and NIR fluxes for the three galaxies, as well as for NGC 1566, NGC 1097, NGC 3227, and NGC 4151, are reported. We construct AGN-dominated nuclear SEDs and fit them with clumpy torus models and a Bayesian approach to derive torus parameters. The main results of this work for the individual Sy1 galaxies and from the comparison with the Sy2 and intermediate-type Seyferts in \citealt{Ramos09a} are summarized as follows: \begin{itemize} \item We derived an average Type-1 Seyfert template from the individual Sy1 SEDs, which is flatter (mean IR slope $\alpha_{IR} = 1.7\pm0.3$) than the Type-2 mean SED presented in \citealt{Ramos09a} ($\alpha_{IR} = 3.1\pm0.9$). \item The NIR-to-MIR flux ratios measured from the individual SEDs are larger for Sy1 (0.07$\pm$0.03) than for Sy2 (0.003$\pm$0.002). Indeed, the distributions of NIR-to-MIR values are significantly different between the two types at the 100\% confidence level. 
\item The interpolated version of the clumpy models of \citet{Nenkova08a,Nenkova08b} employed here successfully reproduces the nuclear IR SEDs of Type-1 Seyferts with compatible results among them. Consequently, the observed nuclear IR emission of these galaxies can be accounted for by dust heated by the central engine and direct AGN emission. \item We derive joint posterior distributions for Sy1 and Sy2 and find that the differences in N$_0$, $\tau_V$, and $\sigma$ between Type-1 and Type-2 tori are significant according to the Kullback-Leibler divergence and the lack of overlap between their 1-sigma confidence intervals. \item We find that Sy1 tori are narrower and have fewer clouds than those of Sy2. Additionally, the optical depth of the clouds in Sy1 tori is larger than in Sy2. \item There is no clear trend in the values of the inclination angle of the torus for Sy1 and Sy2 (slightly larger values are found for Sy2). \item The larger the covering factor of the torus, the smaller the probability for an AGN-produced photon to escape without intercepting a cloud along a LOS. In our sample, Seyfert 2 tori have larger covering factors and smaller escape probabilities than those of Seyfert 1. \item Despite the limited number of galaxies considered, we find that Type-2 tori are in general more efficient reprocessors than those of Type-1. Indeed, there is a correlation between the reprocessing efficiency and the torus covering factor. \item For the Seyfert galaxies studied here, we find that tori with outer radii smaller than 6 pc can account for the observed NIR/MIR nuclear emission, in agreement with MIR interferometric observations. \end{itemize} {\it Summarizing, we find tantalizing evidence, albeit for a small sample of Seyfert galaxies and under the clumpy torus hypothesis, that the classification as a Type-1 or a Type-2 depends more on the intrinsic properties of the torus than on its mere inclination towards us.} \vspace{4cm}
\section{Introduction} The tree augmentation problem (TAP) is a central problem in network design. In TAP, the input is a 2-edge-connected\footnote{A graph $G$ is 2-edge-connected if it remains connected after the removal of any single edge.} graph $G$ and a spanning tree $T$ of $G$, and the goal is to augment $T$ to be 2-edge-connected by adding to it a minimum size (or a minimum weight) set of edges from $G$. Augmenting the connectivity of $T$ makes it resistant to any single link failure, which is crucial for network reliability. TAP is extensively studied in the sequential setting, with several classical 2-approximation algorithms \cite{frederickson1981approximation,khuller1993approximation,goemans1994improved,jain2001factor}, as well as recent advances with the aim of achieving better approximation factors \cite{kortsarz2016simplified, DBLP:journals/corr/FioriniGKS17, adjiashvili2017beating, cheriyan2015approximating, DBLP:conf/stoc/0001KZ18}. TAP is part of a wider family of \emph{connectivity augmentation} problems. Finding a minimum spanning tree (MST) is another prime example of a problem in this family, but, although an MST is a low-cost backbone of the graph, it cannot survive even one link failure. Hence, in order to guarantee stronger reliability, it is vital to find subgraphs with higher connectivity. The motivation for considering TAP arises when adding any new edge to the backbone incurs a cost: if we are already given a subgraph with some connectivity guarantee, we would naturally like to augment it with additional edges of minimum number or weight, rather than compute a well-connected low-cost subgraph from scratch. Connectivity augmentation problems also serve as building blocks in other connectivity problems, such as computing the minimum $k$-edge-connected subgraph. 
A natural approach is to start by building a subgraph that satisfies some connectivity guarantee (e.g., a spanning tree), and then augment it to have stronger connectivity. Since the main motivation for TAP is improving the reliability of distributed networks, it is vital to consider TAP also from the distributed perspective. In this paper, we initiate the study of distributed connectivity augmentation and present the first distributed approximation algorithms for TAP. We do so in the CONGEST model~\cite{peleg2000distributed}, in which vertices exchange messages of $O(\log{n})$ bits in synchronous rounds, and we show fast algorithms for both the unweighted and weighted variants of the problem. In addition to fast approximations for TAP, our algorithms have the crucial implication of providing efficient algorithms for approximating the minimum 2-edge-connected spanning subgraph, as well as several related problems, such as verifying 2-edge-connectivity and augmenting the connectivity of any spanning connected subgraph to 2. Finally, we complement our study by proving lower bounds for distributed approximations of TAP. \subsection{Our Contributions} \subsubsection*{Distributed approximation algorithms for TAP} Our first main contribution is the first distributed approximation algorithm for TAP. In particular, our algorithm provides a 2-approximation for weighted TAP in the CONGEST model, summarized as follows. \begin{restatable}{theorem}{wTAP} \label{wTAP} There is a distributed 2-approximation algorithm for weighted TAP in the CONGEST model that runs in $O(h)$ rounds, where $h$ is the height of the tree $T$. \end{restatable} The approximation ratio of our algorithm matches the best approximation ratio for weighted TAP in the sequential setting. Its round complexity of $O(h)$ is tight if $h = O(D)$, where $D$ is the diameter of $G$. 
This happens, for example, when $T$ is a BFS tree, and follows from a lower bound of $\Omega(D)$ rounds which we show in Section~\ref{sec:lower}. However, the height $h$ of the spanning tree $T$ may be large, even if the diameter of $G$ is small, which raises the question of whether the dependence on $h$ is necessary. We address this question by providing an algorithm for \emph{unweighted} TAP that has a round complexity of $O(D+\sqrt{n}\log^*{n})$ rounds, which is significantly smaller for large values of $h$. This only comes at the price of a slight increase in the approximation ratio, from $2$ to $4$. \begin{restatable}{theorem}{uTAPtwo} \label{uTAPtwo} There is a distributed 4-approximation algorithm for unweighted TAP in the CONGEST model that runs in $O(D+\sqrt{n}\log^*{n})$ rounds. \end{restatable} \subsubsection*{Applications} The key application of our TAP approximation algorithm is an $O(D)$-round 2-approximation algorithm for the minimum size 2-edge-connected spanning subgraph problem (2-ECSS), which is obtained by building a BFS tree and augmenting it to a 2-edge-connected subgraph using our algorithm. \begin{restatable}{theorem}{ECSS} There is a distributed 2-approximation algorithm for unweighted 2-ECSS in the CONGEST model that completes in $O(D)$ rounds. \end{restatable} The time complexity of our algorithm improves significantly upon the time complexity of previous approximation algorithms for 2-ECSS, which are $O(n)$ rounds for a $\frac{3}{2}$-approximation \cite{krumke2007distributed} and $O(D+\sqrt{n}\log^*{n})$ rounds for a 2-approximation \cite{thurimella1995sub}. In addition, our weighted TAP algorithm implies a 3-approximation for \emph{weighted} 2-ECSS. Other applications of our algorithms are an $O(D)$-round algorithm for verifying 2-edge-connectivity, and an algorithm for augmenting the connectivity of any connected spanning subgraph $H$ of $G$ from $1$ to $2$. 
\subsubsection*{Lower bounds} We complement our algorithms by presenting lower bounds for TAP. We first show that approximating TAP is a global problem which requires $\Omega(D)$ rounds even in the LOCAL model~\cite{Linial92}, where the size of messages is not bounded. \begin{restatable}{theorem}{local} \label{local-lb} Any distributed $\alpha$-approximation algorithm for weighted TAP takes $\Omega(D)$ rounds in the LOCAL model, where $\alpha \geq 1$ can be any polynomial function of $n$. This holds also for unweighted TAP, if $1 \leq \alpha < \frac{n-1}{2c}$ for a constant $c>1$. \end{restatable} Theorem~\ref{local-lb} implies that if $h=O(D)$ then our TAP approximation algorithms have an optimal round complexity. We also consider the case of $h = \omega(D)$ and show a family of graphs, based on the construction in \cite{sarma2012distributed}, for which $\Omega(h)$ rounds are needed in order to approximate weighted TAP, where $h=\Theta(\frac{\sqrt{n}}{\log{n}})$. \begin{restatable}{theorem}{congest} \label{theorem:congest-lb} For any polynomial function $\alpha(n)$, there is a $\Theta(n)$-vertex graph of diameter $\Theta(\log{n})$ for which any (even randomized) distributed $\alpha(n)$-approximation algorithm for weighted TAP with an instance tree $T \subseteq G$ of height $h=\Theta(\frac{\sqrt{n}}{\log{n}})$ requires $\Omega(h)$ rounds in the CONGEST model. \end{restatable} Theorem~\ref{theorem:congest-lb} implies that our algorithm for weighted TAP is optimal on these graphs. In particular, there cannot be an algorithm with a complexity of $O(f(h))$ for a sublinear function $f$. This lower bound can also be seen as an $\widetilde{\Omega}(D+\sqrt{n})$ lower bound. Our lower bound for weighted TAP implies a lower bound for weighted 2-ECSS, since an $\alpha$-approximation algorithm for weighted 2-ECSS gives an $\alpha$-approximation algorithm for weighted TAP where we give to the edges of the input tree $T$ weight 0. 
\subsection{Technical overview of our algorithms} As an introduction, we start by showing an $O(h)$-round 2-approximation algorithm for \emph{unweighted} TAP, which allows us to present some of the key ingredients in our algorithms. Later, we explain how we build on these ideas and extend them to give an algorithm for the weighted case, and a faster algorithm for unweighted TAP. \subsubsection*{Unweighted TAP} A natural approach for constructing a distributed algorithm for unweighted TAP could be to try to simulate the sequential $2$-approximation algorithm of Khuller and Thurimella \cite{khuller1993approximation}. In their algorithm, the input graph $G$ is first converted into a modified graph $G'$. Then, the algorithm finds a directed MST\footnote{A directed spanning tree of $G$ rooted at $r$, is a subgraph $T$ of $G$ such that the undirected version of $T$ is a tree and $T$ contains a directed path from $r$ to any other vertex in $V$. A directed MST is a directed spanning tree of minimum weight.} in $G'$, which induces a corresponding augmentation in $G$. When considered in the distributed setting, this approach imposes two difficulties. The first is that we cannot simply modify the input graph, because it is the graph that represents the underlying distributed network, whose topology is given and not under our control. The second is in the directed MST procedure, as finding a directed MST efficiently in the distributed setting seems to be difficult. The currently best known time complexity for this problem is $O(n^2)$ in an asynchronous setting~\cite{humblet1983distributed}, a complexity that is trivial to achieve in the CONGEST model. We overcome the above using two key ingredients. First, we bring into our construction the tool of computing lowest common ancestors (LCAs). 
We show that building $G'$ and simulating a distributed computation over it can be done by an efficient computation of LCAs, and we achieve the latter by leveraging the \textit{labeling scheme} for LCAs presented in \cite{alstrup2004nearest}. Second, we drastically diverge from the Khuller-Thurimella framework by replacing the expensive directed MST construction by a completely different procedure. Roughly speaking, we show that the simple structure of $G'$ allows us to find an optimal augmentation in $G'$ efficiently by scanning the input tree $T$ from the leaves to the root and performing the following procedure. Each vertex sends to its parent information about edges that may be useful for the augmentation since they \emph{cover} many edges of the tree, and the vertices use the LCA labels in order to decide which edges to add to the augmentation. While a direct implementation of this would result in a large amount of information being sent through the tree, we show that at most two edges need to actually be sent by each vertex. Thus, applying the labeling scheme and scanning the tree $T$ result in a time complexity of $O(h)$ rounds, where $h$ is the height of $T$. Finally, we prove that an optimal augmentation in $G'$ gives a 2-approximation augmentation for $G$, which gives a 2-approximation for unweighted TAP in $O(h)$ rounds. \subsubsection*{Weighted TAP} Our algorithm for the unweighted case relies heavily on the fact that we can compare edges and decide which one is the best for the augmentation according to the number of edges they cover in the tree. However, once the edges have weights, it is not clear how to compare edges. This is because of the tension between light edges that cover only few edges and heavier edges that cover many edges. Therefore, Theorem~\ref{wTAP}, which applies for the weighted case, cannot be directly obtained according to the above description. 
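The leaf-to-root scan can be illustrated with a sequential sketch. Every edge of $G'$ is an ancestor-descendant link, so covering all tree edges amounts to choosing, for each uncovered edge closest to the leaves, a link from the subtree that reaches highest. The sketch below is only a centralized stand-in for the distributed procedure, under this vertical-link assumption; the function name and the exact bookkeeping are ours:

```python
def cover_tree(parent, depth, links):
    """Greedy leaf-to-root cover of all tree edges by 'vertical' links.

    parent[v]: parent of v (parent[root] = -1); depth[v]: depth of v.
    links: list of (d, a) pairs where a is a proper ancestor of d in the
    tree (the shape every edge of G' has).  Returns the chosen links.
    """
    n = len(parent)
    INF = float("inf")
    reach_d = [INF] * n   # depth of the highest ancestor reachable by a link
    reach_e = [None] * n  #   originating in the subtree of v, and that link
    for d, a in links:
        if depth[a] < reach_d[d]:
            reach_d[d], reach_e[d] = depth[a], (d, a)
    covered_up = [INF] * n  # highest point covered by links chosen in subtree(v)
    chosen = []
    for v in sorted(range(n), key=lambda u: -depth[u]):  # leaves first
        if parent[v] == -1:
            continue
        if covered_up[v] >= depth[v]:  # edge (v, parent[v]) still uncovered
            assert reach_e[v] is not None, "input graph must be 2-edge-connected"
            chosen.append(reach_e[v])  # take the highest-reaching link
            covered_up[v] = min(covered_up[v], reach_d[v])
        p = parent[v]                  # pass the two summaries to the parent
        if reach_d[v] < reach_d[p]:
            reach_d[p], reach_e[p] = reach_d[v], reach_e[v]
        covered_up[p] = min(covered_up[p], covered_up[v])
    return chosen
```

Note that each vertex forwards only a constant amount of information (its best link and the highest covered point), mirroring the observation above that at most two edges need to be sent by each vertex.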
Nevertheless, we show how to overcome this by introducing a technique of having each vertex send to its parent edges with \emph{altered weights}. The trick here is that we modify the weight that is sent for an edge in a way that captures the cost for covering each edge of the tree. This successfully addresses the competing needs of covering as many tree edges as possible, while using the lightest possible edges, and allows focusing on a smaller number of edges that may be useful for the augmentation. Finally, using standard pipelining, this gives a time complexity of $O(h)$ rounds for the weighted case as well. \subsubsection*{Faster unweighted TAP} Both of our aforementioned algorithms rely on scanning the tree $T$, which results in a time complexity that is linear in the height $h$ of the tree $T$. In order to avoid the dependence on $h$, one must be able to add edges to the augmentation without scanning the whole tree. However, if a vertex $v$ does not get information about the edges added to the augmentation by the vertices in the whole subtree rooted at $v$, then it may add additional edges in order to cover tree edges that are already covered. But then we are no longer guaranteed to get an optimal augmentation in $G'$, or even a good approximation for it. Nevertheless, we are still able to show a faster algorithm for unweighted TAP, which completes in $O(D+\sqrt{n}\log^*{n})$ rounds. The key ingredient in our algorithm is breaking the tree $T$ into fragments and applying our $2$-approximation for unweighted TAP algorithm on each fragment separately, as well as on the tree of fragments. Since our algorithm does not scan the whole tree, it may add different edges to cover the same tree edges, which makes the analysis much more involved. The approximation ratio analysis is based on dividing the edges into different types and bounding the number of edges of each type separately, using a subtle case-analysis. 
Although our algorithm does not find an optimal augmentation in $G'$, it gives a 2-approximation for it, which results in a 4-approximation augmentation for the original graph $G$. \\ \textbf{Roadmap:} In Section \ref{sec:app_uTAP}, we describe our $O(h)$-round 2-approximation algorithm for unweighted TAP, and in Section \ref{sec:app_wTAP} we extend it to the weighted case. In Section \ref{sec:applic}, we show applications of these algorithms, in particular for approximating 2-ECSS, and in Section \ref{sec:faster} we present our faster algorithm for unweighted TAP. We present lower bounds for TAP in Section \ref{sec:lower}, and discuss questions for future research in Section \ref{sec:dis}. \subsection{Related Work} \subsubsection*{Sequential algorithms for TAP} TAP is intensively studied in the sequential setting. Since TAP is NP-hard, approximation algorithms for it have been studied. The first 2-approximation algorithm for weighted TAP was given by Frederickson and J{\'{a}}J{\'{a}} \cite{frederickson1981approximation}, and was later simplified by Khuller and Thurimella \cite{khuller1993approximation}. Other 2-approximation algorithms for weighted TAP are the primal-dual algorithm of Goemans et al. \cite{goemans1994improved}, and the iterative rounding algorithm of Jain \cite{jain2001factor}. Recently, a new algorithm achieved a 1.5-approximation for unweighted TAP \cite{kortsarz2016simplified}, and recent breakthroughs give a 1.458-approximation for unweighted TAP \cite{DBLP:conf/stoc/0001KZ18}, and approximations better than 2 for bounded weights \cite{DBLP:journals/corr/FioriniGKS17, adjiashvili2017beating}. Achieving an approximation factor better than 2 for the general weighted case is a central open question. See \cite{khuller1996approximation, kortsarz2010approximating} for surveys about approximation algorithms for connectivity problems. Also, the related work in \cite{DBLP:conf/stoc/0001KZ18} gives an overview of many recent sequential algorithms for TAP. 
\subsubsection*{Related work in the distributed setting} While ours are the first distributed approximation algorithms for TAP itself, there are important related studies in the distributed setting. ~\\ \textbf{MST:} In the distributed setting, finding an MST, which is a minimum weight subgraph with connectivity $1$, is a fundamental and well studied problem (see, e.g., \cite{gallager1983distributed,garay1998sublinear,kutten1995fast,elkin2006unconditional, DBLP:conf/podc/Elkin17, pandurangan2017time}). The first distributed algorithm for this problem is the GHS algorithm that works in $O(n\log{n})$ time \cite{gallager1983distributed}. Following algorithms improved the round complexity to $O(D+\sqrt{n}\log^*{n})$ \cite{garay1998sublinear, kutten1995fast}. ~\\ \textbf{$k$-ECSS:} For the minimum weight 2-edge-connected spanning subgraph (2-ECSS) problem, there is a distributed algorithm of Krumke et al. \cite{krumke2007distributed}. Their approach is to find a specific spanning tree and then augment it to a 2-edge-connected graph. In the unweighted case, they augment a DFS tree following the sequential algorithm of Khuller and Vishkin \cite{khuller1994biconnectivity}, which results in an $O(n)$-round $\frac{3}{2}$-approximation algorithm for 2-ECSS. In the weighted case they augment an MST and suggest a general $O(n\log{n})$-round 2-approximation algorithm for weighted TAP, which gives an $O(n\log{n})$-round $3$-approximation algorithm for 2-ECSS. Our algorithms for TAP imply faster approximations for unweighted and weighted 2-ECSS. Another distributed algorithm for \emph{unweighted} $k$-ECSS is an $O(k(D+\sqrt{n}\log^*{n}))$-round algorithm of Thurimella \cite{thurimella1995sub} that finds a sparse $k$-edge-connected subgraph. 
The general framework of the algorithm is to repeatedly find maximal spanning forests in the graph and remove their edges from the graph (this framework is also described in sequential algorithms \cite{khuller1996approximation,nagamochi1992linear}). This gives a $k$-edge-connected spanning subgraph with at most $k(n-1)$ edges. Since any $k$-edge-connected subgraph has at least $\frac{kn}{2}$ edges (the degree of each vertex is at least $k$), this approach guarantees a 2-approximation for unweighted $k$-ECSS. ~\\ \textbf{Fault-tolerant tree structures:} Another related problem is the construction of fault-tolerant tree structures. Distributed algorithms for constructing fault tolerant BFS and MST structures are given in \cite{ghaffari2016near}, producing sparse subgraphs of the input graph $G$ that contain a BFS (or an MST) of $G \setminus\{e\}$ for each edge $e$, for the purpose of maintaining the functionality of a BFS (or an MST) even when an edge fails. However, TAP is different from these problems in several aspects. First, we augment a specific spanning tree $T$ rather than building the whole structure from scratch. In addition, since we need to preserve only connectivity when an edge fails and not the functionality of a BFS or an MST, optimal solutions for TAP may be much cheaper. ~\\ \textbf{Additional related problems:} Another connectivity augmentation problem studied in the distributed setting is the Steiner Forest problem \cite{lenzen2014improved, khan2012efficient}. There are also distributed algorithms for finding the 2-edge-connected and 3-edge-connected components of a connected graph \cite{pritchard2005robust,pritchard2011fast}, and distributed algorithms that decompose a graph with large connectivity into many disjoint trees, while almost preserving the total connectivity through the trees \cite{censor2014distributed}. 
\subsubsection*{Follow-up works} We show here a deterministic $O(D+\sqrt{n}\log^*{n})$-round 4-approximation algorithm for \emph{unweighted} TAP and a deterministic $O(h)$-round 2-approximation algorithm for \emph{weighted} TAP. In a recent follow-up work \cite{kECSS} we show a randomized $O((D+\sqrt{n})\log^2{n})$-round $O(\log{n})$-approximation for \emph{weighted} TAP and \emph{weighted} 2-ECSS, based on different techniques. In addition, we show in \cite{kECSS} a randomized $\widetilde{O}(n)$-round $O(\log{n})$-approximation for \emph{weighted} $k$-ECSS for any constant $k$, and a randomized $O(D\log^3{n})$-round $O(\log{n})$-approximation for \emph{unweighted} 3-ECSS. Also, a very recent work \cite{2ECSS_new} shows a deterministic $O(1)$-approximation for weighted TAP and weighted 2-ECSS, completing in $O((D+\sqrt{n})\log^2{n})$ rounds. Another very recent work \cite{un_kECSS} shows an $O(1)$-approximation for \emph{unweighted} $k$-ECSS completing in $O(k \log^{1+o(1)}{n})$ rounds. The basic approach in \cite{un_kECSS} is building $k$ ultra-sparse spanners iteratively. Since any ultra-sparse spanner has $O(n)$ edges, the total number of edges in the subgraph obtained is $O(kn)$, which gives a constant approximation for unweighted $k$-ECSS. While these recent works significantly improve the time complexity for weighted TAP and 2-ECSS, and unweighted $2$-ECSS, this comes at a price of larger approximation ratios than the ones we show here. For a detailed comparison see Table \ref{table_results}. \begin{table}[h!] 
\centering
\begin{tabular}{ |p{3.5cm}|p{2cm}|p{3cm}|p{3.5cm}| }
\hline
\multicolumn{4}{|c|}{Algorithms and lower bounds for TAP} \\
\hline
Reference & Variant & Approximation & Time complexity \\
\hline
\textbf{This paper} & weighted & 2 & $O(h)$ \\
\textbf{This paper} & unweighted & 4 & $O(D+\sqrt{n}\log^*{n})$ \\
\textbf{This paper} & unweighted & $\alpha = O(n)$ & $\Omega(D)$ \\
\textbf{This paper} & weighted & any polynomial $\alpha$ & $\widetilde{\Omega}(D+\sqrt{n}), \Omega(h)$ \\
Subsequent work \cite{kECSS} & weighted & $O(\log{n})$ & $O((D+\sqrt{n})\log^2{n})$ \\
Subsequent work \cite{2ECSS_new} & weighted & $O(1)$ & $O((D+\sqrt{n})\log^2{n})$ \\
\hline
\multicolumn{4}{|c|}{} \\
\hline
\multicolumn{4}{|c|}{Algorithms and lower bounds for weighted 2-ECSS} \\
\hline
Reference & Variant & Approximation & Time complexity \\
\hline
Prior work \cite{krumke2007distributed} & & 3 & $O(n\log{n})$ \\
\textbf{This paper} & & 3 & $O(h_{MST} + \sqrt{n} \log^{*}{n})$ \\
\textbf{This paper} & & any polynomial $\alpha$ & $\widetilde{\Omega}(D+\sqrt{n})$ \\
Subsequent work \cite{kECSS} & & $O(\log{n})$ & $O((D+\sqrt{n})\log^2{n})$ \\
Subsequent work \cite{2ECSS_new} & & $O(1)$ & $O((D+\sqrt{n})\log^2{n})$ \\
\hline
\multicolumn{4}{|c|}{} \\
\hline
\multicolumn{4}{|c|}{Algorithms for unweighted $k$-ECSS} \\
\hline
Reference & Variant & Approximation & Time complexity \\
\hline
Prior work \cite{krumke2007distributed} & $k=2$ & 3/2 & $O(n)$ \\
Prior work \cite{thurimella1995sub} & general $k$ & 2 & $O(k(D+\sqrt{n}\log^*{n}))$ \\
\textbf{This paper} & $k=2$ & 2 & $O(D)$ \\
Subsequent work \cite{un_kECSS} & general $k$ & $O(1)$ & $O(k \log^{1+o(1)}{n})$ \\
\hline
\end{tabular}
\caption{Summary and comparison of our results}
\label{table_results}
\end{table}
\subsection{Preliminaries} For completeness, we first formally define the notion of edge connectivity. \begin{definition} An undirected graph $G$ is \emph{$k$-edge-connected} if it remains connected after the removal of any $k-1$ edges. 
\end{definition} \textbf{The Tree Augmentation Problem (TAP).} In TAP, the input is an undirected 2-edge-connected graph $G$ with $n$ vertices, and a spanning tree $T$ of $G$. The goal is to add to $T$ a minimum size (or a minimum weight) set of edges $Aug$ from $G$, such that $T \cup Aug$ is 2-edge-connected. In the weighted version, each edge has a non-negative weight, and we assume that the weights of the edges can be represented in $O(\log n)$ bits. \begin{definition} An edge $e$ in a connected graph $G$ is a \emph{bridge} in $G$ if $G \setminus \{e\}$ is disconnected. \end{definition} \begin{definition} A non-tree edge $e=\{u,v\}$ \emph{covers} the tree edge $e'$ if $e'$ is on the unique path in $T$ between $u$ and $v$, i.e., if $e'$ is not a bridge in $T\cup \{e\}$. \end{definition} A graph $G$ is 2-edge-connected if and only if it does not contain bridges. Hence, augmenting the connectivity of $T$ requires covering all the tree edges. ~\\ \textbf{Models of distributed computation.} In the distributed CONGEST model \cite{peleg2000distributed}, the network is modeled as an undirected connected graph $G=(V,E)$. Communication takes place in synchronous rounds. In each round, each vertex can send a message of $O(\log{n})$ bits to each of its neighbors. The time complexity of an algorithm is measured by the number of rounds. Our algorithms work in the CONGEST model, but some of our lower bounds hold also in the stronger LOCAL model \cite{Linial92}, where the size of messages is not bounded. In the distributed setting, the input to TAP is a rooted spanning tree $T$ of $G$ with root $r$, whose height is denoted by $h$. 
The tree $T$ is given to the vertices locally, that is, each vertex knows which of its adjacent edges is in $T$ and which of those leads to its parent in $T$.\footnote{If a root and orientation are not given, we can find a root $r$ and orient all the edges towards $r$ in $O(h)$ rounds using standard techniques.} For each vertex $v\neq r$, we denote by $p(v)$ the parent of $v$ in $T$. The output is a set of edges $Aug$, such that $T \cup Aug$ is 2-edge-connected. In the distributed setting it is enough that at the end of the algorithm each vertex knows which of the edges incident to it are added to $Aug$. All the messages sent in our algorithms consist of a constant number of ids, labels and weights, hence the maximal message size is bounded by $O(\log{n})$ bits, as required in the CONGEST model. \\ \section{A 2-approximation for Unweighted TAP in $O(h)$ rounds} \label{sec:app_uTAP} As an introduction, we describe an $O(h)$-round 2-approximation algorithm, {$A_{TAP}$}, for unweighted TAP. The general structure of {$A_{TAP}$} is as follows. \begin{enumerate} \itemsep0em \item It builds a related virtual graph $G'$. \item It finds an optimal augmentation $A'$ in $G'$. \item It converts it to a 2-approximation augmentation $A$ in $G$. \end{enumerate} The graph $G'$ is defined as in \cite{khuller1993approximation}. After building $G'$, we diverge completely from the approach of \cite{khuller1993approximation} since we cannot simulate it efficiently in the distributed setting, as explained in the introduction. Instead, {$A_{TAP}$} finds an optimal augmentation in $G'$, and converts it to a 2-approximation augmentation in $G$. All the communication in the algorithm is on the edges of the graph $G$, since $G'$ is a virtual graph. In order to simulate the algorithm on $G$ we use labels that represent the edges of $G'$. In Section \ref{sec:app_build}, we describe how we build the virtual graph $G'$. 
Then, we show in Section \ref{sec:app_corr} that an optimal augmentation in $G'$ gives a 2-approximation augmentation in $G$. In Section \ref{sec:app_find}, we describe the algorithm for finding an optimal augmentation in $G'$, and we prove its correctness in Section \ref{sec:app_correct}. \subsection{Building $G'$ from $G$} \label{sec:app_build} {$A_{TAP}$} starts by building a related \emph{undirected} virtual graph $G'$. Building $G'$ requires efficient computation of lowest common ancestors (LCAs), which we next explain how to obtain in the distributed setting. \subsubsection{Computing LCAs} We use the \textit{labeling scheme} for LCAs of Alstrup et al. \cite{alstrup2004nearest}. This labeling scheme assigns labels of size $O(\log{n})$ bits to the vertices of a rooted tree with $n$ vertices, such that given the labels of $u$ and $v$ it is possible to infer the label of their LCA. The algorithm for computing the labels takes $O(n)$ time in a centralized setting, and we observe that it can be implemented in $O(h)$ rounds in the distributed setting, where $h$ is the height of the tree, as was also observed by \cite{pritchard2011fast}. This is because the algorithm consists of a constant number of traversals of the tree, from the root to the leaves or vice versa. Thus, we have: \begin{lemma} \label{lca} Constructing the labeling scheme for LCAs of Alstrup et al. \cite{alstrup2004nearest} takes $O(h)$ rounds. \end{lemma} {$A_{TAP}$} starts by applying the labeling scheme, which takes $O(h)$ rounds. We next explain how we use it in order to build $G'$. \subsubsection{The Graph $G'$} We next describe the graph $G'$. To simplify the presentation of the algorithm it is convenient to give an orientation to the edges of $G'$. However, we emphasize that $G'$ is an undirected graph, that is, we do not address the notion of directed connectivity. The graph $G'$ is defined as follows (as in \cite{khuller1993approximation}). 
The graph $G'$ includes all the edges of $T$, and they are all oriented towards the root $r$ of $T$. For every non-tree edge $e=\{u,v\}$ in $G$ there are two cases (see Figure \ref{pic1a}): \begin{enumerate} \item If $u$ is an ancestor of $v$ in $T$, we add the edge $\{u,v\}$ to $G'$, oriented from $u$ to $v$. \item Otherwise, denote $t=LCA(u,v)$. In this case we add to $G'$ the edges $\{t,u\}$ and $\{t,v\}$, oriented from $t$ to $u$ and to $v$, respectively. \end{enumerate} \setlength{\intextsep}{0pt} \begin{figure}[h] \centering \setlength{\abovecaptionskip}{-2pt} \setlength{\belowcaptionskip}{6pt} \includegraphics[scale=0.6]{pic2.png} \caption{There are two cases for every non-tree edge in $G$. The left graph shows the first case, where the edge $\{u,v\}$ is between an ancestor and a descendant in $T$. The right graph shows the second case, where $t=LCA(u,v)$.} \label{pic1a} \end{figure} Note that in the second case, the edges $\{t,u\}$ and $\{t,v\}$ added to $G'$ are not necessarily in $G$, and therefore we cannot use them for communication. Hence, the rest of the communication in the algorithm is only over the tree edges. In order to simulate the algorithm over $G'$, it is enough that each vertex knows only the tree edges incident to it (which is its input), and the labels of the non-tree edges incoming to it in $G'$. In order to achieve this, each vertex $v$ sends its label to all of its neighbors in $G$, and receives their labels. From them, each vertex $v$ computes the edges incoming to it in $G'$ using the labeling scheme: for each edge $e=\{u,v\}$ that is not a tree edge, $v$ uses the labels of $v$ and $u$ in order to compute $t=LCA(u,v)$. If $t=u$, i.e., $u$ is an ancestor of $v$ in $T$, the edge $\{u,v\}$ is incoming to $v$ in $G'$. Otherwise $t \neq u$, and if $t \neq v$, the edge $\{t,v\}$ is incoming to $v$ in $G'$. Since $v$ knows the labels of $u$ and $t$, using LCA computations it learns the labels of all the edges incoming to it in $G'$. 
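The classification above can be illustrated by a small sequential sketch. The following Python code is not the distributed algorithm itself: it simulates, in a centralized way, how each vertex determines the $G'$-edges oriented into it, using explicit \texttt{parent} and \texttt{depth} tables as a stand-in for the $O(\log{n})$-bit LCA labels (the function and variable names are ours, chosen for illustration only).

```python
def lca(u, v, parent, depth):
    """Lowest common ancestor by walking up: first equalize depths,
    then climb in lockstep until the two walks meet."""
    while depth[u] > depth[v]:
        u = parent[u]
    while depth[v] > depth[u]:
        v = parent[v]
    while u != v:
        u, v = parent[u], parent[v]
    return u

def incoming_gprime_edges(non_tree_edges, parent, depth):
    """For every vertex, the non-tree edges of G' oriented into it.
    A G'-edge is a pair (top, bottom): it is oriented from the
    ancestor 'top' down to the descendant 'bottom'."""
    incoming = {}
    for (u, v) in non_tree_edges:
        t = lca(u, v, parent, depth)
        if t == u:        # case 1: u is an ancestor of v
            incoming.setdefault(v, []).append((u, v))
        elif t == v:      # case 1 with the roles swapped
            incoming.setdefault(u, []).append((v, u))
        else:             # case 2: split the edge at the LCA
            incoming.setdefault(u, []).append((t, u))
            incoming.setdefault(v, []).append((t, v))
    return incoming
```

For example, on a tree rooted at $0$ with children $1,2$ of the root and children $3,4$ of vertex $1$, the non-tree edge $\{3,4\}$ is split at its LCA $1$ into $(1,3)$ and $(1,4)$, while $\{0,3\}$ remains a single edge $(0,3)$.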
The construction of $G'$ takes $O(h)$ rounds, dominated by the construction of the labeling scheme (Lemma \ref{lca}). The rest of the computations take one round. This gives the following. \begin{lemma} \label{timeb1} Building $G'$ from $G$ takes $O(h)$ rounds. \end{lemma} \subsection{The Correspondence between $G$ and $G'$} \label{sec:app_corr} We next show that an optimal augmentation in $G'$ corresponds to an augmentation in $G$ with size at most twice the size of an optimal augmentation. To build $G'$ from $G$, for each edge $e \in G$ that is not a tree edge, we added one or two edges to $G'$. These edges are the edges \emph{corresponding} to $e$ in $G'$. Equivalently, for each such edge $\widetilde{e} \in G'$, the edge $e$ is an edge \emph{corresponding} to $\widetilde{e}$ in $G$. An edge $\widetilde{e} \in G'$ may have several corresponding edges in $G$. A non-tree edge $e=\{u,v\}$ in $G$ covers all the edges in the unique path in $T$ between $u$ and $v$. We next show that the edges corresponding to $e$ in $G'$ together cover exactly the same tree edges as $e$. This allows us to show that an optimal augmentation in $G'$ gives a 2-approximation augmentation in $G$, when we replace each edge of the augmentation in $G'$ by a corresponding edge in $G$. \begin{claim} \label{claim1} If the non-tree edge $e=\{u,v\}$ covers the tree edge $e'$ in $G$, then one of the edges corresponding to $e$ in $G'$ covers $e'$ in $G'$. \end{claim} \begin{proof} If $e$ is in $G'$ the claim is immediate. Otherwise, the edges $\{t,u\}$ and $\{t,v\}$, where $t=LCA(u,v)$, are the edges corresponding to $e$ in $G'$. The path from $u$ to $v$ in $T$ is the union of a simple path between $u$ and $t$ and another simple path from $t$ to $v$, so the edge $e'$ must be on one of these paths; hence, one of the edges $\{t,u\}$ or $\{t,v\}$ covers it. 
\end{proof} \begin{claim} \label{claim2} If the non-tree edge $\widetilde{e}$ in $G'$ covers the tree edge $e'$, and $e$ is an edge corresponding to $\widetilde{e}$ in $G$, then $e$ covers $e'$ in $G$. \end{claim} \begin{proof} If $e=\widetilde{e}$ then the claim is immediate. Otherwise, $\widetilde{e}=\{t,u\}$ for some $t,u$, and $e=\{u,v\}$ where $t=LCA(u,v)$. The edge $\widetilde{e}$ covers $e'$ in $G'$, so $e'$ is on the unique path in $T$ between $t$ and $u$. The unique path in $T$ between $u$ and $v$ is the union of a simple path between $u$ and $t$ and another simple path from $t$ to $v$. In particular, the edge $e=\{u,v\}$ covers the edge $e'$ in $G$, as needed. \end{proof} Let $A'$ be an augmentation in $G'$, and let $A$ be the set of corresponding edges in $G$, obtained by replacing each edge of $A'$ with a corresponding edge in $G$. \begin{corollary} $A$ is an augmentation in $G$. \end{corollary} \begin{proof} $A'$ is an augmentation, so it covers all tree edges; hence, by Claim \ref{claim2}, $A$ covers all tree edges, i.e., $A$ is an augmentation in $G$. \end{proof} \begin{lemma} \label{corr} If $A'$ is an $\alpha$-approximation to the optimal augmentation in $G'$, then $A$ is a $2\alpha$-approximation to the optimal augmentation in $G$. \end{lemma} \begin{proof} Note that $|A|\leq |A'|$ because each edge in $A'$ is replaced by one edge in $A$. Let $OPT$ be an optimal augmentation in $G$, and let $OPT'$ be the set of corresponding edges in $G'$, where each edge in $G$ is replaced by the corresponding one or two edges in $G'$. $OPT$ covers all tree edges, so $OPT'$ covers all tree edges by Claim \ref{claim1}, i.e., it is an augmentation in $G'$. It holds that $|OPT'|\leq 2|OPT|$ because each edge is replaced by at most two edges. Moreover, $|A'|\leq \alpha|OPT'|$ because $A'$ is an $\alpha$-approximation to the optimal augmentation in $G'$. 
We conclude that $$|A|\leq |A'|\leq \alpha|OPT'| \leq 2\alpha|OPT|.$$ \end{proof} \subsection{Finding an Optimal Augmentation in $G'$} \label{sec:app_find} The goal of {$A_{TAP}$} now is to find an optimal augmentation in $G'$. In $G'$, all the edges that are not tree edges are between an ancestor and a descendant of it in $T$. This allows us to compare edges and define the notion of \emph{maximal} edges. Intuitively, the notion of maximal edges captures the following goal: whenever the algorithm covers a tree edge, it should do so with an edge that reaches the highest ancestor possible, which allows covering many tree edges simultaneously. This motivates the following definition. Let $v$ be a vertex in the tree, and let $e=\{u,w\}$ and $e'=\{u',w'\}$ be two edges between ancestors $u,u'$ of $v$ and descendants $w,w'$ of $v$. We say that $e$ is the \textit{maximal} edge among $e$ and $e'$ if and only if $u$ is an ancestor of $u'$. If $u=u'$, we choose one of them arbitrarily to be the maximal edge. Among the edges incoming to $v$, the \textit{maximal} edge is the edge $\{u,v\}$ from the ancestor $u$ of $v$ that is closest to the root. Note that using the LCA labels of such edges $e,e'$, a vertex $v$ can learn which of them is maximal by computing $LCA(u,u')$. Moreover, using the labels of the edge $e$, a vertex $v$ can check whether $e$ covers the tree edge $\{v,p(v)\}$ using LCA computations: it checks whether $v$ is an ancestor of $w$ and whether $u$ is an ancestor of $p(v)$. In our algorithm, each time a vertex sends an edge $e$, it sends the labels of $e$, which allow these computations. In order to cover all tree edges of $G'$, we assign each vertex $v\neq r$ in $G'$ the responsibility of covering the tree edge $\{v,p(v)\}$. The idea behind the algorithm is to scan the tree $T$ from the leaves to the root, and whenever a tree edge that is still not covered is reached, it is covered by the vertex responsible for it, using the maximal edge possible. 
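The two LCA-based tests just described, deciding which of two edges is maximal and whether an edge covers $\{v,p(v)\}$, can be sketched as follows (illustrative Python; as before, \texttt{parent} and \texttt{depth} tables stand in for the label computations, and the helper names are ours).

```python
def is_ancestor(a, b, parent, depth):
    """True iff a is a (not necessarily proper) ancestor of b in T."""
    while depth[b] > depth[a]:
        b = parent[b]
    return a == b

def maximal_edge(e1, e2, depth):
    """Both edges are (top, bottom) pairs whose top endpoints lie on the
    root path of a common vertex v, so they are comparable: the edge
    whose top endpoint is closer to the root is the maximal one."""
    return e1 if depth[e1[0]] <= depth[e2[0]] else e2

def covers(e, v, parent, depth):
    """Does the G'-edge e = (top, bottom) cover the tree edge {v, p(v)}?
    Yes iff v is an ancestor of the bottom endpoint and the top
    endpoint is an ancestor of p(v)."""
    top, bottom = e
    return (is_ancestor(v, bottom, parent, depth)
            and is_ancestor(top, parent[v], parent, depth))
```

In the distributed algorithm, the same comparisons are done from the $O(\log{n})$-bit labels alone, so an edge can be forwarded and compared without global knowledge of the tree.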
The algorithm {$A_{Aug}$} for finding an optimal augmentation in $G'$ starts at the leaves of $T$ and works as follows: \begin{itemize} \itemsep0em \item Each leaf $v$ covers the tree edge $\{v,p(v)\}$ by the maximal edge $e$ incoming to $v$: it adds $e$ to the augmentation and sends $e$ to its parent. We call this a \emph{necessary} edge. \item Each internal vertex $v$ receives from each of its children at most 2 edges: one is necessary and one is \emph{optional}. Denote by $nec_v$ the maximal necessary edge received from $v$'s children, and denote by $opt_v$ the maximal edge among all the optional edges $v$ receives from its children and the edges incoming to $v$. There are two cases: \begin{enumerate} \item The tree edge $\{v,p(v)\}$ is already covered by $nec_v$. In this case, $nec_v$ is the necessary edge $v$ sends to its parent. In addition, $v$ sends $opt_v$ to its parent as an optional edge. \item The tree edge $\{v,p(v)\}$ is not covered by $nec_v$. In this case, $v$ adds the edge $opt_v$ to the augmentation. From the definition of $opt_v$, it follows that it is the maximal edge that covers $\{v,p(v)\}$. In this case, $opt_v$ is the edge $v$ sends to its parent as a necessary edge, and it does not send an optional edge. If $opt_v$ is an optional edge received from one of $v$'s children, $v$ informs the relevant child that this edge is necessary and has been added to the augmentation. It also informs its other children that their edges are not necessary. \end{enumerate} \item When an internal vertex receives from its parent an indication of whether the optional edge it sent is necessary, it forwards the answer to the relevant child, if one exists. \item At the end, each vertex knows whether the maximal edge incoming to it is necessary or not. The augmentation consists of all the necessary edges. \end{itemize} \subsection{Correctness Proof} \label{sec:app_correct} Denote by $A'$ the solution obtained by {$A_{Aug}$}, and by $A^*$ an optimal augmentation in $G'$. 
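A sequential simulation of the traversal of {$A_{Aug}$} may help clarify the bookkeeping. The sketch below is illustrative Python, not the message-passing algorithm itself: the downward necessary/optional notifications are folded into immediate updates of the set of chosen edges, and all names are ours. \texttt{incoming[v]} holds the $G'$-edges $(u,v)$ oriented into $v$.

```python
def find_augmentation(root, children, parent, depth, incoming):
    """Leaves-to-root scan: each vertex covers {v, p(v)}, if still
    uncovered, by the maximal available edge."""
    aug = []

    def is_anc(a, b):                     # a ancestor of b (or a == b)?
        while depth[b] > depth[a]:
            b = parent[b]
        return a == b

    def maximal(edges):                   # edge whose top is highest
        return min(edges, key=lambda e: depth[e[0]]) if edges else None

    def visit(v):
        """Returns the (necessary, optional) pair v sends to its parent."""
        if not children.get(v):           # a leaf covers {v, p(v)} now
            e = maximal(incoming.get(v, []))
            aug.append(e)
            return e, None
        necs, opts = [], list(incoming.get(v, []))
        for c in children[v]:
            nec, opt = visit(c)
            necs.append(nec)
            if opt is not None:
                opts.append(opt)
        nec_v, opt_v = maximal(necs), maximal(opts)
        if v != root and not is_anc(nec_v[0], parent[v]):
            # {v, p(v)} is not covered by nec_v: cover it with opt_v,
            # which exists since the input graph is 2-edge-connected
            aug.append(opt_v)
            return opt_v, None
        return nec_v, opt_v

    visit(root)
    return aug
```

For instance, on a tree rooted at $0$ with child $1$ and grandchildren $2,3$, and incoming edges $\{2: [(1,2),(0,2)],\ 3: [(1,3)]\}$, the scan adds $(0,2)$ at leaf $2$ and $(1,3)$ at leaf $3$; vertex $1$ then finds $\{1,0\}$ already covered by the necessary edge $(0,2)$.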
\begin{lemma} \label{opt} The algorithm {$A_{Aug}$} finds an optimal augmentation in $G'$. \end{lemma} \begin{proof} First, we show that $A'$ is an augmentation in $G'$. Consider a tree edge $e=\{v,p(v)\}$. There are edges in $G$ that cover $e$ because $G$ is 2-edge-connected, hence from Claim \ref{claim1} there are edges in $G'$ that cover $e$. Therefore, $v$ adds such an edge in order to cover $e$, if it is not already covered by $nec_v$. Now we show that $|A'|\leq|A^*|$ by exhibiting a one-to-one mapping from $A'$ to $A^*$; since $A'$ is an augmentation in $G'$, this implies that $A'$ is an optimal augmentation. When an edge $e \in A'$ is added to $A'$ in {$A_{Aug}$}, it is in order to cover some tree edge that is still not covered; denote this tree edge by $t(e)$. Let $t(A')$ be the set of all such tree edges. We map $e \in A'$ to an edge $e^* \in A^*$ that covers $t(e)$. This mapping is one-to-one: assume to the contrary that there are two edges $e_1,e_2 \in A'$ that are mapped to the same edge $e^* \in A^*$. Note that $e^*$ is an edge between an ancestor and its descendant in $T$ that covers both $t(e_1)=\{v_1,p(v_1)\}$ and $t(e_2)=\{v_2,p(v_2)\}$. Hence, $t(e_1)$ and $t(e_2)$ are on the same path in the tree between an ancestor and its descendant. Assume without loss of generality that $t(e_2)$ is closer to the root $r$ on this path. Note that the tree edge $t(e_1)$ is not covered by $nec_{v_1}$ since $t(e_1) \in t(A')$. Hence, $v_1$ adds the edge $e_1$ in order to cover it, which is the maximal edge possible. Since the edge $e^*$ covers both $t(e_1)$ and $t(e_2)$, it follows that $e_1$ covers $t(e_2)$ as well, contradicting the fact that $t(e_2) \in t(A')$. This completes the proof that $|A'|\leq|A^*|$. \end{proof} We complete {$A_{TAP}$} by replacing each edge in $A'$ by a corresponding edge in $G$. \begin{lemma} \label{time} The time complexity of {$A_{TAP}$} is $O(h)$ rounds. \end{lemma} \begin{proof} Building $G'$ from $G$ takes $O(h)$ rounds by Lemma \ref{timeb1}. 
Finding an optimal augmentation in $G'$ takes $O(h)$ rounds as well: the algorithm {$A_{Aug}$} consists of two traversals of the tree, from the leaves to the root, and vice versa. Hence, the total time complexity of {$A_{TAP}$} is $O(h)$ rounds. \end{proof} \begin{restatable}{theorem}{uTAP} \label{uTAP} There is a distributed 2-approximation algorithm for unweighted TAP in the CONGEST model that runs in $O(h)$ rounds, where $h$ is the height of the tree $T$. \end{restatable} \begin{proof} The algorithm {$A_{Aug}$} finds an optimal augmentation in $G'$, as proven in Lemma \ref{opt}. By Lemma \ref{corr}, this corresponds to an augmentation in $G$ with size at most twice the optimal augmentation of $G$. The time complexity follows from Lemma \ref{time}. \end{proof} \section{A 2-approximation for Weighted TAP in $O(h)$ rounds} \label{sec:app_wTAP} In this section, we prove Theorem \ref{wTAP}. \wTAP* Our algorithm for weighted TAP, {$A_{wTAP}$}, has the same structure as {$A_{TAP}$}. It starts by building the same virtual graph $G'$, and then it finds an optimal augmentation in $G'$. The only difference in building $G'$ is that now each edge $e$ is replaced by one or two edges in $G'$ with the same weight as $e$. The proof that an optimal augmentation in $G'$ corresponds to an augmentation in $G$ with at most twice the cost of an optimal augmentation in $G$ is the same as in the unweighted case. The difference is in finding an optimal augmentation in $G'$. In the unweighted case, for each vertex $v$, the only edge incoming to $v$ in $G'$ that was useful for the algorithm was the maximal edge. However, when edges have weights, potentially all the edges incoming to $v$ may be useful for the algorithm, and we can no longer use the notion of \emph{maximal} edges in order to compare edges. This is because of the tension between heavy edges that cover many tree edges and light edges that cover fewer tree edges. 
To overcome this obstacle, we introduce a new technique of \emph{altering} the weights of the edges we send in the algorithm. Let $min_v$ be the weight of the minimum weight edge that covers $\{v,p(v)\}$. The intuition behind our approach is that in order to cover the tree edge $\{v,p(v)\}$ we must pay at least $min_v$. Thus, $min_v$ captures the cost of covering this tree edge. Therefore, before sending to its parent information about relevant edges, $v$ alters their weights by subtracting the weight $min_v$ from them. We show that altering the weights is crucial for selecting which edges to add to the augmentation, and allows us to divide the weight of an edge in a way that captures the cost of covering each tree edge. In addition, we show that using this approach, sending information about at most $h$ edges from each vertex to its parent suffices for selecting the best edges for the augmentation. In Section \ref{sec:app_alg}, we describe our algorithm for finding an optimal augmentation in $G'$. In Section \ref{sec:app_wcorrect}, we prove the correctness of the algorithm, and in Section \ref{sec:app_analysis}, we analyze its time complexity. \subsection{Finding an Optimal Augmentation in $G'$} \label{sec:app_alg} Our algorithm consists of two traversals of the tree: from the leaves to the root and vice versa. As in {$A_{Aug}$}, each vertex $v$ is responsible for covering the tree edge $\{v,p(v)\}$. In the first traversal, each vertex $v$ computes the weight $min_v$ of the minimum weight edge that covers the tree edge $\{v,p(v)\}$, according to the weights of the edges it receives from its children and the weights of the edges incoming to it. It also computes the weights of the minimum weight edges that cover the path from $v$ to each of its ancestors $u$, according to the weights $v$ receives in the algorithm. Then, $v$ subtracts $min_v$ from the weights of these edges, and sends them to its parent with the altered weights. 
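Before turning to the second traversal, the first traversal and its weight-altering step can be sketched sequentially (illustrative Python with our own names; \texttt{incoming[v]} holds the $G'$-edges $(u,v)$ oriented into $v$, and \texttt{w} maps each such edge to its weight).

```python
import math

def first_traversal(root, children, parent, depth, incoming, w):
    """Bottom-up pass computing, for every v != root, the value min_v:
    the cheapest altered weight of an edge covering the tree edge
    {v, p(v)}, where each vertex deducts its own min before sending."""
    minv = {}

    def ancestors(v):                    # proper ancestors, bottom-up
        while v != root:
            v = parent[v]
            yield v

    def visit(v):
        # wv[u]: cheapest altered weight of an edge covering the whole
        # path from v up to ancestor u, given what v has seen so far
        wv = {u: math.inf for u in ancestors(v)}
        for e in incoming.get(v, []):
            for u in ancestors(v):
                if depth[u] >= depth[e[0]]:   # e reaches up to u
                    wv[u] = min(wv[u], w[e])
        for c in children.get(v, []):
            for u, x in visit(c).items():
                if u in wv:                   # drop the entry for u == v
                    wv[u] = min(wv[u], x)
        if v != root:
            minv[v] = wv[parent[v]]
            # the altering step: deduct the cost already paid at v
            wv = {u: x - minv[v] for u, x in wv.items()}
        return wv

    visit(root)
    return minv
```

On a path $0-1-2$ rooted at $0$, with non-tree edges $\{0,1\}$ of weight $3$ and $\{0,2\}$ of weight $5$, vertex $2$ gets $min_2=5$ and sends the altered weight $0$ for ancestor $0$; vertex $1$ then gets $min_1=0$, reflecting that the edge paying for $\{2,1\}$ covers $\{1,0\}$ for free.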
In the second traversal, we scan the tree from the root to the leaves. Each child $v$ of $r$ adds to the augmentation the edge having weight $min_v$. It informs the relevant child that sent it, if one exists, and informs its other children that it did not add their edges. Each internal vertex $v$ receives from its parent a message that indicates whether one of the edges it sent was added to the augmentation by one of its ancestors or not. In the former case, $v$ learns that this edge was added to the augmentation and forwards the message to the relevant child that sent it, if one exists. Otherwise, the tree edge $\{v,p(v)\}$ is still not covered, and $v$ adds to the augmentation the edge having weight $min_v$. It informs the relevant child that sent it, if one exists, and informs its other children that their edges were not added to the augmentation. A description of the algorithm is given in Algorithm \ref{alg}. For simplicity of presentation, we start by describing an algorithm which takes $O(h^2)$ rounds. Later, in Section \ref{sec:app_analysis}, we explain how to improve the time complexity to $O(h)$ rounds using pipelining. \\ \begin{algorithm} \caption{Finding an Optimal Augmentation in $G'$}\label{alg} \begin{algorithmic}[1] \Statex \Statex The code is for every vertex $v \neq r$ \Statex \State \underline{Initialization:} \State $e_{v,u} \gets$ the minimum weight edge incoming to $v$ that covers the path between $v$ and its ancestor $u$, or $\bot$ if there is no such edge. \State $w_v(u) \gets w(e_{v,u})$ for each ancestor $u$ of $v$ such that $e_{v,u} \neq \bot$, and $w_v(u) \gets \infty$ otherwise. \State $A_v \gets$ the union of $v$ and its children in $T$. 
\State $Aug_v \gets \emptyset$ \Statex \State \underline{First Traversal:} \If {$v$ is a leaf} \State for each ancestor $u$ of $v$: $sender_v(u) \gets v$ \Else \State \textbf{wait} for receiving $w_{v'}(u)$ for all ancestors $u$ of $v$, from each child $v'$ of $v$ \State for each ancestor $u$ of $v$: $w_v(u) \gets min_{v' \in A_v} {w_{v'}(u)}$, $sender_v(u) \gets argmin_{v' \in A_v} {w_{v'}(u)}$\label{l1} \EndIf \State $min_v \gets w_v(p(v))$ \label{line} \State for each ancestor $u$ of $v$: $w_v(u) \gets w_v(u)-min_v$ \State for each ancestor $u \neq p(v)$ of $v$ \textbf{send} $(u,w_v(u))$ to $p(v)$ \Statex \State \underline{Second Traversal:} \State $u \gets p(v)$ \If {$v$ is not a child of $r$} \State \textbf{wait} for a message $m$ from $p(v)$ \If {$m \neq \bot$} $u \gets m$ \EndIf \EndIf \State $s \gets sender_v(u)$ \If {$s=v$} \State $Aug_v \gets Aug_v \cup \{e_{v,u}\}$ \label{take} \Else \State send $u$ to $s$ \EndIf \State for each child $v' \neq s$ of $v$ \textbf{send} $\bot$ to $v'$ \end{algorithmic} \end{algorithm} \subsubsection*{Technical Details:} We assume in the algorithm that each vertex knows all the ids of its ancestors in $T$. We justify it in the next claim. Note that when we construct $G'$, if $\{u,v\}$ is an edge between an ancestor $u$ and its descendant $v$ in $T$, $v$ learns the label of $u$ according to the LCA labeling scheme and not the id of $u$. However, once $v$ learns about the ids and labels of all its ancestors, it knows the id of $u$ as well, and can use it in the algorithm. \begin{claim} All the vertices can learn the ids and labels of all their ancestors in $O(h)$ rounds. \end{claim} \begin{proof} In order to do this, at the first round each vertex sends to its children its id and label. In the next round, each vertex sends to its children the id and label it received in the previous round, and we continue in the same way until each vertex learns about all its ancestors. 
Clearly, after $h$ rounds each vertex learns all the ids and labels of all its ancestors. \end{proof} \begin{claim} If a vertex $v$ adds $e_{v,u}$ to $Aug_v$ in line \ref{take} of its algorithm, then $e_{v,u} \neq \bot$. \end{claim} \begin{proof} Since $G'$ is 2-edge-connected, we can cover all tree edges by edges from $G'$. Hence, the minimum weight of an edge that covers some tree edge is never infinite. It follows that if a vertex $v$ adds $e_{v,u}$ to $Aug_v$, then $e_{v,u} \neq \bot$. \end{proof} \subsection{Correctness Proof} \label{sec:app_wcorrect} The challenge in establishing the correctness of our algorithm lies in the fact that the vertices use altered weights rather than the original ones. Nevertheless, we show that our intuition behind choosing these altered weights faithfully captures the essence of finding an augmentation in the weighted case. \begin{lemma} \label{wcor} Algorithm \ref{alg} finds an optimal augmentation in $G'$. \end{lemma} \begin{proof} Note that the solution obtained by the algorithm is an augmentation of $G'$ because each vertex $v$ adds an edge in order to cover the tree edge $\{v,p(v)\}$ if it is not already covered by an edge which one of its ancestors decides to add to the augmentation. We next show that the augmentation is optimal. The key ingredient we use in our proof is giving costs to the edges of $T$ such that the sum of the costs is equal to both the cost of the solution obtained by the algorithm and the cost of an optimal augmentation of $G'$. Hence, we conclude that the cost of the solution obtained by the algorithm is optimal. \paragraph*{Giving costs to the edges of $T$:$\ $} Fix a vertex $v \neq r$ and let $t=\{v,p(v)\}$. We define $c(t)=min_v$ (the value of $w_v(p(v))$ in line \ref{line} of the algorithm). For an edge $e=\{u,x\}$ that covers $t$, such that $u$ is an ancestor of $x$ in $T$, let $P$ be the path of tree edges between $x$ and $p(v)$ in $T$. 
Note that the path $P$ is defined with respect to $t$ and $e$. For a vertex $v'$ such that $\{v',p(v')\} \in P$, let $P_{v'}$ be the path of tree edges between $x$ and $v'$. Note that $min_v$ is the weight of the minimum weight edge covering the tree edge $t=\{v,p(v)\}$ according to the weights $v$ receives in the algorithm. Denote this edge by $e_v$. \begin{claim} \label{c1} $w(e_v)=\sum_{t' \in P} c(t')$, where $P$ is the path defined by $t=\{v,p(v)\}$ and $e_v$. \end{claim} \begin{proof} Let $e_v=\{u,x\}$, where $u$ is an ancestor of $x$ in $T$. For each vertex $v'$ on the path between $x$ and $v$, $e_v$ is the minimum weight edge covering the path between $v'$ and its ancestor $p(v)$, according to the weights $v'$ receives in the algorithm, as otherwise we would get a contradiction to the definition of $e_v$. Each vertex $v'$ on this path subtracts $min_{v'}$ from the weight of $e_v$ it receives before sending it to its parent. Denote by $V'$ the set of all the vertices on the path between $x$ and $v$, excluding $v$. It follows that $$c(t)=min_v=w(e_v)-\sum_{v' \in V'} min_{v'}=w(e_v)-\sum_{t' \in P_v}c(t'),$$ which gives $w(e_v)=\sum_{t' \in P} c(t')$. \end{proof} \begin{claim} \label{c2} For each edge $e$ that covers $t$, it holds that $w(e) \geq \sum_{t' \in P} c(t'),$ where $P$ is the path defined by $t$ and $e$. \end{claim} \begin{proof} Let $e=\{u,x\}$ be an edge that covers $t=\{v,p(v)\}$, where $u$ is an ancestor of $x$ in $T$. Denote by $P_v$ the path of tree edges between $x$ and $v$ in $T$, and let $x=v_1,...,v_k=v$ be the vertices on this path. We prove by induction that $$w_{v_i}(p(v)) \leq w(e)-\sum_{t' \in P_{v_i}} c(t'),$$ where $w_{v_i}(p(v))$ is the value obtained in line \ref{l1} of the algorithm of $v_i$ (or at the initialization if $v_i$ is a leaf). For $i=1$, let $e_{v_1,p(v)}$ be the minimum weight edge incoming to $v_1$ that covers the path between $v_1$ and $p(v)$ in $T$. Note that $w(e_{v_1,p(v)}) \leq w(e)$ because $e$ is an edge incoming to $v_1$ that covers the path between $v_1$ and $p(v)$. 
The value of $w_{v_1}(p(v))$ is the weight of the minimum weight edge covering the path between $v_1$ and $p(v)$, according to the weights $v_1$ receives. In particular, $w_{v_1}(p(v)) \leq w(e_{v_1,p(v)})$, and therefore $w_{v_1}(p(v)) \leq w(e)$. Since $P_{v_1}$ is an empty path, we have $\sum_{t' \in P_{v_1}} c(t') = 0$, which gives $$w_{v_1}(p(v)) \leq w(e)-\sum_{t' \in P_{v_1}} c(t').$$ Assume the claim holds for $i$, and we prove it holds for $i+1$. Denote by $t_i$ the tree edge $\{v_i,v_{i+1}\}$. Note that $v_i$ sends to $v_{i+1}$ the message $(p(v), w_{v_i}(p(v))-min_{v_i})$ since it reduces $min_{v_i}$ from the value of $w_{v_i}(p(v))$ before sending it to its parent. The value of $w_{v_{i+1}}(p(v))$ is the weight of the minimum weight edge covering the path between $v_{i+1}$ and $p(v)$, according to the weights $v_{i+1}$ receives. In particular, $w_{v_{i+1}}(p(v)) \leq w_{v_i}(p(v))-min_{v_i}$. By the induction hypothesis $w_{v_i}(p(v)) \leq w(e)-\sum_{t' \in P_{v_i}} c(t')$, which gives $$w_{v_{i+1}}(p(v)) \leq w(e)-\sum_{t' \in P_{v_i}} c(t') - min_{v_i} = w(e) - \sum_{t' \in P_{v_{i+1}}} c(t').$$ For $i=k$ we get $$c(t) = w_{v}(p(v)) \leq w(e)-\sum_{t' \in P_v} c(t'),$$ which implies that $w(e) \geq \sum_{t' \in P} c(t')$, as claimed. \end{proof} \begin{claim} \label{c3} The sum of the costs of the edges of $T$ is equal to the cost of the solution obtained by the algorithm. \end{claim} \begin{proof} We map each edge $e$ added to the augmentation to a path $P_e$ of tree edges, such that: \begin{enumerate}[(I)] \item The paths that correspond to different augmentation edges are disjoint, and their union is the entire tree $T$. That is, $P_e \cap P_{e'} = \emptyset$ for $e \neq e'$, and $\cup P_e = T$. \item The weight of $e$ is equal to the sum of costs of tree edges in the corresponding path, i.e., $w(e)=\sum_{t' \in P_e} c(t')$. \end{enumerate} Let $e=\{u,x\}$ be an edge added to the augmentation, such that $u$ is an ancestor of $x$ in $T$. 
Let $v$ be the vertex that decides to add $e$ to the augmentation. Note that $v$ decides to add $e$ to the augmentation because it covers the tree edge $\{v,p(v)\}$, which is not yet covered by an edge that one of $v$'s ancestors decided to add to the augmentation. We map $e$ to the tree path $P_e$ that consists of all the tree edges on the path between $x$ and $p(v)$. Note that $e$ covers all the edges on this path (and it may also cover other tree edges, on the path between $p(v)$ and $u$ in $T$). This divides the tree edges into disjoint paths, because the vertices on the path between $x$ and $p(v)$ do not decide to add other edges to the augmentation, since all the relevant tree edges are already covered by $e$. In addition, these paths include all tree edges because the edges added to the augmentation cover all tree edges. This proves (I). Note that $v$ adds $e$ to the augmentation because the tree edge $\{v,p(v)\}$ is not yet covered. Hence, $v$ chooses $e$ because it is the minimum weight edge $e_v$ that covers $\{v,p(v)\}$. By Claim \ref{c1}, it holds that $w(e_v)=\sum_{t' \in P} c(t')$, where $P=P_e$ is the path of tree edges between $x$ and $p(v)$. This proves (II). Together, (I) and (II) complete the proof that the cost of all the edges added to the augmentation is equal to the sum of costs of the edges in $T$. \end{proof} \begin{claim} \label{c4} The cost of any augmentation of $G'$ is at least the sum of costs of the edges of $T$. \end{claim} \begin{proof} Let $A$ be an augmentation in $G'$. We map a subset of edges $E' \subseteq A$ to paths $\{P'_e\}_{e \in E'}$ in $T$ such that: \begin{enumerate}[(I)] \item The paths that correspond to different edges are disjoint, and their union is the entire tree $T$. \item The weight of an edge $e \in E'$ is at least the sum of costs of tree edges on the path $P'_e$. \end{enumerate} We cover tree edges by edges from $A$ as follows. 
While there is a tree edge that is still not covered, we choose an uncovered tree edge $\{v,p(v)\}$ that is closest to the root $r$, where initially $p(v)=r$. Since $A$ is an augmentation, there is an edge $e=\{u,x\}$ in $A$ such that $u$ is an ancestor of $x$ in $T$ and $e$ covers $\{v,p(v)\}$. We map $e$ to the tree path $P'_e$ between $x$ and $p(v)$. The edge $e$ covers all the tree edges on this path, and may cover additional edges closer to the root that are already covered by other edges from $A$. We continue in the same manner until all the tree edges are covered. From the construction, the paths are disjoint and include all tree edges, proving (I). From Claim \ref{c2}, it holds that $w(e) \geq \sum_{t' \in P} c(t')$, where $P=P'_e$ is the path of tree edges between $x$ and $p(v)$, proving (II). To conclude, the cost of all the edges in $A$ is at least the sum of costs of all the edges of $T$. Note that there might be edges from $A$ that are not mapped to paths in $T$, which can only increase the cost of $A$. \end{proof} From Claims \ref{c3} and \ref{c4}, we have that the cost of the augmentation obtained by the algorithm is at most the cost of any augmentation of $G'$; hence, the solution obtained by the algorithm is optimal. This completes the proof of Lemma \ref{wcor}. \end{proof} \subsection{Time analysis} \label{sec:app_analysis} We next analyze the time complexity of the algorithm. In the second traversal of the tree, each parent sends one message to each of its children, which takes $O(h)$ rounds. In the first traversal of the tree, each vertex sends at most $h$ edges to its parent. If each vertex waited to receive all the messages from its children before sending messages to its parent, the time complexity would be $O(h^2)$ rounds. However, using pipelining we get a time complexity of $O(h)$ rounds. 
To show this, we design the algorithm so that each vertex $v$ sends the messages $(u,w_v(u))$ in increasing order of the heights of its ancestors. The main intuition is that although each vertex $v$ may receive $h$ different messages from each of its children during the algorithm, in order for $v$ to send to its parent $p(v)$ the message concerning an ancestor $u$, the vertex $v$ only needs to receive one message from each of its children concerning the ancestor $u$. Hence, if all the vertices send the messages in increasing order of the heights of their ancestors, we can pipeline the messages and get a time complexity of $O(h)$ rounds. We formalize this intuition in the next lemma. \begin{lemma} If all the vertices send the messages in increasing order of the heights of their ancestors, the following holds. A vertex $v$ at height $i$ sends to its parent, by round $i+j$, the message $(u,w_v(u))$ for each ancestor $u$ of $v$ at height $j$. \end{lemma} \begin{proof} We prove the lemma by induction. For a vertex at height 0 (a leaf) the claim holds since $v$ sends the messages in increasing order of heights. We assume that the claim holds for each vertex at height at most $i-1$, and show that it also holds for each vertex $v$ at height $i$. If $j \leq i$ the claim holds trivially, since $v$ does not have ancestors at height $j$. We assume that the claim holds for $i$ and $j-1$, and show that it also holds for $i$ and $j$. Let $v$ be a vertex at height $i$, and let $u$ be an ancestor of $v$ at height $j$. Note that by the induction hypothesis, by round $i-1+j$ all the children $v'$ of $v$ have already sent to $v$ the messages $(u,w_{v'}(u))$. Therefore, $v$ can compute $w_v(u) \gets min_{v' \in A_v} {w_{v'}(u)}$. 
Note that by round $i+j-1$, $v$ has already sent all the messages concerning ancestors at height at most $j-1$, and hence it sends the message concerning $u$ to its parent by round $i+j$, as needed (in the case that $u=p(v)$ no message is sent in the algorithm). Note that $v$ also knows and sends the new weight $w_v(u)$: denote by $i'$ the height of the parent of $v$ (where $i < i'$); then every other ancestor of $v$ is at height greater than $i'$. By round $i+i'$, $v$ knows $min_v=w_v(p(v))$, so for all the relevant values of $j$ (those with $i' \leq j$) it can compute the new weight $w_v(u) \gets w_v(u)-min_v$ by round $i+j$. \end{proof} From the lemma we get that by round $2h$ all the children of $r$ learn about the minimum weight edge that covers the tree edge between them and $r$, so the first traversal is completed after $O(h)$ rounds. It follows that the overall time complexity of the algorithm is $O(h)$ rounds, as needed, giving the following. \begin{lemma} \label{timew} Algorithm \ref{alg} completes in $O(h)$ rounds. \end{lemma} \wTAP* \begin{proof} By Lemma \ref{wcor}, Algorithm \ref{alg} finds an optimal augmentation in $G'$. Its time complexity is $O(h)$ rounds by Lemma \ref{timew}. This augmentation corresponds to an augmentation in $G$ with cost at most twice the cost of an optimal augmentation of $G$ by Lemma \ref{corr} (the proof is stated for the unweighted case, but the same proof shows that it holds for the weighted case as well). Building $G'$ is the same as in the unweighted case and takes $O(h)$ rounds by Lemma \ref{timeb1}. \end{proof} \section{Applications} \label{sec:applic} In this section, we discuss applications of our algorithms, and show that they provide efficient algorithms for additional related problems. \\ \textbf{Minimum Weight 2-Edge-Connected Spanning Subgraph:} In the minimum weight 2-edge-connected spanning subgraph problem (2-ECSS), the input is a 2-edge-connected graph $G$, and the goal is to find the minimum weight 2-edge-connected spanning subgraph of $G$. 
Using {$A_{TAP}$} we have the following. \ECSS* \begin{proof} We apply {$A_{TAP}$} on $G$ and a BFS tree $T$ of $G$. Finding a BFS tree takes $O(D)$ rounds \cite{peleg2000distributed}, and {$A_{TAP}$} takes $O(D)$ rounds since $T$ is a BFS tree. The size of the augmentation $Aug$ is at most $n-1$ because in the worst case we add a different edge in order to cover each tree edge. Hence, $T \cup Aug$ is a 2-edge-connected subgraph with at most $2(n-1)$ edges. Note that any 2-edge-connected graph has at least $n$ edges, which implies a 2-approximation, as claimed. \end{proof} The above algorithm has a better time complexity than the algorithm of \cite{krumke2007distributed}, which finds a $\frac{3}{2}$-approximation to 2-ECSS in $O(n)$ rounds. In the algorithm of \cite{krumke2007distributed}, the augmented tree $T$ is a DFS tree rather than a BFS tree. The same proof as in \cite{krumke2007distributed, khuller1994biconnectivity} shows that if we apply {$A_{TAP}$} on $G$ and a DFS tree, we also obtain a $\frac{3}{2}$-approximation to 2-ECSS in $O(n)$ rounds. For weighted 2-ECSS, using {$A_{wTAP}$} gives the following. \begin{theorem} There is a distributed 3-approximation algorithm for weighted 2-ECSS in the CONGEST model that completes in $O(h_{MST}+\sqrt{n}\log^*{n})$ rounds, where $h_{MST}$ is the height of the MST. \end{theorem} \begin{proof} We follow the same approach as \cite{krumke2007distributed}. We start by constructing an MST, which takes $O(D+\sqrt{n}\log^*{n})$ rounds \cite{kutten1995fast}, and then we augment it using {$A_{wTAP}$} in $O(h_{MST})$ rounds.\footnote{We assume that the MST is unique. Otherwise, $h_{MST}$ is the height of the MST we construct.} Let $w(A)$ be the weight of an optimal solution $A$ to weighted 2-ECSS. Since both the MST and an optimal augmentation have weight at most $w(A)$, and since our algorithm for weighted TAP gives a 2-approximation, this approach gives a 3-approximation for weighted 2-ECSS.
\end{proof} This algorithm has a better time complexity than the algorithm of \cite{krumke2007distributed}, which takes $O(n \log{n})$ rounds, with the same approximation ratio. \\ \textbf{Increasing the Edge-Connectivity from 1 to 2:} {$A_{wTAP}$} is a 2-approximation algorithm for TAP, but it can also be used to increase the connectivity of any spanning subgraph $H$ of $G$ from $1$ to $2$. In order to do so, we start by finding a spanning tree $T$ of $H$. Note that it is not enough to apply {$A_{TAP}$} on $T$ and take the augmentation obtained, since edges from $H$ can be added to the augmentation at no cost. Hence, we apply {$A_{wTAP}$} on $G$ and $T$, where we set the weights of all the edges of $H$ to $0$. The augmentation $Aug$ we obtain is a set of edges such that $T \cup Aug$ is 2-edge-connected, which also implies that $H \cup Aug$ is 2-edge-connected. In addition, its cost is at most twice the cost of an optimal augmentation of $H$, because any augmentation of $H$ corresponds to an augmentation of $T$ with the same cost, and $Aug$ is a $2$-approximation to the optimal augmentation of $T$. The time complexity is $O(D_H)$ rounds, where $D_H$ is the diameter of $H$: finding a spanning tree $T$ of $H$ takes $O(D_H)$ rounds, and applying {$A_{wTAP}$} takes $O(D_H)$ rounds since the height of $T$ is at most $D_H$. \\ \textbf{Verifying 2-Edge-Connectivity:} The algorithm {$A_{TAP}$} can be used in order to verify whether a connected graph $G$ is 2-edge-connected in $O(D)$ rounds, where at the end of the algorithm all the vertices know if $G$ is 2-edge-connected.\footnote{A verification algorithm with the same complexity can also be deduced from the edge-biconnectivity algorithm of Pritchard \cite{pritchard2005robust}.} We start by building a BFS tree $T$ of $G$ and then apply {$A_{TAP}$} to $G$ and $T$. Note that when we find an optimal augmentation in $G'$ by {$A_{Aug}$}, each vertex $v$ is responsible for covering the tree edge $\{v,p(v)\}$.
If the graph $G$ is 2-edge-connected, all the edges can be covered. If the graph $G$ is not 2-edge-connected, then there is a tree edge $\{v,p(v)\}$ that is a bridge in the graph, and hence cannot be covered by any edge in $G$. In such a case, $v$ identifies that it cannot cover the edge, and hence the graph is not 2-edge-connected. Therefore, after scanning the tree from the leaves to the root in {$A_{Aug}$}, we can distinguish between these two cases, which takes $O(D)$ rounds. The root $r$ can distribute the information to all the vertices in $O(D)$ rounds as well. \section{A 4-approximation for Unweighted TAP in $\widetilde{O}(D+\sqrt{n})$ rounds} \label{sec:faster} The time complexity of {$A_{TAP}$} and {$A_{wTAP}$} is linear in the height of $T$. When $h$ is large, we suggest a much faster $O(D+\sqrt{n}\log^*{n})$-round algorithm for unweighted TAP, proving Theorem \ref{uTAPtwo}. \uTAPtwo* The structure of the algorithm is the same as the structure of {$A_{TAP}$}. It starts by building the same virtual graph $G'$, and then it finds an augmentation in $G'$. However, now we do not necessarily obtain an optimal augmentation in $G'$, but rather a 2-approximation to the optimal augmentation of $G'$, which results in a 4-approximation to the optimal augmentation in $G$. Since we want to reduce the time complexity, our algorithm cannot scan the whole tree anymore. Therefore, we can no longer directly use the LCA labeling scheme and the algorithm {$A_{Aug}$} for finding an optimal augmentation. To overcome this, we break the tree $T$ into fragments, and we divide the algorithm into local parts, in which we communicate in each fragment separately, and global parts, in which we coordinate between different fragments over a BFS tree. This approach is also useful in other distributed algorithms for global problems, such as finding an MST \cite{kutten1995fast} or a minimum cut \cite{nanongkai2014almost}.
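The fragment idea can be illustrated with a small sequential sketch. The greedy partition below cuts off a subtree as a new fragment whenever it accumulates $\sqrt{n}$ vertices, which guarantees at most $O(\sqrt{n})$ connected fragments; it is only an illustration of the fragment structure (the diameter guarantee of the real decomposition needs the more careful Kutten--Peleg construction used by the algorithm), and all names are ours.

```python
import math
from collections import defaultdict

def fragment_tree(parent, root):
    """Greedily split a rooted tree into connected fragments: process
    vertices children-before-parents, and cut off a piece as a new
    fragment whenever it accumulates sqrt(n) vertices.  Returns a dict
    mapping each vertex to its fragment root."""
    n = len(parent)
    threshold = max(1, math.isqrt(n))
    children = defaultdict(list)
    for v, p in enumerate(parent):
        if v != root:
            children[p].append(v)

    # pre-order via an explicit stack; reversed, it visits every child
    # before its parent
    order, stack = [], [root]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children[v])

    pending = [1] * n       # size of the uncut piece still hanging at v
    frag_root = [None] * n
    for v in reversed(order):
        if v != root and pending[v] < threshold:
            pending[parent[v]] += pending[v]   # merge into the parent
        else:
            frag_root[v] = v                   # cut: v roots a fragment

    # every vertex inherits the nearest cut ancestor as its fragment root
    frag = {}
    for v in order:                            # top-down
        frag[v] = v if frag_root[v] == v else frag[parent[v]]
    return frag
```

Since each cut piece (except possibly the root's) contains at least $\sqrt{n}$ vertices and the pieces are disjoint, at most $\sqrt{n}+1$ fragments are created.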
The challenge is showing that this approach guarantees a good approximation. Since our algorithm does not scan the whole tree $T$, it may add different edges in order to cover the same tree edges, which makes the analysis much more involved. \\ \textbf{Building $G'$ from $G$:} To build $G'$ from $G$ we use the labeling scheme for LCAs that we used in {$A_{TAP}$}. However, applying this scheme directly takes $O(h)$ rounds. We show how to compute all the relevant LCAs more efficiently, in $O(D + \sqrt{n})$ rounds. The idea is to apply the labeling scheme on each fragment separately to obtain \emph{local labels}, and to apply the labeling scheme on the tree of fragments to obtain \emph{global labels}. We show that using the local and global labels, and additional information on the structure of the tree of fragments, each vertex can compute all the edges incoming to it in $G'$. \\ \textbf{Finding an augmentation in $G'$:} In order to find an augmentation in $G'$, we need to cover tree edges between fragments (\emph{global edges}) and tree edges in the same fragment (\emph{local edges}). We next give a high-level overview of our approach; the exact algorithm differs slightly from this description and appears in Section \ref{sec:app_aug}. We start by computing all the maximal edges that cover the global edges. To cover all the global edges, one approach could be to add all these maximal edges to the augmentation. However, this cannot guarantee a good approximation. Instead, we apply {$A_{Aug}$} on the tree of fragments in order to cover all the global edges. Then, we apply it on each fragment separately in order to cover the local edges in the fragment that are still not covered. This algorithm requires coordination between different fragments, since each vertex $v$ needs to learn whether the tree edge $\{v,p(v)\}$ is already covered after the first part of the algorithm.
In addition, although the second part is applied on each fragment separately, a vertex $v$ may need to add an edge incoming to another fragment to cover the tree edge $\{v,p(v)\}$. To achieve an efficient time complexity, we show how to use only $O(\sqrt{n})$ different messages for the whole coordination of the algorithm. \\ We next provide full details of the algorithm. In Section \ref{frag}, we explain how we break the tree into fragments using the MST algorithm of Kutten and Peleg \cite{kutten1995fast}. In Section \ref{sec:app_lca}, we show how we build the graph $G'$, and in Section \ref{sec:app_aug} we explain how we find an augmentation in $G'$. The approximation ratio analysis appears in Section \ref{sec:app_approx}. \subsection{Breaking $T$ into fragments} \label{frag} We break the tree $T$ into fragments, such that each fragment is a tree with diameter at most $O(\sqrt{n})$ and there are at most $O(\sqrt{n})$ fragments. We do this by using the MST algorithm of Kutten and Peleg \cite{kutten1995fast}, which has a time complexity of $O(D+\sqrt{n}\log^*{n})$ rounds. We say that a tree edge is a \textit{local} edge if its vertices are in the same fragment, and a \textit{global} edge if it connects two fragments. The tree of fragments $T_F$ is the tree obtained by contracting each fragment $F$ into one vertex $v_F$ and having an edge between $v_{F_1}$ and $v_{F_2}$ if the two fragments are connected by a global edge. Since there are at most $O(\sqrt{n})$ fragments, $T_F$ is of size $O(\sqrt{n})$. Each fragment has a root, which is the vertex $v$ closest to $r$ in the fragment. Our algorithm is divided into local parts, in which we communicate in each fragment separately, which results in a time complexity proportional to the fragments' diameter, $O(\sqrt{n})$, and global parts, in which we coordinate between different fragments over a BFS tree rooted at $r$. Building a BFS tree rooted at $r$ takes $O(D)$ rounds \cite{peleg2000distributed}.
Using the BFS tree we can distribute $k$ different messages from vertices in the tree to all the vertices in the tree in $O(D+k)$ rounds: we first collect all the messages in the root $r$ using upcast, and then $r$ broadcasts the messages to all the vertices in the tree. Each of these parts takes $O(D+k)$ rounds \cite{peleg2000distributed}. We show that it is enough to distribute $O(\sqrt{n})$ different messages for the coordination, which results in a time complexity of $O(D + \sqrt{n})$ rounds. The overall time complexity of the algorithm is $O(D+\sqrt{n}\log^*{n})$ rounds. \subsection{Building $G'$ from $G$} \label{sec:app_lca} In order to build $G'$ from $G$, it is enough that each vertex knows all the edges incoming to it in $G'$. To achieve this, we use the labeling scheme for LCAs that we used for {$A_{TAP}$}. However, applying this scheme takes $O(h)$ rounds, and in order to avoid the dependence on $h$ we break the label into a local part and a global part in the following way: \begin{itemize} \item We first apply the labeling scheme for LCAs on each fragment separately, to obtain local labels. \item Next, we apply the labeling scheme for LCAs on $T_F$, such that each fragment gets a label. This is the global label of all the vertices in the fragment. \end{itemize} The first part takes $O(h_F)$ rounds on a fragment $F$ of height $h_F$. Since the diameter of each fragment is $O(\sqrt{n})$, it follows that this part takes $O(\sqrt{n})$ rounds. In order to implement the second part efficiently, we first distribute information about the global edges to all the vertices. Note that each global edge connects two fragments. We assume that each fragment has an id known to all the vertices in the fragment, say, the id of the root of the fragment, which it can distribute to all the vertices in the fragment in $O(\sqrt{n})$ rounds.
For each global edge $e=\{v,p(v)\}$, the vertex $v$ distributes the message $(id_1,id_2,\ell_1,\ell_2)$ where $id_1,id_2$ are the ids of the fragments of $v$ and $p(v)$, and $\ell_1$, $\ell_2$ are the local labels of $v$ and $p(v)$. Since there are $O(\sqrt{n})$ global edges, we can distribute this information over the BFS tree to all the vertices in $O(D + \sqrt{n})$ rounds. After distributing the information about the global edges to all the vertices, they all learn the whole structure of $T_F$. Now each vertex can compute locally the labeling scheme for LCAs on $T_F$ and, in particular, learn its global label. Note that applying the labeling scheme does not require communication, so the total round complexity of the second part is $O(D + \sqrt{n})$. We now explain how we use the local and global labels in order to compute LCAs in $T$. Assume the vertices $v,u$ have the local labels $\ell_v,\ell_u$ and the global labels $g_v,g_u$, respectively: \begin{itemize} \item If $g_v=g_u$ then $v$ and $u$ are in the same fragment. It follows that their LCA is in this fragment, since the root of the fragment is an ancestor of both of them. In this case we use the local labels $\ell_v,\ell_u$ in order to compute the local label of their LCA in the fragment, whose global label is $g_v$. \item If $g_v \neq g_u$ then $v$ and $u$ are in different fragments $F_v, F_u$. They use the global labels in order to compute the global label $g$ of the fragment $F$ that is the LCA of $F_v,F_u$ in $T_F$. In this case it follows that the LCA of $v$ and $u$ in $T$ is in $F$, and its global label is $g$. If $F=F_v$ it follows that $v$ is the LCA of $v$ and $u$, so its local label is $\ell_v$. Similarly, if $F=F_u$ then its local label is $\ell_u$. Otherwise, in order to find the local label of the LCA, note that $v$ and $u$ know the whole structure of $T_F$. In particular, they can find the paths between $F_v$ and $F$ in $T_F$, and between $F_u$ and $F$ in $T_F$.
The last edges on these paths are global edges of the form $e_1=\{v_1,p(v_1)\},e_2=\{v_2,p(v_2)\}$ where $p(v_1),p(v_2)$ are in $F$ ($e_1 \neq e_2$, otherwise we get a contradiction to the fact that $F$ is the LCA of $F_v,F_u$ in $T_F$). Note that $v$ and $u$ know the local labels of all the vertices in global edges, and in particular they know the local labels $\ell_1,\ell_2$ of $p(v_1),p(v_2)$. They can use $\ell_1,\ell_2$ in order to compute the local label of the LCA of $p(v_1),p(v_2)$ in $F$. This is the LCA of $v,u$ in $T$. In conclusion, using $g_v,g_u$, $v$ and $u$ can compute the local and global labels of their LCA in $T$. The computation is based on the information about global edges that all the vertices know, and does not require communication. \end{itemize} We explained how all the vertices get local and global labels, and how they use these labels in order to compute LCAs in $T$. As in {$A_{TAP}$}, in one round each vertex can send its labels to all its neighbors in $G$, and get their labels. From these labels each vertex can compute the local and global labels of all the edges incoming to it in $G'$ by computing LCAs, which does not require communication. The overall time complexity of constructing $G'$ is $O(D + \sqrt{n})$ rounds, for applying the labeling scheme. This gives the following. \begin{lemma} \label{timeb} Building $G'$ from $G$ takes $O(D + \sqrt{n})$ rounds. \end{lemma} \subsection{Finding an Augmentation in $G'$} \label{sec:app_aug} We next explain how to find an augmentation in $G'$ in $O(D + \sqrt{n})$ rounds. In {$A_{Aug}$}, when we find an augmentation in $G'$, we scan $T$ from the leaves to the root, and whenever we get to a tree edge that is still not covered we cover it by the maximal edge possible. Of two edges $e=\{u,w\}$ and $e'=\{u',w'\}$, where $u,u'$ are ancestors of $w,w'$ respectively, $e$ is the maximal edge if and only if $u$ is an ancestor of $u'$.
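The ancestor tests behind these comparisons can be made concrete with Euler-tour interval labels, one standard LCA labeling scheme (the actual labels used by the algorithm may differ; this sketch and its names are illustrative). An edge $e=\{u,w\}$ with $u$ the ancestor endpoint covers the tree edge $\{v,p(v)\}$ exactly when $v$ is an (inclusive) ancestor of $w$ and $u$ is an ancestor of $p(v)$, and of two covering edges the maximal one is the one whose upper endpoint is the higher ancestor:

```python
from collections import defaultdict

def interval_labels(parent, root):
    """Euler-tour labels: a is an (inclusive) ancestor of b iff
    tin[a] <= tin[b] and tout[b] <= tout[a]."""
    children = defaultdict(list)
    for v, p in enumerate(parent):
        if v != root:
            children[p].append(v)
    tin, tout, clock = {}, {}, 0
    stack = [(root, False)]
    while stack:
        v, done = stack.pop()
        if done:
            tout[v] = clock          # exit time
            continue
        tin[v] = clock               # entry time
        clock += 1
        stack.append((v, True))
        for c in children[v]:
            stack.append((c, False))
    return tin, tout

def is_ancestor(tin, tout, a, b):
    return tin[a] <= tin[b] and tout[b] <= tout[a]

def covers(tin, tout, e, v, parent):
    """Does the edge e = (u, w), with u the ancestor endpoint, cover the
    tree edge {v, parent[v]}?"""
    u, w = e
    return is_ancestor(tin, tout, v, w) and is_ancestor(tin, tout, u, parent[v])

def maximal_edge(tin, tout, e1, e2):
    """Of two edges covering a common tree edge, the maximal one is the
    one whose upper endpoint is the higher ancestor."""
    return e1 if is_ancestor(tin, tout, e1[0], e2[0]) else e2
```

For example, on the path $0$-$1$-$2$-$3$ rooted at $0$, the edge $(0,3)$ covers the tree edge $\{2,1\}$ while $(1,3)$ does not cover $\{1,0\}$, and $(0,3)$ is maximal among the two.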
We define a variant of this algorithm, {$A'_{Aug}$}, whose input is the tree $T$, the graph $G$, and a set $T_0$ of tree edges from $T$ that are already covered. {$A'_{Aug}$} finds an augmentation in $G$ by applying {$A_{Aug}$}, with the difference that now we cover only the tree edges that are not in $T_0$. When we cover an edge, we still cover it by the maximal edge possible. The general structure of the algorithm for finding an augmentation in $G'$ is as follows: \begin{itemize} \item Each leaf $v$ covers the tree edge $\{v,p(v)\}$ by the maximal edge possible. \item We cover global edges that are still not covered by applying {$A'_{Aug}$} on $T_F$. \item We cover local edges that are still not covered by applying {$A'_{Aug}$} on each fragment separately. \end{itemize} We next describe how to implement the above efficiently in a distributed way. \subsubsection{Covering Leaf Edges} For a leaf $v$ in $T$, we say that the tree edge $\{v,p(v)\}$ is a \textit{leaf edge}. We start the algorithm by covering leaf edges: each leaf $v$ covers the tree edge $\{v,p(v)\}$ by the maximal edge possible. Since each vertex knows the labels of all the edges incoming to it in $G'$, it knows which is the maximal one as in {$A_{Aug}$}. This computation does not require any communication. However, for the rest of the algorithm each vertex $u$ needs to know if the tree edge $\{u,p(u)\}$ is covered by one of the edges we added in order to cover leaf edges. In order to do that, we need coordination between the vertices. We divide this task into a local coordination at each fragment, and a global coordination between fragments. \paragraph*{Local coordination:$\ $} In this part, each vertex $v$ learns about the maximal edge that covers $\{v,p(v)\}$ among edges added to the augmentation by leaves in its fragment, if such exists. 
In order to do this, we apply the following algorithm in each fragment separately: we scan the fragment from its leaves to its root by having each leaf $v$ of the fragment that is also a leaf in $T$ send to its parent the labels of the edge it added. Any leaf of the fragment that is not a leaf in $T$ sends to its parent an empty message. Each internal vertex $v$ gets messages from all its children. If at least one of the messages is an edge that covers $\{v,p(v)\}$, $v$ sends to its parent the labels of the maximal edge among those it received from its children. Otherwise, it sends an empty message. Note that using the labels of an edge $e=\{u,w\}$, where $u$ is an ancestor of $w$, a vertex $v$ knows if this edge covers $\{v,p(v)\}$ using LCA computations: it checks if $v$ is an ancestor of $w$ and if $u$ is an ancestor of $p(v)$. It can also learn which is the maximal edge by LCA computations. Note that by the end of the algorithm each vertex $v$ learns if the tree edge $\{v,p(v)\}$ is covered by an edge that one of the leaves of the fragment added to the augmentation, and the root of the fragment, $v'$, learns the labels of the maximal edge added to the augmentation by leaves of the fragment that covers the global edge $\{v',p(v')\}$, if such exists. The round complexity of this part is proportional to the diameter of the fragment, and is bounded by $O(\sqrt{n})$. \paragraph*{Global coordination:$\ $} Each vertex $v$ that is a root of a fragment, excluding $r$, sends over the BFS tree the labels of the maximal edge added to the augmentation by leaves of the fragment that covers $\{v,p(v)\}$, if such exists. Since there are at most $O(\sqrt{n})$ fragments, there are at most $O(\sqrt{n})$ messages sent, so we can distribute these messages over the BFS tree to all the vertices in $O(D + \sqrt{n})$ rounds. Note that using the labels of an edge $e$, a vertex $v$ knows if this edge covers $\{v,p(v)\}$ using LCA computations.
In particular, each vertex $v$ knows if the tree edge $\{v,p(v)\}$ is covered by one of the $O(\sqrt{n})$ edges sent to all the vertices. Note that although there may be $\omega(\sqrt{n})$ leaves in $T$, and each one adds an edge to the augmentation, after the local coordination and the global coordination, in which each vertex receives information about $O(\sqrt{n})$ edges, each vertex $v$ knows if the tree edge $\{v,p(v)\}$ is covered by an edge added by a leaf in $T$. This is proven in the next claim. \begin{claim} \label{glob} After the local and global coordination, each vertex $v$ knows if the tree edge $\{v,p(v)\}$ is covered by an edge added by a leaf in $T$. \end{claim} \begin{proof} Let $v$ be a vertex and assume there is an edge $e=\{u,w\}$ added by a leaf $u$ in $T$, which covers the tree edge $\{v,p(v)\}$. If $u$ is in the same fragment as $v$, in the local coordination $v$ learns about the maximal edge added by a leaf in the fragment that covers $\{v,p(v)\}$, and in particular it learns that there is an edge that covers $\{v,p(v)\}$, as needed. Assume now that $u,v$ are in different fragments, $F_u, F_v$, and there is no leaf in $F_v$ that adds an edge that covers $\{v,p(v)\}$. Let $r_u$ be the root of $F_u$, and let $e_u$ be the edge $r_u$ sends over the BFS tree. Note that $e_u$ covers $\{v,p(v)\}$ because the edge $e$ covers $\{r_u,p(r_u)\}$ and covers $\{v,p(v)\}$, and $e_u$ is the maximal edge that covers $\{r_u,p(r_u)\}$. So $v$ learns there is an edge added by a leaf that covers $\{v,p(v)\}$, as needed. \end{proof} \begin{claim} \label{glob2} After the global coordination, each vertex knows if a global edge $\{v,p(v)\}$ is covered by an edge added by a leaf in $T$. \end{claim} \begin{proof} Note that all the vertices know the labels of all the global edges. 
If a global edge $\{v,p(v)\}$ is covered by an edge $\{u,w\}$, where $u$ is a leaf and $u$ is in the fragment $F_u$, then the edge $e_u$ sent by the root $r_u$ of $F_u$ covers $\{v,p(v)\}$ as well. Since all vertices learn about the labels of $e_u$, by LCA computations they can learn that there is an edge added by a leaf that covers $\{v,p(v)\}$. \end{proof} \subsubsection{Covering Global Edges} The goal now is to cover global edges that are still not covered by applying {$A'_{Aug}$} to $T_F$. Note that the maximal edge that covers a global edge must be a maximal edge incoming to a fragment: assume that $e=\{v_{F_1},v_{F_2} \}$ is the maximal edge that covers the global edge $e'$ in $T_F$ and that $e$ is incoming to $F_1$, then the maximal edge $e_M$ incoming to $F_1$ covers $e'$ as well. Since $e$ is the maximal edge that covers $e'$, it follows that $e=e_M$. Therefore, in order to apply {$A'_{Aug}$} it is enough to know the maximal incoming edge to each fragment in $T_F$ (they are the only edges that may be added to the augmentation), and which global edges are already covered. Note that all the vertices know which global edges are already covered after the global coordination, according to Claim \ref{glob2}. In order to learn the maximal edge incoming to a fragment, each fragment computes this edge by scanning the fragment from its leaves to its root. A leaf sends to its parent the labels of the maximal edge incoming to it. Each internal vertex $v$, excluding the root of the fragment, sends to its parent the labels of the maximal edge covering $\{v,p(v)\}$ among the edges it receives from its children and the maximal edge incoming to it (it can compute the maximal edge by LCA computations using the labels of the edges). At the end, the root $v$ of each fragment learns about the maximal edge incoming to the fragment that covers the global edge $\{v,p(v)\}$, if such exists. 
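This bottom-up scan can be sketched sequentially. The sketch below compares ancestors by depth, which is valid here because every candidate edge at a vertex comes from its subtree, so all upper endpoints that survive the filter lie on the single path from the vertex to the root; the edge representation and all names are ours.

```python
from collections import defaultdict

def max_incoming_edge(parent, root, depth, incoming):
    """Bottom-up scan of one fragment.  Each vertex v forwards, among
    its own incoming edges and those received from its children, the
    maximal edge that covers the tree edge {v, parent[v]}.  An edge is
    a pair (u, w) with u the ancestor endpoint; among edges whose upper
    endpoints lie on one root path, smaller depth[u] means 'more
    maximal'.  Returns, for each vertex v, the maximal covering edge
    known in v's subtree (None if there is none)."""
    children = defaultdict(list)
    for v, p in enumerate(parent):
        if v != root:
            children[p].append(v)
    best = {}

    def up(v):  # post-order: children first
        cand = list(incoming.get(v, []))
        for c in children[v]:
            up(c)
            if best[c] is not None:
                cand.append(best[c])
        # keep only edges whose upper endpoint is a strict ancestor of
        # v, i.e. edges that also cover the tree edge {v, parent[v]}
        cand = [e for e in cand if depth[e[0]] < depth[v]]
        best[v] = min(cand, key=lambda e: depth[e[0]], default=None)

    up(root)
    return best
```

At the root of the fragment, `best` corresponds to the maximal edge incoming to the fragment that covers the root's global edge.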
The root of each fragment (excluding $r$) distributes over the BFS tree the (local and global) labels of the maximal edge $e$ incoming to its fragment. Note that the global labels of $e$ indicate which fragments are connected by $e$. Since there are $O(\sqrt{n})$ fragments, we can distribute all this information over the BFS tree to all the vertices in $O(D+\sqrt{n})$ rounds. The computation at each fragment takes $O(\sqrt{n})$ rounds and the communication between fragments takes $O(D + \sqrt{n})$ rounds. So the overall time complexity of this part is bounded by $O(D + \sqrt{n})$ rounds. After all the vertices learn the maximal edge incoming to each fragment and which global edges are already covered, each vertex can apply {$A'_{Aug}$} on $T_F$ locally, without any communication. When a vertex covers a global edge, it covers it by the maximal edge possible with respect to $T$. This is also a maximal edge with respect to $T_F$, but there may be several edges in $T_F$ that connect the same fragments, in which case we use the local labels in order to choose the maximal between them. Note that after applying {$A'_{Aug}$}, each vertex knows which of the maximal edges incoming to a fragment is added to the augmentation and, in particular, a vertex $v$ knows if the maximal edge incoming to it is added to the augmentation and if there is an edge added to the augmentation that covers the tree edge $\{v,p(v)\}$. The edges added to the augmentation cover all the global edges and some of the local edges. We next cover the local edges that are still not covered. \subsubsection{Covering Local Edges} In this part, we cover local edges that are still not covered by applying {$A'_{Aug}$} locally in each fragment. The idea is to scan the fragment from its leaves to its root, and each time we get to a tree edge that is still not covered, we cover it by the maximal edge possible. 
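A sequential, single-fragment sketch of this greedy rule, with ancestors compared by depth (valid for candidates arising within one subtree) and the already-covered set $T_0$ given as input; names and representations are ours, and the distributed implementation described in the text differs in how the information is gathered:

```python
from collections import defaultdict

def greedy_augment(parent, root, depth, incoming, covered):
    """Scan the tree from the leaves to the root; whenever the tree
    edge {v, parent[v]} is not yet covered, add the maximal edge
    covering it and mark every tree edge on that edge's path as
    covered.  Edges are pairs (u, w) with u the ancestor endpoint; a
    tree edge {v, parent[v]} is identified by its lower endpoint v, and
    `covered` plays the role of T_0."""
    children = defaultdict(list)
    for v, p in enumerate(parent):
        if v != root:
            children[p].append(v)
    covered = set(covered)
    aug = []

    def mark(u, w):
        # mark every tree edge on the path from w up to u as covered
        x = w
        while x != u:
            covered.add(x)
            x = parent[x]

    def scan(v):
        # return the candidate edges from v's subtree that still cover
        # the tree edge {v, parent[v]}
        cand = list(incoming.get(v, []))
        for c in children[v]:
            cand.extend(scan(c))
        cand = [e for e in cand if depth[e[0]] < depth[v]]
        if v != root and v not in covered and cand:
            e = min(cand, key=lambda e: depth[e[0]])  # the maximal edge
            aug.append(e)
            mark(*e)
        return cand

    scan(root)
    return aug, covered
```

On the path $0$-$1$-$2$-$3$-$4$ with candidate edges $(2,4)$ and $(0,2)$ and nothing pre-covered, the scan adds $(2,4)$ at vertex $4$ and $(0,2)$ at vertex $2$, after which every tree edge is covered.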
Note that the maximal edge covering a tree edge $\{v,p(v)\}$ may be the maximal edge incoming to any vertex in the subtree rooted at $v$. In particular, it may be incoming to a vertex in another fragment $F$. However, in this case it must be the maximal edge incoming to $F$. Since each vertex knows the maximal edges incoming to each fragment, we can compute the maximal edge covering a tree edge without communication with other fragments. Note that each vertex also knows which tree edges are covered by edges previously added to the augmentation. Denote by $T_0$ all the tree edges that are covered by edges added to the augmentation in order to cover leaf edges or global edges. The distributed implementation of {$A'_{Aug}$} is very similar to that of {$A_{Aug}$}. However, there are several differences: \begin{itemize} \item We cover only tree edges that are not in $T_0$. Note that each vertex $v$ knows if the edge $\{v,p(v)\}$ is in $T_0$. \item In order to apply the algorithm we need to compute for each edge the maximal edge that covers it. A leaf $v$ of the fragment computes this edge among the edges incoming to it and the maximal edges incoming to a fragment. An internal vertex computes it as in {$A_{Aug}$}, using the edges it receives from its children and the edges incoming to it. \item At the end of the algorithm, as in {$A_{Aug}$}, each vertex knows if the maximal edge it sent to its parent is added to the augmentation. In particular, each vertex in the fragment learns if the maximal edge incoming to it is added to the augmentation by another vertex in the fragment. However, we may decide to add to the augmentation edges incoming to other fragments. We explain next how to distribute this information between fragments. \end{itemize} The computation on each fragment takes $O(\sqrt{n})$ rounds. In order to end the algorithm, each vertex needs to know if the maximal edge incoming to it is added to the augmentation, which we achieve using global coordination between the fragments.
\paragraph*{Global coordination:$\ $} Note that when we apply {$A'_{Aug}$}, a vertex may decide to add to the augmentation one of the $O(\sqrt{n})$ maximal edges incoming to a fragment. A vertex that decides to add such an edge sends the labels of this edge over the BFS tree. Since there are at most $O(\sqrt{n})$ such edges, there are at most $O(\sqrt{n})$ different messages sent over the BFS tree, and we can distribute this information over the BFS tree to all the vertices in $O(D + \sqrt{n})$ rounds. So, at the end, each vertex knows if the maximal edge incoming to it is added to the augmentation, as needed. Note that we covered all the edges that were still not covered, so the solution obtained is an augmentation. The overall time complexity of the algorithm for finding an augmentation in $G'$ is $O(D + \sqrt{n})$ rounds. We next show that it is a 2-approximation to the optimal augmentation in $G'$. As in {$A_{TAP}$}, after we have an augmentation in $G'$ we can convert it to an augmentation in $G$ that is at most twice the size, which implies that we get a 4-approximation to the optimal augmentation in $G$. \begin{lemma} \label{time4} The time complexity of the whole algorithm is $O(D+\sqrt{n}\log^*{n})$ rounds. \end{lemma} \begin{proof} Breaking the tree $T$ into fragments takes $O(D+\sqrt{n}\log^*{n})$ rounds, using the MST algorithm of Kutten and Peleg \cite{kutten1995fast}. Building $G'$ from $G$ takes $O(D+\sqrt{n})$ rounds by Lemma \ref{timeb}, and finding an augmentation in $G'$ takes $O(D+\sqrt{n})$ rounds, as discussed throughout this section. \end{proof} \subsection{Approximation Ratio} \label{sec:app_approx} \subsubsection*{Intuition for the analysis} We next show that the size of our solution is at most twice the size of an optimal augmentation in $G'$. Denote by $A$ the solution obtained by the algorithm and by $A^*$ an optimal augmentation in $G'$.
In the correctness proof of {$A_{TAP}$} we show a one-to-one mapping from $A$ to $A^*$, but this mapping is no longer one-to-one here. However, if we could show that each edge in $A^*$ is mapped to by at most two edges from $A$, we could obtain a 2-approximation. Unfortunately, this does not hold either. Our approach is to divide the edges in $A$ into two types $A_1,A_2$ as follows. We map each edge $e \in A$ to a corresponding path $P_e$ in $T$. If $P_e$ contains an internal vertex with more than one child in the tree we say that $e \in A_1$, otherwise $e \in A_2$. Then, we show that $|A_1| \leq 2|A^*|$, and $|A_2| \leq 2|A^*|$. The main idea is that the number of edges in $A_1$ is related to the degrees of internal vertices in $T$, which affects the number of leaves in the tree. We use this in order to show that $|A_1| \leq 2\ell$ where $\ell$ is the number of leaves in $T$. Note that $\ell$ is a lower bound on the size of any augmentation in $G'$, since we need to add to the augmentation a different edge in order to cover each one of the leaves. This gives $|A_1| \leq 2|A^*|$. In order to show that $|A_2| \leq 2|A^*|$, we use the fact that the edges of $A_2$ correspond to tree paths with a simple structure. This allows us to show a mapping from $A_2$ to $A^*$ in which each edge in $A^*$ is mapped to by at most two edges from $A_2$, giving $|A_2| \leq 2|A^*|$. In conclusion, $|A| = |A_1 \cup A_2| \leq 4|A^*|$. A more delicate analysis extending these ideas gives $|A| \leq 2|A^*|$. This gives a 2-approximation to the optimal augmentation of $G'$, which results in a 4-approximation to the optimal augmentation in $G$. \subsubsection*{Approximation ratio analysis} Each edge $e \in A$ is added to $A$ in the algorithm in order to cover some tree edge that is still not covered; denote this edge by $t(e)$. Let $t(A)$ be the set of all such tree edges.
For each edge $t(e) \in t(A)$, we go up in the tree until we get to another tree edge $t' \in t(A)$, or to the root in case there is no such edge. If $t'$ exists, we denote it by $t_2(e)$. \begin{claim} \label{obs1} Let $e \in A$, such that $t(e)$ is a global edge and $t_2(e)$ exists. Then there is no edge that covers both $t(e)$ and $t_2(e)$. \end{claim} \begin{proof} Note that $t(e),t_2(e) \in t(A)$, i.e., when we get to them in the algorithm they are still not covered. Also, since $t(e)$ is a global edge and $t_2(e)$ is on the path between $t(e)$ and $r$, we get to $t(e)$ before $t_2(e)$ in the algorithm. When we get to $t(e)$ in the algorithm we cover it by the maximal edge possible. This edge does not cover $t_2(e)$, otherwise $t_2(e) \not \in t(A)$. \end{proof} Let $V_{>1}$ be the set of vertices with more than one child in $T$. We write $A=A_1 \cup A_2$ in the following way: let $e \in A$, $t(e)=\{v,p(v)\}$, and $t_2(e)=\{u,p(u)\}$ if it exists. Let $P(e)=\{p(v)=v_1,...,v_k\}$ be the vertices on the tree path between $p(v)$ and $v_k=u$ if $t_2(e)$ exists, or between $p(v)$ and $v_k=r$ otherwise. If there is a vertex $v' \in P(e)$ such that $v' \in V_{>1}$, we say that $e \in A_1$, and otherwise $e \in A_2$. \begin{claim} \label{obs2} There is at most one edge $e \in A_2$ such that $t_2(e)$ does not exist. \end{claim} \begin{proof} Assume there are two edges $e_1,e_2 \in A_2$ such that $t_2(e_1),t_2(e_2)$ do not exist. Then, on the path $P_1$ between $t(e_1)$ and $r$ and on the path $P_2$ between $t(e_2)$ and $r$ there is no vertex in $V_{>1}$. This can only happen if one of $P_1,P_2$ is contained in the other. Assume without loss of generality that $P_1$ contains $P_2$. But then on the path $P_1$ there is another edge in $t(A)$, so $t_2(e_1)$ exists, a contradiction. \end{proof} \begin{claim} \label{path} Let $t=\{v,p(v)\} \in t(A)$ and $e \in A_2$ such that $t_2(e)=\{u,p(u)\}$ where $u$ is an ancestor of $p(v)$.
Then $t(e)$ is on the tree path $P'=\{v,p(v),...,u\}$ from $t$ to $t_2(e)$. \end{claim} \begin{proof} If $t=t(e)$ we are done. Note that since $e \in A_2$, on the tree path $P(e)$ from $t(e)$ to $t_2(e)$ there are no vertices in $V_{>1}$ and no other edges in $t(A)$. If $t(e)$ is not on the path $P'$ from $t$ to $t_2(e)$, it follows that there is a vertex $v' \in P(e)$ such that $v' \in V_{>1}$, at the point where $P(e)$ and $P'$ diverge, or $t \in t(A)$ is in $P(e)$. Either case gives a contradiction. \end{proof} Let $A_1^*$ be the edges in $A^*$ that cover leaf edges. Let $\ell$ be the number of leaves in $T$. \begin{claim} $|A_1^*| = \ell$. \end{claim} \begin{proof} Each leaf edge is covered by a different edge in $A^*$ since all the edges in $G'$ are between an ancestor and its descendant in the tree. Also, each leaf edge is covered by exactly one edge in $A^*$: if two edges $e_1,e_2$ cover the same leaf edge, where without loss of generality $e_1$ is the maximal of the two, then $e_1$ covers all the edges covered by $e_2$, which contradicts the optimality of $A^*$. \end{proof} We divide the leaves into two types in the following way: we map each leaf $v$ to the edge $e_v \in A_1^*$ that covers the corresponding leaf edge. For each edge $e_v$ in $A_1^*$ we look at the corresponding path of tree edges that it covers. If one of the vertices in this path is in $V_{>1}$ we say that $v \in L_1$, otherwise $v \in L_2$. Let $\ell_1=|L_1|$ and $\ell_2=|L_2|$, giving $\ell=\ell_1+\ell_2$. \begin{claim} If there is an edge in $A_1^*$ of the form $\{v,r\}$ that covers a leaf $v \in L_2$ then the solution given by the algorithm is optimal. \end{claim} \begin{proof} Note that if there is an edge in $A_1^*$ of the form $\{v,r\}$ that covers a leaf $v \in L_2$, it follows that $T$ is just the path from $v$ to $r$ and there is one edge that covers all of it.
In such a case, our algorithm is optimal because it starts by adding the maximal edges that cover leaves, and hence it adds this edge and no other edge. \end{proof} We next assume that there are no edges in $A_1^*$ of the form $\{v,r\}$ that cover a leaf $v \in L_2$. According to our assumption, each edge in $A_1^*$ that covers a leaf edge $e_v$ such that $v \in L_2$ is of the form $\{v,u\}$ where $u \neq r$. There are exactly $\ell_2$ tree edges of the form $\{u,p(u)\}$ for all such vertices $u$; denote them by $E_2$. Let $A_2^*$ be all the edges in $A^*$ that cover edges in $E_2$. \begin{claim} $A_1^* \cap A_2^* = \emptyset$. \end{claim} \begin{proof} Let $e = \{u,p(u)\} \in E_2$, so there is a leaf $v \in L_2$ such that $\{v,u\} \in A_1^*$. Note that $e$ is not covered by edges from $A_1^*$ because by the definition of $L_2$, the subtree rooted at $u$ is the path from $v$ to $u$, and the only edge from $A_1^*$ that covers edges on this path is $\{v,u\}$, which does not cover $\{u,p(u)\}$. \end{proof} \begin{claim} \label{A2_size} $|A_2^*| \geq \ell_2$. \end{claim} \begin{proof} There are exactly $\ell_2$ edges in $E_2$. We show that each of them is covered by a different edge from $A_2^*$. Note that if $\{u,p(u)\} \in E_2$ then the subtree rooted at $u$ is a path, in which all edges are covered by an edge from $A_1^*$. In particular, on this path there are no other tree edges from $E_2$. It follows that edges in $E_2$ cannot be on the same path between a leaf and $r$ in the tree, and cannot be covered by the same edge because all the edges in $G'$ are between an ancestor and its descendant in the tree. The claim follows. \end{proof} Let $A_3^* = A^* \setminus (A_1^* \cup A_2^*)$. In order to show that $|A| \leq 2|A^*|$, we prove the following two lemmas: \begin{lemma} \label{lem1} $|A_1| \leq 2|A_1^*| -2$. \end{lemma} \begin{lemma} \label{lem2} $|A_2| \leq 2|A_2^*|+2|A_3^*|+1$.
\end{lemma} To prove Lemma \ref{lem1}, we map edges in $A_1$ to vertices in $V_{>1}$ in the following way: Let $e \in A_1$, such that $t(e)=\{v,p(v)\}$. By definition of $A_1$, on the path $P(e)$ there is a vertex in $V_{>1}$. We map $e$ to a vertex $u \in V_{>1}$ that is closest to $v$ on this path. We need the following claim. \begin{claim} \label{claim_V1} If $u \in V_{>1}$ has $k$ children then it is mapped to by at most $k$ edges. \end{claim} \begin{proof} The edges $e$ mapped to $u$ are such that $t(e)$ is in the subtree rooted at $u$. We divide this subtree into $k$ parts according to the children of $u$. Let $u'$ be a child of $u$, let $T_{u'}$ be the subtree rooted at $u'$, and let $T' = T_{u'} \cup \{u,u'\}$. We show that there is at most one edge $e$ with $t(e) \in T'$ that is mapped to $u$. Assume there are two edges $e_1,e_2 \in A_1$ such that $t(e_1),t(e_2) \in T'$ that are mapped to $u$. Let $P_1,P_2$ be the paths between $t(e_1)$ and $u$, and between $t(e_2)$ and $u$, respectively. If one of $P_1,P_2$ is contained in the other, and assume without loss of generality that $P_1$ contains $P_2$, then $t(e_2)$ is on the path from $t(e_1)$ to $u$. From the definition of $A_1$ there is a vertex $v' \in V_{>1}$ between $t(e_1)$ and $t(e_2)$, which is closer to $t(e_1)$ than $u$, a contradiction to the fact that $e_1$ is mapped to $u$. Otherwise, $P_1$ and $P_2$ diverge in some vertex $v'$ in $T_{u'}$, but then $v'$ is a vertex in $V_{>1}$ that is closer to $t(e_1)$ and $t(e_2)$, a contradiction. \end{proof} Using Claim \ref{claim_V1}, we prove Lemma \ref{lem1}. \begin{proof} [Proof of Lemma \ref{lem1}] For each internal vertex (including $r$) we choose one child and call it the \emph{main child}, and we call the other children \emph{extra children}. Note that all the vertices in $T$ except $r$ are children of some parent, so there are $n-1$ children in $T$. Denote by $x$ the number of extra children in $T$.
There are $n-\ell$ internal vertices, so there are $n-\ell$ main children, giving $x=n-1-(n-\ell)=\ell-1$. By Claim \ref{claim_V1}, if $u \in V_{>1}$ has $k$ children then it is mapped to by at most $k$ edges. It follows that if $u$ has $k-1$ extra children, we map to it at most $k$ edges from $A_1$. Since $k \geq 2$ for every $u \in V_{>1}$, we have $k \leq 2(k-1)$, so each vertex in $V_{>1}$ is mapped to by at most twice its number of extra children. In conclusion, $|A_1| \leq 2x=2\ell-2=2|A_1^*|-2$, which completes the proof. \end{proof} We next prove Lemma \ref{lem2}.
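The counting identity $x=\ell-1$ used in the proof of Lemma \ref{lem1} can be sanity-checked on random trees (an illustrative Python sketch; the parent-array representation is our own choice and not part of the algorithm):

```python
import random

def check_extra_children(n, seed):
    """Build a random rooted tree on vertices 0..n-1 (0 is the root r)
    and check the identity x = ell - 1: the number of extra children
    equals the number of leaves minus one."""
    rng = random.Random(seed)
    parent = {v: rng.randrange(v) for v in range(1, n)}  # random recursive tree
    children = [0] * n
    for v in range(1, n):
        children[parent[v]] += 1
    ell = sum(1 for c in children if c == 0)        # number of leaves
    x = sum(c - 1 for c in children if c >= 1)      # number of extra children
    return x == ell - 1

assert all(check_extra_children(n, s) for n in range(2, 50) for s in range(3))
```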
According to Claim \ref{obs2}, there is at most one edge $e' \in A_2$ such that $t_2(e')$ does not exist. For the proof of Lemma \ref{lem2}, we map all the edges in $A_2$ except $e'$ to edges in $A^*$ in the following way: let $e \in A_2$. If $t(e)$ is a leaf edge or a local edge, we map $e$ to an edge in $A^*$ that covers $t(e)$. Otherwise, we map $e$ to an edge in $A^*$ that covers $t_2(e)$. We need the following claims. \begin{claim} \label{global_claim} If $t(e_1),t(e_2)$ are both global edges that are not leaf edges then $e_1,e_2$ cannot be mapped to the same edge $e \in A^*$. \end{claim} \begin{proof} Assume that $t(e_1),t(e_2)$ are both global edges that are not leaf edges. In this case, $e$ covers both $t_2(e_1),t_2(e_2)$, so the path $P$ between them connects a descendant to its ancestor in the tree. Assume without loss of generality that $t_2(e_2)$ is closer to the root in $P$. By Claim \ref{path}, $t(e_2)$ is in $P$, and it follows that $e$ covers $t(e_2),t_2(e_2)$ where $t(e_2)$ is a global edge, a contradiction to Claim \ref{obs1}. \end{proof} \begin{claim} \label{local} If $t(e_1),t(e_2)$ are both local or leaf edges then $e_1,e_2$ cannot be mapped to the same edge $e \in A^*$. \end{claim} \begin{proof} Assume that $t(e_1),t(e_2)$ are both local or leaf edges. In this case, $e$ covers both $t(e_1),t(e_2)$. Assume without loss of generality that $t(e_2)$ is closer to the root. Note that $t(e_1),t(e_2)$ cannot be in the same fragment, and $t(e_1)$ cannot be a leaf edge: when we get to $t(e_1)$ in the algorithm, we cover it by the maximal edge possible, which covers $t(e_2)$ because the edge $e$ covers both $t(e_1)$ and $t(e_2)$. Hence, if $t(e_1)$ is a leaf edge or is in the same fragment as $t(e_2)$, it follows that $t(e_2) \not \in t(A)$. The same argument shows that $t(e_1),t_2(e_1)$ are not in the same fragment ($t_2(e_1)$ is on the path between $t(e_1)$ and $t(e_2)$ and is also covered by $e$).
Hence, there is a global edge on the path $P$ between $t(e_1)$ and $t_2(e_1)$. Let $g$ be a global edge in $P$ that is closest to $t(e_1)$. If $g \in t(A)$, then when we get to $g$ in the algorithm it is still not covered, and we add the maximal edge possible in order to cover it. This edge covers $t_2(e_1)$ because the edge $e$ covers both $g$ and $t_2(e_1)$. This contradicts the fact that $t_2(e_1) \in t(A)$. Hence, $g \not \in t(A)$, and when we get to it in the algorithm it is already covered by an edge $\widetilde{e}$ added in order to cover a tree edge $g'$. The edge $g'$ may be a leaf edge or a global edge, so $g' \neq t(e_1)$. Note that $t(e_1)$ is on the tree path between $g'$ and $g$: otherwise, we have a vertex in $V_{>1}$ on the path $P'$ from $t(e_1)$ to $g$ (and in particular between $t(e_1)$ and $t_2(e_1)$) at the point where the two paths diverge, or another global edge between $t(e_1)$ and $g$ (if $g'$ is a leaf edge it cannot be on the path between $t(e_1)$ and $g$). Either case gives a contradiction. Hence, $\widetilde{e}$ covers $t(e_1)$, but then $t(e_1) \not \in t(A)$. \end{proof} \begin{proof} [Proof of Lemma \ref{lem2}] Our proof is based on the following claims: \begin{enumerate}[(I)] \item There are at most $\ell_2$ edges in $A_2$ that are mapped to edges in $A_1^*$. \label{I} \item Each edge in $A^*$ is mapped to by at most two edges. \label{II} \item \label{III} Each edge in $A_2^*$ is mapped to by at most one edge. \end{enumerate} From the above three claims we get that the number of edges in $A_2$ is bounded by $1 + \ell_2 + |A_2^*| + 2|A_3^*|$ as follows: there is at most one edge $e'$ that is not included in the mapping, there are at most $\ell_2$ edges in $A_2$ that are mapped to edges in $A_1^*$, at most $|A_2^*|$ edges that are mapped to edges in $A_2^*$, and at most $2|A_3^*|$ edges that are mapped to edges in $A_3^*$. Note that by Claim \ref{A2_size}, $|A_2^*| \geq \ell_2$.
It follows that $|A_2| \leq 2|A_2^*| + 2|A_3^*| + 1$ as needed. \begin{proof} [Proof of (\ref{I})] Let $e^* \in A_1^*$. Then $e^*$ covers a leaf edge $t=\{v,p(v)\}$. Let $P$ be the path of tree edges that $e^*$ covers. Note that $t$ is the only edge in $P$ such that $t \in t(A)$: since we start the algorithm by covering each leaf edge by the maximal edge possible, the edge $e \in A$ added in the algorithm in order to cover $t$ also covers all the edges in $P$. Since the only edges that may be mapped to $e^*$ are edges $\widetilde{e}$ such that $t(\widetilde{e})$ or $t_2(\widetilde{e})$ are in $P$, it follows that the only edge in $A$ that may be mapped to $e^*$ is the edge $e$. Note that if $e \in A_2$, then there are no vertices of $V_{>1}$ on the path $P(e)$; it follows that there are no vertices of $V_{>1}$ on $P$, so $v \in L_2$ by definition. It follows that there are at most $\ell_2$ edges in $A_2$ that are mapped to edges in $A_1^*$. \end{proof} \begin{proof} [Proof of (\ref{II})] Assume that there are two edges $e_1,e_2$ in $A_2$ that are mapped to the same edge $e \in A^*$. From Claim \ref{global_claim}, if $t(e_1),t(e_2)$ are both global edges that are not leaf edges then $e_1,e_2$ cannot be mapped to the same edge $e \in A^*$.
From Claim \ref{local}, if $t(e_1),t(e_2)$ are both local or leaf edges then $e_1,e_2$ cannot be mapped to the same edge $e \in A^*$. It follows that there is no edge in $A^*$ that is mapped to by three or more edges: assume there are three edges $e_1,e_2,e_3$ that are mapped to the same edge $e \in A^*$. At least two of $t(e_1),t(e_2),t(e_3)$ are local or leaf edges, or at least two of them are global edges that are not leaf edges. Either case gives a contradiction. It follows that each edge in $A^*$ is mapped to by at most two edges, as needed. \end{proof} \begin{proof} [Proof of (\ref{III})] Let $e^* \in A_2^*$, and let $P$ be the path of tree edges covered by $e^*$. By definition, there is an edge $t=\{u,p(u)\} \in E_2$ that is covered by $e^*$, and the subtree $T_u$ rooted at $u$ is a path which is covered by an edge $e_1=\{u,v\} \in A_1^*$ where $v$ is a leaf. Note that there is only one edge $e_2 \in A$ such that $t(e_2) \in T_u$, namely the edge $e_2$ that covers $\{v,p(v)\}$: all other edges in $T_u$ are already covered by $e_2$ (it is the maximal edge possible, and in particular covers all tree edges covered by $e_1$), and $e_2$ is mapped to $e_1 \not \in A_2^*$. The only edges that may be mapped to $e^*$ are edges $e$ such that $t(e)$ or $t_2(e)$ is in $P$. There may be at most one edge $e$ mapped to $e^*$ such that $t(e)$ is a local edge, according to Claim \ref{local}. So, if there is another edge $e_3$ mapped to $e^*$ it must be a global edge such that $t_2(e_3)$ is in $P$. Note that $t(e_3)$ cannot be in $P$, otherwise we have a contradiction to Claim \ref{obs1}, and it cannot be in $T_u$ as explained above. Let $v'$ be the first vertex in $P$ on the path $P(e_3)$ from $t(e_3)$ to $t_2(e_3)$. Note that $v' \in V_{>1}$, since it has a child not in $P$ on the path $P(e_3)$ and another child in $P$ because it is an ancestor of $u$. In such a case $e_3$ cannot be in $A_2$.
Hence, there is at most one edge in $A_2$ that is mapped to each edge in $A_2^*$, as needed. \end{proof} This completes the proof of Lemma \ref{lem2}. \end{proof} \uTAPtwo* \begin{proof} By Lemma \ref{lem1} and Lemma \ref{lem2}, we have: $$|A|=|A_1 \cup A_2| \leq 2|A_1^*|+2|A_2^*|+2|A_3^*| - 1 \leq 2|A^*|.$$ Hence, $A$ is an augmentation in $G'$ whose size is at most twice the size of an optimal augmentation in $G'$. It corresponds to an augmentation in $G$ whose size is at most 4 times the size of an optimal augmentation in $G$ according to Lemma \ref{corr}. The running time is $O(D+\sqrt{n}\log^*{n})$ rounds by Lemma \ref{time4}. \end{proof} \section{Lower Bounds} \label{sec:lower} \subsection{An $\Omega(D)$ Lower Bound for TAP in the LOCAL model} \label{sec:app_lproof} We show that TAP is a global problem, which admits a lower bound of $\Omega(D)$ rounds, even in the LOCAL model where the size of messages is unbounded. In the LOCAL model, a vertex can learn in $r$ rounds its $r$-neighborhood, which consists of all the vertices and edges at distance at most $r$ from it. In addition, if the $r$-neighborhood of a vertex is the same in two different graphs, the vertex cannot distinguish between them in any algorithm that takes at most $r$ rounds. Based on this, we show the following. \local* \begin{proof} Let $k$ be an even integer, and consider the graph $G_1$ that consists of a path $P$ of $n=2k+1$ vertices $\{v_0,v_1,...,v_{2k}\}$, and the additional edges $\{v_{2i},v_{2(i+1)}\}$ for $0 \leq i < k$. Consider also the graph $G_2$ obtained from $G_1$ by adding the edge $\{v_0,v_{2k}\}$. Both graphs have diameter $D=\Theta(k)$. Consider an instance of TAP where $T$ is the path $P$ for both graphs $G_1$ and $G_2$. It is easy to verify that an optimal augmentation in $G_1$ includes all the edges $\{v_{2i},v_{2(i+1)}\}$ for $0 \leq i < k$, as this is the only way to cover all the edges. However, in $G_2$ an optimal augmentation includes only the edge $\{v_0,v_{2k}\}$.
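These two optima can be verified by brute force for small $k$ (an illustrative Python sketch of the instance; it is not part of the formal argument):

```python
from itertools import combinations

def min_augmentation_size(k, with_long_edge):
    """Brute-force the smallest set of non-tree edges covering every edge of
    the path v_0,...,v_{2k} (the tree T of the instance).  Tree edge j stands
    for {v_j, v_{j+1}}; a non-tree edge (a, b) covers tree edges a..b-1."""
    candidates = [(2 * i, 2 * i + 2) for i in range(k)]   # the edges of G_1
    if with_long_edge:
        candidates.append((0, 2 * k))                     # the extra edge of G_2
    need = set(range(2 * k))
    for size in range(1, len(candidates) + 1):
        for sub in combinations(candidates, size):
            if need <= {j for (a, b) in sub for j in range(a, b)}:
                return size

assert min_augmentation_size(4, with_long_edge=False) == 4  # G_1: all k shortcuts
assert min_augmentation_size(4, with_long_edge=True) == 1   # G_2: only {v_0, v_{2k}}
```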
Note that the $(\frac{k}{2}-1)$-neighborhood of $v_k$ is the same in both $G_1$ and $G_2$, so $v_k$ cannot distinguish between them in any algorithm that takes at most $\frac{k}{2}-1$ rounds. Hence, $v_k$ must have the same output in both cases. However, in $G_1$, both of the edges $\{v_{k-2},v_k\}$, $\{v_k,v_{k+2}\}$ are included in an optimal augmentation, and in $G_2$ they are not, so any distributed algorithm that solves TAP \emph{exactly} must take $\Omega(\frac{k}{2}-1)=\Omega(D)$ rounds. This lower bound holds also for approximation algorithms for the weighted problem: assign weight $1$ to the edge $\{v_0,v_{2k}\}$ and weight $\alpha + 1$ to the edges $\{v_{2i},v_{2(i+1)}\}$ for $0 \leq i < k$. Any algorithm that adds at least one of the edges $\{v_{2i},v_{2(i+1)}\}$ to the augmentation produces an augmentation of weight at least $\alpha + 1$, and hence is not an $\alpha$-approximation to weighted TAP. Therefore, any distributed $\alpha$-approximation algorithm for weighted TAP must take $\Omega(D)$ rounds. A similar proof shows that approximating unweighted TAP takes $\Omega(D)$ rounds for appropriate values of $\alpha$. In the unweighted case, an algorithm that adds all the edges $\{v_{2i},v_{2(i+1)}\}$ for $0 \leq i < k$ gives a $k$-approximation: this solution is optimal in $G_1$, and is $k$ times the optimum in $G_2$. However, if we want a better approximation we need $\Omega(D)$ rounds. Assume that $c>1$ is a constant and we want an $\alpha$-approximation where $\alpha < \frac{k}{c}=\frac{n-1}{2c}$. Consider the $\left \lceil \frac{k}{c} \right \rceil$ edges $\{v_{2i},v_{2(i+1)}\}$ that are closest to $v_k$. Each of the vertices on these edges is at distance $\Omega(k)=\Omega(D)$ from the vertices $v_0,v_{2k}$. Hence, they cannot distinguish between $G_1,G_2$ in less than $\Omega(D)$ rounds. It follows that any distributed $\alpha$-approximation algorithm for unweighted TAP must take $\Omega(D)$ rounds.
\end{proof} \subsection{A Lower Bound for weighted TAP in the CONGEST model} \label{sec:app_congest} By Theorem \ref{local-lb}, when $h=O(D)$ our algorithms {$A_{TAP}$}, {$A_{wTAP}$} are optimal up to a constant factor. But what about the case of $h=\omega(D)$ in the CONGEST model? We next show a family of graphs with $h=\omega(D)$ and $h=O(\sqrt{n})$, in which $\Omega(h)$ rounds are needed in order to approximate weighted TAP. The lower bound is proven using a reduction from the 2-party set-disjointness problem, in which there are two players, Alice and Bob. Each player gets a binary input string of length $k$: $a=(a_1,...,a_k), b=(b_1,...,b_k)$, and the players have to decide whether their inputs are disjoint, i.e., whether there is no index $i$ such that $a_i = b_i = 1$. It is known that in order to solve this problem, Alice and Bob have to exchange at least $\Omega(k)$ bits, even when using randomized protocols \cite{razborov1992distributional}. Our construction is based on a construction presented in \cite{sarma2012distributed, elkin2006unconditional}. In order to use this construction for showing lower bounds for TAP, we add to it additional parallel edges\footnote{We also show a construction with no parallel edges.} and give weights to the edges in such a way that all the edges of the input tree $T$ can be covered by parallel edges of weight 0, except for $k$ edges, $\{e_i\}_{i=1}^k$. The edge $e_i$ may be covered either by a corresponding parallel edge $e_i^A$, or by a distant edge $e_i^B$ that closes a cycle that contains $e_i$. However, the weights of the edges $e_i^A$ and $e_i^B$ depend on the $i$'th bit in the input strings of Alice and Bob, such that there is a light edge that covers $e_i$ if and only if this bit equals 0 in at least one of the input strings. It follows that all the $k$ edges can be covered by light edges if and only if the input strings of Alice and Bob are disjoint. We next describe the construction.
We start by presenting a construction that includes parallel edges, and later explain how to change it to a similar construction that does not include parallel edges. \subsubsection{Construction with Parallel Edges} We follow the constructions presented in \cite{sarma2012distributed, elkin2006unconditional}. Let $G_1=G(k,d,p)$ be a graph that consists of $k$ paths $P_1,...,P_k$ of length $d^p$, where the vertices on the path $P_i$ are denoted by $v^i_j$, for $0 \leq j \leq d^p-1$, and a tree $S$ of depth $p$, where each internal vertex has degree $d$, so it has $d^p$ leaves denoted by $u_j$, for $0 \leq j \leq d^p-1$. In addition, there is an edge between $u_j$ and $v^i_j$ for $1 \leq i \leq k$, $0 \leq j \leq d^p-1$. Let $G_2$ be a weighted graph with the same structure as $G_1$, and with parallel edges on the paths and in the tree. That is, there are two parallel edges between $v^i_j$ and $v^i_{j+1}$, for $0 \leq j<d^p-1$, and there are two parallel edges between each parent and each of its $d$ children in $S$. All of the above parallel edges have weight $0$. In addition, there are two parallel edges between $u_0$ and $v^i_0$, one of them with weight $0$. The edges between $u_j$ and $v^i_j$ for $0 < j < d^p-1$ have weight $x=\alpha k+1$. Given two binary input strings of length $k$: $a=(a_1,...,a_k), b=(b_1,...,b_k)$, the second edge between $u_0$ and $v^i_0$ has weight $x$ if $a_i=1$ and weight $1$ otherwise. Similarly, the edge between $u_{d^p-1}$ and $v^i_{d^p-1}$ has weight $x$ if $b_i=1$ and weight $1$ otherwise. The input to the TAP problem is the graph $G_2$ with a spanning tree $T_{G_2}$ rooted at $r=u_0$ (see Figure \ref{pic2}). $T_{G_2}$ includes one copy of all the path edges, one copy of all the edges of $S$, and the edges of weight $0$ between $r=u_0$ and $v^i_0$ for $1 \leq i \leq k$.
Since we can cover all the path edges and the edges of $S$ by their parallel edges of weight $0$, in order to cover all tree edges in $T_{G_2}$ optimally we need to cover the edges between $r$ and $v^i_0$ optimally. \setlength{\intextsep}{0pt} \begin{figure}[h] \centering \setlength{\abovecaptionskip}{-10pt} \setlength{\belowcaptionskip}{2pt} \includegraphics[scale=0.35]{lower_bound2.pdf} \caption{The structure of the graph $G_2$. The edges of $T_{G_2}$ are marked with solid lines, other edges are marked with dashed lines.} \label{pic2} \end{figure} \begin{claim} \label{disj} The cost of an optimal augmentation is $k$ if the input strings $a$ and $b$ are disjoint, and it is at least $x=\alpha k+1$ otherwise. \end{claim} \begin{proof} In order to cover the tree edge $\{r,v^i_0\}$ we can use any other edge between $u_j$ and $v^i_j$. Each such edge has weight $x$ unless at least one of $a_i$ or $b_i$ is equal to $0$, in which case the second edge between $r=u_0$ and $v^i_0$ or the edge between $u_{d^p-1}$ and $v^i_{d^p-1}$ has weight $1$. These are the only edges that cover the tree edge $\{r,v^i_0\}$. All the other edges in $T_{G_2}$ can be covered by parallel edges of weight $0$. It follows that if $a$ and $b$ are disjoint then we can cover all the edges in $T_{G_2}$ with cost $k$; otherwise the cost is at least $x$, because we need at least one edge of weight $x$. \end{proof} By Claim \ref{disj}, an $\alpha$-approximation algorithm that computes the weight of an optimal augmentation on the graph $G_2$ with spanning tree $T_{G_2}$ can be used in order to solve the set-disjointness problem: if the input strings are disjoint the weight of an optimal augmentation is $k$, in which case the output of the algorithm is at most $\alpha k$. Otherwise, the output of the algorithm is at least $x=\alpha k + 1$.
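The gap underlying this reduction can be checked exhaustively for small $k$ (an illustrative Python sketch that encodes the cost characterization from the proof of Claim \ref{disj}; the encoding of inputs as bit tuples is our own):

```python
from itertools import product

def optimal_cover_weight(a, b, alpha):
    """Weight of an optimal augmentation in G_2, per Claim disj: every tree
    edge except the k edges {r, v^i_0} is covered for free, and {r, v^i_0}
    costs 1 if a_i = 0 or b_i = 0, and x = alpha*k + 1 otherwise."""
    k = len(a)
    x = alpha * k + 1
    return sum(1 if (ai == 0 or bi == 0) else x for ai, bi in zip(a, b))

def disjoint(a, b):
    return all(not (ai and bi) for ai, bi in zip(a, b))

k, alpha = 4, 3
for a in product([0, 1], repeat=k):
    for b in product([0, 1], repeat=k):
        w = optimal_cover_weight(a, b, alpha)
        # disjoint inputs cost exactly k <= alpha*k; otherwise at least
        # alpha*k + 1, so any alpha-approximation separates the two cases
        assert (w == k) == disjoint(a, b)
        assert disjoint(a, b) or w >= alpha * k + 1
```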
Note that if $A$ is a distributed $\alpha$-approximation algorithm for weighted TAP that takes $R$ rounds, then there is a distributed $\alpha$-approximation algorithm $A_1$ for computing the weight of the optimal augmentation that completes in $O(R+D)$ rounds, where at the end of $A_1$ all the vertices know the weight of an optimal augmentation. This is done by having $A_1$ simulate $A$ and then collect the weight of the augmentation over a BFS tree and distribute it to all the vertices. Since $R=\Omega(D)$ by Theorem \ref{local-lb}, it follows that the time complexity of $A_1$ is $O(R)$ rounds, so a lower bound on the time complexity of $A_1$ gives a lower bound on the time complexity of $A$. Our algorithms work in the CONGEST model, where the maximal message size is bounded by $\Theta(\log{n})$ bits; however, the proof of the lower bound is based on the proof in \cite{sarma2012distributed}, which works in a more general model where the maximal message size is bounded by $B$ bits. Hence, the lower bound we show holds for this generalized model as well. \begin{claim} \label{sim} If there is a distributed (even randomized) $\alpha$-approximation algorithm for computing the weight of an optimal augmentation in $G_2$ that has time complexity of $R$ rounds where $R \leq \frac{d^p-1}{2}$, then set-disjointness can be solved by exchanging $O(dpBR)$ bits. \end{claim} \begin{proof} The proof of the claim follows from the proof of Theorem 3.1 in \cite{sarma2012distributed}, in which it is shown how Alice and Bob can simulate a distributed algorithm on the graph $G_1$ by exchanging at most $2dpBR$ bits, where at the end of the simulation each player knows the output of one of the vertices $r, u_{d^p-1}$. In the algorithm for computing the weight of an optimal augmentation all the vertices know the weight at the end, so it is enough that each of Alice and Bob knows the output of one vertex.
Note that the graphs $G_1$ and $G_2$ have the same structure, but in $G_2$ there may be two parallel edges between vertices $v,u$ that have only one edge between them in $G_1$. It follows that $v,u$ can exchange $2B$ bits between them in each direction in a round, instead of $B$ bits. Therefore, in order to simulate a distributed algorithm on $G_2$, Alice and Bob can use the same simulation but may need to exchange twice as many bits in order to simulate one round, and $4dpBR$ bits for the whole simulation, which is still $O(dpBR)$ bits, as claimed. At the end of the simulation, both Alice and Bob know an $\alpha$-approximation to the weight of an optimal augmentation, and can deduce whether their input strings are disjoint according to Claim \ref{disj}. \end{proof} \begin{theorem} (equivalent to Theorem 7.1 in \cite{sarma2012distributed}) \label{lowerbound2} For any polynomial function $\alpha(n)$, integers $p > 1$, $B \geq 1$ and $n \in \{2^{2p+1}pB, 3^{2p+1}pB,...\}$, there is a $\Theta(n)$-vertex graph of diameter $2p+2$ for which any (even randomized) distributed $\alpha(n)$-approximation algorithm for weighted TAP with an instance tree $T \subseteq G$ of height $h$ requires $\Omega((n/(pB))^{\frac{1}{2}-\frac{1}{2(2p+1)}})$ rounds, which is $\Omega(h)$. \end{theorem} \begin{proof} By Claim \ref{sim} and the lower bound on set-disjointness \cite{razborov1992distributional} we have $R=\Omega(\min(d^p,\frac{k}{dpB}))$. Choosing $k=d^{p+1}pB$ gives $\Omega(\min(d^p,\frac{k}{dpB}))=\Omega(d^p)$. As in \cite{sarma2012distributed,elkin2006unconditional}, $G_1$ and $G_2$ have $n=\Theta(kd^p)=\Theta(d^{2p+1}pB)$ vertices and diameter $2p+2$. In addition, $h = d^p + 1$, since the height of $T_{G_2}$ is determined by the length of the paths. Hence, we have $R = \Omega(d^p) = \Omega(h)$ where $h = \Theta(d^p) = \Theta((n/(pB))^{\frac{1}{2}-\frac{1}{2(2p+1)}})$. \end{proof} Choosing $B=p=\Theta(\log{n})$ in Theorem \ref{lowerbound2} gives the following.
\congest* \subsubsection{Construction without Parallel Edges} We next explain how to modify the above construction to avoid parallel edges. We define $G_3$ as follows: \begin{itemize} \item If there is a single edge between the vertices $v$ and $u$ in $G_2$, then this edge is in $G_3$ and has the same weight as it has in $G_2$. \item For every pair of vertices $v,u$ which have two parallel edges between them in $G_2$, we add in $G_3$ a new vertex $vu$ and replace one of the two parallel edges between $v$ and $u$ which has weight $0$ by two edges $\{v,vu\}$ and $\{vu,u\}$, both with weight $0$.\footnote{Notice that at least one of the two parallel edges indeed has weight $0$.} \end{itemize} The tree $T_{G_3}$ in the TAP problem in $G_3$ is constructed according to the tree $T_{G_2}$ in $G_2$, such that if $\{v,u\}$ is a tree edge in $T_{G_2}$, then $\{v,vu\},\{vu,u\}$ are tree edges in $T_{G_3}$. Note that the edge $\{v,u\}$ covers both $\{v,vu\},\{vu,u\}$. Since all the edges on the paths and in the tree $S$ in $G_2$ have weight 0, all the edges on the corresponding paths and tree $S_{G_3}$ in $T_{G_3}$ can be covered by edges of weight $0$. In order to cover all tree edges in $T_{G_3}$ optimally we need to cover the edges $\{r,rv^i_0\},\{rv^i_0,v^i_0\}$ optimally. Similarly to the case in $G_2$, we can cover them by any one of the edges between $u_j$ and $v^i_j$. All those edges have weight $x$ unless at least one of $a_i$ or $b_i$ is equal to $0$, so Claim \ref{disj} holds for $G_3$ as well. If $n$ is the number of vertices in $G_2$, then in $G_3$ the number of vertices is $2n-1=\Theta(n)$ because we add one vertex for each edge of $T_{G_2}$ (the parallel edges in $G_2$ are only on the tree $T_{G_2}$). Similarly, the height of $T_{G_3}$ is $2h=\Theta(h)$ where $h$ is the height of $T_{G_2}$, and the diameter of $G_3$ is $\Theta(D)$ where $D$ is the diameter of $G_2$. 
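The transformation from $G_2$ to $G_3$ can be sketched as follows (an illustrative Python fragment; the edge-list representation and the naming of subdivision vertices are our own choices):

```python
from collections import defaultdict

def remove_parallel_edges(edges):
    """Given a list of weighted edges (v, u, w) with at most two parallel
    copies per pair, keep single edges as-is; for a parallel pair, keep one
    copy and subdivide a weight-0 copy with a fresh vertex 'vu', as in the
    G_3 construction."""
    groups = defaultdict(list)
    for v, u, w in edges:
        groups[tuple(sorted((v, u)))].append(w)
    out = []
    for (v, u), weights in groups.items():
        if len(weights) == 1:
            out.append((v, u, weights[0]))
        else:
            weights.sort()                  # at least one parallel copy has weight 0
            out.append((v, u, weights[1]))  # keep the other copy unchanged
            mid = v + u                     # fresh subdivision vertex 'vu'
            out.append((v, mid, 0))
            out.append((mid, u, 0))
    return out

simple = remove_parallel_edges([("a", "b", 0), ("a", "b", 5), ("b", "c", 1)])
assert sorted(simple) == [("a", "ab", 0), ("a", "b", 5),
                          ("ab", "b", 0), ("b", "c", 1)]
```

Note how the number of vertices and the tree height at most double, matching the $\Theta(n)$ and $\Theta(h)$ bounds stated above.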
If $A$ is an $\alpha$-approximation algorithm for weighted TAP that takes $R$ rounds in $G_3$, then there is an $\alpha$-approximation algorithm $A_1$ for weighted TAP that takes $R$ rounds in $G_2$. $A_1$ simulates $A$: all the vertices that are both in $G_2$ and in $G_3$ simulate themselves. For each vertex $vu$, one of the vertices $v,u$ simulates $vu$; assume w.l.o.g.\ that $v$ simulates $vu$. Note that there are two parallel edges between $v$ and $u$ in $G_2$. One of them is used in order to simulate the messages sent on the edge $\{v,u\}$ in $A$, and the other is used in order to simulate the messages sent on the edge $\{vu,u\}$ in $A$. Note that there is no need for communication in order to simulate messages sent on the edge $\{v,vu\}$ because the vertex $v$ simulates both $v,vu$. It follows that the simulation of $A$ in $G_2$ takes $R$ rounds. In addition, from the correspondence between $G_2$ and $G_3$, any augmentation in $G_3$ is an augmentation in $G_2$, and vice versa. The above implies that the lower bound holds for $G_3$ (which has no parallel edges) as well, and hence Theorem \ref{lowerbound2} holds also for simple graphs. \section{Discussion} \label{sec:dis} In this paper, we present the first distributed approximation algorithms for TAP. Many intriguing problems remain open. First, can we get efficient distributed algorithms for TAP with an approximation ratio better than 2? In the sequential setting, achieving an approximation better than 2 for weighted TAP is a central open question. However, there are several recent algorithms achieving better approximations for unweighted TAP \cite{kortsarz2016simplified, cheriyan2015approximating, DBLP:conf/stoc/0001KZ18} or for weighted TAP with bounded weights \cite{DBLP:journals/corr/FioriniGKS17, adjiashvili2017beating}.
Second, there are many additional connectivity augmentation problems, such as increasing the edge connectivity from $k$ to $k+1$ or to some function $f(k)$, as well as augmentation for increasing the vertex connectivity. Such problems have been widely studied in the sequential setting, and a natural question is to design distributed algorithms for them. Finally, it is interesting to study TAP and additional connectivity problems also in other distributed models, such as the dynamic model where edges or vertices may be added or removed from the network during the algorithm. An interesting question is how to maintain highly-connected backbones when the network can change dynamically. \bibliographystyle{spmpsci}
\section*{Keywords} Spherical symmetry; loop quantum gravity; black holes; singularity resolution. \section{Introduction} The study of situations of high symmetry in which one first applies the symmetry to the classical theory and then proceeds to quantize has proven a valuable tool to probe potential regimes of quantum gravity in a scenario where detailed, well controlled, calculations are possible. A prime example of this is {\em loop quantum cosmology} (LQC) \cite{lqc}, where spatial homogeneity is imposed before quantization. It has led to several attractive insights, like the elimination of the Big Bang singularity, even though it implies a radical reduction of the degrees of freedom of the theory (from infinitely many to a finite number). It is a natural progression to attempt to consider situations with less symmetry. In that context, spherically symmetric space-times appear as an attractive scenario since they include the important case of black holes. And although general relativity with spherical symmetry does not have field-theoretic degrees of freedom on-shell, initially the treatment resembles that of a situation with infinitely many degrees of freedom. In particular, the constraints of the theory do not form a Lie algebra, but an algebra with structure functions, as in the full theory. This can be a significant impediment to completing the Dirac quantization of the theory, as there are several well known obstacles that the structure functions introduce \cite{FrJa}. It therefore came as a welcome surprise when it was noted that a rescaling \cite{GaPuprl,GaOlPu} of the constraints can actually turn them into a Lie algebra. We are not aware of any deep reason for the emergence of this possibility for these models. In particular, it does not seem to survive the inclusion of matter. But nevertheless it allows one to complete the Dirac quantization and discuss interesting properties in the vacuum case.
In this manuscript we will review our work on spherical symmetry, which is based on the use of inhomogeneous slices that may penetrate horizons when they are present and become homogeneous inside. There are other approaches to spherical symmetry that focus on the interiors of black holes exploiting the isometry of the Schwarzschild interior with the Kantowski--Sachs space-times, and some consider extensions to the exterior. There is a significant literature on the subject (see \cite{aos} and references therein) and we will dedicate a section to it at the end of this chapter. A separate review of our work is presented in \cite{javier}. \section{New variables for gravity with spherical symmetry} Ashtekar's new variables cast general relativity in terms of quantities that resemble the variables of an $SU(2)$ Yang--Mills theory. Spherically symmetric configurations in that theory were already considered in the 1970's by Cordero and Teitelboim \cite{CoTe}. Their results can be adapted to the context of Ashtekar's variables and were discussed in detail by Bojowald and Swiderski \cite{BoSw}. We will not conduct a full review of the spherical reduction here but just introduce the resulting variables and their connection to the traditional metric variables. One is left with the ``radial'' and ``transverse'' components of the triad, $E^x$ and $E^\varphi$, respectively. We call the radial variable $x$ so as not to prejudge a particular choice of radial coordinate, like isotropic or Schwarzschild. Their canonically conjugate momenta are denoted by $K_x$ and $K_\varphi$, respectively, with Poisson brackets, \begin{align}\nonumber\label{eq:poiss} &\{K_x(x),E^x(\tilde x)\}=G\delta(x-\tilde x),\\ &\{{K}_\varphi(x),E^\varphi(\tilde x)\}=G\delta(x-\tilde x), \end{align} with $G$ Newton's constant. We set the Immirzi parameter to one.
The relationship with the traditional metric variables is, \begin{eqnarray} g_{xx}&=&\frac{(E^\varphi)^2}{|E^x|},\\ g_{\theta\theta}&=&|E^x|,\\ K_{xx}&=& -\frac{2 K_x \left(E^\varphi\right)} {\sqrt{|E^x|}},\\ K_{\theta\theta}&=&-\sqrt{|E^x|}K_\varphi. \end{eqnarray} The use of symmetry adapted variables obviously implies working in a restricted set of coordinates (gauges). This eliminates the Gauss law usually present in the Ashtekar formulation. One is left with one diffeomorphism constraint in the radial variable and the Hamiltonian constraint. In terms of the variables we are considering they take the form, \begin{subequations} \begin{align} & D:=G^{-1}[E^\varphi K_\varphi'-(E^x)' K_x]\,,\label{eq:difeo}\\ \nonumber &H :=G^{-1}\left\{\frac{\left[(E^x)'\right]^2}{8\sqrt{E^x}E^\varphi} -\frac{E^\varphi}{2\sqrt{E^x}} - 2 K_\varphi \sqrt{E^x} K_x -\frac{E^\varphi K_\varphi^2}{2 \sqrt{E^x}}\right.\\ &\left.-\frac{\sqrt{E^x}(E^x)' (E^\varphi)'}{2 (E^\varphi)^2} + \frac{\sqrt{E^x} (E^x)''}{2 E^\varphi}\right\}\,,\label{eq:scalar1} \end{align} \end{subequations} where prime is the derivative with respect to the radial coordinate $x$. These expressions are valid for $E^x>0$. They can be extended to the full real axis by substituting $E^x$ by $\vert E^x\vert$. These constraints have the same algebra as in the full theory; in particular, the Poisson bracket of two Hamiltonian constraints is proportional to the diffeomorphism constraint, and the proportionality factor is a structure function that involves the metric. It is therefore not a Lie algebra, and one faces the same well known difficulties in promoting it to a quantum algebra of self-adjoint operators as in the full theory, mentioned before \cite{FrJa}. It should be noted, however, that the variable $K_x$ appears undifferentiated in both the Hamiltonian and diffeomorphism constraints. This suggests the possibility of eliminating it.
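As a quick consistency check (ours, not from the references), static Schwarzschild data $E^x=x^2$, $E^\varphi=x/\sqrt{1-2GM/x}$, $K_x=K_\varphi=0$ should annihilate the scalar constraint (\ref{eq:scalar1}); the diffeomorphism constraint vanishes trivially for vanishing extrinsic curvature. A minimal stdlib sketch, with $G=1$:

```python
import math

# Numerical sanity check: static Schwarzschild data solve H = 0 (G = 1).
M = 1.0

def Ex(x):   return x * x
def Ephi(x): return x / math.sqrt(1.0 - 2.0 * M / x)

def d(f, x, h=1e-5):   # central first derivative
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d2(f, x, h=1e-4):  # central second derivative
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

def H(x):              # K_x = K_phi = 0 terms of the scalar constraint
    sEx = math.sqrt(Ex(x))
    return (d(Ex, x) ** 2 / (8.0 * sEx * Ephi(x))
            - Ephi(x) / (2.0 * sEx)
            - sEx * d(Ex, x) * d(Ephi, x) / (2.0 * Ephi(x) ** 2)
            + sEx * d2(Ex, x) / (2.0 * Ephi(x)))

for x in (3.0, 5.0, 10.0):     # points outside the horizon x = 2GM
    assert abs(H(x)) < 1e-5, (x, H(x))
```

The check passes identically at any $x>2GM$, which is just the statement that the Schwarzschild exterior is a solution of the constraints in these variables.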
So, performing the linear combination,\begin{equation}\label{eq:hnew} H_{\rm new} :=\frac{\left(E^x\right)'}{E^\varphi}H-2\frac{\sqrt{E^x}}{E^\varphi}K_\varphi H_r= -\frac{1}{G}\left[ \sqrt{E^x}\left(1-\frac{[(E^x)']^2 }{4 (E^\varphi)^2}+K_\varphi^2\right)\right]', \end{equation} where $H_r$ denotes the diffeomorphism constraint (\ref{eq:difeo}), one is left with a Hamiltonian constraint $H_{\rm new}$ that has vanishing Poisson bracket with itself and has the usual Hamiltonian/diffeomorphism Poisson bracket. The algebra of constraints therefore becomes a Lie algebra, opening the possibility of promoting the constraints to self-adjoint operators. The linear combination is equivalent to redefining the lapse and the shift, \begin{equation} N^{\rm new}_r:= N_r -2 N\frac{K_\varphi\sqrt{E^x}}{\left(E^x\right)'},\quad N_{\rm new} := N \frac{E^\varphi}{\left(E^x\right)'}. \end{equation} We will find it more convenient to work with the smeared version of the Hamiltonian constraint. This requires some care with the falloff of the various quantities at the edges of the manifold considered. This was discussed by Kucha\v{r} \cite{kuchar} in terms of the traditional variables, and the use of the Ashtekar new variables does not add to this discussion (apart from changes in notation), so we will not repeat it here. Details can be found in \cite{javier}. The final result, after an integration by parts, is, \begin{equation}\label{eq:H_new-den} \tilde H(\tilde N) :=\frac{1}{G}\int dx {N}_{\rm new}' \sqrt{E^x}\bigg[ K_\varphi^2-\frac{[(E^x)']^2}{4 (E^\varphi)^2}+\left(1-\frac{2 G M}{\sqrt{E^x}}\right)\bigg], \end{equation} and one has an additional pair of canonical variables at spatial infinity given by the proper time there and the $ADM$ mass. In the quantum case, where the singularity is removed, one may have slices with two asymptotic regions, one outside and one inside the horizon. In that case there will be a pair of canonical variables associated with each of the asymptotic infinities.
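The fact that the combination defining $H_{\rm new}$ is a total $x$-derivative for {\em arbitrary} field configurations (which is why $\{H_{\rm new},H_{\rm new}\}=0$) can be verified numerically. The sketch below is ours, with $G=1$, arbitrary smooth test fields, and finite-difference derivatives:

```python
import math

# Numerical check that (Ex'/Ephi) H - 2 (sqrt(Ex)/Ephi) Kphi D equals
# -[ sqrt(Ex) (1 - Ex'^2/(4 Ephi^2) + Kphi^2) ]'  for arbitrary fields.
def Ex(x):   return x * x + 1.0          # arbitrary smooth, positive field
def Ephi(x): return 2.0 + 0.5 * math.cos(x)
def Kphi(x): return 0.3 * math.sin(x)
def Kx(x):   return math.cos(x)

def d(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d2(f, x, h=1e-4):
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

def H(x):    # scalar constraint, Eq. (eq:scalar1), G = 1
    sEx = math.sqrt(Ex(x))
    return (d(Ex, x) ** 2 / (8.0 * sEx * Ephi(x))
            - Ephi(x) / (2.0 * sEx)
            - 2.0 * Kphi(x) * sEx * Kx(x)
            - Ephi(x) * Kphi(x) ** 2 / (2.0 * sEx)
            - sEx * d(Ex, x) * d(Ephi, x) / (2.0 * Ephi(x) ** 2)
            + sEx * d2(Ex, x) / (2.0 * Ephi(x)))

def D(x):    # diffeomorphism constraint, Eq. (eq:difeo)
    return Ephi(x) * d(Kphi, x) - d(Ex, x) * Kx(x)

def lhs(x):
    return d(Ex, x) / Ephi(x) * H(x) \
        - 2.0 * math.sqrt(Ex(x)) / Ephi(x) * Kphi(x) * D(x)

def bracket(x):   # the quantity inside [...]' on the right-hand side
    return math.sqrt(Ex(x)) * (1.0 - d(Ex, x) ** 2 / (4.0 * Ephi(x) ** 2)
                               + Kphi(x) ** 2)

for x in (0.7, 1.3, 2.1):
    assert abs(lhs(x) + d(bracket, x, h=1e-4)) < 1e-4, x
```

The identity holds pointwise, before imposing any equations of motion, which is the key to the Abelianization.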
\section{Quantization: kinematics} Let us proceed to the quantization. We need to take into account the extra variables at infinity we mentioned in the last section. For them we will just consider a traditional quantization based on square integrable functions of the $ADM$ mass $M$. For the other variables we will proceed with a loop-like quantization. It is natural to consider one dimensional spin networks, graphs consisting of edges joining vertices along the radial direction. The variable $K_x$ is proportional to the connection $A_x$, so one can associate a traditional holonomy along the edges. The variable $K_\varphi$ is a scalar and therefore is naturally associated with vertices. So for a given graph $g$ we will consider a basis of states, \begin{align*} |\vec{\mu},\vec{k}\rangle=\mbox{ \begin{picture}(200,15)(0,0) \put(0,5){\line(1,0){200}} \put(50,5){\circle*{5}} \put(100,5){\circle*{5}} \put(150,5){\circle*{5}} \put(50,-3){\makebox(0,0){$\mu_{j-1}$}} \put(100,-3){\makebox(0,0){$\mu_j$}} \put(150,-3){\makebox(0,0){$\mu_{j+1}$}} \put(25,10){\makebox(0,0){$\cdots$}} \put(75,12){\makebox(0,0){$k_j$}} \put(125,12){\makebox(0,0){$k_{j+1}$}} \put(175,10){\makebox(0,0){$\cdots$}} \end{picture}}, \end{align*} that can be translated into the connection representation as, \begin{align}\label{eq:kin-graph} &T_{g,\vec{k},\vec{\mu}}(K_x,K_\varphi) =\prod_{e_j\in g} \exp\left(i {k_{j}} \int_{e_j} dx\,K_x(x)\right) \prod_{v_j\in g} \exp\left(i {\mu_{j}} K_\varphi(v_j) \right). \end{align} The labels $k_j$ are integers and correspond to the ``color'' of the edges $e_j$ of the spin network associated with the graph $g$. The labels $\mu_j$ are real.
So the kinematical Hilbert space we choose will be given by the direct product of the square integrable functions of the ADM mass $M$ with the Hilbert space of square summable functions corresponding to the holonomies of $K_x$ times the Hilbert space of square integrable functions of the Bohr compactification \cite{Ashtekar:2006rx} of the $K_\varphi$'s. This comes naturally endowed with an Ashtekar--Lewandowski-like inner product \cite{asle}, \begin{align} \langle\vec{k},\vec{\mu},M|\vec{k}',\vec{\mu}',M' \rangle=\delta_{\vec{k},\vec{k}'}\delta_{\vec{\mu},\vec{\mu}'}\delta(M-M')\;.\label{11} \end{align} On the kinematical Hilbert space the operator associated with the ADM mass acts multiplicatively, and the action of the triads is also straightforward: they act multiplicatively as well, \begin{align} &{\hat{M} } |g,\vec{k},\vec{\mu},M\rangle = M |g,\vec{k},\vec{\mu},M\rangle,\\ &{\hat{E}^x(x) } |g,\vec{k},\vec{\mu},M\rangle = \ell_{\rm Pl}^2 k_j(x) |g,\vec{k},\vec{\mu},M\rangle,\label{13} \\ &\hat{E}^\varphi(x) |g,\vec{k},\vec{\mu},M\rangle = \ell_{\rm Pl}^2 \sum_{v_j\in g} \delta\big(x-x_j\big)\mu_j |g,\vec{k},\vec{\mu},M\rangle, \end{align} where $k_j(x)$ is the valence of the edge that includes the point $x$. If it coincides with a vertex, we take the edge to the right. We denote the position of the vertex $v_j$ as $x_j$. We see that $\hat{E}^\varphi(x)$ on this basis of states acts as a distribution, due to the fact that classically it is a scalar density. We will concentrate on states based on a single graph, but superpositions with different graphs can also be considered; this just leads to more complex expressions. Superpositions were found to be relevant in the context of the fermion doubling problem \cite{doubling}.
Concerning $K_\varphi(x)$, the only connection component that is present in the scalar constraint, the representation adopted for it will be in terms of point holonomies of length $\rho$, whose operators are, \begin{equation} N^\varphi_{\pm n\rho}(x) |g,\vec{k},\vec{\mu},M\rangle = |g,\vec{k},\vec{\mu}'_{\pm n\rho},M\rangle ,\quad n\in \mathbb{N}, \end{equation} where the new vector $\vec{\mu}'_{\pm n\rho}$ either has the same components as $\vec{\mu}$ except for $\mu_j\to\mu_j\pm n\rho$ if $x$ coincides with a vertex of the graph located at $x_j$, or $\vec{\mu}'_{\pm n\rho}$ will be $\vec{\mu}$ with a new component $\{\ldots,\mu_j,\pm n\rho,\mu_{j+1},\ldots\}$ with $x_{j}<x<x_{j+1}$. One can also construct other geometrical operators of physical interest, at the kinematical level, like the total volume operator, given by \begin{equation}\label{eq:phys-vol} \hat {\cal V}|g,\vec{k},\vec{\mu},M\rangle =4\pi \ell_{\rm Pl}^3\sum_{v_j\in g} \mu_j \sqrt{k_j} |g,\vec{k},\vec{\mu},M\rangle. \end{equation} \section{Dynamics: $\mu_0$ style quantization} To proceed to quantize the Hamiltonian constraint we draw on the experience in loop quantum cosmology. The action of the basic kinematical operators can be viewed as involving an ``LQC at each vertex" of the one-dimensional spin network. Just like in that case, the operator associated with $K_\varphi$ is not a well defined quantity and one needs to ``polymerize'' the expressions involving it, in particular the Hamiltonian constraint, via the substitution $K_\varphi\to\sin\left(\rho K_\varphi\right)/\rho$. Here $\rho$ is a fixed parameter that would correspond to the $\mu_0$ of LQC, and it is assumed to be very small (zero in the classical limit).
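The label bookkeeping of these kinematical operators, and the classical limit of the polymerization, can be illustrated in a few lines (a toy realization of ours, with $\ell_{\rm Pl}=1$, not the paper's formal operator algebra):

```python
import math

# Toy realization: a state is a pair of label lists (k, mu); a point
# holonomy shifts one mu label by n*rho, and the total volume acts
# diagonally as 4*pi*lPl^3 * sum_j mu_j * sqrt(k_j).
lPl = 1.0

def point_holonomy(mu, j, n, rho):
    """N^phi_{+n rho} acting at an existing vertex j."""
    out = list(mu)
    out[j] += n * rho
    return out

def volume(k, mu):
    return 4.0 * math.pi * lPl ** 3 * sum(m * math.sqrt(kk)
                                          for kk, m in zip(k, mu))

k, mu = [1, 4, 9], [0.5, 1.0, 2.0]
mu2 = point_holonomy(mu, 1, 2, 0.1)
assert abs(mu2[1] - 1.2) < 1e-12
# volume eigenvalue: 4*pi*(0.5*1 + 1.0*2 + 2.0*3) = 34*pi
assert abs(volume(k, mu) - 4.0 * math.pi * 8.5) < 1e-12

# polymerization: sin(rho*K)/rho reproduces K as rho -> 0, error O(rho^2)
K = 0.7
for rho in (1e-2, 1e-4):
    assert abs(math.sin(rho * K) / rho - K) < rho * K
```

Note that the volume eigenvalues are discrete in the $k_j$ but continuous in the $\mu_j$, reflecting the Bohr compactification used for $K_\varphi$.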
To promote the Hamiltonian constraint to an operator it is convenient to rescale it, to take a square root (this simplifies finding solutions) and to choose a factor ordering, \begin{equation}\label{17} \hat{H}(N)=\int dx N(x)\left( 2\left\{\sqrt{\sqrt{\hat{E}^x} \left(1 +{\sin^2\left(\rho \hat{K}_{\varphi}\right)}/{\rho^2}\right) -2 G \hat{M}}\right\}\hat{E}^\varphi -\sqrt[4]{\hat{E}^x}\left(\hat{E}^x\right)'\right). \end{equation} The expression is readily promoted to an operator acting on the kinematical Hilbert space. We start with the action on spin network functions, introducing $y_j=K_\varphi(x_j)$, \begin{eqnarray} \hat{H}(N) T_{g,\vec{k},\vec{\mu}}(K_x,{\vec y}) &=&\sum_{v_j\in g} N(v_j)\left(k_j \ell_{\rm Planck}^2\right)^{\frac{1}{4}}\nonumber\\ &&\left[\hat{\Sigma}_j- \left(k_j-k_{j-1}\right)\ell_{\rm Planck}^2\right] T_{g,\vec{k},\vec{\mu}}(K_x,{\vec y}), \end{eqnarray} with, \begin{equation}\hat{\Sigma}_j=2 \sqrt{1+\frac{\sin^2\left(\rho y_j\right)}{\rho^2} -\frac{2G\hat{M}}{\sqrt{k_j \ell_{\rm Planck}^2}}}\ell_{\rm Planck}^2 (-i\partial_{y_j}). \end{equation} Notice that the action of the Hamiltonian constraint keeps the valences $k_j$ invariant. This allows one to restrict its action only to states associated with non-degenerate triads, that is, with non-vanishing $k_j$. This immediately implies the elimination of the singularities associated with degenerate triads; in particular, as we will see in section 6, it eliminates the singularity inside black holes if the metric is to be a self-adjoint operator. To proceed to find solutions of the Hamiltonian constraint we try a solution of the type, \begin{equation} \Psi\left(K_{\varphi},K_{x},g, \vec{k},M\right) = \sum_{v\in g} \sum_{\mu(v)} T_{g,\vec{k},\vec{\mu}}(K_x,K_\varphi) \Psi(\mu(v),M).
\end{equation} where, given (\ref{13}) and (\ref{17}), we will assume the integers $k_i$ satisfy $k_1<k_2<\ldots<k_V$, with $i$ going from $1$ to $V$, in order to avoid unnecessary redundancies in the description. From now on we can omit the $g$ dependence since we only include vertices where $k_j$ changes and therefore all information about the graph is included in the $k_j$'s. The information about the $k_j$'s can be codified in ${\vec k}$ and we will seek solutions with a given $\vec{k}$. It would be straightforward, though more complicated, to consider superpositions with different $k_j$'s, so we will omit them here, but one would expect that generic states would involve them. It turns out that the Hamiltonian constraint can be solved, with the solution given by, \begin{equation}\label{21} \Psi\left(K_{\varphi},K_{x},g,\vec{k},M\right) = \exp\left(f\left(K_{\varphi},g,\vec{k},M\right)\right) \Pi_{e_j\in g} \exp\left( i k_j\int_{e_j} K_{x}(x)dx\right), \end{equation} with \begin{equation} f=\sum_{v_j\in g} -\frac{i}{2} \Delta K_j m_j F\left(\rho K_{\varphi}(v_j), i m_j\right), \end{equation} and, \begin{eqnarray} \Delta K_j&=& k_j-k_{j-1},\\ m_j&=&\left[\rho \sqrt{1-2 G M/(\sqrt{k_{j}}\,\ell_{\rm Planck})}\right]^{-1},\\ F(\phi,m)&=&\int_0^\phi\left(1-m^2 \sin^2 t\right)^{-1/2} dt, \end{eqnarray} with the latter the incomplete elliptic integral of the first kind. These solutions to the constraint are well defined for both the exterior $(m_j^2>0)$ and the interior $(m_j^2<0)$ of the black hole. In particular, they belong to the kinematical Hilbert space in the sense that they have finite norm with respect to the inner product (\ref{11}). See \cite{GaOlPu} for details. To complete the construction of physical states one needs to group average these solutions so that the resulting averaged states are invariant under the transformations generated by the diffeomorphism constraint.
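Since $F$ is just a one-dimensional integral, it is easy to evaluate numerically; the stdlib sketch below (ours, purely illustrative) also makes explicit that the imaginary modulus $i m_j$ in the exterior, where $m_j$ is real, flips the sign of the $\sin^2$ term and keeps the integrand real and bounded:

```python
import math

# Midpoint-rule evaluation of F(phi, m) = int_0^phi (1 - m^2 sin^2 t)^(-1/2) dt.
def F(phi, m2, steps=20000):
    """m2 = m^2 may be negative: the modulus m = i*m_j with m_j real
    gives m2 = -m_j^2, i.e. integrand (1 + m_j^2 sin^2 t)^(-1/2)."""
    h = phi / steps
    return sum(h / math.sqrt(1.0 - m2 * math.sin((i + 0.5) * h) ** 2)
               for i in range(steps))

assert abs(F(0.8, 0.0) - 0.8) < 1e-9   # m = 0 reduces F to the identity
assert F(0.8, 0.25) > 0.8              # real modulus: integrand >= 1
assert 0.0 < F(0.8, -4.0) < 0.8        # imaginary modulus: integrand <= 1
```

The group averaging over diffeomorphisms described above acts only on the positions of the vertices and is insensitive to these numerical values.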
This leads to states that are superpositions of spin networks with vertices at all possible positions along the radial line. The order of the vertices has to be preserved. We will later see that this preservation leads to the appearance of new observables in the quantum theory. One is therefore left with elements of the physical space of states that are well defined functions of $K_x, K_\varphi$ labeled by the $\vec{k}$ and $M$; we will denote them as $\vert \vec{k},M\rangle$, with inner product $\langle\vec{k},M\vert\vec{k'},M'\rangle=\delta_{\vec{k},\vec{k}'}\delta(M-M')$. \section{Parameterized Dirac observables} Physical observables are operators that leave invariant the physical Hilbert space. We will use the technique known as parameterized Dirac observables (or evolving constants of the motion) to construct them. We briefly review it. A parameterized Dirac observable has vanishing Poisson bracket with the constraints and depends on parameters (or, in the case of field theories, functional parameters). They are used to describe the evolution, and more generally, gauge dependent quantities in terms of Dirac observables and parameters in totally constrained systems. A simple example is given by the well-known parameterized free particle in one dimension. The canonical variables are $p_0,q^0$ and $p, q$, where $q^0$ is Newtonian time and $p_0$ its canonical momentum. The constraint is $\phi=p_0+p^2/(2m)$, and examples of independent Dirac observables are $p$ and $x=q-p q^0/m$. A parameterized Dirac observable is $Q(t)=x+p t/m$ and it satisfies $\left\{\phi, Q(t)\right\}=0$. One also has that $Q(t)\vert_{t=q^0}=q$ and $Q(t)\vert_{t=q^0+\Delta t}=q+p\Delta t/m$. Hence, the parametrized Dirac observable $Q(t)$ describes the position of the free particle\footnote{In a quantum theory, there is the question of the meaning of a classical parameter like $t$.
This has led to a rich discussion of the role of real clocks in quantum theory that is beyond the scope of this review \cite{universeus}.}. This is also the case in the quantum theory, where the parametrized Dirac observables can be promoted to operators in the physical space of states that are annihilated by the constraints. These parametrized observables coincide with the natural ones in a Heisenberg picture and, moreover, comparison of the latter with a Schr\"odinger one in \cite{evol} shows explicitly the equivalence between the two pictures (for the relativistic particle). Just as we were able to use a parameterized Dirac observable in the above example to represent the position of the particle (which is not itself a Dirac observable), we will be able to use parameterized Dirac observables to represent quantities like any element of the kinematical space, which are not Dirac observables in general relativity. This point tends to generate confusion, but the above example should make it clear. A parameterized Dirac observable does not have a well defined value until one chooses a phase space variable (in this case $q^0$) or phase space function and identifies it with a (time) parameter. Similarly, a metric component is not well defined until one picks a coordinate system. Picking a parameterization (remember that these are functional parameters in the case of general relativity) therefore corresponds to a choice of space-time coordinates, or slicings of space-time. As we mentioned, the order of the vertices is unchanged by the group averaging procedure. This is associated with the existence of a parameterized quantum Dirac observable that does not have a classical counterpart.
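The free-particle example can be checked directly; the sketch below (ours) verifies numerically that $Q(t)$ Poisson-commutes with the constraint at an arbitrary phase-space point:

```python
# Check that Q(t) = q - p*q0/m + p*t/m satisfies {phi, Q(t)} = 0 for the
# free-particle constraint phi = p0 + p^2/(2m), phase space (q0, p0, q, p).
m, t = 2.0, 1.7

def phi(q0, p0, q, p):
    return p0 + p * p / (2.0 * m)

def Q(q0, p0, q, p):
    return q - p * q0 / m + p * t / m

def pbracket(f, g, z, h=1e-6):
    """{f, g} = sum over pairs (q0, p0), (q, p) of df/dq dg/dp - df/dp dg/dq,
    with partial derivatives computed by central differences."""
    def dz(fn, i):
        zp, zm = list(z), list(z)
        zp[i] += h
        zm[i] -= h
        return (fn(*zp) - fn(*zm)) / (2.0 * h)
    return dz(f, 0) * dz(g, 1) - dz(f, 1) * dz(g, 0) \
         + dz(f, 2) * dz(g, 3) - dz(f, 3) * dz(g, 2)

z = [0.4, -1.1, 2.3, 0.9]                # an arbitrary phase-space point
assert abs(pbracket(phi, Q, z)) < 1e-8   # Q(t) is a Dirac observable
```

With this example in hand, the vertex-ordering observable mentioned above can be written explicitly.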
It is given by, \begin{equation} \hat{O}(z)\vert \vec k,M\rangle=\ell_{\rm Planck}^2 k_{{\rm Int}( V z)} \vert \vec k, M\rangle \end{equation} where ${\rm Int}$ means integer part, $V$ is the number of vertices in the spin network (itself a Dirac observable without classical counterpart) and $z$ is a real parameter in the interval $[0,1]$. As one slides the parameter between zero and one, this observable picks out the successive values of the valences of the spin network. As seen from its action, this parameterized Dirac observable leaves the physical Hilbert space invariant, as one expects for a Dirac observable. We would also like to observe that the triads can be represented by parameterized Dirac observables. For $E^x$ we have that, \begin{equation} \hat E^x(x)\vert \vec k,M\rangle=\hat O\big(z(x)\big)\vert \vec k,M\rangle, \end{equation} where $z(x)$ is an arbitrary monotonic function from the radial domain to $[0, 1]$ that plays the role of the (functional) parameter. Different choices of function correspond to representing the triad in different spatial coordinates. An expression for $E^\varphi$ can be obtained by solving the Hamiltonian constraint, as it appears algebraically in it. Let us start from the classical expression for the Hamiltonian density ${\cal H}(x)$ that appears integrated in (\ref{17}), given by, \begin{equation} {\cal H}(x)= 2{E}^\varphi(x)\sqrt{\sqrt{{E}^x(x)} \left(1 +{{K}_{\varphi}(x)}^2\right) -2 G M} -\sqrt[4]{{E}^x(x)}[{E}^x(x)]' \label{28} \end{equation} and let us define \begin{equation} H_r(x)=\frac{{\cal H}(x)}{2\sqrt{\sqrt{E^x(x)}(1+{ {K}_{\varphi}(x)}^2)-2GM}} \end{equation} which makes it obvious that it has weakly vanishing Poisson bracket with the Hamiltonian constraint.
Now, taking into account (\ref{28}), we can define the parametrized observable, \begin{equation} {{\cal E}}^{\varphi}(x)={H_r(x)}+\frac{[\epsilon^x(x)]'}{2\sqrt{1+{{\kappa}_{\varphi}(x)}^2-\frac{2GM}{\sqrt{\epsilon^x(x)}}}}, \end{equation} and this is a parameterized Dirac observable dependent on two functional parameters $\kappa_\varphi(x)$ and $\epsilon^x(x)$, respectively associated with $K_\varphi$ and $E^x$. It satisfies \begin{equation} {\cal E}^\varphi(x)\vert_{{K_\varphi=\kappa_\varphi}\atop{E^x=\epsilon^x}}=E^\varphi(x). \end{equation} Taking into account that $\hat{H}_r$ vanishes on the physical space, the polymerization of $K_\varphi$, and using the new observables ${\hat O}(z)$ that appear at the quantum level, we can promote this parameterized observable to an operator acting on the physical space of states given by, \begin{equation}\label{eq:calEphi} {\hat{\cal E}}^\varphi(x)=\frac{\frac{\hat{O}(z(x)+1/V)-\hat{O}(z(x))}{x(z+1/V)-x(z)}} {2 \sqrt{1+\sin^2\left(\rho \alpha_\varphi(x)\right)/\rho^2-2GM/\sqrt{\hat{O}(z(x))}}} , \end{equation} that is a well defined operator on the physical Hilbert space ${\cal H}_{\rm phys}$. $z(x)$ and $\alpha_\varphi(x)$ are the parameters and $x(z)$ is the inverse of the monotonic function $z(x)$. The expression that appears in the numerator is the quantum version of $[E^x(x)]'$ that was taken as a functional parameter $[\epsilon^x(x)]'$ in the classical theory. In fact $\hat{O}(z(x)+1/V)\vert \vec k,M\rangle=\ell_{\rm Planck}^2 k_{{\rm Int}( V z(x)+1)} \vert \vec k, M\rangle$ and therefore \begin{equation} [E^x(x)]'\vert \vec k,M\rangle=\frac{\ell_{\rm Planck}^2\left( k_{{\rm Int}(V z(x)+1)}-k_{{\rm Int}(V z(x))}\right )}{x(z+1/V)-x(z)} \vert \vec k,M\rangle . \end{equation} A similar technique can be applied to the remaining kinematical variable $K_x(x)$, which can be promoted to a parameterized Dirac observable.
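The eigenvalue bookkeeping behind $\hat O(z)$ and $\hat{\cal E}^\varphi(x)$ can be made concrete with a toy computation (ours, purely illustrative), taking $z(x)=x$ on a fiducial unit interval and Planck units:

```python
import math

# Toy eigenvalues of O(z) and of E^phi-hat on a state |k, M>.
lPl, G, rho, M = 1.0, 1.0, 0.1, 0.2
k = [400, 410, 425, 445, 470]            # increasing valences; V vertices
V = len(k)

def O(z):                                 # O(z)|k,M> = lPl^2 k_{Int(Vz)} |k,M>
    return lPl ** 2 * k[min(int(V * z), V - 1)]

def x_of(z):                              # inverse of z(x) = x on [0, 1]
    return z

def Ephi_eigen(z, alpha):                 # eigenvalue of Eq. (eq:calEphi)
    num = (O(z + 1.0 / V) - O(z)) / (x_of(z + 1.0 / V) - x_of(z))
    den = 2.0 * math.sqrt(1.0 + (math.sin(rho * alpha) / rho) ** 2
                          - 2.0 * G * M / math.sqrt(O(z)))
    return num / den

val = Ephi_eigen(0.2, 0.3)
assert val > 0.0                          # a well defined, finite eigenvalue
```

The same label bookkeeping enters the operator $\hat{\cal K}_x(x)$ constructed next.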
We start with the classical phase space function ${\cal D}(x)=\left\{E^\varphi(x) [K_\varphi(x)]'-[E^x(x)]' K_x(x)\right\}$ obtained from the classical diffeomorphism constraint. Then, it is easy to verify that, \begin{equation} {\cal K}_x(x)=-\frac{{\cal D}(x)}{{E^x(x)}'}+ \frac{E^\varphi(x) {\kappa_\varphi(x)}'}{{\epsilon^x(x)}'}, \end{equation} satisfies ${\cal K}_x(x)|_{{E^x=\epsilon^x(x)}\atop{K_\varphi=\kappa_\varphi(x)}}=K_x(x)$, as required for a parameterized observable. As in the case of $E^\varphi$, upon quantization one has to take into account that ${\hat E}^x(x)={\hat O}(z(x))$ and $K_\varphi$ must be polymerized, which implies that $\kappa_\varphi(x)=\sin(\rho\alpha_\varphi(x))/\rho$. Thus, the quantum parameterized observable is \begin{equation} {\hat {\cal K}}_x(x) = \frac{{\hat{\cal E}}^\varphi(x) {\cos(\rho\alpha_\varphi(x))}{\alpha_\varphi}'(x)}{\frac{\hat{O}(z(x)+1/V)-\hat{O}(z(x))}{x(z+1/V)-x(z)}} \end{equation} As in the example of the free particle, one ends up with a system where all the kinematical variables are parameterized Dirac observables or pure parameters. \section{The metric as a Dirac observable} Using these results one can write parameterized Dirac observables representing the various components of the space-time metric. We start by noticing that, from a space-time point of view, the parameterized observables introduced in the previous section correspond to stationary choices where $K_\varphi$ and $E^x$ are functions of $x$ independent of $t$. One can define the classical metric components in any stationary gauge by imposing the gauge fixing conditions $\Phi_1=E^x(x)-\epsilon^x(x)$ and $\Phi_2=K_{\varphi}(x) - \kappa_\varphi(x)$, where $\epsilon^x(x)$ and $\kappa_\varphi(x)$ are arbitrary functions that represent the choice of coordinates for stationary space-times. In many cases, the gauge functions might depend on the canonical variables and $M$ as well.
This is the case, for instance, of the well-known Eddington--Finkelstein coordinates (see the appendix of \cite{analysisimproved} for details). We additionally require that the resulting space-times be asymptotically flat. This restricts $\epsilon^x(x)$ to $x^2+{\cal O}(x^{-1})$ and $\kappa_\varphi(x)$ to ${\cal O}(x^{-1})$ in the limit $x\to\infty$. These conditions allow us to determine \begin{equation} N(x)^2 = 1+{\kappa_\varphi}^2(x) -\frac{2 G M}{\sqrt{\epsilon^x(x)}},\quad N^x(x) = 2\frac{\kappa_\varphi(x)\sqrt{\epsilon^x(x)}}{[{\epsilon^x}(x)]'}\sqrt{1+{\kappa_\varphi}^2(x) -\frac{2 G M}{\sqrt{\epsilon^x(x)}}}, \end{equation} up to an irrelevant constant of integration for the lapse $N(x)$ that is fixed by the condition $N(x)=1+{\cal O}(x^{-1})$ in the limit $x\to\infty$. The classical metric components for stationary coordinates are $g_{xx}(x)=\left({\cal E}^\varphi(x)\right)^2/\epsilon^x(x)$, \begin{equation} g_{tx}(x) = g_{xx}(x) N^x(x) =-\frac{[\epsilon^x(x)]' \kappa_\varphi(x)}{2 \sqrt{\epsilon^x(x)}\sqrt{1+\kappa_\varphi(x)^2-\frac{2 G M}{\sqrt{\epsilon^x(x)}}}}, \end{equation} and a similar expression for $g_{tt}(x)$. Notice the role played by $K_\varphi(x)$ in this expression: it determines the slicing. For instance, $K_\varphi(x)=0$ leads to co-moving slicings with $g_{tx}(x)=0$, like the ones that only cover the exterior of black holes. Non-vanishing $K_\varphi(x)$'s will be needed for horizon penetrating slicings like the Painlev\'e--Gullstrand and Eddington--Finkelstein ones. The above expression is straightforwardly promoted to an operator acting on the physical space of states by taking into account the new observables ${\hat O}(z(x))$, the polymerization of the extrinsic curvature $\kappa_\varphi(x)=\frac{\sin\left(\rho\alpha_\varphi(x)\right)}{\rho}$ and the observable ${\hat M}$. General gauge fixings may require considering functions $\alpha_\varphi(x,{\hat M},{\hat O})$.
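As a concrete check of these formulas (ours, in units $G=M=1$), the choice $\kappa_\varphi(x)=\sqrt{2GM/x}$, $\epsilon^x(x)=x^2$ reproduces the Painlev\'e--Gullstrand slicing, with unit lapse and the familiar shift $N^x=\sqrt{2GM/x}$, regular across the horizon:

```python
import math

# Painleve-Gullstrand gauge from the stationary-gauge lapse/shift formulas.
G, M = 1.0, 1.0

def kappa(x): return math.sqrt(2.0 * G * M / x)   # kappa_phi(x)
def eps(x):   return x * x                        # eps^x(x)

def N2(x):    # N(x)^2 = 1 + kappa^2 - 2GM/sqrt(eps)
    return 1.0 + kappa(x) ** 2 - 2.0 * G * M / math.sqrt(eps(x))

def Nx(x):    # N^x = 2 kappa sqrt(eps) / [eps]' * sqrt(N^2), with [eps]' = 2x
    return 2.0 * kappa(x) * math.sqrt(eps(x)) / (2.0 * x) * math.sqrt(N2(x))

for x in (0.5, 1.0, 2.0, 10.0):          # inside and outside the horizon
    assert abs(N2(x) - 1.0) < 1e-12
    assert abs(Nx(x) - math.sqrt(2.0 * G * M / x)) < 1e-12
```

The slicing is manifestly horizon-penetrating: nothing degenerates at $x=2GM$, in contrast to the co-moving $\kappa_\varphi=0$ choice.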
For instance, the metric component $\hat{g}_{tx}(x)$ takes the form, \begin{equation} \hat{g}_{tx} (x)= \frac{{\hat{\cal E}}^\varphi(x) \sin\left(\rho \alpha_\varphi(x)\right)}{2\rho \sqrt{\hat{O}(z(x))}}. \end{equation} The square root that appears in ${\hat{\cal E}}^\varphi(x)$ ---see Eq \eqref{eq:calEphi}--- leads to the following inequality, in order to get a self-adjoint operator (notice that there are no factor ordering issues), $ 1 +\left(\frac{\sin \left(\rho \alpha_{\varphi}(x)\right)}{\rho}\right)^2-\frac{2 G M}{\sqrt{\hat{O}(z(x))}}\ge 0. $ The inequality is violated when the eigenvalues of $E^x(x)$ become small. The most favorable choice of parameters, from the point of view of keeping the expression positive at that point, is $\alpha_{\varphi}(x=0)=\pi/(2\rho)$ (since $E^x(x)$ is monotonic, the worst case happens at $x=0$). Therefore the condition for the square root that appears in the metric to be real, and hence for the metric operator to be self-adjoint, is, in terms of the eigenvalues of $\hat{E}^x$, given by, $ k_0> \left(\frac{2 G M }{\ell_{\rm Planck}\left(1 +\frac{1}{\rho^2}\right)}\right)^2. $ As a consequence, given the fact that we take $\rho$ small, sufficiently small values of $k_0$ are excluded in order to have a self-adjoint metric operator, and as a consequence the singularity is avoided. The region exterior to the horizon is covered for any choice of $\alpha_\varphi(x)$, since the last term in the first inequality is less than or equal to one outside the horizon. Notice that there exist choices of the parameters that would make the metric singular. Those correspond to coordinate singularities and loop quantum gravity correctly does not eliminate them (as in the classical theory they amount to pathological choices of parametrized observables). As we mentioned, the action of the Hamiltonian constraint and all Dirac observables leave the values of $\vec{k}$ invariant, so it is consistent to consider values of the $k_j$'s bigger than $k_0$.
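The scale of the excluded region of valences can be illustrated numerically (a sketch of ours, in Planck units): the bound behaves like $(2GM)^2\rho^4$ for small $\rho$, so only a tiny neighborhood of the degenerate triad is removed.

```python
# Self-adjointness bound k_0 > (2 G M / (lPl (1 + 1/rho^2)))^2, Planck units.
lPl, G = 1.0, 1.0

def k_min(M, rho):
    return (2.0 * G * M / (lPl * (1.0 + 1.0 / rho ** 2))) ** 2

# the smaller the polymerization parameter rho, the smaller the excluded
# region: k_min = (2GM)^2 rho^4 / (1 + rho^2)^2 < (2GM)^2 rho^4
for rho in (0.5, 0.1, 0.01):
    assert k_min(10.0, rho) < (2.0 * G * 10.0) ** 2 * rho ** 4
    assert k_min(10.0, rho / 2.0) < k_min(10.0, rho)
```

In particular, for macroscopic masses and small $\rho$ the bound still excludes the would-be singular valences, which is all that is needed for the argument.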
This implies that the singularity that appears inside black holes in general relativity can be eliminated. This will also be the case in the improved quantization we discuss in the next section, although the details will differ. The analysis can be extended to the interval $[-x_+,x_+]$ with a simple generalization of $O(z)$ to $z\in[-1,1]$. The expectation value of the determinant of the space-time metric can be explicitly calculated in any given gauge; it goes through a maximum value and starts decreasing for negative values of $x$. One can view this as a generalization of the Kruskal extension including a new region that is reached by tunneling through the singularity. Clearly, not all quantum states will exhibit semi-classical behavior. To begin with, a condition for good semiclassical states is that the separation of the vertices of the spin network be small with respect to the relevant radius of curvature. That requires consecutive values of the $k_j$'s that are close to each other. The quantization of the areas of the spheres of symmetry imposes a minimum bound on the separation of consecutive points on the spin networks. For instance, for a black hole of mass $M$, close to the horizon the curvature is proportional to $(GM)^{-2}$, while the minimal separation of two points on the spin networks will be proportional to ${\ell_{\rm Planck}}^2/GM$ there. For black holes that are large compared to the Planck scale, that is a very small number. This allows one to consider spin networks with very small separations of their vertices. They will approximate a smooth geometry exceedingly well. As mentioned, one can also consider states that are superpositions of several spin networks. These states will improve the semiclassical behavior. \section{Dynamics: improved quantization} Up to now we have considered a fixed polymerization parameter $\rho$. This is a $\mu_0$-style quantization in the terminology of loop quantum cosmology.
A problem associated with it is that the curvature near the region where the singularity used to be in the classical theory, although finite, can reach very large values: it goes as $G^2 M^2/\ell_{\rm Planck}^4$. The improved quantization aligns better with what has been observed in singularity elimination in loop quantum cosmology, where no trans-Planckian behavior is observed. We have seen that the basic mathematical building blocks of our quantum theory are $1$-dimensional oriented graphs. Here we still consider graphs such that each contains a collection of consecutive edges $e_j$, each one associated with a vertex $v_j$. The kinematical Hilbert space ${\cal H}^{\rm grav}_{\rm kin}$ of the theory is characterized by a basis of states $|\vec{k},\vec{\mu}\rangle$. Here, $k_j\in \mathbb{Z}$ and $\mu_j\in \mathbb{R}$ are valences of edges $e_j$ and vertices $v_j$, respectively. The treatment is similar to what has been done above, but now the point holonomies \mbox{$\hat{{\cal N}}_{\rho_j}:=\widehat{\exp}(i\rho_j K_\varphi(x_j))$} of the connection $K_\varphi$ defined on a vertex $v_j$ act as follows \begin{equation}\label{eq:Nmu-def} \hat{{\cal N}}_{\rho_j}|\mu_j\rangle = |\mu_j+\rho_j\rangle, \end{equation} where $\rho$ now depends on $j$. An improved quantization for these types of models was first proposed by Chiou {\em et al.} \cite{chiou}. The idea is very similar to the improved quantization of LQC \cite{Ashtekar:2006rx}: one relates the polymerization parameter to the area gap, \begin{equation}\label{eq:area-cond} 4\pi \ell^2_{\rm Pl}k_j \bar\rho^2_j = \Delta. \end{equation} This can be viewed as associating the point holonomy of $K_\varphi$ with a plaquette enclosing an area $\Delta$, the first non-zero eigenvalue of the area operator in loop quantum gravity, of the order of $\ell_{\rm Planck}^2$. Here $\ell^2_{\rm Pl}k_j$ is the eigenvalue of the kinematical operator $\hat E^x(x_j)$, defined in Eq. \eqref{13}.
Now, point holonomies \eqref{eq:Nmu-def} of ``length'' $\bar\rho_j$ will produce a shift in a state $|\mu_j\rangle$ which depends on the spectrum of some kinematical operators. Concretely, $|\mu_j\rangle\to|\mu_j+\bar\rho_j\rangle $, and given the above relation, \begin{equation} \bar\rho_j = \sqrt{\frac{\Delta}{4\pi \ell^2_{\rm Pl}k_j}}.\label{rho-cond} \end{equation} Therefore, it will be convenient to adopt a more appropriate state labeling $|\nu_{j}\rangle$ with $\nu_{j}=\sqrt{k_j}\mu_{j}/\lambda$, and $\lambda^2=\Delta/4\pi \ell_{\rm Pl}^2$. Point holonomies of the form \mbox{$\hat{{\cal N}}_{\bar\rho_j}:=\widehat{\exp}(i\bar\rho_j K_\varphi(x_j))$} again have a well-defined and simple action on this new (single-vertex) state basis of ${\cal H}^{\rm grav}_{\rm kin}$ \begin{equation}\label{eq:Ndef} \hat{{\cal N}}_{\bar\rho_j}|\nu_j\rangle = |\nu_j+1\rangle. \end{equation} The physical space of states annihilated by the constraints can be obtained through a procedure similar to the one we followed before. The main difference is that the polymerization adopted for the extrinsic curvature $K_\varphi(x)$ takes the form $\sin\left(\bar\rho_j\,K_\varphi(x_j)\right)/{\bar\rho_j}$. Using those techniques one can check that the physical space of states is identified by the same basis and has the same observables as in the previous approach. The main difference is the change of polymerization, which has implications for the details of how the singularity is eliminated. \section{Singularity elimination and space-time extensions} As we discussed in section 6, in order to have a self-adjoint operator for the metric as a parameterized Dirac observable, one needs to restrict the $k_j$'s to values larger than a minimum number $k_0$. That minimum number is of order unity. This is due to the fact that the polymerization parameter $\rho$ is small. This requires some explanation.
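The convenience of the relabeling $\nu_j=\sqrt{k_j}\,\mu_j/\lambda$ can be verified directly: since $\bar\rho_j=\sqrt{\Delta/(4\pi\ell_{\rm Pl}^2 k_j)}$ and $\lambda^2=\Delta/(4\pi\ell_{\rm Pl}^2)$, a point holonomy shifts $\nu_j$ by exactly one unit, for any $k_j$. A minimal sketch (the value of $\Delta$ is illustrative):

```python
import math

Delta = 4.0*math.pi     # area gap in Planck units (order l_Pl^2; value illustrative)
lP = 1.0
lam = math.sqrt(Delta/(4.0*math.pi*lP**2))

def rho_bar(k):
    """Improved-dynamics polymerization parameter, from 4*pi*lP^2*k*rho^2 = Delta."""
    return math.sqrt(Delta/(4.0*math.pi*lP**2*k))

def nu(k, mu):
    """Relabeled vertex quantum number nu = sqrt(k)*mu/lambda."""
    return math.sqrt(k)*mu/lam

# a point holonomy shifts mu_j by rho_bar_j; in the nu labeling the shift is exactly 1
for k in (1, 10, 10**6):
    mu = 0.37
    step = nu(k, mu + rho_bar(k)) - nu(k, mu)
    assert abs(step - 1.0) < 1e-9
print("holonomy shift in nu labeling = 1")
```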
In full loop quantum gravity the polymerization parameter would be associated with the minimum area of a loop, given by the quantum of area of the theory. Therefore, compared to features of a semiclassical solution in the exterior of a macroscopic black hole, it is a very small number. Here, since we are dealing with point holonomies, the polymerization parameter is dimensionless, so there is less clear guidance on its value. Yet, if we believe that we should be capturing features of the full theory, one necessarily has to conclude that the parameter must be small. This is important because the condition of a minimum $k_0$ of order unity is a very sensible condition in the context of a discrete geometry. Having a ``hole'' in the manifold that is of the order of the point separation of its discrete geometry means that, if one is considering a semi-classical solution, such a hole would not be a distinguishable feature. This matters because it corresponds to the region where the singularity is present in the classical theory. Furthermore, in the case of the improved quantization, the polymerization parameter is a ratio of length scales (or more precisely, the square root of a ratio of areas), $\rho_j \sim \sqrt{\Delta /( k_j \ell_{\rm Pl}^2)}$. Therefore, a natural argument would be to limit $k_j$ to ensure that the area $k_j \ell_{\rm Pl}^2$ be bounded below by $\Delta$ (i.e., LQG gives a lower bound on allowed non-zero areas). In what follows, in order to analyze the implications of the improved dynamics for the singularity resolution, we will work with spin networks with a finite but large number of vertices $V$.
For simplicity, we restrict the study to spin networks whose values of $k_j$ are associated with a lattice with equidistant spacing such that, \begin{equation} x_j = \delta x(|j|+j_0), \end{equation} where $j\in\mathbb{Z}$ and $j_0\geq 1$ is an integer that will be specified below and $\delta x$ is the step of the lattice of the coordinate $x$, which we choose to be $\delta x=\ell_{\rm Pl}$. This choice amounts to choosing the function $z(x)\in[-1,1]$ as $z(x)=x/(V\delta x)$, such that $z(x_j)={\rm sign}(j)(|j|+j_0)/V$, and we choose the semi-classical basis elements $k_j=(|j|+j_0)^2$. For instance, in this family of states, the triad $E^x$ and its spatial derivative can be easily represented as physical parametrized observables as, \begin{align}\label{eq:hex} &\hat E^x(x_j)|\vec k,M\rangle=\hat O(z(x_j))|\vec k,M\rangle=\ell_{\rm Pl}^2 k_{j}|\vec k,M\rangle=x^2_{j}|\vec k,M\rangle.\\\label{eq:hdex} & [\hat E^x(x_j)]'|\vec k,M\rangle=\frac{(x_j+\delta x)^2-x_j^2}{\delta x}|\vec k,M\rangle={\rm sign}(j)(2x_j+\delta x)|\vec k,M\rangle. \end{align} We now consider the action of the parametrized Dirac observable $\hat{\cal E}^\varphi$ \begin{equation} \label{eq:hephi} \hat{\cal E}^\varphi(x_j) = \frac{\left[\hat E^x(x_j)\right]'/2}{\sqrt{1+ \widehat{\frac{{\sin^2\left(\bar\rho_j \alpha_\varphi(x_j)\right)}}{{\bar\rho}_j^2}} -\frac{2 G \hat M}{\sqrt{|\hat E^x(x_j)|}}}}, \end{equation} where $\alpha_\varphi(x_j)$ can depend on $\hat M$ or $\hat O(z)$. For instance, for Eddington--Finkelstein coordinates, \begin{equation} \widehat{\frac{{\sin^2\left({\bar\rho}_j \alpha_\varphi(x_j)\right)}}{{\bar\rho}_j^2}}=\frac{(2G\hat M)^2}{{\hat O(z(x_j))}}\frac{1}{{1+\frac{2G\hat M}{\sqrt{\hat O(z(x_j))}}}}.
\end{equation} In order for $\hat{\cal E}^\varphi$ to be a well-defined self-adjoint operator we have the condition, in terms of eigenvalues, \begin{equation}\label{eq:imp} 1+\frac{\sin^2\left(\bar\rho_j \alpha_\varphi(x_j)\right)}{\bar\rho^2_j} -\frac{2 G M}{\sqrt{E^x(x_j)}}>0, \quad \forall x_j,M. \end{equation} This implies a minimum eigenvalue of $\hat E^x(x_j)$, $\ell_{\rm Pl}^2 k_0$, and at this point the curvature is maximum. Let us analyze this situation in some detail. The most restrictive case corresponds to $\sin\left(\bar\rho_j \alpha_\varphi(x_j)\right)=1$, with $\bar\rho_j$ given by \eqref{rho-cond}. For a given mass $M$, the smallest area of the 2-spheres must be such that \begin{equation}\label{eq:k0-cond} \left(1+\frac{4\pi \ell_{\rm Pl}^2 k_0}{\Delta}\right) -\frac{2 G M}{\sqrt{\ell_{\rm Pl}^2 k_0}}>0. \end{equation} Assuming that $k_0\gg 1$, we get \begin{equation} k_0 > \left(\frac{2 G M \Delta}{4\pi \ell_{\rm Pl}^3}\right)^{2/3} = \tilde k_0. \end{equation} Now, since $\Delta \simeq \ell_{\rm Pl}^2$, the limit $k_0\gg 1$ implies $M\gg m_{\rm Pl}$. This would correspond to large black holes (compared to the Planck mass). We take $k_0$ to be the first integer larger than ${\tilde k}_0$. For states with $\tilde k_0\gg 1$, the minimum label of the smallest 2-sphere scales as\footnote{This scaling with the mass is in agreement with the prescription proposed by Ashtekar, Olmedo and Singh \cite{aos}.} \begin{equation} k_0 \simeq\tilde k_0 \propto M^{2/3}. \end{equation} After polymerizing $K_\varphi$, we will represent its presence in the metric via a function $F(x_j)\in[-1,1]$, \begin{equation}\label{eq:slice} \widehat{\sin^2\left(\bar\rho_j \alpha_\varphi(x_j)\right)}=[\hat F(x_j)]^2 \end{equation} and different choices of $F(x_j)$ correspond to different slices.
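The lattice eigenvalue relations and the improved-dynamics bound $\tilde k_0\propto M^{2/3}$ can both be checked with a short numerical sketch (the mass and the choice $\Delta=\ell_{\rm Pl}^2$ are illustrative):

```python
import math

lP = 1.0
dx = lP                       # lattice step, delta x = l_Pl
j0 = 5

def x_of(j):                  # x_j = dx*(|j| + j0)
    return dx*(abs(j) + j0)

def k_of(j):                  # semiclassical choice k_j = (|j| + j0)^2
    return (abs(j) + j0)**2

# eigenvalue checks: E^x(x_j) = lP^2 k_j = x_j^2, and the finite difference
# [(x_j + dx)^2 - x_j^2]/dx = 2 x_j + dx
for j in (-3, 1, 7):
    assert abs(lP**2*k_of(j) - x_of(j)**2) < 1e-12
    fd = ((x_of(j) + dx)**2 - x_of(j)**2)/dx
    assert abs(fd - (2*x_of(j) + dx)) < 1e-12

# improved-dynamics bound: with sin = 1 and rho_bar^2 = Delta/(4 pi lP^2 k),
# (1 + 4 pi lP^2 k/Delta) - 2GM/(lP sqrt(k)) > 0 gives
# k0 ~ (2 G M Delta/(4 pi lP^3))^(2/3) for k0 >> 1
Delta = lP**2
def lhs(k, GM):
    return (1.0 + 4.0*math.pi*lP**2*k/Delta) - 2.0*GM/(lP*math.sqrt(k))

GM = 1.0e6                                     # large black hole in Planck units
kt = (2.0*GM*Delta/(4.0*math.pi*lP**3))**(2.0/3.0)
k0 = math.floor(kt) + 1
assert lhs(k0, GM) > 0.0 and lhs(max(k0//2, 1), GM) < 0.0
# scaling k0 ~ M^(2/3): doubling M multiplies the bound by 2^(2/3)
kt2 = (2.0*(2*GM)*Delta/(4.0*math.pi*lP**3))**(2.0/3.0)
assert abs(kt2/kt - 2.0**(2.0/3.0)) < 1e-9
print("k0 =", k0)
```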
The metric operator can be written as, \begin{eqnarray} \hat g_{tt}(x_j) &=& -\left(1-\frac{\hat r_S}{\sqrt{\hat E^x(x_j)}}\right),\\ \hat g_{tx}(x_j) &=& -\sqrt{\frac{\pi}{\Delta}}\frac{\left\{\widehat{\left[E^x(x_j)\right]'}\right\}{\sqrt{[\hat F(x_j)]^2}}}{\sqrt{1-\frac{\hat r_S}{\sqrt{\hat E^x(x_j)}}+\frac{4\pi \hat E^x(x_j) [\hat F(x_j)]^2}{\Delta}}},\\ \hat g_{xx}(x_j) &=& \frac{\left\{\widehat{\left[E^x(x_j)\right]'}\right\}^2}{4 \hat E^x(x_j)\left(1-\frac{\hat r_S}{\sqrt{\hat E^x(x_j)}}+\frac{4\pi \hat E^x(x_j) [\hat F(x_j)]^2}{\Delta}\right)},\\ \quad\hat g_{\theta\theta}(x_j)&=&\hat E^x(x_j),\quad \hat g_{\phi\phi}(x_j)=\hat E^x(x_j)\sin^2\theta, \end{eqnarray} with $\hat r_S = 2G\hat M$. In terms of this operator we can obtain an effective metric, assuming we are in a semi-classical situation, as $g_{\mu\nu}=\langle \hat g_{\mu\nu}\rangle$. We can take the expectation value because we have the metric written as an operator acting on the physical space of states annihilated by the constraints. For the states considered this is straightforward, as they are eigenstates. To compare with traditional classical results in terms of a metric geometry we will make some additional assumptions. We will consider the leading quantum corrections when the dispersion of the mass can be neglected. We can then proceed to drop all hats in the above expression and call the result ${}^{(0)}g_{\mu\nu}(x_j)$. We also take, for convenience, a continuum limit where $x_j=\delta x\, |j|+x_0$ is replaced by $(|x|+x_0)$, with $x\in \mathbb{R}$. We keep terms $\delta x/x_j$, writing them as $\delta x/(|x|+x_0)$, to first order. This implies that the effective geometries ``bounce'' when they reach $x=0$. Let us consider as an example the Painlev\'e--Gullstrand coordinates \cite{frontiers}. They correspond to \begin{align}\label{eq:gf-f1} \hat F(x_j)=\bar\rho_j\sqrt{\frac{\hat r_S}{\sqrt{\hat E^x(x_j)}}}. \end{align} This choice is equivalent to a lapse operator $\hat N(x_j)=\hat I$.
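That the choice \eqref{eq:gf-f1} indeed corresponds to unit lapse can be seen algebraically: with $\bar\rho_j^2=\Delta/(4\pi\hat E^x)$, the combination $1-\hat r_S/\sqrt{\hat E^x}+4\pi\hat E^x[\hat F]^2/\Delta$ appearing under the square roots collapses to $1$. A minimal numerical verification (eigenvalues treated as numbers; parameter values illustrative):

```python
import math

Delta, lP = 1.0, 1.0

def unit_lapse_combination(Ex, rS):
    """For F = rho_bar*sqrt(rS/sqrt(E^x)) with rho_bar^2 = Delta/(4 pi E^x),
    evaluate 1 - rS/sqrt(E^x) + 4 pi E^x F^2/Delta (should equal 1)."""
    rho2 = Delta/(4.0*math.pi*Ex)
    F2 = rho2*rS/math.sqrt(Ex)
    return 1.0 - rS/math.sqrt(Ex) + 4.0*math.pi*Ex*F2/Delta

for Ex in (4.0, 100.0, 1.0e6):
    assert abs(unit_lapse_combination(Ex, 10.0) - 1.0) < 1e-12
print("Painleve-Gullstrand choice: lapse = 1")
```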
Notice that the function $F(x)$ defined by \eqref{eq:gf-f1} satisfies $F(x)<1$ for all $x\neq 0$, while $F(x=0)=1$. This allows one to probe the high-curvature region of the effective geometry. The metric can be written as, \begin{eqnarray} {}^{(0)} g_{tt}(x) &=& \label{eq:hatgmunu31}-\left(1-\frac{r_S}{|x|+x_0}\right),\\ {}^{(0)} g_{tx}(x) &=& -{\rm sign}(x)\sqrt{\frac{r_S}{|x|+x_0}}\left(1+\frac{\delta x}{2(|x|+x_0)}\right)\,,\\ {}^{(0)} g_{xx}(x) &=& \left(1+\frac{\delta x}{2(|x|+x_0)}\right)^2, \quad {}^{(0)} g_{\theta\theta}(x)=(|x|+x_0)^2,\\ {}^{(0)}g_{\phi\phi}(x)&=&(|x|+x_0)^2\sin^2\theta.\label{eq:hatgmunu3} \end{eqnarray} For large $x$ the metric approximates extremely well the Schwarzschild solution in Painlev\'e--Gullstrand coordinates. The curvature reaches its maximum when $F(x)=1$, at $x=0$. \begin{figure}[ht] \begin{center} {\centering \includegraphics[width = 0.85\textwidth]{gmunu-BH-WH} } \end{center} \caption{The $tt$ component of the metric and the inverse of the $xx$ component for the metric in diagonal coordinates. When the first vanishes, horizons arise. Notice that in the region between the two horizons the discreteness is more manifest; it is represented by the separation of the dots (the plot does not show all the points in the lattice but only one out of fifty). Reproduced from reference \cite{frontiers}.} \label{gtt} \end{figure} It is convenient to go to a diagonal gauge, \begin{equation}\label{eq:diag-g} {}^{(0)}g_{xx}(x) \to {}^{(0)}\tilde g_{xx}(x) = \frac{\left(1+\frac{\delta x}{2(|x|+x_0)}\right)^2}{\left(1-\frac{r_S}{|x|+x_0}\right)}, \quad {}^{(0)} g_{tx}(x) \to {}^{(0)}\tilde g_{tx}(x) = 0, \end{equation} while all other components remain as \begin{align} {}^{(0)}g_{tt}(x) &\to {}^{(0)}\tilde g_{tt}(x) = -\left(1-\frac{r_S}{|x|+x_0}\right), \\ {}^{(0)} g_{\theta\theta}(x) &\to {}^{(0)}\tilde g_{\theta\theta}(x) = (|x|+x_0)^2, \quad {}^{(0)} g_{\varphi\varphi}(x) \to {}^{(0)}\tilde g_{\varphi\varphi}(x) = (|x|+x_0)^2\sin^2\theta.
\end{align} Figure \ref{gtt} shows the values of the $tt$ and inverse $xx$ components of the metric, where one sees the emergence of two regions where one has a horizon, one for positive values of $x$ and one for negative ones \cite{frontiers}. This corresponds to a Penrose diagram like that of Figure \ref{fig:penrose}, which is reminiscent of Reissner--Nordstrom, but singularity-free. \begin{figure}[ht] \begin{center} {\centering \includegraphics[width = 0.75\textwidth]{schild-imp} } \end{center} \caption{Penrose diagram of the effective geometry discussed in the text. B and W denote a black hole and white hole respectively. The horizontal lines separating them correspond to regions of high curvature. Regions I, II, III, IV approximate very well the Schwarzschild exterior. Reproduced from reference \cite{frontiers}.} \label{fig:penrose} \end{figure} In terms of the semiclassical metric we can compute an effective stress tensor through the Einstein tensor, $T_{\mu\nu}:=\frac{1}{8\pi G} G_{\mu\nu}$, and from it define effective densities and pressures, taking into account that the Killing vector $X^\mu=(\partial_t)^{\mu}$ of the metric \eqref{eq:diag-g} will be timelike or spacelike as one traverses the horizons of the black hole of the Penrose diagram shown in Fig. \ref{fig:penrose}. In summary, \begin{equation} \rho^{ext} := -T_{\mu\nu}\frac{X^\mu X^\nu}{X^\rho X_\rho}, \end{equation} \begin{equation} p_x^{ext} := T_{\mu\nu}\frac{r^\mu r^\nu}{r^\rho r_\rho}, \end{equation} and \begin{equation} p_{||}^{ext} := T_{\mu\nu}\frac{\theta^\mu \theta^\nu}{\theta^\rho \theta_\rho}, \end{equation} and for the interior, \begin{equation} \rho^{int} := -T_{\mu\nu}\frac{r^\mu r^\nu}{r^\rho r_\rho}, \end{equation} \begin{equation} p_x^{int} := T_{\mu\nu}\frac{X^\mu X^\nu}{X^\rho X_\rho}, \end{equation} and notice that $p_{||}^{int} = p_{||}^{ext}$ as $\theta^\mu$ remains space-like. Figure \ref{tmunu} shows a plot of these components.
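The qualitative features just described, two horizons symmetric about a bounce at $x=0$ and rapid recovery of Schwarzschild far away, can be probed directly from the effective line element \eqref{eq:diag-g}. A small numerical sketch (the values of $r_S$, $x_0$ and $\delta x$ below are illustrative, in Planck units):

```python
rS, x0, dx = 1000.0, 10.0, 1.0   # Planck units; illustrative values

def g_tt(x):   return -(1.0 - rS/(abs(x) + x0))
def g_xx(x):   # diagonal gauge
    return (1.0 + dx/(2.0*(abs(x) + x0)))**2 / (1.0 - rS/(abs(x) + x0))
def g_thth(x): return (abs(x) + x0)**2

# two horizons, at |x| = rS - x0, symmetric about the bounce at x = 0
xh = rS - x0
assert abs(g_tt(xh)) < 1e-9 and abs(g_tt(-xh)) < 1e-9
assert g_xx(2.0*rS) > 0.0                 # spacelike x outside the horizon
# the area radius bounces: g_thth has its minimum, x0^2, at x = 0
assert g_thth(0.0) == x0**2
assert g_thth(5.0) > g_thth(0.0) and g_thth(-5.0) > g_thth(0.0)
# far away, Schwarzschild is recovered: g_tt -> -(1 - rS/x)
x = 1.0e7
assert abs(g_tt(x) + (1.0 - rS/x)) < 1e-6
print("two horizons and a bounce at x = 0")
```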
As can be seen, negative values develop in the region where the singularity is replaced by a bounce. Although there are no violations of the strong energy condition, as the energy density $\rho^{int}$ is always positive, there is a violation of the dominant energy condition. One can therefore see this effective semi-classical violation as explaining how the space-time circumvents the singularity theorems in the interior of the black hole. \begin{figure}[ht] {\centering \includegraphics[width = 0.85\textwidth]{Tmunu-BH-WH} } \caption{The stress-energy tensor of the effective metric ${}^{(0)}\tilde g_{\mu\nu}(x)$. This plot corresponds to $\delta x=\ell_{\rm Pl}$, namely, $s=1$. Reproduced from reference \cite{frontiers}.} \label{tmunu} \end{figure} \section{Covariance} Since the quantizations we have considered are canonical, it may not appear obvious that the results are covariant in the traditional sense. For instance, the polymerization technique affects spatial variables that are slicing dependent. This has led to criticisms of our work \cite{Bojocrit}. An aspect that has to be pointed out is that we are redefining the constraints in order to have a Lie algebra between them. Any redefinition of variables is bound to have problems at certain points of phase space, described in certain coordinates, and may therefore lead to a non-equivalent quantization. We believe this is inevitable and a reasonable expectation. One way of checking covariance is to show that the line elements resulting from the space-time metrics constructed as Dirac observables are invariant. This was explored in some detail in \cite{covarianceGOP}. Here we just show an example to give the flavor of the calculations involved; we refer to the cited reference for more details. Let us consider stationary foliations, like the Painlev\'e--Gullstrand or Eddington--Finkelstein coordinates. The choice of functional parameter $K_{\varphi}(x)=\kappa_{\varphi}(x)$ will determine the foliation.
For convenience, we introduce a function $F(x_j)$ such that $\sin\left(\rho_j \alpha_{\varphi}(x_j)\right)=F(x_j)$ with $F(x_j) \in [-1,1]\,\, \forall x_j$, and adopt the notation $F(x_j)\equiv F_j$ (and similarly for other parameters and operators), as we discussed before. Each choice of $F_j$ corresponds to a different foliation; for instance, $F_j=\rho_j\sqrt{r_S/\sqrt{E^x_j}}$ leads to the ingoing Painlev\'e--Gullstrand form of the metric and $F_j=\rho_j{r_S}/\sqrt{E^x_j(1+r_S/\sqrt{E^x_j})}$ to ingoing Eddington--Finkelstein coordinates. The space-time metric is given by the operators, \begin{eqnarray} \hat{N}_F(x_j)&=& \sqrt{1+\frac{F_j^2}{\rho_j^2} -\frac{r_S}{\sqrt{\hat{E}^x_j}}},\\ \hat{N}^x_F(x_j)&=& \frac{2F_j}{\rho_j} \frac{\sqrt{\hat{E}^x_j}}{\left(\hat{E}^x_j\right)'} \hat{N}_F(x_j),\\\label{eq:Fgxx} \hat{g}^F_{xx}(x_j)&=& \frac{\left(\left(\hat{E}^x_j\right)'\right)^2}{4\hat{E}^x_j} \hat{N}_F(x_j)^{-2},\\\label{eq:Fgtt} \hat{g}^F_{tt}(x_j)&=&-1+\frac{\hat{r}_S}{\sqrt{\hat{E}^x_j}},\\\label{eq:Fgtx} \hat{g}^F_{tx}(x_j)&=& \hat{g}^F_{xx}(x_j) \hat{N}^x_F(x_j). \end{eqnarray} In terms of this operator we consider the length of a polygonal curve $(t,x)$ described by a discrete set of points $[\ldots,(t_j,x_j),(t_{j+1},x_{j+1}),\ldots]$ where $\sqrt{\hat{E}^x_j}|{\vec k},M\rangle=\ell_{\rm Planck}\sqrt{k_j}|{\vec k},M\rangle=(|j|+j_0) \ell_{\rm Planck}|{\vec k},M\rangle=x_j|{\vec k},M\rangle$ and ${\widehat t(x_j)}|{\vec k},M\rangle=t(x_j)|{\vec k},M\rangle$. More general curves can be obtained by combining these. The reason for considering polygonal curves is that space is discrete in this setting. Their length can be written as, \begin{equation}(\widehat{\Delta s_j})^2={\widehat{g_{ab}(t_j,x_j)}}\,\widehat{\Delta {x^a_j}}\, \widehat{\Delta {x^b_j}}, \end{equation} with ${\widehat{\Delta x^0_j}}={\hat t}_{j+1}-{\hat t}_j={\widehat{ \Delta t}_j}$ and ${\widehat{ \Delta x^1_j}}={\hat x}_{j+1}-{\hat x}_{j}={\widehat{ \Delta x}_j}$.
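A piece of the slicing-independence just described can be checked numerically with the eigenvalues of the operators above: $\hat g^F_{tt}$ does not involve $F_j$ at all, and the determinant of the $t$--$x$ block, $g^F_{tt}g^F_{xx}-(g^F_{tx})^2$, collapses to $-[(E^x)']^2/(4E^x)$ for any $F_j$. A minimal sketch (not the full line-element computation of \cite{covarianceGOP}; parameter values illustrative):

```python
import math

def metric_2d(Ex, dEx, F, rho, rS):
    """t-x block of the metric eigenvalues for slicing function value F."""
    N2 = 1.0 + (F/rho)**2 - rS/math.sqrt(Ex)
    Nx = (2.0*F/rho)*math.sqrt(Ex)/dEx*math.sqrt(N2)
    gxx = dEx**2/(4.0*Ex)/N2
    gtt = -1.0 + rS/math.sqrt(Ex)
    gtx = gxx*Nx
    return gtt, gtx, gxx

rS, rho = 2.0, 0.1
for x in (3.0, 5.0, 20.0):
    Ex, dEx = x*x, 2.0*x
    # Painleve-Gullstrand and Eddington-Finkelstein slicing functions
    F_pg = rho*math.sqrt(rS/math.sqrt(Ex))
    F_ef = rho*rS/math.sqrt(Ex*(1.0 + rS/math.sqrt(Ex)))
    out = []
    for F in (F_pg, F_ef):
        gtt, gtx, gxx = metric_2d(Ex, dEx, F, rho, rS)
        out.append((gtt, gtt*gxx - gtx**2))
    (gtt1, det1), (gtt2, det2) = out
    # g_tt and the 2d determinant are slicing independent:
    # det = -[(E^x)']^2/(4 E^x) for any F
    assert abs(gtt1 - gtt2) < 1e-12
    assert abs(det1 - det2) < 1e-10
    assert abs(det1 + dEx**2/(4.0*Ex)) < 1e-10
print("t-x determinant is slicing independent")
```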
It is straightforward to show that the resulting expression is invariant when one chooses different $F_j$ functions corresponding to different slices. Details can be seen in \cite{covarianceGOP}. We have also studied the covariance of several curvature scalars: the Ricci and the Kretschmann scalars, and the scalar obtained by contracting the Weyl tensor with itself. We checked that in the approximation where $x_j$ is treated as a continuous variable, which allows one to use derivatives instead of finite differences, these scalars do not depend on the choice of the gauge function. Notice that the discreteness of the metric was still taken into account, given the fact that $({\hat E}^x)'$ is always described by finite differences, as was done for instance in equations (\ref{eq:hatgmunu31})--(\ref{eq:hatgmunu3}) for Painlev\'e--Gullstrand coordinates. Of course, this is just a check of covariance, and it is rather difficult to give a complete proof, since that would require evaluating all possible tensors computed from the metric and showing that they transform appropriately. But the straightforward nature of the above computations does not suggest any immediate problems in the construction of tensors from the metric and their transformation laws. \section{Extension to charged black holes} The above constructions can be extended to charged black holes. This was studied in reference \cite{marshall}. There a gauge was chosen in which the electric field is completely determined by the geometric variables. The resulting theory has a Hamiltonian constraint that differs from the one we consider here by one term, proportional to the charge squared. The kinematical Hilbert space remains the same and one can proceed to find solutions to the Hamiltonian constraint, which still retains the same properties in the sense of not changing the $\vec{k}$ valences when acting on a state.
One can construct the metric as a Dirac observable and, just like in the cases we discussed, demanding that it be a well-defined self-adjoint operator requires restricting the range of the $k_j$'s, and this leads to the elimination of the singularity. This is true in the usual (sub-extremal), extremal, and super-extremal cases. In the latter, the operator remains self-adjoint until one hits the singularity, so the number of points to remove to ensure self-adjointness is smaller than in the regular case. The naked singularity present in the super-extremal case is therefore removed. The weak gravity conjecture \cite{weaklubos} claims that in any scheme involving gravitation and other interactions, naked singularities appear unless one guarantees that gravity is the weakest interaction. Since loop quantum gravity does not seem to put limits on the strength of the electromagnetic force, it has been suggested that this could be problematic due to the emergence of naked singularities \cite{weak}. The fact that in super-extremal Reissner--Nordstrom the naked singularity is removed indicates that the above objection is not necessarily true. Indeed, some of the solutions used to demonstrate the weak gravity conjecture are based on Wick rotations of Reissner--Nordstrom space-times \cite{weaksantos}. Further research is needed in loop quantum gravity to probe all the issues involved, as the resulting space-times might be unstable \cite{gleiserdotti}. \section{Some observational consequences} Black hole space-times, when perturbed, admit modes of vibration, the so-called quasinormal modes (QNMs). They appear, for instance, during the ringdown regime of the final black hole after the merger of its progenitors. They are gravitational waves of complex frequencies emitted outwards to infinity and inwards towards the horizon. The imaginary part of the frequencies results in a decrease of the amplitude of the gravitational waves with time.
Besides, in Einstein's theory, they only carry information about the mass, charge and angular momentum of the black hole. Hence, they can be used as a probe to falsify theories. These quasinormal frequencies are studied within the framework of perturbation theory of black hole space-times. In the case of the Schwarzschild black hole, the perturbations were originally introduced in Refs. \cite{reggewheeler,zerilli}, and their gauge-invariant formulation in Ref. \cite{moncrief}. They can be divided into two sectors according to their polarity: axial and polar perturbations. Their equations of motion are similar to the ones of a massless Klein--Gordon field\footnote{These equations are derived assuming the Einstein equations on a generic spherically symmetric background with vanishing Einstein tensor. The loop quantum gravity effects we will discuss stem from modifications of the background. There could be additional effects due to modifications of the Einstein equations themselves; we ignore these in this first analysis of the problem.}. After factoring out the time and angular dependencies (the background geometries are static and spherically symmetric), the radial parts $\psi_{\tilde\omega,\ell}(r)$ satisfy the equations of motion \begin{equation} \frac{\partial^2\psi_{\tilde\omega,\ell}}{\partial r_{*}^{2}}+\left[\tilde\omega^2-V_\ell(r)\right] \psi_{\tilde\omega,\ell}=0, \label{diffeq} \end{equation} where $\ell$ is the mode number associated with the spherical harmonic $Y_{\ell m}(\theta,\phi)$, $\tilde\omega$ is the (dimensionful) frequency, $V_\ell(r)$ is an effective potential, and $r_{*}$ is the tortoise coordinate defined as \begin{equation} \mathrm{d} r_* = \sqrt{\frac{F(r)}{G(r)}} \mathrm{d} r. \end{equation} Here, $F(r)$ and $G(r)$ are some of the components of the line element of the space-time that we can write as \begin{equation}\label{eq:ds2} \mathrm{d} s^{2}=-G(r) \mathrm{d} t^{2}+F(r) \mathrm{d} r^{2}+H(r) \mathrm{d} \Omega^{2}.
\end{equation} QNMs of black hole space-times, as originally proposed in Ref. \cite{qnm}, are those solutions to the radial equation \eqref{diffeq} with boundary conditions \begin{align}\nonumber \psi_{n,\ell}(r)&\propto e^{-i \tilde\omega_{n,\ell} r_*}\quad\quad r_*\to+\infty, \\ \psi_{n,\ell}(r)&\propto e^{i \tilde\omega_{n,\ell} r_*}\quad\quad \;\;r_*\to-\infty.\label{eq:qnm-bndry} \end{align} The resulting frequencies $\tilde\omega_{n,\ell}$ belong to a discrete subset of the complex plane, with the sign of the imaginary part of $\tilde\omega_{n,\ell}$ such that QNMs dissipate in time. In the case of axial perturbations, the potential has the form \begin{equation} {}^{(a)}V_\ell(r)=G(r)\left[\frac{\ell(\ell+1)}{H(r)}-R(r)\right] \end{equation} where \begin{align}\label{eq:Rfunc} R(r)=&\frac{2}{H(r)}+\frac{1}{F(r)}\bigg(\frac{G'(r)H'(r)}{4G(r)H(r)}-\frac{F'(r)H'(r)}{4F(r)H(r)}-\frac{3[H'(r)]^2}{4H^2(r)}+\frac{H''(r)}{2H(r)}\bigg), \end{align} with the primes denoting derivatives with respect to $r$. For the polar perturbations, the potential is \begin{equation} \begin{split} {}^{(p)}V_\ell(r)&=\frac{G(r) (\ell-1)^2(\ell+2)^2}{\lambda_\ell(r)^2}\left[\frac{(\ell-1)(\ell+2)+2}{H(r)}+R(r)\right.\\&+\left.\frac{H(r)R(r)^2}{(\ell-1)^2(\ell+2)^2}\left((\ell-1)(\ell+2)+\frac{H(r)R(r)}{3}\right)\right], \end{split} \end{equation} with \begin{equation} \lambda_\ell(r)=(\ell-1)(\ell+2)+H(r)R(r). \end{equation} To compute the complex frequencies of these QNMs, one can use a WKB method. Here, one needs to solve for $\tilde\omega_{n,\ell}$ in the equation \begin{equation} \tilde\omega_{n,\ell}^2=V_{\ell}(\tilde r)-i\sqrt{-2V_{\ell}^{''}(\tilde r)}\left[\left(n+\frac12\right)+\sum_{i=2}^N \Lambda^{(i)}_{n,\ell}(\tilde r)\right].
\label{wkbformula} \end{equation} The functions $\Lambda^{(i)}_{n,\ell}(\tilde r)$ codify higher-order WKB corrections, which depend on $\tilde\omega_{n,\ell}$ itself and on the derivatives of the corresponding potential evaluated at its maximum, located where $V_{\ell}^{'}(\tilde r)=0$. Closed-form expressions for $\Lambda^{(i)}_{n,\ell}(\tilde r)$ can be found in Refs. \cite{wkb,konoplya}. Interestingly, in the case of the Schwarzschild black hole, the quasinormal spectra of axial and polar perturbations are isospectral, namely, they agree, despite the potentials being different. The origin of this agreement was already noted in Ref. \cite{qnm} and explained in detail in \cite{iso-db-cov} as a consequence of the covariance of Eq. \eqref{diffeq} under Darboux transformations. However, in the case of the effective geometries discussed in this chapter, a detailed calculation in Ref. \cite{lqg-qnms} shows not only that their quasinormal frequencies deviate from those of the classical black hole, but also that isospectrality is violated. Nevertheless, these deviations are small for macroscopic black holes, since they decrease rapidly with the radius of the horizon. For instance, if one chooses in Eq. \eqref{eq:diag-g} the parameter $\delta x=\ell_{\rm Pl}$, those deviations decrease with the power $(r_S/\ell_{\rm Pl})^{-2/3}$, while for $\delta x=\ell_{\rm Pl}^2/(2r_0)$, they decrease as $(r_S/\ell_{\rm Pl})^{-4/3}$, for any choice of $n$ and $\ell$. In Table \ref{qnms} we show the numerical values of the dimensionless quasinormal frequencies, namely, $\omega_{n\ell}=(r_S/2G)\,\tilde\omega_{n,\ell}$, as is usual in the literature.
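To give a flavor of the WKB procedure, the following sketch implements only the leading (Schutz--Will) order of the formula above for the classical Schwarzschild Regge--Wheeler (axial) potential, locating the peak of $V_\ell$ and evaluating its second derivative with respect to the tortoise coordinate numerically. This is a crude leading-order estimate, not the high-order calculation of Ref. \cite{lqg-qnms}, and it uses the classical (not the effective) background.

```python
import math, cmath

M = 1.0                       # Schwarzschild mass (G = c = 1)
f = lambda r: 1.0 - 2.0*M/r

def V_axial(r, ell=2):
    """Regge-Wheeler (axial) potential for Schwarzschild."""
    return f(r)*(ell*(ell + 1)/r**2 - 6.0*M/r**3)

def d2_rstar(V, r, h=1e-4):
    """Second derivative with respect to the tortoise coordinate,
    d^2V/dr*^2 = f d/dr (f dV/dr), by central differences."""
    dV = lambda r: (V(r + h) - V(r - h))/(2.0*h)
    g = lambda r: f(r)*dV(r)
    return f(r)*(g(r + h) - g(r - h))/(2.0*h)

# locate the maximum of the potential by a simple scan
rs = [2.1 + 1e-3*i for i in range(8000)]
rmax = max(rs, key=V_axial)
V0 = V_axial(rmax)
Vpp = d2_rstar(V_axial, rmax)

# leading-order WKB: omega^2 = V0 - i (n + 1/2) sqrt(-2 V'')
n = 0
omega = cmath.sqrt(V0 - 1j*(n + 0.5)*math.sqrt(-2.0*Vpp))
print("leading-order WKB M*omega (n=0, l=2) ~", omega)
```

The leading order already lands in the right ballpark (real part of order $0.4/M$, negative imaginary part of order $0.1/M$); the higher-order $\Lambda^{(i)}$ corrections are what bring the estimate close to the values quoted in Table \ref{qnms}.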
\begingroup \setlength{\tabcolsep}{6pt} \renewcommand{\arraystretch}{0.5} \begin{table}[h] \footnotesize \centering \begin{tabular}{|cccc|} \hline\hline \multicolumn{4}{|c|}{\textbf{Quasinormal frequencies ($r_S=10^3\,\ell_P$)}} \\ \hline\hline \multicolumn{1}{|c|}{($n$,$\ell$)} & \multicolumn{1}{c|}{\textbf{Schwarzschild}} & \multicolumn{1}{c|}{\textbf{axial}} & \textbf{polar} \\ \hline \multicolumn{1}{|c|}{(0,2)} & \multicolumn{1}{c|}{0.74733225 - 0.17792806$i$} & \multicolumn{1}{c|}{0.74736483 - 0.17720680$i$} & 0.74749081 - 0.17733983$i$ \\ \hline \multicolumn{1}{|c|}{(1,2)} & \multicolumn{1}{c|}{0.69322645 - 0.54811740$i$} & \multicolumn{1}{c|}{0.69401982 - 0.54578773$i$} & 0.69407330 - 0.54597173$i$ \\ \hline \multicolumn{1}{|c|}{(0,3)} & \multicolumn{1}{c|}{1.19888658 - 0.18540612$i$} & \multicolumn{1}{c|}{1.19894618 - 0.18468114$i$} & 1.19897954 - 0.18471131$i$ \\ \hline \multicolumn{1}{|c|}{(1,3)} & \multicolumn{1}{c|}{1.16528891 - 0.56259764$i$} & \multicolumn{1}{c|}{1.16577313 - 0.56034440$i$} & 1.16577795 - 0.56043389$i$ \\ \hline \multicolumn{1}{|c|}{(0,4)} & \multicolumn{1}{c|}{1.61835676 - 0.18832792$i$} & \multicolumn{1}{c|}{1.61841428 - 0.18758921$i$} & 1.61842820 - 0.18759958$i$ \\ \hline \multicolumn{1}{|c|}{(1,4)} & \multicolumn{1}{c|}{1.59326305 - 0.56866870$i$} & \multicolumn{1}{c|}{1.59363598 - 0.56640627$i$} & 1.59364290 - 0.56643745$i$ \\ \hline \multicolumn{1}{|c|}{(0,5)} & \multicolumn{1}{c|}{2.02459062 - 0.18974103$i$} & \multicolumn{1}{c|}{2.02464204 - 0.18899367$i$} & 2.02464917 - 0.18899817$i$ \\ \hline \multicolumn{1}{|c|}{(1,5)} & \multicolumn{1}{c|}{2.00444206 - 0.57163476$i$} & \multicolumn{1}{c|}{2.00474707 - 0.56936206$i$} & 2.00475176 - 0.56937559$i$ \\ \hline \multicolumn{1}{|c|}{(0,6)} & \multicolumn{1}{c|}{2.42401964 - 0.19053169$i$} & \multicolumn{1}{c|}{2.42406539 - 0.18977889$i$} & 2.42406933 - 0.18978122$i$ \\ \hline \multicolumn{1}{|c|}{(1,6)} & \multicolumn{1}{c|}{2.40714795 - 0.57329985$i$} & 
\multicolumn{1}{c|}{2.40740646 - 0.57101967$i$} & 2.40740936 - 0.57102668$i$ \\ \hline \end{tabular} \caption{Quasinormal frequencies for the first overtones of axial and polar perturbations. The first column shows the results for the Schwarzschild black hole (due to isospectrality we only include axial perturbations). The second and third columns show the corresponding axial and polar frequencies of our effective geometry, respectively. They correspond to the choice $\delta x=\ell_{\rm Pl}$ and $r_S=10^3\,\ell_P$. } \label{qnms} \end{table} \endgroup Recently, there have also been interesting investigations of alternative black hole models in loop quantum gravity in terms of quasinormal modes \cite{BMM}, including the possibility of observations with the Event Horizon Telescope \cite{shadow}. \section{Mini-superspace approach: Kantowski-Sachs models} Another complementary approach to the quantization of black hole spacetimes, which usually focuses on singularity resolution, restricts the study to the region inside the horizon. This region is (classically) isometric to the (vacuum) Kantowski-Sachs cosmologies. The homogeneity of the spatial hypersurfaces allows one to apply LQC quantization techniques. The literature on the topic is considerable (see for instance \cite{aos,ab,lm,bv,dc,am,bdhr,cartin,cgp,bkd,yks,cs,oss,cctr,ao}). Due to homogeneity, these models have two kinematical, global degrees of freedom, corresponding to the independent components of the Ashtekar connection and the conjugate densitized triad, usually denoted by the conjugate pairs $(c,p_c)$ and $(b,p_b)$, respectively. The Hamiltonian constraint involves the curvature of the connection, which, in the quantum theory, is written in terms of holonomies of the gravitational connection around suitable loops with a minimum non-zero area. The dynamics of the system has been studied in most of the literature by assuming an effective description.
Analysis of the effective equations of motion of these models shows that the singularity is replaced by a space-like, 3-dimensional transition surface, which separates a trapped region from an anti-trapped region; these correspond to black hole and white hole regions, respectively. However, different approaches in the literature differ in the choice of the loops of the holonomies that regularize the effective Hamiltonian constraint, namely, in the choice of two quantum parameters, denoted by $\delta_{b}$ and $\delta_{c}$. The choices can be classified into three broad classes. In general, they are proportional to the square root of the area gap $\Delta$ times a function, which can be different for each parameter. In \cite{ab,lm,cgp} these functions are constant; in \cite{aos,cs,oss} they are chosen as functions of Dirac observables (functions on phase space constant along dynamical trajectories); and in \cite{bv,dc,bkd,djs,cctr,am,bdhr} they are more general functions on phase space (non-constant on dynamical trajectories). As was noted in Ref. \cite{aos}, some of these choices have undesirable or puzzling features with no clear physical origin. With this motivation, and adopting the strategy in which $\delta_{b}$ and $\delta_{c}$ are chosen to be functions of Dirac observables, reference \cite{aos} suggests a judicious choice for these parameters, for which the area enclosed by the loops at the transition surface equals the area gap $\Delta$ of loop quantum gravity. As a result, the transition surface is always located in the Planck regime, while there is excellent agreement with classical general relativity in low curvature regions. Moreover, reference \cite{aos} also extends the effective description to the asymptotic regions, and shows that the effective metric is smooth across the horizon, the surface joining the exterior and interior regions.
The effective spacetime line element of these geometries is given by the expression \begin{equation}\label{metric} g_{ab} d x^{a} d x^{b} \equiv d s^2 = - N^2 d T^2 + \frac{p_b^2}{|p_c| L_o^2} d x^2 + |p_c| (d \theta^2 + \sin^2\theta d \phi^2) , \end{equation} where $L_o$ is the length of a fiducial cell introduced to avoid spurious divergences due to the non-compactness of the spatial slicing, and $N$ is the lapse function \begin{equation} \label{N} N = \frac{\gamma \,p_c^{1/2} \,\,\delta_b}{\sin(\delta_b b)}, \end{equation} with $\gamma$ the Immirzi parameter (we set it equal to one, but keep it explicit here). The effective Hamiltonian (constraint) of the system is \begin{equation} \label{H_eff} H_{\mathrm{eff}}[N] = - \frac{1}{2 G \gamma} \Big[2 \frac{\sin (\delta_c c)}{\delta_c} \, p_c + \Big(\frac{\sin (\delta_b b)}{\delta_b} + \frac{\gamma^2 \delta_b}{\sin(\delta_b b)} \Big) \, p_b \Big] . \end{equation} One can easily see that in the limit $\delta_b \rightarrow 0$ and $\delta_c \rightarrow 0$ one recovers the classical lapse function $N$ and the classical Hamiltonian. The dynamical equations for the phase space variables are \begin{equation} \label{eom1} \dot b = - \frac{1}{2} \left(\frac{\sin(\delta_b b)}{\delta_b} +\frac{\gamma^2 \delta_b}{\sin(\delta_b b)}\right) , ~~~~ \dot c = - 2 \, \frac{\sin(\delta_c c)}{\delta_c}, \end{equation} and \begin{equation} \label{eom2} \dot p_b = \frac{p_{b}}{2} \, \cos(\delta_b b) \left(1 - \frac{\gamma^2 \delta_b^2}{\sin^2(\delta_b b)}\right) , ~~~~ \dot p_c = 2 \, p_c \, \cos(\delta_c c) . \end{equation} It is easy to integrate Hamilton's equations for $b$, $c$ and $p_c$, while the solution for $p_b$ follows from imposing the effective Hamiltonian constraint $H_{\mathrm{eff}} \approx 0$ on shell.
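Before quoting the solutions, one can verify numerically that the effective Hamiltonian above reduces to its classical counterpart as $\delta_b,\delta_c\to 0$ (a minimal sketch; the phase-space point and the parameter values are purely illustrative):

```python
import math

def H_eff(b, c, pb, pc, db, dc, gamma=1.0):
    """Effective Hamiltonian constraint, dropping the overall -1/(2 G gamma)."""
    return 2 * math.sin(dc * c) / dc * pc \
        + (math.sin(db * b) / db + gamma**2 * db / math.sin(db * b)) * pb

def H_classical(b, c, pb, pc, gamma=1.0):
    """Classical limit delta_b, delta_c -> 0 of the expression above."""
    return 2 * c * pc + (b + gamma**2 / b) * pb

# an arbitrary (illustrative) phase-space point
b0, c0, pb0, pc0 = 0.7, 0.3, 1.2, 2.5
for delta in (1e-3, 1e-4, 1e-5):
    assert abs(H_eff(b0, c0, pb0, pc0, delta, delta)
               - H_classical(b0, c0, pb0, pc0)) < 1e-5
```

The deviation shrinks quadratically with the quantum parameters, as expected from expanding $\sin(\delta b)/\delta = b - \delta^2 b^3/6 + \dots$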
The solution is: \begin{eqnarray} \label{eq:c} \tan \Big(\frac{\delta_{c}\, c(T)}{2} \Big)&=& \mp \frac{\gamma L_o \delta_c}{8 m} e^{-2 T},\\ \label{eq:pc} p_c(T) &=& 4 m^2 \Big(e^{2 T} + \frac{\gamma^2 L_o^2 \delta_c^2}{64 m^2} e^{-2 T}\Big) , \end{eqnarray} \begin{equation} \label{eq:b} \cos \big(\delta_{b }\,b(T)\big) = b_o \tanh\left(\frac{1}{2}\Big(b_o T + 2 \tanh^{-1}\big(\frac{1}{b_o}\big)\Big)\right), \end{equation} where% \begin{equation} b_o = (1 + \gamma^2 \delta_b^2)^{1/2} , \end{equation} and \begin{equation}\label{eq:pb} p_b(T) = - 2 \frac{\sin (\delta_c\, c(T))}{\delta_c} \frac{\sin (\delta_b\, b(T))}{\delta_b} \frac{p_c(T)}{\frac{\sin^2(\delta_b\, b(T))}{\delta_b^2} + \gamma^2}. \end{equation} One should note that the triad $p_c$ is bounded from below by ${p_c} {\mid}_{\mathrm{min}} = m \gamma L_o \delta_c$ in every effective space-time, where \begin{equation} \label{m} m := \Big[ \frac{ \sin\delta_c c}{\gamma L_{o}\delta_c}\Big]\, p_{c}, \end{equation} is the mass of the black hole spacetime. Consequently, these geometries are free of singularities, and one can see that they define a transition surface from the trapped (black hole type) region to an anti-trapped (white hole type) region. In the improved dynamics proposed in Ref. \cite{aos}, where the minimum area conditions are imposed {\it at this transition surface}, a detailed calculation shows that, in the large mass limit, \begin{equation}\label{db-dc} \delta_b=\Big(\frac{\sqrt{\Delta}}{\sqrt{2\pi}\gamma^2m}\Big)^{1/3}, \qquad L_{o}\delta_c=\frac{1}{2} \Big(\frac{\gamma\Delta^2}{4\pi^2 m}\Big)^{1/3}. \end{equation} Therefore $\delta_b$ and $\delta_c$ are fixed once and for all to these values, even when one evaluates the effective geometry away from the transition surface. The resulting effective interior geometries can be given explicitly, and they have some interesting properties.
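The lower bound on $p_c$ follows from the arithmetic-geometric mean inequality applied to \eqref{eq:pc}, and can be confirmed numerically (a minimal sketch; the parameter values are purely illustrative):

```python
import math

def p_c(T, m, gamma, L0, dc):
    """Closed-form solution for the triad component p_c(T)."""
    return 4 * m**2 * (math.exp(2 * T)
                       + gamma**2 * L0**2 * dc**2 / (64 * m**2) * math.exp(-2 * T))

# illustrative parameter values
m, gamma, L0, dc = 1.0e3, 0.2375, 1.0, 0.05

p_min_claimed = m * gamma * L0 * dc        # claimed bound p_c|_min = m gamma L_o delta_c

# scan a fine grid around the stationary point of p_c(T)
T_star = 0.25 * math.log(gamma**2 * L0**2 * dc**2 / (64 * m**2))
p_min_numeric = min(p_c(T_star + k * 1e-4, m, gamma, L0, dc)
                    for k in range(-1000, 1001))

assert abs(p_min_numeric - p_min_claimed) / p_min_claimed < 1e-6
```

Indeed, $p_c(T)$ can be written as $p_c|_{\rm min}\cosh\big(2(T-T_\star)\big)$, which makes both the bound and the location of the transition surface manifest.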
For instance, curvature invariants, evaluated at the transition surface, defined by $\dot p_c(T_{\cal T}) = 0$, have universal upper bounds that are mass-independent (in the large mass limit). Concretely, the (square of the) Ricci scalar has the asymptotic form: \begin{equation} R^{2}(T_{\cal T})\,\,=\,\,\frac{256\pi^{2}}{\gamma^4\Delta^{2}}+{\cal O}\Big(\big(\frac{\Delta}{m^2}\big)^{\frac{1}{3}}\,\ln \frac{m^2}{\Delta}\Big); \end{equation} the square of the Ricci tensor has the asymptotic form \begin{equation} R_{ab}R^{ab}(T_{\cal T})\,\,=\,\,\frac{256\pi^2}{\gamma^4\Delta^2}+{\cal O}\Big(\big(\frac{\Delta}{m^2}\big)^{\frac{1}{3}}\,\ln \frac{m^2}{\Delta} \Big); \end{equation} the square of the Weyl tensor has the asymptotic form \begin{equation} C_{abcd}C^{abcd}(T_{\cal T})\,\,=\,\, \frac{1024\pi^2}{3\gamma^4\Delta^2}+{\cal O}\Big(\big(\frac{\Delta}{m^2}\big)^{\frac{1}{3}}\, \ln \frac{m^2}{\Delta}\Big); \end{equation} and, consequently, the Kretschmann scalar $K = R_{abcd}R^{abcd}$ has the asymptotic form \begin{equation} K(T_{\cal T})\,\, =\,\, \frac{768\pi^2}{\gamma^4\Delta^2}+{\cal O}\Big(\big( \frac{\Delta}{m^2}\big)^\frac{1}{3} \,\ln\frac{m^2}{\Delta}\Big). \end{equation} Hence, the non-vanishing of the Ricci tensor, together with the Einstein equations, implies that there is a nontrivial effective stress-energy tensor. Indeed, Ref. \cite{aos} showed that this effective stress-energy tensor violates the strong energy condition. Moreover, it decreases rapidly away from the transition surface, so the effective geometries are in very good agreement with the classical geometries close to the past and future horizons. All these properties of the interior of these effective Kruskal geometries are unique to this prescription.
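The quoted leading-order coefficients are mutually consistent: in four dimensions the Kretschmann scalar satisfies $K = C_{abcd}C^{abcd} + 2R_{ab}R^{ab} - \frac{1}{3}R^2$, which can be checked directly on the coefficients above (a minimal sketch):

```python
from fractions import Fraction as Fr

# leading-order coefficients of pi^2/(gamma^4 Delta^2) quoted in the text
R_sq    = Fr(256)       # Ricci scalar squared
Ric_sq  = Fr(256)       # R_ab R^ab
Weyl_sq = Fr(1024, 3)   # C_abcd C^abcd

# four-dimensional identity: K = C^2 + 2 R_ab R^ab - R^2/3
K = Weyl_sq + 2 * Ric_sq - Fr(1, 3) * R_sq
assert K == Fr(768)     # matches the quoted Kretschmann coefficient
```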
In particular, the prescriptions in which $\delta_b$ and $\delta_c$ are chosen to be constant either give results sensitive to the value of the fiducial parameter $L_o$ or yield curvatures at the transition surface with no universal upper bounds \cite{ab,lm,cartin,cgp,yks,cs,oss}. Other prescriptions, where they are chosen to be functions of Dirac observables, show a similar issue with curvature invariants at the transition surface, and they also produce a white hole geometry with a mass much larger than the black hole mass \cite{bv,dc,bkd,djs,cctr}; the physical origin of this large difference is still unclear. Finally, the choices for $\delta_b$ and $\delta_c$ that involve more general functions on phase space, so far, either trigger large quantum corrections in regions where one does not expect such deviations from the classical theory, or the improved dynamics conditions are no longer valid and cannot be trusted (see \cite{aos} for details). Moreover, Ref. \cite{aos} also introduced an extension of the effective geometries to the exterior region, adopting the same spacetime foliation (i.e. a homogeneous slicing). However, the intrinsic metric of the hypersurfaces now has Lorentzian signature (they are not spacelike, as in the interior). Therefore, the internal space for the gravitational connection and triads has signature $(-,+,+)$, which implies that the internal gauge group is now ${\rm SU(1,1)}$ (rather than ${\rm SU(2)}$). Keeping this difference in mind, the phase space variables for the exterior region can be obtained simply by making the substitutions \begin{equation} \label{substitutions} b \to i\tilde{b}, \, p_{b} \to i \tilde{p}_{b}; \qquad c\to \tilde{c},\, p_{c} \to \tilde{p}_{c}.
\end{equation} The Hamiltonian constraint for the exterior region now has the form \begin{equation}\label{Hext} \tilde{H}_{\mathrm{eff}}[\tilde{N}] = - \frac{1}{2 G \gamma} \Big[2 \frac{\sin (\delta_{\tilde{c}}\, \tilde{c})}{\delta_{\tilde{c}}} \, |\tilde{p}_c| + \Big(-\frac{\sinh (\delta_{\tilde{b}}\, \tilde{b})}{\delta_{\tilde{b}}} + \frac{\gamma^2 \delta_{\tilde{b}}}{\sinh(\delta_{\tilde{b}}\, \tilde{b})} \Big) \, \tilde{p}_b \Big] ~. \end{equation} Hamilton's equations can be obtained and also easily integrated. The resulting exterior metric admits a closed form expression given by \begin{equation} \label{eq:g-tr}\tilde{g}_{ab} d x^{a} d x^{b} = \tilde{g}_{tt}d t^{2} + \tilde{g}_{rr} d r^{2} + \tilde{R}^{2}\, d \omega^{2} \,, \end{equation} with \begin{equation} \tilde{g}_{tt}= -\left(\frac{r}{r_S}\right)^{2\epsilon}\frac{\left(1-\left(\frac{r_S}{r}\right)^{1+\epsilon}\right)\left(2+\epsilon+\epsilon\left(\frac{r_S}{r}\right)^{1+\epsilon}\right)^{2} \left((2+\epsilon)^{2}-\epsilon^{2}\left(\frac{r_S}{r}\right)^{1+\epsilon}\right)}{16\left(1+\frac{\delta_{\tilde{c}}^{2} L_0^{2}\gamma^{2}r_S^2}{16 r^4} \right) (1+\epsilon)^{4}}\, , \end{equation} \begin{equation} \label{grr} \tilde{g}_{rr}= \Big(1+\frac{\delta_{\tilde{c}}^{2} L_0^{2}\gamma^{2}r_S^2}{16 r^4} \Big)\frac{\Big(\epsilon +\left(\frac{r}{r_S}\right)^{1+\epsilon } (2+\epsilon )\Big)^2}{\Big(\left(\frac{r}{r_S}\right)^{1+\epsilon }-1\Big) \Big(\left(\frac{r}{r_S}\right)^{1+\epsilon } (2+\epsilon )^2-\epsilon ^2\Big)}\, , \end{equation} and \begin{equation} R^{2} := \tilde{p}_{c} = 4m^2\left( e^{2T}+\frac{\gamma^2L_0^2\delta_{\tilde{c}}^2}{64m^2}e^{-2T}\right) \equiv r^2\left(1+\frac{\gamma^{2}L_{0}^{2}\delta_{\tilde{c}}^{2} r_S^2}{16 r^4} \right)\, , \end{equation} where $r_{S} := 2m$ and $1 + \epsilon:=(1 +\gamma^{2}\delta_{\tilde{b}}^{2})^{\frac{1}{2}}$. This metric has several interesting properties. 
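As a quick consistency check, the closed-form components above reduce to the standard Schwarzschild form when the quantum parameters are switched off; this can be verified numerically (a minimal sketch; the numerical values are purely illustrative):

```python
import math

def g_tt(r, rS, eps, dcL0, gamma):
    """Exterior metric component g_tt from the closed-form expression."""
    x = (rS / r) ** (1 + eps)
    num = (1 - x) * (2 + eps + eps * x)**2 * ((2 + eps)**2 - eps**2 * x)
    den = 16 * (1 + dcL0**2 * gamma**2 * rS**2 / (16 * r**4)) * (1 + eps)**4
    return -(r / rS) ** (2 * eps) * num / den

def g_rr(r, rS, eps, dcL0, gamma):
    """Exterior metric component g_rr from the closed-form expression."""
    y = (r / rS) ** (1 + eps)
    pref = 1 + dcL0**2 * gamma**2 * rS**2 / (16 * r**4)
    return pref * (eps + y * (2 + eps))**2 \
        / ((y - 1) * (y * (2 + eps)**2 - eps**2))

rS, r, gamma = 2.0, 5.0, 0.2375       # illustrative values, with r > r_S
f = 1 - rS / r                        # classical Schwarzschild factor
# quantum parameters switched off: epsilon -> 0 and delta_c L_0 -> 0
assert abs(g_tt(r, rS, 0.0, 0.0, gamma) + f) < 1e-12
assert abs(g_rr(r, rS, 0.0, 0.0, gamma) - 1 / f) < 1e-12
```

With $\epsilon=0$ and $\delta_{\tilde c}L_0=0$ the algebra collapses to $g_{tt}=-(1-r_S/r)$ and $g_{rr}=(1-r_S/r)^{-1}$, exactly as stated below.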
For instance, in the limit in which $\delta_{\tilde{c}}\to 0$ and $\delta_{\tilde{b}}\to0$, one recovers the Schwarzschild metric in its standard form. Besides, the metric is asymptotically flat, although the fall-off conditions are not the standard ones \cite{suddo2}: several curvature scalars computed out of the Riemann tensor (in particular the Kretschmann scalar) decay as $r^{-4}$ rather than $r^{-6}$, as in the classical case. Despite this non-standard asymptotic behavior, quantities like the ADM, Ricci and horizon energies are well defined, and they agree up to small corrections (in the large $m$ limit). Moreover, one can check that, even for microscopic black holes with masses a few orders of magnitude larger than the Planck mass, deviations from the classical theory appear only at distances many orders of magnitude beyond the current Hubble horizon size $5{\rm Gpc} \sim 10^{61}\ell_{Pl}$ (see \cite{ao} for details). Finally, we would like to mention that the near horizon geometry agrees very well with the classical one. By means of quantum fields propagating on static space-times and using Euclidean methods, one can easily calculate the temperature associated with the Killing horizon. The result (see Ref. \cite{ao}) is % \begin{equation} \label{temp} T_{\rm H}\, =\, \frac{\hbar}{8\pi k_B m} \, \frac{1}{(1+\epsilon_{m})}, \end{equation} where $k_B$ is the Boltzmann constant, and \begin{equation} \epsilon_{m} = \frac{1}{256}\left(\frac{\gamma \Delta^{\frac{1}{2}}}{\sqrt{2\pi}m}\right)^{8/3}. \end{equation} This correction is tiny, even for microscopic black holes. For instance, for a black hole of $10^6 \, M_{\rm Pl}$, the correction to the Hawking temperature is as small as $10^{-21}$, while for a solar mass black hole it is of the order of $4 \times 10^{-106}$.
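The quoted orders of magnitude are easy to reproduce from Eq. \eqref{temp} (a minimal sketch; the numerical value of the Immirzi parameter and the form of the area gap, $\Delta = 4\sqrt{3}\pi\gamma\,\ell_{\rm Pl}^2$, are assumptions taken from standard LQG conventions):

```python
import math

gamma = 0.2375                              # Immirzi parameter (assumed value)
Delta = 4 * math.sqrt(3) * math.pi * gamma  # area gap in Planck units (assumed)

def eps_m(m):
    """Hawking-temperature correction epsilon_m; m in Planck masses."""
    return (gamma * math.sqrt(Delta)
            / (math.sqrt(2 * math.pi) * m)) ** (8.0 / 3.0) / 256.0

# m = 10^6 Planck masses: correction of order 10^-21, as quoted
assert 1e-21 < eps_m(1e6) < 1e-20
# a solar mass is ~9e37 Planck masses: correction of order 4e-106, as quoted
assert 1e-106 < eps_m(9.1e37) < 1e-105
```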
We would like to conclude this section by mentioning that, although these models have been studied mainly within the effective dynamics approximation, a few contributions have focused on the genuine quantum theory. For instance, \cite{cartin} explored the solutions to the difference equation proposed in \cite{ab}. In \cite{cgp}, after a reduced phase space quantization, the authors compute the physical states and derive the effective dynamics of semiclassical states. Besides, Refs. \cite{zmsz,zmsz2} show that the spectrum of the mass operator is discrete with a minimum non-vanishing eigenvalue, indicating that the final fate of evaporation can be a stable black hole remnant. Moreover, \cite{egm} explores for the first time several properties of physical quantum states of the model proposed in \cite{aos}. Besides, other proposals have been motivated by or derived from the full theory, and include additional corrections in the effective Hamiltonian constraint; this is the case, for instance, of Refs. \cite{qrlg1,qrlg2,qrlg3,adl}. These modifications affect mainly the interior region beyond the transition surface, where one does not recover a classical white hole geometry. However, there is agreement with respect to singularity resolution and the correct semiclassical physics close to the black hole horizon. \section{Conclusions} We have given an overview of some of the most relevant contributions to spherically symmetric loop quantum gravity, which is based on assuming spherical symmetry in the classical theory and then quantizing the resulting reduced model. We started the discussion with models whose slicings are inhomogeneous outside black holes, penetrate the interior, extend over the region where the singularity used to be in the classical theory, and continue smoothly beyond it. The singularity is naturally avoided if one demands that the metric be a well defined self-adjoint operator.
We also discussed the charged case, potentially observable effects through quasinormal ringing, and the covariance of the approach. It should be noted that this treatment completes the Dirac quantization of the system and analyzes its semiclassical properties in terms of Dirac observables; it is the first such treatment in quantum general relativity in a field theoretic context. Some recent proposals, like \cite{abv}, discuss a covariant effective description where no abelianization of the constraint is required, provided one adopts a partial polymerization of the geometrical variables (in this approach, an extension where all connection variables are polymerized is still unknown). Moreover, in the last years, several models of collapse, describing black hole formation and even evaporation, have been proposed in \cite{bgllp,bglp,hkswe,hkswe2}. These studies adopt numerical approaches that allow them to probe the formation of a dynamical (black hole type) horizon, as well as critical phenomena \cite{bgllp,bglp}. Besides, in Refs. \cite{hkswe,hkswe2}, the authors show that once the matter reaches the high curvature region it bounces, forming a shock wave that eventually evaporates the black hole in a time proportional to the square of its mass. Additional models for collapse without local degrees of freedom can be found in \cite{othermodels}. We have also discussed other approaches that exploit the simplicity of homogeneous slicings in spherical symmetry. In this case, black hole dynamics can be described by means of a few global degrees of freedom. Some of the most recent treatments introduce novel choices for the length of the plaquettes of the loops, resulting in effective geometries that solve some of the problems found in previous proposals. Most of the models agree on some of the physical properties of the trapped interior region and on singularity resolution, but the fate of the geometry beyond this quantum region is still a matter of debate.
\section{Acknowledgements} This work was supported in part by Grant NSF-PHY-1903799, NSF-PHY-2206557, funds of the Hearne Institute for Theoretical Physics, CCT-LSU, Pedeciba, Fondo Clemente Estable FCE 1 2019 1 155865 and the Spanish Government through the projects PID2020-118159GB-C43, PID2019-105943GB-I00 (with FEDER contribution), and the ``Operative Program FEDER2014-2020 Junta de Andaluc\'ia-Consejer\'ia de Econom\'ia y Conocimiento'' under project E-FQM-262-UGR18 by Universidad de Granada.
\section{Introduction} Anomaly mediated supersymmetry breaking provides a flavor blind, UV insensitive, predictive method of supersymmetry breaking. In this way it almost fulfills the long wish list of things we would want from the perfect model of supersymmetry breaking \cite{3}\cite{13}. However, it has two main problems: the first is that it generates negative slepton mass squareds, and the second is the $\mu$ problem, i.e. the generation of a weak scale mass for the Higgsinos. Within the framework of anomaly mediation, we can generate a Giudice-Masiero-like $\mu$ term that doesn't require a singlet. However, the resulting $B$ term will be a loop factor too large to allow proper electroweak symmetry breaking. This leads one to ignore this $\mu$ term and search for a fix elsewhere. Clearly we need extra sources of SUSY breaking to create a viable model. One can generate soft masses for fields through generic hidden sector supersymmetry breaking; for example, contact terms with hidden sector fields that have $F$ term vevs \begin{eqnarray} \int d^4 \theta \frac{XX^{\dagger}}{M^2}QQ^{\dagger}. \end{eqnarray} However, these operators may have unsuppressed flavor violation. Instead we may add a new source of supersymmetry breaking: a U(1) gauge field which acquires a $D$ term vev in the hidden sector. This does not allow for direct contact terms with the scalar fields $Q$, which would otherwise contribute to flavor violating processes at leading order. This was explored in Ref. \cite{17}. In the MSSM, a single new operator can be generated, which is an additional $B\mu$ term for the Higgs fields, $\int d^2\theta W^{'}W^{'}H_u H_d$ \cite{10}. With this new term, we need not throw out the Giudice-Masiero-like $\mu$ term, but can instead keep it and tune the $B\mu$ term against it to get the correct electroweak vev.
This mechanism is a module which can be used in conjunction with several different methods of fixing anomaly mediation's slepton problem, for example the addition of Fayet-Iliopoulos $D$-terms as in Refs. \cite{4}\cite{9}. Another such method is the addition of extra chiral superfields, or messengers, in a vector-like representation. Such fields have a nonsupersymmetric mass threshold, with $\mu$-like and $B$-like terms generated by a Kahler potential operator, exactly like the Higgs fields. This mass threshold pushes the gauginos off of their anomaly mediated trajectory, and adds to the scalar masses at two loops. Multiple sets of these vector-like fields are typically required to counteract the negative slepton masses from anomaly mediation, but with the enhanced $B$ term the deflection becomes much greater. Thus the slepton masses are driven positive with only one set of extra chiral superfields. If the single messenger is in a complete SU(5) representation, then perturbative unification is easily preserved. Since the mass thresholds are set at only $10$ TeV, UV insensitivity is preserved down to that scale. With only a small number of new parameters, the theory remains predictive and viable; the trade-off is a moderate amount of fine-tuning. This paper has the following outline: Section 2 provides an overview of anomaly mediation and discusses how the new $B$ term arises from the addition of the new U(1). Section 3 discusses electroweak symmetry breaking and minimal fixes to the slepton problem. In Section 4 viable spectra are produced and discussed, and Section 5 presents our conclusions. \section{SUSY Breaking} To allow anomaly mediation to dominate and prevent arbitrary flavor violation, one must forbid contact terms between the MSSM and the hidden sector. One way this can be achieved is by using a 5-D setup with two boundaries separated by the extra dimension.
The standard MSSM fields inhabit one boundary and hidden sector fields inhabit the other, with only gravity propagating in the extra dimension. Supersymmetry is broken on the hidden sector boundary and is communicated to the visible sector as the $F$ term of the conformal compensator $\Phi= 1+ {\theta}^2 m_{3/2} $. The conformal compensator is the spurion of broken scale invariance and appears in the Lagrangian next to any explicit mass scale. Thus, after rescaling fields and regulating, the conformal compensator appears with $\Lambda$, the cutoff of the theory. The Lagrangian for the set of fields $Q_i$ is \begin{eqnarray} L = \int d^4 \theta[Z_i(\frac{\mu}{\sqrt{\Phi\Phi^{\dagger}}\Lambda})Q_i^{\dagger} e^{-2V} Q_i]+\int d^2\theta \frac{1}{2}g^{-2}(\frac{\mu}{\Phi\Lambda}) Tr[W^{\alpha}W_{\alpha}]+h.c. \nonumber \\ - \int d^2 \theta [ \lambda_{ijk} Q_i Q_j Q_k +\Phi m_{ij} Q_i Q_j + {\Phi}^2{v_i}^2Q_i] +h.c. \end{eqnarray} where $\mu$ is the renormalization scale. Expanding the functions of $\Phi$ in terms of ${\theta}^2$ yields gaugino masses \begin{equation}\label{eq:gauginomass} m_{\lambda_i}=\frac{\beta(g_i)}{g_i}m_{3/2}, \end{equation} and scalar masses and trilinear terms \begin{eqnarray}\label{eqn:ammasses} m^2_i = -\frac{1}{4} \dot{\gamma}_i m_{3/2}^2 & & A_{ijk}= -\frac{1}{2}(\gamma_i +\gamma_j +\gamma_k) m_{3/2}. \end{eqnarray} For mass thresholds that are supersymmetric, the soft masses at low energy depend only on the value of the $\beta$ functions at that energy; thus we call the theory UV insensitive. Also, a single scale parameter $m_{3/2}$ sets the masses of the superpartners, so this form of SUSY breaking is highly predictive. Only scalars that participate in the strong force get large positive contributions to their anomaly mediated masses. The sleptons have only non-asymptotically free couplings, so due to their $\beta$ functions their anomaly mediated mass squareds are negative.
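The well-known mass pattern that follows from Eq. \eqref{eq:gauginomass} can be illustrated numerically. In the sketch below, the one-loop MSSM beta-function coefficients are standard, while the numerical gauge couplings near the TeV scale are assumed, illustrative values; the classic anomaly mediated prediction of a wino lighter than the bino and gluino follows:

```python
import math

# one-loop MSSM beta-function coefficients (GUT-normalized hypercharge)
b = {"U(1)": 33.0 / 5.0, "SU(2)": 1.0, "SU(3)": -3.0}
# gauge couplings alpha_i near the TeV scale: assumed, illustrative values
alpha = {"U(1)": 0.017, "SU(2)": 0.034, "SU(3)": 0.118}

m32 = 1.0   # gravitino mass in arbitrary units

# m_lambda_i = (beta(g_i)/g_i) m_{3/2} = (b_i alpha_i / 4 pi) m_{3/2} at one loop
M = {k: b[k] * alpha[k] / (4 * math.pi) * m32 for k in b}

# the classic anomaly mediated ordering: wino lightest, then bino, then gluino
assert abs(M["SU(2)"]) < abs(M["U(1)"]) < abs(M["SU(3)"])
```

The ratio $|M_1/M_2|\simeq 3$ and the heaviness of the gluino are robust against the precise coupling values, since they follow from the beta-function coefficients.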
Any complete model employing anomaly mediation will have to address this issue. On a more positive note, we can generate a $\mu$ term using a Giudice-Masiero-like operator without introducing a singlet \cite{2}\cite{12}. We may write down a term in the Kahler potential \begin{eqnarray} \lambda_h \int d^4 \theta \phi^{\dagger} \phi H_{u} H_{d}. \end{eqnarray} We can then rescale the fields $H \rightarrow H/\phi$ to get \begin{eqnarray} \lambda_h \int d^4 \theta \frac{\phi^{\dagger}}{\phi} H_{u} H_{d}. \end{eqnarray} Inserting the conformal compensator $\phi = 1 +m_{3/2} \theta^2$ and expanding gives \begin{eqnarray} \lambda_h \int d^2 \theta m_{3/2} H_{u} H_{d} - \lambda_h m_{3/2}^2 h_u h_d. \end{eqnarray} The first term is a $\mu$ term for the Higgsinos; the second term is a $B$ term for the scalar Higgs. The scale $m_{3/2}$ is $\sim 10$ TeV, so that with a choice of small coupling, $\lambda_h \sim \frac{1}{16\pi^2}$, we may have a $\mu$ term that is of order the weak scale. However, in this case the $B$ term is of order $\frac{m_{3/2}^2}{4 \pi}$. With such a $B$ term, the dynamics of electroweak symmetry breaking would generate a Higgs vev at the scale $m_{3/2}$, which is a loop factor too large. A new mechanism is needed to create a viable $\mu$ term. In this model, there is one additional source of SUSY breaking. We have a U(1) gauge symmetry; in our 5-D setup it propagates in the bulk and is broken on the hidden sector boundary. The gauge field gets a $D$ term vev by some dynamical mechanism in the hidden sector. Since we want a $D$ term that is the same size as the overall SUSY breaking scale, we may deduce that the $D$ term vev is itself closely connected to, and possibly even required for, supersymmetry breaking. For examples of such models see Refs. \cite{21},\cite{10}. With the addition of this extra U(1) gauge field, there are only two new operators that we may write down with all Lorentz and gauge indices contracted.
One operator is \begin{equation} \frac{c_h}{{M}^2} \int d^2 \theta W^{'} W^{'} H_u H_d . \end{equation} When the $D$ term is set to its vev, this term becomes \begin{equation} {c_h}{m_D^2}H_u H_d \end{equation} with $\frac{D}{M} \equiv m_{D}$, and $m_{D} \sim O(m_{3/2}) $. This is an additional $B$ term which we can cancel against the Giudice-Masiero-like $B$ term. The entire $B\mu$ term is now an adjustable parameter which need not be of order $\lambda_h m_{3/2}^2$. A one percent tuning between the scales $\lambda_h m_{3/2}^2$ and $c_h m_D^2$ is enough to give the correct electroweak physics. The new operator adds only one more parameter to the theory, the scale $c_h m_{D}$, so the $\mu$ problem can be solved while maintaining an economy of parameters. There is one additional operator that could potentially cause problems in this model: \begin{equation} \int d^2 \theta W_Y W^{'} \end{equation} which is a tadpole term for the hypercharge $D$ term. One cannot forbid this new term through boundary conditions, as the new U(1) lives in the bulk and must communicate with at least some fields on the standard model brane. However, if we choose not to write down this operator in the first place, it will not be generated by radiative corrections. For those who worry that whatever operator can be written down must be included, one could try to forbid this term with symmetries. For example, one can introduce a charge conjugation symmetry under which only $W^{'}$ is odd, and which is broken only on the hidden sector boundary. Once the extra dimension is integrated out, operators with charge conjugation violation will be communicated to the standard model brane, but will be suppressed at least by powers of the SUSY breaking scale over the compactification radius. Scalar masses for squarks and sleptons cannot be generated through direct contact terms with the hidden sector gauge field; holomorphy prevents us from writing such a term in the superpotential.
Instead, the lowest dimension mass term we may write is $\frac{1}{M^6}\int d^4 \theta W^{'}W^{'}W^{'\dagger}W^{'\dagger}QQ^{\dagger}$, which is highly suppressed and not generated by any divergent diagrams. There is a correction to the Wino mass and slepton mass due to loop effects from the Higgsinos. The Wino mass is corrected at one loop in a way similar to gauge mediation (\cite{14}\cite{15}\cite{21}), with the overall contribution going like \begin{equation} \frac{{g_y}^2}{16 {\pi}^2} \frac {B \mu}{\mu}. \end{equation} Generically this correction is of order $\frac{1}{16 {\pi}^2} \mu$. The masses of the sleptons are corrected at two loops through diagrams involving the Winos, Higgses and Higgsinos. These corrections are of order \begin{equation} m_{sl}^2 \sim \left(\frac {B \mu}{\mu}\right)^2 \left( \frac{{g}^2}{16 {\pi}^2}\right)^2 . \end{equation} For a weak scale $\mu$ term, however, this correction is not nearly enough to fix the negative slepton masses from anomaly mediation; additional structure will have to be added to complete the model. \section{Deflected Anomaly Mediation} One method of fixing the slepton problem involves changing the anomaly mediated trajectories of the scalars and gauginos by adding chiral superfields in a vector-like representation \cite{6} \cite{2} \cite{1}. These fields are known as messengers. The messengers $\Psi$ will have a term in the Kahler potential $\int d^4 \theta \phi^{\dagger} \phi \Psi \overline{\Psi}$. This can be rescaled as $\int d^2 \theta \frac{\lambda_1 m_{3/2}}{\phi} \Psi \overline{\Psi}$, giving rise to $\mu$-like and $B$-like terms analogous to those of the Higgs sector. The effect of the new fields is to change the anomaly mediated contribution to the gaugino masses in a way similar to gauge mediation. Like the example of the Higgs loops contributing to gaugino masses in Section 2, the new messenger loops contribute a mass of roughly the size $\frac{g^2}{16\pi^2}\mu$.
However, in this case there are no further constraints from electroweak symmetry breaking requiring the $\mu$-like term, which we will call $M$, to be of order the weak scale. $M$ can then take its more natural value of order $m_{3/2}$, and the gaugino contribution is free to be of order $ \frac{1}{16\pi^2}m_{3/2}$. The slepton masses are then corrected as the running is pushed off of its anomaly mediated trajectory. We will first derive the masses of the gauginos and scalar superpartners which are deflected by the $\mu$-like and $B$-like thresholds of the Kahler potential operator alone. This formalism is worked out in Refs. \cite{1}\cite{6}\cite{16}\cite{18}. We define the $\mu$-like term $M \equiv \lambda m_{3/2}$ and the $B$-like term $F \equiv -\lambda m_{3/2}^2$. Thus the mass of the messengers is defined to be $X \equiv M +\theta^2 F$. We can look directly at the gauge couplings and extract the mass of the gauginos. Note that we will be looking at non-holomorphic gauge couplings; the holomorphic gauge couplings may be expressed as functions of the real gauge couplings as discussed in Ref. \cite{1}. For the mass threshold $X$ we see \begin{eqnarray} \alpha^{-1}(\mu, X)=\alpha^{-1}(\Lambda)+\frac{b-N}{4\pi}ln \left( \frac{XX^{\dagger}}{\Lambda \Lambda^{\dagger} \Phi \Phi^{\dagger}}\right)+\frac{b}{4\pi}ln \left( \frac{\mu^2}{XX^{\dagger}}\right) \end{eqnarray} where $N$ is the number of sets of messengers \cite{1}\cite{7}. The beta function coefficient above the new mass threshold, $b_{UV}$, is just the low energy coefficient $b$ minus the number of extra chiral superfields, or \begin{equation} b_{UV} = b - N. \end{equation} A supersymmetric mass threshold $M$ comes with one power of the conformal compensator, as the Lagrangian of Equation 2.1 demonstrates, which cancels the powers of $\Phi$ that appear with the cutoff. In this way, the first term in the equation above contributes nothing to the gauge coupling.
The only contribution comes from the second term, which, when we expand, gives just the contribution we expect from anomaly mediation. Thus the high energy running is wiped out and only the low energy anomaly mediated trajectory remains. In this way the low energy theory is insensitive to the UV physics. However, in our case we have a nonsupersymmetric mass scale $X = \lambda\frac{m_{3/2}}{\Phi}$, so the gauge coupling becomes \begin{eqnarray} \alpha^{-1}(\mu, X)=\alpha^{-1}(\Lambda )+\frac{b-N}{4\pi}ln\left( \frac{\lambda^2 m_{3/2}^2}{\Lambda ^2 (\Phi \Phi^{\dagger})^2}\right) +\frac{b}{4\pi}ln\left( \frac{\mu^2\Phi \Phi^{\dagger}}{\lambda^2 m_{3/2}^2}\right). \end{eqnarray} If we expand in terms of the $\theta^2$ contained in the conformal compensator, $\Phi \rightarrow 1+\theta^2 m_{3/2}$, we see that the gaugino masses go like \begin{equation} m_{\lambda_i}= \frac{\alpha_i}{4 \pi} (b_i-2N)m_{3/2}. \end{equation} Since the gaugino masses have changed, the running of the scalar masses is deflected from its anomaly mediated trajectory. For the scalars, we may expand the wave function renormalization in terms of $\theta^2$. After rescaling fields we can extract the $\theta^2 \overline{\theta^2}$ component, just like we do in anomaly mediation. This gives scalar masses \begin{equation} m_s^2=-\frac{1}{4}m_{3/2}^2\left( \frac{\partial}{\partial \, ln \mu} - d\frac{\partial}{\partial \, ln X}\right)^2 ln(Z) \end{equation} with \begin{equation} Z_i(\mu, X) = Z_i(\Lambda)\left( \frac{\alpha(\Lambda)}{\alpha(X)} \right)^{\frac{2c_i}{b_i-N}}\left( \frac{\alpha(X)}{\alpha(\mu)} \right)^{\frac{2c_i}{b_i}}. \end{equation} We define the deflection parameter $d$ such that $\frac{F}{M}-m_{3/2}=dm_{3/2}$. The square of the first term is just the anomaly mediated contribution, and the running from the threshold is contained in the rest. For the case of a supersymmetric mass threshold we have $\frac{F}{M}=m_{3/2}$ or $d=0$. 
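This $\theta^2$ expansion can be checked with a short symbolic computation (an illustrative sketch using Python's sympy, not part of the original derivation; $\theta^2$ and $\overline{\theta^2}$ are treated as formal first-order expansion parameters, and the identification of the gaugino mass with $-\alpha_i$ times the $\theta^2$ component of $\alpha^{-1}$ is a sign convention):

```python
import sympy as sp

# formal expansion parameters standing in for theta^2 and thetabar^2
t, tb = sp.symbols('t tb')
m, b, N, lam, Lam, mu = sp.symbols('m32 b N lamda Lambda mu', positive=True)

Phi, Phib = 1 + t*m, 1 + tb*m          # conformal compensator and its conjugate
X, Xb = lam*m/Phi, lam*m/Phib          # nonsupersymmetric threshold X = lam m_{3/2}/Phi

alpha_inv = (b - N)/(4*sp.pi)*sp.log(X*Xb/(Lam**2*Phi*Phib)) \
    + b/(4*sp.pi)*sp.log(mu**2/(X*Xb))

# theta^2 component of alpha^{-1}; minus alpha_i times it gives the gaugino mass
coeff = sp.diff(alpha_inv, t).subs({t: 0, tb: 0})
# reproduces the (b - 2N) m_{3/2}/(4 pi) coefficient quoted in the text
assert sp.simplify(coeff + (b - 2*N)*m/(4*sp.pi)) == 0
```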
This just reproduces the anomaly mediated mass for scalars. We see that $d$ parameterizes the difference between the anomaly mediated mass threshold and the new mass threshold we have added. Differentiating, we get a scalar mass of \begin{equation} m_s^2 = \sum_i \frac{2c_i b_i}{16 \pi^2}\left( \left( \frac{N}{b_i}\alpha_{\Psi}^2 - \frac{N^2}{b_i^2}[\alpha_{\Psi}^2-\alpha_{\mu}^2]\right)d^2 + 2\frac{N}{b_i}\alpha_{\mu}^2 d \right)m_{3/2}^2+\sum_i \frac{2c_i b_i}{16 \pi^2}\alpha_{\mu}^2 m_{3/2}^2 \end{equation} In our case, $\frac{F}{M}=-m_{3/2}$ so we have $d=-2$. This gives positive slepton masses for $N \ge 5$. However, in this case there is one additional operator that can be written down, a new $B$ term that comes from the operator $\int d^2 \theta c W^{'} W^{'} \Psi \overline{\Psi}$, in exact analogy to the Higgs sector. This is an addition of $c m_D^2$ to the scalar messenger mass squared. We will now rederive the gaugino and scalar masses with this new contribution. We define $c m_D^2 \equiv -\kappa m_{3/2}^2 \equiv B$. Now we have $X \equiv M + \theta^2 F + \theta^2 B$, so $\alpha^{-1}$ becomes \begin{eqnarray} \alpha^{-1}(\mu, X)&=&\alpha^{-1}(\Lambda)+\frac{b-N}{4\pi}ln\left( \frac{(M+\theta^2 F+\theta^2 B)(M+\overline{\theta^2}F+\overline{\theta^2}B)}{\Lambda^2\Phi \Phi^{\dagger}}\right) \nonumber \\ &+&\frac{b}{4\pi}ln\left( \frac{\mu^2}{(M+\theta^2 F+\theta^2 B)(M+\overline{\theta^2}F+\overline{\theta^2}B)}\right) \end{eqnarray} and the gaugino masses become \begin{eqnarray} m_{\lambda_i}= \frac{\alpha_i}{4 \pi} (b_i-2N) m_{3/2}-\frac{\alpha_i}{4 \pi}N\frac{\kappa}{\lambda}m_{3/2}. \end{eqnarray} The scalar masses are still given by Equations (3.5) and (3.7), but we have now changed the deflection parameter to $d = -2 - \frac{\kappa}{\lambda}$. Notice that $dm_{3/2}=\frac{2F+B}{M}$. We may now use $\kappa$ to adjust the $B$ term such that the number of necessary copies of messenger multiplets is only one. 
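The same formal $\theta^2$ expansion (again an illustrative sympy sketch, with the same sign convention as before) confirms both the extra $-N\kappa/\lambda$ piece of the gaugino mass and the relation $dm_{3/2} = (2F+B)/M$:

```python
import sympy as sp

t, tb = sp.symbols('t tb')   # formal theta^2 and thetabar^2
m, b, N, lam, kap, Lam, mu = sp.symbols('m32 b N lamda kappa Lambda mu', positive=True)

M, F, B = lam*m, -lam*m**2, -kap*m**2   # mu-like, F-like and new B-like thresholds
Phi, Phib = 1 + t*m, 1 + tb*m
X, Xb = M + t*(F + B), M + tb*(F + B)

alpha_inv = (b - N)/(4*sp.pi)*sp.log(X*Xb/(Lam**2*Phi*Phib)) \
    + b/(4*sp.pi)*sp.log(mu**2/(X*Xb))

coeff = sp.diff(alpha_inv, t).subs({t: 0, tb: 0})
target = ((b - 2*N) - N*kap/lam)*m/(4*sp.pi)   # deflected gaugino mass coefficient
assert sp.simplify(coeff + target) == 0

# deflection parameter: d m_{3/2} = (2F + B)/M = -(2 + kappa/lambda) m_{3/2}
assert sp.simplify((2*F + B)/M + (2 + kap/lam)*m) == 0
```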
To do this we need the coupling $\kappa$ to be greater than 1. In this case we may think of the greater than 1 coupling arising from picking the cutoff to be $\frac{M_{cut}}{\sqrt{\kappa}}$. Thus instead of having a cutoff at, say, the Planck scale, the theory requires new physics at $\frac{M_{Pl}}{a few}$. Notice that at the threshold, in the limit that $B$ gets very large the deflection parameter becomes very negative, and the dominant contribution to scalar masses comes from our new source of SUSY breaking. \section{Electroweak Symmetry Breaking and Spectrum} The Higgs potential for the neutral scalars looks exactly like that of the MSSM, \begin{eqnarray} V &=& (\mu^2 + m_{Hu}^2)|H_{u}|^2 + ( \mu^2 + m_{Hd}^2)|H_d|^2 -((\frac{\mu^2}{\lambda_h}- c m_{D}^2)H_u H_d + h.c.) \nonumber \\ &+& \frac{1}{8} (g_2^2 +g_y^2)(|H_u|^2-|H_d|^2 )^2 \end{eqnarray} The conditions for finding the minimum are \begin{equation} {m_Z}^2 = -\frac{m_{Hu}^2-m_{Hd}^2}{\cos{2\beta}}-(m_{Hu}^2 + m_{Hd}^2 + 2\mu^2) \end{equation} \begin{equation} \sin{2\beta} = -\frac{2B}{m_{Hu}^2 + m_{Hd}^2 + 2\mu^2}. \end{equation} It was previously noted that for $\mu = \lambda m_{3/2} \sim$ weak scale with $B$ fixed at $\lambda m_{3/2}^2$, no solution exists, as we can see from the second condition \cite{8}\cite{5}. However, in this new model with an adjustable $B$ term there are two regions of parameter space where we find viable solutions. One region has large values of the $\mu$ term, $\mu \sim m_{3/2}$, with a coupling $\lambda_h$ greater than 1. Such values could possibly fix the slepton mass problem without adding any messengers at all. In this case, the correction to the slepton mass squareds would be of order $\left( \frac{1}{16\pi^2}\right)^2 \frac{B^2}{\mu^2}$. From the first condition we see that these solutions are only valid for $\cos{2\beta} \rightarrow 0$, or tan$\beta \rightarrow 1$. Looking at the second condition we see that we must then have $\mu^2 \sim B$. 
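The two minimization conditions can be rederived with a few lines of sympy (an illustrative sketch with made-up numerical inputs; note that sign conventions for the off-diagonal $b$-term and for whether $\tan\beta$ is $v_u/v_d$ or $v_d/v_u$ vary between references, and they flip the signs of the $B$ term and of the $(m_{Hu}^2-m_{Hd}^2)/\cos 2\beta$ piece):

```python
import sympy as sp

vu, vd = sp.symbols('v_u v_d', positive=True)
b, mz2 = sp.symbols('b mZ2')

# hypothetical numerical inputs in some common units of mass^2: mu^2, m_Hu^2, m_Hd^2
mu2, mHu2, mHd2 = 3, -5, 2
beta = 0.7                  # arbitrary test value of beta
k = 2*mz2                   # k = g_2^2 + g_y^2, with v = 1 so that m_Z^2 = k v^2/2

# neutral-scalar potential of eq. (4.1); the off-diagonal term is written
# as -2 b v_u v_d (b real), so that B = -b in the text's convention
V = (mu2 + mHu2)*vu**2 + (mu2 + mHd2)*vd**2 - 2*b*vu*vd \
    + sp.Rational(1, 8)*k*(vu**2 - vd**2)**2

sub = {vu: sp.sin(beta), vd: sp.cos(beta)}
sol = sp.solve([sp.diff(V, vu).subs(sub), sp.diff(V, vd).subs(sub)], [b, mz2])

# sin(2 beta) condition, cf. eq. (4.3) with B = -b
assert abs(float(sp.sin(2*beta) - 2*sol[b]/(mHu2 + mHd2 + 2*mu2))) < 1e-10
# m_Z^2 condition, cf. eq. (4.2); the overall sign of the first term
# depends on the tan(beta) convention
target = (mHu2 - mHd2)/sp.cos(2*beta) - (mHu2 + mHd2 + 2*mu2)
assert abs(float(sol[mz2] - target)) < 1e-10
```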
Such points give positive slepton masses for a choice of $\lambda_h$ of order 5. All scalar masses get large contributions from the deflection; therefore we may shift down the entire spectrum by picking a small $m_{3/2}$, at just 10 TeV or so. Unfortunately, such points are significantly fine-tuned. In addition, such a low tan$\beta$ requires a large top Yukawa coupling that gets a Landau pole far below the unification scale. These points allow a very minimal model with a single new parameter, but the fine-tuning and top Yukawa problems lead us to search for solutions in other regions of parameter space. Instead we may pick parameters such that the $\mu$ term is of order the weak scale. This will allow large values of tan$\beta$. In order to satisfy equation (4.2) we must pick $\mu^2$ such that it cancels the large anomaly mediated Higgs masses down to the Z mass. This requires a tuning of a little better than one percent. We may then look at equation (4.3) and see that we must cancel the two large contributions to the $B$ term down to the scale $\mu^2$. This is a tuning of a little worse than one percent. So we see that parameters in the Higgs sector involve a total sensitivity of $10^{-4}$, which means that this model is highly constrained but not impossible. A sample point for this parameter space is given in Table 1. 
\begin{table} \label{tab:points} \begin{center} \begin{tabular}{c|c|c|c} & & Point 1 & Point 2\\ \hline inputs: &$\frac{m_{3/2}}{16\pi^2}$ & 200 & 100 \\ &$m_D$ & .5$m_{3/2}$ & .5$m_{3/2}$\\ Higgs sector couplings: &$\lambda_h$ & .015 & .026\\ &$c_h$ & .0598 & .1029 \\ Messenger sector couplings: &$\lambda$ & $4$ & $20$ \\ &$\kappa$& 11.9 & 379 \\ \hline Higgs Sector: &$\mu$& 474 & 415\\ &$\tan{\beta}$ & 7.24 & 1.20\\ \hline sleptons: &$m_{\tilde{e}_L}$& 446 & 1162 \\ &$m_{\tilde{e}_R}$& 126 & 426 \\ \hline Gauginos: &$m_{\tilde{W}}$ & 510 & 936\\ &$m_{\tilde{B}}$ & 407 & 407\\ \hline Squarks: &$m_{\tilde{sq}_R}$ & 1235 & 3707 \\ \hline \hline \hline \end{tabular} \caption{Two possible spectra, all masses given in GeV.} \end{center} \end{table} We must also avoid getting a vev for the scalar messengers $\Psi$. Thus the mass matrix \begin{equation} M_{\Psi}^2= m_{3/2}^2\left( \begin{array}{cc}\lambda^2&\lambda+\kappa\\ \lambda +\kappa& \lambda^2 \end{array} \right) \end{equation} must have positive determinant. We then have the constraint $\lambda^2 > \lambda + \kappa$, which gives the upper bound on the B term for a fixed $\lambda$. We may now calculate the lower bound on $B$; for example with $\lambda \sim 4$ and $\kappa$ = $0$, one extra vector-like multiplet falls short of making the slepton mass positive by about a factor of 8. Thus we can estimate $B$, using equation (3.7) which tells us we must have $B > 1.75 F$. Unlike those of the Higgs sector, these constraints are not very strict, so the spectrum is viable for a large range of couplings. One feature of the spectrum is that for moderate values of $\lambda$ (less than 10) the Wino mass falls naturally at a few hundred GeV. The slepton masses are small, so that the wino is now in the middle of the anomaly mediated mass spectrum. Thus the right handed sleptons become the LSPs. Table 1 contains such a point. This model differs from minimal anomaly mediation most in its heavier wino mass. 
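As a quick numerical check of the stability condition (an illustrative numpy sketch using the $(\lambda, \kappa)$ values of the two sample points of Table 1):

```python
import numpy as np

# Stability of the messenger scalar mass matrix:
# positive determinant <=> lambda^2 > lambda + kappa (no vev for Psi)
points = {"Point 1": (4.0, 11.9), "Point 2": (20.0, 379.0)}  # (lambda, kappa) from Table 1
margins = {}
for name, (lam, kap) in points.items():
    M2 = np.array([[lam**2, lam + kap],
                   [lam + kap, lam**2]])       # in units of m_{3/2}^2
    margins[name] = np.linalg.eigvalsh(M2).min()  # smallest mass-squared eigenvalue
    assert np.linalg.det(M2) > 0 and lam**2 > lam + kap

# both margins are positive but small: the sample points sit close to the bound
print(margins)
```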
In fact, for the moderate values of couplings that yield positive slepton masses, we always have the wino heavier than the bino. The squarks are also much heavier than in the minimal model, with masses of a few TeV, due to deflection from their large SU(3) coupling. We have a heavy and a light scalar messenger. The light mass eigenvalue is around 10 TeV. For these moderate values of the couplings $\lambda$ and $\kappa$ the spectrum looks like some combination of anomaly mediation and low energy gauge mediation, with each contributing effects of the same order; other models employ similar combinations of SUSY breaking sources giving different spectra \cite{19} \cite{20} \cite{11}. For the case of very large couplings $\kappa$ and $\lambda$ we see that the new $B$ terms dominate the anomaly mediated contributions. The spectrum looks like gauge mediation, and in fact for $N=1$ we see that the bino can be made lighter than the sleptons. This requires the choice of large couplings, $\lambda > 19$ and $\kappa > 350$. If we then pick a moderate-sized $\mu$ term for the Higgs, the bino is the lightest superpartner. At these large values of the messenger couplings the entire spectrum becomes heavy. We can lower the entire spectrum by making $m_{3/2}$ smaller. However, in order to maintain the bino as the LSP, we must pick the couplings in the Higgs sector such that the $\mu$ term is larger than the bino mass as we drop $m_{3/2}$. These larger couplings drive tan$\beta$ lower, as was discussed above. For points with a light overall scale we have tan$\beta \sim 1.2$, which is still not as large as we would like. If we instead pick large values of $m_{3/2} \ge$ 75 TeV, we need not have large Higgs sector couplings to maintain the bino LSP. We then have more viable values of tan$\beta$. However, the entire spectrum has masses at the TeV scale. 
The prospect of a bino LSP is exciting, but we would have to accept a scale of new physics at $M_{cut} \sim \frac{M_{Pl}}{19}$ and a choice between low tan$\beta$ or a heavy spectrum. \section{Conclusions} Anomaly mediated SUSY breaking provides a UV-insensitive, flavor-blind method of generating superpartner masses. A mass term for Higgsinos can be generated employing a Giudice-Masiero-like mechanism; however, it also generates a $B\mu$ term which is two orders of magnitude too large. A new fix to the $\mu$ problem employs anomaly mediation with a new broken U(1) which generates a single new operator, an additional $B$ term. The new term may be tuned against the anomaly mediated term to generate viable electroweak symmetry breaking. This mechanism adds a minimal number of new parameters to the theory and thus maintains a high level of predictivity. Several fixes to the slepton problem may be used in conjunction with our new $\mu$ term. One viable model is the addition of $N$ copies of messenger fields which change the running of the gauginos and hence the scalar masses at two loops. By adding an extra $B$ term for these fields, in analogy to what was done with the Higgs sector, the slepton masses may be driven positive with just one extra set of vector-like fields instead of five. The result $N>5$ destroys the possibility of perturbative unification. Unification may occur for $N=5$; however, our result of $N=1$ allows for unification to be easily preserved. The spectrum allows for two interesting but possibly dangerous results. By sacrificing large values of tan$\beta$ in the Higgs sector, we may achieve positive slepton masses and viable $\mu$ terms without adding any messenger fields at all. This is the most minimal model, but it results in perturbative breakdown well below the unification scale. 
Alternatively, keeping the messengers and allowing the cutoff of the theory to be lowered by a factor of 19 or so from the Planck scale, we produce a spectrum with the bino as the LSP. Though this has interesting cosmological implications, we would still need to contend with a heavy spectrum or questionable values of tan$\beta$. The safest points in parameter space produce a spectrum with light sleptons, middle-weight winos, and possibly a scalar messenger lurking just above the squark masses at 10 TeV. The model generates viable electroweak symmetry breaking, a weak-scale $\mu$ term, positive slepton mass squareds, and a viable spectrum while maintaining UV insensitivity, flavor blindness, and a minimal number of extra parameters. This is a predictive model, though the parameter space available is very small and thus the theory is fine-tuned. \vspace{.5in} {\bf Acknowledgements} I'd like to thank David E. Kaplan for all of his help, and Markus Luty for useful discussions.
\section{Introduction} The trigonometric P\"oschl-Teller (TPT) potential (also called P\"oschl-Teller I or Darboux-P\"oschl-Teller potential) is one of the most valuable exactly solvable (ES) potentials in nonrelativistic quantum mechanics \cite{poschl, flugge}. It is indeed close to potentials widely used in molecular physics to describe out-of-plane bending vibrations and in solid state physics to provide models for one-dimensional crystals \cite{antoine}. It is also related to the Scarf I potential \cite{scarf} via simple changes of variable and of parameters \cite{cq12}.\par The TPT potential is (translationally) shape invariant (SI) in supersymmetric (SUSY) quantum mechanics \cite{genden}. Such a property provides an easy way of solving the corresponding Schr\"odinger equation \cite{cooper}. First- and second-order SUSY transformations have been used to generate new potentials whose spectrum slightly differs from the TPT one \cite{contreras}. Recently, some extensions of the TPT potential have been extensively studied (see, {\it e.g.}, \cite{cq08, odake09, odake11, gomez14, bagchi15, grandati} and references quoted therein) in connection with the new concepts of exceptional orthogonal polynomials \cite{gomez09}, para-Jacobi polynomials \cite{calogero}, or confluent Darboux transformations \cite{fernandez}.\par On the other hand, considering a position-dependent mass (PDM) instead of a constant one in the Schr\"odinger equation is known to play an important role in many physical problems, such as the study of electronic properties of semiconductor heterostructures \cite{bastard, weisbuch}, quantum wells and quantum dots \cite{serra, harrison}, helium clusters \cite{barranco}, graded crystals \cite{geller}, quantum liquids \cite{arias}, metal clusters \cite{puente}, nuclei \cite{ring, bonatsos}, nanowire structures \cite{willatzen}, and neutron stars \cite{chamel}.\par Exact solutions of PDM Schr\"odinger equations may provide a conceptual understanding of some 
physical phenomena, as well as a testing ground for some approximation schemes. Such solutions may belong not only to ES Schr\"odinger equations, for which all the eigenstates can be found explicitly by algebraic means, but also to quasi-exactly solvable (QES) equations, for which only a finite number of eigenstates can be derived in this way for some ad hoc couplings, while the remaining ones can only be obtained through numerical calculations.\par The generation of PDM and potential pairs leading to such exact solutions has been achieved by various methods (see, {\it e.g.}, \cite{cq06} and references quoted therein). In particular, on taking advantage of the known equivalence of PDM problems to those arising from a deformation of the canonical commutation relations \cite{cq04}, it has been shown that several well-known ES potentials in a constant mass background remain ES for a well chosen PDM \cite{bagchi05, cq09}. This has been achieved by using a deformed supersymmetric (DSUSY) approach and a deformed shape invariance (DSI) concept. Among those potentials, one finds both the one- and two-parameter TPT potentials.\par The aim of the present work is to construct infinite families of QES extensions of these ES PDM and TPT potential pairs with known ground and first excited states. For such a purpose, we plan to use a recently devised generating function method \cite{cq18} (see also \cite{voznyak}), generalizing a procedure known for constant mass problems \cite{tkachuk}.\par This paper is organized as follows. In sec.~2, the description of PDM Schr\"odinger equations in DSUSY and the DSI property are reviewed, then the corresponding results for the ES one- and two-parameter TPT potentials are recalled. In sec.~3, the generating function method for constructing PDM Schr\"odinger equations with known ground and first excited states is presented. Such a procedure is then applied to extensions of one- and two-parameter TPT potentials in secs.~4 and 5, respectively. 
Finally, sec.~6 contains the conclusion.\par \section{Deformed supersymmetric approach to the trigonometric P\"oschl-Teller potentials with position-dependent mass} The standard Schr\"odinger equation \begin{equation} \left(\hat{p}^2 + V(x) - E\right) \psi(x) = 0, \label{eq:SE} \end{equation} where $\hat{p} = - {\rm i} d/dx$ and $\hbar = 1$, is known to be ES for the one- and two-parameter TPT potentials \cite{poschl, flugge, cooper}, defined by \begin{equation} V(x) = A(A-1) \sec^2 x, \qquad - \tfrac{\pi}{2} < x < \tfrac{\pi}{2}, \qquad A>1, \label{eq:TPT-1} \end{equation} and \begin{equation} V(x) = A(A-1) \sec^2 x + B(B-1) \csc^2 x, \qquad 0 < x < \tfrac{\pi}{2}, \qquad A, B>1, \label{eq:TPT-2} \end{equation} respectively.\par Let us replace $\hat{p}$ by $\hat{\pi} = - {\rm i} \sqrt{f(x)} (d/dx) \sqrt{f(x)}$, where $f(x)$ is some positive and smooth parameter-dependent function and $\hat{\pi}$ is assumed to be Hermitian with respect to the measure $dx$ \cite{cq04}. Then the standard commutation relation $[\hat{x}, \hat{p}] = {\rm i}$ is changed into $[\hat{x}, \hat{\pi}] = {\rm i} f(x)$ and the conventional Schr\"odinger equation (\ref{eq:SE}) becomes \begin{align} (\hat{H} - E) \psi(x) &= \left(\hat{\pi}^2 + V(x) - E\right) \psi(x) \nonumber \\ &= \left(- \sqrt{f(x)} \frac{d}{dx} f(x) \frac{d}{dx} \sqrt{f(x)} + V(x) - E\right) \psi(x) = 0. \label{eq:def-SE} \end{align} This deformed Schr\"odinger equation can be interpreted as a PDM one, \begin{equation} \left(- m^{-1/4}(x) \frac{d}{dx} m^{-1/2}(x) \frac{d}{dx} m^{-1/4}(x) + V(x) - E\right) \psi(x) = 0, \label{eq:PDM-SE} \end{equation} where $m(x) = 1/f^2(x)$. As is well known, the noncommutativity of $m(x)$ with the differential operator $d/dx$ creates an ordering ambiguity in PDM Schr\"odinger equations \cite{vonroos}. 
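That the deformed kinetic operator and the chosen PDM ordering coincide for $m(x) = 1/f^2(x)$ can be checked directly, e.g.\ with the deforming function (\ref{eq:f-1}) used below at $\alpha = -1/2$ and a hypothetical smooth test function (an illustrative sympy sketch, not part of the original text):

```python
import sympy as sp

x = sp.symbols('x')
f = 1 - sp.sin(x)**2/2          # deforming function f = 1 + alpha sin^2 x, alpha = -1/2
m = 1/f**2                      # corresponding position-dependent mass m(x) = 1/f(x)^2
psi = sp.cos(x)**2              # arbitrary smooth test function (hypothetical choice)

# deformed kinetic operator: -sqrt(f) d/dx f d/dx sqrt(f) psi
pi2 = -sp.sqrt(f)*sp.diff(f*sp.diff(sp.sqrt(f)*psi, x), x)
# PDM-ordered kinetic operator: -m^{-1/4} d/dx m^{-1/2} d/dx m^{-1/4} psi
pdm = -m**sp.Rational(-1, 4)*sp.diff(
        m**sp.Rational(-1, 2)*sp.diff(m**sp.Rational(-1, 4)*psi, x), x)

for x0 in (0.3, 0.9, 1.2):
    assert abs(float((pi2 - pdm).subs(x, x0).evalf())) < 1e-10
```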
The ordering obtained in (\ref{eq:PDM-SE}) is that chosen by Mustafa and Mazharimousavi \cite{mustafa}, from which other orderings can be taken care of by replacing $V(x)$ by some effective potential $V_{\rm eff}(x)$ including derivatives of $m(x)$.\par Bound state wavefunctions $\psi_n(x)$ of eq.~(\ref{eq:def-SE}) (or, equivalently, (\ref{eq:PDM-SE})) have to be square integrable on the interval of definition $(x_1, x_2)$ of $V(x)$ with respect to the measure $dx$ and, in addition, must ensure the Hermiticity of $\hat{H}$ or, equivalently, that of $\hat{\pi}$, imposing that \cite{bagchi05} \begin{equation} |\psi_n(x)|^2 f(x) = \frac{|\psi_n(x)|^2}{\sqrt{m(x)}} \to 0 \qquad \text{for $x \to x_1$ and $x \to x_2$.} \label{eq:Hermiticity} \end{equation} \par A DSUSY approach to eq.~(\ref{eq:def-SE}) consists in considering a pair of partner Hamiltonians, defined on the same interval $(x_1, x_2)$, \begin{equation} \hat{H}_{1,2} = \hat{\pi}^2 + V_{1,2}(x) + E_0, \qquad V_{1,2}(x) = W^2(x) \mp f(x) \frac{dW}{dx}, \label{eq:V1,2} \end{equation} where $E_0$ denotes the ground state energy of eq.~(\ref{eq:def-SE}) and $V_1(x)$ is the rescaled potential $V_1(x) = V(x) - E_0$ \cite{cq04, bagchi05}. The superpotential $W(x)$ in eq.~(\ref{eq:V1,2}) can be expressed in terms of the ground state wavefunction $\psi_0(x)$ of $\hat{H}_1$ through \begin{equation} W(x) = - f(x) \frac{d}{dx} \log \psi_0(x) - \frac{1}{2} \frac{df}{dx} \end{equation} or, conversely, \begin{equation} \psi_0(x) \propto f^{-1/2} \exp\left(- \int^x \frac{W(x')}{f(x')} dx'\right). 
\label{eq:psi0} \end{equation} \par The two first-order differential operators \begin{equation} \hat{A}^{\pm} = \mp \sqrt{f(x)} \frac{d}{dx} \sqrt{f(x)} + W(x), \label{eq:A} \end{equation} allow one to rewrite the two partner Hamiltonians (\ref{eq:V1,2}) as \begin{equation} \hat{H}_1 = \hat{A}^+ \hat{A}^- + E_0, \qquad \hat{H}_2 = \hat{A}^- \hat{A}^+ + E_0, \end{equation} so that the latter intertwine with $\hat{A}^+$ and $\hat{A}^-$ as $\hat{A}^- \hat{H}_1 = \hat{H}_2 \hat{A}^-$ and $\hat{A}^+ \hat{H}_2 = \hat{H}_1 \hat{A}^+$. The ground state wavefunction $\psi_0(x)$ of $\hat{H}_1$ is annihilated by the operator $\hat{A}^-$, while the ground state wavefunction $\psi'_0(x)$ of $\hat{H}_2$ is transformed by $\hat{A}^+$ into the first excited state wavefunction $\psi_1(x)$ of $\hat{H}_1$.\par This procedure can in principle be iterated by considering $\hat{H}_2$ as a new starting Hamiltonian, thereby obtaining another DSUSY pair of partner Hamiltonians \begin{equation} \hat{H}'_{1,2} = \hat{\pi}^2 + V'_{1,2}(x) + E'_0, \qquad V'_{1,2}(x) = W^{\prime 2}(x) \mp f(x) \frac{dW'}{dx}, \label{eq:V'1,2} \end{equation} where \begin{equation} V'_1(x) + E'_0 = V_2(x) + E_0. \label{eq:V'-V} \end{equation} Then the first excited state wavefunction of $\hat{H}_1$ with energy $E_1 = E'_0$, \begin{equation} \psi_1(x) \propto \hat{A}^+ \psi'_0(x), \label{eq:psi1} \end{equation} can be obtained from the ground state wavefunction of $\hat{H}'_1 = \hat{H}_2$, given by \begin{equation} \psi'_0(x) \propto f^{-1/2} \exp\left(- \int^x \frac{W'(x')}{f(x')} dx'\right). \label{eq:psi'0} \end{equation} \par Equation (\ref{eq:V'-V}) can be rewritten as \begin{equation} W^2(x) + f(x) \frac{dW}{dx} = W^{\prime 2}(x) - f(x) \frac{dW'}{dx} + E_1 - E_0 \label{eq:W-W'} \end{equation} in terms of the two superpotentials $W(x)$ and $W'(x)$. 
Such a condition can be satisfied, in particular, whenever, up to some additive constant $R$, $V_1(x)$ and $V_2(x)$ are similar in shape and differ only in the parameters that appear in them. In such a case, $V(x)$ is said to be deformed shape invariant (DSI) and eq.~(\ref{eq:W-W'}) is referred to as the DSI condition. The latter can then be generalized to any neighbouring members of a DSUSY hierarchy and the whole bound state spectrum of eq.~(\ref{eq:def-SE}) can be easily derived.\par {}For the one-parameter TPT potential (\ref{eq:TPT-1}), the DSI condition is satisfied for the deforming function \begin{equation} f(x) = 1 + \alpha \sin^2 x, \qquad -1 < \alpha \ne 0, \label{eq:f-1} \end{equation} corresponding to a PDM $m(x) = (1 + \alpha \sin^2 x)^{-2}$, and for the two superpotentials \cite{bagchi05} \begin{align} W(x) &= \lambda \tan x, \qquad \lambda = \tfrac{1}{2}(1 + \alpha + \Delta), \qquad \Delta = \sqrt{(1+ \alpha)^2 + 4A(A-1)}, \\ W'(x) &= \lambda' \tan x, \qquad \lambda' = \lambda + 1 + \alpha. \end{align} The first two partner potentials read \begin{align} V_1(x) &= A(A-1) \sec^2 x - A(A-1) - \tfrac{1}{2}(1 + \alpha + \Delta), \\ V_2(x) &= [A(A-1) + (1+\alpha) (1+\alpha+\Delta)] \sec^2 x - A(A-1) \nonumber \\ & \quad {}- \tfrac{1}{2}(1+2\alpha) (1+\alpha+\Delta), \end{align} while the ground and first excited state energies of $V(x)$ are given by \begin{align} E_0 &= \lambda(\lambda-\alpha) = A(A-1) + \tfrac{1}{2}(1+\alpha+\Delta), \\ E_1 &= (\lambda+1)^2 - \alpha(\lambda-1) = A(A-1) + \tfrac{1}{2}(5+5\alpha+3\Delta). 
\end{align} More generally, the whole bound state spectrum is obtained as \begin{align} E_n &= (\lambda+n)^2 - \alpha(\lambda-n^2) \nonumber \\ &= A(A-1) + \tfrac{1}{2}(1+\alpha+\Delta) + (1+\alpha+\Delta)n + (1+\alpha)n^2, \nonumber \\ & \quad n=0, 1, 2, \ldots, \end{align} with the corresponding wavefunctions \begin{equation} \psi_n(x) \propto f^{- \frac{1}{2}\left(\frac{\lambda}{1+\alpha} + 1\right)} (\cos x)^{\frac{\lambda}{1+ \alpha}} C_n^{\left(\frac{\lambda}{1+\alpha}\right)}(t), \qquad t = \sqrt{\frac{1+\alpha}{f}} \sin x, \end{equation} expressed in terms of Gegenbauer polynomials \cite{cq09}. Note that, in this case, eq.~(\ref{eq:Hermiticity}) does not provide any additional condition since it is automatically fulfilled for square integrable functions $\psi_n(x)$ on $(-\pi/2, \pi/2)$.\par {}For the two-parameter TPT potential (\ref{eq:TPT-2}), the deforming function, obtained in \cite{bagchi05},\footnote{The results given in \cite{bagchi05, cq09} are for the Scarf I potential. They have been transformed here for the TPT potential by using the changes of variable and parameters given in \cite{cq12}.} reads \begin{equation} f(x) = 1 + \alpha \cos 2x, \qquad 0 < |\alpha| < 1, \label{eq:f-2} \end{equation} with corresponding PDM $m(x) = (1+\alpha \cos 2x)^{-2}$, and the two superpotentials are \begin{align} W(x) &= \lambda \tan x - \mu \cot x, \qquad \lambda = \tfrac{1}{2}(1-\alpha+\Delta_1), \qquad \mu = \tfrac{1}{2}(1+\alpha+\Delta_2), \nonumber \\ \Delta_1 &= \sqrt{(1-\alpha)^2 + 4A(A-1)}, \qquad \Delta_2 = \sqrt{(1+\alpha)^2 + 4B(B-1)}, \\ W'(x) &= \lambda' \tan x - \mu' \cot x, \qquad \lambda' = \lambda+1-\alpha, \qquad \mu' = \mu+1+\alpha. 
\end{align} The first two partner potentials read \begin{align} V_1(x) &= A(A-1) \sec^2 x + B(B-1) \csc^2 x - A(A-1) - B(B-1) - \tfrac{3}{2}(1-\alpha^2) \nonumber \\ & \quad{}- (1+\alpha)\Delta_1 - (1-\alpha)\Delta_2 - \tfrac{1}{2}\Delta_1\Delta_2, \\ V_2(x) &= [A(A-1) + (1-\alpha)(1-\alpha+\Delta_1)] \sec^2 x \nonumber \\ & \quad{} + [B(B-1) + (1+\alpha)(1+\alpha+\Delta_2)] \csc^2 x - A(A-1) - B(B-1) \nonumber \\ & \quad {} - \tfrac{1}{2}(3+5\alpha^2) - (1-\alpha)\Delta_1 - (1+\alpha)\Delta_2 - \tfrac{1}{2} \Delta_1 \Delta_2, \end{align} while the ground and first excited state energies of $V(x)$ are given by \begin{align} E_0 &= (\lambda+\mu)^2 + 2\alpha(\lambda-\mu) = A(A-1) + B(B-1) + \tfrac{3}{2}(1-\alpha^2) \nonumber \\ & \quad{} + (1+\alpha)\Delta_1 + (1-\alpha)\Delta_2 + \tfrac{1}{2}\Delta_1\Delta_2, \\ E_1 &= (\lambda+\mu+2)^2 + 6\alpha(\lambda-\mu) - 4\alpha^2 = A(A-1) + B(B-1) \nonumber \\ & \quad{} + \tfrac{19}{2}(1-\alpha^2) + 3(1+\alpha)\Delta_1 + 3(1-\alpha)\Delta_2 + \tfrac{1}{2}\Delta_1 \Delta_2. 
\end{align} The whole bound state spectrum is obtained as \begin{align} E_n &= (\lambda+\mu+2n)^2 + 2\alpha(\lambda-\mu)(2n+1) - 4\alpha^2 n^2 \nonumber \\ &= A(A-1) + B(B-1) + \tfrac{3}{2}(1-\alpha^2) + (1+\alpha)\Delta_1 + (1-\alpha)\Delta_2 + \tfrac{1}{2} \Delta_1 \Delta_2 \nonumber \\ & \quad{} + 2n[2(1-\alpha^2) + (1+\alpha)\Delta_1 + (1-\alpha)\Delta_2] + 4(1-\alpha^2)n^2, \nonumber\\ &\qquad n=0, 1, 2, \ldots, \end{align} with the corresponding wavefunctions \cite{cq09} \begin{align} \psi_n(x) &\propto f^{-\frac{1}{2}\left(1 + \frac{\lambda}{1-\alpha} + \frac{\mu}{1+\alpha}\right)} (\cos x)^{\frac{\lambda}{1-\alpha}} (\sin x)^{\frac{\mu}{1+\alpha}} P_n^{\left(\frac{\mu}{1+\alpha} - \frac{1}{2}, \frac{\lambda}{1-\alpha} - \frac{1}{2}\right)}(t), \nonumber \\ t &= \frac{\cos 2x + \alpha}{1 + \alpha\cos 2x}, \end{align} expressed in terms of Jacobi polynomials.\par \section{Generating function method for PDM Schr\"odinger equations with two known eigenstates} \setcounter{equation}{0} Let us start from eq.~(\ref{eq:W-W'}) relating the superpotentials $W(x)$ and $W'(x)$ of the first two steps of a DSUSY hierarchy and let us define the two functions \cite{cq18, voznyak} \begin{equation} W_+(x) = W'(x) + W(x), \qquad W_-(x) = W'(x) - W(x). \label{eq:W_+-W_-} \end{equation} In terms of the latter, eq.~(\ref{eq:W-W'}) can be rewritten as \begin{equation} f(x) \frac{dW_+}{dx} = W_+(x) W_-(x) + E_1 - E_0. \end{equation} Hence, $W_-(x)$ can be expressed in terms of $W_+(x)$ and the energy difference $E_1-E_0$ as \begin{equation} W_-(x) = \frac{f(x) dW_+(x)/dx + E_0 - E_1}{W_+(x)}. \label{eq:W_-} \end{equation} \par The generating function method starts from two functions $W_+(x)$ and $W_-(x)$ that are compatible, {\it i.e.}, such that there exists some positive constant $E_1-E_0$ satisfying eq.~(\ref{eq:W_-}). The two superpotentials $W(x)$ and $W'(x)$ are then obtained from eq.~(\ref{eq:W_+-W_-}) as $W(x) = \frac{1}{2}(W_+ - W_-)$ and $W'(x) = \frac{1}{2}(W_+ + W_-)$. 
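As an illustration (a sympy sketch, not part of the original text), the superpotentials of the ES one-parameter TPT case of sec.~2 satisfy the compatibility condition (\ref{eq:W_-}) with the constant $2\lambda + 1 + \alpha = 2(1+\alpha) + \Delta$, which is indeed the difference $E_1 - E_0$ found there:

```python
import sympy as sp

x, a, lam = sp.symbols('x alpha lamda')
f = 1 + a*sp.sin(x)**2               # deforming function of eq. (2.15)
W  = lam*sp.tan(x)                   # superpotential of the one-parameter TPT
Wp = (lam + 1 + a)*sp.tan(x)         # partner superpotential, lambda' = lambda + 1 + alpha
Wplus, Wminus = Wp + W, Wp - W

# compatibility requires f W_+' - W_+ W_- to be the positive constant E_1 - E_0
diff_const = sp.simplify(f*sp.diff(Wplus, x) - Wplus*Wminus)
assert sp.simplify(diff_const - (2*lam + 1 + a)) == 0
```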
The starting potential $V(x)$ and its ground state energy $E_0$ are determined from $W(x)$ through eq.~(\ref{eq:V1,2}) and the ground state wavefunction is derived from eq.~(\ref{eq:psi0}). The knowledge of $E_1-E_0$ and $E_0$ provides the first excited state energy $E_1$, while a combination of eqs.~(\ref{eq:A}), (\ref{eq:psi1}), (\ref{eq:psi'0}), and (\ref{eq:W_+-W_-}) leads to the corresponding wavefunction \begin{align} \psi_1(x) &\propto \left(- f \frac{d}{dx} - \frac{1}{2} \frac{df}{dx} + W\right) f^{-1/2} \exp\left(- \int^x \frac{W'(x')}{f(x')} dx'\right) \nonumber \\ &\propto [W'(x) + W(x)] f^{-1/2} \exp\left(- \int^x \frac{W'(x')}{f(x')} dx'\right) \nonumber \\ &\propto W_+(x) f^{-1/2} \exp\left(- \int^x \frac{W'(x')}{f(x')} dx'\right). \label{eq:psi1-W+} \end{align} The construction of the first two bound state wavefunctions $\psi_0(x)$ and $\psi_1(x)$ of $V(x)$ is of course only valid provided such functions satisfy both the square integrability condition on $(x_1,x_2)$ and the additional restriction (\ref{eq:Hermiticity}). As observed in sec.~2, the latter is automatically fulfilled for the deforming functions $f(x)$ considered for the one- and two-parameter TPT potentials provided the wavefunctions are square integrable.\par \section{Extensions of the one-parameter trigonometric P\"oschl-Teller potentials} \setcounter{equation}{0} In the present section, we will deal with an infinite family of extensions of the one-parameter TPT potential (\ref{eq:TPT-1}), defined by \begin{equation} V^{(m)}(x) = \sum_{k=1}^{2m+1} A_{2k} \sec^{2k}x, \qquad - \frac{\pi}{2} < x < \frac{\pi}{2}, \qquad m = 1, 2, \ldots, \label{eq:Vm} \end{equation} with $A_{4m+2}>0$. As it is obvious, the $m=0$ case would give back potential (\ref{eq:TPT-1}) with $A_2 = A(A-1)$. 
Our aim consists in showing that parameters $A_2$, $A_4$, \ldots, $A_{4m}$ can be found in terms of $A_{4m+2}$ and $\alpha$ in such a way that the PDM Schr\"odinger equation (\ref{eq:def-SE}) with $f(x)$ and $V(x)$ given by (\ref{eq:f-1}) and (\ref{eq:Vm}), respectively, has known ground and first excited states. The corresponding PDM will then be \begin{equation} m(x) = (1 + \alpha \sin^2x)^{-2}, \qquad -1 < \alpha \ne 0. \end{equation} \par \begin{figure} \begin{center} \includegraphics{Quesne-fig1.eps} \caption{Plot of the extended potential $V^{(1)}(x)$ with $A_6=1$ and $\alpha=-1/2$. The ground and first excited state energies are $E_0 = 19/16$ and $E_1 = 115/16$. The PDM reads $m(x) = \left(1 - \frac{1}{2} \sin^2x\right)^{-2}$.} \end{center} \end{figure} \par \begin{figure} \begin{center} \includegraphics{Quesne-fig2.eps} \caption{Plots of the ground state wavefunction $\psi_0(x)$ (solid line) and of the first excited state wavefunction $\psi_1(x)$ (dashed line) for the potential displayed in fig.~1.} \end{center} \end{figure} \par {}For such a purpose, let us consider the generating functions \begin{align} W_+(x) &= 2\sqrt{A_{4m+2}} \sum_{k=0}^m \frac{(2m+1)!!}{(2k+1)!! (2m-2k)!!} (1+\alpha)^{k-m} (\tan x)^{2k+1}, \label{eq:W_+1}\\ W_-(x) &= (2m+1) (1+\alpha) \tan x. \label{eq:W_-1} \end{align} To prove their compatibility, we have to show that there exists some positive constant $E_1-E_0$ satisfying eq.~(\ref{eq:W_-}). Straightforward calculations show that \begin{align} \frac{dW_+}{dx} &= 2\sqrt{A_{4m+2}} \Biggl\{\frac{(2m+1)!!}{(2m)!!} (1+\alpha)^{-m} \nonumber \\ &\quad{} + \sum_{k=1}^m \frac{(2m+1)!!}{(2k-1)!! (2m-2k)!!} (1+\alpha)^{k-m} (\tan x)^{2k}\Biggr\} \sec^2 x, \end{align} \begin{align} f \frac{dW_+}{dx} &= 2\sqrt{A_{4m+2}} \Biggl\{\frac{(2m+1)!!}{(2m)!!} (1+\alpha)^{-m} \nonumber \\ &\quad{} + (2m+1)\sum_{k=0}^m \frac{(2m+1)!!}{(2k+1)!! 
(2m-2k)!!} (1+\alpha)^{k+1-m} (\tan x)^{2k+2}\Biggr\}, \end{align} and \begin{align} & W_+ W_- \nonumber \\ &= 2\sqrt{A_{4m+2}} (2m+1)\sum_{k=0}^m \frac{(2m+1)!!}{(2k+1)!! (2m-2k)!!} (1+\alpha)^{k+1-m} (\tan x)^{2k+2}, \end{align} so that we indeed get \begin{equation} E_1-E_0 = 2\sqrt{A_{4m+2}} \frac{(2m+1)!!}{(2m)!!} (1+\alpha)^{-m} > 0. \label{eq:diff-1} \end{equation} \par {}From (\ref{eq:W_+1}) and (\ref{eq:W_-1}), we then obtain the two superpotentials $W(x)$ and $W'(x)$ in the form \begin{equation} W(x) = \sum_{k=0}^m \lambda_k (\tan x)^{2k+1}, \qquad W'(x) = \sum_{k=0}^m \lambda'_k (\tan x)^{2k+1}, \label{eq:W-1} \end{equation} where \begin{equation} \begin{split} \lambda_0 &= \sqrt{A_{4m+2}} \frac{(2m+1)!!}{(2m)!!} (1+\alpha)^{-m} - \frac{1}{2}(2m+1)(1+\alpha), \\ \lambda'_0 &= \lambda_0 + (2m+1)(1+\alpha), \\ \lambda_k &= \lambda'_k = \sqrt{A_{4m+2}} \frac{(2m+1)!!}{(2k+1)!! (2m-2k)!!} (1+\alpha)^{k-m}, \qquad k=1, 2, \ldots, m. \end{split} \label{eq:lambda-1} \end{equation} \par To determine $V(x)$ and $E_0$ from eqs.~(\ref{eq:V1,2}), (\ref{eq:W-1}), and (\ref{eq:lambda-1}), it is convenient to proceed in two steps: first to express $V_1(x)$ as an expansion in $\tan^2 x$, \begin{equation} V_1(x) = \sum_{k=0}^{2m+1} a_k (\tan x)^{2k}, \label{eq:a-1} \end{equation} then to reexpress it as an expansion in $\sec^2 x$ by making use of the relation $\tan^2 x = \sec^2 x -1$. In this way, we obtain $E_0$ and the parameters $A_{2k}$, $k=1, 2, \ldots, 2m$ of eq.~(\ref{eq:Vm}) as \begin{equation} E_0 = \sum_{k=0}^{2m+1} (-1)^{k+1} a_k, \qquad A_{2k} = \sum_{l=k}^{2m+1} (-1)^{l-k} \binom{l}{k} a_l. \end{equation} From the values of the coefficients $a_k$ in eq.~(\ref{eq:a-1}), we get \begin{align} E_0 &= \frac{1}{4} (2m+1) (1+\alpha) [2m+1 + (2m+3)\alpha] \nonumber \\ & \quad{} + \sqrt{A_{4m+2}} \frac{(2m+1)!!}{(2m)!!} (1+\alpha)^{-m} \nonumber \\ & \quad{} \times \biggl[1 + 2(2m+1) \sum_{k=1}^{m+1} (-1)^k \frac{(2m)!!}{(2k-1)!!
(2m-2k+2)!!} (1+\alpha)^k\biggr] \nonumber \\ & \quad{} - A_{4m+2} (1+\alpha)^{-2m} \biggl[\sum_{k=1}^{m+1} (-1)^k S^{(m,k)}_{0,k-1} (1+ \alpha)^{k-1} \nonumber \\ & \quad{} + \sum_{k=m+2}^{2m+1} (-1)^k S^{(m,k)}_{k-m-1,m} (1+\alpha)^{k-1}\biggr], \label{eq:E0-1} \end{align} \begin{align} A_2 &= \frac{1}{4}(2m+1)(2m+3) (1+\alpha)^2 \nonumber \\ & \quad{} + 2(2m+1) \sqrt{A_{4m+2}} \sum_{k=1}^{m+1} (-1)^k k \frac{(2m+1)!!}{(2k-1)!! (2m-2k+2)!!} (1+\alpha)^{k-m} \nonumber \\ & \quad{} - A_{4m+2} (1+\alpha)^{-2m} \biggl[\sum_{k=1}^{m+1} (-1)^k k S^{(m,k)}_{0,k-1} (1+\alpha)^{k-1} \nonumber \\ & \quad{} +\sum_{k=m+2}^{2m+1} (-1)^k k S^{(m,k)}_{k-m-1,m} (1+\alpha)^{k-1}\biggr], \end{align} \begin{align} A_{2k} &= - 2(2m+1) \sqrt{A_{4m+2}} \sum_{l=k}^{m+1} (-1)^{l-k} \binom{l}{k} \frac{(2m+1)!!}{(2l-1)!! (2m-2l+2)!!} (1+\alpha)^{l-m} \nonumber \\ & \quad{} + A_{4m+2} (1+\alpha)^{-2m} \biggl[\sum_{l=k}^{m+1} (-1)^{l-k} \binom{l}{k} S^{(m,l)}_{0,l-1} (1+\alpha)^{l-1} \nonumber \\ & \quad{} + \sum_{l=m+2}^{2m+1} (-1)^{l-k} \binom{l}{k} S^{(m,l)}_{l-m-1,m} (1+\alpha)^{l-1} \biggr], \qquad 2 \le k \le m+1, \end{align} \begin{equation} A_{2k} = A_{4m+2} \sum_{l=k}^{2m+1} (-1)^{l-k} \binom{l}{k} S^{(m,l)}_{l-m-1,m} (1+\alpha)^{l-2m-1}, \qquad m+2 \le k \le 2m, \end{equation} where we have introduced finite sums $S^{(m,k)}_{a,b}$, $0 \le a \le b \le m$, defined by \begin{equation} S^{(m,k)}_{a,b} = \sum_{l=a}^b \frac{[(2m+1)!!]^2}{(2l+1)!! (2k-2l-1)!! (2m-2l)!! (2m-2k+2l+2)!!}. \end{equation} Equations (\ref{eq:diff-1}) and (\ref{eq:E0-1}) yield the first excited state energy $E_1$.\par It remains to determine the wavefunctions $\psi_0(x)$ and $\psi_1(x)$ from eqs.~(\ref{eq:psi0}) and (\ref{eq:psi1-W+}), respectively. For such a purpose, it is useful to express the ratios $W(x)/f(x)$ and $W'(x)/f(x)$ in terms of a new variable $y=\cos x$. 
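Before turning to the wavefunctions, the compatibility condition behind eq.~(\ref{eq:diff-1}) — the requirement that $f\,dW_+/dx - W_+W_-$ be a positive, $x$-independent constant — can be checked symbolically for the lowest case $m=1$. The sketch below assumes the deforming function $f(x) = 1+\alpha\sin^2 x$, consistent with the quoted PDM $m(x) = (1+\alpha\sin^2 x)^{-2}$:

```python
import sympy as sp

x, alpha = sp.symbols('x alpha')
A6 = sp.symbols('A6', positive=True)

# deforming function, assumed here to be f(x) = 1 + alpha sin^2 x
f = 1 + alpha*sp.sin(x)**2

# generating functions (W_+1) and (W_-1) specialized to m = 1
Wplus = 2*sp.sqrt(A6)*(sp.Rational(3, 2)*sp.tan(x)/(1 + alpha) + sp.tan(x)**3)
Wminus = 3*(1 + alpha)*sp.tan(x)

# f dW_+/dx - W_+ W_- must reduce to an x-independent constant, namely
# E_1 - E_0 = 3 sqrt(A6)/(1 + alpha), i.e. eq. (diff-1) with m = 1
gap = sp.simplify(f*sp.diff(Wplus, x) - Wplus*Wminus)
const = 3*sp.sqrt(A6)/(1 + alpha)
assert sp.simplify(gap - const) == 0

# for A_6 = 1, alpha = -1/2 this gives E_1 - E_0 = 6, matching the
# fig. 1 caption values, 115/16 - 19/16
assert const.subs({A6: 1, alpha: -sp.Rational(1, 2)}) == 6
```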
For the former, for instance, we get the relation \begin{align} \frac{W(x)}{f(x)} &= \sin x \bigl[y^{2m+1} (1+\alpha-\alpha y^2)\bigr]^{-1} \sum_{k=0}^m \lambda_k y^{2m-2k} (1-y^2)^k \nonumber \\ &= \sin x \bigl[y^{2m+1} (1+\alpha-\alpha y^2)\bigr]^{-1} \sum_{p=0}^m \biggl[\sum_{l=0}^p (-1)^l \binom{m+l-p}{l} \lambda_{m+l-p}\biggr] y^{2p} \nonumber \\ &= \sin x \biggl(\sum_{\kappa=0}^m \frac{C_{2\kappa+1}}{y^{2\kappa+1}} + \frac{\alpha C_1 y}{1+\alpha-\alpha y^2}\biggr), \end{align} with \begin{align} C_{2\kappa+1} &= \frac{1}{(1+\alpha)^{m+1-\kappa}} \sum_{p=0}^{m-\kappa} \alpha^{m-\kappa-p} (1+\alpha)^p \sum_{l=0}^p (-1)^l \binom{l+m-p}{l} \lambda_{l+m-p}, \nonumber \\ & \qquad \kappa=0, 1, \ldots, m, \end{align} from which the integration in eq.~(\ref{eq:psi0}) is straightforward. The results read \begin{equation} \psi_0(x) \propto f^{-\frac{1}{2}(C_1+1)} (\cos x)^{C_1} \exp\left(-\sum_{\kappa=1}^m \frac{C_{2\kappa+1}}{2\kappa} \sec^{2\kappa}x\right) \end{equation} and \begin{align} \psi_1(x) &\propto f^{-\frac{1}{2}(C_1+2m+2)} (\cos x)^{C_1} \exp\left(-\sum_{\kappa=1}^m \frac{C_{2\kappa+1}}{2\kappa} \sec^{2\kappa}x\right) \nonumber \\ & \quad{}\times \sum_{k=0}^m \biggl\{\biggl[\sum_{l=0}^k (-1)^{k-l} \binom{m-l}{k-l} \frac{(2m+1)!!}{(2l+1)!! (2m-2l)!!} (1+\alpha)^{l-m}\biggr] \nonumber \\ & \quad{}\times \sin^{2k+1}x\biggr\}. \end{align} The function $f(x)$ having a finite value $1+\alpha$ for $x \to \pm \pi/2$, the behaviour of $\psi_0(x)$ and $\psi_1(x)$ at the interval boundaries is determined by that of $\exp\left[- C_{2m+1} \sec^{2m}x/(2m)\right]$, where $C_{2m+1}= \sqrt{A_{4m+2}}/(1+\alpha) > 0$, thereby showing that such functions are square integrable, as it should be.\par As an illustration, let us present some detailed results for the $m=1$ case. 
The potential reads \begin{align} V^{(1)}(x) &= \left[\frac{15}{4} (1+\alpha)^2 + 3(1+4\alpha) \sqrt{A_6} + \frac{3(4\alpha^2-1)}{4(1+\alpha)^2} A_6\right] \sec^2 x \nonumber \\ & \quad{} - 3\sqrt{A_6} \left[2(1+\alpha) + \frac{\alpha}{1+\alpha} \sqrt{A_6}\right] \sec^4 x + A_6 \sec^6 x. \label{eq:V(1)} \end{align} Its ground and first excited state energies are \begin{align} E_0 &= \frac{3}{4}(1+\alpha)(3+5\alpha) + \frac{3(-1+2\alpha+4\alpha^2)}{2(1+\alpha)} \sqrt{A_6} + \frac{(1-2\alpha)^2}{4(1+\alpha)^2} A_6, \\ E_1 &= \frac{3}{4}(1+\alpha)(3+5\alpha) + \frac{3(1+2\alpha+4\alpha^2)}{2(1+\alpha)} \sqrt{A_6} + \frac{(1-2\alpha)^2}{4(1+\alpha)^2} A_6, \end{align} with corresponding wavefunctions \begin{align} \psi_0(x) &\propto f^{\frac{1}{4} - \frac{\sqrt{A_6}}{4(1+\alpha)^2}} (\cos x)^{\frac{\sqrt{A_6}}{2(1+\alpha)^2} - \frac{3}{2}} \exp\left(- \frac{\sqrt{A_6}}{2(1+\alpha)} \sec^2 x\right), \label{eq:V(1)gs} \\ \psi_1(x) &\propto f^{-\frac{5}{4} - \frac{\sqrt{A_6}}{4(1+\alpha)^2}} (\cos x)^{\frac{\sqrt{A_6}}{2(1+\alpha)^2} - \frac{3}{2}} \sin x [3 - (1-2\alpha)\sin^2 x] \nonumber \\ & \quad{} \times \exp\left(- \frac{\sqrt{A_6}}{2(1+\alpha)}\sec^2 x\right). \label{eq:V(1)es} \end{align} As can be checked, the odd wavefunction $\psi_1(x)$ has a single zero at $x=0$ in the defining interval $(-\pi/2, \pi/2)$, since the equation $\sin^2 x = 3/(1-2\alpha)$ has no real solution, its right-hand side being either larger than one or negative.\par \begin{figure} \begin{center} \includegraphics{Quesne-fig3.eps} \caption{Plot of the extended potential $V^{(1,1)}(x)$ with $A_6 = B_6 = 1$ and $\alpha = 1/2$. The ground and first excited state energies are $E_0 = 69/2$ and $E_1 = 293/2$.
The PDM reads $m(x) = \left(1 + \frac{1}{2} \cos2x\right)^{-2}$.} \end{center} \end{figure} \par \begin{figure} \begin{center} \includegraphics{Quesne-fig4.eps} \caption{Plots of the ground state wavefunction $\psi_0(x)$ (solid line) and of the first excited state wavefunction $\psi_1(x)$ (dashed line) for the potential displayed in fig.~3.} \end{center} \end{figure} \par In fig.~1, an example of extended potential (\ref{eq:V(1)}) is plotted. Its corresponding (rescaled) unnormalized wavefunctions (\ref{eq:V(1)gs}) and (\ref{eq:V(1)es}) are displayed in fig.~2.\par \section{Extensions of the two-parameter trigonometric P\"oschl-Teller potential} \setcounter{equation}{0} Let us consider next an infinite family of extensions of the two-parameter TPT potential (\ref{eq:TPT-2}), defined by \begin{equation} V^{(m_1,m_2)}(x) = \sum_{k=1}^{2m_1+1} A_{2k} \sec^{2k}x + \sum_{l=1}^{2m_2+1} B_{2l} \csc^{2l}x, \qquad 0 < x < \frac{\pi}{2}, \label{eq:Vm1m2} \end{equation} where $m_1$ and $m_2$ are two nonnegative integers and $A_{4m_1+2}, B_{4m_2+2} > 0$. For $m_1 = m_2 = 0$, eq.~(\ref{eq:Vm1m2}) would give back eq.~(\ref{eq:TPT-2}) with $A_2 = A(A-1)$ and $B_2 = B(B-1)$. Here, we wish to determine parameters $A_2, A_4, \ldots, A_{4m_1}, B_2, B_4, \ldots B_{4m_2}$ in terms of $A_{4m_1+2}$, $B_{4m_2+2}$, and $\alpha$ in such a way that eq. (\ref{eq:def-SE}) with $f(x)$ and $V(x)$ given by (\ref{eq:f-2}) and (\ref{eq:Vm1m2}), respectively, has known ground and first excited states. The corresponding PDM will then be \begin{equation} m(x) = (1 + \alpha \cos2x)^{-2}, \qquad 0 < |\alpha| <1. \end{equation} It is enough to assume $m_1 \ge m_2$, because the $m_1<m_2$ case could be easily obtained from that with $m_1>m_2$ by permuting the roles of $\sec^2x$ and $\csc^2x$, which can be achieved by the change of variable $x \to \frac{\pi}{2} - x$ and the change of parameter $\alpha \to - \alpha$. 
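The symmetry invoked here is easily made explicit: under $x \to \frac{\pi}{2} - x$ combined with $\alpha \to -\alpha$, the functions $\sec^2 x$ and $\csc^2 x$ are exchanged while the PDM is left invariant. A quick sympy check:

```python
import sympy as sp

x, alpha = sp.symbols('x alpha')

# x -> pi/2 - x exchanges sec^2 x and csc^2 x ...
assert sp.simplify(sp.sec(sp.pi/2 - x)**2 - sp.csc(x)**2) == 0

# ... while, combined with alpha -> -alpha, it leaves the PDM
# m(x) = (1 + alpha cos 2x)^{-2} invariant
m = (1 + alpha*sp.cos(2*x))**(-2)
m_swapped = m.subs([(x, sp.pi/2 - x), (alpha, -alpha)], simultaneous=True)
assert sp.simplify(m_swapped - m) == 0
```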
Since the cases $m_1 \ge m_2 > 0$ and $m_1 > m_2=0$ have to be distinguished, we will start with the former, then point out the changes needed to handle the latter.\par \subsection{\boldmath Extensions with $m_1 \ge m_2 > 0$} Let us consider the generating functions \begin{align} W_+(x) &= 2\sqrt{A_{4m_1+2}} \sum_{k=0}^{m_1} \binom{m_1+m_2+1}{m_2+k+1} \left(\frac{1+\alpha}{1-\alpha}\right)^{m_1-k} (\tan x)^{2k+1} \nonumber \\ & \quad{} - 2\sqrt{B_{4m_2+2}} \sum_{l=0}^{m_2} \binom{m_1+m_2+1}{m_1+l+1} \left(\frac{1-\alpha}{1+\alpha}\right)^{m_2-l} (\cot x)^{2l+1}, \label{eq:W_+2}\\ W_-(x) &= (2m_1+1) (1-\alpha) \tan x - (2m_2+1) (1+\alpha) \cot x. \label{eq:W_-2} \end{align} A calculation similar to that carried out in sec.~4 shows that such functions are compatible and that $E_1-E_0$ is given by \begin{align} E_1-E_0 &= 4 \frac{(m_1+m_2+1)!}{m_1! m_2!} \biggl[\sqrt{A_{4m_1+2}} \frac{(1+\alpha)^{m_1+1}} {(1-\alpha)^{m_1}} \nonumber \\ & \quad{} + \sqrt{B_{4m_2+2}} \frac{(1-\alpha)^{m_2+1}}{(1+\alpha)^{m_2}}\biggr]. \label{eq:diff-2} \end{align} \par \begin{figure} \begin{center} \includegraphics{Quesne-fig5.eps} \caption{Plot of the extended potential $V^{(1,0)}(x)$ with $A_6 = B_2 = 1$ and $\alpha = 1/2$. The ground and first excited state energies are $E_0 = 629/16$ and $E_1 = 1381/16$.
The PDM reads $m(x) = \left(1 + \frac{1}{2} \cos2x\right)^{-2}$.} \end{center} \end{figure} \par \begin{figure} \begin{center} \includegraphics{Quesne-fig6.eps} \caption{Plots of the ground state wavefunction $\psi_0(x)$ (solid line) and of the first excited state wavefunction $\psi_1(x)$ (dashed line) for the potential displayed in fig.~5.} \end{center} \end{figure} \par The two superpotentials $W(x)$ and $W'(x)$ can now be expressed as \begin{align} W(x) &= \sum_{k=0}^{m_1} \lambda_k (\tan x)^{2k+1} - \sum_{l=0}^{m_2} \mu_l (\cot x)^{2l+1}, \\ W'(x) &= \sum_{k=0}^{m_1} \lambda'_k (\tan x)^{2k+1} - \sum_{l=0}^{m_2} \mu'_l (\cot x)^{2l+1}, \end{align} where \begin{equation} \begin{split} \lambda_0&= \sqrt{A_{4m_1+2}} \binom{m_1+m_2+1}{m_2+1} \left(\frac{1+\alpha}{1-\alpha}\right)^{m_1} - \left(m_1+\frac{1}{2}\right)(1-\alpha), \\ \lambda'_0 &= \lambda_0 + (2m_1+1)(1-\alpha), \\ \lambda_k &= \lambda'_k = \sqrt{A_{4m_1+2}} \binom{m_1+m_2+1}{m_2+k+1} \left(\frac{1+\alpha}{1-\alpha}\right)^{m_1-k}, \qquad k=1, 2, \ldots, m_1, \\ \mu_0&= \sqrt{B_{4m_2+2}} \binom{m_1+m_2+1}{m_1+1} \left(\frac{1-\alpha}{1+\alpha}\right)^{m_2} - \left(m_2+\frac{1}{2}\right)(1+\alpha), \\ \mu'_0 &= \mu_0 + (2m_2+1)(1+\alpha), \\ \mu_l &= \mu'_l = \sqrt{B_{4m_2+2}} \binom{m_1+m_2+1}{m_1+l+1} \left(\frac{1-\alpha}{1+\alpha}\right)^{m_2-l}, \qquad l=1, 2, \ldots, m_2. 
\end{split} \end{equation} \par As in sec.~4, the determination of $E_0$ and $A_2, A_4, \ldots, A_{4m_1}, B_2, B_4, \ldots, B_{4m_2}$ in terms of $A_{4m_1+2}$, $B_{4m_2+2}$, and $\alpha$ can be carried out in two steps: first to obtain the coefficients $a_k$ and $b_l$ in the expansion \begin{equation} V_1(x) = \sum_{k=0}^{2m_1+1} a_k (\tan x)^{2k} + \sum_{l=1}^{2m_2+1} b_l (\cot x)^{2l}, \end{equation} then to express the sought quantities as \begin{equation} \begin{split} E_0 &= \sum_{k=0}^{2m_1+1} (-1)^{k+1} a_k + \sum_{l=1}^{2m_2+1} (-1)^{l+1} b_l, \\ A_{2k} &= \sum_{l=k}^{2m_1+1} (-1)^{l-k} \binom{l}{k} a_l, \qquad k=1, 2, \ldots, 2m_1, \\ B_{2l} &= \sum_{k=l}^{2m_2+1} (-1)^{k-l} \binom{k}{l} b_k, \qquad l=1, 2, \ldots, 2m_2. \end{split} \end{equation} After some lengthy but straightforward calculations, we get the results detailed in appendix A. \par To determine the wavefunctions, this time we rewrite $W/f$ and $W'/f$ in terms of the variable $y=\cos 2x$. For the former, for instance, we get \begin{equation} \frac{W(x)}{f(x)} = \sin 2x \left(\sum_{p=1}^{m_1+1} \frac{C_p}{(1+y)^p} - \sum_{q=1}^{m_2+1} \frac{D_q}{(1-y)^q} - \frac{\alpha (C_1+D_1)}{1+\alpha y}\right), \end{equation} with \begin{equation} C_p = \sum_{q=p-1}^{m_1} 2^q \frac{(-\alpha)^{q-p+1}}{(1-\alpha)^{q-p+2}} \sum_{k=q}^{m_1} (-1)^{k-q} \binom{k}{q} \lambda_k, \qquad p=1, 2, \ldots,m_1+1, \end{equation} and \begin{equation} D_q = \sum_{p=q-1}^{m_2} 2^p \frac{\alpha^{p-q+1}}{(1+\alpha)^{p-q+2}} \sum_{l=p}^{m_2} (-1)^{l-p} \binom{l}{p} \mu_l, \qquad q=1, 2, \ldots, m_2+1.
\end{equation} The results read \begin{align} \psi_0(x) &\propto f^{- \frac{1}{2}(C_1+D_1+1)} (\cos x)^{C_1} (\sin x)^{D_1} \nonumber \\ & \quad{} \times \exp\left[- \sum_{p=2}^{m_1+1} \frac{C_p}{2^p (p-1)} (\sec x)^{2(p-1)} - \sum_{q=2}^{m_2+1} \frac{D_q}{2^q (q-1)} (\csc x)^{2(q-1)}\right] \end{align} and \begin{align} \psi_1(x) &\propto f^{- \frac{1}{2}(C_1+D_1+2m_1+2m_2+3)} (\cos x)^{C_1} (\sin x)^{D_1} \nonumber \\ & \quad{} \times \biggl\{-2 \sqrt{B_{4m_2+2}} \sum_{k=0}^{m_2} (-1)^k \binom{m_1+m_2+1}{k} \left(\frac{2\alpha}{1+\alpha}\right)^k \sin^{2k}x \nonumber \\ & \quad {} + \sum_{k=m_2+1}^{m_1+m_2+1} \biggl[\binom{m_1+m_2+1}{k} \nonumber \\ & \quad{} \times \biggl(2 \sqrt{A_{4m_1+2}} \left(\frac{1+\alpha}{1-\alpha}\right)^{m_1+m_2-k+1} F\left(k-m_2-1,k;\frac{1+\alpha}{1-\alpha}\right) \nonumber \\ & \quad{} - 2\sqrt{B_{4m_2+2}} (-1)^k F\left(m_2,k; \frac{1-\alpha}{1+\alpha}\right)\biggr) \sin^{2k}x\biggr] \biggr\} \nonumber \\ & \quad{} \times \exp\left[- \sum_{p=2}^{m_1+1} \frac{C_p}{2^p (p-1)} (\sec x)^{2(p-1)} - \sum_{q=2}^{m_2+1} \frac{D_q}{2^q (q-1)} (\csc x)^{2(q-1)}\right], \end{align} where we have defined \begin{equation} F(n,k;z) = \sum_{p=0}^n (-1)^p \binom{k}{p} z^p, \qquad k>n. \end{equation} At the boundaries $x=0$ and $x=\pi/2$ of the defining interval, the behaviour of $\psi_0(x)$ and $\psi_1(x)$ is governed by that of $\exp[-D_{m_2+1} (\csc x)^{2m_2}/(2^{m_2+1}m_2)]$ and $\exp[-C_{m_1+1} (\sec x)^{2m_1}/(2^{m_1+1}m_1)]$, where $D_{m_2+1} = 2^{m_2} \sqrt{B_{4m_2+2}}/(1+\alpha) > 0$ and $C_{m_1+1} = 2^{m_1} \sqrt{A_{4m_1+2}}/(1-\alpha) > 0$, respectively.
Hence, such functions are square integrable on $(0,\pi/2)$, as it should be.\par {}For the simplest case corresponding to $m_1=m_2=1$, we get, for instance, the potential \begin{align} V^{(1,1)}(x) &= \biggl[\frac{15}{4}(1-\alpha)^2 - 24\alpha\sqrt{A_6} + \frac{12\alpha(1+2\alpha)}{(1-\alpha)^2} A_6 - \frac{6(1-\alpha)}{1+\alpha} \sqrt{A_6B_6}\biggr] \sec^2 x \nonumber \\ & \quad{} + 3\sqrt{A_6} \biggl[-2(1-\alpha) + \frac{1+3\alpha}{1-\alpha} \sqrt{A_6}\biggr] \sec^4 x + A_6 \sec^6 x \nonumber \\ & \quad{} + \biggl[\frac{15}{4}(1+\alpha)^2 + 24\alpha\sqrt{B_6} - \frac{6(1+\alpha)}{1-\alpha} \sqrt{A_6B_6} - \frac{12\alpha(1-2\alpha)}{(1+\alpha)^2} B_6 \biggr] \csc^2 x \nonumber \\ & \quad{} + 3\sqrt{B_6} \biggl[-2(1+\alpha) + \frac{1-3\alpha}{1+\alpha} \sqrt{B_6}\biggr] \csc^4 x + B_6 \csc^6 x. \label{eq:V(1,1)} \end{align} Its ground and first excited state energies are given by \begin{align} E_0 &= 3(2\alpha^2+3) + \frac{12(\alpha^2-2\alpha-1)}{1-\alpha} \sqrt{A_6} + \frac{12(\alpha^2+2\alpha-1)}{1+\alpha} \sqrt{B_6} \nonumber \\ & \quad{} + \frac{4(1+2\alpha)^2}{(1-\alpha)^2} A_6 + \frac{8(1-2\alpha)(1+2\alpha)}{(1-\alpha)(1+\alpha)} \sqrt{A_6B_6} + \frac{4(1-2\alpha)^2}{(1+\alpha)^2} B_6 \end{align} and \begin{align} E_1 &= 3(2\alpha^2+3) + \frac{12(3\alpha^2+2\alpha+1)}{1-\alpha} \sqrt{A_6} + \frac{12(3\alpha^2-2\alpha+1)}{1+\alpha} \sqrt{B_6} \nonumber \\ & \quad{} + \frac{4(1+2\alpha)^2}{(1-\alpha)^2} A_6 + \frac{8(1-2\alpha)(1+2\alpha)}{(1-\alpha)(1+\alpha)} \sqrt{A_6B_6} + \frac{4(1-2\alpha)^2}{(1+\alpha)^2} B_6, \end{align} with corresponding wavefunctions \begin{align} \psi_0(x) &\propto f^{-\frac{1+\alpha}{(1-\alpha)^2}\sqrt{A_6} -\frac{1-\alpha}{(1+\alpha)^2}\sqrt{B_6} +1} (\cos x)^{\frac{2(1+\alpha)}{(1-\alpha)^2}\sqrt{A_6} - \frac{3}{2}} (\sin x)^{\frac{2(1-\alpha)}{(1+\alpha)^2}\sqrt{B_6} - \frac{3}{2}} \nonumber \\ & \quad{} \times \exp\biggl[- \frac{\sqrt{A_6}}{2(1-\alpha)} \sec^2x - \frac{\sqrt{B_6}}{2(1+\alpha)} \csc^2x\biggr] 
\label{eq:V(1,1)gs} \end{align} and \begin{align} \psi_1(x) &\propto f^{-\frac{1+\alpha}{(1-\alpha)^2}\sqrt{A_6} -\frac{1-\alpha}{(1+\alpha)^2}\sqrt{B_6} -2} (\cos x)^{\frac{2(1+\alpha)}{(1-\alpha)^2}\sqrt{A_6} - \frac{3}{2}} (\sin x)^{\frac{2(1-\alpha)}{(1+\alpha)^2}\sqrt{B_6} - \frac{3}{2}} \nonumber \\ & \quad{} \times \biggl\{- 2\sqrt{B_6} + \frac{12\alpha}{1+\alpha} \sqrt{B_6} \sin^2x + 6 \biggl[ \frac{1+\alpha}{1-\alpha} \sqrt{A_6} + \frac{1-3\alpha}{1+\alpha} \sqrt{B_6}\biggr] \sin^4x \nonumber \\ & \quad{} - 4\biggl[\frac{1+2\alpha}{1-\alpha} \sqrt{A_6} + \frac{1-2\alpha}{1+\alpha} \sqrt{B_6}\biggr] \sin^6x\biggr\} \nonumber \\ & \quad{} \times \exp\biggl[- \frac{\sqrt{A_6}}{2(1-\alpha)} \sec^2x - \frac{\sqrt{B_6}}{2(1+\alpha)} \csc^2x\biggr]. \label{eq:V(1,1)es} \end{align} \par In fig.~3, an example of extended potential (\ref{eq:V(1,1)}) is plotted. Its corresponding (rescaled) unnormalized wavefunctions (\ref{eq:V(1,1)gs}) and (\ref{eq:V(1,1)es}) are displayed in fig.~4.\par \subsection{\boldmath Extensions with $m_1> m_2=0$} {}For $m_1> m_2=0$, the results presented in sec.~5.1 remain valid provided we replace $\sqrt{B_2}$ by $1 + \alpha + \frac{1}{2}\Delta$, $\Delta = \sqrt{(1+\alpha)^2 + 4B_2}$, in the generating functions (\ref{eq:W_+2}) and (\ref{eq:W_-2}), which therefore become \begin{align} W_+(x) &= 2\sqrt{A_{4m_1+2}} \sum_{k=0}^{m_1} \binom{m_1+1}{k+1} \left(\frac{1+\alpha}{1-\alpha}\right)^{m_1-k} (\tan x)^{2k+1} \nonumber \\ & \quad{} - (2+2\alpha+\Delta) \cot x, \\ W_-(x) &= (2m_1+1)(1-\alpha) \tan x - (1+\alpha) \cot x, \end{align} with corresponding $E_1-E_0$ given by \begin{equation} E_1 - E_0 = 4(m_1+1) \frac{(1+\alpha)^{m_1+1}}{(1-\alpha)^{m_1}} \sqrt{A_{4m_1+2}} + 2(m_1+1)(1-\alpha)(2+2\alpha+\Delta). \end{equation} \par We shall not present the general results, but instead show the simplest example corresponding to $m_1=1$. 
In such a case, the potential reads \begin{align} V^{(1,0)}(x) &= \biggl[\frac{15}{4}(1-\alpha)^2 - (24\alpha+\Delta) \sqrt{A_6} + \frac{-1+2\alpha+15\alpha^2}{(1-\alpha)^2} A_6\biggr] \sec^2x \nonumber \\ & \quad{} + \sqrt{A_6} \biggl[-6(1-\alpha) + \frac{1+7\alpha}{1-\alpha}\sqrt{A_6}\biggr] \sec^4x + A_6 \sec^6x + B_2 \csc^2x, \label{eq:V(1,0)} \end{align} with ground and first excited state energies \begin{align} E_0 &= \frac{5}{4}(1-\alpha)(1-5\alpha) - (1-\alpha)\Delta + \frac{-2-4\alpha+22\alpha^2+(1+3\alpha)\Delta}{1-\alpha} \sqrt{A_6} \nonumber \\ & \quad{} + \left(\frac{1+3\alpha}{1-\alpha}\right)^2 A_6 + B_2, \\ E_1 &= \frac{1}{4}(1-\alpha)(37+7\alpha) + 3(1-\alpha)\Delta + \frac{6(1+2\alpha+5\alpha^2)+(1+3\alpha)\Delta}{1-\alpha} \sqrt{A_6} \nonumber \\ & \quad{} + \left(\frac{1+3\alpha}{1-\alpha}\right)^2 A_6 + B_2, \end{align} and corresponding wavefunctions \begin{align} \psi_0(x) &\propto f^{-\frac{1+\alpha}{2(1-\alpha)^2}\sqrt{A_6} - \frac{\Delta}{4(1+\alpha)}} (\cos x)^{\frac{1+\alpha}{(1-\alpha)^2}\sqrt{A_6} - \frac{3}{2}} (\sin x)^{\frac{\Delta}{2(1+\alpha)} +\frac{1}{2}} \nonumber \\ & \quad{} \times \exp\left[- \frac{\sqrt{A_6}}{2(1-\alpha)} \sec^2x\right], \label{eq:V(1,0)gs} \\ \psi_1(x) &\propto f^{-\frac{1+\alpha}{2(1-\alpha)^2}\sqrt{A_6} - \frac{\Delta}{4(1+\alpha)} - 2} (\cos x)^{\frac{1+\alpha}{(1-\alpha)^2}\sqrt{A_6} - \frac{3}{2}} (\sin x)^{\frac{\Delta}{2(1+\alpha)} +\frac{1}{2}} \nonumber \\ & \quad{} \times \biggl\{2+2\alpha+\Delta - 2\biggl[\frac{2(1+\alpha)}{1-\alpha}\sqrt{A_6} +2+2\alpha +\Delta\biggr] \sin^2x \nonumber \\ & \quad{} + \biggl[\frac{2(1+3\alpha)}{1-\alpha}\sqrt{A_6} +2+2\alpha+\Delta\biggr] \sin^4x\biggr\} \exp\left[- \frac{\sqrt{A_6}}{2(1-\alpha)} \sec^2x\right]. \label{eq:V(1,0)es} \end{align} \par In fig.~5, an example of extended potential (\ref{eq:V(1,0)}) is plotted. 
Its corresponding (rescaled) unnormalized wavefunctions (\ref{eq:V(1,0)gs}) and (\ref{eq:V(1,0)es}) are displayed in fig.~6.\par \section{Conclusion} In the present paper, we have shown that infinite families of PDM Schr\"odinger equations with known ground and first excited states can be generated in DSUSY by considering extensions of both the one- and two-parameter TPT potentials endowed with a DSI property. If needed, higher energy levels can be calculated numerically. This work completes a previous study \cite{cq18}, in which only extensions of some simpler potentials were explicitly constructed, and demonstrates the efficiency of the method proposed there for dealing with more complex potentials. It opens the way to building extensions of other potentials with a DSI property whose treatment is rather involved, such as the Eckart and Rosen-Morse I potentials considered in \cite{bagchi05, cq09}.\par Given the usefulness of the TPT potential as a first approximation in several problems of molecular and solid-state physics, the exact results presented here for potentials including some extra terms may find useful applications in such fields. The search for such applications would be another interesting topic for future investigation.\par \section*{\boldmath Appendix A.
General results for extensions of the two-parameter trigonometric P\"oschl-Teller potential with $m_1 \ge m_2 > 0$} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} In this appendix, we present the general results obtained for the ground state energy and the parameters of the extensions of the two-parameter trigonometric P\"oschl-Teller potential with $m_1 \ge m_2 > 0$: \begin{align} E_0 &= (m_1+m_2+1)^2 - 2\alpha(m_1-m_2)(m_1+m_2+2) + \alpha^2[(m_1-m_2)^2 \nonumber \\ &\quad{} + 2(m_1+m_2+1)] \nonumber \\ &\quad{} - \sqrt{A_{4m_1+2}} \biggl\{2m_2 \binom{m_1+m_2+1}{m_2+1} \frac{(1+\alpha)^{m_1+1}} {(1-\alpha)^{m_1}} \nonumber \\ &\quad{} + \sum_{k=1}^{m_1+1} (-1)^k \biggl[(2m_2-2k) \binom{m_1+m_2+1}{m_2+k+1} - (2m_1+2k) \binom{m_1+m_2+1}{m_2+k}\biggr] \nonumber \\ &\quad{} \times \frac{(1+\alpha)^{m_1-k+1}}{(1-\alpha)^{m_1-k}}\Biggr\} \nonumber \\ &\quad{} - \sqrt{B_{4m_2+2}} \biggl\{2m_1 \binom{m_1+m_2+1}{m_1+1} \frac{(1-\alpha)^{m_2+1}} {(1+\alpha)^{m_2}} \nonumber \\ &\quad{} + \sum_{l=1}^{m_2+1} (-1)^l \biggl[(2m_1-2l) \binom{m_1+m_2+1}{m_1+l+1} - (2m_2+2l) \binom{m_1+m_2+1}{m_1+l}\biggr] \nonumber \\ &\quad{} \times \frac{(1-\alpha)^{m_2-l+1}}{(1+\alpha)^{m_2-l}}\Biggr\} \nonumber \\ &\quad{} - A_{4m_1+2} \sum_{k=1}^{2m_1+1} \biggl[(-1)^k \left(\frac{1+\alpha} {1-\alpha}\right)^{2m_1-k+1} \nonumber \\ & \quad{} \times \sum_{l={\rm max}(0,k-m_1-1)}^{{\rm min}(k-1,m_1)} \binom{m_1+m_2+1}{m_2+l+1} \binom{m_1+m_2+1}{m_2+k-l}\biggr] \nonumber \\ & \quad{} + 2\sqrt{A_{4m_1+2}B_{4m_2+2}} \biggl\{\sum_{k=0}^{m_1} \biggl[(-1)^k \left(\frac{1+\alpha}{1-\alpha}\right)^{m_1-m_2-k} \nonumber \\ & \quad{} \times \sum_{l=k}^{{\rm min}(m_2+k,m_1)} \binom{m_1+m_2+1}{m_2+l+1} \binom{m_1+m_2+1}{m_2+k-l}\biggr] \nonumber \\ & \quad{} + \sum_{l=1}^{m_2} \biggl[(-1)^l \left(\frac{1+\alpha}{1-\alpha}\right)^{m_1-m_2+l}\sum_{k=l}^{m_2} \binom{m_1+m_2+1}{m_1+k+1} \binom{m_1+m_2+1}{m_1+l-k}\biggr]\biggr\} \nonumber \\ &\quad{} - B_{4m_2+2} \sum_{l=1}^{2m_2+1} 
\biggl[(-1)^l \left(\frac{1-\alpha} {1+\alpha}\right)^{2m_2-l+1} \nonumber \\ & \quad{} \times \sum_{k={\rm max}(0,l-m_2-1)}^{{\rm min}(l-1,m_2)} \binom{m_1+m_2+1}{m_1+k+1} \binom{m_1+m_2+1}{m_1+l-k}\biggr], \label{eq:E0-2} \end{align} \begin{align} A_2 &= \left(m_1+\frac{1}{2}\right) \left(m_1+\frac{3}{2}\right) (1-\alpha)^2 \nonumber \\ & \quad{} - \sqrt{A_{4m_1+2}} \sum_{k=1}^{m_1+1} \biggl\{(-1)^k k \frac{(1+\alpha)^{m_1-k+1}} {(1-\alpha)^{m_1-k}} \nonumber \\ & \quad{} \times \biggl[(2m_2-2k) \binom{m_1+m_2+1}{m_2+k+1} - (2m_1+2k) \binom{m_1+m_2+1}{m_2+k}\biggr]\biggr\} \nonumber \\ &\quad{} - A_{4m_1+2} \sum_{k=1}^{2m_1+1} \biggl[(-1)^k k \left(\frac{1+\alpha} {1-\alpha}\right)^{2m_1-k+1} \nonumber \\ & \quad{} \times \sum_{l={\rm max}(0,k-m_1-1)}^{{\rm min}(k-1,m_1)} \binom{m_1+m_2+1}{m_2+l+1} \binom{m_1+m_2+1}{m_2+k-l}\biggr] \nonumber \\ & \quad{} + 2\sqrt{A_{4m_1+2}B_{4m_2+2}} \sum_{k=1}^{m_1} \biggl[(-1)^k k \left(\frac{1+\alpha}{1-\alpha}\right)^{m_1-m_2-k} \nonumber \\ & \quad{} \times \sum_{l=k}^{{\rm min}(m_2+k,m_1)} \binom{m_1+m_2+1}{m_2+l+1} \binom{m_1+m_2+1}{m_2+k-l}\biggr], \end{align} \begin{align} A_{2k} &= \sqrt{A_{4m_1+2}} \sum_{l=k}^{m_1+1} \biggl\{(-1)^{l-k} \binom{l}{k} \frac{(1+\alpha)^{m_1-l+1}}{(1-\alpha)^{m_1-l}} \nonumber \\ & \quad{} \times \biggl[(2m_2-2l) \binom{m_1+m_2+1}{m_2+l+1} - (2m_1+2l) \binom{m_1+m_2+1}{m_2+l}\biggr]\biggr\} \nonumber \\ & \quad{} + A_{4m_1+2} \sum_{l=k}^{2m_1+1} \biggl[(-1)^{l-k} \binom{l}{k} \left(\frac{1+\alpha} {1-\alpha}\right)^{2m_1-l+1} \nonumber \\ & \quad{} \times \sum_{p={\rm max}(0,l-m_1-1)}^{{\rm min}(l-1,m_1)} \binom{m_1+m_2+1}{m_2+p+1} \binom{m_1+m_2+1}{m_2+l-p}\biggr] \nonumber \\ & \quad{} - 2\sqrt{A_{4m_1+2}B_{4m_2+2}} \sum_{l=k}^{m_1} \biggl[(-1)^{l-k} \binom{l}{k} \left(\frac{1+\alpha}{1-\alpha}\right)^{m_1-m_2-l} \nonumber \\ & \quad{} \times \sum_{p=l}^{{\rm min}(m_2+l,m_1)} \binom{m_1+m_2+1}{m_2+p+1} \binom{m_1+m_2+1}{m_2+l-p}\biggr], \qquad 2 \le k \le m_1+1, \end{align} \begin{align} 
A_{2k} &= A_{4m_1+2} \sum_{l=k}^{2m_1+1} \biggl[(-1)^{l-k} \binom{l}{k} \left(\frac{1+\alpha} {1-\alpha}\right)^{2m_1-l+1} \nonumber \\ & \quad{} \times \sum_{p=l-m_1-1}^{m_1} \binom{m_1+m_2+1}{m_2+p+1} \binom{m_1+m_2+1}{m_2+l-p}\biggr], \qquad m_1+2 \le k \le 2m_1, \end{align} \begin{align} B_2 &= \left(m_2+\frac{1}{2}\right) \left(m_2+\frac{3}{2}\right) (1+\alpha)^2 \nonumber \\ & \quad{} - \sqrt{B_{4m_2+2}} \sum_{k=1}^{m_2+1} \biggl\{(-1)^k k \frac{(1-\alpha)^{m_2-k+1}} {(1+\alpha)^{m_2-k}} \nonumber \\ & \quad{} \times \biggl[(2m_1-2k) \binom{m_1+m_2+1}{m_1+k+1} - (2m_2+2k) \binom{m_1+m_2+1}{m_1+k}\biggr]\biggr\} \nonumber \\ & \quad{} + 2\sqrt{A_{4m_1+2}B_{4m_2+2}} \sum_{k=1}^{m_2} \biggl[(-1)^k k \left(\frac{1+\alpha}{1-\alpha}\right)^{m_1-m_2+k} \nonumber \\ & \quad{} \times \sum_{l=k}^{m_2} \binom{m_1+m_2+1}{m_1+l+1} \binom{m_1+m_2+1}{m_1+k-l}\biggr] \nonumber \\ &\quad{} - B_{4m_2+2} \sum_{k=1}^{2m_2+1} \biggl[(-1)^k k \left(\frac{1-\alpha} {1+\alpha}\right)^{2m_2-k+1} \nonumber \\ & \quad{} \times \sum_{l={\rm max}(0,k-m_2-1)}^{{\rm min}(k-1,m_2)} \binom{m_1+m_2+1}{m_1+l+1} \binom{m_1+m_2+1}{m_1+k-l}\biggr], \end{align} \begin{align} B_{2l} &= \sqrt{B_{4m_2+2}} \sum_{k=l}^{m_2+1} \biggl\{(-1)^{k-l} \binom{k}{l} \frac{(1-\alpha)^{m_2-k+1}}{(1+\alpha)^{m_2-k}} \nonumber \\ & \quad{} \times \biggl[(2m_1-2k) \binom{m_1+m_2+1}{m_1+k+1} - (2m_2+2k) \binom{m_1+m_2+1}{m_1+k}\biggr]\biggr\} \nonumber \\ & \quad{} - 2\sqrt{A_{4m_1+2}B_{4m_2+2}} \sum_{k=l}^{m_2} \biggl[(-1)^{k-l} \binom{k}{l} \left(\frac{1+\alpha}{1-\alpha}\right)^{m_1-m_2+k} \nonumber \\ & \quad{} \times \sum_{p=k}^{m_2} \binom{m_1+m_2+1}{m_1+p+1} \binom{m_1+m_2+1}{m_1+k-p}\biggr] \nonumber \\ & \quad{} + B_{4m_2+2} \sum_{k=l}^{2m_2+1} \biggl[(-1)^{k-l} \binom{k}{l} \left(\frac{1-\alpha} {1+\alpha}\right)^{2m_2-k+1} \nonumber \\ & \quad{} \times \sum_{p={\rm max}(0,k-m_2-1)}^{{\rm min}(k-1,m_2)} \binom{m_1+m_2+1}{m_1+p+1} \binom{m_1+m_2+1}{m_1+k-p}\biggr], \quad 2 \le l \le m_2+1, \end{align} 
\begin{align} B_{2l} &= B_{4m_2+2} \sum_{k=l}^{2m_2+1} \biggl[(-1)^{k-l} \binom{k}{l} \left(\frac{1-\alpha} {1+\alpha}\right)^{2m_2-k+1} \nonumber \\ & \quad{} \times \sum_{p=k-m_2-1}^{m_2} \binom{m_1+m_2+1}{m_1+p+1} \binom{m_1+m_2+1}{m_1+k-p}\biggr], \qquad m_2+2 \le l \le 2m_2. \end{align} Note that $E_1$ can be easily obtained from (\ref{eq:diff-2}) and (\ref{eq:E0-2}).\par \newpage
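As a numerical consistency check on the explicit results of secs.~5.1 and 5.2, the sketch below reproduces, in exact rational arithmetic, the caption energies of figs.~3 and 5, i.e. $E_0=69/2$, $E_1=293/2$ for $V^{(1,1)}$ ($A_6=B_6=1$, $\alpha=1/2$) and $E_0=629/16$, $E_1=1381/16$ for $V^{(1,0)}$ ($A_6=B_2=1$, $\alpha=1/2$). For these parameter values all square roots are rational ($\sqrt{A_6}=\sqrt{B_6}=1$, $\Delta=5/2$), so Python's `Fraction` suffices:

```python
from fractions import Fraction as Fr

a = Fr(1, 2)               # alpha
rA6 = rB6 = Fr(1)          # sqrt(A_6), sqrt(B_6) for A_6 = B_6 = 1
B2 = Fr(1)
Delta = Fr(5, 2)           # sqrt((1 + alpha)^2 + 4 B_2) = sqrt(25/4)

# V^{(1,1)}: ground and first excited state energies
E0_11 = (3*(2*a**2 + 3) + 12*(a**2 - 2*a - 1)/(1 - a)*rA6
         + 12*(a**2 + 2*a - 1)/(1 + a)*rB6
         + 4*(1 + 2*a)**2/(1 - a)**2*rA6**2
         + 8*(1 - 2*a)*(1 + 2*a)/((1 - a)*(1 + a))*rA6*rB6
         + 4*(1 - 2*a)**2/(1 + a)**2*rB6**2)
E1_11 = (3*(2*a**2 + 3) + 12*(3*a**2 + 2*a + 1)/(1 - a)*rA6
         + 12*(3*a**2 - 2*a + 1)/(1 + a)*rB6
         + 4*(1 + 2*a)**2/(1 - a)**2*rA6**2
         + 8*(1 - 2*a)*(1 + 2*a)/((1 - a)*(1 + a))*rA6*rB6
         + 4*(1 - 2*a)**2/(1 + a)**2*rB6**2)
assert (E0_11, E1_11) == (Fr(69, 2), Fr(293, 2))      # fig. 3 caption
# gap formula (diff-2) for m_1 = m_2 = 1: prefactor 4*3!/(1!1!) = 24
assert E1_11 - E0_11 == 24*(rA6*(1 + a)**2/(1 - a) + rB6*(1 - a)**2/(1 + a))

# V^{(1,0)}: ground and first excited state energies
E0_10 = (Fr(5, 4)*(1 - a)*(1 - 5*a) - (1 - a)*Delta
         + (-2 - 4*a + 22*a**2 + (1 + 3*a)*Delta)/(1 - a)*rA6
         + ((1 + 3*a)/(1 - a))**2*rA6**2 + B2)
E1_10 = (Fr(1, 4)*(1 - a)*(37 + 7*a) + 3*(1 - a)*Delta
         + (6*(1 + 2*a + 5*a**2) + (1 + 3*a)*Delta)/(1 - a)*rA6
         + ((1 + 3*a)/(1 - a))**2*rA6**2 + B2)
assert (E0_10, E1_10) == (Fr(629, 16), Fr(1381, 16))  # fig. 5 caption
# gap formula of sec. 5.2 for m_1 = 1, m_2 = 0
assert E1_10 - E0_10 == 8*(1 + a)**2/(1 - a)*rA6 + 4*(1 - a)*(2 + 2*a + Delta)
```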
\section{Introduction\label{Introduction}} The intriguing behavior of glass-forming liquids is attracting continued interest from many researchers \cite{bapst2020unveiling,jensen2018slow,hecksher2017toward,klieber2013mechanical,blazhnov2004temperature,niss2018perspective,klieber2015nonlinear,gundermann2011predicting}. By virtue of its ability to simultaneously probe multiple relaxation processes (thermal expansion, acoustic and even orientational response \cite{glorieux2002thermal,silence1992structural}), the use of impulsive stimulated scattering (ISS) in a periodic grating geometry has been successful in obtaining new insights from the thermoelastic response (measured via the accompanying coherent diffraction of a probe laser beam) to impulsive photothermal excitation of different glassformers \cite{yang1995t,yang1995impulsive2,paolucci2000impulsive,halalay1992liquid,halalay1992time,silence1992structural,silence1990impulsive}. Standard thermo-mechanical modelling, based on the assumption of a frequency independent or non-relaxing heat capacity and thermal expansion coefficient, has been shown to be inadequate to characterize the dynamics triggered in an ISS experiment, especially for viscous systems.
Along with the first experimental ISS results, a semi-empirical model (SEM) \cite{yang1995impulsive1}, relying on a stretched-exponential function to describe the nontrivial initial thermal expansion rise of the ISS signal, has proved effective in describing the ISS response of glycerol, salol, and oil DC705 \cite{yang1995t,yang1995impulsive1,yang1995impulsive2,paolucci2000impulsive}.\\ Inspired by successful descriptions in the literature of experimental results for the temperature response to heating \cite{birge1985specific,bentefour2003broadband,bentefour2004thermal,niss2012dynamic} by invoking a frequency dependent heat capacity $C$, and indications for a frequency dependent thermal expansion coefficient $\gamma$ \cite{blazhnov2004temperature}, here we derive analytically a generalized ISS model that takes into account the relaxation of $C$ and $\gamma$, which are not explicitly considered in the SEM. We start from frequency-domain versions of the thermal diffusion equation and the thermoelastic equation and we impose a frequency dependent heat capacity and thermal expansion coefficient according to Debye and Havriliak-Negami (HN) relaxation models. We then investigate to what extent the introduced physical model is consistent with the empirical model by conducting a case study on ISS results of glycerol reported in Refs. \cite{paolucci2000impulsive,Liu2021}. We also present an interpretation of the Debye assumption for the frequency dependent heat capacity and thermal expansion coefficient in the framework of a two-temperature model (TTM). Furthermore, a set of experimentally recorded ISS signals of supercooled glycerol is analysed with the developed models to extract $C(\omega)$ and $\gamma(\omega)$ up to sub-100 MHz frequencies.
This largely extends the upper limit of the previously accessible bandwidth, 100 kHz \cite{bentefour2003broadband} and 1 Hz \cite{niss2012dynamic} for the spectroscopy of \textit{C} and $\gamma$ respectively, enabling the comparison of fragility by thermal, mechanical, and dielectric susceptibilities in a broader frequency/temperature range. \\ The manuscript is organized as follows: In Section \ref{Sec:Temperature} we present analytical expressions for the temperature response to impulsive photothermal excitation in a grating geometry in two scenarios: frequency independent and frequency dependent (according to Debye and HN functions) heat capacity. In Section \ref{Sec:displacement} a continuum mechanics thermoelastic model is used to calculate the response of the material strain to photothermal excitation, by considering the temperature change as a source term in the equation of motion, into which the Debye and HN relaxation behavior of the thermal expansion is incorporated. A comparison between results obtained by the newly proposed approach and simulations by the empirical model for literature values on glycerol \cite{paolucci2000impulsive} is presented in Section \ref{Sec:comparison_with_liter}. In Section \ref{Sec:two_temp_model} the compatibility between the Debye frequency dependence of the heat capacity and of the thermal expansion coefficient and the TTM is verified. Finally, in Section \ref{experiment}, we apply the developed models to the concrete case of the experimental ISS signals recorded on supercooled glycerol.\\ The present work accompanies the results presented in Ref.
\cite{Liu2021}, in which the thermal relaxation dynamics of glycerol is investigated by a combination of ISS and thermal lens spectroscopy \cite{ThermalLens}.\\ \section{Temperature response to impulsive photothermal excitation in a periodic grating geometry} \label{Sec:Temperature} \subsection{Scenario with frequency-independent heat capacity \label{Subsec:Frequency_independent_heat_capacity}} In this section we calculate the temperature evolution of a system that is subject to impulsive photothermal excitation generating a transient thermal grating (TTG). For now, we assume that the heat capacity is frequency independent.\\ The starting point is the thermal diffusion equation in the temperature $T$ for a 1D infinite geometry \cite{gandolfi2019accessing}: \begin{equation} \frac{\partial^2 T}{\partial x^2 }-\frac{\rho C}{\kappa_T} \frac{\partial T}{\partial t}=-\frac{Q(x,t)}{\kappa_T}, \label{diffusion_equation2} \end{equation} where $\rho$ (kg m$^{-3}$), $\kappa_T$ (W m$^{-1}$ K$^{-1}$) and $C$ (J kg$^{-1}$ K$^{-1}$) are the mass density, the thermal conductivity and the frequency-independent heat capacity per unit mass, while $Q(x,t)$ (W m$^{-3}$) is the heat source. In ISS experiments, the heat input is impulsive in time and periodic in space: \begin{equation} Q(x,t)=Q_0\cos(qx)\delta(t), \label{Q_x_t} \end{equation} where $Q_0$ (J m$^{-3}$) is the supplied heat density and $q$ (m$^{-1}$) is the wavenumber, defined as $2\pi$ divided by the spatial period of the periodic light intensity pattern.
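As a numerical orientation, Eq. \ref{diffusion_equation2} implies that a $\cos(qx)$ temperature mode relaxes at the rate $\kappa_T q^2/(\rho C)$. The following minimal sketch evaluates this rate; all parameter values are assumed, glycerol-like numbers quoted for illustration only:

```python
import math

# Assumed, glycerol-like parameters (illustrative only)
kappa_T = 0.28       # thermal conductivity, W m^-1 K^-1
rho     = 1260.0     # mass density, kg m^-3
C       = 1800.0     # heat capacity per unit mass, J kg^-1 K^-1
period  = 20.6e-6    # spatial period of the light intensity pattern, m

q     = 2.0 * math.pi / period   # wavenumber, m^-1
alpha = kappa_T / (rho * C)      # thermal diffusivity, m^2 s^-1
rate  = alpha * q**2             # decay rate of the cos(qx) mode, s^-1

print(f"q = {q:.2e} m^-1, alpha = {alpha:.2e} m^2/s, rate = {rate:.2e} s^-1")
```

For micrometric grating periods this rate falls in the $10^4$--$10^5$ s$^{-1}$ range, i.e. the thermal grating washes out on a 10--100 $\mu$s time scale.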
\\ Prior to excitation, the system is at equilibrium at the constant temperature $T_0$.\\ After taking a Fourier transform, the following frequency domain expression is obtained: \begin{equation} \frac{\partial^2 \tilde{T}}{\partial x^2 }-i\omega\frac{\rho C}{\kappa_T} \tilde{T}=-\frac{1}{\kappa_T}\tilde{Q}(x,\omega), \label{diffusion_equation_in_freq_domain_AA} \end{equation} and the solution for the temperature field reads: \begin{equation} \tilde{T}(x,\omega)=T_0\delta(\omega)+\frac{Q_0}{2\pi i \rho C( \omega-i\alpha q^2)}\cos(qx), \label{temperature_in_freq} \end{equation} where $\alpha=\kappa_T/(\rho C)$ (m$^2$ s$^{-1}$) is the thermal diffusivity. By taking an inverse Fourier transform, the following expression is obtained for the temperature evolution: \begin{equation} T(x,t)=T_0+\frac{Q_0}{\rho C}\cos(qx)\exp\left(-\alpha q^2 t\right)\theta(t), \label{temperature_in_time} \end{equation} where $\theta(t)$ is the Heaviside step function.\\ \subsection{Scenario with frequency dependent heat capacity} \label{Subse:Frequ-dependent heat capacity} \subsubsection{Debye model \label{Subsum:Debye}} We assume the following Debye expression for the frequency dependent heat capacity \cite{fivez2011dynamics}: \begin{equation} C(\omega)=C_\infty+\frac{\Delta C}{1+i\omega \tau_C}=C_\infty+\frac{\Delta C}{1+i\frac{\omega}{\omega_C}}, \label{C_omega} \end{equation} with $C_\infty$ the part of the heat capacity related to the high frequency response, or, in the time domain, to the instantaneous response of the temperature to impulsive heating. $\Delta C$ is the additional part of the heat capacity that determines the reduction of the temperature response at low frequencies (lower than the relaxation frequency $\omega_C=\tau_C^{-1}$), or, in the time domain, at times longer than the relaxation time $\tau_C$. Upon substitution of the expression for $C(\omega)$ into Eq.
\ref{diffusion_equation_in_freq_domain_AA} we get the following differential equation: \begin{equation} \frac{\partial^2 \tilde{T}}{\partial x^2 }-i\omega\frac{\rho}{\kappa_T} \left(C_\infty+\frac{\Delta C}{1+i{\frac{\omega}{\omega_C} }}\right)\tilde{T}=-\frac{1}{\kappa_T}\tilde{Q}(x,\omega). \label{diffusion_equation_in_freq_C_omega} \end{equation} Inserting the expression for the heat source $\tilde{Q}(x,\omega)$ (obtained by Fourier transforming Eq. \ref{Q_x_t} to the frequency domain), we obtain the following solution: \begin{equation} \tilde{T}(x,\omega)=T_0\delta(\omega)-\frac{iQ_0\left(\omega-i\omega_C\right)}{2\pi \rho C_\infty\left(\omega-\omega_1\right)\left(\omega-\omega_2\right)}\cos(qx), \label{temperature_in_freq_C_omega} \end{equation} with $\alpha_\infty=\kappa_T/(\rho C_\infty)$ the high frequency limit of the thermal diffusivity and the frequencies \begin{widetext} \begin{eqnarray} \omega_1=\frac{i}{2}\left\{\left[\alpha_\infty q^2+\omega_C\left(1+\frac{\Delta C}{C_\infty}\right)\right]-\sqrt{\left[\alpha_\infty q^2+\omega_C\left(1+\frac{\Delta C}{C_\infty}\right)\right]^2-4\alpha_\infty q^2\omega_C}\right\}, \label{omega_1} \label{omega1_def}\\ \nonumber\\ \omega_2=\frac{i}{2}\left\{\left[\alpha_\infty q^2+\omega_C\left(1+\frac{\Delta C}{C_\infty}\right)\right]+\sqrt{\left[\alpha_\infty q^2+\omega_C\left(1+\frac{\Delta C}{C_\infty}\right)\right]^2-4\alpha_\infty q^2\omega_C}\right\}, \label{omega2_def}\\ \nonumber \end{eqnarray} The expression for the temperature in the time domain can be obtained by applying the inverse Fourier transform to Eq. \ref{temperature_in_freq_C_omega} (see Appendix \ref{appen:res_th} for more details).
Thus, we obtain: $$T(x,t)=T_0+\frac{Q_0}{\rho C_\infty}\cos(qx)\times\left[\frac{\left(\omega_1-i\omega_C\right)}{\left(\omega_1-\omega_2\right)}\exp\left(i\omega_1 t\right)+\frac{\left(\omega_2-i\omega_C\right)}{\left(\omega_2-\omega_1\right)}\exp\left(i\omega_2 t\right)\right]\theta(t)=$$ $$=T_0+\frac{Q_0}{\rho C_\infty}\cos(qx)\exp\left\{-\frac{t}{2}\left[\alpha_\infty q^2+\omega_C\left(1+\frac{\Delta C}{C_\infty}\right)\right]\right\}\times$$ $$\times\left\{\cosh\left({t\over2}\sqrt{\left[\alpha_\infty q^2+\omega_C\left(1+\frac{\Delta C}{C_\infty}\right)\right]^2-4\alpha_\infty q^2\omega_C}\right)\right.+$$ \begin{equation} -\frac{\left[\alpha_\infty q^2+\omega_C\left(\frac{\Delta C}{C_\infty}-1\right)\right]}{\sqrt{\left[\alpha_\infty q^2+\omega_C\left(1+\frac{\Delta C}{C_\infty}\right)\right]^2-4\alpha_\infty q^2\omega_C}}\left.\sinh\left({t\over2}\sqrt{\left[\alpha_\infty q^2+\omega_C\left(1+\frac{\Delta C}{C_\infty}\right)\right]^2-4\alpha_\infty q^2\omega_C}\right)\right\}\theta(t). \label{T_t_expression_final} \end{equation} \subsubsection{Havriliak-Negami model \label{Subsum:Havriliak}} Simple Debye relaxation behavior has turned out not to be a fully adequate description for the dynamic behavior of many glass-forming materials. 
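Before moving to the HN generalization, the Debye result can be cross-checked numerically: the two equivalent closed forms in Eq. \ref{T_t_expression_final} must coincide, and for $\Delta C\rightarrow0$ the single-exponential decay of Eq. \ref{temperature_in_time} must be recovered. A minimal sketch, with all parameter values assumed purely for illustration:

```python
import numpy as np

def grating_amplitude(t, alpha_inf, q, omega_C, r):
    """Normalized thermal-grating amplitude for a Debye heat capacity:
    two-pole (residue) form of Eq. (T_t_expression_final); r = Delta C / C_inf."""
    X = alpha_inf*q**2 + omega_C*(1.0 + r)
    D = np.sqrt(X**2 - 4.0*alpha_inf*q**2*omega_C + 0j)
    w1, w2 = 0.5j*(X - D), 0.5j*(X + D)
    return ((w1 - 1j*omega_C)/(w1 - w2))*np.exp(1j*w1*t) \
         + ((w2 - 1j*omega_C)/(w2 - w1))*np.exp(1j*w2*t)

# Assumed illustrative parameters (not fitted values)
alpha_inf, q, omega_C, r = 1.0e-7, 1.0e6, 5.0e6, 3.0
t = np.linspace(1e-9, 5e-6, 200)

# Equivalent cosh/sinh form of the same equation
X = alpha_inf*q**2 + omega_C*(1.0 + r)
D = np.sqrt(X**2 - 4.0*alpha_inf*q**2*omega_C + 0j)
Y = alpha_inf*q**2 + omega_C*(r - 1.0)
cosh_form = np.exp(-0.5*X*t)*(np.cosh(0.5*D*t) - (Y/D)*np.sinh(0.5*D*t))
assert np.allclose(grating_amplitude(t, alpha_inf, q, omega_C, r), cosh_form)

# Delta C -> 0 recovers the single-exponential decay exp(-alpha q^2 t)
assert np.allclose(grating_amplitude(t, alpha_inf, q, omega_C, 0.0),
                   np.exp(-alpha_inf*q**2*t))
```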
By virtue of two additional model parameters, $a_C$ and $b_C$, the generalized \textit{Havriliak-Negami} (HN) model \cite{havriliak1966complex}: \begin{equation} C(\omega)=C_\infty+\frac{\Delta C}{\left[1+\left(i\omega \tau_C\right)^{a_C}\right]^{b_C}}=C_\infty+\frac{\Delta C}{\left[1+\left(i\frac{\omega}{\omega_C}\right)^{a_C}\right]^{b_C}}, \label{C_omega_HN} \end{equation} has been shown to be more effective (note that the HN model reduces to the Debye model for $a_C=b_C=1$).\\ In this case, the temperature response in the frequency domain is given by: \begin{equation} \tilde{T}(x,\omega)=T_0\delta(\omega)-\frac{iQ_0\left[\omega_C^{a_C}+(i\omega)^{a_C}\right]^{b_C}\cos(qx)}{2\pi \rho C_\infty\left\{\omega\left[\omega_C^{a_C}+(i\omega)^{a_C}\right]^{b_C}+\frac{\Delta C}{C_\infty}\omega\omega_C^{a_Cb_C}-i\alpha_\infty q^2\left[\omega_C^{a_C}+(i\omega)^{a_C}\right]^{b_C}\right\}}. \label{temperature_in_freq_C_omega_HN} \end{equation} As the exponents $a_C$ and $b_C$ are typically non-integer, an analytical derivation of the inverse Fourier transform of the latter expression is cumbersome. Therefore, in this work, this inverse Fourier transform was performed numerically.\clearpage \end{widetext} \section{ISS Signal} \label{Sec:displacement} \subsection{Constitutive equation} \label{SubSec:Constitutive_Eq} Impulsive stimulated scattering occurs due to coherent diffraction of a probe beam traversing a medium of interest in which a spatially periodic strain pattern, coupled to the refractive index, is present. The ISS signal is therefore proportional to the magnitude of the strain grating in the medium. In this subsection we derive, for different scenarios of the relaxation behavior, an expression for the displacement and strain. We assume that the viscoelastic behavior of the material can be described by the Kelvin-Voigt model, corresponding to a lumped model containing a spring and a dashpot in parallel (as described on page 87 of Ref.
\cite{auld1973acoustic}).\\ Under this assumption, the constitutive equations are: \begin{equation} \left\{\begin{array}{l} \displaystyle{\rho \frac{\partial^2 \mathbf{u}}{\partial t^2}=\nabla\cdot \sigma}\\ \\ \displaystyle{\sigma=\mathbf{C}\varepsilon+\eta\frac{\partial \varepsilon}{\partial t},} \end{array}\right. \label{constitutive_relations} \end{equation} where $\mathbf{u}$ (m) is the displacement, $\sigma$ (Pa) is the stress, $\mathbf{C}$ (Pa) is the stiffness matrix and $\eta$ (Pa s) is the viscosity tensor. The strain $\varepsilon$ can be written as: \begin{equation} \varepsilon=\nabla_S\mathbf{u}-\gamma_M \Delta T, \label{definition_of_strain_C} \end{equation} where $\nabla_S\mathbf{u}=\frac{\nabla \mathbf{u}+\nabla^T\mathbf{u}}{2}$, $\gamma_M$ (K$^{-1}$) is the matrix of linear expansion coefficients and $\Delta T$ is the temperature variation that drives the mechanics \cite{gandolfi2020optical}. This approach is in agreement with the Green-Lindsay theory for thermoviscoelastic media \cite{mukhopadhyay1999relaxation,othman2012fundamental}.\\ We write the viscoelastic tensor as $\eta=\tau_\eta \mathbf{C}$ \footnote{As indicated on page 88 of Ref. \cite{auld1973acoustic}, the viscosity tensor has the same form as the stiffness matrix. Hence, we can write the viscosity tensor as $\eta=\tau_\eta C$, where $C$ is the stiffness matrix \cite{mukhopadhyay1999relaxation,othman2012fundamental}.}, where $\tau_\eta$ represents the damping time, and we assume that the medium is homogeneous and isotropic.
Under these assumptions, the equation governing the displacement reads: \vspace{3cm} \begin{widetext} \begin{equation} \frac{\partial^2 u_x}{\partial t^2}= c_L^2\left(1+\tau_\eta\frac{\partial}{\partial t}\right)\frac{\partial^2 u_x}{\partial x^2}-(3c_L^2-4c_T^2)\gamma\left(1+\tau_\eta\frac{\partial}{\partial t}\right)\frac{\partial T}{\partial x}, \label{Expansion_isotropic} \end{equation} where $c_L=\sqrt{(\lambda+2\mu)/\rho}$ and $c_T=\sqrt{\mu/\rho}$ are the longitudinal and transverse velocities (m/s), with $\lambda$ (Pa) and $\mu$ (Pa) the two Lam\'{e} coefficients, and $\gamma$ is the linear expansion coefficient. \end{widetext} \subsection{ISS response in the case of frequency independent heat capacity and thermal expansion} \label{SubSec:Freq_ind_u} In this subsection we derive, for a non-relaxing medium, a general expression for the displacement occurring when the system is excited by a transient optical grating. \\ To do this, we apply the temporal Fourier transform to Eq. \ref{Expansion_isotropic} to get: $$-\omega^2 \tilde{u}_x= c_L^2(1+i\omega\tau_\eta)\frac{\partial^2 \tilde{u}_x}{\partial x^2}+$$ \begin{equation} -\left(3c_L^2-4c_T^2\right)\left(1+i\omega \tau_\eta\right)\gamma\frac{\partial \tilde{T}}{ \partial x}. \label{Expansion_isotropic_FT} \end{equation} By defining: \begin{equation} c^2(\omega)=c_L^2(1+i\omega\tau_\eta) \label{def_c} \end{equation} and \begin{equation} \xi(\omega)=3-4\frac{c_T^2}{ c_L^2}, \label{def_xi} \end{equation} we can write Eq. \ref{Expansion_isotropic_FT} in the more compact form: \begin{equation} \frac{\partial^2 \tilde{u}_x}{\partial x^2}+\frac{\omega^2}{c^2(\omega)} \tilde{u}_x=\xi(\omega)\gamma\frac{\partial \tilde{T}}{ \partial x}.
\label{Expansion_isotropic_FT_compact} \end{equation} In order to calculate the displacement occurring due to the TTG excitation, we use the solution for the temperature in the frequency domain derived in Subsection \ref{Subsec:Frequency_independent_heat_capacity} (Eq. \ref{temperature_in_freq}): \begin{equation} \frac{\partial^2 \tilde{u}_x}{\partial x^2}+\frac{\omega^2}{c^2(\omega)} \tilde{u}_x=Z(\omega) \sin(qx), \label{Expansion_isotropic_FT_compact2} \end{equation} where \begin{equation} Z(\omega)=-\frac{qQ_0\xi(\omega)\gamma}{2\pi i \rho C( \omega-i\alpha q^2)}. \label{def_Z} \end{equation} The general solution of Eq. \ref{Expansion_isotropic_FT_compact2} is $\tilde{u}(x,\omega)=z(x,\omega)+z_p(x,\omega)$, where $z_p(x,\omega)$ is a particular solution of Eq. \ref{Expansion_isotropic_FT_compact2}, while $z(x,\omega)$ is the solution of the associated homogeneous differential equation. It can be shown that \begin{equation} z_p(x,\omega)=\frac{Z(\omega)c^2(\omega)}{\omega^2- q^2c^2(\omega)}\sin(qx) \end{equation} is a particular solution of Eq. \ref{Expansion_isotropic_FT_compact2}.\\ In order to have the system at rest before the excitation (i.e. $u(x,t)=0$ and $\frac{du}{dt}(x,t)=0$ for negative times) and to avoid divergence of the displacement at infinity, we must have $z(x,\omega)=0$ $\forall \omega$.\\ Hence the final solution is: \begin{equation} \tilde{u}(x,\omega)=z_p(x,\omega)=\frac{Z(\omega)c^2(\omega)}{\omega^2- q^2c^2(\omega)}\sin(qx). \end{equation} Substituting back the expressions for $Z(\omega)$, $\xi(\omega)$ and $c(\omega)$ in the latter equation, we obtain: \begin{equation} \tilde{u}(x,\omega)=-\frac{qQ_0\gamma\left(3c_L^2-4c_T^2\right)\left(1+i\omega \tau_\eta\right)}{2\pi i \rho C(\omega-i\alpha q^2)\left(\omega-\omega_3\right)\left(\omega-\omega_4\right)}\sin(qx), \label{u_omega_substituted2} \end{equation} where \begin{equation} \omega_3=i\frac{q^2}{2}\left[c_L^2\tau_\eta-\sqrt{c_L^4\tau_\eta^2-\frac{4c_L^2}{q^2}} \right] \label{def_of_omega3} \end{equation} and \begin{equation} \omega_4=i\frac{q^2}{2}\left[c_L^2\tau_\eta+\sqrt{c_L^4\tau_\eta^2-\frac{4c_L^2}{q^2}} \right]. \label{def_of_omega4} \end{equation} The time domain expression for the displacement can be obtained by applying an inverse Fourier transform to Eq. \ref{u_omega_substituted2} (see Appendix \ref{appen:res_th}), resulting in: \begin{widetext} $$u(x,t)=-\frac{qQ_0\gamma}{\rho C}\sin(qx)\left(3c_L^2-4c_T^2\right)\left\{\frac{1-\alpha q^2 \tau_\eta}{\left(i\alpha q^2-\omega_3\right)\left(i\alpha q^2-\omega_4\right)}\exp\left(-\alpha q^2t\right)\right.+$$ \begin{equation} \left.+\frac{1+i\omega_3 \tau_\eta}{(\omega_3-i\alpha q^2)\left(\omega_3-\omega_4\right)}\exp\left(i \omega_3 t\right)+\frac{1+i\omega_4 \tau_\eta}{(\omega_4-i\alpha q^2)\left(\omega_4-\omega_3\right)}\exp\left(i \omega_4 t\right)\right\}\theta(t). \label{u_t_non_dep_on_freq} \end{equation} \subsection{ISS response in the case of frequency dependent heat capacity and thermal expansion described by the Debye model} \label{SubSec:Freq_dep_u} In this section we assume that the heat capacity depends on frequency according to the Debye model, in analogy with Subsection \ref{Subsum:Debye}. Hence, we substitute Eq. \ref{C_omega} into Eq.
\ref{u_omega_substituted2} to get: \begin{equation} \tilde{u}(x,\omega)=-\frac{qQ_0\gamma\left(3c_L^2-4c_T^2\right)\left(1+i\omega \tau_\eta\right)(\omega-i\omega_C)}{2\pi i \rho C_\infty(\omega-\omega_1)(\omega-\omega_2)\left(\omega-\omega_3\right)\left(\omega-\omega_4\right)}\sin(qx), \label{u_omega_substituted_C_omega} \end{equation} \end{widetext} where $\omega_1$ and $\omega_2$ are defined in Eqs. \ref{omega1_def} and \ref{omega2_def}.\\ Furthermore, we also assume the linear expansion coefficient to be frequency dependent, following the Debye expression: \begin{equation} \gamma(\omega)=\gamma_\infty+\frac{\Delta \gamma}{1+i\omega \tau_\gamma}=\gamma_\infty+\frac{\Delta \gamma}{1+i\frac{\omega}{\omega_\gamma}}, \label{gamma_omega} \end{equation} where $\gamma_\infty$ and $\Delta \gamma$ represent the instantaneous and the additional relaxing contributions of the response of the volume to a temperature change, respectively. $\omega_\gamma=\tau_\gamma^{-1}$ is the associated relaxation frequency.\\ \\ For the sake of a simple analytical treatment, and given that the focus of this work is on the thermal expansion part of the signal and not on the superposed acoustic part, in the following we neglect the frequency and temperature dependence of the elastic moduli and of the density \cite{jensen2018slow,hecksher2017toward,klieber2013mechanical,blazhnov2004temperature}. \\ With this choice, the equation to be solved reduces to: \begin{equation} \frac{\partial^2 \tilde{u}_x}{\partial x^2}+\frac{\omega^2}{c^2(\omega)} \tilde{u}_x=\xi(\omega)\left(\gamma_\infty+\frac{\Delta \gamma}{1+i\frac{\omega}{\omega_\gamma}}\right)\frac{\partial \tilde{T}}{ \partial x}.
\label{Expansion_isotropic_FT_compact_gamma_omega} \end{equation} The expression for $\gamma(\omega)$ can be rewritten as: \begin{equation} \gamma(\omega)=\gamma_\infty\frac{\omega-i\omega_\gamma\left(1+\frac{\Delta \gamma}{\gamma_\infty}\right)}{\omega-i\omega_\gamma}=\gamma_\infty\frac{\omega-\omega_6}{\omega-\omega_5}, \label{gamma_omega_2} \end{equation} where \begin{equation} \omega_5=i\omega_\gamma \label{omega5_def} \end{equation} and \begin{equation} \omega_6=i\omega_\gamma\left(1+\frac{\Delta \gamma}{\gamma_\infty}\right). \label{omega6_def} \end{equation} \vspace{0.5 cm} Substituting Eq. \ref{gamma_omega_2} into Eq. \ref{u_omega_substituted_C_omega} we get: \begin{widetext} \begin{equation} \tilde{u}_x(x,\omega)=-\frac{qQ_0\gamma_\infty\left(3c_L^2-4c_T^2\right)\left(1+i\omega \tau_\eta\right)(\omega-i\omega_C)(\omega-\omega_6)}{2\pi i \rho C_\infty\prod_{j=1}^{5}(\omega-\omega_j)}\sin(qx). \label{u_omega_substituted_C_omega2} \end{equation} By inverse Fourier transforming the latter expression we obtain the solution for the displacement in the time domain: \begin{equation} u_x(x,t)=-\left[\frac{qQ_0\gamma_\infty \sin(qx)}{\rho C_\infty}\right]\left(3c_L^2-4c_T^2\right) \sum_{l=1}^{5}\left\{\left(1+i\omega_l \tau_\eta\right)(\omega_l-i\omega_C)(\omega_l-\omega_6)\left(\prod_{\substack{j=1 \\ j\neq l}}^{5}{1\over\omega_l-\omega_j}\right)\exp(i\omega_l t)\right\}\theta(t). \label{u_t_substituted_C_omega} \end{equation} The ISS signal $U_{ISS}(t)$ is proportional to the amplitude of the strain grating $\Delta \rho/\rho$ \cite{fivez2011dynamics,yan1987impulsive}. Since the strain of the 1D displacement pattern equals its spatial derivative, the ISS signal can be derived from Eq. \ref{u_t_substituted_C_omega} as: \begin{equation} U_{ISS}(t)\propto \max_{x}\left[\frac{\partial u_{x}(x,t)}{\partial x}\right].
\end{equation} \subsection{ISS response in the case of frequency dependent heat capacity and thermal expansion described by the Havriliak-Negami model} \label{SubSec:Freq_dep_u_HN} In the HN scenario, the thermal expansion response reads: \begin{equation} \gamma(\omega)=\gamma_\infty+\frac{\Delta \gamma}{\left[1+\left(i\omega \tau_\gamma\right)^{a_\gamma}\right]^{b_\gamma}}=\gamma_\infty+\frac{\Delta \gamma}{\left[1+\left(i\frac{\omega}{\omega_\gamma}\right)^{a_\gamma}\right]^{b_\gamma}}, \label{gamma_omega_HN} \end{equation} where $a_\gamma$ and $b_\gamma$ are additional model parameters.\\ Taking the heat capacity according to the HN model as well, as described in Subsection \ref{Subsum:Havriliak}, we get: \begin{equation} \tilde{u}(x,\omega)=-\frac{qQ_0\gamma_\infty\left(3c_L^2-4c_T^2\right)\left(1+i\omega \tau_\eta\right)\left[\omega_C^{a_C}+(i\omega)^{a_C}\right]^{b_C}\left\{\left[\omega_\gamma^{a_\gamma}+(i\omega)^{a_\gamma}\right]^{b_\gamma}+\frac{\Delta \gamma}{\gamma_\infty}\omega_\gamma^{a_\gamma b_\gamma}\right\}\sin(qx)}{2\pi i \rho C_\infty\left\{\omega\left[\omega_C^{a_C}+(i\omega)^{a_C}\right]^{b_C}+\frac{\Delta C}{C_\infty}\omega\omega_C^{a_Cb_C}-i\alpha_\infty q^2\left[\omega_C^{a_C}+(i\omega)^{a_C}\right]^{b_C}\right\}\left(\omega-\omega_3\right)\left(\omega-\omega_4\right)\left[\omega_\gamma^{a_\gamma}+(i\omega)^{a_\gamma}\right]^{b_\gamma}}. \label{u_omega_substituted_HN} \end{equation} The ISS signal can then be obtained in analogy with Subsection \ref{SubSec:Freq_dep_u}.
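Since the exponents are non-integer, the HN expressions of Eqs. \ref{C_omega_HN} and \ref{gamma_omega_HN} are evaluated numerically in practice. A minimal sketch of the HN response function, used here for both $C(\omega)$ and $\gamma(\omega)$, checking the Debye limit $a=b=1$ and the static and high-frequency limits (all parameter values assumed for illustration):

```python
import numpy as np

def hn_response(omega, x_inf, dx, omega_r, a, b):
    """Havriliak-Negami response, cf. Eqs. (C_omega_HN) and (gamma_omega_HN);
    a = b = 1 recovers the Debye form."""
    return x_inf + dx / (1.0 + (1j*omega/omega_r)**a)**b

# Assumed illustrative values (heat-capacity-like magnitudes)
x_inf, dx, omega_r = 500.0, 1300.0, 5.0e6
w = np.logspace(3, 9, 400)

# a = b = 1 coincides with the Debye expression
assert np.allclose(hn_response(w, x_inf, dx, omega_r, 1.0, 1.0),
                   x_inf + dx/(1.0 + 1j*w/omega_r))

# Static and high-frequency limits: x_inf + dx and x_inf, respectively
assert abs(hn_response(1e-6, x_inf, dx, omega_r, 0.6, 0.8) - (x_inf + dx)) < 1e-3
assert abs(hn_response(1e18, x_inf, dx, omega_r, 0.6, 0.8) - x_inf) < 0.01
```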
\end{widetext} \section{Comparison with stretched exponential model \label{Sec:comparison_with_liter}} \begin{figure} \begin{center} \includegraphics[width=0.38\textwidth]{Figure1_PRB.pdf} \caption{Plot of the time dependence of the ISS signal obtained with the SEM (black curves), together with most-squares fits of this signal with the Debye model (blue curves) and the HN model (red curves), for different temperature-wavenumber combinations, based on material parameters of glycerol reported in \cite{paolucci2000impulsive} and listed in Table \ref{table_Param}. The small fitting residues (fitting curve minus SEM curve) indicate that the Debye based model is adequate. Thanks to the two additional model parameters, the HN model fits even better. In each panel, all the curves are normalized to the maximum of the SEM ISS signal.} \label{ISS_strVsHN} \end{center} \end{figure} A semi-empirical model (SEM) describing ISS signals in glassformers was introduced along with the first experimental reports on ISS signals in salol \cite{yang1995t,yang1995impulsive1,yang1995impulsive2} and has been successfully used to fit ISS data on glycerol \cite{paolucci2000impulsive}. The SEM expression describing the ISS signal is \footnote{ Since in this work we study the ISS signal detected with a heterodyne experimental setup, the right-hand side of Eq. \ref{Str_exp_expr} is not squared. This choice is at variance with Ref. \cite{paolucci2000impulsive}, which is based on homodyne detection. }: $$I(t)=(A+B)\exp\left(-\Gamma_Ht\right)+$$ \begin{equation} -A\exp\left(-\Gamma_At\right)\cos\left(\omega_At\right)-B\exp\left[-\left(\Gamma_Rt\right)^\beta\right], \label{Str_exp_expr} \end{equation} where the first term is associated with thermal diffusion ($\Gamma_H$ being the thermal decay rate), while the second corresponds to the acoustics ($\omega_A$ being the acoustic oscillation frequency and $\Gamma_A$ the acoustic damping rate).
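The heterodyne SEM signal of Eq. \ref{Str_exp_expr} is straightforward to evaluate numerically. A minimal sketch, using the case \#1 parameters of Table \ref{table_Param} with $A$ normalized to 1, verifying that the signal starts at zero and then rises:

```python
import numpy as np

# SEM parameters, case #1 of Table (T0 = 250 K, q = 3.05e5 1/m); A set to 1
A = 1.0
B = 2.03 * A                         # from the ratio B/A = 2.03
Gamma_H, Gamma_A, omega_A = 1.14e4, 3.5e7, 0.98e9
beta, Gamma_R = 0.6, 5.5e6

def I_sem(t):
    """Heterodyne SEM signal, Eq. (Str_exp_expr)."""
    return ((A + B)*np.exp(-Gamma_H*t)
            - A*np.exp(-Gamma_A*t)*np.cos(omega_A*t)
            - B*np.exp(-(Gamma_R*t)**beta))

t = np.logspace(-10, -3, 1000)       # time axis, s
signal = I_sem(t)
assert abs(I_sem(np.array([0.0]))[0]) < 1e-12   # signal starts at zero
assert signal.max() > 0.5                        # thermal expansion rise
```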
The use of a stretched-exponential term, also known as Kohlrausch-Williams-Watts (KWW), was inspired by other response functions in the physics of supercooled liquids, and was aimed at coping with the empirical observation that the initial thermal expansion rise cannot be fitted by a simple exponential. $\Gamma_R$ is the structural relaxation rate and $0<\beta\leq 1$ the stretching exponent. The coefficients $A$ and $B$ account for the weights of each term contributing to the total ISS signal.\\ The SEM has proved useful to describe and fit the ISS signal measured on supercooled glycerol, as described by Paolucci et al. \cite{paolucci2000impulsive}. The fitting parameters for the temperature-wavenumber combinations of interest treated by Paolucci et al. are recalled in Table \ref{table_Param} and were used to simulate the respective theoretical ISS signals, shown in Fig. \ref{ISS_strVsHN} (black curves). \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|}\hline Case &\#1 & \#2&\#3&\#4 \\ \hline $T_0$ (K) & 250 & 250 & 230 & 230 \\ \hline $q$ (m$^{-1}$) & 3.05$\times 10^5$ & 1.036$\times 10^6$ & 3.05$\times 10^5$ & 1.036$\times 10^6$ \\ \hline $\Gamma_H$ (s$^{-1}$)\footnote{ The thermal decay rate was estimated as $\Gamma_H=\kappa_T q^2/(\rho\ C)$, where $\kappa_T=0.28\ \um{W/m\ K}$, $\rho=1260\ \um{kg/m^3}$ \cite{gupta2012scope} and $C=1800$ J/kg K (for T$_0=250$ K) or $C=1500$ J/kg K (for T$_0$=230 K), in agreement with Refs. \cite{bentefour2003broadband,bentefour2004thermal}. } & 1.14$\times 10^4$ & 1.33$\times 10^5$ & 1.38$\times 10^4$ & 1.59$\times 10^5$ \\ \hline $\Gamma_A$ (s$^{-1}$)\footnote{ The acoustic damping rate was taken from Fig. 3 of Ref. \cite{paolucci2000impulsive}. } & 3.5$\times 10^7$ & 8.5$\times 10^7$ & 2.0$\times 10^6$ & 2.5$\times 10^6$ \\ \hline $\omega_A$ (Grad/s)\footnote{ The acoustic oscillation frequency was obtained as the product between $q$ and the speed of sound reported in Fig. 3 of Ref. \cite{paolucci2000impulsive}.
} & 0.98 & 3.32 & 1.03 & 3.49 \\ \hline $\beta$\footnote{ The stretch exponent was taken from Fig. 5 of Ref. \cite{paolucci2000impulsive}. } & 0.6 & 0.6 & 0.6 & 0.6 \\ \hline $\Gamma_R$ (s$^{-1}$)\footnote{ The structural relaxation rate was obtained as $\Gamma_R=\Gamma({1/ \beta})/\left(\langle\tau\rangle\beta \right)$, where $\Gamma$ is the Gamma function, and $\langle\tau\rangle$ was taken from Fig. 4 of Ref. \cite{paolucci2000impulsive}. } & 5.5$\times 10^6$ & 5.5$\times 10^6$ &1.1$\times 10^5$ & 1.1$\times 10^5$ \\ \hline $B/A$\footnote{ The ratio between the coefficients $B$ and $A$ was calculated as $B/A=f/(1-f)$, where $f=0.67$ is the Debye-Waller factor taken from Fig. 7 of Ref. \cite{paolucci2000impulsive}. } & 2.03 & 2.03 & 2.03 & 2.03 \\ \hline \end{tabular} \end{center} \caption{Material parameters reported by Paolucci et al. based on SEM fits of ISS signals in supercooled glycerol. The parameters reported for case \#4 imply a SEM ISS signal with an unphysical negative tail, as reported in Fig. \ref{Case4_ISS_signal}. This unphysical behavior is not present in the other cases.} \label{table_Param} \end{table} Three cases are considered in Fig. \ref{ISS_strVsHN}: in panels a and b the ISS signal of glycerol is reported for the two grating wavenumbers $q=3.05\times10^5\ \um{m^{-1}}$ and $q=1.036\times10^6\ \um{m^{-1}}$, respectively, at the same temperature $T_0=250$ K (cases \#1 and \#2 in Table \ref{table_Param}, respectively). Panel c reports the ISS signal for a lower temperature, $T_0=230$ K, and for the smallest grating wavenumber available, i.e. $q=3.05\times10^5\ \um{m^{-1}}$ (case \#3 in Table \ref{table_Param}). For each panel, the value of the coefficient $A$ was chosen in order to have the maximum of the ISS signal normalized to 1.\\ In order to verify whether the Debye and HN models developed in the previous section are able to reproduce the SEM based ISS signal, we have fitted the black curves in Fig.
\ref{ISS_strVsHN} with the respective expressions (blue curves: Debye, red curves: HN). The fitting was carried out by implementing a most-squares fitting (MSF) protocol \cite{jackson1976most,salenbien2011laser} to search for the minimum of the cost function, defined as the sum of the squared residuals (SSR). MSF is advantageous over the commonly used least-squares fitting (LSF) as it is able to take into account the possible co-variance of the multiple fitting variables, namely, different combinations of fitting parameters yielding a statistically indistinguishable cost function value SSR (local minima). Both models fit very well, with the residues of the HN model being the smallest, thanks to the additional two fitting parameters. \begin{table*} \begin{center} \begin{tabular}{|c|c|c|c|}\hline Case & \#1 & \#2 & \#3 \\ \hline $C_{\infty}$ Debye (J/kg K) & 452 & 460 & 360 \\ \hline $C_{\infty}$ HN (J/kg K) & $503\in[0,6\times 10^5]$ & $503\in[0,6\times 10^5]$ & $390\in [388,400]$\\ \hline $C_0$ Debye (J/kg K) & 1837 & 1882 & 1645 \\ \hline $C_0$ HN (J/kg K) & $1777\in[1775,1798]$ & $1777\in[1775,1798]$ & $1345\in[1340,1351]$ \\ \hline $\Delta \gamma/\gamma_\infty$ Debye & 10 & 10 & 10 \\ \hline $\Delta \gamma/\gamma_\infty$ HN & $9.4\in[9.3,9\times 10^3]$ & $9.4\in[9.3,9\times 10^3]$ & $9.8\in[9.7,10]$ \\ \hline $\omega_C$ (Mrad/s) Debye & 5.1 & 6.6 & 0.2 \\ \hline $\omega_C$ (Mrad/s) HN &$3.5\in[0,4\times10^3]$&$3.5\in[0,4\times10^3]$&$1.5\in[1.1,500]$\\ \hline $\omega_\gamma$ (krad/s) Debye & 3190 & 4080 & 150 \\ \hline $\omega_\gamma$ (krad/s) HN & $2260\in[2230,2320] $ & $2260\in[2230,2320] $& $38\in[0,39]$ \\ \hline $a_c$ HN & $0.89\in[0.06,0.9]$ & $0.89\in[0.06,0.9]$ & $0.544\in[0.543,0.554]$ \\ \hline $b_c$ HN & $0.52\in[0.02, 0.55]$ & $0.52\in[0.02, 0.55]$ & $0.78\in[0.75,1]$ \\ \hline $a_\gamma$ HN & $0.9\in[0,1]$&$0.9\in[0,1]$ &$0.7\in[0,1]$ \\ \hline $b_\gamma$ HN & $0.68\in[0.67,1]$ & $0.68\in[0.67,1]$ & $0.5\in[0.1,1]$ \\ \hline \end{tabular} 
\end{center} \caption{Fitting parameter values obtained by fitting the SEM curves reported in Fig. \ref{ISS_strVsHN} with the Debye and HN model based expressions for the ISS signal. For the HN model, we also report the confidence interval next to each best-fit parameter.} \label{table_fitting_coeff} \end{table*} The obtained fitting parameters are summarized in Table \ref{table_fitting_coeff}. For the HN model, for each parameter, the fitting error was determined by most-squares analysis of the cost function in the multidimensional space of fitting parameters and thus includes the effect of covariance with the other fitting parameters \cite{jackson1976most,salenbien2011laser}. Instead of displaying the parameter $\Delta C$ obtained from the fit, in Table \ref{table_fitting_coeff} we have reported the zero-frequency heat capacity $C_0=C_\infty+\Delta C$. The latter definition is retrieved from Eq. \ref{C_omega} in the limit $\omega\rightarrow0$. The fits with the HN model expression have been performed simultaneously on cases $\#1$ and $\#2$, for which the glycerol equilibrium temperature is the same, but the grating wavenumbers are different. In ISS signals, besides a change of acoustic frequency and damping, a difference in wavenumber results (when viewing the signal on a logarithmic time scale) in a different ``onset'' time ($1/(q^2\alpha)$) of the thermal-diffusion-driven washing out of the thermal (expansion) grating. Once ongoing, this exponential decay dominates the signal behavior and masks the influence of the parameters that determine the onset of the relaxation of the temperature to heat and of the thermal expansion to the temperature response. Simultaneously fitting signals at two wavenumbers, and thus considering signals containing two mixing ratios of the respective influences, limits the possibilities for covariant influences of fitting parameter values on the signals, and thus leads to smaller uncertainties. Interestingly, the uncertainty on $C_0$ is always small.
This can be explained by the strong influence of the low frequency/late time limit of the heat capacity on the signal, via the always significant thermal-diffusion-related exponential decay tail of the signal. For the chosen signals, this tail occurs later than the relaxation times, and it is thus not affected by a possible degeneracy of the effect of $C_0$ on the signal with relaxation-influenced thermal expansion parameters. The influence of $C_\infty$ on the initial part of the signal goes along with the influence of the thermal expansion and of the acoustic wave related parameters. This leads to a larger fitting covariance. \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{Figure2_PRB.pdf} \caption{Time evolution of the temperature obtained for the Debye and HN models (blue and red curves, respectively), simulated for different temperature-wavenumber combinations. In each panel, all the curves are normalized to the value of the Debye model temperature at the shortest displayed time\protect\footnote{ Eqs. \ref{T_t_expression_final} and \ref{temperature_in_freq_C_omega_HN} depend on the spatial coordinate. The curves in Fig. \ref{temperature_strVsHN} were evaluated at the same spatial coordinate. The particular choice of the latter is irrelevant thanks to the proposed normalization.}.} \label{temperature_strVsHN} \end{center} \end{figure} The limited effect of $C_\infty$ on the ISS signal can be further understood by looking at the calculated temperature evolution after impulsive illumination, as depicted in Fig. \ref{temperature_strVsHN} for the Debye and HN models (blue and red curves, respectively). The temperature evolution for the Debye model was obtained by evaluating Eq. \ref{T_t_expression_final} upon insertion of the Debye model parameters listed in Table \ref{table_fitting_coeff}. For the calculation based on the HN model, the thermal parameters were first inserted into Eq.
\ref{temperature_in_freq_C_omega_HN} to calculate the HN temperature in the frequency domain. The latter was then numerically Fourier transformed to the time domain, obtaining the red curves in Fig. \ref{temperature_strVsHN}. For long times the Debye and HN model based temperature evolutions match. However, despite the fact that the Debye and HN models yield a very similar ISS signal (as shown in Fig. \ref{ISS_strVsHN}), the corresponding fitting parameters reported in Table \ref{table_fitting_coeff} imply a very different temperature profile at short times. This indicates a very strong degeneracy between the early dynamics of the temperature, the thermal expansion and the acoustic wave generation.\\ From another point of view, one may wonder why two different temperature profiles can give rise to the same ISS signal. This can also be seen by looking at the mathematics: substituting the expression for the temperature into the source term of Eq. \ref{Expansion_isotropic_FT_compact} for the displacement, we see that the source term is proportional to $\gamma/\left[\rho C\left(\omega-i\alpha q^2\right)\right]$ (as reported in Eq. \ref{def_Z}). Hence, at high frequencies the source term is proportional to $\gamma/C$. Fig. \ref{C(omega)_Gamma(omega)} shows that the real parts of $C(\omega)$ and $\gamma(\omega)$ for case $\#3$ follow a very similar trend, both for the Debye and the HN model. \begin{figure}[h] \begin{center} \includegraphics[width=0.41\textwidth]{Figure3_PRB.pdf} \caption{Real part of $\left(C(\omega)-C_\infty\right)/C_\infty$ (brown lines, left axis) and of $\left(\gamma(\omega)-\gamma_\infty\right)/\gamma_\infty$ (purple lines, right axis), as a function of frequency (horizontal axis, log scale). The full and dashed lines refer to the Debye and HN model, respectively. These curves were calculated for the case $T_0=230$ K and $q=3.05\times 10^5\ \um{m^{-1}}$ (case $\#3$).
} \label{C(omega)_Gamma(omega)} \end{center} \end{figure} We verified that this is also the case for the imaginary parts of these quantities. Hence, even if the time dependences of $C$ and $\gamma$ are quite different between the Debye and HN scenarios, their ratio, and thus the corresponding ISS signal, is similar.\\ A similar observation was made for cases $\#1$ and $\#2$.\\ In conclusion, at short times the parameters $C$ and $\gamma$ are degenerate and their individual values cannot be reliably extracted from fitting. Conversely, at low frequency the source term is no longer simply proportional to the ratio $\gamma/C$; hence, the degeneracy is lifted, and at long times the heat capacity and thermal expansion coefficient can be disentangled precisely by the fitting procedure. A prospective route for lifting the large degeneracy between $C_\infty$ and $\gamma_\infty$ could be to further increase the wavenumber, so that the thermal diffusion tail occurs before the heat capacity relaxation time. In that scenario, the shape of the thermal diffusion tail is dominated by the decay time, which gives direct information on $C_\infty$, with little influence of the other parameters. \\ It is worth noting that the ISS signal obtained for case $\#4$, which is representative of a rather low temperature (long relaxation times) and a rather long grating spacing (long thermal diffusion time), reveals in Fig. \ref{Case4_ISS_signal} a temporal span in which the ISS signal, calculated according to the SEM model, is negative. \begin{figure}[h] \begin{center} \includegraphics[width=0.36\textwidth]{Figure4_PRB.pdf} \caption{Plot of the ISS signal obtained with the SEM (black curves), for the case $T_0=230$ K and $q=1.036\times 10^6\ \um{m^{-1}}$ (case $\#4$). The curve is normalized to 1 at the maximum.
The graph shows that the ISS signal goes below zero (dashed red line).} \label{Case4_ISS_signal} \end{center} \end{figure} This unphysical result, which is a consequence of a particular mix of positive and negative terms in Eq. \ref{Str_exp_expr}, prevents an adequate comparison with the models presented here, both in the Debye and HN scenario. Out of curiosity, we have evaluated the conditions for which the SEM-based ISS signal becomes negative. For this evaluation, we have simplified Eq. \ref{Str_exp_expr} by neglecting the acoustic term with respect to the thermal diffusion one, because (i) the amplitude of the former ($A$) is smaller than that of the latter ($A+B$) and (ii) the former decays faster than the latter ($\Gamma_A$ being much larger than $\Gamma_H$). Upon this simplification, the ISS signal in the SEM model becomes: \begin{equation} I(t)=(A+B)\exp\left(-\Gamma_Ht\right)-B\exp\left[-\left(\Gamma_Rt\right)^\beta\right]. \label{Str_exp_expr_simplified} \end{equation} Some algebraic manipulation shows that $I(t)$ is positive for time instants satisfying the following inequality: \begin{equation} t^\beta\geq\left(\Gamma_H\over \Gamma_R^\beta\right)t-{1\over \Gamma_R^\beta}\ln\left(1+\frac{A}{B}\right). \label{inequality_I_bigger_0} \end{equation} It is evident that in the limit $t\rightarrow0$, Eq. \ref{inequality_I_bigger_0} is satisfied, while in the limit $t\rightarrow +\infty$, Eq. \ref{inequality_I_bigger_0} is violated. Hence, there must be at least one time where $I(t)$ changes sign. We call $t^*$ the earliest positive time satisfying $I(t)=0$.\\ Furthermore, since $\beta<1$ for the cases reported in Table \ref{table_Param}, the slope of the left-hand side of Eq. \ref{inequality_I_bigger_0} decreases with increasing time, while the slope of the right-hand side is constant.
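The single sign change of $I(t)$ can also be checked numerically. The following sketch uses purely hypothetical parameter values (the actual fit values are those of Table \ref{table_Param}) and scans Eq. \ref{Str_exp_expr_simplified} on a logarithmic time grid:

```python
import math

# Purely hypothetical SEM parameters (illustration only; the fitted
# values used in the paper are those listed in Table "table_Param").
A, B = 0.5, 1.0      # amplitudes
Gamma_H = 1.0e3      # thermal diffusion decay rate (s^-1)
Gamma_R = 1.0e5      # structural relaxation rate (s^-1)
beta = 0.5           # stretching exponent (< 1)

def I(t):
    """Simplified SEM signal of Eq. (Str_exp_expr_simplified)."""
    return (A + B) * math.exp(-Gamma_H * t) \
           - B * math.exp(-(Gamma_R * t) ** beta)

# Log-spaced time grid from 1e-8 s to 10 s
times = [10 ** (-8 + 9 * k / 2000) for k in range(2001)]
signs = [I(t) > 0 for t in times]
changes = sum(s1 != s2 for s1, s2 in zip(signs, signs[1:]))

# First grid point where I(t) is no longer positive
t_star = next(t for t in times if I(t) <= 0)

# Exactly one sign change: I > 0 before t*, I <= 0 after
assert changes == 1 and signs[0] and not signs[-1]
assert min(I(t) for t in times) < 0.0
```

For these illustrative rates the sign change lands near $0.1$ s; with the fitted parameters of case $\#4$ the same scan would locate $t^*$ in the microsecond range.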
Therefore, for times later than $t^*$, the right-hand side grows faster than the left-hand side and hence there are no other time instants satisfying $I(t)=0$. Summarizing, for $0\leq t\leq t^*$ the ISS signal is positive, while for $t>t^*$ the ISS signal is negative.\\ The derivation of an analytic expression for $t^*$ is challenging. However, we can gain some insight by considering the time instant: \begin{equation} t^*_{low}=\left(\Gamma_H\over \Gamma_R^\beta\right)^{1\over(\beta-1)}, \label{t_star_low} \end{equation} which is a lower bound for $t^*$ \footnote{ $t^*_{low}$ is the time instant solving Eq. \ref{inequality_I_bigger_0} without the last term, i.e.: $$t^\beta=\left(\Gamma_H\over \Gamma_R^\beta\right)t.$$ Hence, we have $$\left(t^*_{low}\right)^\beta=\left(\Gamma_H\over \Gamma_R^\beta\right)t^*_{low}>\left(\Gamma_H\over \Gamma_R^\beta\right)t^*_{low}-{1\over \Gamma_R^\beta}\ln\left(1+\frac{A}{B}\right).$$ The latter relation states that the inequality \ref{inequality_I_bigger_0} is still strictly satisfied at $t=t^*_{low}$, i.e. $I(t^*_{low})>0$; hence $t^*_{low}<t^*$ or, in other words, $t^*_{low}$ is a lower bound for $t^*$}.\\ In Table \ref{table_t_star_low} we have evaluated $t^*_{low}$ for the four cases considered in the current section. \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|}\hline Case &\#1 & \#2&\#3&\#4 \\ \hline $t^*_{low}$ (s) & 0.93 & $2.0\times 10^{-3}$ & $1.6\times 10^{-3}$ & $3.6\times 10^{-6}$ \\ \hline \end{tabular} \end{center} \caption{Lower bound $t^*_{low}$ of the time instant $t^*$ at which the SEM-based ISS signal changes sign, evaluated for the four cases considered in this section.} \label{table_t_star_low} \end{table} For cases $\#1$ to $\#3$, the lower bound for $t^*$ (and, hence, $t^*$ itself) occurs only after several milliseconds. Therefore, the negative ISS signal is not visible in Fig. \ref{ISS_strVsHN}: it occurs at late times that are not shown in the figure. For case $\#4$, $t^*_{low}\sim 3.6\ \um{\mu s}$, paving the way to the detection of a negative ISS signal for times $t>t^*\sim 10\ \um{\mu s}$. This is confirmed in Fig.
\ref{Case4_ISS_signal}.\\ \section{Debye model vs two-temperature model\label{Sec:two_temp_model}} \subsection{Frequency dependent heat capacity \label{Subsec:two_temp_model_HC}} As mentioned earlier, part of the energy that is optically supplied to a relaxing material is channeled, around the relaxation time, into a change of configurational energy that goes along with a structural rearrangement of the amorphous network. In this section, we describe the network's energy distribution in terms of a temperature $T_N$, and we assume that the configurational energy reservoir is in thermal contact with a kinetic (mainly vibrational) energy reservoir (KER), with physically measurable temperature $T$. The energy flux between the two reservoirs is quantified as $G(T-T_N)$, where $G$ (W/(m$^3$ K)) is a (positive) coupling constant; this term indicates that when $T>T_N$, energy flows from the KER to the network. The capabilities of storing and transferring energy within the network are formally described by the network's heat capacity $C_N$, thermal conductivity $k_{T,N}$ and density $\rho_N$.\\ Hence, in this approach, the energy exchange between the KER (vibrational energy) and the network (configurational energy) can be described by a two-temperature model (TTM) \cite{caddeo2017thermal}, yielding the following equations for the KER's temperature $T$ and the network's temperature $T_N$:\\ \begin{equation} \left\{\begin{array}{l} \displaystyle{\rho C \frac{\partial T}{\partial t}=\kappa_T\frac{\partial^2 T}{\partial x^2}+Q(x,t)-G(T-T_N)},\\ \\ \displaystyle{\rho_N C_N \frac{\partial T_N}{\partial t}=k_{T,N}\frac{\partial^2 T_N}{\partial x^2}+G(T-T_N)},\\ \end{array} \right. \label{two_temperatures_model} \end{equation} where $\rho$, $C$ and $\kappa_T$ are the KER's density, heat capacity and thermal conductivity, respectively.
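As a minimal numerical sketch, the spatially homogeneous limit of System \ref{two_temperatures_model} (no diffusion, no source after an impulsive heating of the KER) can be integrated to show the two reservoirs relaxing toward a common temperature; the parameter values below are hypothetical order-of-magnitude choices, not fitted ones:

```python
# Hypothetical, order-of-magnitude parameters (illustration only)
rho_C   = 2.5e6   # KER volumetric heat capacity rho*C  (J m^-3 K^-1)
rhoN_CN = 1.0e6   # network volumetric heat capacity    (J m^-3 K^-1)
G       = 1.0e12  # reservoir coupling constant         (W m^-3 K^-1)

# Impulsive heating: the KER starts 1 K above the network
T, T_N = 1.0, 0.0          # temperature rises (K)
dt, steps = 1.0e-8, 2000   # time step (s), number of steps

for _ in range(steps):
    flux = G * (T - T_N)          # energy flow KER -> network
    T   -= flux / rho_C   * dt
    T_N += flux / rhoN_CN * dt

# Energy conservation fixes the common equilibrium temperature:
# T_eq = rho_C * T(0) / (rho_C + rhoN_CN)
T_eq = rho_C * 1.0 / (rho_C + rhoN_CN)
assert abs(T - T_eq) < 1e-3 and abs(T_N - T_eq) < 1e-3
```

The single relaxation rate of this exchange, $G(1/\rho C+1/\rho_N C_N)$, is what reappears below as the Debye frequency $\omega_C$ of the effective heat capacity.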
The source term $Q$ (W/m$^3$) enters only in the equation for the KER temperature, indicating that the optical excitation delivers energy directly to the KER. The network modifications follow the dynamics occurring in the KER; hence, they are driven by variations of $T$.\\ For the sake of simplicity, we assume that local network reconfigurations only depend on the local energy exchange with the KER, and we neglect possible configurational energy flow within the network. This amounts to assuming a very low value for the network's thermal conductivity, allowing us to drop the term $k_{T,N}{\partial^2 T_N}/\partial x^2$ in System \ref{two_temperatures_model}.\\ The first equation of System \ref{two_temperatures_model} can be reformulated as: \begin{equation} T_N=T+\frac{\rho C}{G} \frac{\partial T}{\partial t}-\frac{\kappa_T}{G}\frac{\partial^2 T}{\partial x^2}-{1\over G}Q(x,t). \label{first_TTM_rewritten} \end{equation} Substituting the latter expression into the second equation of System \ref{two_temperatures_model}, we arrive at the following differential equation: $$\frac{\rho_N C_N}{G}\left(\rho C \frac{\partial^2 T}{\partial t^2}-\frac{\partial Q}{\partial t}+G\frac{\partial T}{\partial t}-\kappa_T\frac{\partial^3 T}{\partial t \partial x^2} \right)=$$ \begin{equation} =-\rho C\frac{\partial T}{\partial t}+Q(x,t)+\kappa_T\frac{\partial^2 T}{\partial x^2}. \label{decoupled_for_T} \end{equation} Applying a Fourier transform to Eq.
\ref{decoupled_for_T} we obtain: $$\frac{\rho_N C_N}{G}\left[-\omega^2\rho C \tilde{T}-i\omega \tilde{Q}(x,\omega)+i\omega G\tilde{T}-i\omega \kappa_T\frac{\partial^2 \tilde{T}}{ \partial x^2} \right]=$$ \begin{equation} =-i\omega\rho C\tilde{T}+\tilde{Q}(x,\omega)+\kappa_T\frac{\partial^2 \tilde{T}}{\partial x^2}, \label{decoupled_for_T_freq} \end{equation} which can be rearranged as: $$\kappa_T\left(1+i\frac{\rho_N C_N}{G}\omega\right)\frac{\partial^2 \tilde{T}}{\partial x^2}+$$ $$+\left[\frac{\rho_N C_N \rho C}{G}\omega^2-i\omega\left(\rho_N C_N +\rho C\right)\right]\tilde{T}+$$ \begin{equation} +\left(1+i\omega\frac{\rho_N C_N}{G} \right) \tilde{Q}(x,\omega)=0. \label{decoupled_for_T_freq_2} \end{equation} Dividing by $\kappa_T(1+i\omega \rho_N C_N/G)$ and rewriting the second term, we have: \begin{equation} \frac{\partial^2 \tilde{T}}{\partial x^2}-i\omega\frac{\rho}{\kappa_T}\left[C+\frac{{\rho_N \over \rho} C_N}{1+i\frac{\rho_N C_N}{G}\omega}\right]\tilde{T}=-{1\over \kappa_T}\tilde{Q}(x,\omega). \label{decoupled_for_T_freq_4} \end{equation} The latter equation can be remapped onto Eq. \ref{diffusion_equation_in_freq_C_omega} upon the substitutions $C\rightarrow C_\infty$, $\rho_N C_N/\rho\rightarrow \Delta C$ and $G/(\rho_N C_N)\rightarrow \omega_C$. Hence, it can be stated that the frequency dependent heat capacity in the frame of the Debye model (as reported in Eq. \ref{C_omega}) is equivalent to the TTM.\\ \subsection{Frequency dependent thermal expansion in the frame of the two-temperature model \label{Sec:two_temp_model_therm_exp}} Along with the process of taking up potential energy, the network undergoes structural changes, and it can thus change its volume. Hence, it can contribute to the system's thermal expansion and add up to the thermal strain that is related to the increase of vibrational energy (which is connected to the anharmonicity of the intermolecular potential minima).
To the best of our knowledge, no models describing the latter point are reported in the literature. Nevertheless, it seems reasonable to assume that the thermal strain produced both by the KER and the network depends on the history of the network's temperature. Assuming an isotropic material, the total thermal strain can hence be written as $\left[\gamma \Delta T\circledast \varphi(t)+\gamma_{N} \Delta T_N\circledast \varphi_N(t)\right]I_d$, where $I_d$ is the identity matrix. Here, the two temperatures are convolved with memory functions $\varphi(t)$ and $\varphi_N(t)$, describing the thermal history of the KER and of the network, respectively.\\ After substituting $\gamma_M \Delta T(x,t)$ with $\left[\gamma \Delta T\circledast \varphi(t)+\gamma_{N} \Delta T_N\circledast \varphi_N(t)\right]I_d$ in Eq. \ref{definition_of_strain_C}, we can repeat the derivation presented in Section \ref{Sec:displacement} analogously.\\ In this way, we arrive at the following equation for the displacement ($\xi(\omega)$ is defined in Eq. \ref{def_xi}): $$\frac{\partial^2 \tilde{u}_x}{\partial x^2}+\frac{\omega^2}{c^2(\omega)} \tilde{u}_x=$$ $$=\xi(\omega)\frac{\partial }{ \partial x}\left(\gamma\tilde{T}(x,\omega)\tilde{\varphi}(\omega)+\gamma_N\tilde{T}_N(x,\omega)\tilde{\varphi}_N(\omega)\right)=$$ \begin{equation} =\xi(\omega)\left(\gamma\tilde{\varphi}(\omega)+\frac{\gamma_N\tilde{\varphi}_N(\omega)}{1+i\frac{\rho_N C_N}{G}\omega}\right)\frac{\partial \tilde{T}}{ \partial x}. \label{Expansion_isotropic_FT_compact_TN} \end{equation} The last step involved the substitution $\tilde{T}_N=\tilde{T}(x,\omega)/\left(1+i\frac{\rho_N C_N}{G}\omega\right)$ (see Appendix \ref{app:Derivation of the} for the proof of the latter expression).\\ We suppose that the network contribution to the thermal strain contains an instantaneous term and a second term accounting for the network's thermal history.
These considerations are well reproduced by a memory function of the type: \begin{equation} \varphi_N(t)=2\pi\delta(t)+2\pi\omega_C\left(1-\chi_\gamma\right)\exp(-\omega_\gamma t)\theta(t), \label{def_of_varphi} \end{equation} which corresponds to the following expression for the network's contribution to the thermal strain: $$\gamma_{N} \Delta T_N\circledast \varphi_N(t)=2\pi \gamma_{N}\Delta T_N(x,t)+$$ \begin{equation} +2\pi\gamma_{N}\omega_C\left(1-\chi_\gamma\right)\int_{-\infty}^{t}\exp\left[-\omega_\gamma(t-\tau)\right]\Delta T_N(\tau)d\tau. \label{equivalence_therm_strain_varphi} \end{equation} With these definitions, thermal events lying further in the network's past are exponentially less important. The temporal cutoff of the exponential is the inverse of a frequency $\omega_\gamma$, which can be written in terms of the frequency $\omega_C=G/(\rho_NC_N)$ (already introduced in Subsection \ref{Subsec:two_temp_model_HC}) as: \begin{equation} \omega_\gamma=\chi_\gamma \omega_C=\chi_\gamma\frac{G}{\rho_NC_N}. \label{chi_gamma} \end{equation} $\chi_\gamma$ quantifies to what extent the thermal expansion relaxation is slower than the heat capacity relaxation.\\ The Fourier transform of the memory function reads \footnote{The Fourier transform of $2\pi\delta(t)$ is $1$. Furthermore, the Fourier transform of $e^{-A t}\theta(t)$, with $A$ real and strictly positive, is $(2\pi)^{-1}(i\omega+A)^{-1}$.}: \begin{equation} \tilde{\varphi}_N(\omega)=1+\frac{\omega_C\left(1-\chi_\gamma\right)}{i\omega+\omega_\gamma}={1\over\chi_\gamma}\frac{1+i{\omega\over \omega_C}}{1+i{\omega\over \omega_\gamma}}. \label{phi_omega} \end{equation} By introducing Eq. \ref{phi_omega} into Eq.
\ref{Expansion_isotropic_FT_compact_TN}, we obtain: \begin{equation} \frac{\partial^2 \tilde{u}_x}{\partial x^2}+\frac{\omega^2}{c^2(\omega)} \tilde{u}_x=\xi(\omega)\left(\gamma\tilde{\varphi}(\omega)+\frac{\gamma_N/\chi_\gamma}{1+i{\omega\over\omega_\gamma}}\right)\frac{\partial \tilde{T}}{ \partial x}. \label{Expansion_isotropic_FT_compact_semi_final} \end{equation} We should also provide an expression for the KER's memory function. If we assume that only the instantaneous value of $T$ is important for the evaluation of the thermal strain, i.e. $\varphi(t)=2\pi\delta(t)$, then $\tilde{\varphi}(\omega)$ becomes identically 1, yielding: \begin{equation} \frac{\partial^2 \tilde{u}_x}{\partial x^2}+\frac{\omega^2}{c^2(\omega)} \tilde{u}_x=\xi(\omega)\left(\gamma+\frac{\gamma_N/\chi_\gamma}{1+i{\omega\over\omega_\gamma}}\right)\frac{\partial \tilde{T}}{ \partial x}. \label{Expansion_isotropic_FT_compact_final} \end{equation} Upon the substitutions $\gamma\rightarrow \gamma_\infty$ and $\gamma_N/\chi_\gamma\rightarrow \Delta \gamma$, the latter equation can be mapped onto Eq. \ref{Expansion_isotropic_FT_compact_gamma_omega}. Again, the choice of the Debye model for $\gamma$ (relation \ref{gamma_omega}) can be justified in terms of the TTM.\\ Equivalently, the choice of the thermal expansion in the frame of the Debye model implies that the thermal strain is related instantaneously to the KER's temperature, while the thermal history of the network has to be accounted for to estimate the thermal strain: the Debye model implies that the memory function of the network is described by Eq. \ref{def_of_varphi}.\\ Analogously, a thermal expansion ruled by the HN model can be justified in terms of the thermal history of the KER and of the network.
However, the complexity of the HN model, which also yields non-integer exponents, precludes a simple and general expression for the memory functions.\\ \section{Experimental results and discussion} \label{experiment} We have heretofore developed generalized physical models addressing the ISS response of glass-forming liquids subject to ultrafast photothermal excitation. \begin{figure}[h] \begin{center} \includegraphics[width=0.5\textwidth]{Figure5_PRB.png} \caption{Scheme of the experimental setup based on the heterodyne-detected transient grating technique. A spatially periodic laser pattern from a pulsed pump laser (red) is formed on the sample to create thermoelastic transients, the latter detected by a coaxially aligned probe laser (green and arrows). In the scheme we sketch the phase mask (PM), the lenses (L1 and L2), the optical cryostat (OC) and the photodetector (PD).} \label{exp_setup} \end{center} \end{figure} Now, an experimental study of the ISS response of glycerol ($>99.0 \%$ purity) under supercooling is presented. An ultrafast heterodyne-detected transient grating (HD-TG) setup is used for the experiment. Fig. \ref{exp_setup} shows the scheme of the setup, in which a ps pump laser pulse at 1064 nm (shown in red) is diffracted by a transmission phase mask (PM) into the two first diffraction orders, namely the $\pm1$ orders. The two are then recombined, via a two-lens (4\textit{f}) imaging system, into the bulk of the sample. The sample is accommodated in a liquid nitrogen optical cryostat (OC) to allow temperature control. The light interference forms a spatially periodic light pattern and creates a transient local density grating (thermoelastic transients), with a period identical to the spacing of the light grating, $d$.
For a given light wavelength $\lambda$, one can tune the spacing of the excitation pattern by varying the intersecting angle $\theta$ of the two beams, namely via $d=\lambda/\left[2 n \sin{(\theta/2)}\right]$, with $n$ the optical refractive index of the sample medium. In this setup, the $\theta$-tuning is implemented by translating a phase mask (PM) array containing multiple PMs of varying period \cite{verstraeten2015determination}. Alternatively, one can also rotate the PM to realize the $\theta$-tuning \cite{vega2015laser}. The detection of ISS takes advantage of the optical heterodyne scheme \cite{maznev1998optical}, in which the probe beam from a continuous wave (CW) laser at 532 nm wavelength (shown in green with black arrows in Fig. \ref{exp_setup}) is aligned to be coaxial with the pump beam. Both beams are sent to the PM and diffracted into excitation and probe/reference beam pairs. This heterodyne scheme has been widely used in the field for studying optically transparent or weakly absorbing liquids \cite{brodard2005application,taschin2008time,glorieux2002thermal} owing to its high sensitivity. A more detailed description of the setup can be found in Ref. \cite{salenbien2012photoacoustic}. In our experiments, the temperature scanning measurements are performed from 320 K to 200 K with a step of 1 K, under the excitation of three different gratings with $d$ of 10, 14, and 20 \textmu m.\\ \begin{figure*} \begin{center} \includegraphics[width=1\textwidth]{Figure6_PRB.png} \caption{a) Experimental ISS signal (colour scale, arbitrary units) of supercooled glycerol over a broad temperature (vertical axis) and time window (horizontal axis). The grating size is 10 \textmu m (top), 14 \textmu m (central), and 20 \textmu m (bottom). b) to d): best fits based on the Debye model, the HN model, and the SEM, respectively. A full presentation of the best fits of all the waveforms is shown in the online Movies.} \label{results_ISS} \end{center} \end{figure*} Fig.
\ref{results_ISS}(a) presents the recorded ISS waveform datasets. As the temperature decreases, the frequency of the acoustic oscillations visible at short times shifts from low to high, with 60-350 MHz covered by the three gratings, and with the acoustic attenuation reaching a maximum around 280 K. This observation reflects the undercooling of the sample, which undergoes a transition from liquid-like to glassy- and solid-like behavior due to reduced molecular mobility \cite{liu1998jamming}. The overshoot-like response is noteworthy, spanning from the start of the signal (bluish region), where it overlaps with the acoustic oscillations and the fast part of the thermal expansion, until late times (reddish region), where it is quenched by the thermal-diffusion-dominated part of the signal (bluish tail). This process is the manifestation of the relaxation of the heat capacity and of the thermal expansion coefficient, which are strongly (quasi-exponentially) temperature dependent.\\ \begin{figure}[h] \begin{center} \includegraphics[width=0.45\textwidth]{Figure7_PRB.png} \caption{Temperature dependent complex longitudinal velocity ($c$) determined with the three gratings. The top (bottom) panel corresponds to the real (imaginary) part of $c$.} \label{V_complex} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{Figure8_PRB.png} \caption{a) Low-frequency limit of $C$ (left axis, red) and $\gamma$ (right axis, blue) vs temperature. b) Relative ratio of each relaxing quantity, $\Delta C/C_\infty$ (left axis, red) and $\Delta\gamma/\gamma_\infty$ (right axis, blue). The data are obtained from the fit in the frame of the Debye model and for $d=14$ \textmu m.} \label{C_gamma_debye_waller} \end{center} \end{figure} A comparative fitting analysis of the acquired ISS datasets is carried out through the two analytical physical models developed in this work, coupled with the Debye and HN relaxation functions, and also through the SEM, the latter relying on a single stretched exponential.
A full presentation of the best fits for all signals, obtained with the three models, is summarized in Fig. \ref{results_ISS}(b-d) and is also available in the online Movies (1-3) in the Supplemental Material. Satisfactory fit quality is overall achieved at all temperatures and grating periods by the three models, confirming the reliability of the physical models developed in this work. ISS signals are information-rich, providing access to the mechanical and thermal relaxation dynamics in a single waveform, as discussed in the following.\\ By fitting the experimental traces with our models, for every temperature and light grating we retrieve $c_L$ and $\tau_\eta$. By inserting these parameters into Eq. \ref{def_c}, we can calculate the complex velocity of the medium at the acoustic frequency imposed by the grating, $\omega_a=2\pi c_L/d$. Fig. \ref{V_complex} shows the obtained complex sound velocity of supercooled glycerol at different temperatures, determined with the three gratings. The real part of $c$ (top panel) increases upon cooling because of the stiffening of the liquid. The imaginary part of $c$ (bottom panel) reaches a maximum around 280 K, where the structural relaxation timescale overlaps with the acoustic period. The results are in good agreement with those reported in Ref. \cite{paolucci2000impulsive}. It is interesting to notice that both the real and the imaginary part undergo a transition around 280 K, a reflection of the strong coupling between the acoustic motion and the structural changes of the network when $1/f_A$ is of the order of the structural relaxation time.
This feature provides a way to study the mechanical relaxation by performing measurements at numerous grating spacings in a broad range \cite{hecksher2017toward}, namely a mechanical spectroscopic analysis akin to traditional rheological spectroscopy \cite{jensen2018slow} or ultrasonic spectroscopy \cite{jeong1986ultrasonic,schroyen2020bulk}.\\ In addition to the mechanical relaxation dynamics, the models developed in this work enable the individual and simultaneous determination of the heat capacity $C$ and thermal expansion coefficient $\gamma$ relaxation. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{Figure9_PRB.png} \caption{The relaxation of $C$ is also manifested in the thermal diffusion tail of the signal, via its influence on the effective thermal diffusivity, $\alpha_{eff}$=$\kappa_T/(\rho C_{eff})$. At the low and high temperature limits, the value of $C_{eff}$ extracted from the thermal diffusion tail corresponds well to the respective asymptotic values $C_0$ and $C_{\infty}$, indicated by the dashed lines. } \label{C_effective} \end{center} \end{figure} In the following, we focus on the case of $d=14$ \textmu m, analyzed in the frame of the Debye model, the other cases yielding the same conclusions. Fig. \ref{C_gamma_debye_waller}(a) shows the obtained low-frequency limit of $C$ (left axis, red) and $\gamma$ (right axis, blue), in the frame of the Debye model. Panel (b) displays the fitted ratios $\Delta C/C_{\infty}$ (left axis, red) and $\Delta \gamma/\gamma_{\infty}$ (right axis, blue) at different temperatures. Within the margin of uncertainty (error bars in Fig. \ref{C_gamma_debye_waller}), determined by a least-squares error analysis \cite{salenbien2011laser, ThermalLens}, no temperature dependence is observed for any of the parameters. Large fitting uncertainty was found for $T<230$ K and $T>260$ K.
This is because the slow parts of the responses, which are determined by the relaxation strengths and the relaxation frequencies, occur later than 100 \textmu s (when $T<230$ K), i.e. after the thermal diffusion driven decay of the signal, or earlier than 1 ns (when $T> 260$ K), i.e. before the experimentally accessible time window. We thus used the values between 230 and 260 K to calculate representative averages. Access to lower temperatures can be enabled by using a larger grating spacing. In an accompanying work, we have demonstrated the use of the thermal lens technique \cite{ThermalLens}, with a focused Gaussian beam of about 30 \textmu m, to study the relaxation down to 200 K.\\ The average $C_0$ and $\gamma_0$ from the ISS technique are $1980\pm160$ J Kg$^{-1}$ K$^{-1}$ and $(5.5\pm0.7)\times 10^{-4}$ K$^{-1}$, respectively. The average ratios $\Delta C/C_\infty$ and $\Delta \gamma/\gamma_\infty$ are $1.2\pm0.2$ and $4.9\pm0.7$, respectively. Using the latter four fitting parameters, one can further calculate the high-frequency limits, $910\pm150$ J Kg$^{-1}$ K$^{-1}$ for $C_{\infty}$ and $(1.0\pm0.2)\times 10^{-4}$ K$^{-1}$ for $\gamma_{\infty}$, and the relaxation strengths ($R_S$), defined as $\Delta C/C_0$ and $\Delta \gamma/\gamma_0$, which amount to $0.5\pm0.1$ and $0.81\pm0.04$, respectively. The obtained results comply well with the data reported in literature, as summarized in Table \ref{table_rel_quan_exp}.\\ By fitting with the SEM model, the Debye-Waller factor \cite{paolucci2000impulsive}, $B/(A+B)$ in Eq. \ref{Str_exp_expr}, is used to describe the relaxation strength; we found a value of about $0.65\pm0.05$, in good agreement with the one reported in Ref. \cite{paolucci2000impulsive}, the latter being $0.66$.
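As a consistency sketch, the high-frequency limits and relaxation strengths quoted above follow from the fitted low-frequency values and relaxation ratios via $C_0=C_\infty+\Delta C$ (and analogously for $\gamma$); using the rounded values quoted in the text, the results differ slightly from the quoted ones:

```python
# Fitted low-frequency values and relaxation ratios (rounded, from the text)
C0 = 1980.0          # J kg^-1 K^-1
dC_over_Cinf = 1.2   # Delta C / C_inf
gamma0 = 5.5e-4      # K^-1
dg_over_ginf = 4.9   # Delta gamma / gamma_inf

# C0 = C_inf * (1 + Delta C / C_inf)  ->  C_inf = C0 / (1 + Delta C / C_inf)
C_inf = C0 / (1.0 + dC_over_Cinf)
gamma_inf = gamma0 / (1.0 + dg_over_ginf)

# Relaxation strengths R_S = Delta C / C0 = 1 - C_inf / C0 (same for gamma)
Rs_C = 1.0 - C_inf / C0
Rs_g = 1.0 - gamma_inf / gamma0

print(round(C_inf), round(Rs_C, 2), round(Rs_g, 2))
# -> 900 0.55 0.83, compatible with the quoted
#    910 +/- 150 J kg^-1 K^-1, 0.5 +/- 0.1 and 0.81 +/- 0.04
```

The small offsets with respect to the quoted numbers come from using the rounded ratios rather than the unrounded fit output.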
\begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|}\hline & Fit & 3-omega & PPE & DSC\\ \hline $C_0$ (J Kg$^{-1}$ K$^{-1}$) & $1980\pm160$ & 2071 & 2100 & 2000\\ \hline $C_\infty$ (J Kg$^{-1}$ K$^{-1}$) & $910\pm150$ & 1070 & 1180 & 1000\\ \hline $R_{S}$ & $0.5\pm0.1$ & 0.48 & 0.44 & 0.5\\ \hline Ref. & Current work & \cite{birge1985specific,birge1986specific} & \cite{bentefour2003broadband,bentefour2004thermal} & \cite{wang2002direct}\\ \hline \multicolumn{5}{c}{}\\ \cline{1-3} & Fit & Dilatometer & \multicolumn{2}{c}{}\\ \cline{1-3} $\gamma_0$ ($10^{-4}K^{-1}$) & $5.5\pm0.7$ & 5 & \multicolumn{2}{c}{}\\ \cline{1-3} $\gamma_\infty$ ($10^{-4}$K$^{-1}$) & $1.0\pm0.2$ & 1 & \multicolumn{2}{c}{}\\ \cline{1-3} $R_{S}$ & $0.81\pm0.04$ & 0.8 & \multicolumn{2}{c}{}\\ \cline{1-3} Ref. & Current work & \cite{blazhnov2004temperature} & \multicolumn{2}{c}{}\\ \cline{1-3} \end{tabular} \end{center} \caption{Low-frequency and high-frequency limits of the relaxing quantities $C$ and $\gamma$, and comparison with literature results obtained with the 3-omega technique, differential scanning calorimetry (DSC), and photopyroelectric spectroscopy (PPE). In 3-omega and PPE, one measures the thermal effusivity ($e$), from which $C(\omega)$ may be indirectly obtained via $e^2=\rho C\kappa_T$, with $\kappa_T$ the thermal conductivity. To perform the conversion, we used $\kappa_T=0.29$ W m$^{-1}$ K$^{-1}$. } \label{table_rel_quan_exp} \end{table} The SEM value lies in between the relaxation strengths of $C$ and $\gamma$, which is expected in the sense that the two relaxing quantities are implicitly lumped together into a single stretched exponential function.\\ Interestingly, the asymptotic values of the heat capacity can also be extracted, independently of the used models, from the temperature dependence of the thermal diffusion tail of the signal, as depicted in Fig. \ref{ISS_strVsHN}.
Provided the relaxation times of the heat capacity and of the thermal expansion occur before or after the time window of the thermal diffusion tail, the signal tail evolves simply proportionally to $\exp (-q^2\alpha_{eff} t)$, with $\alpha_{eff}$ an effective thermal diffusivity connected to the specific heat via $\alpha_{eff}=\kappa_T/(\rho C_{eff})$; $\kappa_T$ and $\rho$ denote the thermal conductivity and the mass density, respectively. In light of their weak temperature dependence \cite{blazhnov2004temperature,minakov2001simultaneous}, in this work the latter two parameters have been assumed constant, at 0.29 W m$^{-1}$ K$^{-1}$ and 1260 Kg m$^{-3}$, respectively.\\ In Fig. \ref{C_effective} we report $C_{eff}$ as a function of temperature for the three gratings. The asymptotic values of $C$ for low and high temperatures were found to be $960\pm20$ and $2190\pm30$ J Kg$^{-1}$ K$^{-1}$, as indicated by the two dashed lines, corresponding to a relaxation strength of 0.56, consistent with the value obtained by model fitting, 0.53. \begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth]{Figure10_PRB.png} \caption{Comparison of the temperature dependent relaxation frequencies $f_R$ extracted through the Debye model (yellow squares), the HN model (blue circles) and the SEM (purple circles), and their fit with the VFT expression (solid lines).} \label{f_relax} \end{center} \end{figure*} The empirical model assumes that the relaxations of $C$ and $\gamma$ occur on the same time scale, and lumps their contributions to the ISS signal into a single stretched exponential function.\\ In order to experimentally verify whether the two response functions are indeed characterized by the same time scale, and to what extent they can be disentangled, in Fig.
\ref{f_relax} we compare the characteristic relaxation frequencies of the heat capacity and of the thermal expansion coefficient (defined as $\omega_C/(2\pi)$ and $\omega_\gamma/(2\pi)$, respectively), both in the frame of the Debye and HN models, with the relaxation frequency $\Gamma_R$ of the SEM. In the case of the Debye model (Fig. \ref{f_relax} a), the heat capacity relaxation frequency (yellow squares) is systematically higher (by a factor of about $1.5\pm 0.1$) than the one of the thermal expansion coefficient (blue diamonds). This implies that, after photothermally supplying energy, heat is first transferred from vibrational energy levels to configurational energy changes and, somewhat later, \begin{table} \begin{center} \begin{tabular}{|c|c|c|}\hline \multicolumn{3}{|c|}{Heat capacity}\\ \hline &Debye&HN\\ \hline $\log_{10}$($f_0/1$ Hz)&14.5 &14.5\\ \hline $B$ (K)&2140 &2100\\ \hline $T_0$ (K)&127 &124\\ \hline $m$ & 50.9 &50.9\\ \hline \multicolumn{3}{c}{}\\ \hline \multicolumn{3}{|c|}{Thermal expansion coefficient}\\ \hline &Debye&HN\\ \hline $\log_{10}$($f_0/1$ Hz)&13.9 &13.9\\ \hline $B$ (K)&2011 &2195\\ \hline $T_0$ (K)&130 &125\\ \hline $m$ & 54.1 &49.9\\ \hline \multicolumn{3}{c}{}\\ \cline{1-2} \multicolumn{2}{|c|}{SEM}&\multicolumn{1}{l}{}\\ \cline{1-2} $\log_{10}$($f_0/1$ Hz)&14.8&\multicolumn{1}{l}{}\\ \cline{1-2} $B$ (K)&2138&\multicolumn{1}{l}{}\\ \cline{1-2} $T_0$ (K)&135&\multicolumn{1}{l}{}\\ \cline{1-2} \end{tabular} \end{center} \caption{Summary of the parameters extracted by fitting with the VFT expression. } \label{table_VFT_parameters} \end{table} the configurational energy changes result in an increase of volume, in agreement with the two-temperature model developed in Section \ref{Sec:two_temp_model}.\\ Similar conclusions can be drawn from the results obtained with the HN model, as shown in Fig. \ref{f_relax} (b).
However, the results from the HN model fitting are more dispersed due to the covariance with the two additional fitting variables, namely $a$ and $b$ in Eq. \ref{C_omega_HN}.\\ The structural relaxation frequency $\Gamma_R$ (purple circles), obtained by fitting the experimental data with the SEM model, characterizes the (combined thermal and thermal expansion) structural relaxation and turns out to lie in between the other two relaxation frequencies.\\ The obtained temperature dependences of the relaxation frequencies were fitted with the Vogel-Fulcher-Tammann (VFT) equation (solid lines in Fig. \ref{f_relax}), defined by $f_{relax}=f_0\exp\left[-B/(T-T_0)\right]$, with $f_0$ the relaxation frequency in the high temperature limit, $B$ an activation parameter, and $T_0$ the Vogel-Fulcher temperature, around 130 K for glycerol. In Table \ref{table_VFT_parameters} we report the fitted VFT parameters based on the Debye, HN and SEM models. The results for the latter model are in line with values reported in Ref. \cite{paolucci2000impulsive}.\\ From the ratio $D=B/T_0$ we have determined the so-called fragility $m$, via $m= 16+590/D$, which can be considered a measure of the deviation from Arrhenius behavior (and thus of the degree of temperature dependence of the potential energy landscape morphology). The fragility values obtained with our model are summarized in Table \ref{table_VFT_parameters} and are close to 53, the fragility for glycerol reported in Ref. \cite{bohmer1993nonexponential}.\\ \section{Conclusion} In this manuscript, a model to describe ISS signals generated in relaxing materials has been introduced, which is based on the solution of the thermal diffusion equation and the continuum mechanics equation, in combination with a frequency dependent heat capacity and thermal expansion coefficient. As functional forms for the frequency dependencies, Debye and Havriliak-Negami expressions were assumed.
The assumption of a Debye frequency dependence of the heat capacity was shown to be compatible with a two-temperature model, in which the experimentally measured temperature refers to the energy distribution of the kinetic degrees of freedom, and a network temperature describes the state of the amorphous network, which is assumed to be in thermal contact with the kinetic energy reservoir.\\ The obtained physical models for describing the ISS response were shown to fit well the ISS signals that had been simulated, for different temperature-wavenumber combinations in glycerol, by a semi-empirical model \cite{yang1995impulsive1} that has historically been used to describe ISS signals in relaxing materials.\\ Furthermore, we have carried out an experimental ISS investigation of glycerol under supercooling, as well as a comparative model fitting analysis based on the physical models developed in this work and the existing empirical model. Satisfactory fitting quality has been achieved for all ISS waveforms, confirming the models developed in this work and allowing us to study the relaxation of $C$ and $\gamma$ up to several tens of MHz, largely extending the upper frequency limit of thermal susceptibility spectroscopy, by nearly 3 and 7 decades for $C(\omega)$ and $\gamma(\omega)$, respectively. The best fit results also suggest that the relaxation of the heat capacity and that of the thermal expansivity occur on slightly different time scales: the relaxation of $C$ is about 1.5 times faster than that of $\gamma$, which is in line with the observation by a thermal lens spectroscopy investigation reported in an accompanying article \cite{ThermalLens}.
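As a quick numerical cross-check of the fragility values quoted above, the empirical relation $m=16+590/D$ with $D=B/T_{0}$ can be evaluated directly from the fitted VFT parameters. The following is a minimal sketch (the SEM values $B=2138$ K and $T_{0}=135$ K are taken from Table \ref{table_VFT_parameters}):

```python
def fragility(B, T0):
    """Fragility index m = 16 + 590/D, with strength parameter D = B/T0."""
    return 16.0 + 590.0 * T0 / B

# SEM VFT fit parameters for glycerol: B = 2138 K, T0 = 135 K
m_sem = fragility(2138.0, 135.0)
print(round(m_sem, 1))  # ~53.3, close to the literature value of 53
```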
\section{Proofs of Our Theory Development \label{sec:Proofs}} We here give the proof for the equivalence of optimizing the two equations (\ref{eq:dro_p1}) and (\ref{eq:dro_p2}) in Section \ref{subsec:dro_p12}. Then, we detail how to derive the optimization formulations (\ref{thm:opt_sol}) and (\ref{eq:last_opt}) for solving the problems discussed in Section \ref{subsec:Our-Framework}. \subsection{Proof of Lemma \ref{lem:dro_p2} \label{subsec:dro_p12}} Let \[ \gamma^{*}=\argmax{\gamma\in\Gamma_{\epsilon}}\mathbb{E}_{\left(Z,\tilde{Z}\right)\sim\gamma}\left[r\left(\tilde{Z};\phi,\theta\right)\right] \] be the optimal solution of the inner max in equation (\ref{eq:dro_p2}). Denote by $\tilde{\mathbb{P}}^{*}$ the distribution obtained from $\gamma^{*}$ by marginalizing out $Z$. We prove that $\tilde{\mathbb{P}}^{*}$ is the optimal solution of the inner max in equation (\ref{eq:dro_p1}). Let $\tilde{\mathbb{P}}$ be a feasible solution of the inner max in equation (\ref{eq:dro_p1}), meaning that $\mathcal{W}_{\rho}\left(\mathbb{P},\tilde{\mathbb{P}}\right)\leq\epsilon$. Therefore, there exists $\gamma\in\Gamma\left(\mathbb{P},\tilde{\mathbb{P}}\right)$ such that $\mathbb{E}_{\left(Z,\tilde{Z}\right)\sim\gamma}\left[\rho\left(Z,\tilde{Z}\right)\right]^{1/q}\leq\epsilon$, i.e., $\gamma\in\Gamma_{\epsilon}$. We have \begin{align*} \mathbb{E}_{\tilde{\mathbb{P}}}\left[r\left(\tilde{Z};\phi,\theta\right)\right] & =\mathbb{E}_{\gamma}\left[r\left(\tilde{Z};\phi,\theta\right)\right]\\ & \leq \mathbb{E}_{\gamma^{*}}\left[r\left(\tilde{Z};\phi,\theta\right)\right]=\mathbb{E}_{\tilde{\mathbb{P}}^{*}}\left[r\left(\tilde{Z};\phi,\theta\right)\right]. \end{align*} We reach the conclusion that $\tilde{\mathbb{P}}^{*}$ is the optimal solution of the inner max in equation (\ref{eq:dro_p1}). That concludes our proof.
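The key step in the proof above, namely $\mathbb{E}_{\tilde{\mathbb{P}}}[r(\tilde{Z})]=\mathbb{E}_{\gamma}[r(\tilde{Z})]$ whenever $\tilde{\mathbb{P}}$ is the marginal of $\gamma$, can be checked numerically in the discrete case. The following is a minimal sketch with an arbitrary joint pmf (all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A discrete coupling gamma over (Z, Z~): a 4x5 joint pmf.
gamma = rng.random((4, 5))
gamma /= gamma.sum()

# r depends only on the perturbed sample Z~.
r = rng.random(5)

# Marginalizing out Z gives the pushed-forward distribution P~*.
p_tilde = gamma.sum(axis=0)

# E_gamma[r(Z~)] equals E_{P~}[r(Z~)]
lhs = (gamma * r[None, :]).sum()
rhs = (p_tilde * r).sum()
assert np.isclose(lhs, rhs)
```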
\subsection{Proof of Theorem \ref{thm:opt_sol}} Given $\gamma\in\Gamma_{\epsilon}$, we first prove that if $\mathbb{E}_{\left(Z,\tilde{Z}\right)\sim\gamma}\left[\rho\left(Z,\tilde{Z}\right)\right]$ is finite $\forall q>1$, then \begin{gather*} M_{\gamma}:=\lim_{q\goto\infty}\mathbb{E}_{\left(Z,\tilde{Z}\right)\sim\gamma}\left[\rho\left(Z,\tilde{Z}\right)\right]^{1/q}\\ =\sup_{\left(Z,\tilde{Z}\right)\in\support\left(\gamma\right)}\max\left\{ \max_{k,i,j}\norm{X_{kij}^{S}-\tilde{X}_{kij}^{S}}_{p},\max_{i,j}\norm{X_{ij}^{T}-\tilde{X}_{ij}^{T}}_{p}\right\}. \end{gather*} Let $A_{\gamma}$ denote the set of $\left(Z,\tilde{Z}\right)\in\support\left(\gamma\right)$ such that \[ \max\left\{ \max_{k,i,j}\norm{X_{kij}^{S}-\tilde{X}_{kij}^{S}}_{p},\max_{i,j}\norm{X_{ij}^{T}-\tilde{X}_{ij}^{T}}_{p}\right\} =M_{\gamma}. \] We have \begin{align*} & \mathbb{E}_{\left(Z,\tilde{Z}\right)\sim\gamma}\left[\rho\left(Z,\tilde{Z}\right)\right]^{1/q} = \left[\int_{A_{\gamma}}\rho\left(Z,\tilde{Z}\right)d\gamma\left(Z,\tilde{Z}\right)+\int_{A_{\gamma}^{c}}\rho\left(Z,\tilde{Z}\right)d\gamma\left(Z,\tilde{Z}\right)\right]^{1/q}. \end{align*} Recall that if $\left(Z,\tilde{Z}\right)\sim\gamma$ then \begin{alignat*}{1} \rho\left(Z,\tilde{Z}\right) & :=\sum_{i=1}^{B^{T}}\sum_{j=1}^{n^{T}}\norm{X_{ij}^{T}-\tilde{X}_{ij}^{T}}_{p}^{q} +\sum_{k=1}^{K}\sum_{i=1}^{B_{k}^{S}}\sum_{j=1}^{n^{S}}\norm{X_{kij}^{S}-\tilde{X}_{kij}^{S}}_{p}^{q}. \end{alignat*} Therefore, for $\left(Z,\tilde{Z}\right)\in A_{\gamma}^{c}$, we have \[ \lim_{q\goto\infty}\frac{\rho\left(Z,\tilde{Z}\right)}{M_{\gamma}^{q}}=0, \] while for $\left(Z,\tilde{Z}\right)\in A_{\gamma}$, we have \[ \lim_{q\goto\infty}\frac{\rho\left(Z,\tilde{Z}\right)}{M_{\gamma}^{q}}=1.
\] We then derive \begin{align*} \lim_{q\goto\infty}\mathbb{E}_{\left(Z,\tilde{Z}\right)\sim\gamma}\left[\rho\left(Z,\tilde{Z}\right)\right]^{1/q} & = M_{\gamma}\lim_{q\goto\infty}\left[\int_{A_{\gamma}}\frac{\rho\left(Z,\tilde{Z}\right)}{M_{\gamma}^{q}}d\gamma\left(Z,\tilde{Z}\right)+\int_{A_{\gamma}^{c}}\frac{\rho\left(Z,\tilde{Z}\right)}{M_{\gamma}^{q}}d\gamma\left(Z,\tilde{Z}\right)\right]^{1/q}\\ & = M_{\gamma}\lim_{q\goto\infty}\gamma\left(A_{\gamma}\right)^{1/q}=M_{\gamma}. \end{align*} Therefore, $\gamma\in\Gamma_{\epsilon}$ with $q=\infty$ is equivalent to the fact that the support set $\support\left(\gamma\right)$ is the union of $B_{Z}$ with $Z\in\support\left(\mathbb{P}\right)$, where $B_{Z}$ is defined as follows: \begin{align*} B_{Z} & :=\prod_{k=1}^{K}\prod_{i=1}^{B_{k}^{S}}\prod_{j=0}^{n_{k}^{S}}B_{\epsilon}\left(X_{kij}^{S}\right)\prod_{i=1}^{B^{T}}\prod_{j=1}^{n^{T}}B_{\epsilon}\left(X_{ij}^{T}\right)\\ & =\prod_{k=1}^{K}\prod_{i=1}^{B_{k}^{S}}\prod_{j=0}^{n_{k}^{S}}B_{\epsilon}\left(X_{ki}^{S}\right)\prod_{i=1}^{B^{T}}\prod_{j=1}^{n^{T}}B_{\epsilon}\left(X_{i}^{T}\right). \end{align*} We can equivalently rewrite the optimization problem in equation (\ref{eq:dro_entropic}) as follows: \begin{align} \max_{\gamma\in\Gamma}\mathbb{E}_{\left(Z,\tilde{Z}\right)\sim\gamma}\left[r\left(\tilde{Z};\phi,\theta\right)\right]+\frac{1}{\lambda}\mathbb{H}\left(\gamma\right)\label{eq:equi_entropic}\\ \text{s.t.}: \support\left(\gamma\right)=\cup_{Z\in\support\left(\mathbb{P}\right)}B_{Z},\nonumber \end{align} where $\Gamma=\cup_{\tilde{\mathbb{P}}}\Gamma\left(\mathbb{P},\tilde{\mathbb{P}}\right)$. Because $\gamma\in\Gamma\left(\mathbb{P},\tilde{\mathbb{P}}\right)$ for some $\tilde{\mathbb{P}}$, we can parameterize its density function as: \[ \gamma\left(Z,\tilde{Z}\right)=p\left(Z\right)\tilde{p}\left(\tilde{Z}\mid Z\right), \] where $p\left(Z\right)$ is the density function of $\mathbb{P}$ and $\tilde{p}\left(\tilde{Z}\mid Z\right)$ has the support set $B_{Z}$.
Please note that the constraint on $\tilde{p}\left(\tilde{Z}\mid Z\right)$ is $\int_{B_{Z}}\tilde{p}\left(\tilde{Z}\mid Z\right)d\tilde{Z}=1$. The Lagrange function for the optimization problem in equation (\ref{eq:equi_entropic}) is as follows: \begin{align*} \mathcal{L} & =\int r\left(\tilde{Z};\phi,\theta\right)p\left(Z\right)\tilde{p}\left(\tilde{Z}|Z\right)dZd\tilde{Z}\\ & -\frac{1}{\lambda}\int p\left(Z\right)\tilde{p}\left(\tilde{Z}|Z\right)\log\left[p\left(Z\right)\tilde{p}\left(\tilde{Z}|Z\right)\right]dZd\tilde{Z}\\ & +\int\alpha\left(Z\right)\left[\int_{B_{Z}}\tilde{p}\left(\tilde{Z}\mid Z\right)d\tilde{Z}-1\right]dZ, \end{align*} where the integral w.r.t. $Z$ is taken over $\support\left(\mathbb{P}\right)$ and that w.r.t. $\tilde{Z}$ over $B_{Z}$. Taking the derivative of $\mathcal{L}$ w.r.t. $\tilde{p}\left(\tilde{Z}\mid Z\right)$ and setting it to $0$, we obtain \begin{align*} 0 & =r\left(\tilde{Z};\phi,\theta\right)p\left(Z\right)+\alpha\left(Z\right) -\frac{p\left(Z\right)}{\lambda}\left[\log p\left(Z\right)+\log\tilde{p}\left(\tilde{Z}|Z\right)+1\right], \end{align*} which can be solved for $\tilde{p}\left(\tilde{Z}|Z\right)$ as \[ \tilde{p}\left(\tilde{Z}|Z\right)=\frac{\exp\left\{ \lambda\left[r\left(\tilde{Z};\phi,\theta\right)+\frac{\alpha\left(Z\right)}{p\left(Z\right)}\right]-1\right\} }{p\left(Z\right)}. \] Taking into account $\int_{B_{Z}}\tilde{p}\left(\tilde{Z}\mid Z\right)d\tilde{Z}=1$, we obtain \[ \int_{B_{Z}}\exp\left\{ \lambda r\left(\tilde{Z};\phi,\theta\right)\right\} d\tilde{Z}=\frac{p\left(Z\right)}{\exp\left\{ \lambda\frac{\alpha\left(Z\right)}{p\left(Z\right)}-1\right\} }. \] Therefore, we arrive at \[ \tilde{p}^{*}\left(\tilde{Z}|Z\right)=\frac{\exp\left\{ \lambda r\left(\tilde{Z};\phi,\theta\right)\right\} }{\int_{B_{Z}}\exp\left\{ \lambda r\left(\tilde{Z};\phi,\theta\right)\right\} d\tilde{Z}}.
\] Hence, the optimal coupling reads \begin{equation} \gamma^{*}\left(Z,\tilde{Z}\right)=p\left(Z\right)\frac{\exp\left\{ \lambda r\left(\tilde{Z};\phi,\theta\right)\right\} }{\int_{B_{Z}}\exp\left\{ \lambda r\left(\tilde{Z};\phi,\theta\right)\right\} d\tilde{Z}}.\label{eq:optimal_gamma} \end{equation} Finally, by noting that \[ p\left(Z\right)=\prod_{k=1}^{K}\prod_{i=1}^{B_{k}^{S}}\prod_{j=0}^{n_{k}^{S}}p_{k}^{S}\left(X_{ki}^{S},Y_{ki}^{S}\right)\prod_{i=1}^{B^{T}}\prod_{j=0}^{n^{T}}p^{T}\left(X_{i}^{T}\right), \] \begin{align*} \exp\left\{ \lambda r\left(\tilde{Z};\phi,\theta\right)\right\} =\exp\left\{ \lambda\beta r^{g}\left(\tilde{Z};\psi\right)\right\} \prod_{k=1}^{K}\prod_{i=1}^{B_{k}^{S}}\prod_{j=0}^{n_{k}^{S}}\exp\left\{ \lambda[\alpha s(X_{ki}^{S},\tilde{X}_{kij}^{S};\psi)+\ell(\tilde{X}_{kij}^{S},Y_{ki}^{S};\psi)]\right\} \\ \prod_{i=1}^{B^{T}}\prod_{j=1}^{n^{T}}\exp\left\{ \lambda\alpha s\left(X_{i}^{T},\tilde{X}_{ij}^{T};\psi\right)\right\} , \end{align*} and \begin{align*} \int_{B_{Z}}\exp\left\{ \lambda r\left(\tilde{Z};\phi,\theta\right)\right\} d\tilde{Z}=\exp\left\{ \lambda\beta r^{g}\left(\tilde{Z};\psi\right)\right\} \\\prod_{k=1}^{K}\prod_{i=1}^{B_{k}^{S}}\prod_{j=0}^{n_{k}^{S}} \int_{B_{\epsilon}\left(X_{ki}^{S}\right)}\exp\left\{ \lambda[\alpha s(X_{ki}^{S},\tilde{X}_{kij}^{S};\psi)+\ell(\tilde{X}_{kij}^{S},Y_{ki}^{S};\psi)]\right\} d\tilde{X}_{kij}^{S}\\ \prod_{i=1}^{B^{T}}\prod_{j=1}^{n^{T}}\int_{B_{\epsilon}\left(X_{i}^{T}\right)}\exp\left\{ \lambda\alpha s\left(X_{i}^{T},\tilde{X}_{ij}^{T};\psi\right)\right\} d\tilde{X}_{ij}^{T}, \end{align*} we reach the conclusion. \subsection{Proof of the optimization problem in equation~(\ref{eq:last_opt})} By substituting the optimal $\gamma^{*}\left(Z,\tilde{Z}\right)$ in equation (\ref{eq:optimal_gamma}) back into the objective function in (\ref{eq:dro_p2}), we obtain\\ \[ \min_{\psi}\,\min_{\theta,\phi}\mathbb{E}_{\left(Z,\tilde{Z}\right)\sim\gamma^{*}}\left[r\left(\tilde{Z};\phi,\theta\right)\right].
\] By referring to the construction of $Z$ and $\tilde{Z}$ and noting that, for $\left(Z,\tilde{Z}\right)\sim\gamma^{*}$, \begin{align*} r^{l}\left(\tilde{Z};\psi\right) :=\sum_{i=1}^{B^{T}}\sum_{j=1}^{n^{T}}s\left(\tilde{X}_{i0}^{T},\tilde{X}_{ij}^{T};\psi\right) + &\sum_{k=1}^{K}\sum_{i=1}^{B_{k}^{S}}\sum_{j=1}^{n_{k}^{S}}s\left(\tilde{X}_{ki0}^{S},\tilde{X}_{kij}^{S};\psi\right)\\ = \sum_{i=1}^{B^{T}}\sum_{j=1}^{n^{T}}s\left(X_{i}^{T},\tilde{X}_{ij}^{T};\psi\right) + \sum_{k=1}^{K}\sum_{i=1}^{B_{k}^{S}}\sum_{j=1}^{n_{k}^{S}}s\left(X_{ki}^{S},\tilde{X}_{kij}^{S};\psi\right) \end{align*} and \[ \mathcal{L}\left(\tilde{Z};\psi\right):=\sum_{k=1}^{K}\sum_{i=1}^{B_{k}^{S}}\sum_{j=0}^{n_{k}^{S}}\ell\left(\tilde{X}_{kij}^{S},Y_{ki}^{S};\psi\right), \] we obtain the final optimization problem. \section{Implementation Details \label{sec:Additional-Experiments}} In this section, we provide the detailed implementation for all of our experiments along with some additional experimental results. \subsection{Entropic Regularized Duality for WS} To enable the application of optimal transport in machine learning and deep learning, Genevay et al. developed an entropic regularized dual form in \cite{stochastic_ws}. First, they proposed to add an entropic regularization term to the primal form: \begin{equation} \mathcal{W}_{d}^{\epsilon}\left(\mathbb{P},\mathbb{Q}\right) :=\min_{\gamma\in\Gamma\left(\mathbb{Q},\mathbb{P}\right)}\Biggl\{\mathbb{E}_{\left(\bx,\by\right)\sim\gamma}\left[d\left(\bx,\by\right)\right]+\epsilon D_{KL}\left(\gamma\Vert\mathbb{P}\otimes\mathbb{Q}\right)\Biggr\} \label{eq:entropic_primal} \end{equation} where $\epsilon$ is the regularization rate, $D_{KL}\left(\cdot\Vert\cdot\right)$ is the Kullback-Leibler (KL) divergence, and $\mathbb{P}\otimes\mathbb{Q}$ represents the specific coupling under which $\mathbb{Q}$ and $\mathbb{P}$ are independent.
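For discrete measures, the entropic primal above is commonly solved with Sinkhorn iterations. The following is a minimal numerical sketch (illustrative toy values only, not the implementation used in our experiments):

```python
import numpy as np

def entropic_ot(p, q, C, eps, iters=1000):
    """Sinkhorn iterations for min_gamma <gamma, C> + eps * KL(gamma || p x q)."""
    K = np.exp(-C / eps)                   # Gibbs kernel
    u = np.ones_like(p)
    for _ in range(iters):
        v = q / (K.T @ u)                  # enforce column marginals
        u = p / (K @ v)                    # enforce row marginals
    gamma = u[:, None] * K * v[None, :]    # optimal entropic transport plan
    return gamma, float((gamma * C).sum())

p = np.array([0.5, 0.5])                   # source marginal
q = np.array([0.5, 0.5])                   # target marginal
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])                 # pairwise costs d(x, y)
gamma, cost = entropic_ot(p, q, C, eps=0.05)
# gamma satisfies the marginal constraints and concentrates on the cheap pairs
```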
Note that when $\epsilon\goto0$, $\mathcal{W}_{d}^{\epsilon}\left(\mathbb{P},\mathbb{Q}\right)$ approaches $\mathcal{W}_{d}\left(\mathbb{P},\mathbb{Q}\right)$ and the optimal transport plan $\gamma_{\epsilon}^{*}$ of equation (\ref{eq:entropic_primal}) also weakly converges to the optimal transport plan $\gamma^{*}$ of the primal form. In practice, we set $\epsilon$ to a small positive number, hence $\gamma_{\epsilon}^{*}$ is very close to $\gamma^{*}$. Second, using the Fenchel-Rockafellar theorem, they obtained the following dual form w.r.t. the potential $\phi$: \begin{align} \mathcal{W}_{d}^{\epsilon}\left(\mathbb{P},\mathbb{Q}\right) & =\max_{\phi}\left\{ \int\phi_{\epsilon}^{c}\left(\bx\right)\mathrm{d}\mathbb{Q}\left(\bx\right)+\int\phi\left(\by\right)\mathrm{d}\mathbb{P}\left(\by\right)\right\} \nonumber \\ & =\max_{\phi}\left\{ \mathbb{E}_{\mathbb{Q}}\left[\phi_{\epsilon}^{c}\left(\bx\right)\right]+\mathbb{E}_{\mathbb{P}}\left[\phi\left(\by\right)\right]\right\} ,\label{eq:entropic_dual} \end{align} where $\phi_{\epsilon}^{c}\left(\bx\right):=-\epsilon\log\left(\mathbb{E}_{\mathbb{P}}\left[\exp\left\{ \frac{-d\left(\bx,\by\right)+\phi\left(\by\right)}{\epsilon}\right\} \right]\right)$. In order to calculate the global WS-related regularization terms in equations (\ref{eq:global_DA_SSL}), (\ref{eq:global_DG}), and (\ref{eq:global_AML}), we apply the above entropic regularized dual form. The Kantorovich potential network $\phi$ is a simple network with two fully connected layers and a ReLU activation in the middle, $\mathrm{FC_{latent\_dim\times512}}\rightarrow\mathrm{ReLU}\rightarrow\mathrm{FC_{512\times1}}$, which is used throughout the experiments. Note that $\mathrm{latent\_dim}$ depends on the main network.
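In practice the smoothed c-transform $\phi_{\epsilon}^{c}$ above is evaluated with a log-sum-exp for numerical stability. A minimal sketch over an empirical measure (all names and values are illustrative):

```python
import numpy as np

def smoothed_c_transform(phi, d_row, eps, w):
    """phi_eps^c(x) = -eps * log E_{y~P}[ exp((-d(x, y) + phi(y)) / eps) ]."""
    s = (-d_row + phi) / eps + np.log(w)
    m = s.max()                            # log-sum-exp shift for stability
    return -eps * (m + np.log(np.exp(s - m).sum()))

phi = np.array([0.1, -0.2, 0.3])           # potential at the atoms of P
d_row = np.array([0.5, 1.0, 0.2])          # d(x, y_j) for one fixed x
w = np.full(3, 1.0 / 3.0)                  # uniform empirical weights
val = smoothed_c_transform(phi, d_row, eps=0.1, w=w)
# as eps -> 0 this recovers the hard c-transform min_j [d(x, y_j) - phi(y_j)]
```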
Additionally, the distance $\rho_{d}$ in equation (\ref{eq:ot_cost}) used in all experiments is the squared Euclidean distance $\mathrm{d(x_{1},x_{2})=||x_{1}-x_{2}||_{2}^{2}}$, the prediction discrepancy trade-off $\gamma$ is set to $0.5$, and the entropic regularization parameter $\lambda$ in equation (\ref{eq:dro_entropic}) is $0.1$. \subsection{Projected SVGD Setting} For Projected SVGD in Algorithm \ref{alg:svgd}, we employ an RBF kernel \[ k\left(X,\tilde{X}\right)=\exp\left\{ \frac{-\norm{X-\tilde{X}}_{2}^{2}}{2\sigma^{2}}\right\} , \] where the kernel width is set according to the main paper \cite{NIPS2016_b3ba8f1b}. \subsection{Experiments for DG} \subsubsection{Network Architecture and Hyperparameters} As mentioned in the main paper, we incorporate well-studied backbones for our experiments, following the implementation for single domain generalization tasks in \cite{zhao_maximum}. In particular: \begin{itemize} \item LeNet5 \cite{lecun1989backpropagation} is employed in the MNIST experiment. We first pre-train the network on the MNIST dataset without applying any DG method for 100 iterations; then, at each of the iterations 100, 200, and 300, we generate particles with $n^{S}=n^{T}=n\in\left\{ 1,2,4\right\} $ by running the Projected SVGD sampling (Algorithm \ref{alg:svgd}) for $L=15$ iterations with step size $\eta=0.002$. We use the Adam optimizer \cite{kingma2014adam} with learning rate $10^{-5}$ and train for 15000 iterations in total with a batch size of $32$. \item The CIFAR-C \footnote{Note that in both the CIFAR-C and MNIST experiments, we are provided with only a single source domain, thus GLOT-DR downgrades exactly to LOT-DR.} experiment uses 4 different backbone architectures, namely: All Convolutional Network (AllConvNet) \cite{springenberg2014striving}, DenseNet \cite{huang2017densely}, WideResNet \cite{zagoruyko2016wide}, and ResNeXt \cite{xie2017aggregated}.
We set $n^{S}=n^{T}=n=2$ particles, $L=15$ iterations, and step size $\eta=0.001$, and minimize the loss with the SGD optimizer with an initial learning rate of $0.1$ and batch size 128. Similar to the MNIST experiment, we first pretrain the network for 10 epochs and then generate augmented images at epochs 10 and 20; the total number of training epochs is 150 in the case of AllConvNet and WideResNet, and 250 for DenseNet and ResNeXt. \item We used an AlexNet \cite{krizhevsky2012imagenet} pretrained on ImageNet \cite{russakovsky2015imagenet} in the PACS experiment. Different from the two experiments above, which generate augmented images and append them directly to the training set, we generate the augmented images in each mini-batch and calculate the local/global regularization terms. $n^{S}=n^{T}$ are set equal to 2, with $L=15$ iterations and step size $\eta=0.007$. The initial global and local trade-offs are $3\times10^{-5}$ and $50$; these parameters are adjusted by $\frac{\mathrm{iter}}{\mathrm{\#num\_iter}}$ at the $\mathrm{iter}$-th iteration. We train AlexNet for 45,000 iterations with the SGD optimizer and a $10^{-3}$ learning rate.
\end{itemize} \subsubsection{Datasets and Baselines} \begin{table} \caption{Details on the domain generalization benchmark datasets\label{tab:dg-data}} \vspace*{2mm} \centering{}\resizebox{.5\columnwidth}{!} \begin{tabular}{c|c|c} \hline Dataset & \# classes & Shape\tabularnewline \hline MNIST \cite{lecun1998gradient} & 10 & 32 \texttimes{} 32\tabularnewline SVHN \cite{netzer2011reading} & 10 & 32 \texttimes{} 32\tabularnewline MNIST-M \cite{ganin2015unsupervised} & 10 & 32 \texttimes{} 32\tabularnewline SYN \cite{ganin2015unsupervised} & 10 & 32 \texttimes{} 32\tabularnewline USPS \cite{denker1989neural} & 10 & 32 \texttimes{} 32\tabularnewline CIFAR-10-C \cite{hendrycks2019robustness} & 15 & 3 \texttimes{} 32 \texttimes{} 32\tabularnewline CIFAR-100-C \cite{hendrycks2019robustness} & 15 & 3 \texttimes{} 32 \texttimes{} 32\tabularnewline PACS \cite{li2017deeper} & 7 & 3 \texttimes{} 224 \texttimes{} 224\tabularnewline \hline \end{tabular}} \end{table} We present the details of each dataset used in the domain generalization experiments in Table \ref{tab:dg-data}. The digits datasets MNIST \cite{lecun1998gradient}, SVHN \cite{netzer2011reading}, MNIST-M \cite{ganin2015unsupervised}, SYN \cite{ganin2015unsupervised}, and USPS \cite{denker1989neural} each contain 10 classes ($0-9$), whose images are resized to $32\times32$ in our experiment. CIFAR-10-C \cite{hendrycks2019robustness} and CIFAR-100-C \cite{hendrycks2019robustness} consist of corrupted images from the original CIFAR \cite{krizhevsky2009learning} datasets with 15 corruption types applied. In terms of multi-source domain generalization, we test our proposed model on the PACS dataset \cite{li2017deeper}, which includes $3\times224\times224$ images from four different domains: Photo, Art Painting, Cartoon, and Sketch.
In the digits experiment, 10000 images are selected from the MNIST dataset as the training set for the source domain, and the other four datasets serve as the target domains: SVHN, MNIST-M, SYN, and USPS. We compare our method with the following baselines: (i) Empirical Risk Minimization (ERM), (ii) PAR \cite{NEURIPS2019_3eefceb8}, (iii) ADA \cite{volpi2018generalizing} and (iv) ME-ADA \cite{zhao_maximum}. For a fair comparison, we did not use any data augmentation in this digits experiment; all the samples are considered as RGB images (we duplicate the channels if they are grayscale images). \subsubsection{Experimental Results} \begin{table*} \caption{Average classification accuracy (\%) on the MNIST benchmark: we first train the LeNet5 \cite{lecun1989backpropagation} architecture on MNIST and then evaluate on SVHN, MNIST-M, SYN, and USPS\label{tab:mnist-dg}. We repeat the experiment 10 times and report the mean value and standard deviation.\vspace*{2mm}} \centering{}\resizebox{1.\columnwidth}{!} \begin{tabular}{c|c|c|c|c|c} \hline & SVHN & MNIST-M & SYN & USPS & Average\tabularnewline \hline Standard (ERM) & 31.95\textpm{} 1.91 & 55.96\textpm{} 1.39 & 43.85\textpm{} 1.27 & 79.92\textpm{} 0.98 & 52.92\textpm{} 0.98\tabularnewline PAR & 36.08\textpm{} 1.27 & 61.16\textpm{} 0.21 & 45.48\textpm{} 0.35 & 79.95\textpm{} 1.18 & 55.67 \textpm{} 0.33\tabularnewline ADA & \multicolumn{1}{c|}{35.70 \textpm{} 2.00} & 58.65\textpm{} 1.72 & 47.18\textpm{} 0.61 & 80.40\textpm{} 1.70 & 55.48\textpm{} 0.74\tabularnewline ME-ADA & 42.00\textpm{} 1.74 & 63.98\textpm{} 1.82 & 49.80\textpm{} 1.74 & 79.10\textpm{} 1.03 & 58.72\textpm{} 1.12\tabularnewline \hline GLOT-DR n=1 & 42.70 \textpm{} 1.03 & 67.72 \textpm{} 0.63 & \textbf{50.53 \textpm{} 0.88} & 82.32 \textpm{} 0.63 & 60.82 \textpm{} 0.79\tabularnewline GLOT-DR n=2 & 42.35 \textpm{} 1.44 & 67.95 \textpm{} 0.56 & \textbf{50.53 \textpm{} 0.99} & 82.33 \textpm{} 0.61 & 60.81\textpm{} 0.90\tabularnewline GLOT-DR n=4 & \textbf{43.10 \textpm{} 1.16} &
\textbf{68.44 \textpm{} 0.46} & 50.49 \textpm{} 1.04 & \textbf{82.48 \textpm{} 0.51} & \textbf{61.13 \textpm{} 0.79}\tabularnewline \hline \end{tabular}} \end{table*} Table \ref{tab:mnist-dg} shows that our model achieves the highest average accuracy compared to the other baselines for all values of $n^{S}=n^{T}=n\in\left\{ 1,2,4\right\} $, with the highest overall score when $n=4$. In particular, we observe the largest improvement, of $\thickapprox5\%$, on the MNIST-M target domain, and $\thickapprox2.5\%$ overall. Our GLOT-DR is also more consistent, with smaller variation in accuracy between runs compared to the second-best method ($0.79\%$ vs. $1.12\%$). \subsection{Experiments for DA} \subsubsection{Network architectures and Hyperparameters} The ResNet50 \cite{he2016deep} architecture pretrained on ImageNet, followed by a classifier with two fully connected layers, is the same as that of previous work. We evaluate GLOT-DR on the standard object image classification benchmarks in domain adaptation: Office-31 and ImageCLEF-DA. The proposed method is employed on the latent space; the trade-off parameters for the global and local terms are set to $0.02$ and $5$ throughout all the DA experiments. We train the ResNet50 model for 10000 steps with a batch size of 36, following the standard protocols in \cite{long2017conditional}, with data augmentation techniques like random flipping and cropping. \begin{table} \begin{centering} \caption{Accuracy (\%) on ImageCLEF-DA of the ResNet50 model \cite{he2016deep} in unsupervised domain adaptation methods.
We set the number of particles $n_{S}=n_{T}=4$ for the local regularization.\label{tab:clef}\vspace*{2mm}} \par\end{centering} \centering{}\resizebox{.7\columnwidth}{!} \begin{tabular}{cccccccc} \hline & I\textrightarrow P & P\textrightarrow I & I\textrightarrow C & C\textrightarrow I & C\textrightarrow P & P\textrightarrow C & Avg\tabularnewline \hline ResNet & 74.8 & 83.9 & 91.5 & 78.0 & 65.5 & 91.3 & 80.7\tabularnewline DAN & 74.8 & 83.9 & 91.5 & 78.0 & 65.5 & 91.3 & 80.7\tabularnewline DANN & 75.0 & 86.0 & 96.2 & 87.0 & 74.3 & 91.5 & 85.0\tabularnewline JAN & 76.8 & 88.4 & 94.8 & 89.5 & 74.2 & 91.7 & 85.8\tabularnewline CDAN & 76.7 & 90.6 & 97.0 & 90.5 & 74.5 & 93.5 & 87.1\tabularnewline ETD & \textbf{81.0} & 91.7 & \textbf{97.9} & \textbf{93.3} & 79.5 & 95.0 & 89.7\tabularnewline \hline GLOT-DR & 79.8 & \textbf{93.7} & \textbf{97.9} & \textbf{93.3} & \textbf{79.7} & \textbf{96.0} & \textbf{90.1}\tabularnewline \hline \end{tabular}} \end{table} \subsubsection{Dataset} The details of the Office-31 \cite{saenko2010adapting} dataset are provided in the main paper; we conduct one more experiment on another dataset, ImageCLEF-DA, containing 12 categories from three public datasets: Caltech-256 (C), ImageNet ILSVRC 2012 (I) and Pascal VOC 2012 (P). Each of these domains includes 50 images per class and 600 in total, which were resized to $3\times224\times224$ in our experiment. We evaluate all baselines in 6 adaptation scenarios. \subsubsection{Experimental Results} As reported in Table \ref{tab:clef}, the GLOT-DR approach outperforms the comparison methods on nearly all settings, except for the pair I\textrightarrow P, where our score is about $1\%$ below ETD \cite{li2020enhanced}. Our proposed method achieves $90.1\%$ average accuracy overall, which is the highest among all baselines.
\begin{figure} \centering{}\includegraphics[width=.8\columnwidth]{figures/office-31-particles} \caption{Classification accuracy when varying the number of generated examples sampled from the Projected SVGD Algorithm \ref{alg:svgd}\label{fig:office-31-particles}.} \end{figure} In this final experiment, we illustrate the strength of our proposed regularization technique by varying the number of generated adversarial examples (i.e., $n^{S}$ and $n^{T}$) from 1 to 16. Results are presented in Figure \ref{fig:office-31-particles}, where we compare GLOT-DR against its variants for different values of $n^{S},n^{T}$. It can easily be seen that increasing the number of generated samples consistently improves the performance of both LOT-DR and GLOT-DR (note that GOT-DR involves no local regularization term, thus there is no difference between different numbers of samples). Setting $n^{S}=n^{T}\geq2$ helps LOT-DR surpass the performance of GOT-DR.
\subsection{Experiments for SSL} \subsubsection{Network architectures and Hyperparameters} In the semi-supervised learning experiment, our main competitor is Virtual Adversarial Training (VAT) \cite{VAT}; we thus replicate their Conv-Large\footnote{$\mathrm{LReLU}$ indicates the Leaky ReLU \cite{maas2013rectifier} activation function with the negative slope equal to 0.1.} architecture as: \begin{align*} & \mathrm{32\times32\,RGB\,image\shortrightarrow3\times3\,conv.128\,LReLU}\\ & \mathrm{\shortrightarrow3\times3\,conv.128\,LReLU\shortrightarrow3\times3\,conv.128\,LReLU}\\ & \mathrm{\shortrightarrow2\times2\,MaxPool,stride\,2\shortrightarrow Dropout(0.5)}\\ & \mathrm{\shortrightarrow3\times3\,conv.256\,LReLU\shortrightarrow3\times3\,conv.256\,LReLU}\\ & \mathrm{\shortrightarrow3\times3\,conv.256\,LReLU\shortrightarrow2\times2\,MaxPool,stride\,2}\\ & \mathrm{\mathrm{\shortrightarrow Dropout(0.5)\shortrightarrow3\times3\;conv.512\,LReLU}}\\ & \mathrm{\shortrightarrow1\times1\,conv.256\,LReLU\shortrightarrow1\times1\,conv.128\,LReLU}\\ & \mathrm{\shortrightarrow Global\,Average\,Pool,6\times6\rightarrow1\times1\shortrightarrow FC_{128\times10}} \end{align*} We train the Conv-Large network for 600 epochs with a batch size of 128 using the SGD optimizer and a cosine annealing learning rate scheduler \cite{loshchilov2016sgdr}. The global and local trade-off parameters are adjusted by the exponential rampup from \cite{samuli2017temporal}: \[ \tau=\begin{cases} \exp\left[-5\left(1-\frac{\mathrm{epoch}}{\mathrm{rampup\,length}}\right)^{2}\right] & \mathrm{epoch<rampup\,length}\\ 1 & \mathrm{otherwise} \end{cases} \] with $\mathrm{rampup\,length=30}$; the initial trade-offs for the global and local terms are $0.1$ and $10$, respectively. \subsubsection{Experimental Results} In this section, we compare the single-epoch training time of LOT-DR and GLOT-DR (Section \ref{subsec:ssl}) against VAT. We repeat this process several times to get the average results, which are plotted in Figure \ref{fig:runtime}.
While VAT and LOT-DR run in almost equivalent time for all numbers of generated examples, GLOT-DR requires approximately $25\%$ extra running time. Note that this overhead is worthwhile given the superior performance and great flexibility it brings in different scenarios. \begin{figure} \centering{}\includegraphics[width=.8\columnwidth]{figures/runtime_bar}\caption{Running time of our proposed approach on an Intel(R) Xeon(R) CPU @ 2.00GHz and a Tesla P100 16GB VRAM GPU\label{fig:runtime}. Results are averaged over 3 runs.} \end{figure} \subsection{Experiments for AML} \subsubsection{General setting} We follow the setting in \cite{pang2020bag} for the experiments in the adversarial machine learning domain. Specifically, the experiments have been conducted on the CIFAR-10 dataset with the ResNet18 architecture. All models have been trained for 110 epochs with the SGD optimizer with momentum 0.9 and weight decay $5\times10^{-4}$. The initial learning rate is 0.1 and is reduced by a factor of 0.1 at the 100-th and 105-th epochs, as mentioned in \cite{pang2020bag}. \subsubsection{Attack setting} We use different SOTA attacks to evaluate the defense methods, including: (1) the PGD attack \cite{madry2017towards}, a gradient-based attack with parameters $\{k=200,\epsilon=8/255,\eta=2/255\}$, where $k$ is the number of attack iterations, $\epsilon$ is the perturbation boundary and $\eta$ is the step size of each iteration. (2) Auto-Attack (AA) \cite{croce2020reliable}, an ensemble of four different attacks. We use the standard version with $\epsilon=8/255$. (3) The B\&B attack \cite{brendel2019accurate}, a decision-based attack. Following \cite{tramer2020adaptive}, we initialize with the PGD attack with $k=20,\epsilon=8/255,\eta=2/255$ and then apply the B\&B attack with 200 steps. We use $L_{\infty}$ for measuring the perturbation size and use the full test set of 10k samples of the CIFAR-10 dataset in all experiments.
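For reference, the $L_{\infty}$ PGD attack described above admits a compact sketch; the following illustrates the update rule on a toy differentiable loss in NumPy rather than on the actual CIFAR-10 model (parameter names follow the text):

```python
import numpy as np

def pgd_linf(x, grad_fn, eps=8/255, eta=2/255, k=200):
    """L_inf PGD: signed-gradient ascent projected back onto the eps-ball."""
    x_adv = x.copy()
    for _ in range(k):
        g = grad_fn(x_adv)                        # gradient of the loss at x_adv
        x_adv = x_adv + eta * np.sign(g)          # ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the L_inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid image
    return x_adv

# toy loss L(z) = w . z, whose gradient is the constant vector w
w = np.array([1.0, -1.0, 0.0])
x = np.array([0.5, 0.5, 0.5])
x_adv = pgd_linf(x, lambda z: w, k=10)
# each coordinate moves to the boundary of the eps-ball in the sign(w) direction
```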
\subsubsection{Baseline setting} We compare our method with PGD-AT \cite{madry2017towards} and TRADES \cite{zhang2019theoretically}, which are two well-known defense methods in AML. PGD-AT seeks the most violating examples that maximize the loss w.r.t. the true hard label, $\mathcal{L}_{CE}(h_{\theta}(x_{a}),y)$, while TRADES seeks the most divergent examples by maximizing the KL divergence w.r.t. the current prediction (considered as a soft label), $\mathcal{L}_{KL}\left(h_{\theta}\left(x_{a}\right)\parallel h_{\theta}\left(x\right)\right)$. For a fair comparison, we use the same training setting for all methods, and successfully reproduce the performance of PGD-AT and TRADES as reported in \cite{pang2020bag}. We also compare with adversarial distributional training \cite{deng2020adversarial} (ADT-EXP and ADT-EXPAM), which assumes that the adversarial distribution explicitly follows a normal distribution. \subsection{Additional Experiments for DG} \begin{table*}[t] \caption{Single domain generalization accuracy (\%) on the CIFAR-10-C and CIFAR-100-C datasets with different backbone architectures\label{tab:cifar-c-1}.
We use \textbf{bold} font to highlight the best results.\vspace*{2mm}} \centering{}\resizebox{1.\columnwidth}{!}{%
\begin{tabular}{c|cccccccccc} \hline \multicolumn{1}{c}{Datasets} & Backbone & Standard & Cutout & CutMix & AutoDA & Mixup & AdvTrain & ADA & ME-ADA & GLOT-DR\tabularnewline \hline \multirow{5}{*}{CIFAR-10-C} & AllConvNet & 69.2 & 67.1 & 68.7 & 70.8 & 75.4 & 71.9 & 73 & 78.2 & \textbf{82.5}\tabularnewline & DenseNet & 69.3 & 67.9 & 66.5 & 73.4 & 75.4 & 72.4 & 69.8 & 76.9 & \textbf{83.6}\tabularnewline & WideResNet & 73.1 & 73.2 & 72.9 & 76.1 & 77.7 & 73.8 & 79.7 & 83.3 & \textbf{84.4}\tabularnewline & ResNeXt & 72.5 & 71.1 & 70.5 & 75.8 & 77.4 & 73 & 78 & 83.4 & \textbf{84.5}\tabularnewline \cline{2-11} & Average & 71 & 69.8 & 69.7 & 74 & 76.5 & 72.8 & 75.1 & 80.5 & \textbf{83.7}\tabularnewline \hline \hline \multirow{5}{*}{CIFAR-100-C} & AllConvNet & 43.6 & 43.2 & 44 & 44.9 & 46.6 & 44 & 45.3 & 51.2 & \textbf{54.8}\tabularnewline & DenseNet & 40.7 & 40.4 & 40.8 & 46.1 & 44.6 & 44.8 & 45.2 & 47.8 & \textbf{53.2}\tabularnewline & WideResNet & 46.7 & 46.5 & 47.1 & 50.4 & 49.6 & 44.9 & 50.4 & 52.8 & \textbf{56.5}\tabularnewline & ResNeXt & 46.6 & 45.4 & 45.9 & 48.7 & 48.6 & 45.6 & 53.4 & 57.3 & \textbf{58.4}\tabularnewline \cline{2-11} & Average & 44.4 & 43.9 & 44.5 & 47.5 & 47.4 & 44.8 & 48.6 & 52.3 & \textbf{55.7}\tabularnewline \hline \end{tabular}} \end{table*} In the DG experiments, our setup closely follows \cite{zhao_maximum}. In particular, we validate our method on the CIFAR-C single domain generalization benchmark: train the model on either the CIFAR-10 or CIFAR-100 dataset \cite{krizhevsky2009learning}, then evaluate it on CIFAR-10-C or CIFAR-100-C \cite{hendrycks2019robustness}, respectively.
In terms of network architectures, we use the exact backbones from \cite{zhao_maximum} to examine the versatility of our method, which can be adopted with any type of classifier. GLOT-DR is compared with other state-of-the-art methods in image corruption robustness: Mixup \cite{zhang2018mixup}, Cutout \cite{devries2017improved}, CutMix \cite{yun2019cutmix}, AutoDA \cite{cubuk2019autoaugment}, ADA \cite{volpi2018generalizing}, and ME-ADA \cite{zhao_maximum}. Table \ref{tab:cifar-c-1} shows the average accuracy when we alternately train the model on one category and evaluate on the rest. In every setting, GLOT-DR outperforms the other methods by large margins. Specifically, our method exceeds the second-best method ME-ADA \cite{zhao_maximum} by $3.2\%$ on CIFAR-10-C and $3.4\%$ on CIFAR-100-C. The substantial gain in accuracy on various backbone architectures demonstrates the high applicability of the proposed techniques. Furthermore, we examine multi-source DG, where the classifier needs to generalize from multiple source domains to an unseen target domain, using the popular PACS dataset \cite{li2017deeper}. Our proposed method is applicable in this scenario since it is designed to better learn domain-invariant features as well as leverage the diversity of generated data. We compare GLOT-DR against DSN \cite{bousmalis2016domain}, L-CNN \cite{li2017deeper}, MLDG \cite{li2018learning}, Fusion \cite{mancini2018best}, MetaReg \cite{balaji2018metareg}, Epi-FCR and AGG \cite{li2019episodic}, HEX \cite{wang2018learning}, and PAR \cite{NEURIPS2019_3eefceb8}. Table \ref{tab:pacs-1} shows that GLOT-DR outperforms the baselines in three of the four cases and on average surpasses the second-best baseline by $0.9\%$. The most noticeable improvement is on the Sketch domain ($\thickapprox2.4\%$), which is the most challenging: the images are colorless and their style is far from that of the Art Painting, Cartoon, or Photo domains (i.e., a larger domain shift).
\begin{table*}[t] \caption{Multi-source domain generalization accuracy (\%) on the PACS dataset\label{tab:pacs-1}. Each row title indicates the target domain used for evaluation, while the remaining domains are used for training.\vspace*{2mm}} \centering{}\resizebox{1.0\columnwidth}{!}{%
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline  & DSN & L-CNN & MLDG & Fusion & MetaReg & Epi-FCR & AGG & HEX & PAR & ADA & ME-ADA & GLOT-DR\tabularnewline \hline Art & 61.1 & 62.9 & 66.2 & 64.1 & 69.8 & 64.7 & 63.4 & 66.8 & 66.9 & 64.3 & \textbf{67.1} & 66.1\tabularnewline Cartoon & 66.5 & 67.0 & 66.9 & 66.8 & 70.4 & \textbf{72.3} & 66.1 & 69.7 & 67.1 & 69.8 & 69.9 & \textbf{72.3}\tabularnewline Photo & 83.3 & 89.5 & 88.0 & 90.2 & 91.1 & 86.1 & 88.5 & 87.9 & 88.6 & 85.1 & 88.6 & \textbf{90.4}\tabularnewline Sketch & 58.6 & 57.5 & 59.0 & 60.1 & 59.2 & 65.0 & 56.6 & 56.3 & 62.6 & 60.4 & 63.0 & \textbf{65.4}\tabularnewline \hline Average & 67.4 & 69.2 & 70.0 & 70.3 & 72.6 & 72 & 68.7 & 70.2 & 71.3 & 69.9 & 72.2 & \textbf{73.5}\tabularnewline \hline \end{tabular}} \end{table*} \subsection{Experiments for DA} In this section, we conduct experiments on the commonly used dataset for real-world unsupervised DA, Office-31 \cite{saenko2010adapting}, comprising $4,110$ images in $31$ classes from three domains: Amazon (A), Webcam (W) and DSLR (D). Our proposed GLOT-DR is compared against the baselines ResNet-50 \cite{he2016deep}, DAN \cite{pmlr-v37-long15}, RTN \cite{NIPS2016_ac627ab1}, DANN \cite{ganin2016domain}, JAN \cite{long2017deep}, GTA \cite{sankaranarayanan2018generate}, CDAN \cite{long2017conditional}, DeepJDOT \cite{damodaran2018deepjdot} and ETD \cite{li2020enhanced}. For a fair comparison, we follow the training setups of CDAN and compare with other works using this configuration. As can be seen from Table \ref{tab:office-31}, GLOT-DR achieves the best overall performance among the baselines with $87.4\%$ accuracy.
Compared with ETD, another OT-based domain adaptation method, GLOT-DR significantly increases performance: by $4.1\%$ on the A\textrightarrow W task, $2.1\%$ on W\textrightarrow A, and $1.2\%$ on average. \begin{table} \begin{centering} \caption{Accuracy (\%) on Office-31 \cite{saenko2010adapting} of the ResNet50 model \cite{he2016deep} for unsupervised domain adaptation methods.\label{tab:office-31}\vspace*{2mm}} \par\end{centering} \centering{}\resizebox{.75\columnwidth}{!}{%
\begin{tabular}{cccccccc} \hline Method & A\textrightarrow W & D\textrightarrow W & W\textrightarrow D & A\textrightarrow D & D\textrightarrow A & W\textrightarrow A & Avg\tabularnewline \hline ResNet & 68.4 & 96.7 & 99.3 & 68.9 & 62.5 & 60.7 & 76.1\tabularnewline DAN & 80.5 & 97.1 & 99.6 & 78.6 & 63.6 & 62.8 & 80.4\tabularnewline RTN & 70.2 & 96.6 & 95.5 & 66.3 & 54.9 & 53.1 & 72.8\tabularnewline DANN & 84.5 & 96.8 & 99.4 & 77.5 & 66.2 & 64.8 & 81.6\tabularnewline JAN & 82 & 96.9 & 99.1 & 79.7 & 68.2 & 67.4 & 82.2\tabularnewline GTA & 89.5 & 97.9 & 99.8 & 87.7 & 72.8 & \textbf{71.4} & 86.5\tabularnewline CDAN & 93.1 & 98.2 & \textbf{100} & \textbf{89.8} & 70.1 & 68 & 86.6\tabularnewline DeepJDOT & 88.9 & 98.5 & 99.6 & 88.2 & \textbf{72.1} & 70.1 & 86.2\tabularnewline ETD & 92.1 & \textbf{100} & \textbf{100} & 88 & 71 & 67.8 & 86.2\tabularnewline \hline GLOT-DR & \textbf{96.2} & 98.9 & \textbf{100.0} & 90.6 & 68.7 & 69.9 & \textbf{87.4}\tabularnewline \hline \end{tabular}} \end{table} We further investigate the role of the different components in GLOT-DR. Specifically, eliminating the global-regularization term in equation (\ref{eq:global_DA_SSL}) reduces our method to Local Optimal Transport based Distributional Robustness (LOT-DR). Similarly, discarding the local distributional robustness term yields the variant denoted GOT-DR. We then compare these two variants of GLOT-DR with the well-known adversarial machine learning method VAT \cite{VAT}.
To be more specific, for adversarial sample generation, we apply VAT by perturbing either: (i) the input space, or (ii) the latent space. Figure \ref{fig:office-31-ablation} shows that employing VAT on the latent space (orange) is more effective than on the input space (purple): $83\%$ vs. $80.6\%$. However, using GOT-DR or LOT-DR is even more effective: performance is boosted to $84.3\%$ and $85.4\%$, respectively. Lastly, the full method GLOT-DR yields the highest average accuracy. \subsection{Experiments for SSL\label{subsec:ssl}} \begin{figure} \begin{centering} \includegraphics[width=0.7\columnwidth]{figures/office-31}\vspace{-2mm} \caption{Average accuracy of ResNet50 \cite{he2016deep} on Office-31: Comparison between GLOT-DR's variants and VAT \cite{VAT} on the input and latent spaces\label{fig:office-31-ablation}. Best viewed in color.} \par\end{centering} \vspace{-3mm} \end{figure} Sharing a similar objective with DA, namely utilizing unlabeled samples to improve model performance, SSL methods can also benefit from our proposed technique. In this section, we present empirical results on the CIFAR-10 benchmark with the ConvLarge architecture, following VAT's protocol \cite{VAT}, which serves as a strong baseline in this experiment. We refer readers to Appendix \ref{sec:Additional-Experiments} for more details on the ConvLarge architecture. The results in Figure \ref{fig:ssl} (when training with 1000 and 4000 labeled examples) demonstrate that, with only $n^{S}=n^{T}=1$ perturbed sample per anchor, LOT-DR slightly outperforms VAT by $\sim0.5\%$. With more perturbed samples per anchor, this gap increases: approximately $1\%$ when $n^{S}=n^{T}=2$ and $1.5\%$ when $n^{S}=n^{T}=4$. Similar to the previous DA experiment, adding the global regularization term increases accuracy by approximately $1\%$ in this setting.
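The local regularization used by LOT-DR and GLOT-DR penalizes the divergence between the model's predictions on an anchor and on its perturbed counterparts; as defined in our framework, $s(X,\tilde{X};\psi)$ is the symmetric KL divergence between the two prediction distributions. A minimal numpy sketch, with made-up softmax outputs standing in for $f_{\psi}$:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions (eps avoids log(0))."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def sym_kl(p, q):
    # s(X, X_tilde): symmetric KL between the anchor's prediction and
    # the perturbed sample's prediction, as in the local regularizer.
    return 0.5 * kl(p, q) + 0.5 * kl(q, p)

p_anchor = np.array([0.7, 0.2, 0.1])     # f_psi(X), made-up prediction
p_perturbed = np.array([0.6, 0.3, 0.1])  # f_psi(X_tilde), made-up prediction

local_reg = sym_kl(p_anchor, p_perturbed)
```

The symmetry means the penalty does not privilege the anchor over the perturbed sample, and it vanishes exactly when the two predictions agree, consistent with $s(X,X;\psi)=0$.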
\begin{figure}[t] \centering{}\includegraphics[width=1\columnwidth]{figures/ssl2}\vspace{-2mm} \caption{Accuracy (\%) on CIFAR-10 of the ConvLarge model in SSL settings when using 1,000 and 4,000 labeled examples (i.e., 100 and 400 labeled samples per class, respectively). Best viewed in color. \label{fig:ssl}} \vspace{-5mm} \end{figure} \subsection{Experiments for AML} Table \ref{tab:aml-cf10-r18} shows the evaluation against adversarial examples. We compare our method with PGD-AT \cite{madry2017towards} and TRADES \cite{zhang2019theoretically}, two well-known defense methods in AML. For the sake of fair comparison, we use the same adversarial training setting for all methods, which is carefully investigated in \cite{pang2020bag}. The detailed setting can be found in Appendix \ref{sec:Additional-Experiments}. We also compare with the adversarial distributional training methods \cite{deng2020adversarial} (ADT-EXP and ADT-EXPAM), which assume that the adversarial distribution explicitly follows a normal distribution. It can be seen from Table \ref{tab:aml-cf10-r18} that our GLOT-DR method outperforms all these baselines in both natural and robust performance. Specifically, compared with PGD-AT, our method improves natural accuracy by 0.8\% and robust accuracy by around 1\% against the PGD200 and AA attacks. Compared with TRADES, while achieving the same level of robustness, our method performs better on benign examples with a gap of 2.5\%. Notably, our method significantly outperforms ADT by around 7\% under the PGD200 attack. \begin{table} \begin{centering} \caption{Adversarial robustness evaluation on CIFAR-10 of the ResNet18 model. PGD, AA and B\&B represent the robust accuracy against the PGD attack (with 10/200 iterations) \cite{madry2017towards}, Auto-Attack \cite{croce2020reliable} and the B\&B attack \cite{brendel2019accurate}, respectively, while NAT denotes the natural accuracy.
\label{tab:aml-cf10-r18}\vspace*{2mm}} \par\end{centering} \centering{}\resizebox{.65\columnwidth}{!}{%
\begin{tabular}{cccccc} \hline Method & NAT & PGD10 & PGD200 & AA & B\&B\tabularnewline \hline $\text{PGD-AT}^{\star}$ & 82.52 & 53.58 & - & 48.51 & -\tabularnewline $\text{TRADES}^{\star}$ & 81.45 & 53.51 & - & 49.06 & -\tabularnewline $\text{PGD-AT}^{\diamond}$ & 83.36 & 53.52 & 52.21 & 49.00 & 48.50\tabularnewline $\text{TRADES}^{\diamond}$ & 81.64 & 53.73 & 53.11 & 49.77 & 49.02\tabularnewline $\text{ADT-EXP}$ & 83.02 & - & 45.80 & 45.80 & 46.50\tabularnewline $\text{ADT-EXPAM}$ & 84.11 & - & 46.10 & 44.50 & 45.83\tabularnewline \hline GLOT-DR & \textbf{84.13} & \textbf{54.13} & \textbf{53.18} & \textbf{49.94} & \textbf{49.40}\tabularnewline \hline \end{tabular}} \end{table} \blfootnote{$^{\star}$ Results are taken from Pang et al. \cite{pang2020bag}} \blfootnote{$^{\diamond}$ Our reproduced results} \subsection{Our Framework \label{subsec:Our-Framework}} We propose a regularization technique based on optimal transport DR, which can be widely applied to many settings, including i) \emph{semi-supervised learning} (SSL), ii) \emph{domain adaptation} (DA), iii) \emph{domain generalization} (DG), and iv) \emph{adversarial machine learning} (AML). In what follows, we present the general setting and the technical details of our framework. Assume that we have \emph{multiple labeled source domains} with \emph{data/label} distributions $\left\{ \mathbb{P}_{k}^{S}\right\} _{k=1}^{K}$ and a \emph{single unlabeled target domain} with \emph{data} distribution $\mathbb{P}^{T}$. For the $k$-th source domain, we sample a batch of $B_{k}^{S}$ examples as $\left(X_{ki}^{S},Y_{ki}^{S}\right)\iid\mathbb{P}_{k}^{S}$, where $i=1,\ldots,B_{k}^{S}$ is the sample index. Meanwhile, for the target domain, we sample a batch of $B^{T}$ examples as $X_{i}^{T}\iid\mathbb{P}^{T},\,i=1,\ldots,B^{T}$.
It is worth noting that for the DG setting, we set $B^{T}=0$ (i.e., we do not use any target data in training). Furthermore, we examine the multi-class classification problem with the label set $\mathcal{Y}:=\left\{ 1,...,M\right\} $. Hence, the prediction of a classifier is a probability vector belonging to the \emph{label simplex} $\Delta_{M}:=\left\{ \pi\in\mathbb{R}^{M}:\norm{\pi}_{1}=1\,\text{and}\,\pi\geq\bzero\right\} $. Finally, let $f_{\psi}=h_{\theta}\circ g_{\phi}$ be our deep network with parameters $\psi=(\phi,\theta)$, wherein $g_{\phi}$ is the feature extractor and $h_{\theta}$ is the classifier on top of the feature representations. As explained below, our method involves the construction of a random variable $Z$ with distribution $\mathbb{P}$ and another random variable $\tilde{Z}$ with distribution $\tilde{\mathbb{P}}$, ``containing'' the anchor samples $\left(X_{ki}^{S},Y_{ki}^{S}\right),X_{i}^{T}$ and their perturbed counterparts $\left(\tilde{X}_{kij}^{S},\tilde{Y}_{kij}^{S}\right),\tilde{X}_{ij}^{T}$. The inclusion of both anchor and perturbed samples allows us to define a unifying cost function containing the local regularization, the global regularization, and the classification loss. Concretely, we first construct $Z$, containing repeated anchor samples: \begin{align} Z & :=\left[\left[\left[X_{kij}^{S},Y_{kij}^{S}\right]_{k=1}^{K}\right]_{i=1}^{B_{k}^{S}}\right]_{j=0}^{n^{S}},\left[\left[X_{ij}^{T}\right]_{i=1}^{B^{T}}\right]_{j=0}^{n^{T}}.\label{eq:Z_construct} \end{align} Here, each source sample is repeated $n^{S}+1$ times, i.e., $\left(X_{kij}^{S},Y_{kij}^{S}\right)=\left(X_{ki}^{S},Y_{ki}^{S}\right),\,\forall j$, while each target sample is repeated $n^{T}+1$ times, i.e., $X_{ij}^{T}=X_{i}^{T},\,\forall j$. The distribution of this random variable is denoted by $\mathbb{P}$.
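In array form, the construction of $Z$ in equation (\ref{eq:Z_construct}) amounts to repeating each anchor $n^{S}+1$ times along a new axis; a toy numpy sketch (the batch size, feature dimension, and labels here are arbitrary illustrative choices):

```python
import numpy as np

B_S, n_S, d = 3, 2, 5            # batch size, perturbed samples per anchor, feature dim
X_S = np.random.randn(B_S, d)    # a toy batch of source anchors
Y_S = np.array([0, 2, 1])        # their labels

# Each anchor is repeated n_S + 1 times (index j = 0..n_S), matching Z's layout.
Z_X = np.repeat(X_S[:, None, :], n_S + 1, axis=1)   # shape (B_S, n_S + 1, d)
Z_Y = np.repeat(Y_S[:, None], n_S + 1, axis=1)      # shape (B_S, n_S + 1)
```

The copy at $j=0$ later stays pinned as the anchor, while the copies at $j\geq1$ are the slots that the perturbed counterparts in $\tilde{Z}$ will occupy.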
In contrast to $Z$, we next define the random variable $\tilde{Z}\sim\tilde{\mathbb{P}}$, whose form is: \begin{equation} \tilde{Z}:=\left[\left[\left[\tilde{X}_{kij}^{S},\tilde{Y}_{kij}^{S}\right]_{k=1}^{K}\right]_{i=1}^{B_{k}^{S}}\right]_{j=0}^{n^{S}},\left[\left[\tilde{X}_{ij}^{T}\right]_{i=1}^{B^{T}}\right]_{j=0}^{n^{T}}.\label{eq:Z_til_construct} \end{equation} We would like $\tilde{Z}$ to contain both: i) the anchor examples, e.g., $\left(\tilde{X}_{ki0}^{S},\tilde{Y}_{ki0}^{S}\right)=\left(X_{ki}^{S},Y_{ki}^{S}\right)$ and $\tilde{X}_{i0}^{T}=X_{i}^{T}$; and ii) $n^{S}$ perturbed source samples $\left\{ \tilde{X}_{kij}^{S},\tilde{Y}_{kij}^{S}\right\} _{j=1}^{n^{S}}$ and $n^{T}$ perturbed target samples $\left\{ \tilde{X}_{ij}^{T}\right\} _{j=1}^{n^{T}}$. To enforce these requirements, we only consider distributions $\tilde{\mathbb{P}}$ inside the Wasserstein ball of $\mathbb{P}$, i.e., satisfying $\mathcal{W}_{\rho}\left(\mathbb{P},\tilde{\mathbb{P}}\right):=\inf_{\gamma\in\Gamma\left(\mathbb{P},\tilde{\mathbb{P}}\right)}\mathbb{E}_{\left(Z,\tilde{Z}\right)\sim\gamma}\left[\rho\left(Z,\tilde{Z}\right)\right]^{\frac{1}{q}}\leq\epsilon$, where the cost metric is defined as \begin{align*} \rho\left(Z,\tilde{Z}\right) & :=\infty\sum_{k=1}^{K}\sum_{i=1}^{B_{k}^{S}}\norm{X_{ki0}^{S}-\tilde{X}_{ki0}^{S}}_{p}^{q}\\ & + \infty\sum_{i=1}^{B^{T}}\norm{X_{i0}^{T}-\tilde{X}_{i0}^{T}}_{p}^{q} +\sum_{k=1}^{K}\sum_{i=1}^{B_{k}^{S}}\sum_{j=1}^{n^{S}}\norm{X_{kij}^{S}-\tilde{X}_{kij}^{S}}_{p}^{q} \\ & + \sum_{i=1}^{B^{T}}\sum_{j=1}^{n^{T}}\norm{X_{ij}^{T}-\tilde{X}_{ij}^{T}}_{p}^{q} +\infty\sum_{k=1}^{K}\sum_{i=1}^{B_{k}^{S}}\sum_{j=0}^{n^{S}}\rho_{l}\left(Y_{kij}^{S},\tilde{Y}_{kij}^{S}\right), \end{align*} where $\rho_{l}$ is a metric on the \emph{label simplex} $\Delta_{M}$. Here we slightly abuse the notation by using $Y\in\mathcal{Y}$ to represent its corresponding one-hot vector.
By definition, this cost metric almost surely: i) enforces all $j=0$ samples in $\tilde{Z}$ to be anchor samples; ii) allows perturbations of the input data, e.g., $\tilde{X}_{kij}^{S}\neq X_{ki}^{S}$ and $\tilde{X}_{ij}^{T}\neq X_{i}^{T}$ for all $j\neq0$; and iii) prohibits perturbations of the labels, i.e., $Y_{kij}^{S}=\tilde{Y}_{kij}^{S}$ for all $j$ (cf. Figure \ref{fig:Overview-of-GLOT-DR.}). With $\tilde{Z}$ and $\tilde{\mathbb{P}}$ clearly defined, we wish to learn good representations and regularize the classifier $f_{\psi}$ via the following distributional robustness problem: \begin{equation} \min_{\theta,\phi}\max_{\tilde{\mathbb{P}}:\mathcal{W}_{\rho}\left(\mathbb{P},\tilde{\mathbb{P}}\right)\leq\epsilon}\mathbb{E}_{\tilde{Z}\sim\tilde{\mathbb{P}}}\left[r\left(\tilde{Z};\phi,\theta\right)\right].\label{eq:dro_p1} \end{equation} The cost function $r\left(\tilde{Z};\phi,\theta\right):=\alpha r^{l}\left(\tilde{Z};\phi,\theta\right)+\beta r^{g}\left(\tilde{Z};\phi,\theta\right)+\mathcal{L}\left(\tilde{Z};\phi,\theta\right)$ with $\alpha,\beta>0$ is defined as the sum of a \emph{local-regularization function} $r^{l}\left(\tilde{Z};\phi,\theta\right)$, a \emph{global-regularization function} $r^{g}\left(\tilde{Z};\phi,\theta\right)$, and the \emph{loss function} $\mathcal{L}\left(\tilde{Z};\phi,\theta\right)$, whose explicit forms depend on the task (DA, SSL, DG, AML, etc.). Intuitively, the optimization in equation (\ref{eq:dro_p1}) iteratively searches for the worst-case $\tilde{\mathbb{P}}$ w.r.t. the cost $r\left(\cdot;\phi,\theta\right)$, then updates the network $f_{\psi}$ to minimize the worst-case cost.
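To make the hard constraints in $\rho$ concrete: the $\infty$-weighted terms mean that any transport plan which moves an anchor ($j=0$ copy) or alters a label has infinite cost, while the $j\geq1$ copies may move at finite cost. A toy numpy sketch for a single source anchor (no target batch, $q=1$; the helper `rho` is an illustrative simplification of the full metric, not its exact form):

```python
import numpy as np

def rho(Z, Z_tilde, Y, Y_tilde, p=2):
    """Toy cost for one source anchor: row j=0 is the anchor (infinite
    cost if moved), rows j>=1 are perturbable copies; any label change
    also costs infinity."""
    if not np.array_equal(Y, Y_tilde):
        return np.inf                    # labels may never be perturbed
    if not np.allclose(Z[0], Z_tilde[0]):
        return np.inf                    # the j=0 anchor may never move
    cost = 0.0
    for j in range(1, Z.shape[0]):       # j >= 1 copies move at finite cost
        cost += np.linalg.norm(Z[j] - Z_tilde[j], ord=p)
    return cost

anchor = np.zeros(4)
Z = np.stack([anchor] * 3)               # j = 0, 1, 2: repeated anchor
Y = np.array([1, 1, 1])

Z_ok = Z.copy(); Z_ok[1] += 0.1          # perturb a j>=1 copy: finite cost
Z_bad = Z.copy(); Z_bad[0] += 0.1        # move the anchor: infinite cost
```

Any $\tilde{\mathbb{P}}$ within the $\epsilon$-ball must therefore keep the anchors and labels fixed and spend its entire budget on the perturbed copies, which is exactly what makes the inner maximization a search over local perturbations.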
Let us define \[ \Gamma_{\epsilon}:=\left\{ \gamma:\gamma\in\cup_{\tilde{\mathbb{P}}}\Gamma\left(\mathbb{P},\tilde{\mathbb{P}}\right)\,\text{and}\,\mathbb{E}_{\left(Z,\tilde{Z}\right)\sim\gamma}\left[\rho\left(Z,\tilde{Z}\right)\right]^{1/q}\leq\epsilon\right\} , \] and show that the inner max problem in equation (\ref{eq:dro_p1}) is equivalent to searching over $\Gamma_{\epsilon}$. \begin{lem} \label{lem:dro_p2}(Proof in Appendix \ref{sec:Proofs}) The optimization problem in equation (\ref{eq:dro_p1}) is equivalent to the following optimization problem: \begin{equation} \min_{\theta,\phi}\max_{\gamma\in\Gamma_{\epsilon}}\mathbb{E}_{\left(Z,\tilde{Z}\right)\sim\gamma}\left[r\left(\tilde{Z};\phi,\theta\right)\right].\label{eq:dro_p2} \end{equation} \end{lem} To tackle the optimization problem (OP) in equation (\ref{eq:dro_p2}), we add an entropic regularization term and arrive at the following OP: \begin{equation} \min_{\theta,\phi}\max_{\gamma\in\Gamma_{\epsilon}}\left\{ \mathbb{E}_{\left(Z,\tilde{Z}\right)\sim\gamma}\left[r\left(\tilde{Z};\phi,\theta\right)\right]+\frac{1}{\lambda}\mathbb{H}\left(\gamma\right)\right\} ,\label{eq:dro_entropic} \end{equation} where $\lambda>0$ is the entropic regularization parameter and $\mathbb{H}$ returns the entropy of a given distribution. The following theorem gives the optimal solution of the inner max of the OP in equation (\ref{eq:dro_entropic}). \begin{thm} \label{thm:opt_sol}(Proof in Appendix \ref{sec:Proofs}) Assume that $r\left(\tilde{Z};\psi\right)=\alpha r^{l}\left(\tilde{Z};\psi\right)+\beta r^{g}\left(\tilde{Z};\psi\right)+\mathcal{L}\left(\tilde{Z};\psi\right)$ with $\psi=(\phi,\theta)$. In addition, $Z$ and $\tilde{Z}$ are constructed as in equation (\ref{eq:Z_construct}) and equation (\ref{eq:Z_til_construct}), respectively.
Let $\ell$ denote the loss function, so that the classification loss is \[ \mathcal{L}\left(\tilde{Z};\psi\right):=\sum_{k=1}^{K}\sum_{i=1}^{B_{k}^{S}}\sum_{j=0}^{n_{k}^{S}}\ell\left(\tilde{X}_{kij}^{S},\tilde{Y}_{kij}^{S};\psi\right). \] Moreover, let the global-regularization $r^{g}\left(\tilde{Z};\psi\right):=r^{g}\left(\left[\tilde{X}_{ki0}^{S}\right]_{k,i},\left[\tilde{X}_{i0}^{T}\right]_{i};\psi\right)$ depend only on the anchor samples, while the local-regularization depends on the differences between the anchor samples and the perturbed samples, \begin{align*} r^{l}\left(\tilde{Z};\psi\right) & :=\sum_{i=1}^{B^{T}}\sum_{j=1}^{n^{T}}s\left(\tilde{X}_{i0}^{T},\tilde{X}_{ij}^{T};\psi\right) + \sum_{k=1}^{K}\sum_{i=1}^{B_{k}^{S}}\sum_{j=1}^{n_{k}^{S}}s\left(\tilde{X}_{ki0}^{S},\tilde{X}_{kij}^{S};\psi\right), \end{align*} where $s\left(\tilde{X}_{0},\tilde{X}_{j};\psi\right)$ measures the difference between two input samples and $s\left(X,X;\psi\right)=0,\forall X$. Then the inner max of the OP in equation (\ref{eq:dro_entropic}) with $q=\infty$ has the following solution: \begin{align} \gamma^{*}\left(Z,\tilde{Z}\right) & =\prod_{k=1}^{K}\prod_{i=1}^{B_{k}^{S}}\prod_{j=0}^{n_{k}^{S}}p_{k}^{S}\left(X_{ki}^{S},Y_{ki}^{S}\right)\prod_{i=1}^{B^{T}}\prod_{j=0}^{n^{T}}p^{T}\left(X_{i}^{T}\right)\nonumber \\ & \times\prod_{k=1}^{K}\prod_{i=1}^{B_{k}^{S}}\prod_{j=1}^{n_{k}^{S}}\frac{\exp\left\{ \lambda[\alpha s(X_{ki}^{S},\tilde{X}_{kij}^{S};\psi)+\ell(\tilde{X}_{kij}^{S},Y_{ki}^{S};\psi)]\right\} }{\int_{B_{\epsilon}\left(X_{ki}^{S}\right)}\exp\left\{ \lambda[\alpha s(X_{ki}^{S},\tilde{X}_{kij}^{S};\psi)+\ell(\tilde{X}_{kij}^{S},Y_{ki}^{S};\psi)]\right\} d\tilde{X}_{kij}^{S}}\nonumber \\ & \times\prod_{i=1}^{B^{T}}\prod_{j=1}^{n^{T}}\frac{\exp\left\{ \lambda\alpha s\left(X_{i}^{T},\tilde{X}_{ij}^{T};\psi\right)\right\} }{\int_{B_{\epsilon}\left(X_{i}^{T}\right)}\exp\left\{ \lambda\alpha s\left(X_{i}^{T},\tilde{X}_{ij}^{T};\psi\right)\right\} d\tilde{X}_{ij}^{T}},\label{eq:opt_gamma} \end{align} where
$\left(X_{ki}^{S},Y_{ki}^{S}\right)_{i=1}^{B_{k}^{S}}\iid\mathbb{P}_{k}^{S},\forall k$, $X_{1:B^{T}}^{T}\iid\mathbb{P}^{T}$, $p_{k}^{S}$ is the density function of $\mathbb{P}_{k}^{S}$, $p^{T}$ is the density function of $\mathbb{P}^{T}$, and $B_{\epsilon}\left(X\right):=\left\{ X':\norm{X'-X}_{p}\leq\epsilon\right\} $. \end{thm} By substituting the optimal solution in equation (\ref{eq:opt_gamma}) back into equation (\ref{eq:dro_p2}), we reach the following OP with $\psi=(\phi,\theta)$: \begin{align} \min_{\psi}\mathbb{E}_{\forall k:\left(X_{ki}^{S},Y_{ki}^{S}\right)_{i=1}^{B_{k}^{S}}\iid\mathbb{P}_{k}^{S},X_{1:B^{T}}^{T}\iid\mathbb{P}^{T}}\left[r\left(\tilde{Z};\psi\right)\right],\label{eq:last_opt} \end{align} where $r\left(\tilde{Z};\psi\right)$ is defined as \begin{multline} \sum_{k=1}^{K}\sum_{i=1}^{B_{k}^{S}}\mathbb{E}_{\left[\tilde{X}_{kij}^{S}\right]_{j}\sim\mathbb{P}_{ki}^{S}}\left[\sum_{j=1}^{n^{S}}\left(\alpha s(X_{ki}^{S},\tilde{X}_{kij}^{S};\psi)+\ell(\tilde{X}_{kij}^{S},Y_{ki}^{S};\psi)\right)\right]+\\ \sum_{i=1}^{B^{T}}\mathbb{E}_{\left[\tilde{X}_{ij}^{T}\right]_{j}\sim\mathbb{P}_{i}^{T}}\left[\sum_{j=1}^{n^{T}}\alpha s\left(X_{i}^{T},\tilde{X}_{ij}^{T};\psi\right)\right]+\beta r^{g}\left(\left[X_{ki}^{S}\right]_{k,i},\left[X_{i}^{T}\right]_{i};\psi\right),\label{eq:final_opt} \end{multline} where the \emph{local distribution} $\mathbb{P}_{ki}^{S}$ over $B_{\epsilon}\left(X_{ki}^{S}\right)$ has density proportional to $\exp\left\{ \lambda[\alpha s(X_{ki}^{S},\cdot;\psi)+\ell(\cdot,Y_{ki}^{S};\psi)]\right\} $, and the \emph{local distribution} $\mathbb{P}_{i}^{T}$ over $B_{\epsilon}\left(X_{i}^{T}\right)$ has density proportional to $\exp\left\{ \lambda\alpha s\left(X_{i}^{T},\cdot;\psi\right)\right\} $.
It is worth noting how flexible the global-regularization function $r^{g}\left(\left[X_{ki}^{S}\right]_{k,i},\left[X_{i}^{T}\right]_{i};\psi\right)$ \label{potential} is in enforcing various characteristics suitable for the task, e.g., bridging the gap between labeled (source) and unlabeled (target) data in SSL and DA, or learning domain-invariant features in DG. Moreover, our global and local regularization terms can be naturally applied to the latent space induced by the feature extractor $g_{\phi}$. The theory development for this case parallels that for the data space, with $X$ in the data space replaced by $g_{\phi}\left(X\right)$ in the latent space. \subsection{Training Procedure of Our Approach \label{subsec:training-procedure}} In what follows, we present how to solve the OP in equation (\ref{eq:last_opt}) efficiently. First, we sample $\left(X_{ki}^{S},Y_{ki}^{S}\right)_{i=1}^{B_{k}^{S}}\iid\mathbb{P}_{k}^{S},\forall k\,\text{and}\,X_{1:B^{T}}^{T}\iid\mathbb{P}^{T}$. For each source anchor $\left(X_{ki}^{S},Y_{ki}^{S}\right)$, we sample $\left[\tilde{X}_{kij}^{S}\right]_{j=1}^{n^{S}}\iid\mathbb{P}_{ki}^{S}$ in the ball $B_{\epsilon}\left(X_{ki}^{S}\right)$ with \emph{density function proportional} to $\exp\left\{ \lambda[\alpha s(X_{ki}^{S},\cdot;\psi)+\ell(\cdot,Y_{ki}^{S};\psi)]\right\} $. Furthermore, for each target anchor $X_{i}^{T}$, we sample $\left[\tilde{X}_{ij}^{T}\right]_{j=1}^{n^{T}}\iid\mathbb{P}_{i}^{T}$ in the ball $B_{\epsilon}\left(X_{i}^{T}\right)$ with \emph{density function proportional} to $\exp\left\{ \lambda\alpha s\left(X_{i}^{T},\cdot;\psi\right)\right\} $. To sample the particles from their local distributions, we use Stein Variational Gradient Descent (SVGD) \cite{NIPS2016_b3ba8f1b}, a particle-based inference approach that uses functional gradient descent to approximate a ground-truth distribution known up to a normalizing factor.
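A minimal numpy sketch of the (unprojected) SVGD update on a toy one-dimensional target, a standard normal whose log-density gradient is known in closed form, standing in for the local distributions above; the kernel width, step size, and particle count are illustrative choices, and the projection step of the full algorithm is omitted:

```python
import numpy as np

def rbf(x):
    """Pairwise RBF kernel K and its gradient dK[i, j] = d/dx_i k(x_i, x_j)."""
    diff = x[:, None] - x[None, :]
    sigma2 = 1.0                       # fixed kernel width for simplicity
    K = np.exp(-diff**2 / (2 * sigma2))
    dK = -diff / sigma2 * K
    return K, dK

def svgd_step(x, grad_logp, eta=0.1):
    K, dK = rbf(x)
    n = len(x)
    # Driving term (kernel-weighted gradients) plus repulsive term.
    phi = (K @ grad_logp(x) + dK.sum(axis=0)) / n
    return x + eta * phi

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=0.5, size=50)   # particles start far from the target
for _ in range(500):
    x = svgd_step(x, grad_logp=lambda z: -z)  # target N(0, 1): grad log p(z) = -z
```

After the updates the particle cloud drifts to the high-density region around zero while the repulsive term keeps it spread out, rather than letting all particles collapse to the mode.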
The update formula of SVGD consists of two terms with different roles: the first term drives the particles towards the high-probability regions of the distribution by following a smoothed gradient direction (i.e., the kernel-weighted sum of the gradients of all the particles), while the second term acts as a repulsive force that prevents the particles from collapsing together into local modes of the distribution \cite{NIPS2016_b3ba8f1b}. More specifically, to keep the particles inside their balls, we employ projected SVGD, as presented in Algorithm \ref{alg:svgd}. In our experiments, we use an RBF kernel with kernel width $\sigma$: $k\left(X,\tilde{X}\right)=\exp\left\{ -\norm{X-\tilde{X}}^{2}/\left(2\sigma^{2}\right)\right\} $. After obtaining the particles $\tilde{X}_{kij}^{S}$ and $\tilde{X}_{ij}^{T}$, we use them to minimize the objective function in equation (\ref{eq:last_opt}) for updating $\psi=(\phi,\theta)$. Specifically, we use the cross-entropy loss for the classification term $\ell$ and the symmetric Kullback-Leibler (KL) divergence for the local regularization term $s\left(X,\tilde{X};\psi\right)$: \[ \frac{1}{2}KL\left(f_{\psi}\left(X\right)\Vert f_{\psi}\left(\tilde{X}\right)\right)+\frac{1}{2}KL\left(f_{\psi}\left(\tilde{X}\right)\Vert f_{\psi}\left(X\right)\right). \] Finally, the global-regularization function $r^{g}\left(\left[X_{ki}^{S}\right]_{k,i},\left[X_{i}^{T}\right]_{i};\psi\right)$ is defined according to the task and is presented explicitly in the sequel. \begin{algorithm}[H] \caption{Projected SVGD.\label{alg:svgd}} \begin{algorithmic} \STATE \textbf{Input:} A local distribution around $X$ with an unnormalized density function $\tilde{p}(\cdot)$ and a set of initial particles $\{X_{i}^{0}\}_{i=1}^{n}$. \STATE \textbf{Output:} A set of particles $\{X_{i}\}_{i=1}^{n}$ that approximates the local distribution corresponding to $\tilde{p}(\cdot)$.
\FOR{$l=1$ to $L$} \STATE $X_{i}^{l+1}=\Pi_{B_{\epsilon}\left(X\right)}\left[X_{i}^{l}+\eta_{l}\hat{\phi}^{*}(X_{i}^{l})\right]$, where $\hat{\phi}^{*}(X)=\frac{1}{n}\sum_{j=1}^{n}[k(X_{j}^{l},X)\nabla_{X_{j}^{l}}\log\tilde{p}(X_{j}^{l})+\nabla_{X_{j}^{l}}k(X_{j}^{l},X)]$, $\Pi$ denotes the projection, and $\eta_{l}$ is the step size at the $l^{\text{th}}$ iteration. \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Setting for Domain Adaptation and Semi-supervised Learning} By considering the single source domain as the labeled portion and the target domain as the unlabeled portion, the same setting can be employed for both DA and SSL. In particular, we denote the data/label distribution of the source domain or labeled portion by $\mathbb{P}_{1}^{S|l}$ and the data distribution of the target domain or unlabeled portion by $\mathbb{P}^{T|u}$. Notice that for SSL, $\mathbb{P}^{T|u}$ could be the marginal of $\mathbb{P}_{1}^{S|l}$ obtained by marginalizing out the label dimension. Evidently, with this consideration, DA and SSL are special cases of our general framework in Section \ref{subsec:Our-Framework}, where the global-regularization function of interest $r^{g}\left(\left[X_{i}^{S}\right]_{i},\left[X_{j}^{T}\right]_{j};\psi\right)$ is defined as \begin{equation} \mathcal{W}_{d}\left(\frac{1}{B^{S}}\sum_{i=1}^{B^{S}}\delta_{U_{i}^{S}},\frac{1}{B^{T}}\sum_{j=1}^{B^{T}}\delta_{U_{j}^{T}}\right),\label{eq:global_DA_SSL} \end{equation} where $U_{i}^{S}=\left[g_{\phi}\left(X_{i}^{S}\right),h_{\theta}\left(g_{\phi}\left(X_{i}^{S}\right)\right)\right]$, $U_{j}^{T}=\left[g_{\phi}\left(X_{j}^{T}\right),h_{\theta}\left(g_{\phi}\left(X_{j}^{T}\right)\right)\right]$, and $\delta$ is the Dirac delta distribution.
The cost metric $d$ is defined as \begin{equation} d\left(U_{i}^{S},U_{j}^{T}\right):=\rho_{d}\left(g_{\phi}\left(X_{i}^{S}\right),g_{\phi}\left(X_{j}^{T}\right)\right)+\gamma\rho_{l}\left(h_{\theta}\left(g_{\phi}\left(X_{i}^{S}\right)\right),h_{\theta}\left(g_{\phi}\left(X_{j}^{T}\right)\right)\right),\label{eq:ot_cost} \end{equation} where $\rho_{d}$ is a metric on the latent space and $\gamma>0$. With the global term in equation (\ref{eq:global_DA_SSL}), we aim to reduce the discrepancy gap between the \emph{source (labeled)} domain and the \emph{target (unlabeled)} domain so as to learn domain-invariant representations. It is worth noting that this global term in equation (\ref{eq:global_DA_SSL}) was investigated in DeepJDOT \cite{damodaran2018deepjdot} for the DA setting; our approach differs from DeepJDOT in the local regularization term. \subsection{Setting for Domain Generalization} By setting $B^{T}=0$ (i.e., not using any target data in training), our general framework in Section \ref{subsec:Our-Framework} is applicable to DG, where the global-regularization function of interest $r^{g}\left(\left[X_{ki}^{S}\right]_{k,i},\left[X_{i}^{T}\right]_{i};\psi\right)$ is defined as \begin{equation} \sum_{m=1}^{M}\sum_{k=1}^{K}\frac{1}{K}\mathcal{W}_{d}\left(\tilde{\mathbb{P}}_{km},\tilde{\mathbb{P}}_{m}\right),\label{eq:global_DG} \end{equation} where the cost metric $d=\rho_{d}$ is a metric on the latent space, $\tilde{\mathbb{P}}_{km}$ is the empirical distribution over $g_{\phi}\left(X_{ki}^{S}\right)$ with $Y_{ki}^{S}=m$, and $\tilde{\mathbb{P}}_{m}=\frac{1}{K}\sum_{k=1}^{K}\tilde{\mathbb{P}}_{km}$. \subsection{Setting for Adversarial Machine Learning} For AML, we have only a \emph{single source domain} and need to train a deep model that is robust to adversarial examples.
We denote the data/label distribution of the source domain by $\mathbb{P}_{1}^{S}$ and propose using a dynamic, pseudo target domain of \emph{on-the-fly adversarial examples} $\left[\left[X_{1ij}^{S}\right]_{i=1}^{B^{S}}\right]_{j=1}^{n^{S}}$. In addition to the local and loss terms as in equation (\ref{eq:last_opt}), to strengthen model robustness, we propose the following global term to move the adversarial examples ($\sim\mathbb{P}^{T}$) towards the benign examples ($\sim\mathbb{P}_{1}^{S}$): \begin{equation} \mathcal{W}_{d}\left(\frac{1}{B_{1}^{S}}\sum_{i=1}^{B_{1}^{S}}\delta_{U_{i}^{S}},\frac{1}{B_{1}^{S}n^{S}}\sum_{i=1}^{B_{1}^{S}}\sum_{j=1}^{n^{S}}\delta_{U_{ij}^{S}}\right),\label{eq:global_AML} \end{equation} where $U_{i}^{S}=\left[g_{\phi}\left(X_{1i}^{S}\right),h_{\theta}\left(g_{\phi}\left(X_{1i}^{S}\right)\right)\right]$, $U_{ij}^{S}=\left[g_{\phi}\left(X_{1ij}^{S}\right),h_{\theta}\left(g_{\phi}\left(X_{1ij}^{S}\right)\right)\right]$, and the cost metric $d$ is defined as \begin{multline} d\left(U_{i}^{S},U_{\bar{i}j}^{S}\right):=\mathbb{I}_{Y_{1i}^{S}=Y_{1\bar{i}}^{S}}\Biggl[\rho_{d}\left(g_{\phi}\left(X_{1i}^{S}\right),g_{\phi}\left(X_{1\bar{i}j}^{S}\right)\right)+\gamma\rho_{l}\left(h_{\theta}\left(g_{\phi}\left(X_{1i}^{S}\right)\right),h_{\theta}\left(g_{\phi}\left(X_{1\bar{i}j}^{S}\right)\right)\right)\Biggr],\label{eq:cost_AML} \end{multline} where $\mathbb{I}$ is the indicator function. Note that $X_{1\bar{i}j}^{S}$ is an adversarial example of $X_{1\bar{i}}^{S}$, whose ground-truth label is $Y_{1\bar{i}}^{S}$; hence, by using the cost metric in equation (\ref{eq:cost_AML}), we encourage the adversarial example $X_{1\bar{i}j}^{S}$ to move towards the group of benign examples sharing the same label.
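The global terms in equations (\ref{eq:global_DA_SSL}) and (\ref{eq:global_AML}) are OT distances between empirical measures over a mini-batch. A minimal numpy sketch of an entropic-regularized Sinkhorn estimate with uniform marginals and a squared-Euclidean ground cost standing in for $d$; the toy feature dimensions, regularization strength, and iteration count are illustrative assumptions, not our actual configuration:

```python
import numpy as np

def sinkhorn(C, lam=10.0, iters=200):
    """Entropic-regularized OT between two uniform empirical measures with
    cost matrix C; returns the transport cost <P, C> of the resulting plan."""
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-lam * C)               # Gibbs kernel
    u = np.ones(n)
    for _ in range(iters):             # alternate marginal-matching scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]    # transport plan
    return float(np.sum(P * C))

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, size=(8, 3))     # toy source-batch features
tgt = rng.normal(0.5, 1.0, size=(8, 3))     # toy target-batch features
C = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
C = C / C.max()                             # normalize for numerical stability
w = sinkhorn(C)
```

Minimizing such a term with respect to the features pulls the two empirical measures together, which is how the global regularizer encourages adversarial (or target) representations to align with benign (or source) ones.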
Finally, to tackle the WS-related terms in equations (\ref{eq:global_DA_SSL}), (\ref{eq:global_DG}), and (\ref{eq:global_AML}), we employ the entropic regularization dual form of WS, which has been shown to have favorable computational complexity~\cite{Lin-2019-Efficient, Lin-2019-Efficiency, Lin-2020-Revisiting}. \section{Introduction} \input{introduction.tex} \section{Related Work} \input{related_work.tex} \section{Proposed Approach} \input{framework.tex} \section{Experiments} \input{experiments.tex} \section{Conclusion} \input{conclusion.tex} \clearpage
\section{Advertisement} GZKFast is a capable event generator which produces results comparable to those of applications that incorporate far more substantial particle physics kinematics and accelerator data. \vskip 6pt GZKFast provides... \begin{itemize} \item A full featured simulation with rich run time configuration. \item Monte Carlo event generation. \item Management of individual astrophysical sources. \item Individual event simulation and tracking including relevant particle kinematics. \item Modular C++ design. \item Histogram and extensive tabular output. \item Practical standalone approach designed for ease of use, portability, and redistribution. \item Reusable program library for easy incorporation into new projects. \item GNU Library General Public License. \end{itemize} \section{Introduction} This year the ``GZK Effect'' of Greisen \cite{Greisen} and Zatsepin and Kuz'min \cite{Zatesepin} is celebrating its fortieth anniversary. These authors independently realized that a uniform cosmic microwave background would render the universe optically thick to the highest energy cosmic rays. Cosmic ray protons of sufficiently large energy would be likely to interact with CMB photons over suitably large distances. Accelerator data indicate that the important channels for such interactions are direct photopion production and the resonance channels for the $\Delta$ \cite{Mucke}. \begin{align} \label{pprxn} p + \gamma \rightarrow n + \pi^+ \notag\\ p + \gamma \rightarrow \Delta^+ \rightarrow n + \pi^+\\ \ldots \rightarrow p + \pi^0\notag \end{align} The stable daughters of these processes may be observable in Earth based detectors. Therefore we wish to predict the expected fluxes of these particles. Four momentum invariance of these reactions implies \cite{Stecker}: $$ s = M_p^2 + 2 M_p E_\gamma $$ where $M_p$ is the proton mass and $E_\gamma$ is the energy of the photon in the proton rest frame.
Therefore the measured cross section for photo-hadron interactions, $\sigma (s)$, determines the likelihood of proton attenuation. The expected attenuation can be characterized by a mean free path, $\lambda$. $$ \lambda = {1\over n\sigma}$$ Here $n$ is the average density of CMB photons, $n_{avg}$. The average density may be found by assuming the CMB is characterized by a black body spectrum and integrating the Bose distribution. Including redshift, the average density of the CMB is given by (\ref{navg}). \begin{equation} \label{navg} n_{avg} = (1+z)^3 {2\zeta(3)k_b^3 T^3\over\hbar^3 c^3 \pi^2} \end{equation} Here $\zeta(x)$ is the Riemann zeta function and $T$ is the temperature of the microwave background today. Using accepted values, at present this quantity is: $$ n_{avg} = 410.5\, \mathrm{cm}^{-3} $$ The cross section for (\ref{pprxn}) can be determined from the Breit-Wigner formula using the measured data for the expected resonances. The relativistic form can be found in most introductory particle physics texts. \begin{equation} \label{bw} \sigma(s) = \sigma_{max} {{M_0}^2\Gamma^2\over{(s-{M_0}^2)^2 + \Gamma^2{M_0}^2}} \end{equation} The peak of the resonance is scaled by $\sigma_{max}$, which is measured experimentally. $M_0$ is the mass of the resonance, and $\Gamma$ is the width. The energy attenuation length for a cosmic ray proton of a given energy is the inverse fractional energy loss per interaction times the mean free path. $$ L_0 = \left( {E\over \Delta E} \right) \lambda $$ For photopion production the fractional energy loss can be deduced from kinematics, and averages roughly ${\Delta E/ E} = 15 \%$ in the relevant energy regime.
We can estimate the mean attenuation length of a $ 10^{20} \, eV $ cosmic ray proton, taking an average value of $\sigma$, $500\,\mu b$: $$ L_0 = \left( {E\over \Delta E} \right) (n \sigma)^{-1} \approx 3\times10^{25} \, cm \approx 10\, Mpc$$ A cosmic ray traveling a distance $L$ survives with probability $$ P_{survival} (L) = \exp{(-L/L_0)}, $$ and interacts with probability $1 - \exp(-L/L_0)$. Therefore, the flux of ultra high energy cosmic rays should be significantly attenuated on cosmological scales. The product of these photohadron interactions is a flux of electrons, photons, protons, neutrons and neutrinos. However, only the neutrinos travel without attenuation over these cosmological distances. These \em cosmogenic neutrinos \rm could be detected by Earth based experiments. It is a topical goal of the astrophysics community to measure and characterize the flux of ultra high energy neutrinos. Still, no contemporary experiment is capable of measuring the cosmogenic neutrino flux, so suitable Monte Carlo event generators must be available to support ongoing research. Here we present the C++ code named {\tt gzkfast} as one possible avenue for simulation of this process. \section{Program Operation} GZKFast comprises two principal components, a reusable library and a command line event generator. The reusable program library, \tt libgzkparticle, \rm is written in C++ and designed to be modular and portable across platforms. The simple invocation program, \tt gzkfast, \rm gives command line access to the key functionality provided in the GZK library. When a user invokes the \tt gzkfast \rm program, a universe is created with a simplified particle physics model. Cosmic ray point sources are inserted at random locations and managed by an \tt EventGenerator \rm thread. The event generator thread iterates through the point sources and injects subsequent ultra high energy particles into an \tt EvolutionThread \rm with momentum oriented towards the Earth.
Cosmic ray events are managed in particle queues of the evolution thread modules using a ``round robin'' strategy. As a result, execution time is divided roughly equally among the available threads. Each evolution thread maintains a list of relevant \tt Space \rm objects, including the \tt CMB \rm and extragalactic space, or \tt BFieldSpace. \rm The evolution threads also maintain a list of suitable Earth based \tt Detector \rm objects. Particle evolution continues as particles propagate through each of the known spaces and are given a chance to be detected by known detectors. Basic particle kinematics are used for propagation. In the extragalactic $\mathbf{B}$ field, assumed to be static and uniform over the distance $dx$, the kinematics of particle propagation are governed by a few simple relations. $${d\mathbf{p}\over dt} = {e\over c}\, \mathbf{v}\times \mathbf{B}$$ $$ \mathbf{p} \rightarrow \mathbf{p} + {d\mathbf{p}\over dt}\,\delta t $$ $$ \mathbf{x} \rightarrow \mathbf{x} + \mathbf{v}\,\delta t$$ $$ t \rightarrow t + \delta t$$ The time step, $\delta t$, is defined based on the user specified distance step, $dx$. Particle interactions with CMB photons are evaluated with an accept-reject strategy. Then, after propagation, each particle is allowed to decay with probability $P_{decay}$. $$ P_{decay}(t) = 1 - \exp{(-t/\tau)} $$ Decay products are reinserted into the evolution thread particle queue. If no decay or particle production takes place during propagation, each known detector is tested to see if it can see the particle. If the particle ``hits'' a detector, it becomes an ``event.'' Each event is logged to a relevant output location. Particle evolution continues until the user specified number of events has been detected. Once the specified number of events has been recorded, event generation is halted. Program execution then terminates once all particle queues are empty.
\section{Basic Results and Spectra} Each run of \tt gzkfast \rm is configurable to allow the user to explore different aspects of the cosmogenic neutrino phenomenon. With suitable configuration it is easy to produce a wide variety of relevant data. GZKFast uses integrated data provided by the Particle Data Group \cite{RPP} to sample the proton-photon cross section versus $\sqrt{s}$. Accelerator data provided in the PDG dataset are linearly interpolated from point to point to produce a complete distribution as depicted in Figure \ref{deltares}. Equation (\ref{bw}) is used to determine the cross sections for the $\Delta_{1232}$ and $\Delta_{1600}$. These data are tabulated in the file specified by the ``cmb'' basename, with ``\_sigmaplot.dat'' appended. \makefigure { \centering \includegraphics{delta.eps} \caption{The $p\gamma$ cross section measured in accelerator experiments (Black) \cite{RPP}. The 1232 MeV $\Delta$ resonance as determined from the Breit-Wigner formula (Red). The difference is taken to be the cross section for direct pion photoproduction.} \label{deltares} } The cosmic microwave background is uniformly sampled as a black body distribution at 2.725~K \cite{Bennett}. These data are output in the file specified by the ``cmb'' basename, with ``\_photonhist.dat'' appended. \makefigure { \centering \includegraphics{cmb_photonhist.eps} \caption{The number of photons at a given energy for a 5000 $\nu$ event simulation.} } At program termination the \tt gzkfast \rm program produces a variety of output spectra for the various processes that it tracked. Most importantly, GZKFast is capable of producing neutrino spectra resulting from super-GZK cosmic rays. These data are output in the file specified by the ``neutrino'' basename, with ``\_hist.dat'' or ``\_event.dat'' appended depending on whether the data set is a histogram or event file.
\makefigure { \centering \includegraphics{neutrino.eps} \caption{The neutrino histogram gives the energy distribution of an expected neutrino flux.} } GZKFast also provides individual event data for protons and neutrons, neutrinos and photons. These event data can be utilized in detector simulation, or, since they also include right ascension and declination information, to produce sky maps with variable sources, distances, and other configuration differences. \end{multicols} GZKFast source code is available to the public upon request. \section{Acknowledgment} I acknowledge John~Beacom, Matthew~D.~Kistler and Michael~S.~Sutherland for discussions involving this work. \section{GZKFast Reference} \subsection{Class Listing} \begin{longtable}{lp{4.75in}} \tt CMB \rm & The Cosmic Microwave Background Object samples the cross section of $p\gamma$ and the distribution of a user specified black body spectrum to determine if an interaction with a cosmic ray will occur.\\ \tt CMBDist \rm & Samples the distribution of black body radiation at a given temperature.\\ \tt Delta \rm & Program representation of a $\Delta$ (1232 MeV) particle. Provides decay kinematics and particle properties.\\ \tt Delta1600 \rm & Program representation of a $\Delta$ (1600 MeV) particle. Provides decay kinematics and particle properties.\\ \tt Detector \rm & Abstract virtual class interface for GZKFast spherical detectors. A detector is a volume of space which can be \em hit \rm by a simulated particle.\\ \tt Electron \rm & Program representation of an $e^-$ particle. Provides decay kinematics and particle properties.\\ \tt EventGenerator \rm & Threaded object which iterates through cosmic ray sources producing particle events from a predetermined energy spectrum and inserting them into the particle queues.\\ \tt EvolutionThread \rm & Threaded object which iterates through a specific particle event queue and propagates the particles through one of an arbitrary number of spaces.
After the particle propagates, the detectors are given a chance to detect it.\\ \tt G4Vector \rm & A 4 vector representation.\\ \tt GGuard \rm & Guard a critical section.\\ \tt GHistogram \rm & A linear scale histogram class. \\ \tt GLogHistogram \rm & A log scale histogram class. \\ \tt GMath \rm & Provides rudimentary numerical operations and random number generation. \\ \tt GMatrix \rm & An object representation of an $n\times n$ matrix.\\ \tt GMutex \rm & An object to provide mutual exclusion across platforms.\\ \tt GRunThread \rm & A C++ implementation of a Java-like ``java.lang.Thread'' object.\\ \tt GThread \rm & An object to provide rudimentary threading support across platforms.\\ \tt GVector \rm & A vector of arbitrary size, based on \tt std::valarray.\rm\\ \tt GVegas \rm & The Vegas Numerical Recipe.\\ \tt BFieldSpace \rm & Propagate a particle through a ``static'' magnetic field using Hamiltonian formalism. \\ \tt Mu \rm & Program representation of a $\mu$ particle. Provides decay kinematics and particle properties.\\ \tt Neutron \rm & Program representation of a neutron. Provides decay kinematics and particle properties.\\ \tt Nu \rm & Program representation of a neutrino. Provides decay kinematics and particle properties.\\ \tt NuDetector \rm & An instance of \tt Detector \rm which is capable of detecting neutrinos.\\ \tt Particle \rm & An abstract virtual class interface for particle objects. \\ \tt Photon \rm & Program representation of a photon. Provides decay kinematics and particle properties.\\ \tt PhotonDetector \rm & An instance of \tt Detector \rm which is capable of detecting photons.\\ \tt Pion \rm & Program representation of a $\pi$. Provides decay kinematics and particle properties.\\ \tt PiZero \rm & Program representation of a $\pi^0$. Provides decay kinematics and particle properties.\\ \tt Proton \rm & Program representation of a $p^+$.
Provides decay kinematics and particle properties.\\ \tt ProtonDetector \rm & An instance of \tt Detector \rm which is capable of detecting protons.\\ \tt ProtonSource \rm & A point source for cosmic ray protons.\\ \tt ProtonSpectrum \rm & An object to sample a power law energy spectrum of cosmic ray protons. \\ \tt Source \rm & An abstract virtual class interface for cosmic ray sources.\\ \tt Space \rm & An abstract virtual class interface for spaces. A space allows a particle to propagate.\\ \tt Sphere \rm & A geometric representation of a sphere. Provides ray intersection for detector operation.\\ \tt ThreeBodyDecay \rm & A generalized Monte Carlo of the reaction $A \rightarrow B + C + D$.\\ \tt TwoBodyDecay \rm & A solution for two body decay in the ultra high energy limit. Decay products are collinear with parent particles.\\ \tt Universe \rm & Provides a frame for calculating redshift and volume integrals.\\ \end{longtable} \subsection{Command Line Reference} \begin{longtable}{|l|l|l|p{2in}|}\hline \bf Parameter & \bf Units & \bf Default & \bf Description \\\hline\rm -sources & Number & 3 & The number of cosmic ray sources to add to the simulation. Sources are added randomly to a shell of size specified by -near and -far.\\\hline -events & Number & 100 & The number of neutrino events to simulate before stopping. \\\hline -threads & Number & 2 & The number of asynchronous execution paths simultaneously processing input events. \\\hline -alpha & Number & -2.7 & Simulate a proton input spectrum of the form $E^\alpha$. 
\\\hline -low & EeV & 50 & The starting energy for the input proton distribution.\\\hline -hi & EeV & 50000 & The ending energy for the input proton distribution.\\\hline -near & Mpc & 150 & The distance to the nearest cosmic ray source.\\\hline -far & Mpc & 200 & The distance to the furthest cosmic ray source.\\\hline -dx & Kpc & 250 & The distance corresponding to one iteration or step of the Monte Carlo integration.\\\hline -rad & Kpc & 250 & The radius of the ``detector'' volume. The ``detector'' is defined to be a volume of given radius centered about the earth. Any particle which will intersect the ``detector'' is considered to be an ``event.'' \\\hline -bfield & Gauss & 1e-9 & The maximum magnitude of the uniform extragalactic magnetic field.\\\hline -quality & Number & 5e-9 & The precision of the beta decay Monte Carlo.\\\hline -proton & File Name & ``proton'' & The base name of the files for writing proton events and histogram output.\\\hline -2nd & File Name & ``secondaryproton'' & The base name of the files for writing secondary proton events and histogram output.\\\hline -v & File Name & ``neutrino'' & The base name of the files for writing neutrino events and histogram output.\\\hline -cmb & File Name & ``cmb'' & The base name of the files for writing the cmb photon energy histogram and cross section sample data.\\\hline -photon & File Name & ``photon'' & The base name of the files for writing photon events and histogram output.\\\hline \end{longtable} \subsection{Program Output} \begin{verbatim} $ gzkfast: -[ arguments ] ... Simulate a flux of ultra high energy neutrinos from cosmic ray sources. -sources # - The number of sources. -events # - The number of events to simulate. -threads # - The number of processor threads. -alpha # - Simulate E^alpha spectrum. -low # - Least energy [EeV]. -hi # - Highest energy [EeV]. -near # - Least source distance [Mpc]. -far # - Highest source distance [Mpc]. -dx # - distance step [Kpc]. 
-rad # - detector radius [Kpc]. -bfield # - B field strength [Gauss]. -quality # - The precision of Monte Carlo convergence. -proton file - Name of file for input protons. -2nd file - Name of file for secondary protons. -v file - Name of file for neutrinos -cmb file - Name of file for cmb distributions -gamma file - Name of file for photons. \end{verbatim} \subsection{File Formats} \begin{table}[h] \begin{tabular}{cc} \rm Energy [eV] & \rm E dN/dE [$cm^{-2} s^{-1} sr^{-1}$] \rm\\ 4.235324597282258e+16& 6.205560549060830e-05\\ 4.829213438776459e+16& 5.812199626341997e-05\\ 5.506379003919574e+16& 1.066115138632034e-04\\ \end{tabular} \caption {Example neutrino\_hist.dat output.} \end{table} \begin{table}[h] \begin{tabular}{cccccc} \rm RA [deg] & \rm Dec [deg] & \rm E [eV] & \rm $p_x$ [eV] & \rm $p_y$ [eV] & \rm $p_z$ [eV]\rm\\ 73.8507& -31.1954& 4.2491+16&2.6999+16& -7.5372+15&-3.1932+16\\ 73.8507& -31.1954& 8.9698+18&5.6996+18& -1.5911+18&-6.7409+18\\ 82.8030& -90.6093& 1.0938+16&-3.6303+15& 3.6691+15&-9.6441+15\\ 82.8030& -90.6093& 1.4671+19&-4.8691+18& 4.9212+18&-1.2934+19\\ \end{tabular} \caption {Example neutrino\_event.dat output.} \end{table}
\section{Introduction} \textit{Stochastic differential equations} (SDE) with reflecting boundary conditions, also called \textit{reflected stochastic differential equations} (RSDE), arise in the modeling of various kinds of constrained phenomena. Elliptic and parabolic PDEs with Neumann-type and mixed boundary conditions admit probabilistic interpretations, via the Feynman--Kac formula, in terms of reflected diffusion processes, which are solutions of RSDEs. Equations of this type were studied for the first time by Skorohod in \cite{S} and, after this, considered in general domains (see \cite{LS}, \cite{M}, \cite{S1}, \cite{S2}...). Since the 1990s many researchers have focused their attention on numerical schemes, methods and algorithms for the study of the behavior of the solution of RSDEs. In recent years, new techniques based on splitting-step algorithms and mixed penalization methods have appeared. The Euler approximation was considered for the first time by Chitashvili and Lazrieva in \cite{CL}, followed by the Euler-Peano approximation, which was introduced by Saisho in \cite{S1}. Lepingle in \cite{L} and Slominski in \cite{S2} analyzed the corresponding numerical schemes and their rates of convergence. In order to approximate the solution of RSDEs, the penalization method has also proved very useful (see Menaldi \cite{M}). Approximation methods for a diffusion reflected and stopped at the boundary appeared in the literature in 1998, in the paper of Constantini, Pacchiarotti and Sartoretto \cite{CPS}. They defined a standard projected Euler approach to stopped reflected diffusions, which yields a method of weak order of convergence $1/2$, and they gave a simple example showing that this convergence rate is sharp. Regarding adaptive approximations of one-dimensional reflected Brownian motion, a simple method with two fixed step sizes, chosen according to the distance to the boundary, can be used.
In the paper \cite{AR}, Asiminoaei and R\u{a}\c{s}canu used a mixed method consisting of penalization and splitting-up for the study of multivalued SDE with reflection at the boundary of the domain. The penalization method was also used by R\u{a}\c{s}canu in \cite{R} for the study of the generalized Skorohod problem and of its link to multivalued SDE governed by a general maximal monotone operator (of subdifferential type). Recently, in \cite{DZ}, Ding and Zhang combined the penalization technique with the splitting-step idea to propose some new schemes for the RSDE in the upper half space. In 1990, Pardoux and Peng introduced in \cite{PP} the notion of nonlinear \textit{backward stochastic differential equation} (for short, BSDE), and they obtained an existence and uniqueness result for this kind of equation. Since then, interest in BSDEs has kept growing and a large body of work has appeared on the subject, both in the direction of generalizing the equations and of constructing approximation schemes for them. The \textit{backward stochastic variational inequalities} (for short, BSVI) were analyzed by Pardoux and R\u{a}\c{s}canu in \cite{PR} and \cite{PR2} (the extension to the Hilbert space case) by a method consisting of a penalizing scheme, followed by a proof of its convergence. Starting with the paper of Pardoux and Peng \cite{PP2}, a stochastic approach has been given to the existence problem for many types of deterministic partial differential equations (PDE for short). In \cite{PR}, the existence of a viscosity solution for a multivalued PDE (with a subdifferential operator) of parabolic and elliptic type is proved using a probabilistic interpretation. More recently, Maticiuc and R\u{a}\c{s}canu in \cite{MR1} proved an extended result concerning a generalized type of BSDE (including an integral with respect to an adapted continuous increasing function and two subdifferential operators).
This type of BSVI allows one to prove a Feynman--Kac type formula for the representation of the solution of PVI with mixed nonlinear multivalued Neumann--Dirichlet boundary conditions. Although this penalization approach is very useful when we deal with multivalued backward stochastic dynamical systems governed by a subdifferential operator, it fails in the case of a general maximal monotone operator. This motivated a new approach, via convex analysis, for the study of both forward and backward multivalued differential systems. In \cite{RR}, R\u{a}\c{s}canu and Rotenstein identified the solutions of this type of equations with the minimum points of some proper, convex, lower semicontinuous functions, defined on well-chosen Banach spaces. Euler-type approximation schemes for BSDE, and for BSDE with exit time for the forward part of the system, were introduced by Bouchard and Touzi in \cite{BT} and Bouchard and Menozzi in \cite{BM}. They considered the Markovian framework of a coupled forward-backward stochastic differential system and they defined an adapted backward Euler scheme for the strong approximation of the backward SDE with finite stopping time horizon, namely the first exit time of the forward SDE from a cylindrical domain. In \cite{BC}, Bouchard and Chassagneux study the discrete-time approximation of the solution of a BSDE with a reflecting barrier. The paper is organized as follows. Section 2 presents some basic notations, hypotheses and results that are used throughout this paper. Section 3 is dedicated to the analysis of the behavior of an approximation scheme defined for a backward stochastic variational inequality. In Section 4 we present an existence and uniqueness result for a generalized BSVI and we propose a mixed Euler type approximation scheme for its solution. \section{Notations. Hypothesis.
Preliminaries} In all that follows we shall consider a finite horizon $T>0$ and a complete probability space $\left( \Omega,\mathcal{F},\mathbb{P}\right) $ on which is defined a standard $d$-dimensional Brownian motion $W=\left( W_{t}\right) _{t\leq T}$, whose natural filtration is denoted $\mathbb{F}=\{\mathcal{F}_{t},\ 0\leq t\leq T\}.$ More precisely, $\mathbb{F}$ is the filtration generated by the process $W$ and augmented by $\mathcal{N}_{\mathbb{P}}$, the set of all $\mathbb{P}$-null sets, i.e. $\mathcal{F}_{t}=\sigma\{W_{s},\ s\leq t\}\vee\mathcal{N}_{\mathbb{P}}$.$\smallskip$ We denote by $L_{ad}^{r}(\Omega;C(\left[ 0,T\right] ;\mathbb{R}^{k})),$ $r\in\lbrack1,\infty),$ the closed linear subspace of adapted stochastic processes $f\in L^{r}(\Omega,\mathcal{F},\mathbb{P};C(\left[ 0,T\right] ;\mathbb{R}^{k}))$, i.e. $f(\cdot,t):\Omega\rightarrow\mathbb{R}^{k}$ is $\mathcal{F}_{t}$-measurable for all $t\in\left[ 0,T\right] $ and $\mathbb{E}\left( \sup_{t\in\left[ 0,T\right] }\left\vert f\left( t\right) \right\vert ^{r}\right) <\infty$.
Also, we shall denote by $L_{ad}^{r}(\Omega;L^{q}(\left[ 0,T\right] ;\mathbb{R}^{k})),$ $r,q\in\lbrack1,\infty),$ the Banach space of $\mathcal{F}_{t}$-measurable stochastic processes $f:\Omega\times\left[ 0,T\right] \rightarrow \mathbb{R}^{k}$ such that $\mathbb{E}\left( \int_{0}^{T}\left\vert f\left( t\right) \right\vert ^{q}dt\right) ^{r/q}<\infty.$\medskip \noindent Consider the following data:\vspace{-0.06in} \begin{itemize} \item the continuous coefficient functions $b:\left[ 0,T\right] \times\mathbb{R}^{m}\rightarrow\mathbb{R}^{m},$ $\sigma:\left[ 0,T\right] \times\mathbb{R}^{m}\rightarrow\mathbb{R}^{m\times d},$ $g:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}$ and $F:\left[ 0,T\right] \times\mathbb{R}^{m}\times\mathbb{R}^{n}\times\mathbb{R}^{n\times d}\rightarrow\mathbb{R}^{n},$ which satisfy the following standard assumptions: for some constants $\alpha\in\mathbb{R},$ $L,$ $\beta,$ $\gamma\geq0$ and for all $t\in\left[ 0,T\right] ,$ $x,$ $\tilde{x}\in\mathbb{R}^{m},$ $y,$ $\tilde{y}\in\mathbb{R}^{n}$ and $z,$ $\tilde{z}\in\mathbb{R}^{n\times d}$: \begin{equation} \begin{array}[c]{cl} \left( i\right) & \left\vert b\left( t,x\right) -b\left( t,\tilde{x}\right) \right\vert +\left\Vert \sigma\left( t,x\right) -\sigma\left( t,\tilde{x}\right) \right\Vert \leq L\left\vert x-\tilde{x}\right\vert ,\medskip\\ \left( ii\right) & \left\langle y-\tilde{y},F(t,x,y,z)-F(t,x,\tilde{y},z)\right\rangle \leq\alpha\left\vert y-\tilde{y}\right\vert ^{2},\medskip\\ \left( iii\right) & \left\vert F(t,x,y,z)-F(t,x,y,\tilde{z})\right\vert \leq\beta\left\Vert z-\tilde{z}\right\Vert , \end{array} \label{coeff assumpt} \end{equation} and there exist some constants $M>0$ and $p,$ $q\in\mathbb{N}$ such that, for all $t\in\left[ 0,T\right] $, $x\in\mathbb{R}^{m}$ and $y\in\mathbb{R}^{n}$: \begin{equation} \left\vert g(x)\right\vert \leq M(1+\left\vert x\right\vert ^{q})\quad\text{and}\quad\left\vert F(t,x,y,0)\right\vert \leq M(1+\left\vert x\right\vert ^{p}+\left\vert y\right\vert ).
\label{sublinear assumpt} \end{equation} \item the function $\varphi:\mathbb{R}^{n}\rightarrow(-\infty,+\infty]$ which is a proper convex lower semicontinuous function and satisfies that there exist $M>0$ and $r\in\mathbb{N}$ such that, for all $x\in\mathbb{R}^{m}$: \begin{equation} \left\vert \varphi(g(x))\right\vert \leq M(1+\left\vert x\right\vert ^{r}). \label{fi assumpt} \end{equation} \end{itemize} The following theorem summarizes some already well known results concerning forward and backward SDE, considered in the Markovian framework (for the proof see Karatzas \& Shreve \cite{KS} for the forward case, and Pardoux \& R\u{a}\c{s}canu \cite{PR} for the backward system). \begin{theorem} Let $\left( t,x\right) \in\left[ 0,T\right] \times\mathbb{R}^{m}$ be fixed. Under the assumptions (\ref{coeff assumpt}), (\ref{sublinear assumpt}) and (\ref{fi assumpt}), the forward-backward coupled system \begin{equation} \left\{ \begin{array}[c]{l} dX_{s}^{t,x}=b(s,X_{s}^{t,x})ds+\sigma(s,X_{s}^{t,x})dW_{s}~,\text{ }s\in\left[ 0,T\right] ,\medskip\\ dY_{s}^{t,x}+F(s,X_{s}^{t,x},Y_{s}^{t,x},Z_{s}^{t,x})ds\in\partial\varphi(Y_{s}^{t,x})ds+Z_{s}^{t,x}dW_{s}~,\text{ }s\in\left[ 0,T\right] ,\medskip\\ X_{t}^{t,x}=x,\text{ }Y_{T}^{t,x}=g(X_{T}^{t,x}), \end{array} \right. \label{forward_backward_system} \end{equation} has a unique solution, i.e.
there exists a unique process $X^{t,x}\in L_{ad}^{2}(\Omega;C(\left[ 0,T\right] ;\mathbb{R}^{m}))$ such that \begin{equation} X_{s}^{t,x}=x+\int_{t}^{t\vee s}b(r,X_{r}^{t,x})dr+\int_{t}^{t\vee s}\sigma(r,X_{r}^{t,x})dW_{r}~,\text{ }s\in\left[ 0,T\right] , \label{integral_forward_system} \end{equation} and respectively \[ (Y^{t,x},Z^{t,x},U^{t,x})\in L_{ad}^{2}(\Omega;C(\left[ 0,T\right] ;\mathbb{R}^{n}))\times L_{ad}^{2}(\Omega;L^{2}(\left[ 0,T\right] ;\mathbb{R}^{n\times d}))\times L_{ad}^{2}(\Omega;L^{2}(\left[ 0,T\right] ;\mathbb{R}^{n})), \] such that \begin{equation} \left\{ \begin{array}[c]{l} \displaystyle Y_{s}^{t,x}+\int_{s}^{T}U_{r}^{t,x}dr=g(X_{T}^{t,x})+\int_{s}^{T}\mathbf{1}_{\left[ t,T\right] }(r)F(r,X_{r}^{t,x},Y_{r}^{t,x},Z_{r}^{t,x})dr-\int_{s}^{T}Z_{r}^{t,x}dW_{r}~,\;s\in\left[ 0,T\right] ,\medskip\\ U_{s}^{t,x}\in\partial\varphi\left( Y_{s}^{t,x}\right) ,\;d\mathbb{P}\times ds\;\text{on }\Omega\times\left[ 0,T\right] . \end{array} \right. \label{integral_backward_system} \end{equation} Moreover, for all $p\geq2$, there exist some constants $C_{p}>0$ and $q\in\mathbb{N}^{\ast}$ such that, for all $t,$ $\tilde{t}\in\left[ 0,T\right] ,$ $x,$ $\tilde{x}\in\mathbb{R}^{m}$: \[ \begin{array}[c]{ll} \left( j\right) & \mathbb{E}\big(\sup\nolimits_{s\in\left[ 0,T\right] }\big|X_{s}^{t,x}\big|^{p}\big)\leq C_{p}(1+\left\vert x\right\vert ^{p}),\medskip\\ \left( jj\right) & \mathbb{E}\big(\sup\nolimits_{s\in\left[ 0,T\right] }\big|X_{s}^{t,x}-X_{s}^{\tilde{t},\tilde{x}}\big|^{p}\big)\leq C_{p}(1+\left\vert x\right\vert ^{p}+\left\vert \tilde{x}\right\vert ^{pq})(|t-\tilde{t}|^{p/2}+|x-\tilde{x}|^{p}),\medskip\\ \left( jjj\right) & \mathbb{E}\big(\sup\nolimits_{s\in\left[ 0,T\right] }\big|Y_{s}^{t,x}\big|^{2}\big)\leq C(1+\left\vert x\right\vert ^{2}),\medskip\\ \left( jv\right) & \mathbb{E}\big(\sup\nolimits_{s\in\left[ 0,T\right] }\big|Y_{s}^{t,x}-Y_{s}^{\tilde{t},\tilde{x}}\big|^{2}\big)\leq C_{2}\left[
\mathbb{E}\big|g(X_{T}^{t,x})-g(X_{T}^{\tilde{t},\tilde{x}})\big|^{2}\right. +\smallskip\\ & \quad\quad\quad\quad\left. +\mathbb{E}{\displaystyle\int_{0}^{T}} \big|\mathbf{1}_{\left[ t,T\right] }(r)F(r,X_{r}^{t,x},Y_{r}^{t,x},Z_{r}^{t,x})-\mathbf{1}_{[\tilde{t},T]}(r)F(r,X_{r}^{\tilde{t},\tilde{x}},Y_{r}^{t,x},Z_{r}^{t,x})\big|^{2}dr\right] . \end{array} \] \end{theorem} \section{Approximation schemes for BSVI} We will consider a partition of $\left[ 0,T\right] $ \[ \pi=\{t_{i}=ih:0\leq i\leq n\}\text{,}\;\text{with }h:=T/n,\;n\in \mathbb{N}^{\ast}, \] on which we approximate the solution of the backward stochastic variational inequality (\ref{integral_backward_system}). For the numerical simulation of the forward part, the most standard approach consists in approximating the SDE on each interval $\left[ t_{i},t_{i+1}\right] $ by the classical Euler scheme (see, e.g., Kloeden \& Platen \cite{KP}) \[ \left\{ \begin{array}[c]{l} X_{t_{i+1}}^{h}=X_{t_{i}}^{h}+b\left( X_{t_{i}}^{h}\right) ~h+\sigma\left( X_{t_{i}}^{h}\right) \left( W_{t_{i+1}}-W_{t_{i}}\right) ,\;i=\overline{0,n-1},\medskip\\ X_{0}^{h}=X_{0}. \end{array} \right. \] We remark that the above numerical scheme is easy to implement since it requires only the simulation of $d$ independent Gaussian variables for the Brownian increments, providing a weak error of order $h$. For $t\in\left[ t_{i},t_{i+1}\right] $ let \[ X_{t}^{h}=X_{t_{i}}^{h}+b(X_{t_{i}}^{h})\left( t-t_{i}\right) +\sigma(X_{t_{i}}^{h})\left( W_{t}-W_{t_{i}}\right) . \] We have the following estimate of the error of the Euler scheme (see \cite{KP}).
\begin{proposition}
\label{Euler for X}Under the assumptions (\ref{coeff assumpt}) on the coefficients $b$ and $\sigma$, for all $p\geq1$ there exists $C_{p}>0$ such that
\[
\max_{i=\overline{0,n-1}}\mathbb{E}\bigg(\sup\limits_{t\in\left[ 0,T\right] }\big|X_{t}-X_{t}^{h}\big|^{p}+\sup\limits_{t\in\left[ t_{i},t_{i+1}\right] }\big|X_{t}-X_{t_{i}}\big|^{p}\bigg)^{1/p}\leq C_{p}~\sqrt{h}.
\]
\end{proposition}

Here and subsequently we will consider the one-dimensional BSDE case. Using the Yosida approximation $\nabla\varphi_{\varepsilon}$ of the multivalued operator $\partial\varphi$, with $\varepsilon=h^{a}$ and $a\in\left( 0,1/2\right) $ (the way of choosing this constant will be detailed later), we deduce that the following approximate equation
\begin{equation}
Y_{t}^{h}+\int_{t}^{T}\nabla\varphi_{h^{a}}(Y_{r}^{h})dr=g(X_{T})+\int_{t}^{T}F(r,X_{r},Y_{r}^{h},Z_{r}^{h})dr-\int_{t}^{T}Z_{r}^{h}dW_{r},\;\forall t\in\left[ 0,T\right] ,\;\mathbb{P}\text{-a.s.},
\label{approximation of BSVI}
\end{equation}
admits a unique solution $\left( Y_{t}^{h},Z_{t}^{h}\right) \in L_{ad}^{2}(\Omega;C(\left[ 0,T\right] ;\mathbb{R}))\times L_{ad}^{2}(\Omega;L^{2}(\left[ 0,T\right] ;\mathbb{R}^{d}))$. Further, inspired by the paper of Bouchard and Touzi \cite{BT}, let us define an Euler-type approximation for the Yosida approximating process $Y^{\varepsilon}$.
For an intuitive introduction, let $Y_{T}^{h}:=g(X_{T}^{h})$ be the initial condition and, for $i=\overline{n-1,0}$, remark that
\begin{equation}
Y_{t_{i}}^{h}\sim Y_{t_{i+1}}^{h}+h\left[ F(t_{i},X_{t_{i}}^{h},Y_{t_{i}}^{h},Z_{t_{i}}^{h})-\nabla\varphi_{h^{a}}(Y_{t_{i}}^{h})\right] -Z_{t_{i}}^{h}(W_{t_{i+1}}-W_{t_{i}});
\label{intuitive consideration}
\end{equation}
taking the conditional expectation $\mathbb{E}^{i}\left( \cdot\right) :=\mathbb{E}\left( \cdot~|\mathcal{F}_{t_{i}}\right) $, we obtain
\[
Y_{t_{i}}^{h}\sim\mathbb{E}^{i}(Y_{t_{i+1}}^{h})+h\left[ F(t_{i},X_{t_{i}}^{h},Y_{t_{i}}^{h},Z_{t_{i}}^{h})-\nabla\varphi_{h^{a}}(Y_{t_{i}}^{h})\right] .
\]
If we multiply (\ref{intuitive consideration}) by $W_{t_{i+1}}-W_{t_{i}}$, it follows that
\[
Z_{t_{i}}^{h}\sim\dfrac{1}{h}\mathbb{E}^{i}(Y_{t_{i+1}}^{h}(W_{t_{i+1}}-W_{t_{i}})).
\]
Therefore, we propose the following implicit discretization procedure, which defines the pair $\left( \tilde{Y}^{h},\tilde{Z}^{h}\right) $ inductively, for $i=\overline{n-1,0}$:
\begin{equation}
\left\{
\begin{array}[c]{l}
\tilde{Y}_{T}^{h}:=g(X_{T}^{h}),\;\tilde{Z}_{T}^{h}=0,\medskip\\
\tilde{Y}_{t_{i}}^{h}:=\mathbb{E}^{i,h}(\tilde{Y}_{t_{i+1}}^{h})+h\left[ F(t_{i},X_{t_{i}}^{h},\tilde{Y}_{t_{i}}^{h},\tilde{Z}_{t_{i}}^{h})-\nabla\varphi_{h^{a}}(\tilde{Y}_{t_{i}}^{h})\right] ,\medskip\\
\tilde{Z}_{t_{i}}^{h}:=\dfrac{1}{h}\mathbb{E}^{i,h}(\tilde{Y}_{t_{i+1}}^{h}(W_{t_{i+1}}-W_{t_{i}})),\medskip\\
\tilde{U}_{t_{i}}^{h}:=\nabla\varphi_{h^{a}}(\mathbb{E}^{i,h}(\tilde{Y}_{t_{i+1}}^{h})),
\end{array}
\right.
\label{approximation scheme}
\end{equation}
where $\mathbb{E}^{i,h}\left( \cdot\right) :=\mathbb{E}\left( \cdot~|\mathcal{F}_{t_{i}}^{h}\right) $ and $\mathcal{F}_{t_{i}}^{h}:=\sigma(X_{t_{j}}^{h}:0\leq j\leq i).$

\begin{remark}
Observe that $\tilde{Y}_{t_{i}}^{h}$ is defined implicitly, as the solution of a fixed point problem. Since the involved functions are Lipschitz, it is well defined. Moreover, for small values of $h>0$ it can be estimated numerically in an accurate way.
\end{remark}

\begin{remark}
We can also use an explicit scheme, defining
\[
\tilde{Y}_{t_{i}}^{h}:=\mathbb{E}^{i,h}(\tilde{Y}_{t_{i+1}}^{h})+h\mathbb{E}^{i,h}\left[ F(t_{i},X_{t_{i}}^{h},\tilde{Y}_{t_{i+1}}^{h},\tilde{Z}_{t_{i}}^{h})-\nabla\varphi_{h^{a}}(\tilde{Y}_{t_{i+1}}^{h})\right] .
\]
The advantage of this scheme is that it does not require a fixed point procedure; however, from a numerical point of view, adding a term inside the conditional expectation makes it more difficult to estimate. Therefore the implicit scheme can be more tractable in practice.
\end{remark}

\begin{remark}
The filtration $\mathcal{F}_{t}$ generated by the Brownian motion coincides with the filtration generated by the diffusion process $X$, i.e., $\mathcal{F}_{t}=\mathcal{F}_{t}^{X}$, and, from the Markov property of the process $X^{h}$, it follows that
\[
\begin{array}[c]{l}
\mathbb{E}^{i}(\tilde{Y}_{t_{i+1}}^{h})=\mathbb{E}^{i,h}(\tilde{Y}_{t_{i+1}}^{h})=\mathbb{E}(\tilde{Y}_{t_{i+1}}^{h}~|X_{t_{i}}^{h}),\medskip\\
\mathbb{E}^{i}(\tilde{Y}_{t_{i+1}}^{h}(W_{t_{i+1}}-W_{t_{i}}))=\mathbb{E}^{i,h}(\tilde{Y}_{t_{i+1}}^{h}(W_{t_{i+1}}-W_{t_{i}}))=\mathbb{E}(\tilde{Y}_{t_{i+1}}^{h}(W_{t_{i+1}}-W_{t_{i}})~|X_{t_{i}}^{h}).
\end{array}
\]
\end{remark}

Consider now a continuous version of (\ref{approximation scheme}). From the martingale representation theorem there exists a square integrable process $\tilde{Z}^{h}$ such that
\begin{equation}
\tilde{Y}_{t_{i+1}}^{h}=\mathbb{E}^{i}(\tilde{Y}_{t_{i+1}}^{h})+\int_{t_{i}}^{t_{i+1}}\tilde{Z}_{s}^{h}dW_{s},
\label{martingale repres}
\end{equation}
and, therefore, we define, for $t\in(t_{i},t_{i+1}]$,
\begin{equation}
\bar{Y}_{t}^{h}:=\tilde{Y}_{t_{i}}^{h}-\left( t-t_{i}\right) \left[ F(t_{i},X_{t_{i}}^{h},\tilde{Y}_{t_{i}}^{h},\tilde{Z}_{t_{i}}^{h})-\nabla\varphi_{h^{a}}(\tilde{Y}_{t_{i}}^{h})\right] +\int_{t_{i}}^{t}\tilde{Z}_{s}^{h}dW_{s}.
\label{cont version of Y}
\end{equation}
Obviously, we obtain $\bar{Y}_{t_{i}}^{h}=\tilde{Y}_{t_{i}}^{h}$ and, for the simplicity of the notation, we will continue to write $\tilde{Y}_{t}^{h}$ for $\bar{Y}_{t}^{h}$.

\begin{remark}
From (\ref{approximation scheme}), (\ref{martingale repres}) and the isometry property, we notice that, for $i=\overline{0,n-1}$,
\begin{equation}
h~\tilde{Z}_{t_{i}}^{h}=\mathbb{E}^{i}(\tilde{Y}_{t_{i+1}}^{h}(W_{t_{i+1}}-W_{t_{i}}))=\mathbb{E}^{i}\left[ (W_{t_{i+1}}-W_{t_{i}})\int_{t_{i}}^{t_{i+1}}\tilde{Z}_{s}^{h}dW_{s}\right] =\mathbb{E}^{i}\left[ \int_{t_{i}}^{t_{i+1}}\tilde{Z}_{s}^{h}ds\right] .
\label{connection between Zs}
\end{equation}
\end{remark}

To approximate $Z_{t}^{h}$ we use
\[
\bar{Z}_{t}^{h}:=\frac{1}{h}\mathbb{E}^{i}\left[ \int_{t_{i}}^{t_{i+1}}Z_{s}^{h}ds\right] ,\;t\in\lbrack t_{i},t_{i+1}),
\]
rather than $Z_{t_{i}}^{h}$, since $\bar{Z}_{t_{i}}^{h}$ is the best approximation in $L^{2}\left( \Omega\times\left[ 0,T\right] \right) $ of $Z^{h}$ by adapted processes which are constant on each interval $[t_{i},t_{i+1})$ (see Lemma 3.4.2 from Zhang \cite{Z1}):
\[
\mathbb{E}\left[ \int_{t_{i}}^{t_{i+1}}|Z_{s}^{h}-\bar{Z}_{t_{i}}^{h}|^{2}ds\right] \leq\mathbb{E}\left[ \int_{t_{i}}^{t_{i+1}}|Z_{s}^{h}-\eta|^{2}ds\right] ,
\]
for every $\mathcal{F}_{t_{i}}$-measurable random variable $\eta$.

\begin{remark}
From (\ref{connection between Zs}), the definition of $\bar{Z}_{t_{i}}^{h}$ and Jensen's inequality we obtain
\begin{equation}
\begin{array}[c]{l}
\displaystyle\mathbb{E}|\bar{Z}_{t_{i}}^{h}-\tilde{Z}_{t_{i}}^{h}|^{2}=\frac{1}{h^{2}}\mathbb{E}\left[ \mathbb{E}^{i}\int_{t_{i}}^{t_{i+1}}\Delta^{h}Z_{s}ds\right] ^{2}\leq\frac{1}{h}\mathbb{E}\int_{t_{i}}^{t_{i+1}}\left[ \mathbb{E}^{i}\Delta^{h}Z_{s}\right] ^{2}ds\medskip\\
\displaystyle\leq\frac{1}{h}\mathbb{E}\int_{t_{i}}^{t_{i+1}}\mathbb{E}^{i}|\Delta^{h}Z_{s}|^{2}ds=\frac{1}{h}\int_{t_{i}}^{t_{i+1}}\mathbb{E}|\Delta^{h}Z_{s}|^{2}ds.
\end{array}
\label{ineq for approx Z}
\end{equation}
\end{remark}

In order to prove an error estimate for the scheme, we first use the solution $\left( Y_{t}^{h},Z_{t}^{h}\right) _{t\in\left[ 0,T\right] }$ of the approximating equation (\ref{approximation of BSVI}). The next result is a straightforward consequence of Proposition 2.3 from Pardoux \& R\u{a}\c{s}canu \cite{PR}.

\begin{proposition}
Under the assumptions (\ref{coeff assumpt})-(\ref{fi assumpt}), there exists $C>0$ such that
\begin{equation}
\sup_{t\in\left[ 0,T\right] }\mathbb{E}|Y_{t}-Y_{t}^{h}|^{2}+\mathbb{E}\int_{0}^{T}|Z_{t}-Z_{t}^{h}|^{2}dt\leq C\Gamma\left( T\right) \hspace{1pt}h^{a},
\label{lemma 1}
\end{equation}
where $\Gamma\left( T\right) :=\mathbb{E}\left[ 1+\left\vert g(X_{T})\right\vert ^{2}+\left\vert X_{T}\right\vert ^{r}+\int_{0}^{T}F\left( 0,X_{s}^{h},0,0\right) ds\right] .$
\end{proposition}

We recall Theorem 3.4.3 from \cite{Z1}, applied to the solution $\left( Y_{t}^{h},Z_{t}^{h}\right) $ of (\ref{approximation of BSVI}).
To obtain a similar conclusion we have to impose more restrictive assumptions than (\ref{coeff assumpt})-(\ref{fi assumpt}):
\begin{itemize}
\item there exists some constant $K>0$ such that\vspace{-0.1in}
\begin{equation}
\begin{array}[c]{cl}
\left( i\right) & \left\vert b\left( x\right) -b\left( \tilde{x}\right) \right\vert +\left\Vert \sigma\left( x\right) -\sigma\left( \tilde{x}\right) \right\Vert \leq K\left\vert x-\tilde{x}\right\vert ,\;\forall x,\tilde{x}\in\mathbb{R}^{m},\medskip\\
\left( ii\right) & |F(\xi)-F(\tilde{\xi})|\hspace{1pt}\leq K|\xi-\tilde{\xi}|,\;\forall\xi,\tilde{\xi}\in\left[ 0,T\right] \times\mathbb{R}^{m}\times\mathbb{R}\times\mathbb{R}^{d},\medskip\\
\left( iii\right) & \left\vert g\left( y\right) -g(\tilde{y})\right\vert \leq K\left\vert y-\tilde{y}\right\vert ,\;\forall y,\tilde{y}\in\mathbb{R}^{m};
\end{array}
\label{cond coef Lipschitz}
\end{equation}
\vspace{-0.25in}

\item the function $\varphi:\mathbb{R}\rightarrow(-\infty,+\infty]$ is a proper convex lower semicontinuous function and there exist $M>0$ and $r\in\mathbb{N}$ such that\vspace{-0.1in}
\begin{equation}
\left\vert \varphi(g(x))\right\vert \leq M(1+\left\vert x\right\vert ^{r}),\;\forall x\in\mathbb{R}^{m}.
\label{cond fi 2}
\end{equation}
\end{itemize}

\begin{proposition}
\label{Lemma from Zhang}Let the assumptions (\ref{cond coef Lipschitz}) and (\ref{cond fi 2}) be satisfied. Then, for some $C>0$, we have the following estimate:
\[
\max_{i=\overline{0,n-1}}\sup_{t\in\left[ t_{i},t_{i+1}\right] }\mathbb{E}|Y_{t}^{h}-Y_{t_{i+1}}^{h}|^{2}+\sum\limits_{i=0}^{n-1}\mathbb{E}\int_{t_{i}}^{t_{i+1}}|Z_{s}^{h}-\bar{Z}_{t_{i}}^{h}|^{2}ds\leq Ch,
\]
where $\bar{Z}_{t_{i}}^{h}:=\frac{1}{h}\mathbb{E}^{i}\left[ \int_{t_{i}}^{t_{i+1}}Z_{s}^{h}ds\right] .$
\end{proposition}

\begin{proof}
The inequality
\[
\max_{i=\overline{0,n-1}}\sup_{t\in\left[ t_{i},t_{i+1}\right] }\mathbb{E}|Y_{t}^{h}-Y_{t_{i+1}}^{h}|^{2}\leq Ch
\]
can be obtained by a classical computation, using It\^{o}'s formula, the Lipschitz property of the coefficient functions and the bounds on the approximate solution $\left( Y_{t}^{h},Z_{t}^{h}\right) _{t\in\left[ 0,T\right] }$ of (\ref{approximation of BSVI}) (see Propositions 2.1 and 2.2 from \cite{PR}). For the proof of the inequality
\[
\sum\limits_{i=0}^{n-1}\mathbb{E}\int_{t_{i}}^{t_{i+1}}|Z_{s}^{h}-\bar{Z}_{t_{i}}^{h}|^{2}ds\leq Ch
\]
it is sufficient to follow the proof of Theorem 3.4.3 from \cite{Z1}.\hfill
\end{proof}

Using the estimates from the above propositions we can prove the following:

\begin{proposition}
Let the assumptions (\ref{cond coef Lipschitz}) and (\ref{cond fi 2}) be satisfied. Then there exists $C>0$ such that\vspace{-0.13in}
\[
\sup_{t\in\left[ 0,T\right] }\mathbb{E}|Y_{t}^{h}-\tilde{Y}_{t}^{h}|^{2}+\mathbb{E}\int_{0}^{T}|Z_{t}^{h}-\tilde{Z}_{t}^{h}|^{2}dt\leq Ch^{1-2a}.
\]
\end{proposition}

\begin{proof}
From (\ref{approximation of BSVI}) and (\ref{cont version of Y}) we deduce that, for $i=\overline{0,n-1}$ and $t\in\left[ t_{i},t_{i+1}\right] $,
\begin{align*}
Y_{t}^{h} & =Y_{t_{i+1}}^{h}+\int_{t}^{t_{i+1}}\left[ F(s,X_{s},Y_{s}^{h},Z_{s}^{h})-\nabla\varphi_{h^{a}}(Y_{s}^{h})\right] ds-\int_{t}^{t_{i+1}}Z_{s}^{h}dW_{s},\medskip\\
\tilde{Y}_{t}^{h} & =\tilde{Y}_{t_{i+1}}^{h}+\int_{t}^{t_{i+1}}\left[ F(t_{i},X_{t_{i}}^{h},\tilde{Y}_{t_{i}}^{h},\tilde{Z}_{t_{i}}^{h})-\nabla\varphi_{h^{a}}(\tilde{Y}_{t_{i}}^{h})\right] ds-\int_{t}^{t_{i+1}}\tilde{Z}_{s}^{h}dW_{s}.
\end{align*}
Throughout the proof let $\Delta^{h}F_{t}:=F(t,X_{t},Y_{t}^{h},Z_{t}^{h})-F(t_{i},X_{t_{i}}^{h},\tilde{Y}_{t_{i}}^{h},\tilde{Z}_{t_{i}}^{h})$, $\Delta^{h}Y_{t}:=Y_{t}^{h}-\tilde{Y}_{t}^{h}$ and $\Delta^{h}Z_{t}:=Z_{t}^{h}-\tilde{Z}_{t}^{h}$, for $t\in\left[ t_{i},t_{i+1}\right] $. Applying the energy equality we obtain
\begin{align}
\mathbb{E}|\Delta^{h}Y_{t}|^{2}+\int_{t}^{t_{i+1}}\mathbb{E}|\Delta^{h}Z_{s}|^{2}ds & =\mathbb{E}|\Delta^{h}Y_{t_{i+1}}|^{2}+2\mathbb{E}\int_{t}^{t_{i+1}}\Delta^{h}Y_{s}~\Delta^{h}F_{s}ds\label{Energy equality}\\
& \quad-2\mathbb{E}\int_{t}^{t_{i+1}}\Delta^{h}Y_{s}\left( \nabla\varphi_{h^{a}}(Y_{s}^{h})-\nabla\varphi_{h^{a}}(\tilde{Y}_{t_{i}}^{h})\right) ds.\nonumber
\end{align}
We first estimate the term involving $\Delta^{h}Y_{s}\cdot\left[ \Delta^{h}F_{s}-(\nabla\varphi_{h^{a}}(Y_{s}^{h})-\nabla\varphi_{h^{a}}(\tilde{Y}_{t_{i}}^{h}))\right] $, for which we use the Lipschitz properties of $F$ and $\nabla\varphi_{h^{a}}$:
\begin{equation}
\begin{array}[c]{l}
\displaystyle2\mathbb{E}\int_{t}^{t_{i+1}}\Delta^{h}Y_{s}\cdot\left[ \Delta^{h}F_{s}-(\nabla\varphi_{h^{a}}(Y_{s}^{h})-\nabla\varphi_{h^{a}}(\tilde{Y}_{t_{i}}^{h}))\right] ds\medskip\\
\displaystyle\leq2K\mathbb{E}\int_{t}^{t_{i+1}}\left\vert \Delta^{h}Y_{s}\right\vert \cdot\left[ \left\vert s-t_{i}\right\vert +|X_{s}-X_{t_{i}}^{h}|+|Y_{s}^{h}-\tilde{Y}_{t_{i}}^{h}|+|Z_{s}^{h}-\tilde{Z}_{t_{i}}^{h}|+\frac{1}{h^{a}}|Y_{s}^{h}-\tilde{Y}_{t_{i}}^{h}|\right] ds\medskip\\
\displaystyle\leq\left( K^{2}\alpha+\beta\right) \mathbb{E}\int_{t}^{t_{i+1}}|\Delta^{h}Y_{s}|^{2}ds+\frac{4}{\alpha}\mathbb{E}\int_{t}^{t_{i+1}}|X_{s}-X_{t_{i}}^{h}|^{2}ds+\frac{4}{\alpha}h^{3}\medskip\\
\displaystyle\quad+\frac{4}{\alpha}\mathbb{E}\int_{t}^{t_{i+1}}|Z_{s}^{h}-\tilde{Z}_{t_{i}}^{h}|^{2}ds+\Big(\frac{4}{\alpha}+\frac{1}{\beta h^{2a}}\Big)\mathbb{E}\int_{t}^{t_{i+1}}|Y_{s}^{h}-\tilde{Y}_{t_{i}}^{h}|^{2}ds,
\end{array}
\label{Lipschitz consequence}
\end{equation}
where $\alpha,\beta>0$ will be chosen later. From now on, let $C>0$ denote a constant independent of $h$, which may take different values from one line to another. From Proposition \ref{Euler for X} we have that there exists $C>0$ such that
\[
\mathbb{E}|X_{s}-X_{t_{i}}^{h}|^{2}\leq2\mathbb{E}\left\vert X_{s}-X_{t_{i}}\right\vert ^{2}+2\mathbb{E}|X_{t_{i}}-X_{t_{i}}^{h}|^{2}\leq Ch,
\]
and, from Proposition \ref{Lemma from Zhang},
\[
\mathbb{E}|Y_{s}^{h}-\tilde{Y}_{t_{i}}^{h}|^{2}\leq2\mathbb{E}|Y_{s}^{h}-Y_{t_{i}}^{h}|^{2}+2\mathbb{E}|Y_{t_{i}}^{h}-\tilde{Y}_{t_{i}}^{h}|^{2}\leq Ch+2\mathbb{E}|\Delta^{h}Y_{t_{i}}|^{2}.
\]
Using (\ref{ineq for approx Z}),
\[
\mathbb{E}|Z_{s}^{h}-\tilde{Z}_{t_{i}}^{h}|^{2}\leq2\mathbb{E}|Z_{s}^{h}-\bar{Z}_{t_{i}}^{h}|^{2}+2\mathbb{E}|\bar{Z}_{t_{i}}^{h}-\tilde{Z}_{t_{i}}^{h}|^{2}\leq2\mathbb{E}|Z_{s}^{h}-\bar{Z}_{t_{i}}^{h}|^{2}+\frac{2}{h}\int_{t_{i}}^{t_{i+1}}\mathbb{E}|\Delta^{h}Z_{s}|^{2}ds.
\]
Then (\ref{Lipschitz consequence}) yields
\begin{equation}
\begin{array}[c]{c}
\displaystyle A_{i}\left( t\right) :=\mathbb{E}|\Delta^{h}Y_{t}|^{2}+\int_{t}^{t_{i+1}}\mathbb{E}|\Delta^{h}Z_{s}|^{2}ds\leq\left( K^{2}\alpha+\beta\right) \mathbb{E}\int_{t}^{t_{i+1}}|\Delta^{h}Y_{s}|^{2}ds+B_{i},
\end{array}
\label{ineq for Gronwall}
\end{equation}
where
\begin{equation}
\begin{array}[c]{l}
\displaystyle B_{i}:=\mathbb{E}|\Delta^{h}Y_{t_{i+1}}|^{2}+\frac{8}{\alpha}\int_{t_{i}}^{t_{i+1}}\mathbb{E}|Z_{s}^{h}-\bar{Z}_{t_{i}}^{h}|^{2}ds+\frac{8}{\alpha}\int_{t_{i}}^{t_{i+1}}\mathbb{E}|\Delta^{h}Z_{s}|^{2}ds+\Big(\frac{4}{\alpha}+\frac{1}{\beta h^{2a}}\Big)Ch^{2}\medskip\\
\displaystyle\quad+2h\Big(\frac{4}{\alpha}+\frac{1}{\beta h^{2a}}\Big)\mathbb{E}\left\vert \Delta^{h}Y_{t_{i}}\right\vert ^{2}.
\end{array}
\label{def of B}
\end{equation}
Using a backward Gronwall type inequality we deduce
\[
\mathbb{E}|\Delta^{h}Y_{t}|^{2}\leq B_{i}e^{\displaystyle\int_{t}^{t_{i+1}}(K^{2}\alpha+\beta)ds}\leq B_{i}e^{\left( K^{2}\alpha+\beta\right) h}\leq CB_{i},
\]
and, therefore,
\[
\begin{array}[c]{c}
\displaystyle A_{i}\left( t\right) \leq B_{i}+\left( K^{2}\alpha+\beta\right) \int_{t}^{t_{i+1}}CB_{i}ds=B_{i}\left[ 1+C\left( K^{2}\alpha+\beta\right) h\right] \leq B_{i}\left[ 1+Ch\right] ,\;h\in\left( 0,1\right) .
\end{array}
\]
The above inequality and the definition of $B_{i}$ imply
\[
\begin{array}[c]{l}
\displaystyle\left[ 1-\left( 1+Ch\right) \Big(\frac{4}{\alpha}+\frac{1}{\beta h^{2a}}\Big)2h\right] \mathbb{E}|\Delta^{h}Y_{t_{i}}|^{2}+\left[ 1-\left( 1+Ch\right) \frac{8}{\alpha}\right] \int_{t_{i}}^{t_{i+1}}\mathbb{E}|\Delta^{h}Z_{s}|^{2}ds\medskip\\
\displaystyle\leq\left( 1+Ch\right) \left[ \mathbb{E}|\Delta^{h}Y_{t_{i+1}}|^{2}+\frac{8}{\alpha}\int_{t_{i}}^{t_{i+1}}\mathbb{E}|Z_{s}^{h}-\bar{Z}_{t_{i}}^{h}|^{2}ds+Ch^{2-2a}\right] .
\end{array}
\]
Taking $a\in\left( 0,1/2\right) $, we can choose $h>0$ sufficiently small and $\alpha,\beta>0$ large enough such that
\[
\begin{array}[c]{c}
C_{1}:=1-\left( 1+Ch\right) \left( \frac{4}{\alpha}+\frac{1}{\beta h^{2a}}\right) 2h>0\text{ and }C_{2}:=1-\left( 1+Ch\right) \frac{8}{\alpha}>0,
\end{array}
\]
and, therefore,
\begin{equation}
\begin{array}[c]{l}
\displaystyle C_{1}\mathbb{E}|\Delta^{h}Y_{t_{i}}|^{2}+C_{2}\int_{t_{i}}^{t_{i+1}}\mathbb{E}|\Delta^{h}Z_{s}|^{2}ds\medskip\\
\displaystyle\leq\left( 1+Ch\right) \left[ \mathbb{E}|\Delta^{h}Y_{t_{i+1}}|^{2}+\frac{8}{\alpha}\int_{t_{i}}^{t_{i+1}}\mathbb{E}|Z_{s}^{h}-\bar{Z}_{t_{i}}^{h}|^{2}ds+Ch^{2-2a}\right] .
\end{array}
\label{ineq for conclusion}
\end{equation}
Iterating the above inequality for $i=\overline{0,n-1}$, we can deduce
\[
\mathbb{E}|\Delta^{h}Y_{t_{i}}|^{2}\leq\left( 1+Ch\right) ^{n}\left[ h^{2-2a}+\mathbb{E}|\Delta^{h}Y_{T}|^{2}+\sum\limits_{j=0}^{n-1}\int_{t_{j}}^{t_{j+1}}\mathbb{E}|Z_{s}^{h}-\bar{Z}_{t_{j}}^{h}|^{2}ds\right] ,\;i=\overline{0,n-1}.
\]
From the Lipschitz property of $g$ and Proposition \ref{Lemma from Zhang} we obtain, for each $i=\overline{0,n-1}$, since $a\in\left( 0,1/2\right) $, that
\begin{equation}
\mathbb{E}|\Delta^{h}Y_{t_{i}}|^{2}\leq Ch,\;\forall h\in\left( 0,1\right) \;\text{small enough.}
\label{estimation for delta Y}
\end{equation}
For the proof of the inequality concerning $\left\Vert \Delta^{h}Z_{s}\right\Vert _{L^{2}\left( \Omega\times\left[ 0,T\right] \right) }$ we proceed in the following manner (see, e.g., Bouchard \& Touzi \cite{BT}).
From (\ref{ineq for conclusion}) it follows that
\[
\begin{array}[c]{l}
\displaystyle\int_{0}^{T}\mathbb{E}|\Delta^{h}Z_{s}|^{2}ds=\sum\limits_{i=0}^{n-1}\int_{t_{i}}^{t_{i+1}}\mathbb{E}|\Delta^{h}Z_{s}|^{2}ds\medskip\\
\displaystyle\leq\left( 1+Ch\right) \left[ \sum\limits_{i=0}^{n-1}\mathbb{E}|\Delta^{h}Y_{t_{i+1}}|^{2}+Ch^{2-2a}~n+\frac{8}{\alpha}\sum\limits_{i=0}^{n-1}\int_{t_{i}}^{t_{i+1}}\mathbb{E}|Z_{s}^{h}-\bar{Z}_{t_{i}}^{h}|^{2}ds\right] -C_{1}\sum\limits_{i=0}^{n-1}\mathbb{E}|\Delta^{h}Y_{t_{i}}|^{2}\medskip\\
\displaystyle=\left( 1+Ch\right) \left[ \mathbb{E}|\Delta^{h}Y_{T}|^{2}+Ch^{1-2a}+\frac{8}{\alpha}\sum\limits_{i=0}^{n-1}\int_{t_{i}}^{t_{i+1}}\mathbb{E}|Z_{s}^{h}-\bar{Z}_{t_{i}}^{h}|^{2}ds\right] \medskip\\
\displaystyle\quad+\left( \left( 1+Ch\right) +\left( 1+Ch\right) \big(\frac{4}{\alpha}+\frac{1}{\beta h^{2a}}\big)2h-1\right) \sum\limits_{i=1}^{n-1}\mathbb{E}|\Delta^{h}Y_{t_{i}}|^{2}-C_{1}\mathbb{E}|\Delta^{h}Y_{0}|^{2},
\end{array}
\]
and, therefore, from (\ref{estimation for delta Y}),
\[
\begin{array}[c]{l}
\displaystyle\int_{0}^{T}\mathbb{E}|\Delta^{h}Z_{s}|^{2}ds\leq C\left[ \mathbb{E}|\Delta^{h}Y_{T}|^{2}+\sum\limits_{i=0}^{n-1}\int_{t_{i}}^{t_{i+1}}\mathbb{E}|Z_{s}^{h}-\bar{Z}_{t_{i}}^{h}|^{2}ds+h^{1-2a}\right] \medskip\\
\displaystyle\quad+\left( Ch+\frac{8}{\alpha}h+\frac{2}{\beta}h^{1-2a}+\frac{8}{\alpha}h^{2}+\frac{2}{\beta}h^{2-2a}\right) \sum\limits_{i=1}^{n-1}\mathbb{E}|\Delta^{h}Y_{t_{i}}|^{2}\medskip\\
\displaystyle\leq C\left[ \mathbb{E}|\Delta^{h}Y_{T}|^{2}+\sum\limits_{i=0}^{n-1}\int_{t_{i}}^{t_{i+1}}\mathbb{E}|Z_{s}^{h}-\bar{Z}_{t_{i}}^{h}|^{2}ds+h^{1-2a}+h^{1-2a}Ch~n\right] \medskip\\
\displaystyle\leq C\left[ Ch+Ch+Ch^{1-2a}\right] \leq Ch^{1-2a}.
\end{array}
\]
Using the definition (\ref{def of B}) of $B_{i}$ we deduce that $B_{i}\leq Ch$ and, respectively, $\max_{i}\mathbb{E}|\Delta^{h}Y_{t_{i}}|^{2}\leq Ch$, which completes the proof.\hfill\medskip
\end{proof}

Consequently, we have proved our main result:

\begin{theorem}
There exists a constant $C>0$, depending only on the Lipschitz constants of the coefficients, such that
\begin{equation}
\sup_{t\in\left[ 0,T\right] }\mathbb{E}|Y_{t}-\tilde{Y}_{t}^{h}|^{2}+\mathbb{E}\int_{0}^{T}\left[ |Y_{t}-\tilde{Y}_{t}^{h}|^{2}+|Z_{t}-\tilde{Z}_{t}^{h}|^{2}\right] dt\leq Ch^{a\wedge\left( 1-2a\right) }.
\label{error estimate}
\end{equation}
\end{theorem}

\section{Generalized BSVI. A proposed scheme for numerical approximation}

Let $\mathcal{D}$ be an open bounded subset of $\mathbb{R}^{d}$ of the form
\[
\mathcal{D}=\{x\in\mathbb{R}^{d}:\ell\left( x\right) <0\},\;\;Bd\left( \mathcal{D}\right) =\{x\in\mathbb{R}^{d}:\ell\left( x\right) =0\},\smallskip
\]
where $\ell\in C_{b}^{3}\left( \mathbb{R}^{d}\right) $ and $\left\vert \nabla\ell\left( x\right) \right\vert =1$ for all $x\in Bd\left( \mathcal{D}\right) $. We know (see, e.g., Lions \& Sznitman \cite{LS}) that, for every $\left( t,x\right) \in\mathbb{R}_{+}\times\overline{\mathcal{D}}$, there exists a unique pair of $\overline{\mathcal{D}}\times\mathbb{R}_{+}$-valued progressively measurable continuous processes $(X_{s}^{t,x},A_{s}^{t,x})_{s\geq0}$, solution of the reflected SDE
\begin{equation}
\left\{
\begin{array}[c]{l}
\displaystyle X_{s}^{t,x}=x+\int_{t}^{s\vee t}b(r,X_{r}^{t,x})dr+\int_{t}^{s\vee t}\sigma(r,X_{r}^{t,x})dW_{r}-\int_{t}^{s\vee t}\nabla\ell(X_{r}^{t,x})dA_{r}^{t,x},\medskip\\
s\longmapsto A_{s}^{t,x}\text{\ is increasing,}\medskip\\
\displaystyle A_{s}^{t,x}=\int_{t}^{s\vee t}\mathbf{1}_{\{X_{r}^{t,x}\in Bd\left( \mathcal{D}\right) \}}dA_{r}^{t,x}.
\end{array}
\right.
\label{defX}
\end{equation}
Moreover, it can be proved that
\[
\mathbb{E}\underset{s\in\left[ 0,T\right] }{\sup}\big|X_{s}^{t,x}-X_{s}^{t^{\prime},x^{\prime}}\big|^{p}\leq C(\big|x-x^{\prime}\big|^{p}+\big|t-t^{\prime}\big|^{p/2}),
\]
and, for all $\mu>0$, $\mathbb{E}\big(e^{\mu A_{T}^{t,x}}\big)<\infty.\smallskip$

Consider now the following generalized backward stochastic variational inequality:
\begin{equation}
\left\{
\begin{array}[c]{l}
dY_{t}+F\left( t,X_{t},Y_{t},Z_{t}\right) dt+G\left( t,X_{t},Y_{t}\right) dA_{t}\in\partial\varphi\left( Y_{t}\right) dt+Z_{t}dW_{t},\;0\leq t\leq T,\medskip\\
Y_{T}=g\left( X_{T}\right) .
\end{array}
\right.
\label{generalized BSVI}
\end{equation}
We will suppose that the functions $F$ and $G$ satisfy the same assumptions as the generator function $F$ from the previous section. It is known (see Maticiuc \& R\u{a}\c{s}canu \cite{MR2}) that the above equation admits a unique solution, i.e., for all $t\in\left[ 0,T\right] $, $\mathbb{P}$-a.s.,
\[
Y_{t}+\int_{t}^{T}U_{s}ds=g\left( X_{T}\right) +\int_{t}^{T}F\left( s,X_{s},Y_{s},Z_{s}\right) ds+\int_{t}^{T}G\left( s,X_{s},Y_{s}\right) dA_{s}-\int_{t}^{T}Z_{s}dW_{s},
\]
where $U_{t}\in\partial\varphi\left( Y_{t}\right) $, a.e.\ on $\Omega\times\left[ 0,T\right] .$

\begin{theorem}
Under the considered assumptions, the generalized BSVI (\ref{generalized BSVI}) admits a unique triple $\left( Y_{t},Z_{t},U_{t}\right) $ of $\mathcal{F}_{t}$-progressively measurable processes as its solution. Moreover, for any $0\leq s\leq t\leq T$, we have, for some positive constant $C$,
\begin{equation}
\begin{array}[c]{rl}
\left( a\right) & \displaystyle\mathbb{E}\left[ \int_{s}^{t}\big(\left\vert Y_{r}\right\vert ^{2}+\left\Vert Z_{r}\right\Vert ^{2}\big)dr+\int_{s}^{t}\left\vert Y_{r}\right\vert ^{2}dA_{r}\right] +\mathbb{E}\underset{s\leq r\leq t}{\sup}\left\vert Y_{r}\right\vert ^{2}\leq CM_{1},\medskip\\
\left( b\right) & \mathbb{E}\big(\varphi\left( Y_{t}\right) \big)\leq CM_{2}\quad\text{and}\quad\displaystyle\mathbb{E}\left[ \int_{s}^{t}\left\vert U_{r}\right\vert ^{2}dr\right] \leq CM_{2},
\end{array}
\label{propr}
\end{equation}
where
\[
M_{1}=\mathbb{E}\left[ \left\vert \xi\right\vert ^{2}+\int_{0}^{T}\Big(\big|F\left( s,0,0\right) \big|^{2}ds+\big|G\left( s,0\right) \big|^{2}dA_{s}\Big)\right] \quad\text{and}\quad M_{2}=\mathbb{E}\big(\left\vert \xi\right\vert ^{2}+\varphi\left( \xi\right) \big),
\]
with $\xi:=g\left( X_{T}\right) $ denoting the terminal value.
\end{theorem}

For the generalized system considered above we propose a mixed approximation scheme, considering, for the simplicity of the presentation, only the case $\varphi\equiv0$. Consider the grid $\pi=\{t_{i}=ih:0\leq i\leq n\}$ of $\left[ 0,T\right] $, with $h:=T/n$, $n\in\mathbb{N}^{\ast}$, and let us define $X^{\pi}$, the approximating Euler scheme for the reflected process $X$.
We follow the paper \cite{CPS}, where the authors present the standard projected Euler approach for stopped reflected diffusions:
\[
\left\{
\begin{array}[c]{l}
X_{0}^{\pi}=x,\quad A_{0}^{\pi}=0,\smallskip\\
\hat{X}_{t_{i+1}}^{\pi}=X_{t_{i}}^{\pi}+b(t_{i},X_{t_{i}}^{\pi})(t_{i+1}-t_{i})+\sigma(t_{i},X_{t_{i}}^{\pi})(W_{t_{i+1}}-W_{t_{i}}),\medskip\\
X_{t_{i+1}}^{\pi}=\left\{
\begin{array}[c]{ll}
\hat{X}_{t_{i+1}}^{\pi}, & \hat{X}_{t_{i+1}}^{\pi}\in\overline{\mathcal{D}},\medskip\\
\Pr_{\overline{\mathcal{D}}}(\hat{X}_{t_{i+1}}^{\pi}), & \hat{X}_{t_{i+1}}^{\pi}\notin\overline{\mathcal{D}},
\end{array}
\right. \quad\text{and}\quad
A_{t_{i+1}}^{\pi}=\left\{
\begin{array}[c]{ll}
A_{t_{i}}^{\pi}, & \hat{X}_{t_{i+1}}^{\pi}\in\overline{\mathcal{D}},\medskip\\
A_{t_{i}}^{\pi}+||\Pr_{\overline{\mathcal{D}}}(\hat{X}_{t_{i+1}}^{\pi})-\hat{X}_{t_{i+1}}^{\pi}||, & \hat{X}_{t_{i+1}}^{\pi}\notin\overline{\mathcal{D}},\smallskip
\end{array}
\right.
\end{array}
\right.
\]
where $X_{t_{i+1}}^{\pi}$ is obtained by taking the projection on the domain. To define an approximation scheme for the generalized BSVI (\ref{generalized BSVI}), consider $Y_{T}^{\pi}:=g(X_{T}^{\pi})$ and, for $i=\overline{n-1,0}$, remark, in an intuitive manner, using the notations $\Delta A_{t_{i}}^{\pi}:=A_{t_{i+1}}^{\pi}-A_{t_{i}}^{\pi}$ and $\Delta W_{t_{i}}:=W_{t_{i+1}}-W_{t_{i}}$, that
\[
Y_{t_{i}}\sim Y_{t_{i+1}}-G(X_{t_{i+1}}^{\pi},Y_{t_{i+1}})\Delta A_{t_{i}}^{\pi}-Z_{t_{i}}\Delta W_{t_{i}}~.
\]
\noindent Taking the conditional expectation $\mathbb{E}^{\mathcal{F}_{i}}$, it follows that
\[
Y_{t_{i}}\sim\mathbb{E}^{\mathcal{F}_{i}}(Y_{t_{i+1}})-\mathbb{E}^{\mathcal{F}_{i}}[G(X_{t_{i+1}}^{\pi},Y_{t_{i+1}})\Delta A_{t_{i}}^{\pi}].
\]
\noindent This suggests to define the following approximation scheme:
\begin{equation}
\left\{
\begin{array}[c]{l}
Y_{t_{i}}^{\pi}:=\mathbb{E}^{\mathcal{F}_{i}}[Y_{t_{i+1}}^{\pi}-G(X_{t_{i+1}}^{\pi},Y_{t_{i+1}}^{\pi})\Delta A_{t_{i}}^{\pi}],\quad Y_{T}^{\pi}:=g(X_{T}^{\pi}),\smallskip\\
Z_{t_{i}}^{\pi}:=\dfrac{1}{h}\mathbb{E}^{\mathcal{F}_{i}}[Y_{t_{i+1}}^{\pi}\Delta W_{t_{i}}-G(X_{t_{i+1}}^{\pi},Y_{t_{i+1}}^{\pi})\Delta A_{t_{i}}^{\pi}\Delta W_{t_{i}}].
\end{array}
\right.
\label{proposed scheme}
\end{equation}

\begin{problem}
The proof of the convergence of the scheme defined by (\ref{proposed scheme}) would provide a useful tool for the approximation of the solution of the generalized BSVI (\ref{generalized BSVI}). For the moment this is still an open problem, which the interested reader is kindly invited to approach.
\end{problem}

\bigskip
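For the forward component of the scheme above, the projected Euler step of \cite{CPS} can be sketched as follows in the one-dimensional case $\mathcal{D}=(-1,1)$; the domain, coefficients and parameter values are illustrative, and approximating the conditional expectations in (\ref{proposed scheme}) would further require, e.g., a regression-based Monte Carlo estimator, which is not shown.

```python
import numpy as np

def projected_euler_reflected(b, sigma, x0, T, n, rng, lo=-1.0, hi=1.0):
    """Projected Euler scheme for a diffusion reflected in D = (lo, hi):
    take a plain Euler step, then project back onto the closure of D;
    the increment of A^pi is the distance by which the step left the domain."""
    h = T / n
    X = np.empty(n + 1)
    A = np.zeros(n + 1)
    X[0] = x0
    dW = rng.normal(0.0, np.sqrt(h), size=n)
    for i in range(n):
        x_hat = X[i] + b(X[i]) * h + sigma(X[i]) * dW[i]
        x_proj = min(max(x_hat, lo), hi)
        X[i + 1] = x_proj
        A[i + 1] = A[i] + abs(x_proj - x_hat)
    return X, A

# illustrative run: reflected Brownian motion in (-1, 1)
rng = np.random.default_rng(1)
X, A = projected_euler_reflected(lambda x: 0.0, lambda x: 1.0, 0.0, 1.0, 200, rng)
```

By construction the path stays in $\overline{\mathcal{D}}$ and $A^{\pi}$ is nondecreasing, increasing only when the Euler step exits the domain.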
\section{Introduction} Statistical physics is often concerned with the problem of determining a closed set of equations of motion for a relatively small number of macroscopic degrees of freedom, given those for a much larger number of underlying degrees of freedom. This problem occurs in the classical theoretical context of deriving fluid dynamic equations from underlying kinetic equations~\cite{bib:chapmancowling}, and also in the more modern computational context of so-called {\it multiphysics} simulations~\cite{bib:multiphysics1,bib:multiphysics2}. These examples span a wide range of difficulty. The derivation of the Boltzmann equation assumes excellent separation between three different scales of length and time: \begin{enumerate} \item The scales associated with a molecular collision are assumed to be the smallest and fastest of all the relevant scales. In particular, the range of intermolecular force is assumed to be much smaller than a mean-free path, and the duration of a collision is assumed to be much smaller than a mean-free time. \item The scales associated with the spatial and temporal intervals between collisions -- the mean-free path and the mean-free time, respectively -- are assumed to be much larger than the collision duration, but smaller than any hydrodynamic length and time scales. \item Hydrodynamic length and time scales are assumed to be longest and slowest. \end{enumerate} Under these assumptions, an asymptotic approach may be adopted. This is the basis of the Chapman-Enskog expansion~\cite{bib:chapmancowling} with which it is possible to derive the Navier-Stokes equations of viscous hydrodynamics. Over the past few decades, it has been recognized that a breakdown of scale separation between scales 1 and 2 is much less catastrophic than a breakdown between scales 2 and 3. In a dense gas or liquid, for example, the mean-free path may be comparable to the interaction range, so good separation is lost between scales 1 and 2. 
This invalidates Boltzmann's {\it stosszahlansatz}, by which the Boltzmann equation is derived. In spite of this, as long as there remains good separation between scales 2 and 3, kinetic ring theory shows that the form of the Navier-Stokes equations is robust, and the interparticle correlations introduced by the loss of separation between scales 1 and 2 may be accounted for by an appropriate renormalization of the hydrodynamic transport coefficients~\cite{bib:kineticring}. The Navier-Stokes equations hold reasonably well for water at standard temperature and pressure, after all, even though scales 1 and 2 are comparable. A breakdown of separation between scales 2 and 3 is a much more serious issue. The ratio of mean-free path to hydrodynamic scale lengths is called the Knudsen number, Kn, and the Chapman-Enskog analysis is asymptotic in this quantity. Even for single-component, single-phase fluids, the Navier-Stokes equations may break down spectacularly when this quantity becomes too large. For complex fluid configurations, such as immiscible flow or two-phase coexistence, spatial gradients of order parameters may be very large indeed. The characteristic width of the interface between two immiscible fluids, for example, may be on the order of a mean-free path, so that the corresponding Knudsen number is of order unity. In this circumstance, asymptotic methods are not a viable option. For the last decade, physicists, chemists, applied mathematicians and engineers faced with the problem of modeling complex fluids have studied lattice models of hydrodynamics. These models consist of particle populations moving about on a lattice and colliding at lattice sites, whose emergent hydrodynamic behavior is that of a Navier-Stokes fluid~\cite{bib:lb}. It was found much easier to introduce effective forces between particle populations on a lattice than to introduce such forces in a continuum setting~\cite{bib:shanchen}. 
In this way, the dynamics of immiscible~\cite{bib:immiscible}, coexisting~\cite{bib:coexisting}, and amphiphilic~\cite{bib:amphiphilic} fluids have all been modeled successfully. It may be argued that the success of lattice models of fluids is purely phenomenological in nature. Attractive or repulsive interparticle potentials are introduced to make immiscible species separate, and surface tension is emergent. Potentials with both attractive and repulsive regions, in the spirit of the Van der Waals potential, are introduced, and the liquid-gas coexistence is emergent. Simplified BGK-style collision operators are used~\cite{bib:bgk}. As long as the dimensionless parameters associated with the simulation match the fluid being modeled, the details of the microscopic interactions are deemed unimportant. Faith in this phenomenological approach is, to some extent, justified by the observed robustness of hydrodynamic equations to details of the kinetic interactions. If both the interparticle potential range and the mean-free path are on the order of a lattice spacing, loss of separation between scales 1 and 2 is evident. As noted above, however, the hydrodynamic equations for a simple fluid are robust with respect to this loss; the hope is that a similar thing is true for more complex fluids. Another reason for faith in the lattice-BGK approach is its second-order accuracy.
Mathematically, the lattice-BGK equation may be written \begin{equation} f_j\left({\mybold{r}}+{\mybold{c}}_j,t+1\right) = f_j({\mybold{r}},t) + \frac{1}{\tau}\left[f^{(\mbox{\tiny eq})}_j\left({\mybold{\rho}}({\mybold{r}},t)\right)-f_j({\mybold{r}},t)\right], \label{eq:bgk} \end{equation} where $f_j({\mybold{r}},t)$ is the discrete-velocity distribution function corresponding to the $j$th velocity ${\mybold{c}}_j$ at spatial position ${\mybold{r}}$ and time $t$, likewise $f^{(\mbox{\tiny eq})}_j$ is an equilibrium distribution function that depends only on the conserved densities ${\mybold{\rho}}$, and $\tau$ is a collisional relaxation time. Second-order accuracy is not at all obvious from a cursory inspection of Eq.~(\ref{eq:bgk}). To make it more evident, Dellar~\cite{bib:dellar} has pointed out that we may define the new dependent variable \begin{equation} F_j := f_j+\frac{1}{2\tau}\left(f^{(\mbox{\tiny eq})}_j-f_j\right). \end{equation} It is then straightforward to derive the lattice BGK equation for the transformed variable. Using $P$ to denote the {\it propagation operator}, defined by \begin{equation} Pf_j(x,t) = f_j(x+{\mybold{c}}_j,t+1), \end{equation} the result may be written \begin{equation} PF_j = F_j + \frac{1}{\tau-\frac{1}{2}}\left(\frac{I+P}{2}\right)\left[f^{(\mbox{\tiny eq})}_j-F_j\right], \label{eq:PIform} \end{equation} where $I$ is the identity operator, and where we have used the fact that $f^{(\mbox{\tiny eq})}_j-F_j=\left(1-\frac{1}{2\tau}\right)\left(f^{(\mbox{\tiny eq})}_j-f_j\right)$, which holds pointwise because the collision conserves the hydrodynamic moments. This form makes manifest the fact that the collision is applied between sites. It also allows a glimpse at the origin of the term $\tau-1/2$ which will emerge as a factor in the transport coefficient. One might hope that the success of lattice BGK models would represent progress toward the goal of deriving hydrodynamic equations for complex fluids from first principles.
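The transformed-variable identity is easily checked numerically. The following sketch (Python/NumPy; all parameter values are arbitrary test choices) uses the two-speed Burgers-equation model introduced in the next section as a concrete instance: it advances the lattice BGK equation one step, forms $F_j$, and confirms Eq.~(\ref{eq:PIform}), i.e.\ that the propagated $F_j$ equals $F_j$ plus $\frac{1}{\tau-1/2}$ times the average of the collision term $f^{(\mbox{\tiny eq})}_j-F_j$ before and after propagation.

```python
import numpy as np

# Concrete instance: the two-speed (D1Q2) Burgers-equation model of the next
# section; f[0] streams to x+1, f[1] to x-1.  Parameters are arbitrary
# test values, not taken from the text.
N, tau, kappa = 64, 0.8, 0.25
x = np.arange(N)

def feq(rho):
    dev = 0.5 * kappa * rho * (1.0 - 0.5 * rho)
    return np.array([0.5 * rho + dev, 0.5 * rho - dev])

def bgk_step(f):                     # Eq. (bgk): collide, then propagate
    fpost = f + (feq(f[0] + f[1]) - f) / tau
    return np.array([np.roll(fpost[0], 1), np.roll(fpost[1], -1)])

def transform(f):                    # F_j = f_j + (1/2 tau)(feq_j - f_j)
    return f + (feq(f[0] + f[1]) - f) / (2 * tau)

f0 = feq(1.0 + 0.3 * np.cos(2 * np.pi * x / N))
f1 = bgk_step(f0)
F0, F1 = transform(f0), transform(f1)
G0 = feq(f0[0] + f0[1]) - F0         # collision term feq - F at time t
G1 = feq(f1[0] + f1[1]) - F1         # the same at time t+1

# P shifts each component one site along its own velocity, forward in time:
PF0 = np.array([np.roll(F1[0], -1), np.roll(F1[1], 1)])
PG0 = np.array([np.roll(G1[0], -1), np.roll(G1[1], 1)])

lhs = PF0
rhs = F0 + 0.5 * (G0 + PG0) / (tau - 0.5)
print(np.max(np.abs(lhs - rhs)))     # machine-precision residual
```

Because $f^{(\mbox{\tiny eq})}_j-F_j$ is proportional to $f^{(\mbox{\tiny eq})}_j-f_j$ pointwise, the check is exact up to roundoff.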
For example, it would be very satisfying if a Chapman-Enskog analysis of an interacting-particle lattice-BGK equation gave rise to a Ginzburg-Landau or Cahn-Hilliard equation for phase separation, coupled to the Navier-Stokes equations in the style of Halperin and Hohenberg's Model H~\cite{bib:hh}. To date, however, a successful derivation of this sort does not exist, most likely due to the aforementioned loss of separation between scales 2 and 3 for such fluids. There is simply no small parameter analogous to the Knudsen number on which to base an asymptotic expansion. Part of the problem is that the very first step of the Chapman-Enskog analysis of the lattice-BGK equation is the Taylor expansion of the propagation operator, effectively in powers of the Knudsen number. This has the effect of yielding partial differential equations (PDEs) in space and time. In this work, we argue that the insistence on PDEs is the real problem. If we were willing to entertain the possibility of hydrodynamic equations that are difference equations in space and/or time, we would be able to model fluids with rapidly varying hydrodynamic fields. The question arises: Is it possible to derive a closed-form equation for the hydrodynamic degrees of freedom of a lattice BGK model without Taylor expanding, or doing anything else tantamount to an asymptotic expansion in Knudsen number? In this paper, we answer this question affirmatively. We circumvent the usual need for Taylor expansion by applying the projection operator formalism to the problem of deriving the exact hydrodynamics of the lattice-BGK equation. As an alternative to the more usual Chapman-Enskog expansion, this approach offers many benefits. Most remarkably, it produces absolutely exact, though non-Markovian, hydrodynamic difference equations as an intermediate step. These are accurate to all orders in Knudsen number and hence contain all of the physics of the Burnett equations and beyond for the lattice BGK fluid. 
If appropriate -- but {\it only} if appropriate -- these equations may then be Taylor expanded in space and/or time (to second order in Knudsen number) to obtain the hydrodynamic equations that would have resulted from the Chapman-Enskog analysis. The method thereby offers the potential to derive hydrodynamic difference equations for complex fluids with sharp gradients, such as immiscible and amphiphilic flow, for which the assumptions underlying the Chapman-Enskog approach are generally invalid. We begin by applying the methodology to a lattice-BGK model for Burgers' equation. This example is simple enough to display every step of the calculation, but complicated enough to raise almost all of the above issues. In particular, spatially varying initial conditions may lead to a shock of characteristic width equal to the square root of the viscosity. For reasonable values of the viscosity, this shock width may be on the order of a lattice spacing. \section{Burgers' equation} \subsection{Lattice BGK model} The method is best illustrated by example, so we begin by applying it to a lattice BGK model for Burgers' equation in one spatial dimension ($D=1$). A number of such models for Burgers' equation have been developed over the years~\cite{bib:lgburgers,bib:lbburgers,bib:elboburgers}; this one is a variant of an entropic version considered by Boghosian et al.~\cite{bib:elboburgers}. At each point $x\in{\mathbb Z}$, and at each time $t\in{\mathbb Z}^+$, we have a two-component distribution function $f_\pm(x,t)$, from which it is possible to recover the hydrodynamic density \begin{equation} \rho(x,t) = f_+(x,t) + f_-(x,t).
\label{eq:forward} \end{equation} The distribution function obeys the lattice BGK kinetic equation \begin{equation} f_\pm\left(x\pm 1,t+1\right) = f_\pm(x,t) + \frac{1}{\tau}\left[f^{(\mbox{\tiny eq})}_\pm\left(\rho(x,t)\right)-f_\pm(x,t)\right], \label{eq:bgkBurgers} \end{equation} where we have defined the local equilibrium distribution function \begin{equation} f^{(\mbox{\tiny eq})}_\pm\left(\rho\right) := \frac{\rho}{2} \pm \frac{\kappa}{2}\rho\left(1-\frac{\rho}{2}\right). \end{equation} Here $\kappa$ is a parameter that is taken to be of first order in the scaling limit. That is, as the number of lattice points per physical distance is doubled, $\kappa$ will be halved. For the analysis of this model, we shall also find it useful to define the kinetic moment \begin{equation} \xi(x,t) := f_+(x,t) - f_-(x,t), \end{equation} so that the distribution function may be recovered from knowledge of its hydrodynamic and kinetic moments, \begin{equation} f_\pm(x,t) = \frac{\rho(x,t) \pm \xi(x,t)}{2}. \label{eq:backward} \end{equation} \subsection{Chapman-Enskog analysis} The lattice BGK model described above has pedagogical value as a simple introduction to the Chapman-Enskog asymptotic procedure. To make this presentation self-contained, and to facilitate comparison of the proposed new methodology with the Chapman-Enskog procedure, we outline the latter in this subsection. The usual analysis begins with a Taylor expansion of the kinetic equation (\ref{eq:bgkBurgers}) assuming parabolic ordering, wherein spatial derivatives are first-order quantities and time derivatives are second-order quantities.
The result is \begin{equation} \epsilon^2 \partial_t f_\pm \pm \epsilon \partial_x f_\pm + \frac{\epsilon^2}{2}\partial_x^2 f_\pm + {\mathcal O}\left(\epsilon^3\right) = \frac{1}{\tau}\left[\frac{\rho}{2} \pm \frac{\epsilon\kappa}{2}\rho\left(1-\frac{\rho}{2}\right) - f_\pm\right], \label{eq:bgkBurgersOrd} \end{equation} where all quantities are understood to be evaluated at $(x,t)$, and where $\epsilon$ is a formal expansion parameter, introduced for purposes of bookkeeping, that will be set to one at the end of the calculation. Note that we made the substitution $\kappa\rightarrow\epsilon\kappa$ to reflect the fact that $\kappa$ is first-order in the scaling limit. Taking the sum of the $\pm$ components of this equation yields the conservation equation \begin{equation} \epsilon^2 \partial_t\rho+\epsilon\partial_x\xi + \frac{\epsilon^2}{2}\partial_x^2 \rho + {\mathcal O}\left(\epsilon^3\right) = 0. \label{eq:hydro} \end{equation} To close this equation, it will be necessary to express $\xi$ in terms of $\rho$. This is done by solving Eq.~(\ref{eq:bgkBurgersOrd}) perturbatively by taking \begin{equation} f_\pm = f^{(0)}_\pm + \epsilon f^{(1)}_\pm + \epsilon^2 f^{(2)}_\pm + {\mathcal O}\left(\epsilon^3\right). \end{equation} At order zero in $\epsilon$, we immediately obtain \begin{equation} f^{(0)}_\pm = \frac{\rho}{2}. \end{equation} This lowest approximation to $f_\pm$ implies $\xi=0$; it is therefore insufficient to calculate the kinetic moment, which first appears at order one. Proceeding to order one, we obtain \begin{equation} \pm \partial_x f^{(0)}_\pm = \frac{1}{\tau}\left[\pm \frac{\kappa}{2}\rho\left(1-\frac{\rho}{2}\right) - f^{(1)}_\pm\right], \end{equation} so \begin{equation} f^{(1)}_\pm = \pm \frac{\kappa}{2}\rho\left(1-\frac{\rho}{2}\right)\mp \frac{\tau}{2}\partial_x \rho, \end{equation} where we have used the order zero solution in the last step. 
This order-one result for the distribution function, \begin{equation} f_\pm = \frac{\rho}{2}\pm \frac{\epsilon\kappa}{2}\rho\left(1-\frac{\rho}{2}\right)\mp \frac{\epsilon\tau}{2}\partial_x \rho + {\mathcal O}\left(\epsilon^2\right) \end{equation} yields the kinetic moment \begin{equation} \xi = \epsilon\kappa\rho\left(1-\frac{\rho}{2}\right)- \epsilon\tau\partial_x \rho + {\mathcal O}\left(\epsilon^2\right). \end{equation} Inserting this result in Eq.~(\ref{eq:hydro}) yields \begin{equation} \epsilon^2 \partial_t\rho+\epsilon\partial_x \left[\epsilon\kappa\rho\left(1-\frac{\rho}{2}\right)- \epsilon\tau\partial_x \rho\right] + \frac{\epsilon^2}{2}\partial_x^2 \rho + {\mathcal O}\left(\epsilon^3\right) = 0, \label{eq:burgers1} \end{equation} or \begin{equation} \partial_t\rho+\kappa\left(1-\rho\right)\partial_x\rho = \left(\tau-\frac{1}{2}\right)\partial^2_x \rho + {\mathcal O}\left(\epsilon\right). \label{eq:burgers2} \end{equation} Upon the substitution $u:=\kappa\left(1-\rho\right)$, this becomes Burgers' equation in canonical form, \begin{equation} \partial_t u+u\,\partial_x u = \nu\,\partial^2_x u + {\mathcal O}\left(\epsilon\right), \end{equation} where we have defined the transport coefficient $\nu := \tau - 1/2$. This method of simulating Burgers' equation is simple to implement and remarkably robust. Fig.~\ref{fig:burgers} shows the results of simulating the model on a periodic lattice of size $N=1024$, with $\kappa=0.25$ and $\tau=0.60$. The initial conditions used were \begin{equation} \rho(x,0) = 1+\frac{1}{2}\cos\left(\frac{2\pi x}{N}\right). \end{equation} It is seen that the solution captures the formation and decay of the shock, and that the width of the shock at late times is comparable to the lattice spacing.
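This robustness is easy to confirm, and the transport coefficient can be measured directly. The sketch below (Python/NumPy; the smaller lattice and parameter values are arbitrary test choices, not those of the figure) implements Eq.~(\ref{eq:bgkBurgers}), verifies that the total density is conserved exactly, and checks $\nu=\tau-1/2$ by following the decay of a long-wavelength Fourier mode with $\kappa=0$, for which the model reduces to pure diffusion:

```python
import numpy as np

N, tau, kappa = 256, 0.7, 0.0      # arbitrary test values; kappa = 0: pure diffusion
x = np.arange(N)
k = 2 * np.pi / N                  # wavenumber of the seeded mode

def feq(rho):                      # local equilibrium distribution
    dev = 0.5 * kappa * rho * (1.0 - 0.5 * rho)
    return np.array([0.5 * rho + dev, 0.5 * rho - dev])

def bgk_step(f):                   # collide (BGK), then stream f_± to x±1
    fpost = f + (feq(f[0] + f[1]) - f) / tau
    return np.array([np.roll(fpost[0], 1), np.roll(fpost[1], -1)])

rho0 = 1.0 + 0.01 * np.cos(k * x)
f = feq(rho0)
mass0 = (f[0] + f[1]).sum()

def amplitude(f):                  # Fourier amplitude of the seeded mode
    return 2.0 / N * np.dot(f[0] + f[1] - 1.0, np.cos(k * x))

for _ in range(200):               # discard the rapidly decaying kinetic transient
    f = bgk_step(f)
a1 = amplitude(f)
T = 5000
for _ in range(T):
    f = bgk_step(f)
a2 = amplitude(f)

assert abs((f[0] + f[1]).sum() - mass0) < 1e-9   # total density conserved exactly
nu_measured = np.log(a1 / a2) / (k ** 2 * T)
print(nu_measured)                 # close to tau - 1/2 = 0.2
```

The measured value differs from $\tau-1/2$ only by ${\mathcal O}(k^2)$ corrections to the dispersion relation, so the agreement improves as the lattice is refined.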
\begin{figure} \centering \mbox{ \includegraphics[bbllx=0,bblly=0,bburx=300,bbury=210,width=2.0truein]{burg0.eps} \includegraphics[bbllx=0,bblly=0,bburx=300,bbury=210,width=2.0truein]{burg128.eps}} \mbox{ \includegraphics[bbllx=0,bblly=0,bburx=300,bbury=210,width=2.0truein]{burg256.eps} \includegraphics[bbllx=0,bblly=0,bburx=300,bbury=210,width=2.0truein]{burg512.eps}} \mbox{ \includegraphics[bbllx=0,bblly=0,bburx=300,bbury=210,width=2.0truein]{burg1024.eps} \includegraphics[bbllx=0,bblly=0,bburx=300,bbury=210,width=2.0truein]{burg2048.eps}} \mbox{ \includegraphics[bbllx=0,bblly=0,bburx=300,bbury=210,width=2.0truein]{burg4096.eps} \includegraphics[bbllx=0,bblly=0,bburx=300,bbury=210,width=2.0truein]{burg8192.eps}} \caption{Simulation of lattice BGK model for Burgers' equation. The formation and subsequent decay of the shock are evident. Note that the shock width is on the order of a lattice spacing, calling into question the very first step of the Chapman-Enskog analysis, namely the assumption that spatial gradients are small.} \label{fig:burgers} \end{figure} Before going on to the exact analysis, it is worth making some general observations about the Chapman-Enskog analysis for this model: \begin{itemize} \item We solved the kinetic equation only to first order, but used that solution in the hydrodynamic equation at second order. This interlacing of orders is characteristic of the Chapman-Enskog analysis. \item The second-order accuracy of the lattice BGK equation is not at all evident in the final result obtained. If we were to carry on to the next order -- which would involve solving the kinetic equation to second order -- it is not at all clear that we would not find corrections to the hydrodynamic equation obtained. \item The transport coefficient is equal to $\tau-1/2$. The first term, $\tau$, arose from the gradient correction to the local equilibrium distribution function. 
The second term, $-1/2$, came from the Taylor expansion of the left-hand side of the kinetic equation at order two, and is an artifact of the discrete nature of the model. Note that the combination $\tau-1/2$ is manifest in Eq.~(\ref{eq:PIform}). \end{itemize} \subsection{Exact analysis} The exact analysis that is the point of this paper comes from projecting the kinetic equation onto hydrodynamic and kinetic subspaces, solving for the kinetic field $\xi(x,t)$ as though the hydrodynamic field $\rho(x,t)$ were a known function of position and time, and then using this solution to obtain a closed dynamical equation for $\rho(x,t)$. This general technique has been known for a long time under a variety of different names. In physics, it is sometimes called the Mori-Zwanzig projection operator formalism~\cite{bib:zwanzig}. In linear algebra, it is related to the Schur complement~\cite{bib:schur}. To the best of our knowledge, however, it has not heretofore been applied to the problem of obtaining exact solutions of the lattice BGK equation. We begin by using Eqs.~(\ref{eq:forward},\ref{eq:bgkBurgers},\ref{eq:backward}) to write coupled evolution equations for $\rho$ and $\xi$. 
After a bit of straightforward algebra, we obtain \begin{eqnarray} \lefteqn{\rho(x,t+1) = +\frac{1}{2}\left[\rho(x+1,t)+\rho(x-1,t)\right]- \frac{\tau-1}{2\tau}\left[\xi(x+1,t)-\xi(x-1,t)\right]}\nonumber\\ & & \;\;\;\;\;\;\;\;\;\;\;\;\;\; + \frac{\kappa}{2\tau} \left[ -\rho(x+1,t) \left( 1-\frac{\rho(x+1,t)}{2} \right) + \rho(x-1,t) \left( 1-\frac{\rho(x-1,t)}{2} \right) \right]\label{eq:rhoProj} \\ \lefteqn{\xi(x,t+1) = -\frac{1}{2}\left[\rho(x+1,t)-\rho(x-1,t)\right]+ \frac{\tau-1}{2\tau}\left[\xi(x+1,t)+\xi(x-1,t)\right]}\nonumber\\ & & \;\;\;\;\;\;\;\;\;\;\;\;\;\; + \frac{\kappa}{2\tau} \left[ +\rho(x+1,t) \left( 1-\frac{\rho(x+1,t)}{2} \right) + \rho(x-1,t) \left( 1-\frac{\rho(x-1,t)}{2} \right) \right].\label{eq:piProj} \end{eqnarray} These two dynamical equations, for the hydrodynamic and kinetic fields respectively, taken together are equivalent to the original kinetic equation. The equation for the kinetic field may be written in the suggestive form \begin{equation} \xi(x,t+1) = \frac{\tau-1}{2\tau}\left[\xi(x+1,t)+\xi(x-1,t)\right]+Q(x,t), \label{eq:dhe} \end{equation} where we have grouped together everything involving the hydrodynamic field into a single effective source term, defined by \begin{eqnarray} Q(x,t) &:=& -\frac{1}{2} \left[\rho(x+1,t)-\rho(x-1,t)\right]\nonumber\\ & & +\frac{\kappa}{2\tau} \left[ \rho(x+1,t)\left(1-\frac{\rho(x+1,t)}{2}\right)+ \rho(x-1,t)\left(1-\frac{\rho(x-1,t)}{2}\right) \right]. \label{eq:qdef} \end{eqnarray} Eq.~(\ref{eq:dhe}) is a linear difference equation. For an infinite lattice, it has the exact solution \begin{equation} \xi(x,t) = \sum_{n=0}^{t-1}\sum_{m=0}^n\binom{n}{m} \left(\frac{\sigma}{2}\right)^n Q(x+2m-n,t-n-1) + \sum_{m=0}^{t}\binom{t}{m} \left(\frac{\sigma}{2}\right)^{t} \xi(x+2m-t,0), \label{eq:xiSol} \end{equation} where we have defined $\sigma :=1-1/\tau$, and we note that $|\sigma|<1$ for $\tau>1/2$.
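Because Eq.~(\ref{eq:dhe}) is exact rather than asymptotic, its solution can be verified to machine precision against a direct simulation. The sketch below (Python/NumPy; arbitrary test parameters; the kinetic field is initialized to zero, so the transient term is absent) records the density history, evaluates the source $Q$ from it, and reassembles the kinetic moment from the binomial sum. The term with $n$ applications of the homogeneous kernel samples $Q$ at time $t-1-n$, since $\xi(x,t)$ is determined by data at time $t-1$:

```python
import numpy as np
from math import comb

N, T, tau, kappa = 32, 12, 0.9, 0.2    # arbitrary test values
sigma = 1.0 - 1.0 / tau
x = np.arange(N)

def feq(rho):
    dev = 0.5 * kappa * rho * (1.0 - 0.5 * rho)
    return np.array([0.5 * rho + dev, 0.5 * rho - dev])

def bgk_step(f):
    fpost = f + (feq(f[0] + f[1]) - f) / tau
    return np.array([np.roll(fpost[0], 1), np.roll(fpost[1], -1)])

def flux(rho):
    return rho * (1.0 - 0.5 * rho)

def Q(rho):                            # effective source term
    return (-0.5 * (np.roll(rho, -1) - np.roll(rho, 1))
            + kappa / (2 * tau) * (flux(np.roll(rho, -1)) + flux(np.roll(rho, 1))))

rho0 = 1.0 + 0.2 * np.cos(2 * np.pi * x / N)
f = np.array([rho0 / 2, rho0 / 2])     # xi(x,0) = 0: the transient term vanishes
rho_hist = [rho0]
for _ in range(T):
    f = bgk_step(f)
    rho_hist.append(f[0] + f[1])
xi_sim = f[0] - f[1]                   # kinetic moment at time T, from simulation

# Binomial sum: n homogeneous steps act on the source evaluated at time T-1-n.
xi_formula = np.zeros(N)
for n in range(T):
    Qn = Q(rho_hist[T - 1 - n])
    for m in range(n + 1):
        xi_formula += comb(n, m) * (sigma / 2) ** n * np.roll(Qn, -(2 * m - n))

print(np.max(np.abs(xi_sim - xi_formula)))   # machine-precision residual
```

The periodic lattice used here is harmless: the recursion is shift-invariant, so the same sum holds with the spatial shifts taken modulo $N$.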
The first term above describes the excitation of the kinetic field due to gradients of the hydrodynamic field. The second term is a transient, due to the initial conditions. \subsection{Self-consistent hydrodynamic difference equation} Supposing, for the time being, that the kinetic field is initialized to zero~\footnote{It should be evident that this is an inessential assumption that we make here only for the sake of simplicity of presentation.}, or that we have waited long enough for transient behavior to be unimportant, we can insert the first term on the right-hand side of Eq.~(\ref{eq:xiSol}) into the dynamical equation for $\rho$ and rearrange to obtain the nonlinear difference equation \begin{eqnarray} \lefteqn{\rho(x,t+1)-\rho(x,t) = +\frac{1}{2}\left[\rho(x+1,t)-2\rho(x,t)+\rho(x-1,t)\right]}\nonumber\\ & & -\frac{\kappa}{2\tau}\left\{ \rho(x+1,t)\left[1-\frac{\rho(x+1,t)}{2}\right] - \rho(x-1,t)\left[1-\frac{\rho(x-1,t)}{2}\right] \right\}\nonumber\\ & & +\frac{1}{2}\sum_{n=0}^{t-1}\sum_{m=0}^n\binom{n}{m} \left(\frac{\sigma}{2}\right)^{n+1}\nonumber\\ & & \phantom{\int\int} \left[\rho(x+2m-n+2,t-n-1)-2\rho(x+2m-n,t-n-1)+\rho(x+2m-n-2,t-n-1)\right] \nonumber\\ & & -\frac{\kappa}{2\tau}\sum_{n=0}^{t-1}\sum_{m=0}^n\binom{n}{m} \left(\frac{\sigma}{2}\right)^{n+1} \left\{ \rho(x+2m-n+2,t-n-1)\left[1-\frac{\rho(x+2m-n+2,t-n-1)}{2}\right] \right.\nonumber\\ & & \phantom{\int\int} \left. -\rho(x+2m-n-2,t-n-1)\left[1-\frac{\rho(x+2m-n-2,t-n-1)}{2}\right] \right\}. \label{eq:exact1} \end{eqnarray} The remarkable thing about this result is that no essential approximations have been made in its derivation. It is an exact difference equation that must be obeyed by a hydrodynamic field $\rho(x,t)$ satisfying the lattice BGK equation.
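The closure can be tested directly: given only the density history, the right-hand side must reproduce $\rho(x,t+1)-\rho(x,t)$ exactly, with the memory sums sampling the density at time $t-1-n$. A sketch (Python/NumPy, arbitrary test parameters, kinetic field initialized to zero):

```python
import numpy as np
from math import comb

N, T, tau, kappa = 32, 10, 0.8, 0.3    # arbitrary test values
sigma = 1.0 - 1.0 / tau
x = np.arange(N)

def feq(rho):
    dev = 0.5 * kappa * rho * (1.0 - 0.5 * rho)
    return np.array([0.5 * rho + dev, 0.5 * rho - dev])

def bgk_step(f):
    fpost = f + (feq(f[0] + f[1]) - f) / tau
    return np.array([np.roll(fpost[0], 1), np.roll(fpost[1], -1)])

def flux(rho):
    return rho * (1.0 - 0.5 * rho)

rho0 = 1.0 + 0.2 * np.sin(2 * np.pi * x / N)
f = np.array([rho0 / 2, rho0 / 2])     # kinetic field initialized to zero
hist = [rho0]
for _ in range(T + 1):
    f = bgk_step(f)
    hist.append(f[0] + f[1])

rho_t = hist[T]
# Instantaneous terms of the difference equation at time T:
rhs = (0.5 * (np.roll(rho_t, -1) - 2 * rho_t + np.roll(rho_t, 1))
       - kappa / (2 * tau) * (flux(np.roll(rho_t, -1)) - flux(np.roll(rho_t, 1))))
# Memory terms: n homogeneous steps act on the density at time T-1-n.
for n in range(T):
    r = hist[T - 1 - n]
    for m in range(n + 1):
        w = comb(n, m) * (sigma / 2) ** (n + 1)
        s = 2 * m - n
        rhs += 0.5 * w * (np.roll(r, -(s + 2)) - 2 * np.roll(r, -s)
                          + np.roll(r, -(s - 2)))
        rhs -= kappa / (2 * tau) * w * (flux(np.roll(r, -(s + 2)))
                                        - flux(np.roll(r, -(s - 2))))

print(np.max(np.abs((hist[T + 1] - rho_t) - rhs)))   # exact closure, up to roundoff
```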
\subsection{Recovery of a hydrodynamic differential equation} Examination of Fig.~\ref{fig:burgers} indicates that the hydrodynamic field changes reasonably slowly in time so that a Taylor expansion in time is justified, but that it can change very rapidly in space after the onset of the shock, rendering a spatial Taylor expansion of questionable validity. For this reason, we begin by Taylor expanding in time only, incurring an error of at most second order, \begin{eqnarray} \lefteqn{ \partial_t\rho(x,t) = +\frac{1}{2}\left[\rho(x+1,t)-2\rho(x,t)+\rho(x-1,t)\right]}\nonumber\\ & & -\frac{\kappa}{2\tau}\left\{ \rho(x+1,t)\left[1-\frac{\rho(x+1,t)}{2}\right] - \rho(x-1,t)\left[1-\frac{\rho(x-1,t)}{2}\right] \right\}\nonumber\\ & & +\frac{1}{2}\sum_{n=0}^{t-1}\sum_{m=0}^n\binom{n}{m} \left(\frac{\sigma}{2}\right)^{n+1}\nonumber\\ & & \phantom{\int\int} \left[\rho(x+2m-n+2,t)-2\rho(x+2m-n,t)+\rho(x+2m-n-2,t)\right] \nonumber\\ & & -\frac{\kappa}{2\tau}\sum_{n=0}^{t-1}\sum_{m=0}^n\binom{n}{m} \left(\frac{\sigma}{2}\right)^{n+1} \left\{ \rho(x+2m-n+2,t)\left[1-\frac{\rho(x+2m-n+2,t)}{2}\right] \right.\nonumber\\ & & \phantom{\int\int} \left. -\rho(x+2m-n-2,t)\left[1-\frac{\rho(x+2m-n-2,t)}{2}\right] \right\}+{\mathcal O}\left(\epsilon^2\right). \label{eq:exact2} \end{eqnarray} This is a second-order accurate set of coupled discrete-space, continuous-time equations describing the hydrodynamics of the model. To proceed to a spatiotemporal partial differential equation, we may Taylor expand Eq.~(\ref{eq:exact2}) about the spatial point $x$, retaining spatial derivatives to second order. Because of the symmetry of the differences, it is straightforward to see that the error incurred is of second order. 
The sum over $m$ is then a binomial series and that over $n$ is a finite geometric series, yielding \begin{equation} \partial_t\rho+ \kappa\left(1-\sigma^{t+1}\right) \left(1-\rho\right)\partial_x\rho = \left[\tau\left(1-\sigma^{t+1}\right)-\frac{1}{2}\right]\partial_x^2\rho + {\mathcal O}\left(\epsilon^2\right). \end{equation} Ignoring the terms that decay exponentially in time (note $|\sigma|<1$), we recover Eq.~(\ref{eq:burgers2}). Before going on to consider the exact analysis for more general lattice BGK equations, we offer some preliminary observations: \begin{itemize} \item The usual Chapman-Enskog procedure begins with a Taylor series expansion of the kinetic equation in difference form, and then assembles continuum hydrodynamic equations from that series. This method, by contrast, first derives exact hydrodynamic equations in difference form, and only then (optionally) Taylor expands the results to obtain differential equations. \item Closely related to the above point is the idea that the hydrodynamic difference equation first obtained is not expanded in Knudsen number, and is therefore expected to hold in limits that would not ordinarily be considered ``hydrodynamic.'' Such limits may include situations with steep spatial gradients or long mean-free paths. \item The method yields an exact hydrodynamic equation in difference form that can be Taylor expanded to yield the hydrodynamic differential equation. The second-order accurate nature of this expansion is manifest. \item The method works only for lattice BGK equations, and relies on the fact that the equilibrium distribution function is a function of the conserved quantities only. (If this were not the case, Eq.~(\ref{eq:dhe}) would not be linear in the kinetic field, so we would not be able to solve it exactly.)
\item Note that as $t\rightarrow\infty$, the effect of the last two terms on the right-hand side of Eq.~(\ref{eq:exact2}) is to simply alter the coefficients in front of -- or ``renormalize'' -- the other terms in the equation. The penultimate term evaluates to $C\rho_{xx}$ so it renormalizes the diffusion term, and the last term is $-D\kappa\left(1-\rho\right)\rho_x$ so it renormalizes the advection term. Carrying out the sums gives $C=\sigma\tau$ and $D=\sigma$, whence \begin{equation} \partial_t\rho = \frac{1}{2}\rho_{xx} -\frac{1}{\tau}\partial_x\left[\kappa\rho\left(1-\frac{\rho}{2}\right)\right] +\sigma\tau\rho_{xx} -\sigma\partial_x\left[\kappa\rho\left(1-\frac{\rho}{2}\right)\right], \end{equation} from which Eq.~(\ref{eq:burgers2}) follows. \end{itemize} \section{General procedure} \subsection{Projection operators} We now suppose that we have a $b$-component discrete-velocity distribution function $f_j({\mybold{r}},t)$, where $j\in\{1,\ldots,b\}$, where ${\mybold{r}}\in{\mathcal L}$ is a point on lattice ${\mathcal L}$, and where $t\in{\mathbb Z}^+$ is the time. The $b_h$ hydrodynamic variables are obtained by projection with the $b_h\times b$ matrix $H$, \begin{eqnarray} \rho_\alpha &=& \sum_j^b \tensor{H}{\alpha}{j} f_j,\\ \noalign{\noindent \mbox{and the $b_k=b-b_h$ kinetic variables are obtained by projection with the $b_k\times b$ matrix $K$,}} \xi_\mu &=& \sum_j^b \tensor{K}{\mu}{j} f_j. \end{eqnarray} The rows of $H$ and $K$ are linearly independent, so that knowledge of all the hydrodynamic and kinetic variables is sufficient to reconstruct the distribution function, \begin{equation} f_j = \sum_\beta^{b_h} \tensor{A}{j}{\beta}\rho_\beta + \sum_\nu^{b_k} \tensor{B}{j}{\nu}\xi_\nu.
\label{eq:reconstruct} \end{equation} It is easily verified that the $b\times b_h$ matrix $A$ and the $b\times b_k$ matrix $B$ obey the relations \begin{eqnarray} \sum_j^b \tensor{H}{\alpha}{j}\tensor{A}{j}{\beta} = \tensor{\delta}{\alpha}{\beta} & & \sum_j^b \tensor{H}{\alpha}{j}\tensor{B}{j}{\nu} = 0 \label{eq:ida}\\ \sum_j^b \tensor{K}{\mu}{j}\tensor{A}{j}{\beta} = 0 & & \sum_j^b \tensor{K}{\mu}{j}\tensor{B}{j}{\nu} = \tensor{\delta}{\mu}{\nu}, \end{eqnarray} and \begin{equation} \tensor{\delta}{j}{i} = \sum_\beta^{b_h} \tensor{A}{j}{\beta}\tensor{H}{\beta}{i} + \sum_\nu^{b_k} \tensor{B}{j}{\nu}\tensor{K}{\nu}{i}. \label{eq:idc} \end{equation} In the above presentation, we have adopted the convention of using Greek letters from the beginning of the alphabet ($\alpha,\beta,\ldots$) to label the hydrodynamic variables, from the middle of the alphabet ($\mu,\nu,\ldots$) to label the kinetic variables, and Latin letters ($j,k,\ldots$) to label the distribution function components. We shall adhere to this notational convention as closely as possible in the forthcoming development. \begin{example} \label{ex:triangular} A triangular grid in two spatial dimensions ($D=2$) has the $b=6$ lattice vectors \begin{equation} {\mybold{c}}_j := {\mybold{e}}_x\cos\left(\frac{2\pi j}{6}\right) + {\mybold{e}}_y\sin\left(\frac{2\pi j}{6}\right). \end{equation} In the widely adopted nomenclature for lattice Boltzmann models, this is called the D2Q6 model~\footnote{We note in passing that the D2Q6 model is no longer extensively used for two-dimensional lattice BGK simulations of fluids. It has been abandoned in favor of the so-called D2Q9 lattice which may be implemented on a Cartesian grid. We use the D2Q6 model in this example only for the sake of simplicity, since it allows us to exhibit matrices of dimension six rather than nine. 
It should be clear that there is nothing preventing this method from being applied to the D2Q9 lattice, or even to the much larger lattices used in modern lattice-BGK simulations.}. If mass and momentum are conserved, the hydrodynamic and kinetic projection operators may be taken to be \begin{equation} H = \left[ \begin{array}{rrrrrr} 1 & 1 & 1 & 1 & 1 & 1\\ 1 & \slantfrac{1}{2} & -\slantfrac{1}{2} & -1 & -\slantfrac{1}{2} & \slantfrac{1}{2}\\ 0 & \slantfrac{\sqrt{3}}{2} & \slantfrac{\sqrt{3}}{2} & 0 & -\slantfrac{\sqrt{3}}{2} & -\slantfrac{\sqrt{3}}{2} \end{array} \right]\phantom{,} \label{eq:H} \end{equation} and \begin{equation} K = \left[ \begin{array}{rrrrrr} 1 & -1 & 1 & -1 & 1 & -1\\ 1 & -\slantfrac{1}{2} & -\slantfrac{1}{2} & 1 & -\slantfrac{1}{2} & -\slantfrac{1}{2}\\ 0 & \slantfrac{\sqrt{3}}{2} & -\slantfrac{\sqrt{3}}{2} & 0 & \slantfrac{\sqrt{3}}{2} & -\slantfrac{\sqrt{3}}{2} \end{array} \right], \end{equation} respectively. Note that the first row of $H$ corresponds to the mass density $\rho$, while the second and third rows correspond to the momentum density ${\mybold{\pi}}$. Collectively, we refer to the conserved densities as ${\mybold{\rho}}=(\rho,{\mybold{\pi}})$. 
The corresponding reconstruction matrices are then \begin{equation} \begin{array}{ccc} A = \left[ \begin{array}{rrr} \slantfrac{1}{6} & \slantfrac{1}{3} & 0\\ \slantfrac{1}{6} & \slantfrac{1}{6} & \slantfrac{\sqrt{3}}{6}\\ \slantfrac{1}{6} & -\slantfrac{1}{6} & \slantfrac{\sqrt{3}}{6}\\ \slantfrac{1}{6} & -\slantfrac{1}{3} & 0\\ \slantfrac{1}{6} & -\slantfrac{1}{6} & -\slantfrac{\sqrt{3}}{6}\\ \slantfrac{1}{6} & \slantfrac{1}{6} & -\slantfrac{\sqrt{3}}{6}\\ \end{array} \right] & & B = \left[ \begin{array}{rrr} \slantfrac{1}{6} & \slantfrac{1}{3} & 0\\ -\slantfrac{1}{6} & -\slantfrac{1}{6} & \slantfrac{\sqrt{3}}{6}\\ \slantfrac{1}{6} & -\slantfrac{1}{6} & -\slantfrac{\sqrt{3}}{6}\\ -\slantfrac{1}{6} & \slantfrac{1}{3} & 0\\ \slantfrac{1}{6} & -\slantfrac{1}{6} & \slantfrac{\sqrt{3}}{6}\\ -\slantfrac{1}{6} & -\slantfrac{1}{6} & -\slantfrac{\sqrt{3}}{6}\\ \end{array} \right] \end{array} \label{eq:AB} \end{equation} The identities of Eqs.~(\ref{eq:ida}) through (\ref{eq:idc}) are then readily verified. \label{ex:tri} \end{example} \subsection{General form of the lattice BGK equation} The general form of the lattice BGK equation, Eq.~(\ref{eq:bgk}), may be rewritten \begin{equation} f_j({\mybold{r}},t+1) = \sigma f_j({\mybold{r}}-{\mybold{c}}_j,t) + \left(1-\sigma\right) f^{(\mbox{\tiny eq})}_j\left({\mybold{\rho}}\left({\mybold{r}}-{\mybold{c}}_j,t\right)\right), \end{equation} where $\sigma := 1-1/\tau$ is defined as before. Note that the equilibrium distribution function $f^{(\mbox{\tiny eq})}$ is allowed to depend only on the hydrodynamic moments ${\mybold{\rho}}$. In what follows, it shall prove useful to write the equilibrium distribution in the form \begin{equation} f^{(\mbox{\tiny eq})}_j({\mybold{\rho}})=\sum_\beta^{b_h}\tensor{A}{j}{\beta}\rho_\beta + \xi^{(\mbox{\tiny eq})}_j({\mybold{\rho}}). 
\end{equation} Comparing this with Eq.~(\ref{eq:reconstruct}), we see that $\xi^{(\mbox{\tiny eq})}_j({\mybold{\rho}})$ is the kinetic portion of the equilibrium distribution function. \begin{example} For a fluid with ${\mybold{\rho}}=(\rho,{\mybold{\pi}})$, the form generally used for this dependence is the Mach-expanded distribution \begin{equation} f^{(\mbox{\tiny eq})}_j({\mybold{\rho}}) = W_j\left[\rho + \frac{1}{c_s^2}{\mybold{\pi}}\cdot{\mybold{c}}_j + \frac{1}{2c_s^4\rho}{\mybold{\pi}}{\mybold{\pi}} : \left({\mybold{c}}_j{\mybold{c}}_j - c_s^2 I_2\right) \right], \label{eq:feq} \end{equation} where the $W_j$ are weights associated with each direction, $I_2$ is the rank-two unit tensor and $c_s$ is the sound speed defined by the isotropy requirement \begin{equation} \sum_j^b W_j{\mybold{c}}_j{\mybold{c}}_j = c_s^2 I_2\sum_j^b W_j. \end{equation} In particular, for the D2Q6 lattice of Example~\ref{ex:tri}, it may be verified that $W_j=1/6$ (so that $\sum_j^b f^{(\mbox{\tiny eq})}_j=\rho$) and $c_s=1/\sqrt{2}$. The first two terms of Eq.~(\ref{eq:feq}) are then the hydrodynamic portion of the equilibrium distribution function, and \begin{equation} \xi^{(\mbox{\tiny eq})}_j({\mybold{\rho}}) = \frac{W_j}{2c_s^4\rho}{\mybold{\pi}}{\mybold{\pi}} : \left({\mybold{c}}_j{\mybold{c}}_j - c_s^2 I_2\right) \end{equation} is the kinetic portion.
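All of the claims in the two examples above are mechanically checkable. The sketch below (Python/NumPy) transcribes $H$, $K$, $A$, and $B$ from Eqs.~(\ref{eq:H}) through (\ref{eq:AB}), verifies the projection identities of Eqs.~(\ref{eq:ida}) through (\ref{eq:idc}), and confirms the isotropy requirement and the moments of the Mach-expanded equilibrium with the normalization $W_j=1/6$; the test momentum is an arbitrary small value:

```python
import numpy as np

s = np.sqrt(3) / 2
# Projection and reconstruction matrices of the D2Q6 example:
H = np.array([[1, 1, 1, 1, 1, 1],
              [1, .5, -.5, -1, -.5, .5],
              [0, s, s, 0, -s, -s]])
K = np.array([[1, -1, 1, -1, 1, -1],
              [1, -.5, -.5, 1, -.5, -.5],
              [0, s, -s, 0, s, -s]])
r = np.sqrt(3) / 6
A = np.array([[1/6,  1/3, 0], [1/6,  1/6,  r], [1/6, -1/6,  r],
              [1/6, -1/3, 0], [1/6, -1/6, -r], [1/6,  1/6, -r]])
B = np.array([[1/6,  1/3, 0], [-1/6, -1/6,  r], [1/6, -1/6, -r],
              [-1/6, 1/3, 0], [1/6,  -1/6,  r], [-1/6, -1/6, -r]])

assert np.allclose(H @ A, np.eye(3)) and np.allclose(K @ B, np.eye(3))
assert np.allclose(H @ B, 0) and np.allclose(K @ A, 0)
assert np.allclose(A @ H + B @ K, np.eye(6))   # completeness

# D2Q6 velocities, in the same column order as H (rows of H are 1, c_x, c_y):
c = H[1:].T
W, cs2 = np.full(6, 1 / 6), 0.5

# Isotropy requirement: sum_j W_j c_j c_j = c_s^2 I_2 sum_j W_j
assert np.allclose(np.einsum('j,ja,jb->ab', W, c, c), cs2 * W.sum() * np.eye(2))

def feq(rho, pi):                      # Mach-expanded equilibrium distribution
    cc = np.einsum('ja,jb->jab', c, c) - cs2 * np.eye(2)
    return W * (rho + (c @ pi) / cs2
                + np.einsum('a,b,jab->j', pi, pi, cc) / (2 * cs2**2 * rho))

rho, pi = 1.3, np.array([0.02, -0.01])  # arbitrary test values
f = feq(rho, pi)
assert np.isclose(f.sum(), rho)         # equilibrium carries the right mass...
assert np.allclose(c.T @ f, pi)         # ...and momentum
print("all identities verified")
```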
\end{example} By means of the projection operators defined in the previous subsection, it is straightforward to decompose this into coupled evolution equations for the hydrodynamic and kinetic moments, \begin{eqnarray} \rho_\alpha({\mybold{r}},t+1) &=& \sigma \sum_j^b \tensor{H}{\alpha}{j} \left[ \sum_\beta^{b_h} \tensor{A}{j}{\beta}\rho_\beta({\mybold{r}}-{\mybold{c}}_j,t) + \sum_\nu^{b_k} \tensor{B}{j}{\nu}\xi_\nu({\mybold{r}}-{\mybold{c}}_j,t)\right]\nonumber\\ & & + \left(1-\sigma\right)\sum_j^b \tensor{H}{\alpha}{j}\,f^{(\mbox{\tiny eq})}_j\left({\mybold{\rho}}\left({\mybold{r}}-{\mybold{c}}_j,t\right)\right) \label{eq:rhoGen}\\ \noalign{\noindent \mbox{and}} \xi_\mu({\mybold{r}},t+1) &=& \sigma \sum_j^b \tensor{K}{\mu}{j} \left[ \sum_\beta^{b_h} \tensor{A}{j}{\beta}\rho_\beta({\mybold{r}}-{\mybold{c}}_j,t) + \sum_\nu^{b_k} \tensor{B}{j}{\nu}\xi_\nu({\mybold{r}}-{\mybold{c}}_j,t)\right]\nonumber\\ & & + \left(1-\sigma\right)\sum_j^b \tensor{K}{\mu}{j}\,f^{(\mbox{\tiny eq})}_j\left({\mybold{\rho}}\left({\mybold{r}}-{\mybold{c}}_j,t\right)\right), \label{eq:piGen} \end{eqnarray} respectively. Eqs.~(\ref{eq:rhoGen}) and (\ref{eq:piGen}) are the generalizations of Eqs.~(\ref{eq:rhoProj}) and (\ref{eq:piProj}), respectively. The equation for the kinetic modes may be written more succinctly as \begin{equation} \xi_\mu({\mybold{r}},t+1) = \sigma \sum_{\nu}^{b_k}\sum_j^b \left( \tensor{K}{\mu}{j}\tensor{B}{j}{\nu}\right)\xi_\nu({\mybold{r}}-{\mybold{c}}_j,t) + Q_\mu({\mybold{r}},t), \label{eq:kinGen} \end{equation} where we have defined \begin{equation} Q_\mu({\mybold{r}},t) := \sum_{\beta}^{b_h}\sum_j^b \left(\tensor{K}{\mu}{j}\tensor{A}{j}{\beta}\right)\rho_\beta({\mybold{r}}-{\mybold{c}}_j,t) + \left(1-\sigma\right)\sum_j^b \tensor{K}{\mu}{j}\,\xi^{(\mbox{\tiny eq})}_j\left({\mybold{\rho}}\left({\mybold{r}}-{\mybold{c}}_j,t\right)\right). 
\label{eq:qdefg} \end{equation} Eqs.~(\ref{eq:kinGen}) and (\ref{eq:qdefg}) are the generalizations of Eqs.~(\ref{eq:dhe}) and (\ref{eq:qdef}), respectively. \subsection{Exact analysis in the general case} To proceed as in the example, it is now necessary to find an exact solution to the linear equation Eq.~(\ref{eq:kinGen}) for the kinetic modes, assuming that the hydrodynamic modes are known. Consider a labeled path of $n$ steps along lattice vectors, whose vertices are labeled by kinetic modes. More specifically, consider a path along the lattice beginning at position ${\mybold{r}}'$ and mode $\mu_n=\nu$ at time $t'=t-n$, and ending at position ${\mybold{r}}\in{\mathcal L}$ and mode $\mu_0=\mu$ at time $t$. One such path in the D2Q6 model is illustrated in Fig.~\ref{fig:path}. The set of all such $n$-step paths will be denoted by ${\mathcal P}^{(n)}({\mybold{r}}',\nu;{\mybold{r}},\mu)$. A path $p\in{\mathcal P}^{(n)}({\mybold{r}}',\nu;{\mybold{r}},\mu)$ is thus characterized by its sequence of indices ${\mybold j}(p):=\{j_n,\ldots,j_1\}$ of the lattice vectors traversed, and also the sequence of kinetic modes ${\mybold{\mu}}:=\{\mu_n,\ldots,\mu_0\}$ at the visited vertices. Note that it must be true that \begin{equation} \sum_{\ell=1}^n {\mybold{c}}_{j_\ell} = {\mybold{r}}-{\mybold{r}}', \label{eq:addrule} \end{equation} and \begin{eqnarray} \mu_0&=&\mu\\ \mu_n&=&\nu. \end{eqnarray} We use $\Sigma_p^{{\mathcal P}^{(n)}({\mybold{r}}',\nu;{\mybold{r}},\mu)}$ to denote the sum over all paths, $p\in{\mathcal P}^{(n)}({\mybold{r}}',\nu;{\mybold{r}},\mu)$. \begin{example} A path of length $n=5$ in the D2Q6 model of Example~\ref{ex:triangular} is shown in Fig.~\ref{fig:path}. The sequence of lattice vector indices pictured is ${\mybold j}(p)=\{3,2,5,1,3\}$. 
\end{example} To a path $p$ we assign the {\it weight} \begin{equation} \tensor{w^{(n)}}{\mu}{\nu}\left({\mybold j},{\mybold{\mu}}\right) = \prod_{\ell=1}^n \left(\tensor{K}{\mu_{\ell-1}}{j_\ell}\tensor{B}{j_\ell}{\mu_{\ell}}\right), \end{equation} where there is no understood summation on repeated indices. The exact solution to Eq.~(\ref{eq:kinGen}) is then \begin{eqnarray} \xi_\mu({\mybold{r}},t) &=& \sum_{n=0}^{t-1} \sum_{{\myboldsm{r}}'\in{\mathcal L}} \sum_\nu^{b_k} \sum_{p}^{{\mathcal P}^{(n)}({\myboldsm{r}}',\nu;{\myboldsm{r}},\mu)} \sigma^n\; \tensor{w^{(n)}}{\mu}{\nu}\left({\mybold j}(p),{\mybold{\mu}}(p)\right)\; Q_\nu({\mybold{r}}',t-n)\nonumber\\ & & + \sum_{{\myboldsm{r}}'\in{\mathcal L}} \sum_\nu^{b_k} \sum_{p}^{{\mathcal P}^{(t)}({\myboldsm{r}}',\nu;{\myboldsm{r}},\mu)} \sigma^{t}\; \tensor{w^{(t)}}{\mu}{\nu}\left({\mybold j}(p),{\mybold{\mu}}(p)\right)\; \xi_\nu({\mybold{r}}',0). \label{eq:xiSolGen} \end{eqnarray} Note that if ${\mybold{r}}'$ can not be connected to ${\mybold{r}}$ by a sequence of $n$ lattice vectors, perhaps because it is too far away, then ${\mathcal P}^{(t)}({\mybold{r}}',\mu_t;{\mybold{r}},\mu_0)$ is understood to be the null set. \begin{figure} \centering \mbox{\includegraphics[bbllx=0,bblly=100,bburx=1330,bbury=1147,width=6.0truein]{hexRanWalk3d.eps}} \caption{Path from ${\mybold{r}}'$ to ${\mybold{r}}$ in the D2Q6 model} \label{fig:path} \end{figure} \begin{example} Eq.~(\ref{eq:xiSolGen}) is the generalization of Eq.~(\ref{eq:xiSol}). To see this, note that our $D=1$ example for Burgers' equation had $b_h=b_k=1$ and $b=2$. 
The projection operators were \begin{eqnarray} H &=& \left[ \begin{array}{cc} +1 & +1 \end{array} \right]\\ K &=& \left[ \begin{array}{cc} +1 & -1 \end{array} \right], \end{eqnarray} and \begin{eqnarray} A &=& \left[ \begin{array}{c} +1/2\\ +1/2 \end{array} \right]\\ B &=& \left[ \begin{array}{c} +1/2\\ -1/2 \end{array} \right], \end{eqnarray} from which it follows that $\tensor{w^{(n)}({\mybold j},{\mybold{\mu}})}{1}{1}=2^{-n}$. The number of paths connecting ${\mybold{r}}'=x+2m-n$ and ${\mybold{r}}=x$ is then the binomial coefficient $\binom{n}{m}$, resulting in Eq.~(\ref{eq:xiSol}). \end{example} \subsection{Self-consistent hydrodynamic difference equations} As with our one-dimensional example, we suppose that the kinetic field is initialized to zero, and we insert Eq.~(\ref{eq:xiSolGen}) into Eq.~(\ref{eq:rhoGen}) and rearrange to obtain the {\it exact} hydrodynamic difference equation \begin{eqnarray} \lefteqn{\rho_\alpha({\mybold{r}},t+1)}\nonumber\\ &=& \phantom{+}\sigma \sum_j^b \sum_\beta^{b_h} \tensor{H}{\alpha}{j} \tensor{A}{j}{\beta}\rho_\beta({\mybold{r}}-{\mybold{c}}_j,t) + \left(1-\sigma\right)\sum_j^b \tensor{H}{\alpha}{j}\,f^{(\mbox{\tiny eq})}_j\left({\mybold{\rho}}\left({\mybold{r}}-{\mybold{c}}_j,t\right)\right) \nonumber\\ & & +\sum_j^b \sum_{\mu,\nu}^{b_k}\sum_{n=0}^{t-1}\sum_{{\myboldsm{r}}'\in{\mathcal L}}\; \sum_{p}^{{\mathcal P}^{(n)}({\myboldsm{r}}',\nu;{\myboldsm{r}}-{\myboldsm{c}}_j,\mu)} \sigma^{n+1}\; \tensor{H}{\alpha}{j} \tensor{B}{j}{\mu} \tensor{w^{(n)}}{\mu}{\nu}\left({\mybold j}(p),{\mybold{\mu}}(p)\right)\; Q_\nu({\mybold{r}}',t-n). \label{eq:rhoGenSol} \end{eqnarray} Eq.~(\ref{eq:rhoGenSol}) is an exact nonlinear hydrodynamic difference equation for the conserved densities, albeit in terms of a diagrammatic summation. We are now free to Taylor expand in either time or space, as appropriate to the phenomenon under consideration. 
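Before expanding, we note that the path-counting claim in the $D=1$ Burgers' example above, namely that $\binom{n}{m}$ paths of $n$ unit steps connect ${\mybold{r}}'=x+2m-n$ to ${\mybold{r}}=x$, is easy to verify by brute-force enumeration. The following sketch (illustrative only, not part of the derivation) enumerates all step sequences in $\{+1,-1\}^n$ and tallies them by net displacement:

```python
from itertools import product
from math import comb

def endpoint_counts(n):
    """Tally all n-step paths with unit steps in {+1, -1} by net displacement."""
    counts = {}
    for steps in product((+1, -1), repeat=n):
        d = sum(steps)  # displacement r - r' after n steps
        counts[d] = counts.get(d, 0) + 1
    return counts

n = 5
counts = endpoint_counts(n)
# A path with m steps of -1 and n - m steps of +1 has displacement n - 2m,
# so the number of such paths is the binomial coefficient C(n, m).
for m in range(n + 1):
    assert counts[n - 2 * m] == comb(n, m)
# With weight 2^{-n} per path, the weights over all endpoints sum to one.
assert sum(counts.values()) == 2 ** n
```

Together with the per-path weight $2^{-n}$ computed above, this reproduces the binomial factor appearing in Eq.~(\ref{eq:xiSol}).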
Taylor expansion of Eq.~(\ref{eq:rhoGenSol}) to second order in the time variable followed by a bit of rearranging yields the continuum-time, discrete-space hydrodynamic equation \begin{eqnarray} \lefteqn{\partial_t\rho_\alpha({\mybold{r}},t)}\nonumber\\ &=& \phantom{+}\sum_j^b \sum_\beta^{b_h} \tensor{H}{\alpha}{j} \tensor{A}{j}{\beta} \left[\rho_\beta({\mybold{r}}-{\mybold{c}}_j,t) - \rho_\beta({\mybold{r}},t)\right] \nonumber\\ & & + \left(1-\sigma\right)\sum_j^b \tensor{H}{\alpha}{j}\, \left[\xi^{(\mbox{\tiny eq})}_j\left({\mybold{\rho}}\left({\mybold{r}}-{\mybold{c}}_j,t\right)\right) - \xi^{(\mbox{\tiny eq})}_j\left({\mybold{\rho}}\left({\mybold{r}},t\right)\right)\right] \nonumber\\ & & +\sum_j^b \sum_{\mu,\nu}^{b_k}\sum_{n=0}^{t-1}\sum_{{\myboldsm{r}}'\in{\mathcal L}}\; \sum_{p}^{{\mathcal P}^{(n)}({\myboldsm{r}}',\nu;{\myboldsm{r}}-{\myboldsm{c}}_j,\mu)} \sigma^{n+1}\; \tensor{H}{\alpha}{j} \tensor{B}{j}{\mu} \tensor{w^{(n)}}{\mu}{\nu}\left({\mybold j}(p),{\mybold{\mu}}(p)\right)\; Q_\nu({\mybold{r}}',t). \label{eq:rhoGenSol2} \end{eqnarray} \subsection{Recovery of a hydrodynamic differential equation} Finally, to proceed to a hydrodynamic differential equation, we may Taylor expand in space, retaining terms to second order. While this step is not technically difficult, it is tedious enough to warrant splitting the calculation into three parts, corresponding to each of the three terms on the right-hand side of Eq.~(\ref{eq:rhoGenSol2}). We begin by defining some useful tensors in terms of the projection and reconstruction operators and the lattice vectors. 
\subsubsection{Some useful tensors} In preparation for the forthcoming analysis, it is useful to define the tensors \begin{eqnarray} \tensor{S}{\alpha}{\beta}(N) &:=& \sum_j^b\tensor{H}{\alpha}{j} \tensor{A}{j}{\beta}\bigotimes^N{\mybold{c}}_j\\ \tensor{T}{\alpha}{\beta}(N) &:=& \sum_j^b\tensor{H}{\alpha}{j} \frac{\partial\xi_j}{\partial\rho_\beta}\bigotimes^N{\mybold{c}}_j\\ \tensor{U}{\alpha}{\beta\gamma}(N) &:=& \sum_j^b\tensor{H}{\alpha}{j} \frac{\partial^2\xi_j}{\partial\rho_\beta\partial\rho_\gamma}\bigotimes^N{\mybold{c}}_j, \label{eq:uNdef} \end{eqnarray} where $\otimes^N$ denotes an $N$-fold outer product. If all of the conserved quantities are scalars, these tensors have $N$ spatial indices ranging from $1$ to $D$ in which they are completely symmetric; that is, all have spatial rank $N$. In addition, the ${\mybold{S}}(N)$ and ${\mybold{T}}(N)$ each have two hydrodynamic indices, $\alpha$ and $\beta$, ranging from $1$ to $b_h$, and the ${\mybold{U}}(N)$ have three such hydrodynamic indices. When it becomes necessary to refer to these tensors by their spatial indices, we adopt the convention of replacing the number $N$ in parentheses by the actual list of $N$ spatial indices. Thus, for example, we have \begin{equation} \tensor{S}{\alpha}{\beta}(\{a,b,c\}) = \sum_j^b\tensor{H}{\alpha}{j} \tensor{A}{j}{\beta}c_{ja}c_{jb}c_{jc}, \end{equation} where $c_{ja}$ is the $a$th spatial component of lattice vector ${\mybold{c}}_j$. Note that the value of $N$, namely $N=3$ in this case, can be inferred from the fact that there are three indices in the list. When listing components for $N=0$, we denote the empty list by $\{\}$. If one or more of the conserved quantities are spatial vectors, such as momentum, it is convenient to abuse notation by allowing the hydrodynamic indices to be either scalars or vectors. Each one that is a vector will increase the spatial rank of the tensors by one.
For example, if $\zeta$ is the index of a conserved scalar and ${\mybold{\eta}}$ is the index of a conserved vector, $\tensor{S}{\zeta}{\zeta}(N)$ is a completely symmetric tensor of spatial rank $N$, while $\tensor{S}{{\myboldsm{\eta}}}{\zeta}(N)$ is a tensor of spatial rank $N+1$ that is symmetric under interchange of $N$ of its indices, and $\tensor{S}{{\myboldsm{\eta}}}{{\myboldsm{\eta}}}(N)$ is a tensor of spatial rank $N+2$ that is also symmetric under interchange of $N$ of its indices. To specify components of these, it may be necessary to use nested subscripts. Thus, we have that \begin{equation} \tensor{U}{\eta_a}{\zeta\zeta}(\{b,c\}) = \sum_j^b\tensor{H}{\eta_a}{j} \frac{\partial^2\xi_j}{\partial\rho_\zeta\partial\rho_\zeta}c_{jb}c_{jc} \end{equation} is the $(a,b,c)$ component of a tensor of spatial rank three that is symmetric under interchange of its rightmost two indices. Note that the notation automatically ensures that the $N$ symmetric indices will be the rightmost indices. Finally, we also define alternative versions of the ${\mybold{S}}(N)$, ${\mybold{T}}(N)$ and ${\mybold{U}}(N)$ tensors, using the same names for economy of notation, but with upper hydrodynamic and lower kinetic indices, \begin{eqnarray} \tensor{S}{\mu}{\beta}(N) &:=& \sum_j^b\tensor{K}{\mu}{j} \tensor{A}{j}{\beta}\bigotimes^N{\mybold{c}}_j\\ \tensor{T}{\mu}{\beta}(N) &:=& \sum_j^b\tensor{K}{\mu}{j} \frac{\partial\xi_j}{\partial\rho_\beta}\bigotimes^N{\mybold{c}}_j\\ \tensor{U}{\mu}{\beta\gamma}(N) &:=& \sum_j^b\tensor{K}{\mu}{j} \frac{\partial^2\xi_j}{\partial\rho_\beta\partial\rho_\gamma}\bigotimes^N{\mybold{c}}_j. \end{eqnarray} All of the above-mentioned considerations about counting independent components likewise apply to these alternative versions. \begin{example} For the D2Q6 model in Example~\ref{ex:triangular}, supposing that mass and momentum are both conserved, the matrices $H$, $K$, $A$ and $B$ are given in Eqs.~(\ref{eq:H}) through (\ref{eq:AB}). 
If we denote~\footnote{Unfortunately, denoting mass by index $\rho$ and momentum by vector index ${\mybold{\pi}}$ violates our convention that indices representing conserved quantities should come from the beginning of the Greek alphabet. The standard use of these letters to represent mass and momentum density, however, was deemed to override this concern.} the hydrodynamic indices by $\alpha,\beta,\gamma\in\{\rho,{\mybold{\pi}}\}$, it is straightforward to compute the above tensors. Tabulations of the results for $S(N)$, $T(N)$ and $U(N)$ are shown in Tables~\ref{eq:sH}, \ref{eq:tH} and \ref{eq:uH}, respectively. We have not bothered listing entries that are anisotropic and/or not needed for the derivation of the hydrodynamic equations. \end{example} To assemble the hydrodynamic equations from Eq.~(\ref{eq:rhoGenSol2}), we label the terms on its right-hand side as \textcircled{1}, \textcircled{2} and \textcircled{3}, and consider each separately. \subsubsection{Evaluation of first term in Eq.~(\ref{eq:rhoGenSol2})} The first term on the right of Eq.~(\ref{eq:rhoGenSol2}) can be written \begin{eqnarray} \mbox{\textcircled{1}}_\alpha &:=& \sum_j^b \sum_\beta^{b_h} \tensor{H}{\alpha}{j} \tensor{A}{j}{\beta} \left[\rho_\beta({\mybold{r}}-{\mybold{c}}_j,t) - \rho_\beta({\mybold{r}},t)\right]\nonumber\\ &=& \sum_j^b \sum_\beta^{b_h} \tensor{H}{\alpha}{j} \tensor{A}{j}{\beta} \left[-{\mybold{c}}_j\cdot{\mybold{\nabla}}\rho_\beta({\mybold{r}},t) +\frac{1}{2}{\mybold{c}}_j{\mybold{c}}_j : {\mybold{\nabla}}\bfnabla\rho_\beta({\mybold{r}},t) +\cdots\right]\nonumber\\ &=& -\sum_\beta^{b_h}\tensor{S}{\alpha}{\beta}(1)\cdot{\mybold{\nabla}}\rho_\beta({\mybold{r}},t) + \frac{1}{2}\sum_\beta^{b_h}\tensor{S}{\alpha}{\beta}(2) : {\mybold{\nabla}}\bfnabla\rho_\beta({\mybold{r}},t)+\cdots \label{eq:FirstTerm} \end{eqnarray} In Appendix~\ref{sec:Afirst}, we evaluate this expression for the D2Q6 model. 
When doing so, we adopt incompressible scaling~\cite{bib:LandauLifshitzFluids}, wherein spatial gradients and Mach number are taken to be order $\epsilon$, and time derivatives and density fluctuations are taken to be order $\epsilon^2$. To leading order, we find that the $\rho$ and ${\mybold{\pi}}$ components of the above term are \begin{eqnarray} \mbox{\textcircled{1}}_\rho &=& -{\mybold{\nabla}}\cdot{\mybold{\pi}}\\ \noalign{\noindent \mbox{and}} \mbox{\textcircled{1}}_{\myboldsm{\pi}} &=& -\frac{1}{2}{\mybold{\nabla}}\rho+\frac{1}{8}\nabla^2{\mybold{\pi}}. \end{eqnarray} \subsubsection{Evaluation of second term in Eq.~(\ref{eq:rhoGenSol2})} Similar considerations can be applied to the second term on the right of Eq.~(\ref{eq:rhoGenSol2}) which may be written \begin{eqnarray} \mbox{\textcircled{2}}_\alpha &:=& \left(1-\sigma\right) \sum_j^b\tensor{H}{\alpha}{j} \left[\xi^{(\mbox{\tiny eq})}_j({\mybold{\rho}}({\mybold{r}}-{\mybold{c}}_j,t)) -\xi^{(\mbox{\tiny eq})}_j({\mybold{\rho}}({\mybold{r}},t))\right]\nonumber\\ &=& \left(1-\sigma\right) \sum_j^b\tensor{H}{\alpha}{j} \left[-{\mybold{c}}_j\cdot{\mybold{\nabla}}\xi^{(\mbox{\tiny eq})}_j({\mybold{\rho}}({\mybold{r}},t)) +\frac{1}{2}{\mybold{c}}_j{\mybold{c}}_j : {\mybold{\nabla}}\bfnabla\xi^{(\mbox{\tiny eq})}_j({\mybold{\rho}}({\mybold{r}},t)) +\cdots\right], \end{eqnarray} or \begin{equation} \frac{\mbox{\textcircled{2}}_\alpha}{1-\sigma} = -\sum_\beta^{b_h}\tensor{T}{\alpha}{\beta}(1)\cdot{\mybold{\nabla}}\rho_\beta +\frac{1}{2}\sum_\beta^{b_h}\tensor{T}{\alpha}{\beta}(2) : {\mybold{\nabla}}\bfnabla\rho_\beta +\frac{1}{2}\sum_{\beta,\gamma}^{b_h}\tensor{U}{\alpha}{\beta\gamma}(2) : \left({\mybold{\nabla}}\rho_\beta\right)\left({\mybold{\nabla}}\rho_\gamma\right)+\cdots \label{eq:SecondTerm} \end{equation} In Appendix~\ref{sec:Asecond}, we evaluate this for the D2Q6 model in the limit of incompressible scaling~\cite{bib:LandauLifshitzFluids}. 
To leading order, we find that the $\rho$ and ${\mybold{\pi}}$ components of the above term are \begin{eqnarray} \mbox{\textcircled{2}}_\rho &=& 0\\ \noalign{\noindent \mbox{and}} \mbox{\textcircled{2}}_{\myboldsm{\pi}} &=& -\frac{1}{2\rho}\left({\mybold{\pi}}\cdot{\mybold{\nabla}}{\mybold{\pi}} -\frac{1}{2}{\mybold{\nabla}}\left|{\mybold{\pi}}\right|^2\right). \end{eqnarray} \subsubsection{Evaluation of third term in Eq.~(\ref{eq:rhoGenSol2})} The third term on the right-hand side of Eq.~(\ref{eq:rhoGenSol2}) involves $Q_\mu({\mybold{r}},t)$ which is given by Eq.~(\ref{eq:qdefg}), so it is necessary to compute this first. We note that it may be expanded to second-order accuracy to obtain \begin{eqnarray} Q_\mu({\mybold{r}},t) &=& \sum_{\beta}^{b_h}\tensor{S}{\mu}{\beta}(0)\rho_\beta -\sum_{\beta}^{b_h}\tensor{S}{\mu}{\beta}(1)\cdot{\mybold{\nabla}}\rho_\beta +\frac{1}{2}\sum_{\beta}^{b_h}\tensor{S}{\mu}{\beta}(2):{\mybold{\nabla}}\bfnabla\rho_\beta+\cdots \nonumber\\ & & + \left(1-\sigma\right) \left[ \sum_j^{b}\tensor{K}{\mu}{j}\xi^{(\mbox{\tiny eq})}_j -\sum_{\beta}^{b_h}\tensor{T}{\mu}{\beta}(1)\cdot{\mybold{\nabla}}\rho_\beta +\frac{1}{2}\sum_{\beta}^{b_h}\tensor{T}{\mu}{\beta}(2):{\mybold{\nabla}}\bfnabla\rho_\beta \right.\nonumber\\ & & \left. +\frac{1}{2}\sum_{\beta,\gamma}^{b_h}\tensor{U}{\mu}{\beta\gamma}(2):\left({\mybold{\nabla}} \rho_\beta\right)\left({\mybold{\nabla}}\rho_\gamma\right) +\cdots \right]. \label{eq:QTerm} \end{eqnarray} In Appendix~\ref{sec:AQ}, we evaluate this for the D2Q6 model in the limit of incompressible scaling~\cite{bib:LandauLifshitzFluids}. 
To second order, we find that its $\rho$ and ${\mybold{\pi}}$ components are \begin{eqnarray} Q_\rho &=& \rho\\ Q_{\myboldsm{\pi}} &=& {\mybold{\pi}}-\frac{1}{2}{\mybold{\nabla}}\rho+\frac{1}{8}\nabla^2{\myboldsm{\pi}} -\left(\frac{1-\sigma}{2\rho}\right) \left({\mybold{\pi}}\cdot{\mybold{\nabla}}{\mybold{\pi}}-\frac{1}{2}{\mybold{\nabla}}\left|{\mybold{\pi}}\right|^2 \right). \end{eqnarray} As noted for the example of Burgers' equation, the effect of the diagrammatic sum as $t\rightarrow\infty$ and in the continuum limit is to apply a linear operator to these terms, thereby renormalizing other terms in the equation. The most general form for the third term in Eq.~(\ref{eq:rhoGenSol2}) is then \begin{equation} \mbox{\textcircled{3}}_\alpha = C_\alpha\left[ -\frac{1}{2}{\mybold{\nabla}}\rho+\frac{1}{8}\nabla^2{\mybold{\pi}} -\left(\frac{1-\sigma}{2\rho}\right) \left({\mybold{\pi}}\cdot{\mybold{\nabla}}{\mybold{\pi}}-\frac{1}{2}{\mybold{\nabla}}\left|{\mybold{\pi}}\right|^2 \right)\right] + D_\alpha{\mybold{\nabla}}\rho + E_\alpha\nabla^2{\mybold{\pi}}, \end{equation} where $C_\alpha$, $D_\alpha$ and $E_\alpha$ are determined by the diagrammatic series.
The results are \begin{equation} C_\rho = D_\rho = E_\rho = 0 \end{equation} and \begin{eqnarray} C_{\myboldsm{\pi}} &=& \frac{1+\sigma}{1-\sigma}\\ D_{\myboldsm{\pi}} &=& \frac{1}{2}\left(\frac{1+\sigma}{1-\sigma}\right)\\ E_{\myboldsm{\pi}} &=& \frac{1}{4}\left(\frac{\sigma}{1-\sigma}\right), \end{eqnarray} so the third term of Eq.~(\ref{eq:rhoGenSol2}) has no $\rho$ component, \begin{equation} \mbox{\textcircled{3}}_\rho = 0 \end{equation} and ${\mybold{\pi}}$ component equal to \begin{eqnarray} \mbox{\textcircled{3}}_{\myboldsm{\pi}} &=& \frac{1+\sigma}{1-\sigma}\left[ -\frac{1}{2}{\mybold{\nabla}}\rho+\frac{1}{8}\nabla^2{\mybold{\pi}} -\left(\frac{1-\sigma}{2\rho}\right) \left({\mybold{\pi}}\cdot{\mybold{\nabla}}{\mybold{\pi}} -\frac{1}{2}{\mybold{\nabla}}\left|{\mybold{\pi}}\right|^2 \right)\right]\nonumber\\ & & + \frac{1}{2}\left(\frac{1+\sigma}{1-\sigma}\right){\mybold{\nabla}}\rho + \frac{1}{4}\left(\frac{\sigma}{1-\sigma}\right)\nabla^2{\mybold{\pi}} \end{eqnarray} \subsubsection{Assembly of hydrodynamic equations} Armed with these terms, we may now assemble the full hydrodynamic equations, \begin{eqnarray} \rho_t &=& \mbox{\textcircled{1}}_\rho + \mbox{\textcircled{2}}_ \rho + \mbox{\textcircled{3}}_ \rho\\ {\mybold{\pi}}_t &=& \mbox{\textcircled{1}}_{\myboldsm{\pi}} + \mbox{\textcircled{2}}_{\myboldsm{\pi}} + \mbox{\textcircled{3}}_{\myboldsm{\pi}}. 
\end{eqnarray} Since $\rho_t$ may be ignored in the incompressible limit, the first of these reduces to \begin{equation} {\mybold{\nabla}}\cdot{\mybold{\pi}} = 0, \label{eq:ns1} \end{equation} while the second gives \begin{eqnarray} \partial_t{\mybold{\pi}} &=& \left(1+\frac{1+\sigma}{1-\sigma}\right) \left[ -\frac{1}{2}{\mybold{\nabla}}\rho +\frac{1}{8}\nabla^2{\mybold{\pi}} -\frac{1-\sigma}{2\rho} \left({\mybold{\pi}}\cdot{\mybold{\nabla}}{\mybold{\pi}}-\frac{1}{2}{\mybold{\nabla}}\left|{\mybold{\pi}}\right|^2\right) \right]\nonumber\\ & & + \frac{1}{2}\left(\frac{1+\sigma}{1-\sigma}\right){\mybold{\nabla}}\rho + \frac{1}{4}\left(\frac{\sigma}{1-\sigma}\right)\nabla^2{\mybold{\pi}} \end{eqnarray} This simplifies to yield \begin{equation} \partial_t{\mybold{\pi}} + \frac{1}{\rho}{\mybold{\pi}}\cdot{\mybold{\nabla}}{\mybold{\pi}} = -{\mybold{\nabla}} P +\nu\nabla^2{\mybold{\pi}}, \label{eq:ns2} \end{equation} where the pressure $P$ is given by the equation of state \begin{equation} P = \frac{\rho}{2}-\frac{\left|{\mybold{\pi}}\right|^2}{2\rho}, \end{equation} and the kinematic viscosity is \begin{equation} \nu = \frac{1}{2}\left(\tau - \frac{1}{2}\right). \end{equation} Eqs.~(\ref{eq:ns1}) and (\ref{eq:ns2}) are seen to be the Navier-Stokes equations of viscous, incompressible hydrodynamics. The expressions for the equation of state~\footnote{The equation of state includes a term that is clearly nonphysical because it depends on the hydrodynamic velocity. Great concern is often expressed about this term, but it is entirely misplaced and indeed reflects a serious misunderstanding of the nature of the incompressible limit, in which the form of the equation of state is entirely irrelevant and where the pressure is given by the solution to a Poisson equation. 
As long as one takes care to keep the Mach number small enough so that the density fluctuation scales as its square, the method is perfectly valid for simulating incompressible flow, the form of the equation of state notwithstanding.} and the viscosity are well known for the D2Q6 fluid~\cite{bib:SucciBook}. \section{Discussion} Diagrammatic methods often yield new physical insights, and this one is no exception. Since the $Q_\alpha$ are the driving terms in the linear equations for the kinetic modes $\xi_\nu$, we may think of the $Q_\alpha$ as the precise combination of hydrodynamic gradients that excite kinetic modes. Thus excited, the kinetic modes may propagate and couple with other kinetic modes. The diagrams trace the evolution of these kinetic excitations in space and time until they ultimately project, via the $\tensor{B}{j}{\nu}$ matrices, a contribution back to the hydrodynamic modes. Alternatively stated, the sum over diagrams gives the Green's function of Eq.~(\ref{eq:kinGen}) which governs the evolution of the kinetic modes. It is remarkable that the effect of these kinetic excitations is often nothing more than the renormalization of terms already present in the hydrodynamic equation. The diagrammatic sum is necessary to obtain the correct answer for the transport coefficients (advection, diffusion and viscosity in the above examples), but it is not necessary to obtain the general form of the hydrodynamic equation. In addition, principles of symmetry and covariance may be employed to reduce the diagrammatic sum to the determination of a few scalar quantities -- the $A$, $B$ and $C$ quantities in the above examples. \section{Conclusions} We have described a new method to derive hydrodynamic equations for lattice BGK fluids. It is qualitatively different from the usual Chapman-Enskog analysis, and superior insofar as it results in absolutely exact hydrodynamic equations. 
Additionally, while more demanding in terms of calculation, the method is arguably conceptually simpler than the Chapman-Enskog approach, easier to automate using symbolic mathematics software, and more transparently a description of the generation and propagation of kinetic modes in a moving fluid. We have illustrated this new methodology, first by presenting a simple lattice-BGK model for Burgers' equation, and second by presenting a more complex lattice-BGK model for a viscous, incompressible fluid. The final step in this method is the extraction of a diagrammatic sum, but we have shown that all that we really need is the asymptotic form of this sum for $n$ large, and also that the effect of this sum is often nothing more than the renormalization of hydrodynamic transport coefficients. Because the method relegates the Taylor expansion of the propagation operator to an optional step at the end of the analysis, it is possible to derive hydrodynamic equations that are either discrete or continuous in space and/or time. For the examples presented in this work, we showed the result of expanding in time but not space. For certain phenomena in transport theory that tend to be uniform in space but rapidly changing in time, such as spontaneous micelleization, it may be more appropriate to expand in space but not time. Future work will take up the application of this method to complex fluid phenomena, such as multiphase fluids described by the Shan-Chen model~\cite{bib:shanchen}. It is hoped that this method will yield accurate hydrodynamic equations in the spirit of Halperin and Hohenberg's Model H~\cite{bib:hh} for such complex fluids. \section*{Acknowledgments} This work was partially funded by ARO award number W911NF-04-1-0334, AFOSR award number FA9550410176, and facilitated by scientific visualization equipment funded by NSF award number 0619447. 
The lattice BGK model for Burgers' equation was worked out in preparation for a presentation given at the Consiglio Nazionale delle Ricerche (CNR) in Rome, Italy on 3-8 March 2008. Part of this work was conducted while visiting the School of Engineering at Peking University in November and December of 2007, and the Physique Nonlin\'{e}aire group at the Laboratoire de Physique Statistique of the \'{E}cole Normale Sup\'{e}rieure de Paris in April and early May of 2008. The author would like to thank Tufts University School of Arts and Sciences for his sabbatical and FRAC leave during academic year 2007-2008 when much of this work was conducted. Finally, the author is grateful to Jonas L\"{a}tt and Scott MacLachlan for helpful discussions. \bibliographystyle{plain}
\section{Introduction}\label{sec:introduction} A Markov random field (MRF) $X = \{X_i : i \in V \}$ is a collection of random variables on an undirected graph $G=(V,E)$, where the nodes\footnote{We use the terms \emph{nodes}, \emph{sites} and \emph{pixels} interchangeably.} in $V$ are the random variable indices and the edges in $E$ represent direct dependencies between the random variables \cite{wain:03b}, and is often proposed as a model for many sources of data, such as images. A family of MRFs on a graph $G$ is defined by a vector statistic $t$ having a component for each edge and each node. An individual MRF within this family is indicated by an exponential parameter vector $\theta$ whose components correspond to the components of $t$. Since there has been relatively little development of algorithms or theory for the compression of MRFs \cite{anas:82, kont:03, reyes2009t, reyes2009b, reyes2010, reyes2011, reyes2014}, we feel that this is an important problem to consider. In this paper we explore design tradeoffs of the lossless Reduced Cutset Coding method introduced in \cite{reyes2010}. Reduced Cutset Coding (RCC) is a two-stage algorithm for lossless compression of an MRF defined on an intractable graph, where tractability is with respect to Belief Propagation (BP) \cite{wain:03b,reyes2010,reyes2011}. The method consists, first, of suboptimal lossless encoding of a cutset $U\subset V$, chosen such that the subgraphs $G_U$ and $G_W$ induced by $U$ and $W\!\!=\!\!V\setminus U$, respectively, are tractable. The components of $X_U$ are encoded with Arithmetic Coding (AC) using BP to compute a {\em reduced MRF} coding distribution. A reduced MRF for $X_U$ is an MRF on the subgraph $G_U$ induced by $U$, with the statistic $t$ limited to $U$, and a possibly different exponential parameter vector $\tilde \theta_U$.
Secondly, conditioned on the encoded cutset $X_U$, the component subsets of the remaining variables $X_W$ are encoded conditioned on their respective boundaries, again using AC, with BP used to compute the true conditional coding distributions of the variables in $X_W$ with respect to the original MRF. The rate of this scheme can be expressed as \begin{eqnarray} R & = & \frac{\mid U\mid}{\mid V\mid}R_U + \frac{\mid W\mid}{\mid V\mid}R_{W},\label{eq:rcc_ratedecomp} \end{eqnarray} \noindent where $R_U$ is the rate in bits per pixel for the cutset $U$, and likewise $R_{W}$ for the remainder $W$. Because $G_W$ is tractable for BP, the conditional coding distributions for the components of $X_W$ can be exactly computed. Thus AC will encode each component on average at its conditional entropy plus an overhead of one or two bits \cite{whitten87}. Since we have in mind the components of $W$ having many pixels, the rate $R_{W}$ is well-approximated by $ {1 \over |W|} H(X_W|X_U)$, the ideal coding rate for $X_W$ given $X_U$. Similarly, since $U$ is tractable for BP, the reduced MRF coding distribution can be computed exactly, and $R_U$ is well-approximated by the (normalized) cross entropy ${1 \over |U|} H(X_U \| \tilde X_U)$ between the marginal distribution for $X_U$ and the reduced MRF distribution for the same variables, which we denote $\tilde X_U$, and which equals the entropy $ {1 \over |U|} H(X_U)$ of $X_U$ plus the divergence ${1 \over |U|} D(X_U||\tilde X_U)$ between the true and reduced MRF distributions for $X_U$. It follows that the rate of this scheme exceeds the rate of an optimal code, which is \begin{equation} {1 \over |V|} H(X_U, X_W) = {|U| \over |V|} {1 \over |U|} H(X_U) + {|W| \over |V|} {1 \over |W|} H(X_W|X_U), \nonumber \end{equation} by the divergence ${1 \over |V|} D(X_U||\tilde X_U)$. 
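The rate penalty just described rests on the standard decomposition of cross entropy into entropy plus divergence, $H(X_U \| \tilde X_U) = H(X_U) + D(X_U \| \tilde X_U)$, which is easy to verify numerically. The sketch below is illustrative only; the two-symbol distributions are arbitrary toy numbers, not taken from this paper.

```python
from math import log2, isclose

# Toy marginal p (standing in for the true distribution of X_U) and coding
# distribution q (standing in for the reduced-MRF approximation); the
# numbers are arbitrary illustrations.
p = [0.7, 0.3]
q = [0.5, 0.5]

cross_entropy = -sum(pi * log2(qi) for pi, qi in zip(p, q))
entropy       = -sum(pi * log2(pi) for pi in p)
divergence    =  sum(pi * log2(pi / qi) for pi, qi in zip(p, q))

# Cross entropy = entropy + KL divergence, so the coding rate exceeds the
# optimal rate by exactly the divergence, as stated in the text.
assert isclose(cross_entropy, entropy + divergence)
assert divergence >= 0
```

Since the divergence is nonnegative, the excess rate of the scheme over an optimal code is likewise nonnegative and vanishes only when the coding distribution matches the true marginal.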
For a given cutset $U$, this divergence is minimized by choosing the parameter vector $\tilde \theta_U$ to be that which causes the mean of the statistic $t_U$ of the reduced MRF $\tilde X_U$ to be the same as the mean of $t_U$ on the marginal $X_U$ of the original MRF $X$ \cite{amar:00,wain:03b,reyes2010}. This is called the \emph{moment-matching parameter} and denoted $\theta_U^*$. In Section \ref{sec:mom_match} we present a consistent algorithm for estimating $\theta^*_U$ for a tractable subset $U$, and as such, for the rest of this paper we let $\tilde X_U$ denote this moment-matching reduced MRF. Even when divergence is minimized, one normally expects ${1 \over |U|} H(X_U)$ to be larger than ${1 \over |W|} H(X_W|X_U)$. In the present paper we consider an MRF on an $M\times N$ rectangular lattice of sites. The statistic $t$ as well as the parameter $\theta$ are both row-invariant, and the image height $M$ is assumed to be very large, so that the sequences of rows of the image are assumed to form a stationary process. The cutset $U$ consists of $k+1$ evenly spaced $n_L\times N$ rectangular regions $L_1,\ldots,L_{k+1}$, referred to as {\em lines}, so that the $k$ components of $G_{W}$ are themselves $n_S\times N$ rectangular regions $S_1,\ldots,S_k$, referred to as {\em strips}. This is an extension of the RCC method of \cite{reyes2010}, \cite{reyes2011}, which restricted $n_L$ to be 1.% \footnote{Even though now $n_L$ can be larger than one, we continue to use the nomenclature of {\em lines}.} Here, $M=kn_S + (k+1)n_L$, so that lines and strips alternate, beginning with a line and ending with a line. This class of cutsets was chosen to simplify both the algorithm and the analysis. For example, the lines (strips) can be transformed into a simple chain graph by grouping the pixels in each column of a line (strip) into one superpixel. If $n_L$ and $n_S$ are both moderate, for instance at most 10, then BP can be used to perform exact inference efficiently. 
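The alternating line/strip geometry just described can be made concrete with a short sketch (illustrative only; the particular values of $k$, $n_L$ and $n_S$ are arbitrary): $k+1$ lines of $n_L$ rows alternate with $k$ strips of $n_S$ rows, giving $M = kn_S + (k+1)n_L$ rows in total, with the cutset fraction approaching $n_L/(n_L+n_S)$ for large $k$.

```python
def row_partition(k, n_L, n_S):
    """Label each row of the image as line ('L') or strip ('S'),
    alternating L, S, L, ..., S, L as in the text."""
    labels = []
    for _ in range(k):
        labels += ['L'] * n_L + ['S'] * n_S
    labels += ['L'] * n_L          # final line
    return labels

k, n_L, n_S = 100, 2, 7
labels = row_partition(k, n_L, n_S)

M = len(labels)
assert M == k * n_S + (k + 1) * n_L          # M = k n_S + (k+1) n_L
assert labels[0] == 'L' and labels[-1] == 'L'  # begins and ends with a line

# For large k, the cutset fraction |U|/|V| approaches n_L / (n_L + n_S).
frac_U = labels.count('L') / M
assert abs(frac_U - n_L / (n_L + n_S)) < 0.01
```

The final assertion previews the weighting that appears in the rate expression below: for many strips, the fraction of pixels encoded at the cutset rate is essentially $n_L/(n_L+n_S)$.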
An interesting question is how the cutset parameters $n_L$ and $n_S$ affect the individual rates $R_U$ and $R_{W}$ as well as the weightings of $R_U$ and $R_{W}$ by the respective sizes of $U$ and $W$. First, consider $R_U$. The lines of $U$ are encoded independently with the respective moment-matching reduced MRF coding distributions. From the stationarity assumption, these moment-matching reduced MRFs are the same for each line, and therefore $$ R_U = {1 \over |U|} H(X_U || \tilde X_U ) = {1 \over n_L N} H(X_L \| \tilde X_L) = {1 \over n_L N} H(X_L) + {1 \over n_L N} D(X_L \| \tilde X_L) ,$$ where $L$ denotes a block of $n_L$ consecutive rows of the image, $X_L$ is the subset of the MRF on $L$, and $\tilde X_L$ is the same random variables with the moment-matching reduced MRF distribution. Next, by the Markov property and stationarity, $$ R_W = {1 \over |W|} H(X_W | X_U) = {1 \over n_S N} H(X_S | X_{\partial S}),$$ where $S$ denotes $n_S$ consecutive rows, $\partial S$ denotes the boundary of $S$, and $X_S$ and $X_{\partial S}$ are the respective subsets of random variables on $S$ and $\partial S$. Therefore, as a function of line and strip widths, the \emph{per-row rate}\footnote{The overall rate is the per-row rate divided by the row width $N$. From now on, we mainly focus on per-row rate to simplify expressions, and use an overbar to indicate such.} is $$ \bar R(n_L, n_S) = {(k + 1) n_L \over k n_S + (k+1)n_L } {1 \over n_L} H(X_L \| \tilde X_L) + {k n_S \over k n_S + (k+1) n_L} {1 \over n_S} H(X_S | X_{\partial S}) . $$ When $k$ is large, this is well approximated by \begin{eqnarray} \bar R(n_L, n_S) &\approx& {n_L \over n_L + n_S} {1 \over n_L} H(X_L \| \tilde X_L) + {n_S \over n_L + n_S} {1 \over n_S} H(X_S | X_{\partial S}) \nonumber \\ &=& {n_L \over n_L + n_S} {1 \over n_L} \big( H(X_L) + D(X_L \| \tilde X_L) \big) + {n_S \over n_L + n_S} {1 \over n_S} H(X_S | X_{\partial S}) . 
\nonumber \end{eqnarray} Intuitively, as the cutset line width $n_L$ increases, $R_U$ decreases because both ${1 \over n_L}H(X_L)$ and the divergence ${1 \over n_L} D(X_L ||\tilde X_L)$ would decrease. However, the fraction of sites ${n_L \over n_L + n_S}$ encoded at the larger $R_U$ rate increases. Hence, there is a potential tradeoff between choosing $n_L$ to be large in order to reduce the cutset rate, and choosing $n_L$ to be small in order to reduce the fraction of sites in the cutset. Similarly, as $n_S$ increases, the fraction of pixels ${n_S \over n_L + n_S}$ encoded at the lower rate increases, but one intuitively expects $\bar R_W = { 1 \over n_S}H(X_S|X_{\partial S})$ to increase. Again, a potential tradeoff. On the other hand, since the overall rate is $R(n_L, n_S) = {1 \over |V|} H(X_V)+ {1 \over |V|} D(X_U||\tilde X_U)$, we see that the divergence term ${1 \over |V|} D(X_U||\tilde X_U)$ is the redundancy of the code, and one can therefore focus on what makes it small. Letting $ \Delta(n_L,n_S) {\stackrel{\Delta}{=}} {1 \over |V|} D(X_U||\tilde X_U)$ denote the \emph{redundancy of the code}, we will show that the per-row redundancy has the form \begin{eqnarray} \bar \Delta(n_L, n_S) & = & {(k + 1) n_L \over k n_S + (k+1)n_L } {1 \over n_L} D(X_L \| \tilde X_L) + {k n_S \over k n_S + (k+1) n_L} {1 \over n_S} I(X_{L_i};X_{L_{i-1}}) \nonumber \\ & \approx & {n_L \over n_L + n_S} {1 \over n_L} D(X_L \| \tilde X_L) + {n_S \over n_L + n_S} {1 \over n_S} I(X_{L_i};X_{L_{i-1}}) \nonumber \end{eqnarray} where $I(X_{L_i};X_{L_{i-1}})$ is the mutual information between the random variables $X_{L_i}$ on a line and the random variables $X_{L_{i-1}}$ on the previous line. 
Note that in the above formula for the redundancy, which is entirely due to the encoding of the lines, the first term, which we call the \emph{distribution redundancy}, is due to the use of the reduced MRF coding distribution on each line, and the second term, which we call the \emph{correlation redundancy}, is due to the fact that lines are coded independently. Note also that while the redundancy is entirely due to the encoding of the lines, the correlation redundancy depends on the strip width $n_S$. Moreover, since there is no correlation redundancy in the encoding of the first line, it is appropriate to think of $I(X_{L_i};X_{L_{i-1}})$ as a penalty per strip. From this viewpoint, one would expect that increasing $n_L$ reduces the divergence per cutset pixel ${1 \over n_L} D(X_L||\tilde X_L)$, but increases the fraction ${n_L \over n_L + n_S}$ of the image included in the cutset. Hence, it is not clear what the best value for $n_L$ is. Similarly, one would expect the mutual information $I(X_{L_i};X_{L_{i-1}})$ to decrease in $n_S$, while the fraction of pixels ${n_S \over n_L + n_S}$ increases in $n_S$. Therefore, it is likewise not clear what $n_S$ should be. The main results of this paper, most of which were conjectured above, are the following. Under the stationarity assumption, the coding rate $R^S_{n_S}$ of a strip increases with $n_S$, the coding rate $R^L_{n_L}$ of a line decreases with $n_L$ when the moment-matching reduced MRF is used to encode the lines, and $R^S_{n_S} < R^L_{n_L}$ for all choices of $n_S$ and $n_L$. We also present a consistent estimation algorithm for the moment-matching parameter $\theta^*_{U}$. We show that the divergence $D(X_U||\tilde X_U)$, equivalently the redundancy, can be decomposed into a correlation redundancy due to encoding the lines independently and a distribution redundancy due to approximating the lines as reduced MRFs, and present an analysis of these two sources of redundancy.
Numerical simulations with an Ising model illustrate the propositions. In the rest of this paper, Section \ref{sec:background} provides background on MRFs and lossless coding, and Section \ref{sec:rcc} provides an overview of Reduced Cutset Coding in the current setting. Section \ref{sec:mom_match} presents an estimation algorithm for $\theta^*_U$, Section \ref{sec:tradeoffs} establishes the anticipated tradeoffs between cutset thickness and spacing, and finally, Section \ref{sec:simulation} discusses numerical simulations with an Ising model. \section{Background}\label{sec:background} We introduce notation for lossless coding of MRFs. \subsection{Graphs and Markov Random Fields} A {\em path} in a graph $G=(V,E)$ is a sequence of nodes, each successive pair of nodes being joined by an edge in $E$. A graph is said to be {\em connected} if every pair of nodes $i,j\in V$ can be joined by some path, and {\em disconnected} otherwise. For any $U \subset V$, its \emph{boundary} $\partial U$ is the set of nodes not in $U$ connected by an edge to a member of $U$. The subgraph $G_U = (U,E_U)$ {\em induced by} $U$ is the graph consisting of the nodes in $U$ and the edges of $E$ with both endpoints in $U$. Likewise, the subgraph $G_{V\setminus U} $ is obtained by removing $U$ and all edges incident to it from $G$. If $G_{V \setminus U}$ is disconnected, each maximal connected subset of $G_{V \setminus U}$ is called a {\em component}, and $G_{V \setminus U}$ is simply the collection of the (disjoint) subgraphs induced by the respective components. A subset $U\subset V$ is called a {\em cutset} if $G_{V\setminus U}$ consists of more than one component. A family of MRFs is specified by an alphabet $\cal X$ and a vector statistic $t=(t_i, i \in V; t_{i,j}, \{i,j\} \in E)$ defined on the site values at individual nodes and the endpoints of edges.\footnote{Properly, this is a {\em pairwise} MRF.
Generalizations to other MRFs are straightforward.} That is, for a given image ${ \bf x } = \{x_i: i\in V\}$, the function $t_{ij}:{\mathcal{X}}\times{\mathcal{X}}\longrightarrow\mathbb{R}$ determines the contribution of the pair $(x_i,x_j)$ to the probability of ${ \bf x }$, and similarly for $t_i:{\mathcal{X}}\longrightarrow\mathbb{R}$. We say that $X$ is an MRF based on $t$. The entire family of MRFs based on $t$ is generated by introducing an exponential parameter vector $\theta=(\theta_i, i \in V; \theta_{ij}, \{i,j\} \in E)$ where for each node $i$, and neighbor $j\in\partial i$, $\theta_i$ and $\theta_{ij}$ scale the sensitivity of the distribution $p(G;{ \bf x };\theta)$ to the functions $t_i$ and $t_{ij}$, respectively. Specifically, for an MRF $X$ on $G$ based on $t$ with exponential parameter $\theta$, configuration ${ \bf x }$ has probability $p(G;{ \bf x };\theta)$ given by \begin{eqnarray} p(G;{ \bf x };\theta) & = & \exp\{\langle\theta,t({ \bf x })\rangle - \Phi(\theta)\},\label{eq:mrf_1} \end{eqnarray} where $\langle ~ , ~ \rangle$ denotes inner product, $\Phi(\theta)$ is the \emph{log-partition function}, and the arguments of $p(\cdot;\cdot;\cdot)$ indicate, respectively, the graph on which the MRF is defined, the configuration in question, and the exponential parameter on the graph. For a given exponential coordinate vector $\theta$, we let $\mu=\mu(\theta)$ denote the expected value of the statistic $t$ under the MRF induced by $\theta$, and we refer to $\mu$ as the {\em moment} of the MRF. The MRF distribution over all configurations is denoted $p(G;X;\theta)$, and the entropy of an MRF is denoted $H(G;X;\theta)$. The conditional probability of a configuration ${ \bf x }_W$ on subset $W\subset V$ given the values ${ \bf x }_{U}$ on another subset $U\subset V$ is denoted $p(G;{ \bf x }_W| { \bf x }_{U};\theta)$. 
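For concreteness, (\ref{eq:mrf_1}) can be evaluated by brute-force enumeration on a very small graph. A sketch for an Ising-type model with $\mathcal{X}=\{-1,+1\}$, $t_i(x_i)=x_i$, and $t_{ij}(x_i,x_j)=x_ix_j$ (the graph and parameter values below are arbitrary, and enumeration is of course only feasible for tiny graphs):

```python
import itertools
import math

def energy(x, theta_node, theta_edge):
    # <theta, t(x)> for an Ising-type statistic: t_i(x) = x_i, t_ij(x) = x_i x_j
    e = sum(th * x[i] for i, th in theta_node.items())
    e += sum(th * x[i] * x[j] for (i, j), th in theta_edge.items())
    return e

def log_partition(n, theta_node, theta_edge):
    # Phi(theta) = log sum_x exp(<theta, t(x)>), by enumeration over {-1,+1}^n
    return math.log(sum(math.exp(energy(x, theta_node, theta_edge))
                        for x in itertools.product((-1, 1), repeat=n)))

def prob(x, theta_node, theta_edge, Phi):
    # p(x; theta) = exp(<theta, t(x)> - Phi(theta)), cf. the exponential form above
    return math.exp(energy(x, theta_node, theta_edge) - Phi)

# A 3-node chain with uniform field 0.1 and coupling 0.3 (arbitrary values).
tn = {0: 0.1, 1: 0.1, 2: 0.1}
te = {(0, 1): 0.3, (1, 2): 0.3}
Phi = log_partition(3, tn, te)
total = sum(prob(x, tn, te, Phi)
            for x in itertools.product((-1, 1), repeat=3))
```

The probabilities computed this way sum to one over all $2^{|V|}$ configurations, a basic sanity check on the log-partition computation.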
It is straightforward to check that $p(G;{ \bf x }_W| { \bf x }_{\partial W};\theta) = p(G;{ \bf x }_W| { \bf x }_{V\setminus W};\theta)$ for all $W$, ${ \bf x }_W$, and ${ \bf x }_{\partial W}$. This is the {\em Markov Property}. The conditional distributions of the random subfield $X_W$ given a specific configuration ${ \bf x }_{\partial W}$, or given the random subfield $X_{\partial W}$, are denoted $p(G;X_W| { \bf x }_{\partial W};\theta)$ and $p(G;X_W| X_{\partial W};\theta)$, respectively. Likewise, $H(G;X_W | { \bf x }_{\partial W};\theta)$ and $H(G;X_W | X_{\partial W};\theta)$ are the respective conditional entropies of $X_W$ given a specific configuration ${ \bf x }_{\partial W}$ or the random subfield $X_{\partial W}$. For subset $U$, the marginal probability distribution on $X_U$ is denoted $p(G;X_U;\theta)$, where $p(G;{ \bf x }_U;\theta)$ denotes the marginal probability of configuration ${ \bf x }_U$. The {\em reduced MRF} distribution for $X_U$ on $G_U$ based on statistic $t_U$ with exponential parameter $\tilde\theta_U$ is denoted $p(G_U;X_U;\tilde\theta_U)$ and has the same form as in (\ref{eq:mrf_1}), where $\Phi_U(\tilde\theta_U)$ denotes the log-partition function for the reduced MRF. Similarly, $p(G_U;{ \bf x }_U;\tilde\theta_U)$ denotes the probability of configuration ${ \bf x }_U$ under the reduced MRF distribution. The statistic $t_U$ is inherited from the original statistic $t$. The marginal entropy of $X_U$ is denoted $H(G;X_U;\theta)$ while the entropy of a reduced MRF $p(G_U;X_U;\tilde\theta_U)$ is denoted $H(G_U;X_U;\tilde\theta_U)$. \subsection{Belief Propagation and Lossless Coding}\label{sec:lossless} In general, one uses Belief Propagation (BP) \cite{wain:03b} to compute $p(G;{ \bf x }_U;\theta)$ for a configuration ${ \bf x }_U$.
Since the inner product $\langle t_U({ \bf x }_U),\theta_U \rangle$ can be computed directly, BP is used to compute the log-partition function $\Phi(\theta)$, and more generally, to marginalize over $X_{V\setminus U}$. If $G$ has no cycles, then $p(G;{ \bf x }_U;\theta)$ can be computed with complexity linear in the number of nodes in $V$. If $G$ has cycles, one can compute $p(G;{ \bf x }_U;\theta)$ by grouping subsets of $V$ into supernodes such that the new graph is acyclic \cite{wain:03b}. In this case, complexity is exponential in the size of the largest supernode. A graph is said to be {\em tractable} if either $G$ has no cycles or if $G$ can be clustered into an acyclic graph where the size of the largest supernode is moderate. Similarly, a subset $U$ is said to be tractable if $G_U$ is tractable, in which case $p(G_U;{ \bf x }_U;\tilde\theta_U)$ can be computed for the reduced MRF on $G_U$. Also, for tractable subset $W$, $p(G;{ \bf x }_W | { \bf x }_{\partial W};\theta)$ can be computed for configurations ${ \bf x }_W$ and ${ \bf x }_{\partial W}$. \begin{figure*} \centerline{ \hbox{ \hspace{0.1in} \includegraphics[scale = .4]{line_ac_coding} \hspace{0.2in} \includegraphics[scale = .4]{strip_ac_coding} } } \hbox{\hspace{1in} (a) \hspace{3in} (b)} \caption{(a) AC encoding of a line $X_L$ with reduced MRF $P(G_L;X_L;\tilde\theta_L)$ coding distribution using Belief Propagation, and (b) AC encoding of a strip $X_S$ conditioned on its boundary $X_{\partial S}$ with conditional distribution $p(G;X_S | X_{\partial S};\theta)$ using Belief Propagation.} \label{fig:coding} \end{figure*} For the purposes of this paper it suffices to say that lossless compression with an {\em optimal encoder} involves computation of a {\em coding distribution}. 
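To illustrate why acyclic graphs are tractable, the following sketch computes the log-partition function of a small Ising chain by a forward, BP-style recursion whose cost is linear in the chain length, and checks it against exponential-time enumeration (the field and coupling values are arbitrary):

```python
import itertools
import math

def chain_log_partition(h, J):
    # Forward (BP-style) recursion for Phi(theta) on an Ising chain:
    #   alpha_1(x1) = exp(h1*x1)
    #   alpha_{i+1}(x) = sum_{x'} alpha_i(x') exp(J_i x' x) exp(h_{i+1} x)
    # Runs in O(n) for n nodes, versus O(2^n) for enumeration.
    states = (-1.0, 1.0)
    alpha = [math.exp(h[0] * s) for s in states]
    for i in range(1, len(h)):
        alpha = [sum(alpha[a] * math.exp(J[i - 1] * states[a] * s)
                     for a in range(2)) * math.exp(h[i] * s)
                 for s in states]
    return math.log(sum(alpha))

def brute_log_partition(h, J):
    # O(2^n) enumeration, as a correctness check on the recursion.
    total = 0.0
    for x in itertools.product((-1, 1), repeat=len(h)):
        e = sum(h[i] * x[i] for i in range(len(h)))
        e += sum(J[i] * x[i] * x[i + 1] for i in range(len(J)))
        total += math.exp(e)
    return math.log(total)

h = [0.2, -0.1, 0.3, 0.0]  # node fields (arbitrary)
J = [0.5, 0.5, 0.5]        # edge couplings (arbitrary)
```

The two computations agree to machine precision; on graphs with cycles, the analogous recursion runs over supernodes of a clustered acyclic graph, with cost exponential in the largest supernode.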
For a tractable subset $U$, if configuration ${ \bf x }_U$ is losslessly compressed with reduced MRF coding distribution $p(G_U;X_U;\tilde\theta_U)$, then the average number of bits produced is the {\em cross entropy} $H(G;X_U;\theta || G_U;X_U;\tilde\theta_U)$ between the marginal distribution $p(G;X_U;\theta)$ and the reduced MRF coding distribution $p(G_U;X_U;\tilde\theta_U)$ for $X_U$, defined as \begin{eqnarray} H(G;X_U;\theta || G_U;X_U;\tilde\theta_U) & = & H(G;X_U;\theta) + D(p(G;X_U;\theta) || p(G_U;X_U;\tilde\theta_U))\nonumber \end{eqnarray} \noindent where $D(p(G;X_U;\theta) || p(G_U;X_U;\tilde\theta_U))$ is the {\em divergence} from $p(G;X_U;\theta)$ to $p(G_U;X_U;\tilde\theta_U)$ and is the {\em redundancy} in the code \cite{cove:05}. We showed in \cite{reyes2010} that the above divergence is minimized at $\theta^*_U$, the exponential parameter on $G_U$ such that the corresponding moment $\mu^*_U$ is equal to the moment subvector $\mu_U$ under the original MRF $p(G;X;\theta)$. The distribution of the reduced MRF $p(G_U;X_U;\theta^*_U)$ is called the {\em moment-matching} reduced MRF distribution for $X_U$, denoted $\tilde X_U$. When the moment-matching reduced MRF $p(G_U;X_U;\theta^*_U)$ is used as the coding distribution to encode $X_U$, the cross entropy is in fact the entropy $H(G_U;X_U;\theta^*_U)$ of the moment-matching reduced MRF \cite{reyes2010}. For a tractable subset $W$, if configuration ${ \bf x }_{W}$ is encoded conditioned on ${ \bf x }_{\partial W}$ using coding distribution $p(G;X_W|{ \bf x }_{\partial W};\theta)$, then the average number of bits produced is $H(G;X_W|X_{\partial W};\theta)$. Therefore, encoding ${ \bf x }_W$ conditioned on ${ \bf x }_{\partial W}$ is optimal, i.e., there is no redundancy. In \cite{reyes2010}, Arithmetic Coding (AC) was proposed as the optimal encoder. Figure \ref{fig:coding} illustrates the encoding of a line and a strip. 
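The decomposition of the cross entropy into entropy plus divergence is easy to verify numerically. A sketch on generic finite distributions (the distributions $p$ and $q$ below are arbitrary placeholders, standing in for the marginal and the reduced-MRF coding distribution):

```python
import math

def entropy(p):
    # H(p) in bits
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    # Average code length (bits/symbol) when p-distributed data is
    # encoded with an optimal code for coding distribution q.
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

def divergence(p, q):
    # D(p || q): the redundancy, i.e., the excess rate paid for
    # coding with q instead of p.
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]  # placeholder "true" distribution
q = [0.4, 0.4, 0.2]    # placeholder coding distribution
```

For these values `cross_entropy(p, q)` equals `entropy(p) + divergence(p, q)`, and the divergence is strictly positive since $q \neq p$.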
The mathematical details of using AC in the encoding of an MRF are given in \cite{reyes2009t}, \cite{reyes2010}, and \cite{reyes2011}, specifically in Chapter VI of \cite{reyes2011}. \vspace{2mm} \section{Reduced Cutset Coding}\label{sec:rcc} In general, since the cutset $U$ consists of disjoint lines, the entropy of the moment-matching reduced MRF on $G_U$ is actually the sum $\sum_{L_i}H(G_{L_i};X_{L_i};\theta^*_{L_i})$ of the entropies of the reduced MRFs on the individual lines. Similarly, the conditional entropy of $X_W$ given $X_{\partial W}$ is the sum $\sum_{S_i}H(G;X_{S_i} | X_{\partial S_i};\theta)$ of the conditional entropies of the individual strips given their respective boundaries. In the present paper, we simplify this by considering vertically homogeneous parameters for the MRF, i.e., the components of the statistic $t$ and the exponential parameter $\theta$ do not vary vertically within the image. Furthermore, we focus only on the middle $M' = (k'+1) n_L + k' n_S \approx M/2$ rows of $V$, thereby excluding boundary effects, so that the image is roughly stationary in the vertical direction. We let $B_n$ be an $n\times N$ rectangular subset of sites. The random field $X_{B_{n_L}}$ on a line is encoded with reduced MRF coding distribution $p(G_{B_{n_L}};X_{B_{n_L}};\theta^*_{B_{n_L}})$. Normalizing by the number of rows, the per-row rate for encoding a line is then \begin{eqnarray} \bar R^L_{n_L} & = & \frac{1}{n_L }H(G;X_{B_{n_L}};\theta || G_{B_{n_L}};X_{B_{n_L}};\theta^*_{B_{n_L}}) \nonumber\\ & = & \frac{1}{n_L }H(G_{B_{n_L}};X_{B_{n_L}};\theta^*_{B_{n_L}}). \nonumber \end{eqnarray} The random field $X_{B_{n_S}}$ on a strip is encoded conditioned on $X_{\partial B_{n_S}}$ with coding distribution $p(G;X_{B_{n_S}}|X_{\partial B_{n_S}};\theta)$. The per-row rate for encoding a strip is then \begin{eqnarray} \bar R^S_{n_S} & = & \frac{1}{n_S }H(G;X_{B_{n_S}}\mid X_{\partial B_{n_S}};\theta).
\nonumber \end{eqnarray} We let $\bar R(n_S,n_L)$ denote the total per-row rate of RCC with cutset parameters $n_S$ and $n_L$, given by \begin{eqnarray} \label{eq:RSL} \bar R(n_S,n_L) \!\!\!\! & = & \!\!\!\! \frac{(k+1)n_L}{(k+1)n_L + kn_S} \bar R^{L}_{n_L} + \frac{kn_S}{(k+1)n_L + kn_S} \bar R^S_{n_S}. \nonumber \end{eqnarray} \noindent Assuming further that $M'$ is very large relative to $n_L$ and $n_S$, so that $k$ is very large, this rate is well-approximated by \begin{eqnarray} \bar R (n_S,n_L) & \approx & \frac{n_L}{n_L+n_S} \bar R^{L}_{n_L} + \frac{n_S}{n_L+n_S} \bar R^S_{n_S}. \label{eq:rate_approx} \end{eqnarray} \noindent We now see that the performance of RCC with cutset parameters $n_S$ and $n_L$ is characterized by the rates $\bar R^L_{n_L}$ and $\bar R^S_{n_S}$, and the fractions $\frac{n_L}{n_L+n_S}$ and $\frac{n_S}{n_L+n_S}$. \vspace{2mm} \section{Moment-matching $\theta^*_U$}\label{sec:mom_match} Recall from the previous section that the cross-entropy $H(p(G;X_U;\theta) || p(G_U;X_U;\tilde\theta_U))$ between the marginal distribution $p(G;X_U;\theta)$ of subset $X_U$ within an MRF on $G$ with statistic $t$ and a reduced MRF $p(G_U;X_U;\tilde\theta_U)$ on $G_U$ with statistic $t_U$ is minimized by the parameter $\theta^*_U$ such that the expected value $\mathbb{E}_{\theta^*_U}[t_U(X_U)]$ of the statistic $t_U$ in the reduced MRF equals the expected value $(\mathbb{E}_{\theta}[t(X)])_U$ of the statistic $t$ under the original MRF on the subset $U$, referred to as the {\em moment-matching} parameter. We will estimate $\theta^*_U$ from $n$ observations ${ \bf x }^{(1)}_U,\ldots,{ \bf x }^{(n)}_U$ on $U$, by seeking an $\hat\theta^n_U$ that minimizes an empirical version of the cross entropy, at least approximately. First, some background. We let $\Theta = \{\theta\}$ denote the set of parameter vectors for MRFs on $G$ based on the statistic $t$.
We restrict attention to the case where $\Theta$ is the subset of $\mathbb{R}^{|V| + |E|}$ of $\theta$'s with positive components. In this case, due to the openness of $\Theta$, the family of MRFs based on $t$ is said to be {\em regular} \cite{wain:03b}. For parameter $\theta\in\Theta$, the function \begin{eqnarray} \Lambda(\theta) & {\stackrel{\Delta}{=}} & \mathbb{E}_{\theta}[t(X)] \nonumber \\ & {\stackrel{\Delta}{=}} & \mu \nonumber \end{eqnarray} maps $\theta$ to $\mu$, the expected value of $t$ under the MRF induced by $\theta$, referred to as the {\em moment} of the MRF. The set $\mathcal{M} = \{\mu=\Lambda(\theta):\theta\in\Theta\}$ is the set of {\em achievable moments} for MRFs on $G$ based on $t$. We assume that the statistic $t$ is {\em minimal} in that the components of $t$ are affinely independent, meaning that the components of $t({ \bf x })$ do not sum to a constant for all configurations ${ \bf x }$. In this case, the function $\Lambda(\cdot)$ is one-to-one \cite{wain:03b}. Then, for $\mu\in\mathcal{M}$, the inverse function \begin{eqnarray} \Lambda^{-1}(\mu) & = & \theta \nonumber \end{eqnarray} is well-defined. Moreover, $\mu$ is a dual parameter to $\theta$, in that the MRF $p(G;X;\theta)$ can alternatively be expressed as $p(G;X;\mu)$. For the MRF induced by parameter $\theta$, the subvector of moments on the set $U$ is given by \begin{eqnarray} \Lambda_U(\theta) & = & \mu_U \nonumber \end{eqnarray} which can be seen as the restriction of $\Lambda(\cdot)$ to the set $U$. For reduced MRFs on $G_U$ based on statistic $t_U$, $\tilde\Theta_U$ denotes the associated set of exponential parameters. Now, consider the function \begin{eqnarray} \tilde\Lambda_U(\tilde\theta_U) & = & \tilde\mu_U , \nonumber \end{eqnarray} which maps a parameter $\tilde\theta_U \in \tilde \Theta_U$ to the corresponding moment $\tilde\mu_U$ for the reduced MRF $p(G_U;X_U;\tilde\theta_U)$ on $G_U$. 
Likewise, $\tilde{\mathcal{M}}_U=\{\tilde\mu_U=\tilde\Lambda_U(\tilde\theta_U):\tilde\theta_U\in\tilde\Theta_U\}$ denotes the set of achievable moments for reduced MRFs on $G_U$. Since we have assumed that the statistic $t$ for the original family of MRFs on $G$ is minimal, the statistic $t_U$ for the family of reduced MRFs on $G_U$ is also minimal, and the inverse map $\tilde\Lambda_U^{-1}(\tilde\mu_U) = \tilde\theta_U$ is well-defined. Again, a reduced MRF $p(G_U;X_U;\tilde\theta_U)$ can also be parameterized as $p(G_U;X_U;\tilde\mu_U)$. Given a parameter $\theta$ for an MRF $p(G;X;\theta)$, a subset $U$, and a sequence of observations ${ \bf x }^{(1)}_U,\ldots,{ \bf x }^{(n)}_U$ on $U$, we define the {\em empirical moment} of $p(G;X_U;\theta)$ as \begin{eqnarray} \hat\mu^n_U & {\stackrel{\Delta}{=}} & \frac{1}{n}\sum\limits_{i=1}^nt_U({ \bf x }^{(i)}_U) . \nonumber \end{eqnarray} While $\mu_U=\Lambda_U(\theta)$ is always contained in $\tilde{\mathcal{M}}_U$, it is not necessarily the case that the empirical moment $\hat\mu^n_U$ is contained in $\tilde{\mathcal{M}}_U$. However, even if $\hat\mu^n_U$ is not in $\tilde{\mathcal{M}}_U$, $\hat\mu^n_U$ is still a limit point of $\tilde{\mathcal{M}}_U$ \cite{wain:03b}, meaning that for every $\epsilon > 0$, there is an $\epsilon$-ball containing $\hat\mu^n_U$ that contains infinitely many points of $\tilde{\mathcal{M}}_U$. Moreover, as stated in the following proposition, as the number of observations $n$ grows, the empirical moment $\hat\mu^n_U$ converges in probability to $\mu_U$. \vspace{2mm} \begin{proposition}\label{prop:momconv} The empirical moment $\hat\mu^n_U$ converges in probability to $\mu_U$, i.e., for any $\epsilon >0$, \begin{eqnarray} \label{eq:convergeinprob} \Pr \big( \big| \hat \mu^n_U - \mu_U \big| \leq \epsilon \big) \rightarrow 1, \mbox{ as } n \rightarrow \infty .
\end{eqnarray} \end{proposition} \vspace{2mm} \begin{proof} To prove the proposition, recall that on a finite graph $G$ there does not exist a phase transition \cite{georgii}, and therefore there is a unique MRF on $G$ for the specified statistic $t$ and exponential parameter $\theta$. It follows that the sequence ${ \bf x }^{(1)}_U,\ldots,{ \bf x }^{(n)}_U,\ldots$ is not only stationary but also ergodic, from which the proposition follows \cite{grimmett}. This completes the proof. $\hfill \Box$ \end{proof} \vspace{2mm} We now discuss the empirical version of cross entropy that we will minimize as a surrogate for cross entropy. From a sequence of observations ${ \bf x }^{(1)}_U,\ldots,{ \bf x }^{(n)}_U$, we define the \emph{empirical cross entropy} \begin{eqnarray} H_U^n(\hat\mu^n_U || \tilde\theta_U) & {\stackrel{\Delta}{=}} &- {1 \over n} \sum\limits_{i=1}^n \log p(G_U;{ \bf x }^{(i)}_U;\tilde\theta_U) \nonumber \\ &=& -\sum_{ { \bf x } _U} f( { \bf x }_U : { \bf x }^{(1)}_U,\ldots,{ \bf x }^{(n)}_U ) \log p(G_U;{ \bf x }_U;\tilde\theta_U) \nonumber \end{eqnarray} between the empirical distribution $f( { \bf x }_U : { \bf x }^{(1)}_U,\ldots,{ \bf x }^{(n)}_U )$ generated by $ { \bf x }^{(1)}_U,\ldots,{ \bf x }^{(n)}_U$ and the reduced MRF $p(G_U;X_U;\tilde\theta_U)$ induced by a candidate parameter $\tilde \theta_U$. That it makes sense to regard the empirical cross entropy as a function of the empirical moment $\hat \mu_U^n$ follows from Proposition \ref{prop:emp_cross} below. If $U$ is a tractable subset, then the probabilities in the summation can be efficiently computed. Now, our estimate for the moment-matching parameter $\theta^*_U$ will be the $\hat \theta^n _U$ that minimizes this empirical cross entropy, at least approximately. It is well-known that $\Phi_U(\tilde\theta_U)$ is convex in $\tilde\theta_U$, and, as follows from Proposition \ref{prop:emp_cross} below, so is the empirical cross-entropy $H^n_U(\hat\mu^n_U || \tilde\theta_U)$.
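To make the minimization concrete, here is a brute-force sketch for the simplest possible case, a single binary node with statistic $t(x)=x$, so that $\Phi(\tilde\theta)=\log(2\cosh\tilde\theta)$ and the model moment is $\tanh\tilde\theta$. Gradient descent on the empirical cross entropy then drives the model moment to the empirical moment (the data, step size, and tolerance below are arbitrary; a real implementation would use BP to compute the moment on $G_U$):

```python
import math

def estimate_theta(samples, step=0.5, eps=1e-8, max_iter=10000):
    # Gradient descent on the empirical cross entropy
    #   H^n(mu_hat || theta) = Phi(theta) - <mu_hat, theta>,
    # whose gradient is (model moment) - (empirical moment).
    # Single binary node with t(x) = x, so Phi(theta) = log(2 cosh(theta))
    # and the model moment is tanh(theta).
    mu_hat = sum(samples) / len(samples)  # empirical moment
    theta = 0.0
    for _ in range(max_iter):
        grad = math.tanh(theta) - mu_hat
        if abs(grad) < eps:
            break
        theta -= step * grad
    return theta

samples = [1, 1, 1, -1]  # hypothetical observations on the node
theta_hat = estimate_theta(samples)
```

At termination the model moment $\tanh(\hat\theta)$ matches the empirical moment $1/2$ to within the tolerance, i.e., $\hat\theta \approx \mathrm{atanh}(1/2)$, the one-dimensional analogue of the moment-matching parameter.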
If, as we have assumed, the components of $t_U$ are affinely independent, then $\Phi_U(\tilde\theta_U)$, and hence $H_U^n(\hat\mu^n_U || \tilde\theta_U)$, is strictly convex. Therefore, either $H^n_U(\hat\mu^n_U || \tilde\theta_U)$ has a unique minimum at a $\tilde\theta_U$ at which the gradient of $H^n_U(\hat\mu^n_U || \tilde\theta_U)$ is zero, or, since $\tilde\Theta_U$ is open, $H^n_U(\hat\mu^n_U || \tilde\theta_U)$ does not have a minimum but approaches an infimum at a limit point of $\tilde\Theta_U$. Moreover, from Proposition \ref{prop:emp_cross} below and the fact that for any $\hat \mu^n_U$ there exists $\tilde \theta_U$ such that $\tilde \Lambda_U(\tilde \theta_U)$ is arbitrarily close to $\hat \mu^n_U$, we can find $\tilde \theta_U$ such that the gradient is arbitrarily small, and such $\tilde \theta_U$ must come arbitrarily close to attaining the infimum of $H_U^n(\hat\mu^n_U || \tilde\theta_U)$. In either case, our ``moment-matching'' estimate $\hat\theta^n_U$ will be a $\tilde \theta_U$ that induces a very small gradient. \begin{figure*} \centerline{ \hbox{ \hspace{0in} \includegraphics[scale = .5]{find_momentmatching} } } \caption{ Block diagram for finding the moment-matching parameter $\theta^*_U$ for encoding $X_U$.} \label{fig:block_moment} \end{figure*} \vspace{2mm} \begin{proposition}\label{prop:emp_cross} \begin{eqnarray} H^n_U(\hat\mu^n_U || \tilde\theta_U) & = & \Phi_U(\tilde\theta_U) - \langle \hat\mu^n_U,\tilde\theta_U\rangle \nonumber \\[2ex] \nabla H_U^n(\hat\mu^n_U || \tilde\theta_U) & = & \tilde\mu_U - \hat\mu^n_U \nonumber \\[1ex] & = & \tilde\Lambda_U(\tilde\theta_U) - \hat\mu^n_U ,\nonumber \end{eqnarray} where the gradient is with respect to $\tilde \theta_U$. \end{proposition} \vspace{2mm} \begin{proof} Using relation (\ref{eq:mrf_1}) for the reduced MRF on $G_U$ with parameter $\tilde\theta_U$, we get \begin{eqnarray} H^n_U(\hat\mu^n_U || \tilde\theta_U)\!\!\!\! & = & \!\!\!\!
\frac{1}{n}\sum\limits_{i=1}^n\left[\Phi_U(\tilde\theta_U)-\big \langle t_U({ \bf x }^{(i)}_U),\tilde\theta_U \big \rangle\right]\nonumber\\ & = & \Phi_U(\tilde\theta_U) - \Big \langle \frac{1}{n}\sum\limits_{i=1}^nt_U({ \bf x }^{(i)}_U),\tilde\theta_U \Big \rangle\nonumber\\ & = & \Phi_U(\tilde\theta_U) - \big \langle \hat\mu^n_U,\tilde\theta_U \big \rangle . \nonumber \end{eqnarray} It is well-known that $\nabla \Phi_U(\tilde\theta_U) = \tilde\mu_U$ \cite{wain:03b}. Then, taking the gradient of $H^n_U(\hat\mu^n_U || \tilde\theta_U)$ yields \begin{eqnarray} \nabla H^n_U(\hat \mu^n_U || \tilde\theta_U) & = & \nabla \frac{1}{n}\sum\limits_{i=1}^n\left[\Phi_U(\tilde\theta_U)- \big \langle t_U({ \bf x }^{(i)}_U),\tilde\theta_U \big\rangle \right] \nonumber \\ & = & \nabla \Phi_U(\tilde\theta_U) - \nabla \frac{1}{n}\sum\limits_{i=1}^n \big \langle t_U({ \bf x }^{(i)}_U),\tilde\theta_U \big \rangle \nonumber \\ & = & \nabla \Phi_U(\tilde\theta_U) - \nabla \Big \langle \frac{1}{n}\sum\limits_{i=1}^nt_U({ \bf x }^{(i)}_U),\tilde\theta_U \Big \rangle \nonumber \\ & = & \tilde\mu_U - \frac{1}{n}\sum\limits_{i=1}^nt_U({ \bf x }^{(i)}_U) \nonumber \\ & = & \tilde\mu_U - \hat\mu^n_U . \nonumber \end{eqnarray} This completes the proof. $\hfill \Box$ \end{proof} \vspace{2mm} We now describe how a gradient descent algorithm can be used to find an estimate $\hat \theta^n_U$ of $\theta^*_U$ at which the gradient of $H^n_U(\hat \mu^n_U || \tilde\theta_U) $ is arbitrarily small. From the sequence ${ \bf x }^{(1)}_U,\ldots,{ \bf x }^{(n)}_U$, we first compute the empirical moment $\hat\mu^n_U = {1 \over n}\sum_{i=1}^nt_U({ \bf x }^{(i)}_U)$. Then, given a candidate parameter $\tilde\theta_{U}$, use Belief Propagation to compute the negative log-likelihood $-\log p(G_{U};{ \bf x }^{(i)}_{U};\tilde\theta_{U})$ of the configuration ${ \bf x }^{(i)}_U$ under the reduced MRF $p(G_{U};X_{U};\tilde\theta_{U})$, for each $i=1,\ldots,n$.
Additionally, we compute the moment $\tilde\mu_{U}$ of the reduced MRF induced by the candidate parameter $\tilde\theta_U$, which, like the probabilities, can be computed due to the tractability of $U$. We then compute the objective function $H^n_{U}(\hat\mu^n_U || \tilde\theta_U) = - {1 \over n} \sum_{i=1}^n\log p(G_U;{ \bf x }^{(i)}_U;\tilde\theta_U)$ and the gradient $\nabla H^n_U(\hat\mu^n_U || \tilde\theta_U) = \tilde\mu_U - \hat\mu^n_U$. Finally, given a tolerance $\epsilon_{\mu}$, if $\| \nabla H^n_U(\hat\mu^n_U || \tilde\theta_U) \| < \epsilon_{\mu}$, the algorithm terminates and we set $\hat\theta^n_U = \tilde\theta_U$, which corresponds to the estimated moment $\hat{\hat \mu}^n_U = \tilde\Lambda_U(\hat\theta^n_U)$ at which the algorithm is terminated. Note that by Proposition \ref{prop:emp_cross}, the estimated moment $\hat{\hat \mu}^n_U$ is within $\epsilon_{\mu}$ of $\hat\mu^n_U$. If $\| \nabla H^n_U(\hat\mu^n_U || \tilde\theta_U) \| \geq \epsilon_{\mu}$, we determine a new candidate parameter $\tilde\theta_U$ using a standard gradient descent method \cite{boyd2004} and repeat the above steps. This is illustrated in Figure \ref{fig:block_moment}. \vspace{2mm} \begin{proposition} The estimate $\hat\theta^n_U$ is consistent, i.e., for any $\epsilon >0$, \begin{eqnarray} \label{eq:convergeinprob_theta} \Pr \big( \big| \hat \theta^n_U - \theta^*_U \big| \leq \epsilon \big) \rightarrow 1, \mbox{ as } n \rightarrow \infty . \end{eqnarray} \end{proposition} \vspace{2mm} \begin{proof} Let $B(\theta^*_U,\epsilon_{\theta})$ be the $\epsilon_{\theta}$-ball centered at $\theta^*_U$. Assume without loss of generality that $B(\theta^*_U,\epsilon_{\theta})\subset\tilde\Theta_U$. Then, let $\epsilon_{\mu}$ be the largest tolerance around $\mu_U$ such that the $\epsilon_{\mu}$-ball $B(\mu_U,\epsilon_{\mu})$ centered at $\mu_U$ is contained in $\tilde \Lambda_U(B(\theta^*_U,\epsilon_{\theta}))$.
It follows that \begin{eqnarray} \Pr \big( \big| \hat \theta^n_U - \theta^*_U \big| \leq \epsilon_{\theta} \big) & = & \Pr \big( \hat{ \hat \mu}^n_U \in \tilde \Lambda_U(B(\theta^*_U,\epsilon_{\theta})) \big) \nonumber \\ & \geq & \Pr \big( \hat{\hat \mu}^n_U \in B(\mu_U,\epsilon_{\mu}) \big) \nonumber \\ & = & \Pr \big( \big| \hat{\hat \mu}^n_U - \mu_U \big| \leq \epsilon_{\mu} \big) . \nonumber \end{eqnarray} Now let $\epsilon'_{\mu}=\epsilon_{\mu}/2$ be the tolerance on $\|\nabla H^n_U(\hat\mu^n_U || \tilde\theta_U)\|$ in the gradient descent algorithm. This means that $| \hat{\hat \mu}^n_U - \hat\mu^n_U | \leq \epsilon'_{\mu}$, which by the triangle inequality implies that \begin{eqnarray} \Pr \big( \big| \hat{\hat \mu}^n_U - \mu_U \big| \leq \epsilon_{\mu} \big) & \geq & \Pr \big( \big| \hat \mu^n_U - \mu_U \big| \leq \epsilon'_{\mu} \big) . \nonumber \end{eqnarray} Using Proposition \ref{prop:momconv}, we can now say that for an arbitrary tolerance $\delta > 0$, there exists $N$ such that if the number of observations $n$ is greater than or equal to $N$, then \begin{eqnarray} \Pr \big( \big| \hat \theta^n_U - \theta^*_U \big| \leq \epsilon_{\theta} \big) & \geq & \Pr \big( \big| \hat \mu^n_U - \mu_U \big| \leq \epsilon'_{\mu} \big) \nonumber \\ & \geq & 1 - \delta . \nonumber \end{eqnarray} This completes the proof. $\hfill \Box$ \end{proof} \vspace{2mm} \section{Tradeoffs between Lines and Strips}\label{sec:tradeoffs} The following proposition shows that, as intuited earlier, strip rate increases with strip width. \vspace{2mm} \begin{proposition}\label{prop:strips} \begin{eqnarray} \bar R^S_{n+1} & > & \bar R^S_n .\nonumber \end{eqnarray} \end{proposition} \vspace{2mm} \begin{lemma}\label{lemma:firstRow_strip} Let $r_1$ denote the first row of rectangular region $B_n$ of sites of height $n$. Then, \begin{eqnarray} H(G;X_{r_1} | X_{\partial B_n};\theta) & < & H(G;X_{r_1} | X_{\partial B_{n+1}};\theta) .
\end{eqnarray} \end{lemma} \vspace{2mm} \begin{proof} Note $B_{n+1}$ consists of $B_n$ and an additional row $r_{n+1}$, which is part of the boundary of $B_n$. By the Markov property, $H(G;X_{r_1} | X_{\partial B_n};\theta) = H(G;X_{r_1} | X_{\partial B_{n+1}},X_{r_{n+1}};\theta)$. That is, conditioning on $\partial B_{n+1}$ and $r_{n+1}$ is the same as conditioning on $\partial B_n$. Finally, $H(G;X_{r_1} | X_{\partial B_{n+1}},X_{r_{n+1}};\theta) < H(G;X_{r_1} | X_{\partial B_{n+1}};\theta)$ as the left side has more conditioning. In summary \begin{eqnarray} H(G;X_{r_1} | X_{\partial B_n};\theta) & = & H(G;X_{r_1} | X_{\partial B_{n+1}},X_{r_{n+1}};\theta)\nonumber\\ & < & H(G;X_{r_1} | X_{\partial B_{n+1}};\theta)\nonumber. \end{eqnarray} This completes the proof of Lemma \ref{lemma:firstRow_strip}. $\hfill \Box$ \end{proof} \vspace{2mm} \noindent We continue with the proof of Proposition \ref{prop:strips}. \vspace{2mm} \begin{proof} By direct calculation we have for a strip of height $n+1$ that \begin{eqnarray} \bar R^S_{n+1} & = & \frac{1}{(n+1)}H(G;X_{B_{n+1}} | X_{\partial B_{n+1}};\theta)\nonumber\\ & = & \frac{1}{(n+1)}H(G;X_{B_n} | X_{\partial B_n};\theta) + \frac{1}{(n+1) }H(G;X_{r_1} | X_{\partial B_{n+1}};\theta) , \label{eq:R_S_n+1} \end{eqnarray} and for a strip of height $n$, \begin{eqnarray} \bar R^S_n & = & \frac{1}{n}H(G;X_{B_n} | X_{\partial B_n};\theta)\nonumber\\ & = & \frac{n+1}{n}\frac{1}{(n+1)}H(G;X_{B_n} | X_{\partial B_n};\theta)\nonumber\\ & = & \frac{1}{(n+1)}H(G;X_{B_n} | X_{\partial B_n};\theta) + \frac{1}{n(n+1)}H(G;X_{B_n} | X_{\partial B_n};\theta)\nonumber\\ & = & \frac{1}{(n+1)}H(G;X_{B_n} | X_{\partial B_n};\theta) + \frac{1}{n}\sum\limits_{i=1}^n\frac{1}{(n+1)}H(G;X_{r_i} | X_{\partial B_{n-i+1}};\theta)\nonumber\\ & < & \frac{1}{(n+1)}H(G;X_{B_n} | X_{\partial B_n};\theta) + \frac{1}{(n+1)}H(G;X_{r_1} | X_{\partial B_{n+1}};\theta)\nonumber\\ & = & \bar R^S_{n+1}\nonumber \end{eqnarray} by (\ref{eq:R_S_n+1}) and Lemma 
\ref{lemma:firstRow_strip}. This completes the proof. $\hfill \Box$ \end{proof} \vspace{2mm} Likewise, the next proposition shows that, as supposed earlier, line rate decreases with line width. \vspace{2mm} \begin{proposition}\label{prop:lines} \begin{eqnarray} \bar R^L_{n+1} & < & \bar R^L_n . \nonumber \end{eqnarray} \end{proposition} \vspace{2mm} \begin{proof} First we note that reducing $X_{B_{n+1}}$ to $\tilde X_{B_{n+1}}$ by matching moments and further reducing the $X_{B_n}$ marginal of $\tilde X_{B_{n+1}}$ to $\tilde X_{B_n}$ by matching moments results in the same reduced MRF on $G_{B_n}$ as would reducing the original $X_{B_n}$ to $\tilde X_{B_n}$ by matching moments. Let $\theta^*_n$ be the moment-matching parameter for $\tilde X_{B_n}$. \begin{eqnarray} \bar R^L_{n+1} & = & \frac{1}{n+1}H(G_{B_{n+1}};X_{B_{n+1}};\theta^*_{n+1})\nonumber\\ & = & \frac{1}{n+1}\left[H(G_{B_{n+1}};X_{B_{n}};\theta^*_{n+1}) + H(G_{B_{n+1}};X_{r_{n+1}}|X_{B_n};\theta^*_{n+1})\right]\nonumber\\ & < & \frac{1}{n+1}\left[H(G_{B_{n+1}};X_{B_{n}};\theta^*_{n+1}) + \frac{1}{n}H(G_{B_{n+1}};X_{B_{n}};\theta^*_{n+1})\right]\nonumber\\ & = & \frac{1}{n}H(G_{B_{n+1}};X_{B_{n}};\theta^*_{n+1})\nonumber\\ & < & \frac{1}{n}H(G_{B_{n}};X_{B_{n}};\theta^*_{n})\nonumber\\ & = & \bar R^L_n , \nonumber \end{eqnarray} \noindent where the second inequality is from the maximum entropy property of MRFs. This completes the proof. $\hfill \Box$ \end{proof} \vspace{2mm} \begin{proposition}\label{prop:lines_over_strips} For all strip widths $n_S$ and line widths $n_L$, \begin{eqnarray} \bar R^L_{n_L} & > & \bar R^S_{n_S} . \nonumber \end{eqnarray} \end{proposition} \vspace{2mm} \begin{proof} We prove the proposition by cases: $n_S = n_L$, $n_S > n_L$, and $n_S < n_L$. First assume $n_S=n_L=n$.
Then, \begin{eqnarray} \bar R^S_{n_S} & = & \frac{1}{n}H(G;X_{B_n}|X_{\partial B_n};\theta) \nonumber \\ & \leq & \frac{1}{n}H(G;X_{B_n};\theta) \nonumber \\ & < & \frac{1}{n}H(G_{B_n};X_{B_n};\theta^*_n) \label{eq:max1} \\ & = & \bar R^L_{n_L} , \nonumber \end{eqnarray} where (\ref{eq:max1}) follows from the maximum entropy property of MRFs. Next, assume $n_S>n_L$. Then, \begin{eqnarray} \bar R^S_{n_S} & = & \frac{1}{n_S}H(G;X_{B_{n_S}}|X_{\partial B_{n_S}};\theta) \nonumber \\ & \leq & \frac{1}{n_S}H(G;X_{B_{n_S}};\theta) \nonumber \\ & < & \frac{1}{n_S}H(G_{B_{n_S}};X_{B_{n_S}};\theta^*_{n_S}) \label{eq:max2} \\ & = & \bar R^L_{n_S} \nonumber \\ & < & \bar R^L_{n_L} , \nonumber \end{eqnarray} where (\ref{eq:max2}) follows from the maximum entropy property of MRFs, and the final inequality from Proposition \ref{prop:lines}. Finally, assume $n_S<n_L$. Then, \begin{eqnarray} \bar R^S_{n_S} & < & \bar R^S_{n_L} \nonumber \\ & = & \frac{1}{n_L}H(G;X_{B_{n_L}}|X_{\partial B_{n_L}};\theta) \nonumber \\ & \leq & \frac{1}{n_L}H(G;X_{B_{n_L}};\theta) \nonumber \\ & < & \frac{1}{n_L}H(G_{B_{n_L}};X_{B_{n_L}};\theta^*_{n_L}) \label{eq:max3} \\ & = &\bar R^L_{n_L} , \nonumber \end{eqnarray} where (\ref{eq:max3}) follows from the maximum entropy property of MRFs, and the first inequality from Proposition \ref{prop:strips}. This completes the proof. $\hfill \Box$ \end{proof} Together these three propositions indicate that $\bar R^L_{n_L}$ and $\bar R^S_{n_S}$ always behave as in Figure \ref{fig:example} (a), which, as discussed in the next section, plots them for a specific case. They also illustrate the potential tradeoffs between line width $n_L$ and strip width $n_S$. Specifically, by increasing $n_L$ the line rate $\bar R^L_{n_L}$ decreases, though the fraction $\frac{n_L}{n_S+n_L}$ of pixels encoded at the higher rate increases, while increasing $n_S$ increases the fraction $\frac{n_S}{n_L+n_S}$ of pixels encoded at the lower rate, though the strip rate $\bar R^S_{n_S}$ increases.
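These tradeoffs can be made concrete by writing the overall per-site rate as the fraction-weighted combination of the strip and line rates; a minimal sketch follows (the function name and the illustrative rate values are ours, chosen only for illustration, not measured values):

```python
def blended_rate(n_S, n_L, R_S, R_L):
    """Overall per-site rate when a fraction n_S/(n_S+n_L) of the rows is
    coded as strips at per-site rate R_S and the remaining fraction
    n_L/(n_S+n_L) as lines at per-site rate R_L."""
    total = n_S + n_L
    return (n_S / total) * R_S + (n_L / total) * R_L

# Illustration with hypothetical rates: since R_S < R_L always, the blended
# rate lies between the two, pulled toward R_S as n_S grows relative to n_L.
r = blended_rate(7, 1, 0.80, 0.95)
```

Because the strip rate is always the smaller of the two, enlarging the strip fraction lowers the blend, exactly the tension described above.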
In addition to considering the effect of $n_S$ and $n_L$ on rate, we can look at their influence on the rate redundancy $\Delta (n_S, n_L) {\stackrel{\Delta}{=}} \frac{1}{|V|}D(X_U || \tilde X_U)$, which is entirely due to encoding the lines independently and as moment-matching reduced MRFs. We use the shorthand notation $\tilde X_{B_{n_L}}$ to indicate the moment-matching reduced MRF on $B_{n_L}$ and $D(X_{B_{n_L}}||\tilde X_{B_{n_L}})$ to denote the divergence between the marginal and moment-matching reduced MRF distributions for $X_{B_{n_L}}$. \vspace{2mm} \begin{proposition}\label{prop:redundancy} The per-row rate redundancy due to coding $X_U\sim p(G;X_U;\theta)$ as a reduced MRF $X_U\sim p(G_U;X_U;\theta^*_U)$ is \begin{eqnarray} \bar \Delta(n_S, n_L) &=& {n_S \over n_S + n_L} I(X_{r_1};X_{r_{-n_S}}) + {n_L \over n_S + n_L} D(X_{B_{n_L}}||\tilde X_{B_{n_L}}) , \nonumber \end{eqnarray} where $r_1$ is the 1st row of a line, and $r_{-n_S}$ is the last row of the previous line. \end{proposition} \vspace{2mm} \begin{proof} To prove the proposition, consider a joint distribution $p(x_1,\ldots,x_N)$ on $N$ variables, where we have in mind each variable representing one of the $N=k+1$ lines.
By approximating $p(x_1,\ldots,x_N)$ with $\tilde p(x_1,\ldots,x_N)=\prod_{i=1}^N\tilde p(x_i)$ we can see that the divergence between $p$ and $\tilde p$ is \begin{eqnarray} D(p || \tilde p) & = & \sum\limits_{x_1,\ldots,x_N}p(x_1,\ldots,x_N)\log\frac{p(x_1,\ldots,x_N)}{\tilde p(x_1)\cdots\tilde p(x_N)} \nonumber \\ & = & -\sum\limits_{x_1,\ldots,x_N}p(x_1,\ldots,x_N)\log\tilde p(x_1)\cdots\tilde p(x_N) - H(X_1,\ldots,X_N) \nonumber \\ & = & \sum\limits_{i=1}^N\sum\limits_{x_i}-p(x_i)\log\tilde p(x_i) - H(X_1,\ldots,X_N) \nonumber \\ & = & \sum\limits_{i=1}^N\left[H(X_i) + D(p(X_i)||\tilde p(X_i))\right] - H(X_1,\ldots,X_N) \nonumber \\ & = & \sum\limits_{i=1}^N\left[H(X_i) - H(X_i|X_{i-1},\ldots,X_1) + D(p(X_i)||\tilde p(X_i))\right] \nonumber \\ & = & \sum\limits_{i=2}^N I(X_i;X_{i-1}) + \sum\limits_{i=1}^N D(p(X_i) || \tilde p(X_i)) . \nonumber \end{eqnarray} Applying the stationarity assumption, weighting the last two terms by the (approximate) fractions in (\ref{eq:rate_approx}), and substituting $N=k+1$ and $X_i=X_{B_{n_L}}$ yields \begin{eqnarray} \bar \Delta(n_S, n_L) & = & {n_S \over n_S + n_L} I(X_{B_{n_L}};X_{B_{n_L},-n_S}) + {n_L \over n_S + n_L} D(X_{B_{n_L}}||\tilde X_{B_{n_L}}) , \nonumber \end{eqnarray} where $I(X_{B_{n_L}};X_{B_{n_L},-n_S})$ is the mutual information between two $n_L\times N$ rectangular blocks of sites separated by a $n_S\times N$ rectangular block of sites. To finish the proof, it suffices to consider $I(X_1,X_2;Y_1,Y_2)$ where $X_1-X_2-Y_1-Y_2$ form a Markov Chain. In this case, \begin{eqnarray} I(X_1,X_2;Y_1,Y_2) & = & H(Y_1,Y_2) - H(Y_1,Y_2 | X_1,X_2) \nonumber \\ & = & H(Y_1) + H(Y_2 | Y_1) - H(Y_1 | X_1,X_2) - H(Y_2 | Y_1,X_1,X_2) \nonumber \\ & = & H(Y_1) + H(Y_2 | Y_1) - H(Y_1 | X_2) - H(Y_2 | Y_1) \nonumber \\ & = & H(Y_1) - H(Y_1 | X_2) \nonumber \\ & = & I(Y_1;X_2) . 
\nonumber \end{eqnarray} Making the appropriate substitutions yields \begin{eqnarray} \bar \Delta(n_S, n_L) & = & {n_S \over n_S + n_L} I(X_{r_1};X_{r_{-n_S}}) + {n_L \over n_S + n_L} D(X_{B_{n_L}}||\tilde X_{B_{n_L}}) , \nonumber \end{eqnarray} where $I(X_{r_1};X_{r_{-n_S}})$ is the mutual information between the 1st row of a line and the last row of the previous line. This completes the proof. $\hfill \Box$ \end{proof} \vspace{2mm} This proposition shows specifically how the redundancy of RCC has two components: a correlation redundancy $I(X_{r_1};X_{r_{-n_S}})$ due to encoding the lines independently of one another, and a distribution redundancy $D(X_{B_{n_L}}||\tilde X_{B_{n_L}})$ due to approximating the lines as moment-matching reduced MRFs. \vspace{2mm} \begin{proposition}\label{prop:info} $I(X_{r_1};X_{r_{-n_S}})$ is decreasing in $n_S$. \end{proposition} \vspace{1mm} \begin{proof} We let $X_{r_{i,1}}$ denote the 1st row of the $i$-th line and $X_{r_{i-1,n_L}}$ and $X_{r_{i-1,n_L-1}}$ denote, respectively, the $n_L$-th and $(n_L-1)$-st rows of the $(i-1)$-st line. \begin{eqnarray} I(X_{r_1};X_{r_{-n_S}}) & = & H(G;X_{r_{i,1}};\theta) - H(G;X_{r_{i,1}} | X_{r_{i-1,n_L}};\theta) \nonumber \\ & = & H(G;X_{r_{i,1}};\theta) - H(G;X_{r_{i,1}} | X_{r_{i-1,n_L}}, X_{r_{i-1,n_L-1}};\theta) \label{eq:mark} \\ & > & H(G;X_{r_{i,1}};\theta) - H(G;X_{r_{i,1}} | X_{r_{i-1,n_L-1}};\theta) \label{eq:condineq} \\ & = & I(X_{r_1};X_{r_{-(n_S+1)}}) , \nonumber \end{eqnarray} where (\ref{eq:mark}) is due to the Markov property and (\ref{eq:condineq}) is due to removing conditioning. This completes the proof. $\hfill \Box$ \end{proof} \vspace{2mm} To analyze the distribution redundancy, we let $\tilde{\tilde {X}}_{B_{n}}$ be the marginal distribution of $X_{B_{n-1}}$ as a subset of the moment-matching reduced MRF ${\tilde {X}}_{B_{n}}$ on $B_{n}$.
More generally, $X_{B_n}$ decorated with $k$ ``tildes'' indicates the marginal distribution of $X_{B_{n-k+1}}$ as a subset of the moment-matching reduced MRF $\tilde X_{B_{n}}$ on $B_{n}$. Moreover, we let $\theta^*_n$ be shorthand for $\theta^*_{B_{n}}$. We then have the following recursive expression for the distribution redundancy. \vspace{2mm} \begin{proposition}\label{conj:div} \begin{eqnarray} D(X_{B_{n_L}}|| \tilde X_{B_{n_L}}) & = & \! D(X_{B_{n_L-1}}||\tilde X_{B_{n_L-1}}) - D(\tilde{\tilde{X}}_{B_{n_L}}||\tilde X_{B_{n_L-1}})\nonumber\\[.4ex] & & + \, H(G_{B_{n_L}};r_{n_L}|r_{n_L-1};\theta^*_{n_L}) - H(G;r_{n_L}|r_{n_L-1};\theta) , \nonumber \end{eqnarray} where $D(\tilde{\tilde X}_{B_{n_L}}||\tilde X_{B_{n_L-1}})$ is the divergence between the marginal distribution of $X_{B_{n_L-1}}$ as a subfield of $\tilde X_{B_{n_L}}$ and the reduced MRF $\tilde X_{B_{n_L-1}}$ on $B_{n_L-1}$, and where $H(\cdot ; r_n | r_{n-1} ; \cdot)$ is the conditional entropy of row $r_n$ conditioned on row $r_{n-1}$ for the specified graph and parameter vector. \end{proposition} \vspace{2mm} \begin{proof} We prove the proposition by using the fact that the divergence $D(X_{B_{n_L}}|| \tilde X_{B_{n_L}})$ between the marginal distribution of $X_{B_{n_L}}$ and the reduced MRF for $X_{B_{n_L}}$ can be expressed as the difference between the entropy of the latter and that of the former.
Specifically, \begin{eqnarray} D(X_{B_{n_L}}|| \tilde X_{B_{n_L}}) & = & H(G_{B_{n_L}};X_{B_{n_L}};\theta^*_{n_L}) - H(G;X_{B_{n_L}};\theta) \nonumber \\ & = & H(G_{B_{n_L}};X_{B_{n_L-1}};\theta^*_{n_L}) - H(G;X_{B_{n_L-1}};\theta) \nonumber \\ & & + H(G_{B_{n_L}};r_{n_L} | r_{n_L-1};\theta^*_{n_L}) - H(G;r_{n_L} | r_{n_L-1};\theta) \nonumber \\ & = & H(G_{B_{n_L-1}};X_{B_{n_L-1}};\theta^*_{n_L-1}) - D(\tilde{\tilde{X}}_{B_{n_L}}|| \tilde X_{B_{n_L-1}}) \nonumber \\ & & - H(G;X_{B_{n_L-1}};\theta) + H(G_{B_{n_L}};r_{n_L} | r_{n_L-1};\theta^*_{n_L}) - H(G;r_{n_L} | r_{n_L-1};\theta) \nonumber \\ & = & H(G_{B_{n_L-1}};X_{B_{n_L-1}};\theta^*_{n_L-1}) - H(G;X_{B_{n_L-1}};\theta) \nonumber \\ & & - D(\tilde{\tilde{X}}_{B_{n_L}}|| \tilde X_{B_{n_L-1}}) + H(G_{B_{n_L}};r_{n_L} | r_{n_L-1};\theta^*_{n_L}) - H(G;r_{n_L} | r_{n_L-1};\theta) \nonumber \\ & = & D(X_{B_{n_L-1}}|| \tilde X_{B_{n_L-1}}) - D(\tilde{\tilde{X}}_{B_{n_L}}|| \tilde X_{B_{n_L-1}}) \nonumber \\ & & + H(G_{B_{n_L}};r_{n_L} | r_{n_L-1};\theta^*_{n_L}) - H(G;r_{n_L} | r_{n_L-1};\theta) . \nonumber \end{eqnarray} This completes the proof. $\hfill \Box$ \end{proof} \vspace{2mm} Furthermore, the divergence $D(\tilde{\tilde{X}}_{B_{n_L}}||\tilde X_{B_{n_L-1}})$ has the following recursive relationship. \vspace{2mm} \begin{proposition} \begin{eqnarray} D(\tilde{\tilde{X}}_{B_{n-k+1}}||\tilde X_{B_{n-k}}) & = & D(\tilde{\tilde{\tilde{X}}}_{B_{n-k+1}}||\tilde X_{B_{n-k-1}}) - D(\tilde{\tilde{X}}_{B_{n-k}}||\tilde X_{B_{n-k-1}})\nonumber\\[.5ex] && + H(G_{B_{n-k}};r_{n-k}|r_{n-k-1};\theta^*_{n-k}) - H(G_{B_{n-k+1}};r_{n-k}|r_{n-k-1};\theta^*_{n-k+1})\nonumber \end{eqnarray} where $D(\tilde{\tilde{\tilde{X}}}_{B_{n-k+1}}||\tilde X_{B_{n-k-1}})$ is the divergence between the marginal distribution of $X_{B_{n-k-1}}$ as a subfield of $\tilde X_{B_{n-k+1}}$ and the reduced MRF $\tilde X_{B_{n-k-1}}$ on $B_{n-k-1}$.
\end{proposition} \vspace{2mm} \begin{proof} \begin{eqnarray} D(\tilde{\tilde{X}}_{B_{n-k+1}}||\tilde X_{B_{n-k}}) & = & H(G_{B_{n-k}};X_{B_{n-k}};\theta^*_{n-k}) - H(G_{B_{n-k+1}};X_{B_{n-k}};\theta^*_{n-k+1}) \nonumber \\ & = & H(G_{B_{n-k}};X_{B_{n-k-1}};\theta^*_{n-k}) - H(G_{B_{n-k+1}};X_{B_{n-k-1}};\theta^*_{n-k+1}) \nonumber \\ & & + H(G_{B_{n-k}};r_{n-k} | r_{n-k-1};\theta^*_{n-k}) - H(G_{B_{n-k+1}};r_{n-k} | r_{n-k-1};\theta^*_{n-k+1}) \nonumber \\ & = & H(G_{B_{n-k-1}};X_{B_{n-k-1}};\theta^*_{n-k-1}) - D(\tilde{\tilde{X}}_{B_{n-k}} || \tilde{X}_{B_{n-k-1}}) \nonumber \\ & & - H(G_{B_{n-k+1}};X_{B_{n-k-1}};\theta^*_{n-k+1}) + H(G_{B_{n-k}};r_{n-k} | r_{n-k-1};\theta^*_{n-k}) \nonumber \\ & & - H(G_{B_{n-k+1}};r_{n-k} | r_{n-k-1};\theta^*_{n-k+1}) \nonumber \\ & = & D(\tilde{\tilde{\tilde{X}}}_{B_{n-k+1}} || \tilde{X}_{B_{n-k-1}}) - D(\tilde{\tilde{X}}_{B_{n-k}} || \tilde{X}_{B_{n-k-1}}) \nonumber \\ & & + H(G_{B_{n-k}};r_{n-k} | r_{n-k-1};\theta^*_{n-k}) - H(G_{B_{n-k+1}};r_{n-k} | r_{n-k-1};\theta^*_{n-k+1}) . \nonumber \end{eqnarray} This completes the proof. $\hfill \Box$ \end{proof} \vspace{2mm} Intuitively we would expect the term $D(X_{B_{n_L}}||\tilde X_{B_{n_L}})$ to decrease in $n_L$, as this divergence is zero when $n_L = M$, and indeed we conjecture that this is the case. At the very least, we expect ${1 \over n_L} D(X_{B_{n_L}}||\tilde X_{B_{n_L}})$ to decrease in $n_L$. We now consider the effects of changing $n_S$ and $n_L$ on redundancy, as expressed in Proposition \ref{prop:redundancy}. Increasing $n_S$ decreases distribution redundancy through the factor ${n_L \over n_S + n_L}$. It is not so clear what happens to the correlation redundancy, as increasing $n_S$ increases the fraction ${n_S \over n_S + n_L}$, while decreasing the information $I(X_{r_1};X_{r_{-n_S}})$.
However, if we keep $n_S$ and $n_L$ proportional to one another, as $n_S$ increases, the fraction stays the same, the correlation redundancy decreases, and assuming the conjecture, so too does distribution redundancy. Similarly, increasing $n_L$ decreases the correlation redundancy through the factor ${n_S \over n_S + n_L}$. Even assuming the above conjecture, it is not clear what happens to the distribution redundancy, as increasing $n_L$ increases the fraction ${n_L \over n_S + n_L}$, while decreasing the divergence $D(X_{B_{n_L}}||\tilde X_{B_{n_L}})$. However, as mentioned above, if $n_S$ and $n_L$ increase proportionally to one another, then the fraction stays the same and both the correlation and distribution redundancies decrease in $n_L$. \begin{figure*} \centerline{ \hbox{ \hspace{0.0in} \includegraphics[scale = .35]{rate_vs_width_lines_strips} \hspace{1.0in} \includegraphics[scale = .35]{rates_ns=1} } } \hbox{\small \hspace{1.38in} (a) \hspace{2.9in} (b)} \vspace{3mm} \centerline{ \hbox{ \hspace{0.0in} \includegraphics[scale = .35]{totalrate_ns=nl} \hspace{1.0in} \includegraphics[scale = .35]{rates_constantsum} } } \hbox{\small \hspace{1.38in} (c) \hspace{2.9in} (d)} \caption{ Rate (a) for lines (blue) and strips (red); (b) as a function of $n_L$ for $n_S=1$; (c) as a function of $n=n_S=n_L$; and (d) for $n_S+n_L = 8$.} \label{fig:example} \end{figure*} The complexity of this coding scheme can be expressed as \begin{eqnarray} C_{n_S,n_L} & = & \frac{n_S}{n_S+n_L}|\mathcal{X}|^{n_S}c_S + \frac{n_L}{n_S+n_L}|\mathcal{X}|^{n_L}c_L \nonumber \end{eqnarray} \noindent where $| \mathcal{X}|$ denotes the number of elements of $\mathcal X$, and $c_S$ and $c_L$ are factors relating the complexity of encoding a strip versus a line. For example, numerical simulations show that for $n_S=n_L$, the run-time involved in encoding a strip is a little higher than that for a line, which is due to additional operations for conditioning on the boundary of a strip. 
However, the difference becomes negligible as $n_S$ and $n_L$ become larger. As a result, the complexity $C_{n_S,n_L}$ is dominated by $\max\{n_S,n_L\}$. Given a constraint $\max\{n_S,n_L\}\leq n^*$ on the maximum exponent in the complexity, since both Proposition \ref{prop:info} and our conjecture indicate choosing $n_S$ and $n_L$ each to be as large as possible, we propose setting $n_S = n_L$. \section{Example: Homogeneous Ising Model}\label{sec:simulation} We simulated a homogeneous Ising model with edge parameter $\theta_{ij}=0.4$ and node parameter $\theta_i = 0$ using Gibbs sampling. To encode the lines with line width $n_L$, we approximate the moment-matching parameter $\theta^*_{n_L}$ by minimizing the empirical cross entropy \begin{eqnarray} H^{nK}_{n_L}(\tilde\theta_{n_L}) & = & \frac{1}{nK}\sum\limits_{L_i}\sum\limits_{j=1}^n -\log p(G_{L_i};{ \bf x }^{(j)}_{L_i}; \tilde\theta_{n_L}). \nonumber \end{eqnarray} \noindent Note that even for a homogeneous MRF, the moment-matching parameter for a subset $U$ will in general not be homogeneous. The line rate $\bar R^L_{n_L}$ is approximated by \begin{eqnarray} \hat R^L_{n_L} & = & \frac{1}{nK}\sum\limits_{L_i}\sum\limits_{j=1}^n -\log p(G_{L_i};{ \bf x }^{(j)}_{L_i};\theta^*_{n_L}) . \nonumber \end{eqnarray} Similarly, $\bar R^S_{n_S}$ is approximated by \begin{eqnarray} \hat R^S_{n_S} & = & \frac{1}{nK}\sum\limits_{S_i}\sum\limits_{j=1}^n -\log p(G;{ \bf x }^{(j)}_{S_i}|{ \bf x }^{(j)}_{\partial S_i};\theta) . \nonumber \end{eqnarray} Figure \ref{fig:example}(a) shows $\hat R^L_{n_L}$ and $\hat R^S_{n_S}$. As predicted by Propositions \ref{prop:strips}, \ref{prop:lines}, and \ref{prop:lines_over_strips}, $\hat R^S_{n_S}$ is increasing in $n_S$, $\hat R^L_{n_L}$ is decreasing in $n_L$, and $\hat R^S_{n_S} < \hat R^L_{n_L}$ for all $n_S,n_L$.
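For small widths, such Monte Carlo estimates can be cross-checked against an exact computation: for an Ising line with a single homogeneous edge parameter, the per-site entropy follows from the dominant eigenvalue of the column transfer matrix. A sketch is below; note it computes the entropy of a homogeneous line model (the moment-matching reduced MRF generally has an inhomogeneous $\theta^*$, as noted above), and the function name and finite-difference step are ours.

```python
import numpy as np
from itertools import product

def line_entropy_per_site(n, theta, h_step=1e-5):
    """Exact per-site entropy (in nats) of an infinitely long, width-n
    homogeneous Ising line with edge parameter theta and zero node
    parameter, via the dominant eigenvalue of the column transfer matrix."""
    def log_lambda(th):
        states = list(product([-1, 1], repeat=n))
        T = np.empty((len(states), len(states)))
        for a, s in enumerate(states):
            # vertical bonds are split evenly between the two columns of a step
            v_s = sum(s[i] * s[i + 1] for i in range(n - 1))
            for b, sp in enumerate(states):
                v_sp = sum(sp[i] * sp[i + 1] for i in range(n - 1))
                horiz = sum(s[i] * sp[i] for i in range(n))
                T[a, b] = np.exp(th * (horiz + 0.5 * (v_s + v_sp)))
        return np.log(np.max(np.linalg.eigvalsh(T)))  # T is symmetric
    # exponential family: per-column entropy = A(theta) - theta * A'(theta)
    A = log_lambda(theta)
    dA = (log_lambda(theta + h_step) - log_lambda(theta - h_step)) / (2 * h_step)
    return (A - theta * dA) / n
```

For $n=1$ this reduces to the classical 1D Ising chain value $\log(2\cosh\theta)-\theta\tanh\theta$, which provides a convenient correctness check.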
We computed $\hat R_{n_S,n_L}$ from $\hat R^L_{n_L}$ and $\hat R^S_{n_S}$ using (\ref{eq:RSL}), and as seen in Figure \ref{fig:example}(b), we found that $\hat R_{n_S,n_L}$ decreases as $n_L$ increases for constant $n_S$. We also found, see Figure \ref{fig:example}(c), that $\hat R_{n_S,n_L}$ decreases as $n$ increases when $n = n_L = n_S$, which is consistent with the earlier discussion that presumed the conjecture. Finally, we found that if one holds the sum $n_L + n_S$ constant, then the rate $\hat R_{n_S,n_L}$ is minimized when $n_L = 1$. This indicates that the information $I(X_{r_1};X_{r_{-n_S}})$ decreases with $n_S$ faster than the divergence $D(X_{B_{n_L}}||\tilde X_{B_{n_L}})$ decreases with $n_L$. Though not apparent in the figure, we found that $\hat R_{7,7} < \hat R_{7,1}$, an improvement over our earlier paper \cite{reyes2010}, which focused exclusively on $n_L = 1$. However, the improvement is nominal and therefore, at least for this particular value of $\theta_{ij}$, does not justify the significantly increased complexity. \section{Concluding Remarks} In this paper we have addressed the topic of tradeoffs in the choice of the width $n_L$ and spacing $n_S$ of the cutset components in Reduced Cutset Coding of Markov random fields. We have provided analysis from the perspective of the rate of this scheme in terms of the rates for encoding lines and strips and the relative contributions of each to the overall rate. We have shown that the rate for encoding lines with the moment-matching reduced MRF decreases with $n_L$, and that the rate for encoding strips increases with $n_S$, and on the basis of just these results one might conclude that large $n_L$ and small $n_S$ would provide an optimal combination. However, we also show that for all combinations of $n_L$ and $n_S$, the rate for encoding lines is strictly greater than the rate for encoding strips.
Moreover, the fraction ${n_L \over n_S + n_L}$ of sites encoded at the larger rate obviously increases with $n_L$, while the fraction ${n_S \over n_S + n_L}$ of sites encoded at the smaller rate obviously increases with $n_S$. Additionally, we have analyzed the problem from the perspective of the redundancy in the code, showing that this redundancy decomposes into a distribution redundancy due to approximating the lines as moment-matching reduced MRFs, and a correlation redundancy due to independent coding of the lines. We show that the correlation redundancy is decreasing in $n_S$ and provide analysis of the distribution redundancy and conjecture that it is decreasing in $n_L$. Indeed, numerical experiments with an Ising model corroborate this conjecture. Moreover, if we let $n_L$ be the height of the original image, then clearly the divergence $D(X_{B_{n_L}} || \tilde X_{B_{n_L}})=0$, and at least offhand, there is no reason to suspect that this divergence is non-monotonic in $n_L$. Naturally, though, further analysis of $D(X_{B_{n_L}}||\tilde X_{B_{n_L}})$ remains to be done, and at least at the moment, we suspect that the recursive relations for $D(X_{B_{n_L}}||\tilde X_{B_{n_L}})$ will be useful in proving our conjecture. While for general row-invariant statistics $t$ and exponential parameters $\theta$ it is not clear what the best choices of $n_L$ and $n_S$ should be, our numerical experiments with a uniform Ising model with parameters $\theta_{ij}=0.4,\theta_i=0$ suggest that letting $n_S$ and $n_L$ both be as large as possible achieves a lower rate. However, since the decrease in rate relative to the choice of a large $n_S$ with $n_L=1$ is in the fourth decimal place (in terms of per-site rate), the greatly increased complexity of encoding lines with large $n_L$ does not seem worthwhile. Nevertheless, more work remains to be done in understanding how differences in parameter values affect these tradeoffs.
And more generally, beyond the Ising model, we would like to understand how the apparent tradeoffs between $n_S$ and $n_L$ vary with $\theta$ for different types of statistic $t$. Previous work of the authors \cite{reyes2009b,reyes2011,reyes2013} has looked at the relationship between {\em positively correlated} statistics $t$ and quantities of interest, and it will be interesting to see if such statistics can be shown to have significant consequences for RCC. \section{References}
\section{Introduction} Hydraulic fractures (HF) are a class of tensile fractures that propagate in a material as a result of fluid pressurization \citep{detournay2016mechanics}. They are encountered in a number of industrial applications such as oil and gas production, geothermal energy and block caving mining. HFs also propagate naturally in the form of magmatic dikes \citep{rivalta2015review}, or at glacier beds due to the sudden release of surface melt water lakes \citep{TsRi10}. Investigation of the growth of such fluid-driven fractures under controlled conditions at the laboratory scale plays an important role in validating theoretical predictions. Since the early work of \cite{HuWi57}, the measurement of the hydraulic fracture geometry has evolved from simple post-mortem observations after the experiment to the use of continuous monitoring techniques during fracture growth. These developments have been slow, and in most cases only post-mortem observations are reported, although sometimes via high-resolution X-ray CT \citep{LiJu16}. A photometry method based on the intensity drop of a back-light source as it passes through a dyed fracturing fluid has been successfully used to monitor hydraulic fractures in transparent materials \citep{Bunger06}. Such an optical technique has made it possible to measure the evolution of both the fracture extent and the full field of fracture opening. Combined with particle image velocimetry, it also allows the fluid velocity field in the growing fracture to be measured \citep{ohubbert2018experimental}. These experimental techniques have provided invaluable data sets and insights into hydraulic fracture growth in transparent materials, such as PMMA, glass and hydrogel. They have notably helped in validating important theoretical predictions of hydraulic fracture mechanics \citep{bunger2008experimental,WuBu08,Bunger13,xing2017laboratory}. However, by definition, these optical methods cannot be used in non-transparent materials.
In rocks, acoustic emission (AE) monitoring is the main technique used to track the evolution of rupture (\cite{LoBy77,zoback1977laboratory,Ishi01,stanchits2014onset,StBu15,GoNa15,stoeckhert2015fracture} to cite a few). AE events, however, do not provide a direct measurement of fracture geometry as they are mostly associated with micro-slips around the growing fracture \citep{RoSt16}. Observations of self-potential \citep{MoGl07,HaRe13} during HF experiments correlate with pressure evolution and appear to highlight fluid flow patterns, but do not provide an accurate measurement of the growing fracture. Advanced imaging techniques such as neutron imaging have been recently attempted \citep{RoMa18}, as well as 2D digital image correlation (DIC) \citep{JeKe15,zhao20}. Neutron imaging necessitates the use of relatively small specimens (to achieve sufficient resolution), while the application of 2D DIC imposes the use of intricate specimen geometries and boundary conditions not suited to hydraulic fracturing. We focus here on active acoustic imaging, a method akin to a 4D seismic survey at the laboratory scale in the ultrasonic range. Earlier studies \citep{MeMa84,de1996physical,Glas98,van1999influence,groenenboom2000monitoring} have shown its capability to obtain quantitative information during laboratory hydraulic fracture experiments. The wave-field interacts in different ways with the growing fracture. It can be diffracted by the fracture tip (as well as the fluid front if a lag is present near the fracture tip), but also reflected by and transmitted through the fluid-filled fracture. The evolution of transmitted waves has notably made it possible to identify a dry region near the fracture tip (fluid lag) \citep{MeMa84,de1996physical}.
Records of the arrival times of waves diffracted by the fracture have made it possible to estimate the evolving fracture tip position under the hypothesis of a horizontal radial fracture centered on the injection point \citep{groenenboom2000monitoring,groenenboom2000scattering}. Opening of the fracture results in attenuation and delay of transmitted waves. This makes it possible to evaluate the fluid layer thickness by matching the spectrum of the transmitted signals with the transmission coefficient of a three-layer model \citep{groenenboom1998monitoring}. These two techniques (diffraction and transmission) have been shown to provide results in agreement with optical methods \citep{kovalyshen2014comparison}. In this paper, we improve the imaging of a growing hydraulic fracture by using an unprecedented number of piezoelectric source/receiver pairs and relaxing the assumption of a centered horizontal radial fracture. We test the method on two experiments performed in quasi-brittle rocks (marble and gabbro), which may exhibit different fracture growth behavior compared to the PMMA, plaster or cement-based materials used in the previous studies cited above. We first present our experimental set-up, the rocks and the experimental conditions used. After illustrating the type of diffracted waves measured in these experiments, we develop an inverse problem for the fracture / fluid front reconstruction. This inversion is then performed repeatedly in time for each acquisition sequence. We use different shapes (ellipse, circle with or without tilt) to parameterize the fracture front geometry and use Bayesian model selection to rank these different models. We finally compare the results obtained from the inversion of diffracted waves with transmitted-wave data.
\section{Experimental methods} \subsection{Experimental set-up and specimen preparation} Hydraulic fracture growth experiments are carried out in 250~mm cubic rock samples under a true triaxial compressive state of stress as illustrated in Fig.~\ref{fig:Transducers}. The confinement is applied by symmetric pairs of flat jacks along the three axes of a poly-axial reaction frame. Compressive stresses up to 20 MPa can be applied prior to injection. The fracturing fluid is injected in a central wellbore by a syringe pump (ISCO D160) at a constant flow rate (in the range 0.001~mL/min to 107~mL/min) with a maximum injection pressure of 51~MPa. An interface vessel in the injection line allows the injection of a wide range of fluid types with viscosity ranging from 1 mPa.s to 1000 Pa.s. Due to the compliance of the injection system (i.e. volume of fluid in the injection line), upon fracture initiation the flow rate entering the fracture does not equal the pump injection rate $Q_o$ during a transient phase \citep{lhomme05,LeDe17}. A needle valve is thus placed in the injection line close to the well-head in order to control the release of fluid compressed during the pressurization phase. Using volume conservation within the injection system (pump to fracture inlet), it is possible to estimate the flow rate $Q_{in}(t)$ entering the fracture by taking the derivative of the fluid pressure measurements (see appendix \ref{app:inletflux} for details). \begin{figure} \centering \includegraphics[width=0.8\linewidth]{figures/Fig1Transducer.pdf} \caption{Schematic illustration of the rock sample showing the transducer disposition for the GABB-001 experiment. Additional holes are available in the platens allowing the use of various transducer dispositions. Two facing platens share the same transducer disposition, and source and receiver transducers are alternately located on opposite platens for robustness.
} \label{fig:Transducers} \end{figure} The rock sample is first rectified as a cube of 250~mm~$\times$~250~mm~$\times$~250~mm dimensions. We polish the specimen surfaces to minimize friction and to ensure a good contact between the piezo-electric transducers and the rock. A wellbore of 16 mm diameter is drilled through the block and a horizontal axisymmetric notch (with a diameter of 21~mm $\pm$~1~mm) is created in the middle of the sample via a specifically designed rotating cutting tool. The resulting notch is axisymmetric, with its plane perpendicular to the well axis. A completion tool connected to an injection tubing is epoxied in the wellbore and allows fluid to be injected only at the notch level. Active acoustic monitoring is integrated within the poly-axial cell. Sixty-four piezoelectric transducers are included in the loading platens: 32 transducers act as sources and 32 as receivers. This array of transducers consists of 10 shear-wave transducers and 54 longitudinal-wave ones. We use a source function generator connected to a high-power amplifier to send a Ricker excitation signal with a given central frequency that can be set between 300 and 750~kHz depending on the material type. The source signal is routed to one of the 32 source transducers via a multiplexer. The 32 receiver transducers are connected to a high-speed acquisition board in order to record the signal simultaneously on all receivers with a sampling frequency of 50~MHz. As the switching rate between sources is limited by the multiplexer, the excitation of a given source is repeated 50 times and the data are stacked to improve the signal-to-noise ratio. Cycling through the 32 sources defines an acquisition sequence and takes about 2.5 seconds in total. In addition to acoustic data, we record fluid injection pressure (upstream and downstream of the needle valve), and the volume and pressure of each flat-jack pair at 1 Hz. All the measurements are synchronized via a dedicated LabView application.
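The excitation waveform can be reproduced numerically; the sketch below uses the standard zero-phase Ricker (Mexican-hat) wavelet definition, since the generator's exact parameterization is not detailed here, and the function name and window length are our choices.

```python
import numpy as np

def ricker(fc, fs, half_width=4.0):
    """Zero-phase Ricker wavelet with central (peak) frequency fc [Hz],
    sampled at fs [Hz] over +/- half_width periods of fc."""
    t = np.arange(-half_width / fc, half_width / fc, 1.0 / fs)
    a = (np.pi * fc * t) ** 2
    return t, (1.0 - 2.0 * a) * np.exp(-a)

# A 750 kHz excitation digitized at the 50 MHz rate of the acquisition board
t, w = ricker(750e3, 50e6)
```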
\begin{table} \centering \caption{Material properties ($V_p$ and $V_s$ are measured directly on the cubic rocks during the pressurization stage prior to any fracture growth). \label{tab:Materials}} \begin{tabular}{c|c|c|c|c} \hline Rock & $V_p$ (m/s) & $V_s$ (m/s) & $\rho$ ($\times 10^3$kg/m$^3$) & Grain size (mm) \\ \hline Carrara marble & 6249.8 $\pm$ 54.0 & 3229.9 $\pm$ 176.2 & 2.69 & 0.1-0.2 \\ Zimbabwe gabbro & 6679.0 $\pm$ 113.2 & 3668.5 $\pm$ 41.3 & 3.00 & 1-3 \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Sample configuration and experimental parameters for Zimbabwe gabbro and Carrara marble. \label{tab:ExpInfo}} \begin{tabular}{c|c|c|c|c} \hline Rock & Block size & $\sigma_3$ & $\sigma_1=\sigma_2$ & Location of the notch from \\ sample & (mm) & (MPa) & (MPa) & the sample bottom $x_3$ (mm)\\ \hline GABB-001 & $250\times 250\times 251$ & 0.5 & 10.5 & 128.5\\ MARB-005 & $257\times 256\times 256$ & 10 & 20 & 131\\ \hline & Fracturing fluid & Viscosity & Injection rate & System compliance\\ & & $\mu$ (Pa.s) & $Q_o$ (mL/min) & $U$ (mL/GPa)\\ \hline GABB-001 & Glycerol & 0.6 & 0.2 & 217.3 \\ MARB-005 & Silicone oil & 100 & 0.2 & 282.5 \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Characteristic time-scales for a radial hydraulic fracture: viscosity-toughness transition $t_{mk}$, fluid lag disappearance time scale $t_{om}$. We estimate these time-scales using the averaged entering flow rate into the fracture $\langle Q_o \rangle$ during the fracture duration $t_{prop}$ and the following material properties: a) for Zimbabwe gabbro, $E=68.4$~GPa \citep{TruStone}, $\nu=0.3$ (assumed), $K_{Ic}=3.03$ MPa.m$^{1/2}$ \citep{meredith1985fracture}; b) for Carrara marble, $E=65$~GPa, $\nu=0.25$ \citep{gulli2015mechanical}, $K_{Ic}=1.38$ MPa.m$^{1/2}$ \citep{ouchterlony1990fracture}.
\label{tab:ExpScaling}} \begin{tabular}{c|c|c|c|c} \hline Rock & $\langle Q_o \rangle$ & Propagation & $t_{mk}$ & $t_{om}$ \\ & (mL/min) & duration $t_{prop}$ (s) & (s) & (s) \\ \hline GABB-001 & 0.074 & $\approx$ 410 & $4.0\times 10^{-4}$ & $3.3\times 10^5$ \\ MARB-005 & 0.046 & $\approx$ 582 & $5.0\times 10^4$ & $5.8\times 10^3$ \\ \hline \end{tabular} \end{table} \subsection{Laboratory hydraulic fracturing experiments} We discuss in this paper two experiments performed respectively in Zimbabwe gabbro and Carrara marble. Zimbabwe gabbro (plagioclase, mica, biotite and amphibole, quartz) has a larger grain size than Carrara marble (calcite, mica), as illustrated in Fig.~\ref{fig:PostMortem}, which usually implies a larger fracture process zone \citep{ouchterlony1982review}. Both rocks have isotropic acoustic properties (see Table~\ref{tab:Materials}). We impose a bi-axial state of stress on the block setting $\sigma_1=\sigma_2$, while the minimum stress $\sigma_3<\sigma_1=\sigma_2$ is set perpendicular to the wellbore in order to favour a planar fracture initiating from the axisymmetric notch, in other words, promoting a fracture transverse to the wellbore (as shown in Fig.~\ref{fig:PostMortem}). Glycerol (gabbro) or silicone oil (marble) are used as fracturing fluids. The fluid is injected at a constant rate. All experimental parameters are listed in Table \ref{tab:ExpInfo}. The active acoustic monitoring is conducted with a central frequency of 750~kHz for the source excitation. This results in a wavelength of around 9~mm for compressional waves in both rocks, larger than the grain size, which is at most 3~mm in gabbro and 0.2~mm in marble. In both experiments, acoustic acquisition is performed at a lower repetition rate during the pressurization phase and then every 4 seconds during fracture propagation, as shown in Fig.~\ref{fig:G01NonAcoustic} and Fig.~\ref{fig:M05NonAcoustic}.
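The quoted wavelength follows directly from $\lambda = V_p/f$ with the velocities of Table~\ref{tab:Materials}; a one-line check (variable names are ours):

```python
# P-wave wavelength lambda = Vp / f at the 750 kHz central frequency
f = 750e3                   # source central frequency [Hz]
lam_gabbro = 6679.0 / f     # gabbro Vp [m/s] -> wavelength [m]
lam_marble = 6249.8 / f     # marble Vp [m/s] -> wavelength [m]
```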
\subsubsection{Injection Design} The propagation of a fluid-driven fracture is a multiscale physical process. For a radial hydraulic fracture propagating in a tight material, it is now well established that, at early time, a fluid-less cavity (fluid lag) forms at the fracture tip due to the injection of a viscous fluid (see \cite{detournay2016mechanics} and references therein). The fluid front then catches up with the fracture front over a characteristic time-scale \begin{equation} t_{om}=E^{\prime 2} \mu^\prime /\sigma_3^3 \end{equation} where $E^\prime=\frac{E}{1-\nu^2}$ is the plane-strain elastic modulus related to the Young's modulus and Poisson's ratio, $\mu^\prime=12\mu$ with $\mu$ the fluid viscosity and $\sigma_3$ is the minimum applied stress (normal to the fracture plane). In addition, as the perimeter of the radial fracture grows, the energy spent in creating new fracture surfaces increases. The propagation switches from a regime dominated by fluid viscosity to a regime dominated by fracture toughness. This evolution is captured by a dimensionless toughness $\mathcal{K}=(t/t_{mk})^{1/9}$, where $t_{mk}$ is the transition time-scale from the viscosity to toughness dominated regimes of growth: \begin{equation} t_{mk}=\frac{E^{\prime 13/2}Q_o^{3/2}\mu^{\prime 5/2}}{K^{\prime 9}},\quad K^\prime=\sqrt{\frac{32}{\pi}}K_{Ic}, \end{equation} where $K_{Ic}$ is the mode I fracture toughness. More precisely, the fracture grows in the viscosity dominated regime as long as $\mathcal{K}\lesssim 1$ and strictly in the toughness dominated regime for $\mathcal{K}\gtrsim 3.5$ \citep{savitski2002propagation}. Moreover, the fluid lag vanishes at all times in the toughness dominated regime \citep{lecampion2007implicit}. In other words, if $t_{mk}\ll t_{om}$, no fluid lag is observed.
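For concreteness, the two time-scales above can be evaluated numerically from the tabulated parameters; the following Python sketch (unit conversions are ours) reproduces the values reported in Table~\ref{tab:ExpScaling}:

```python
import math

# Characteristic time-scales of a radial hydraulic fracture, evaluated with
# the parameters of Tables 1-3 (SI units throughout).
def timescales(E, nu, mu, Q_o, K_Ic, sigma_3):
    """Return (t_mk, t_om) in seconds."""
    Ep = E / (1.0 - nu**2)                         # plane-strain modulus E'
    mup = 12.0 * mu                                # mu' = 12 mu
    Kp = math.sqrt(32.0 / math.pi) * K_Ic          # K' = sqrt(32/pi) K_Ic
    t_mk = Ep**6.5 * Q_o**1.5 * mup**2.5 / Kp**9   # viscosity -> toughness
    t_om = Ep**2 * mup / sigma_3**3                # fluid-lag disappearance
    return t_mk, t_om

ML_MIN = 1e-6 / 60.0  # mL/min -> m^3/s
# GABB-001: glycerol, <Q_o> = 0.074 mL/min, sigma_3 = 0.5 MPa
t_mk_g, t_om_g = timescales(68.4e9, 0.3, 0.6, 0.074 * ML_MIN, 3.03e6, 0.5e6)
# MARB-005: silicone oil, <Q_o> = 0.046 mL/min, sigma_3 = 10 MPa
t_mk_m, t_om_m = timescales(65e9, 0.25, 100.0, 0.046 * ML_MIN, 1.38e6, 10e6)
# t_mk_g ~ 4e-4 s << t_prop and t_mk_g << t_om_g: toughness dominated, no lag.
# t_mk_m ~ 5e4 s >> t_prop, t_om_m ~ 5.8e3 s: lag-viscosity dominated.
```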
Experimentally, for a given rock, one can adjust the injection rate $Q_o$, fluid type (viscosity $\mu$) and the minimum stress perpendicular to the fracture plane $\sigma_3$ to explore a given propagation regime. We refer to \cite{BuJe05} for the proper scaling and experimental design of laboratory hydraulic fracture experiments. The two experiments reported herein are characteristic examples of two very different hydraulic fracture propagation regimes (toughness and lag-viscosity dominated). Table \ref{tab:ExpScaling} lists the corresponding time-scales estimated using values of the rock properties from the literature and the averaged injection rate. In both experiments, the compliance of the injection system is rather large. As a result, the flow rate entering the fracture $Q_{in}$ is neither constant nor equal to the pump rate. It can however be estimated from the injection pressure and injection system compliance (see appendix \ref{app:inletflux} for details). The GABB-001 experiment is toughness dominated as the propagation time is much larger than the viscosity-toughness transition time-scale and no fluid lag is expected during the fracture propagation ($t_{mk}<t_{om}$). On the other hand, the MARB-005 experiment is such that the propagation occurs in the so-called lag / viscosity dominated regimes \citep{LeDe17}: the propagation duration is smaller than both $t_{om}$ and $t_{mk}$, with $t_{mk}>t_{om}$. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{figures/Fig2PostMortem.pdf} \caption{Thin section of Zimbabwe gabbro and Carrara marble and the postmortem photos of the cut blocks after GABB-001 and MARB-005 experiments. The wet region around the fracture in the gabbro block does not necessarily indicate a large leak-off during the injection.
The visible imbibition occurred mostly after the experiment as the block was cut and photographed 25 days after the experiment.} \label{fig:PostMortem} \end{figure} \subsubsection{Toughness-dominated experiment GABB-001} The gabbro experiment (GABB-001) is a so-called toughness dominated experiment. As a result, the fluid pressure downstream of the valve responds almost instantly to fracture growth. As illustrated in Fig.~\ref{fig:G01NonAcoustic}, the pressure increases linearly with a pressurization rate $Q_o/U$ (after fixing an initial leak in the injection line and adjusting the needle valve). Upon fracture initiation from the notch, the needle valve prevents a complete sudden release of the fluid pressurized in the line: the pressure downstream of the valve drops while the upstream one keeps increasing, illustrating the damping of the entering flux by the needle valve. The fracture initiation time is confirmed by the volume response of the flat-jacks parallel to the fracture plane, as well as by the acoustic data presented later. When the fracture front reaches the edges of the block, there are no more constraints on its deformation and a sudden response of the flat-jacks is observed as well as a kink in the downstream injection pressure (see Fig.~\ref{fig:G01NonAcoustic}). \begin{figure} \centering \includegraphics[height=0.28\linewidth]{figures/Fig3LGPressure.pdf} \includegraphics[height=0.28\linewidth]{figures/Fig3RGQin.pdf} \caption{GABB-001 experiment: evolution of the upstream and downstream pressure, and the mapping relation between the sequence number and acquisition time (left); and evolution of the entering flow rate of the fluid into the fracture, and the volume change of the top-bottom flat-jacks (right).
The yellow coloured time interval indicates the propagation of the fracture through the block (from the notch to the end of the block).} \label{fig:G01NonAcoustic} \end{figure} \subsubsection{Lag-viscosity experiment MARB-005} The marble experiment (MARB-005) is characterised by a large fluid lag and a strong viscous pressure drop. A restriction tube with a diameter of $1$~mm was placed in the injection line instead of the needle valve. Silicone oil was used as a fracturing fluid. Due to the large viscous effect, it takes time for the fluid to enter the fracture. The fracture initiation can be estimated from the response of the top-bottom flat-jacks and the appearance of acoustic diffraction. It is however indistinguishable from the pressure record, in line with previous observations \citep{ZhHa96,LeDe17}. As can be seen from Fig.~\ref{fig:M05NonAcoustic}, the entering flow rate remains limited after fracture initiation. A large fluid lag develops behind the fracture front (as can be seen from the acoustic data in Fig.~\ref{fig:Diffraction}). The pressure in the injection line keeps increasing until the fracture front almost reaches the edges of the block. The fluid front continues to grow afterwards but the elastic deformation (and thus the hydro-mechanical coupling) is now different. This results in an increase of the entering flow rate, which correlates with a small kink in the downstream pressure. We thus use this point to approximate the time at which the fracture reaches the end of the block. It is worth noting that the maximum pressure (often misleadingly denoted as the breakdown pressure) occurs just before that time.
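The estimate of the entering flow rate from the injection system compliance can be sketched as follows. This is an illustration only, run on synthetic pressure data: we assume the first-order mass balance $Q_{in}(t)\approx Q_o - U\,\mathrm{d}p/\mathrm{d}t$, while the actual estimate is detailed in the appendix.

```python
import numpy as np

# Illustrative sketch (assumed first-order relation, not the appendix
# derivation): the flow rate entering the fracture is the pump rate minus
# the fluid stored in the compliant injection line, Q_in ~ Q_o - U dp/dt.
def entering_rate(t, p_down, Q_o, U):
    """Q_in (m^3/s) from times t (s), downstream pressure p_down (Pa),
    pump rate Q_o (m^3/s) and injection system compliance U (m^3/Pa)."""
    return Q_o - U * np.gradient(p_down, t)

# Synthetic check: during pure pressurization (no fracture) the pressure
# rises at the rate Q_o/U, so the entering rate is zero.
U = 282.5e-6 / 1e9       # 282.5 mL/GPa -> m^3/Pa (MARB-005 value)
Q_o = 0.2e-6 / 60.0      # 0.2 mL/min -> m^3/s
t = np.linspace(0.0, 100.0, 201)
p = (Q_o / U) * t        # linear pressurization
Q_in = entering_rate(t, p, Q_o, U)
```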
\begin{figure} \centering \includegraphics[height=0.28\linewidth]{figures/Fig4LMPressure.pdf} \includegraphics[height=0.28\linewidth]{figures/Fig4RMQin.pdf} \caption{MARB-005: evolution of the upstream and downstream pressure, and the mapping relation between the sequence number and acquisition time (left); and evolution of the entering flow rate of the fluid into the fracture, and the volume change of the top-bottom flat-jacks (right). The volume change of the top-bottom flat-jacks in MARB-005 is much smaller than for the GABB-001 experiment. This is most likely due to the presence of air bubbles inside the flat-jacks. The yellow coloured time interval indicates the propagation of the fracture through the block (from the notch to the end of the block).} \label{fig:M05NonAcoustic} \end{figure} \section{Examples of acoustic diffraction data} \begin{figure} \centering \includegraphics[width=0.48\linewidth]{figures/Fig5LG01S28R24WhiteEdit.pdf} \includegraphics[width=0.48\linewidth]{figures/Fig5RM05S26R10GrayEdit.pdf} \caption{Illustration of the different diffracted wave patterns in the GABB-001 and MARB-005 experiments. For MARB-005, a gray-scale image of the data is shown combined with a wiggle plot. The yellow coloured time interval represents approximately the propagation of the fracture (from the notch to the end of the block). } \label{fig:Diffraction} \end{figure} As observed in previous laboratory experiments \citep{groenenboom2000scattering,deGr01}, the initial notch, fracture front, and fluid front \citep{groenenboom2000monitoring} may all serve as a source of diffraction. Each receiver may record both compressional (P-wave) and shear (S-wave) components of the wave depending on the incident angle. Following \cite{groenenboom2000scattering}, we categorise the different acoustic events. We denote the diffraction along the fracture or fluid front with a 'd' and the interactions (reflection or diffraction) at the notch with an 'n'.
We recall here some travel paths of diffracted waves observed in the two experiments presented here: \begin{itemize} \item direct diffraction of the body wave at the fracture or fluid front with or without mode conversion: for example compressional waves diffracted by the fracture front without mode conversion (PdP), or shear waves diffracted by the fracture front with mode conversion (SdP). \item diffraction of the head wave (P wave guided by the fracture interface, denoted as H \citep{savic1995ultrasonic}) at the fracture tip with or without mode conversion. For example, PnHdP represents a P wave that is guided along the fracture interface after interacting with the notch. It is then diffracted at the fracture tip without mode conversion (see Fig.~\ref{fig:Diffraction}). \end{itemize} Previous studies \citep{groenenboom2000scattering,deGr01} have observed more events related to reflections of the wavefield at the borehole tube and generalized Rayleigh waves propagating along the fracture. Such diffraction events arrive much later than the direct diffraction of body waves. Their signal-to-noise ratio is often not sufficient to allow a proper picking of these later arrivals. We thus focus mainly on the diffracted body waves (PdP, PdS and SdP here). As shown in Fig.~\ref{fig:Diffraction}, we are only able to observe PdP and PdS arrivals in GABB-001 for the chosen transducer pair, while for most transducer pairs only PdP arrivals can be clearly identified. In MARB-005, we are able to recognize more diffracted waves, notably by both the fracture and the fluid fronts but also by the notch (PnHdP). In particular, the fluid front acts as a strong diffractor. There are multiple techniques to visualize the evolution of the diffracted waves, as shown in Fig.~\ref{fig:Diffraction}.
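The visualization techniques discussed next amount to differencing traces along the acquisition-sequence axis; a minimal sketch, assuming a hypothetical `traces` array of shape (sequences, samples) for one source-receiver pair:

```python
import numpy as np

# Two ways of removing the direct wavefield from a (n_seq, n_samples) array
# of traces for one source-receiver pair (hypothetical data layout):
def remove_baseline(traces):
    """Subtract the fracture-free trace (first sequence) from every trace."""
    return traces - traces[0]

def sequence_difference(traces):
    """Difference each sequence with the previous one: a high-pass filter
    along the sequence axis; returns n_seq - 1 differenced traces."""
    return np.diff(traces, axis=0)
```

Differencing consecutive sequences suppresses any slowly evolving scattering background, whereas the baseline subtraction leaves it in place.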
\cite{groenenboom1998acousticThesis,savic1995ultrasonic} have proposed to remove the direct incident wavefield by subtracting from the signal recorded at a given time the one recorded in the absence of the fracture at the beginning of the experiment. This is sufficient to properly see the first diffracted wave arrival PdP. However, we have found that for our experiments, subtracting from the signal recorded at a given acquisition sequence the one of the previous sequence provides a clearer image of the diffracted waves (notably the later ones). This is akin to a high-pass filter along the experimental time axis (the sequence-number dimension) of the diffraction plots as in Fig.~\ref{fig:Diffraction}. The blurrier image obtained when differencing with the first sequence is likely due to the evolution of the scattering background associated with the fracture roughness. The diffracted coda wave is much noisier in the gabbro than in the marble experiment, in line with the difference in rock grain size and in the fracture roughness observed postmortem. \section{Reconstruction of the fracture and fluid fronts using Bayesian inversion} From the picking of the diffracted wave arrivals (for different source-receiver pairs), we invert for the geometry of the fracture or fluid front. We do so using different geometries for the diffraction front. We use a Bayesian framework to rank these different geometrical models. The inverse problem is performed for each acquisition sequence independently in order to finally obtain the time evolution of the fracture geometry.
\subsection{Forward models} In the case of the direct diffraction of a body wave by the fracture front, the theoretical arrival $ t^d_{sr}$ of the diffracted wave for a chosen source-receiver pair is simply given by \begin{equation} t^d_{sr} = \frac{\left\lVert \bf{x}_s-\bf{x}_d\right\rVert}{V_{sd}}+\frac{\left\lVert \bf{x}_d-\bf{x}_r\right\rVert}{V_{dr}} \label{eq:arrivaltime} \end{equation} where $V_{sd}$ and $V_{dr}$ are respectively the velocities of the incident wave and diffracted wave (P-wave or S-wave). $\bf x_s$, $\bf x_r$ represent the coordinates of the source and receiver transducers and $\bf x_d$ the coordinates of the diffractor, as illustrated in Fig.~\ref{fig:DiffractionFrontIllustration}. For a given fracture front geometry, $\bf x_d$ is obtained as the point which gives the minimal diffracted wave arrival $t^d_{sr}$ for a given source-receiver pair. We assume here that the diffraction front is planar. We parametrize it using a simple shape. We consider a possible offset of the fracture geometric center $C$ with respect to the injection point and denote $\mathbf{x_c}=(x_1, x_2, x_3)$ its coordinates. We also allow a possible tilt of the fracture plane captured by the three Euler angles: the dip $\theta$, azimuth $\phi$, and precession $\psi$ (see Fig.~\ref{fig:DiffractionFrontIllustration}). \begin{figure} \centering \includegraphics[height=0.40\linewidth]{figures/Fig6DiffractionFrontIllustration.pdf} \caption{Illustration of the fracture front geometry and its corresponding diffractor position for a given source-receiver pair. The diffractor position $D$ (in green) characterises the shortest travel time between the source $S$ (in red) and the receiver $R$ (in blue) among all positions along a given fracture front. The fracture geometry is assumed to be planar and is defined in a local coordinate system ($\mathbf{e'_i}$, $i=1, 2, 3$) with its geometric center $C$ as the origin.
The geometric description in the global coordinate system ($\bf{e_i}$, $i=1,2,3$) is obtained after a coordinate transformation (rotation and translation). } \label{fig:DiffractionFrontIllustration} \end{figure} We use an ellipse or a circle to describe the geometry of the diffraction front. In order to account for a possible tilt of the fracture, we invert the data with four different forward models having different numbers of parameters $\mathbf{m}$ as listed in Table~\ref{tab:Model}. In the case of an elliptical diffraction front, 8 parameters describe its geometry: the semi-lengths of the ellipse $a$ and $b$, $\mathbf{x_c}=(x_1, x_2, x_3)$ the position of the geometric center and three Euler angles $\phi, \theta, \psi$ characterising the fracture plane orientation (see Fig.~\ref{fig:DiffractionFrontIllustration}). For a given geometrical model of the fracture front, we relate the measured diffracted arrivals for the different source-receiver pairs with the forward predictions as \begin{equation} {\bf d}=\mathbf{G}(\mathbf{m})+{\bf \epsilon} \end{equation} where $\bf d$ denotes the picked arrival times for the different source-receiver pairs, $\mathbf{G}(\mathbf{m})$ the arrival times predicted by the forward model, and ${\bf \epsilon}$ combines both measurement and modelling errors. For simplicity \citep{Tara05}, we will assume that ${\bf \epsilon}$ follows a Gaussian distribution with zero mean and variance $\sigma^2$. We will notably invert for $\sigma$ here, thus providing a measure of the modelling error. For a chosen source-receiver pair, we manually pick the arrival time of the diffracted waves for different sequences using plots similar to Fig.~\ref{fig:Diffraction}. A spline is first drawn along the diffracted arrival. The coordinates of the spline passing through the corresponding sequence numbers are then collected as the picked arrival times.
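A brute-force sketch of the forward model of Eq.~(\ref{eq:arrivaltime}) follows; the discretization of the front is ours, restricted for simplicity to a horizontal, axis-aligned elliptical front, with purely illustrative transducer positions:

```python
import numpy as np

# The diffractor is the point of the (here horizontal, axis-aligned)
# elliptical front minimizing the source-front-receiver travel time.
def diffracted_arrival(x_s, x_r, a, b, x_c, V_sd, V_dr, n=4096):
    """Minimal arrival time (s) and diffractor coordinates for one pair."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    front = np.stack([x_c[0] + a * np.cos(phi),
                      x_c[1] + b * np.sin(phi),
                      np.full(n, x_c[2])], axis=1)
    t = (np.linalg.norm(front - x_s, axis=1) / V_sd
         + np.linalg.norm(front - x_r, axis=1) / V_dr)
    i = int(np.argmin(t))
    return t[i], front[i]

# PdP example: circular front of radius 5 cm, transducers on the vertical
# axis through the fracture centre (illustrative positions only).
x_c = np.array([0.125, 0.125, 0.1285])
x_s = np.array([0.125, 0.125, 0.250])
x_r = np.array([0.125, 0.125, 0.000])
t_min, x_d = diffracted_arrival(x_s, x_r, 0.05, 0.05, x_c, 6250.0, 6250.0)
```

For a tilted or off-centred front, the same minimization applies after rotating and translating the discretized front points into the global frame.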
It is worth noting that the number of picked source-receiver pairs may vary with time, notably because the diffracted arrivals are less visible for some pairs at early times (close to initiation, when the fracture is small) and late times (due to the proximity of the fracture front to the edges of the block). The number of picked arrivals and their types of diffraction events for the different acquisition sequences are reported in Fig.~\ref{fig:PickedPairsNumber}. \begin{figure} \centering \includegraphics[height=0.40\linewidth]{figures/Fig7PairNumber.pdf} \caption{The number of diffracted arrivals picked from different source-receiver pairs in the GABB-001 and MARB-005 experiments. The corresponding acquisition time is indicated at the top of the figures with one tick every 15 seconds for GABB-001 and every minute for MARB-005.} \label{fig:PickedPairsNumber} \end{figure} \begin{table} \centering \caption{Model description and model parameters\label{tab:Model}. $N_p$ is the number of the parameters. $A=\ln a$, $B=\ln b$, $R=\ln r$ are log-transformed fracture size parameters ensuring that the original parameters $a$, $b$, $r$ remain strictly positive. $\mathbf{x_c}=(x_1, x_2, x_3)$ characterises the offset of the geometric center of the diffraction front with respect to the origin of the global coordinate system.} \begin{tabular}{c|c|c|c|c} \hline Model & Model description & $N_p$ & Model parameters $\bf m$ \\ \hline $\mathcal{M}_1$ & Elliptical shape & 8 & $[A, B, x_1, x_2, x_3, \psi, \theta, \phi]$ \\ $\mathcal{M}_2$ & Circular shape ($a=b=r$, $\psi=0$) & 6 & $[R, x_1, x_2, x_3, \theta, \phi]$ \\ $\mathcal{M}_3$ & Horizontal elliptical shape ($\theta=\phi=0$) & 6 & $[A, B, x_1, x_2, x_3, \psi]$ \\ $\mathcal{M}_4$ & Horizontal circular shape ($\psi=\theta=\phi=0$) & 4 & $[R, x_1, x_2, x_3]$\\ \hline \end{tabular} \end{table} \subsection{Inverse problem} We seek to estimate both the model parameters $\bf m$ as well as the measurement/model error $\sigma$ \citep{LeGu07}.
The likelihood of the data given the model parameters is assumed to be a multivariate normal probability density function (PDF) with standard deviation $\sigma$: \begin{equation} p({\bf d}|{\bf m},\sigma)=\frac{1}{(2\pi \sigma^2)^{N_{d}/2}}\exp\left(-\frac{1}{2 \sigma^2}\left({\bf d}-{\bf G}({\bf m})\right)^{T}\left({\bf d}-{\bf G}({\bf m})\right)\right) \label{eq:likelihood} \end{equation} where $N_{d}$ is the number of measurements. The standard deviation $\sigma$ encapsulates both measurement and modelling errors and is determined here during the inversion. We refer to it later as the estimated 'noise' level. It has to be ultimately compared with the typical accuracy of the picking of the diffracted wave arrivals (denoted as $\sigma_d$) to quantify the modelling error. We assume that the prior PDFs on the model parameters $\bf{m}$ and the noise level $\sigma$ are independent: $p({\bf m},\sigma)=p({\bf m})p(\sigma)$. The noise level can only be positive (a Jeffreys parameter). We thus invert for $\beta=\ln \sigma$, $\beta \in \left(-\infty,\infty\right)$, and assume a uniform prior PDF for $\beta$, $p(\beta)=1$ (i.e. $p(\sigma)=1/\sigma$, the Jeffreys prior \citep{Tara05}). We model the prior knowledge on the model parameters as a multivariate Gaussian PDF: \begin{equation} p({\bf m})=\frac{1}{\left(2\pi\right)^{N_{p}/2}|{\bf C}_{p}|^{1/2}}\exp\left(-\frac{1}{2}\left({\bf m}-{\bf m}_{p}\right)^{T}{\bf C}_{p}^{-1}\left({\bf m}-{\bf m}_{p}\right)\right) \label{eq:prior} \end{equation} where $\mathbf{m}_{p}$ are the prior means for the $N_p$ model parameters (see Table~\ref{tab:Model}) and we use $A=\ln a, B=\ln b, R=\ln r$ as the fracture dimensions ($a, b, r$) must be strictly positive. We assume that the different model parameters are a-priori un-correlated (${\bf C}_p$ is diagonal). As shown in Table~\ref{tab:Priors}, the same prior standard deviations are taken for all the models and are chosen to be rather uninformative.
The vertical position of the fracture center $x_3$ and the tilting angle of the fracture plane $\theta$ are however more constrained than the other parameters according to the postmortem analysis of the fracture plane location inside the block. \begin{table} \centering \caption{Table of priors used for different models. Prior standard deviations are shown in the parentheses. \label{tab:Priors}} \begin{tabular}{c|c|c|c|c|c|c} \hline Experiment & $A$, $B$, $R$ & $x_1$, $x_2$ (m) & $x_3$ (m)& $\psi$ (rad) & $\theta$ (rad) & $\phi$ (rad)\\ \hline GABB-001 & ln(0.05) (-ln(0.125)) & 0.125 (0.02) & 0.1285 (0.006) & 0 ($\pi/4$) & 0 ($\pi/60$) & 0 ($\pi/2$) \\ MARB-005 & ln(0.05) (-ln(0.125)) & 0.125 (0.02) & 0.131 (0.013) & 0 ($\pi/4$) & 0 ($\pi/40$) & 0 ($\pi/2$) \\ \hline \end{tabular} \end{table} Using Bayes' theorem and treating the probability of the data being observed as a normalizing constant, introducing $\mathbf{z}=(\mathbf{m},\,\beta)$ we write the posterior PDF (up to this constant) as $ \Pi(\mathbf{z}|\mathbf{d})=p(\mathbf{d}|\mathbf{z})p(\mathbf{m})p(\beta)$. Several techniques can be used to quantify such a posterior PDF, ranging from global Markov-Chain Monte Carlo (MCMC) sampling to local quasi-Newton searches. Our aim here is to seek the most probable solution (PDF mode) and estimate the posterior uncertainties around this solution. This is equivalent to finding the minimum of $-\text{ln }\Pi(\mathbf{z}|\mathbf{d})$.
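Dropping additive constants, the objective minimized can be written compactly; a sketch under the stated assumptions (Gaussian likelihood and prior, flat prior on $\beta$), with a placeholder forward model `G`:

```python
import numpy as np

# -ln posterior for z = (m, beta), beta = ln(sigma), up to additive constants:
# Gaussian likelihood, Gaussian prior on m (diagonal C_p), flat prior on beta.
def neg_log_posterior(z, d, G, m_p, c_p_diag):
    m, beta = z[:-1], z[-1]
    r = d - G(m)                                   # data residuals
    nll = 0.5 * (r @ r) * np.exp(-2.0 * beta) + d.size * beta
    dm = m - m_p
    return nll + 0.5 * np.sum(dm**2 / c_p_diag)    # add -ln prior on m

# Toy usage: identity forward model, perfect fit, unit noise (beta = 0),
# prior centred on the solution -> the objective is exactly zero.
d = np.array([1.0, 2.0])
z = np.array([1.0, 2.0, 0.0])                      # m = d, beta = 0
val = neg_log_posterior(z, d, lambda m: m, d, np.ones(2))
```

Here the $N_d\,\beta$ term comes from $(N_d/2)\ln(2\pi\sigma^2)$ in the likelihood, which is why the noise level can be inverted for jointly with the model parameters.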
The posterior uncertainties can be estimated either via direct Monte Carlo sampling or, more cheaply, by approximating the posterior PDF as a multivariate Gaussian near its mode: \begin{equation} \Pi(\mathbf{m},\sigma|\mathbf{d})=\Pi(\mathbf{z}|\mathbf{d})\approx\Pi(\mathbf{\tilde{z}}|\mathbf{d})\text{exp}(-\frac{1}{2}(\mathbf{z}-\mathbf{\tilde{z}})^{T}\tilde{\mathbf{C}}^{-1}(\mathbf{z}-\mathbf{\tilde{z}})) \label{eq:Posterior} \end{equation} where $\mathbf{\tilde{z}}$ represents the most probable model parameters and modelling noise and $\mathbf{\tilde{C}}$ the posterior covariance matrix at $\mathbf{\tilde{z}}$. We have applied different algorithms to estimate $\mathbf{\tilde{z}}$. Although a simple local quasi-Newton scheme is sufficient in most cases, for robustness we present results obtained using a global minimization algorithm (differential evolution \citep{StPr97}). The posterior covariance matrix is estimated from the Hessian of $-\text{ln }\Pi(\mathbf{z}|\mathbf{d})$ at $\mathbf{\tilde{z}}$. We have notably compared the posterior mode and uncertainties with the results obtained from MCMC sampling of the posterior PDF $\Pi(\mathbf{m},\sigma|\mathbf{d})$. The results are similar. \subsection{Bayes factor} The data are inverted with the different geometrical models listed in Table \ref{tab:Model}. The selection of the most suitable model is based not only on the quality of fit, but must also account for model complexity. We use the Bayes factor to rank two possible models, assuming a-priori equi-probable models.
The Bayes factor between model $i$ and $j$ is defined as the ratio of the marginal probabilities of the data for the given models \citep{raftery1995hypothesis}: \begin{equation} B_{ij}=\frac{p({\bf d|{\bf \mathcal{M}_{i}}})}{p({\bf d|{\bf \mathcal{M}_{j}}})} \label{eq:Bayesfactor} \end{equation} where the marginal probability of the data $p(\mathbf{d|\mathcal{M}})$ is obtained by integrating the posterior PDF for the given model over the complete model parameter space: \begin{equation} p(\mathbf{d|}\mathcal{M})=\int_{\mathbf{m}_k} \int_{\sigma_k} \Pi(\mathbf{m}_{k},\sigma_{k}|\mathbf{d})\text{d}\mathbf{m}_{k}\text{d}\sigma_{k} \label{eq:modelpr} \end{equation} To obtain such a probability by cheaper means than Monte Carlo sampling, we approximate the posterior PDF around the most probable value as a multivariate Gaussian (see Eq.(\ref{eq:Posterior})). We thus estimate the marginal probability of the model (\ref{eq:modelpr}) as \begin{equation} p(\mathbf{d}|\mathcal{M})\approx\Pi(\mathbf{\tilde{z}}|\mathbf{d})(2\pi)^{(N_{p}+1)/2}|\tilde{\mathbf{C}}|^{1/2} \label{eq:PosteriorModel} \end{equation} As noted in \cite{raftery1995hypothesis}, for a Bayes factor $B_{ij}>10$, the data clearly favour the model $\mathcal{M}_i$ over the model $\mathcal{M}_j$ (respectively $\mathcal{M}_j$ over $\mathcal{M}_i$ for $B_{ij}< 0.1$). For a Bayes factor between $0.2$ and $5$, the models are equivalent. \section{Results and discussions} The number of source-receiver pairs with picked diffracted arrivals varies between acquisition sequences, types of diffracted waves (fracture tip PdP, fluid front PdP and PdS) and experiments. Except for early sequences, the number of picked pairs is always larger than 100 (see Fig.~\ref{fig:PickedPairsNumber}). For one chosen model and a given sequence, we perform the inversion using all the picked diffracted arrivals.
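Given the mode and posterior covariance of each model, Eq.~(\ref{eq:PosteriorModel}) and the Bayes factor of Eq.~(\ref{eq:Bayesfactor}) reduce to a few lines; a sketch working in log space to avoid underflow:

```python
import numpy as np

# Laplace approximation of the marginal probability of the data, in log form:
# ln p(d|M) ~ ln Pi(z_tilde|d) + ((Np+1)/2) ln(2 pi) + (1/2) ln|C_tilde|.
def log_evidence(log_post_at_mode, C_tilde):
    k = C_tilde.shape[0]                  # k = Np + 1 (model parameters + beta)
    _, logdet = np.linalg.slogdet(C_tilde)
    return log_post_at_mode + 0.5 * k * np.log(2.0 * np.pi) + 0.5 * logdet

def bayes_factor(log_ev_i, log_ev_j):
    """B_ij; values > 10 decisively favour model i, values < 0.1 model j."""
    return np.exp(log_ev_i - log_ev_j)
```

With comparable fits, the determinant and dimension terms penalize the extra parameters of the more complex model, which is how the eight-parameter ellipse can lose to the six-parameter circle despite fitting the arrivals at least as well.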
Fig.~\ref{fig:Illstration} displays an example of the model predictions and data (left panel) as well as the corresponding fracture front and diffracted wave ray paths for that sequence (right panel). We repeat the inversion procedure for each sequence and obtain the evolution of the fracture and fluid fronts together with their posterior covariances. We do this for all four forward geometric models in Table~\ref{tab:Model} and rank them using the estimated Bayes factors. We also compare the estimated noise level $\sigma$ with the picking accuracy of the diffracted wave arrivals $\sigma_d$. A large difference between $\sigma$ and $\sigma_d$ (about one order of magnitude) indicates that the chosen model is clearly not capable of properly reproducing the data. \begin{figure} \centering \includegraphics[height=0.30\linewidth]{figures/Fig8LG01Seq50M2Matching.pdf} \includegraphics[height=0.30\linewidth]{figures/Fig8RG01Seq50M2Diffraction.pdf} \caption{Comparison of the measured and predicted arrival times for model $\mathcal{M}_2$ - sequence 50 in the GABB-001 experiment and illustration of the diffractors (in green) along the fracture front with their corresponding travel paths (in gray) for acoustic diffraction. The red and blue points represent respectively the sources and receivers.} \label{fig:Illstration} \end{figure} \subsection{Toughness dominated experiment GABB-001} The GABB-001 experiment presents a steady fracture growth throughout the block as illustrated in Fig.~\ref{fig:G01FractureSize}. This is in line with the steadily increasing entering flux shown in Fig.~\ref{fig:G01NonAcoustic} as previously discussed. No fluid lag was observed from the acoustic diffraction data during the fracture growth. Larger posterior uncertainties and estimated noise levels (Fig.~\ref{fig:G01ModelComparison}) are found for early sequences due to the limited number of picked arrival pairs (see Fig.~\ref{fig:PickedPairsNumber}).
\begin{figure} \centering \begin{tabular}{cc} \includegraphics[height=0.33\linewidth]{figures/Fig9aG01shape.pdf}& \includegraphics[height=0.33\linewidth]{figures/Fig9bGFootPrint.pdf} \\ (a)& (b)\\ \includegraphics[height=0.33\linewidth]{figures/Fig9cG01Offset.pdf} & \includegraphics[height=0.33\linewidth]{figures/Fig9dG01Tilt.pdf}\\ (c) & (d)\\ \end{tabular} \caption{GABB-001 experiment: evolution of the fracture size (a), offset of the fracture center (c) and tilt of the fracture plane (d). The figure (b) displays the footprint of the fracture from a top view (from sequence 22 to sequence 82) shown every 10 sequences. The yellow dots in (b) indicate the diffractors at the fracture front for the different source receiver pairs picked.} \label{fig:G01FractureSize} \end{figure} All four geometrical models provide a good fit to the data with an estimated noise level of the same order of magnitude as the estimated manual picking accuracy $\sigma_d=0.5$~$\mu$s as shown in Fig.~\ref{fig:G01ModelComparison}. From the Bayes factor, the fracture shape appears to be better described by a radial geometry than an elliptical one, particularly during the first half of the propagation as $B_{21}, B_{23}>10$ (see Fig.~\ref{fig:G01ModelComparison}). The strong posterior correlation between the two semi-lengths for the elliptical model $\mathcal{M}_1$ (Fig.~\ref{fig:Correlation}) and an aspect ratio around 1 (Fig.~\ref{fig:AspectRatioEvolution}) confirm the preference for the circular geometry. The later stage of the fracture propagation presents a slightly larger estimated noise level (Fig.~\ref{fig:G01ModelComparison}) even though the number of picked arrivals remains large (see Fig.~\ref{fig:PickedPairsNumber}). This hints that the chosen models start to become inadequate at later times. This is most likely due to the non-uniformity of the stress field near the block edges, which results in a deviation of the fracture geometry from a circular / elliptical shape.
Over that period, a drop of $B_{21}, B_{23}$ in Fig.~\ref{fig:G01ModelComparison} can be observed. The inversion then slightly favours the elliptical fracture models, although their estimated noise levels increase similarly to those of the radial models (Fig.~\ref{fig:G01ModelComparison}). The fracture plane remains approximately horizontal during the whole fracture growth with a dip fluctuating around zero. This horizontal geometry is further confirmed by the Bayesian analysis. Given that $B_{24}$ remains in the range $0.1-10$ most of the time, the model ($\mathcal{M}_2$) allowing for a possible tilt of the fracture plane is nearly equivalent to the strictly horizontal one ($\mathcal{M}_4$) for the same radial fracture geometry. The fracture center deviates little in the vertical direction through the entire propagation (Fig.~\ref{fig:G01FractureSize}). As a result, strong posterior correlations among the different Euler angles as well as with the fracture center coordinates are observed for the elliptical model (Fig.~\ref{fig:Correlation}). \begin{figure} \centering \begin{tabular}{cc} \includegraphics[height=0.33\linewidth]{figures/Fig10LG01ModelNoise.pdf} & \includegraphics[height=0.33\linewidth]{figures/Fig10RG01Bayes.pdf} \end{tabular} \caption{GABB-001 experiment: evolution of the estimated noise level (left, the estimated picking error is $\sigma_d=0.5$~$\mu$s in GABB-001) and of the Bayes factor (right).
The gray region characterises $0.1<B_{ij}<10$, where $\mathcal{M}_i$ and $\mathcal{M}_j$ cannot be decisively ranked.} \label{fig:G01ModelComparison} \end{figure} \begin{figure} \centering \includegraphics[height=0.33\linewidth]{figures/Fig11CorrelationEdit.pdf} \caption{GABB-001 experiment: posterior correlation (absolute value) between $\mathcal{M}_1$ (ellipse) model parameters corresponding respectively to the early time and the mid-to-late stage of fracture growth.} \label{fig:Correlation} \end{figure} \subsection{Lag-viscosity dominated experiment MARB-005} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[height=0.33\linewidth]{figures/Fig12aM05shapeShort.pdf}& \includegraphics[height=0.33\linewidth]{figures/Fig12bMFootPrintEdit.pdf}\\ (a) & (b)\\ \includegraphics[height=0.33\linewidth]{figures/Fig12cM05CoordinatesShort.pdf}& \includegraphics[height=0.33\linewidth]{figures/Fig12dM05TiltEdit.pdf}\\ (c) & (d)\\ \end{tabular} \caption{MARB-005 experiment: evolution of fracture size (a), fracture center offset (c) and the tilt of the fracture plane (d). The yellow coloured time interval represents the propagation of the fracture through the specimen. The figure (b) displays the footprint of the fracture front and the fracture center from the top view (from sequence 23 to sequence 43, shown every 4 sequences). The yellow dots in (b) indicate the diffraction points at the fracture front for the different source receiver pairs picked.} \label{fig:M05FractureSize} \end{figure} \begin{figure} \centering \includegraphics[height=0.33\linewidth]{figures/Fig13LM05BayesEdit.pdf} \includegraphics[height=0.33\linewidth]{figures/Fig13RM05lBayesEdit.pdf} \caption{MARB-005 experiment: evolution of the Bayes factor for the fracture front (left) and the fluid front (right). The yellow coloured time interval represents approximately the propagation of the fracture from initiation to the edges of the block.
The gray region characterises $0.1<B_{ij}<10$, where $\mathcal{M}_i$ and $\mathcal{M}_j$ are considered equivalent in terms of fracture geometry description.} \label{fig:MBayes} \end{figure} \begin{figure} \centering \includegraphics[height=0.33\linewidth]{figures/Fig14LM05ModelNoiseEdit.pdf} \includegraphics[height=0.33\linewidth]{figures/Fig14RAspectRatioEdit.pdf} \caption{Evolution of the modelling noise in MARB-005 (left; the estimated picking error is $\sigma_d=1$~$\mu$s in MARB-005). The yellow coloured time interval represents approximately the propagation of the fracture from initiation to the edges of the block. Evolution of the aspect ratio (right) in the GABB-001 and MARB-005 experiments, assuming an elliptical fracture geometry ($\mathcal{M}_1$).} \label{fig:AspectRatioEvolution} \end{figure} Fig.~\ref{fig:M05FractureSize} shows a fast growth of the fracture front followed by a gradual evolution of the fluid front. Due to the strong viscous effect, we observe a continuous increase of the fluid pressure even after fracture initiation. The entering flow rate remains very small up to around 8 minutes after initiation (Fig.~\ref{fig:M05NonAcoustic}). It then increases significantly when the fracture reaches the edge of the block. During most of the fracture growth, the fracture front geometry is better described by the circular model $\mathcal{M}_2$, as $B_{21}, B_{23}, B_{24}>10$ (see Fig.~\ref{fig:MBayes}). The radial geometry is favoured for the fluid front shortly after fracture initiation. The tilted models, and particularly the elliptical tilted model (for the fluid front), become more probable after 44 to 45 minutes of injection, which corresponds to the time when the fracture front reaches the edge of the block (see Fig.~\ref{fig:MBayes}). The fracture plane does not remain horizontal during the fracture growth, as illustrated in Fig.~\ref{fig:M05FractureSize}.
This is in line with the evolution of the Bayes factors, with $B_{23}, B_{24}>10$ for most sequences (Fig.~\ref{fig:MBayes}). The tilt of the fracture plane has been confirmed by postmortem observations, with an average tilt of around $\pi/50$ ($3.6^\circ$). The fluid front, however, evolves differently from the fracture front, being characterised by a different size and center (Fig.~\ref{fig:M05FractureSize}). At early times, no decisive discrimination can be made between the tilted plane models and the horizontal ones. After 44 to 45 minutes of injection, when the fracture front reaches the edge, the fluid flows much more freely between the fractured surfaces, and the fluid front geometry tends to favour the tilted models given the increase of $B_{23}$ and $B_{24}$. The quality of the fit to the data is acceptable. The estimated modelling noise is of the same order of magnitude as the picking error $\sigma_d=1$~$\mu$s (Fig.~\ref{fig:AspectRatioEvolution}). The fluid front presents a lower noise level even though fewer picked arrivals were used in the inversion (Fig.~\ref{fig:PickedPairsNumber}), which corresponds to a better fit of the predicted arrivals to the picked ones for the fluid front. In the latter half of the fluid front propagation, we observe a significant difference between the noise levels of the different models, with $\mathcal{M}_1<\mathcal{M}_2<\mathcal{M}_3<\mathcal{M}_4$. This is consistent with the estimation of the Bayes factors, which also indicates a preference for $\mathcal{M}_1$ over $\mathcal{M}_2$, $\mathcal{M}_3$, $\mathcal{M}_4$ (Fig.~\ref{fig:MBayes}) during the same period. \subsection{Comparison with acoustic transmission data} We now compare our estimates of the fracture front position (and of the fluid front position for MARB-005) with the information carried by transmitted waves. Transmitted waves exhibit an increase in arrival time and an attenuation when passing through a fracture.
The attenuation of transmitted waves appears when the fracture front crosses the line between facing source-receiver transducers located on opposite platens. It can thus be compared with the estimation from diffracted waves. We evaluate a transmitted ``energy'' by computing the signal strength $E_i^{1/2}$ of a given wave arrival (P or S) for the acquisition sequence $i$ as \begin{equation} E_i^{1/2}=\sqrt{\sum_{j=j_{min}}^{j_{max}} u_{i}^2(t_j)} \label{eq:transmissionenergy} \end{equation} where $u_i(t)$ is a low-pass filtered (at 2~MHz) waveform cropped by a tapered Hamming window centered on the arrival of interest, with a size of $(t_{j_{max}}-t_{j_{min}})=14~\mu$s. We then choose a reference signal obtained before fracture initiation and define the attenuation ratio $(E_i/E_{ref})^{1/2}$. An alternative is to follow the procedure presented in \cite{groenenboom1998monitoring} to estimate the fracture width from transmitted waves. In the frequency domain, the transmitted compressional signal $\hat{u}_{fracture}$ can be compared to the prediction obtained as the product of a transmission coefficient $T(\zeta,\,w)$ for a three-layer model (rock-fluid-rock) and a reference signal $\hat{u}_{base}$ recorded before fracture initiation. The fluid layer thickness (fracture width) $w$ is estimated by minimizing the misfit for frequencies around the central frequency of the source signal ($\zeta_{min}<\zeta<\zeta_{max}$). In addition to frequency, the transmission coefficient $T(\zeta,\,w)$ depends on the solid and fluid properties (acoustic impedances) and on the thickness of the fluid layer (the fracture width). The method compares well with optical measurements for waves arriving at normal (90-degree) incidence to the fracture \citep{kovalyshen2014comparison}.
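Both diagnostics lend themselves to a compact numerical sketch. The following is only an illustrative implementation, not the code used in this study: \texttt{signal\_strength} evaluates Eq.~(\ref{eq:transmissionenergy}) on a low-pass filtered, Hamming-windowed waveform; \texttt{transmission\_coeff} is the standard normal-incidence transmission coefficient through a fluid layer sandwiched between two identical rock half-spaces; and \texttt{estimate\_width} recovers the width by a simple grid search over the misfit in the chosen frequency band (the function names, the FFT-based low-pass filter and the brute-force search are our own choices).

```python
import numpy as np

def lowpass(u, fs, f_cut=2e6):
    """Zero out spectral content above f_cut (2 MHz in the text)."""
    U = np.fft.rfft(u)
    f = np.fft.rfftfreq(len(u), 1.0 / fs)
    U[f > f_cut] = 0.0
    return np.fft.irfft(U, n=len(u))

def signal_strength(u, t, t_arrival, fs, window=14e-6, f_cut=2e6):
    """E^{1/2}: square root of the sum of squares of the low-pass filtered
    waveform, cropped by a tapered Hamming window of total duration
    `window` centered on the arrival of interest."""
    u_f = lowpass(u, fs, f_cut)
    mask = (t >= t_arrival - window / 2) & (t <= t_arrival + window / 2)
    return np.sqrt(np.sum((u_f[mask] * np.hamming(mask.sum())) ** 2))

def transmission_coeff(f, w, Vp_rock, rho_rock, Vp_fluid, rho_fluid):
    """Normal-incidence transmission coefficient T(f, w) of the three-layer
    (rock-fluid-rock) model for a fluid layer of thickness w."""
    Zr, Zf = rho_rock * Vp_rock, rho_fluid * Vp_fluid
    k = 2.0 * np.pi * f / Vp_fluid           # wavenumber in the fluid layer
    return 1.0 / (np.cos(k * w) - 0.5j * (Zr / Zf + Zf / Zr) * np.sin(k * w))

def estimate_width(u_frac, u_base, fs, band, widths, rock, fluid):
    """Pick the width w minimizing the misfit between the transmitted
    spectrum and T(f, w) times the reference spectrum, within `band`."""
    f = np.fft.rfftfreq(len(u_base), 1.0 / fs)
    sel = (f >= band[0]) & (f <= band[1])
    U_frac, U_base = np.fft.rfft(u_frac)[sel], np.fft.rfft(u_base)[sel]
    misfit = [np.sum(np.abs(U_frac
                            - transmission_coeff(f[sel], w, *rock, *fluid) * U_base) ** 2)
              for w in widths]
    return widths[int(np.argmin(misfit))]
```

On a synthetic pulse transmitted through such a layer, the grid search recovers the prescribed width, and halving the waveform amplitude halves the signal strength, as expected from linearity.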
We apply this method to the toughness-dominated GABB-001 experiment, as the fracture is flat (thus ensuring a 90-degree incident angle) and does not exhibit any fluid lag (we take $\rho=1260~\text{kg}/\text{m}^3$ and $V_{p}=1960~\text{m/s}$ for glycerol). We set the lower and upper frequency bounds as $\zeta_{min}=500$~kHz and $\zeta_{max}=1100$~kHz given the central frequency of $750$~kHz. We do not use such a method for the marble experiment (MARB-005), which exhibits a very large fluid lag. \subsubsection{Lag-viscosity dominated experiment MARB-005} \begin{figure} \centering \includegraphics[width=0.48\linewidth]{figures/Fig15LM05PSR9.pdf} \includegraphics[width=0.48\linewidth]{figures/Fig15RM05ShearSR30.pdf} \caption{Evolution of the recorded signal for opposite source-receiver pairs during the MARB-005 experiment: attenuation of the compressional (left) and shear wave (right).} \label{fig:MShearShadow} \end{figure} As illustrated in Fig.~\ref{fig:MShearShadow}, both compressional and shear waves lose a significant part of their amplitude between the times when the fracture front and the fluid front pass through the corresponding ray path. Following the arrival of the fluid front, the compressional wave regains its amplitude, but the shear wave does not. Such a characteristic shear wave shadowing, due to the lack of shear stiffness of the fluid, is consistent with previous observations \citep{groenenboom2000monitoring}. We use the time evolution of the transmitted energy ratio defined in Eq.~(\ref{eq:transmissionenergy}) for all the pairs of compressional-wave transducers in the opposite top and bottom platens (sub-parallel to the created fractures) and compare it with the fracture and fluid fronts previously reconstructed from diffracted waves (see Fig.~\ref{fig:Mtransmission}). We use a threshold of 0.8 for $(E/E_{ref})^{1/2}$ to binarize the loss of the transmitted wave (values below the threshold indicating loss).
As shown in Fig.~\ref{fig:Mtransmission}, we first clearly see that the transmitted signal is lost when the reconstructed fracture front reaches the transducer locations (black curve in the snapshot of Fig.~\ref{fig:Mtransmission}). The signal is then regained upon the arrival of the fluid front (blue curve in the snapshot of Fig.~\ref{fig:Mtransmission}), but eventually lost again due to the increase of the fracture width as the viscous fluid front penetrates further into the fracture. Overall, we observe in Fig.~\ref{fig:Mtransmission} a good agreement between the evolution of the signal strength ratio of the transmitted P waves and the evolution of the fracture and fluid fronts reconstructed from diffraction data. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/Fig16M05Matching.pdf} \caption{MARB-005 experiment: top view of the extent of the hydraulic fracture from transmitted waves (from top-bottom platens). The P-wave transducers turn yellow if the attenuation ratio of the signal strength $(E/E_{ref})^{1/2}$ goes below 0.8. We take sequence 16 (before fracture initiation) as the reference sequence for the calculation of the attenuation ratio. The corresponding footprints of the fracture and fluid fronts obtained from the inversion of diffracted waves are displayed for comparison.} \label{fig:Mtransmission} \end{figure} \subsubsection{Toughness-dominated experiment GABB-001} \begin{figure} \centering \includegraphics[width=0.48\linewidth]{figures/Fig17LG01PSR3.pdf} \includegraphics[width=0.48\linewidth]{figures/Fig17RG01ShearSR31.pdf} \caption{GABB-001 experiment: gradual attenuation of the compressional wave (left) and shear wave (right).} \label{fig:GShearShadow} \end{figure} In the GABB-001 experiment, the fracture front coincides with the fluid front (no fluid lag). The transmitted shear waves present a gradual attenuation after the arrival of the fracture front, as shown in Fig.~\ref{fig:GShearShadow}.
Such a weak shear shadowing effect is probably due to the smaller width of the fracture, as well as to the possible existence of solid bridges between the fractured surfaces. In order to separate the created fracture surfaces, we had to hammer a sub-sampled part of the gabbro specimen. On the other hand, it is worth noting that the marble block was already completely separated after the experiment and exhibited smoother surfaces. The compressional waves do not attenuate significantly during the fracture growth (Fig.~\ref{fig:GShearShadow}), but sufficiently to allow the reconstruction of the fracture width using the three-layer model described previously. The evolution of the fracture extent, captured via the fracture widths inverted for all the top-bottom platen P-transducer pairs, is shown in Fig.~\ref{fig:Gtransmission}. It agrees relatively well with the fracture front reconstructed from the diffracted waves, although a damage zone ahead of the reconstructed fracture tip may indeed exist. In addition, the order of magnitude of the fracture widths is in line with the predictions of the toughness-dominated solution for a radial hydraulic fracture \citep{savitski2002propagation}. Using the averaged entering flow rate $\langle Q_o\rangle$ and the properties listed in Table~\ref{tab:ExpScaling}, the toughness-dominated radial solution predicts a maximum width (at the fracture center) of respectively 12~$\mu$m, 19~$\mu$m and 24~$\mu$m for the presented sequences. The existence of a ``tapered'' width profile near the tip, associated with the process zone, is not entirely clear (given the resolution of the fracture width estimation). Another line of evidence for the presence of a process zone relates to the attenuation of transmitted waves propagating parallel to the fracture plane (above and below it).
We choose side-transmitted pairs with a good signal-to-noise ratio and evaluate their attenuation ratio using Eq.~(\ref{eq:transmissionenergy}). Transducers on the north and south sides of the block, which are located approximately 5~cm away from the fracture plane, present a change in signal strength of around 5\% after fracturing. The transducer pairs on the east and west sides, which are 2~cm from the fracture plane, present a larger attenuation, as shown in Fig.~\ref{fig:GSideTransmission}. A band of roughly $\pm$2~cm above and below the fracture plane thus appears to be affected. Such an attenuation has two possible explanations. First, the waves interact with the fracture itself, thus decreasing the received amplitude. Another possibility lies in the presence of micro-cracks surrounding the growing fracture, which are known to strongly attenuate transmitted waves \citep{ZhGr93,zang2000fracture}. Further analysis is required in order to discriminate between these two explanations. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/Fig18G01MatchingNew.pdf} \caption{GABB-001 experiment: top view of the extent of the hydraulic fracture. The P-wave transducers turn yellow if the fracture opening goes above $6$~$\mu$m, with an error of around $3$~$\mu$m.
This error on the fracture width estimation was obtained from the maximum width inverted when no fracture was present in the block (using acquisition sequences prior to fracture initiation).} \label{fig:Gtransmission} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\linewidth]{figures/Fig19TransmissionSideEdit.pdf} \caption{GABB-001 experiment: attenuation of the compressional waves propagating parallel to the fracture surface, at around 2~cm away from the fracture.} \label{fig:GSideTransmission} \end{figure} \section{Conclusions} We have improved the resolution of the monitoring of a growing hydraulic fracture using diffraction data recorded by a large number of piezoelectric source/receiver pairs. Using Bayesian inversion, we have developed a workflow to select the most probable fracture geometry from a finite number of simple models (circle, ellipse, horizontal or not). This model ranking allows one to gain confidence in possible significant deviations of the fracture from the simplest radial shape. Moreover, the inversion of the modelling error $\sigma$ (which combines model and measurement errors) allows one to quantify the ability of any chosen model to reproduce the data. The method has been successfully tested on two experiments representative of two very different HF propagation regimes (toughness and lag-viscosity dominated). In both cases, the fractures were mostly radial, although a deviation towards a more elliptical shape is visible when the fracture feels the edge of the specimen (where the applied stress field is likely less uniform). Although it is difficult to precisely gauge the accuracy of the reconstruction, the resulting posterior uncertainties of the fracture extent are around 1-2~mm for the gabbro experiment (GABB-001) and 2-4~mm for the marble experiment (MARB-005). The fronts reconstructed from diffracted waves agree well with the analysis of the attenuation of compressional waves traversing the propagating fractures.
It is also important to recall that, in our analysis, the data for one acquisition sequence are assumed to be acquired simultaneously, while an acquisition actually lasts about 2.5~seconds (spanning all the sources). As a result, this imaging technique is appropriate only for low-velocity fractures: the average fracture velocity is around 300~$\mu$m/s for GABB-001, and 1~mm/s for the (faster) MARB-005 experiment. The method presented here can be improved in a number of ways. First, instead of using parametrized fracture shapes, a direct extension is to use a 3D spline curve to describe the fracture front (at the expense of more model parameters). Secondly, in order to better quantify the possible damage around the growing fracture, the hypothesis of a constant wave velocity in the sample during fracture growth should be at least partly relaxed (to account for the effect of possible micro-cracking around the fracture). It would be interesting to combine the analysis of diffracted waves with recent acoustic tomography inversions that reconstruct such velocity changes in the bulk using only direct wave arrivals, combining passive and active acoustic data \citep{brantut2018time,aben2019rupture}. Full waveform inversion (likely in the frequency domain) would ultimately allow one to combine the information from diffracted, transmitted (and reflected) waves, but this requires proper sensor calibration and the use of computationally expensive models able to resolve the sharp discontinuities induced by the fluid-filled fracture. A simpler, more immediate improvement will likely come from the combination of the active 4D acoustic method presented here with passive listening for AE events (with localization and moment tensor estimation, see for example \cite{StBu15}). This will surely enhance our understanding of hydraulic fracture growth, in particular with respect to a better quantification of the fracture process zone in rocks.
In particular, the interplay between the evolution of the fluid lag and the process zone appears, according to recent theoretical predictions, to strongly influence the overall HF propagation \citep{garagash2019cohesive}. \paragraph*{Acknowledgements\\} { Partial support from the SCCER-SoE (second phase 2017-2020), funded by the Swiss National Science Foundation and the Swiss Innovation Agency InnoSuisse, is gratefully acknowledged. } \paragraph*{Data availability \\} { The raw and processed data of these two experiments as well as the inversion results will be made available via the Zenodo platform. } \paragraph*{CRediT Authors contributions \\} { Dong Liu: Conceptualization, Methodology, Data curation, Formal analysis, Visualization, Writing – original draft. \\ Brice Lecampion: Conceptualization, Methodology, Formal analysis, Supervision, Resources, Writing – review \& editing. \\ Thomas Bum: Conceptualization, Methodology, Data curation, Formal analysis. }
\section{Preliminaries} \label{sec:intro} In their 1973 paper \cite{NPS}, Piatetski-Shapiro and Novodvorsky defined a model for irreducible admissible representations of the group $\GSp(4)$ over a $p$-adic field called the Bessel model, and showed that the dimension of such an embedding is at most 1. Our first task will be to define a generalized Bessel model for irreducible admissible representations of $G = \GSp(2n)$, which we believe possesses the same uniqueness properties as the Bessel model on $\GSp(4)$; we offer this as a formal conjecture later in this section. We then prove that each unramified principal series representation of $\GSp(4)$ has a non-split Bessel model, and we provide an explicit integral representation of the corresponding Bessel functional, which we then generalize to $\GSp(2n)$. Finally, we will proceed to our ultimate goal of providing an explicit expression for the Iwahori-fixed vectors in the model. In particular, when $n=2$, the formula that we develop for the spherical function agrees with the formula for the spherical function in the Bessel model on $\SO(5)$ established by Bump, Friedberg, and Furusawa in \cite{BFF}. Assuming the conjectured uniqueness of the Bessel model on $\GSp(2n)$, we are able to extend these results to rank $n$. Along the way, we will describe how our construction of the Bessel functional fits into a conjectural program for connecting characters of the Iwahori-Hecke algebra $\calH$ of $G$ and multiplicity-free models of principal series representations. This program, formulated by Brubaker, Bump, and Friedberg in \cite{BBF2}, was motivated by the study of the Whittaker and spherical functionals, which it contains as special cases. 
We will show momentarily that the most natural way to view the connection between these models and characters of the Iwahori-Hecke algebra is from the perspective of the ``universal principal series.'' Our description of the universal principal series and its structure as an $\calH$-module follows the treatment provided by Haines, Kottwitz, and Prasad in \cite{HKP}. Although our results in this paper concern $\GSp(2n)$ over a $p$-adic field, we expect that the methods we use to analyze the Bessel model in this context can be used to analyze other models over other algebraic groups. With this in mind, we will place the following discussion in a more general context. In particular, for this section, let $G$ be a split, connected reductive group over a $p$-adic field $F$ with ring of integers $\goth{o}$ and uniformizer $\pi$. Let $k$ denote the residue field $\goth{o}/(\pi)$, and let $q$ denote its cardinality. Let $B$ be a Borel subgroup of $G$ with maximal torus $T$ and unipotent subgroup $U$ such that $B = TU$. Let $\overline{U}$ denote the unipotent subgroup opposite to $U$. We assume that these subgroups, as well as $G$, are defined over $\goth{o}$. Note that this means that $K = G(\goth{o})$ is a maximal compact subgroup of $G$. Let $J$ denote the Iwahori subgroup, which is the preimage of $B(k)$ under the canonical homomorphism $G(\goth{o})\to G(k)$. Let $\calH$ denote the Iwahori-Hecke algebra of $G$, which is the $\C$-algebra of functions $C_c(J\bs G/J)$, with multiplication given by convolution. We define the universal principal series $M$ to be the vector space $C_c(T(\goth{o})U\bs G / J)$. Evidently, we can make $M$ into a right $\calH$-module where $\calH$ acts by convolution. Now, observe that $T/T(\goth{o})$ is isomorphic to the cocharacter group $X_{\ast}(T)$ of $G$ under the map that sends $\mu \in X_{\ast}(T)$ to $\mu(\pi) \in T/T(\goth{o})$. We will write $\mu(\pi)$ as $\pi^{\mu}$ throughout this paper.
Define $R := C_c(T/T(\goth{o})) = \C[X_{\ast}(T)]$, and regard $R$ as a left $(T/T(\goth{o}))$-module via the inverse of the ``universal'' character $\chi_{\textup{univ}}: \pi^{\mu} \mapsto \pi^{\mu}$. If we use normalized induction to form $\ind_B^G \chi_{\textup{univ}}^{-1}$, and then take its $J$-fixed vectors $\ind_B^G(\chi_{\textup{univ}}^{-1})^J$, then we can see that $M \simeq \ind_B^G(\chi_{\textup{univ}}^{-1})^J$ as right $\calH$-modules; explicitly, we have $\eta: M \to \ind_B^G(\chi_{\textup{univ}}^{-1})^J$ where $$\eta(\phi)(g) = \sum_{\mu \in X_{\ast}(T)} \delta_B(\pi^{\mu})^{-1/2} \pi^{\mu} \phi(\pi^{\mu}g).$$ Here we can see the motivation for our terminology: if we are given an unramified principal series obtained from parabolic induction by a character $\chi: T/T(\goth{o}) \to \C^{\times}$, then $\chi$ determines a $\C$-algebra homomorphism $R \to \C$, and $$\C \otimes_R M \simeq \ind_B^G(\chi^{-1})^J,$$ the Iwahori-fixed vectors of our original unramified principal series. In order to gain a better understanding of the Hecke algebra $\calH$, we are going to make use of an alternate point of view of $M$. First, we note that $M$ is isomorphic to $\calH$ as a free, rank one right $\calH$-module; it has a $\C$-basis made up of the characteristic functions $1_{T(\goth{o})UwJ}$ where $w$ is an element of the affine Weyl group $\widetilde{W}$. The isomorphism from $\calH$ to $M$ is given by the map $h \mapsto 1_{T(\goth{o})UJ} \ast h$. We can define a left action of $\calH$ on $M$ via this isomorphism: in particular, we identify $h \in \calH$ with the endomorphism $$h: 1_{T(\goth{o})UJ} \ast h' \mapsto 1_{T(\goth{o})UJ} \ast (hh').$$ Using $\eta$, we can transfer this left $\calH$-action to $\ind_B^G(\chi_{\textup{univ}}^{-1})^J$, so that $h \in \calH$ sends $\phi_1 \ast h'$ to $\phi_1 \ast (hh')$, where $\phi_1 = \eta(1_{T(\goth{o})UJ})$. Note that this left action identifies $\calH$ with $\End_{\calH}(M)$.
Now, if we take the obvious left action of $R$ on $\ind_B^G(\chi_{\textup{univ}}^{-1})^J$ and transfer it via $\eta^{-1}$ to $M$, we see that $R$ embeds into $\End_{\calH}(M)$, and hence embeds into $\calH$. Additionally, the finite Hecke algebra $\calH_0 = C(J\bs K/J)$ is a subalgebra of $\calH$, and there is a vector space isomorphism $\calH \simeq R \otimes_{\C} \calH_0$. While we will often conflate $\pi^{\mu} \in R$ with its embedded image in $\calH$, we would like to point out that the image of $\pi^{\mu}$ is convolution with the characteristic function $1_{J\pi^{\mu}J}$ only when $\mu$ is dominant. We use $T_{s}$ to denote the generator $1_{JsJ}$ of $\calH_0$, where $s$ is a simple reflection in the Weyl group, $W$, of $G$. The generators of $\calH_0$ satisfy the same braid relations that the simple reflections in $W$ satisfy, in addition to satisfying the quadratic relation $$(T_s - q)(T_s + 1) = 0.$$ Finally, to understand $\calH$ in terms of these generators, we need the Bernstein relation, first proved in \cite{Lusz}, which says that, for $\pi^{\mu} \in R$ and $T_s \in \calH_0$, \begin{equation} \label{bernstein} T_s\pi^{\mu} = \pi^{s(\mu)}T_s + (1-q)\frac{\pi^{s(\mu)}-\pi^{\mu}}{1-\pi^{-\alpha^{\vee}}}, \end{equation} where $s = s_{\alpha}$ for a simple root $\alpha$ in the root system $\Phi$ of $G$. Recall that the spherical function, $\phi^{\circ}$, in $\ind_B^G(\chi_{\textup{univ}}^{-1})^J$ is defined as $$\phi^{\circ}(g) := \delta_B(\pi^{\mu})^{-1/2}\pi^{-\mu},$$ where $g = tuk$ is the Iwasawa decomposition of $g$ with $u \in U$, $k \in G(\goth{o})$, and $t \in T(F)$ with $t \equiv \pi^{\mu} \in T(F)/T(\goth{o})$. Using the Iwahori-Bruhat decomposition, we see that $$\phi^{\circ} = \sum_{w \in W} \phi_w,$$ where $\phi_w := \eta(1_{T(\goth{o})UwJ})$.
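As a quick sanity check on the Bernstein relation \eqref{bernstein} (a routine computation, spelled out here for convenience), take $\mu = \alpha^{\vee}$, so that $s(\mu) = -\alpha^{\vee}$; writing $x = \pi^{\alpha^{\vee}}$, the rational factor collapses to a Laurent polynomial:

```latex
T_s\,\pi^{\alpha^{\vee}}
  = \pi^{-\alpha^{\vee}}T_s
    + (1-q)\,\frac{x^{-1}-x}{1-x^{-1}}
  = \pi^{-\alpha^{\vee}}T_s + (q-1)\bigl(1+\pi^{\alpha^{\vee}}\bigr),
```

since $(x^{-1}-x)/(1-x^{-1}) = -(1+x)$. In particular, the right-hand side lies in $R\,T_s + R$, in line with the vector space decomposition $\calH \simeq R \otimes_{\C} \calH_0$.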
In order to provide an explicit expression for $\phi^{\circ}$ in the Bessel model, we are going to need to use the fact that the spherical function in the model is contained in a submodule isomorphic to $V_{\ve} :=\calH \otimes_{\calH_0} \ve$, where $\ve$ is a linear character of $\calH_0$. From the quadratic relation for $\calH_0$, we can see that the only possible eigenvalues for the generators of $\calH_0$ are $-1$ and $q$. The braid relations for $\calH_0$ then imply that we either have two or four linear characters of $\calH_0$, depending on whether or not the Dynkin diagram for $G$ is simply laced. We can see that $V_{\ve} \simeq R$ as vector spaces, so we can transfer the $\calH$-action on $V_{\ve}$ to $R$ via $v_{\ve} \mapsto r$, where $r$ is any element of $R$ and $v_{\ve}$ is the eigenvector of $\calH_0$ corresponding to $\ve$. In practice, our choice of $r$ such that $v_{\ve} \mapsto r$ will be crucial. Roughly stated, a goal of Brubaker, Bump, and Friedberg is to find many examples where, if $\calL$ is an $R$-valued map arising from a unique model, then there is a character $\ve$ of $\calH_0$ and a subgroup $S \subset G$ such that the transformation properties of $\calL$ under $S$ imply that $\calL$ is an $\calH$-map from $M$ to $V_{\ve}$; a key idea here is that the models are connected to the representations of $\calH_0$ via the Springer correspondence - we will discuss this connection further at the end of this section, as well as in Section \ref{sec:springer}. The simplest examples in this program are the Whittaker and spherical models. 
If we take $\calL$ to be the $R$-valued spherical functional, uniquely determined up to scalar by the condition that $\calL(k\cdot \phi) = \calL(\phi)$ for all $\phi \in \ind_B^G(\chi_{\textup{univ}}^{-1})$ and $k \in K$, and $\ve$ to be the trivial character on $\calH_0$, then it was shown by Brubaker, Bump, and Friedberg (based on the work of Casselman in \cite{Cas}) that $\calL$ is an $\calH$-intertwiner from $M$ to $V_{\ve}$; in the case where $\calL$ is taken to be the $R$-valued Whittaker functional, uniquely determined up to scalar by the condition that $\calL(\overline{u}\cdot \phi) = \psi(\overline{u})\calL(\phi)$ for all $\phi \in \ind_B^G(\chi_{\textup{univ}}^{-1})$ and $\overline{u} \in \overline{U}$, where $\psi$ is a non-degenerate character of $\overline{U}$, it was shown by Brubaker, Bump, and Licata, in \cite{BBL}, that $\calL$ is an $\calH$-intertwiner from $M$ to $V_{\ve}$, where $\ve$ is the sign character of $\calH_0$. Most recently, in \cite{BBF2}, Brubaker, Bump, and Friedberg showed that the Bessel functional on the doubly-laced group $\SO(2n+1)$ is an $\calH$-intertwiner from $M$ to $V_{\ve}$ in the manner described above; in this case, $\ve$ is the character of $\calH_0$ that acts by $-1$ on long simple roots and by $q$ on short simple roots. In general, we start with a subgroup $S$ of $G$ and a linear $\C$-valued character $\psi$ of $S$, and we look for an $R$-module homomorphism $\calL: \ind_B^G(\chi_{\textup{univ}}^{-1}) \to R$ such that \begin{equation}\label{equiv} \calL(s \cdot \phi) = \psi(s)\calL(\phi) \textup{ for all $s \in S$ and $\phi \in \ind_B^G(\chi_{\textup{univ}}^{-1})$,}\end{equation} where the action of $G$ on $\ind_B^G(\chi_{\textup{univ}}^{-1})$ is given by right translation. In order to find $\calL$, we will use Mackey theory. 
In the case where $F$ is a finite field, Mackey theory tells us that the space of $R$-module homomorphisms satisfying \eqref{equiv} is in bijection with the vector space of functions $\Delta: G \to R$ that satisfy the equivariance properties \begin{equation}\label{finitemackey} \Delta(sgb) = \psi(s)\Delta(g)\chi_{\textup{univ}}^{-1}(b) \end{equation} for all $s \in S$, $b \in B$; here we are thinking of $\psi$ as taking values in $R$, since $R$ is a commutative $\C$-algebra with $\C$ included in it. When $F$ is a $p$-adic field, Mackey theory tells us that the space of $R$-module homomorphisms satisfying \eqref{equiv} is in bijection with the vector space of \emph{distributions} satisfying \eqref{finitemackey}.\footnote{In practice, for the models that we are considering, any nonzero $\Delta$ satisfying \eqref{finitemackey} is defined on an open set, so that, in these cases, such $\Delta$ are, in fact, functions.} If such a $\Delta$ exists, we get the corresponding $R$-module homomorphism $\calL$ from the convolution $$\calL(\phi)(g) = \int_{B \bs G} \Delta(h^{-1})\phi(hg)\,dh.$$ If such an $\calL$ exists then the space $\Ind_S^G \psi$ is called a model for $\ind_B^G(\chi_{\textup{univ}}^{-1})$ - we say that the model is unique for $\ind_B^G(\chi_{\textup{univ}}^{-1})$ if the space $\calHom_G( \ind_B^G( \chi_{\textup{univ}}^{-1}), \Ind_S^G \psi)$ is one-dimensional, i.e.~if the space of functionals satisfying \eqref{equiv} is one-dimensional. Based on the formalism of \cite{BBF2}, it can be shown that if $\calL$ is restricted to the space of Iwahori-fixed vectors, $\ind_B^G(\chi_{\textup{univ}}^{-1})^J$, then $\calL$ induces a left $\calH$-module structure on its image. In particular, the group algebra $R$ embedded in $\calH$, as described earlier, acts on the image of $\calL$ by translation. 
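In passing, the $S$-equivariance of the image of this convolution follows directly from \eqref{finitemackey} (assuming, as usual, that the quotient measure on $B \bs G$ is chosen so that the substitution $h \mapsto hs^{-1}$ is measure-preserving): for $s \in S$,

```latex
\calL(\phi)(sg)
  = \int_{B \bs G} \Delta(h^{-1})\,\phi(hsg)\,dh
  = \int_{B \bs G} \Delta(s h^{-1})\,\phi(hg)\,dh
  = \psi(s)\,\calL(\phi)(g),
```

using the case $b = 1$ of \eqref{finitemackey} in the last step; thus $\calL(\phi)$ indeed lies in $\Ind_S^G \psi$.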
Since $R \simeq \Ind_{\calH_0}^{\calH}\ve$ as vector spaces if $\ve$ is a linear character of $\calH_0$, the following conjecture of Brubaker, Bump and Friedberg is natural: \begin{Conjecture}[\cite{BBF2}]\label{conjecture} Let $\calL$ be an $R$-valued linear map on $\ind_B^G(\chi_{\textup{univ}}^{-1})$ obtained from a unique model. Then $\calL$ is an $\calH$-map from $M$ to $V_{\ve} = \Ind_{\calH_0}^{\calH} \ve$ for some choice of linear character $\ve$ of $\calH_0$, and the following diagram commutes: \begin{equation}\label{diagram} \begin{tikzpicture}[>=angle 90] \matrix(a)[matrix of math nodes, row sep=3em, column sep=4em, text height=1.5ex, text depth=0.25ex] {\ind_B^G(\chi_{\textup{univ}}^{-1})^{J} &\\ M \simeq \calH&V_{\ve} \\}; \path[dashed,->,font=\scriptsize] (a-1-1) edge node[above]{$\calL$} (a-2-2); \path[->,font=\scriptsize] (a-2-1) edge node[left]{$\eta$} (a-1-1); \path[->,font=\scriptsize] (a-2-1) edge node[below]{$\calF_{v_{\ve}}$} (a-2-2); \end{tikzpicture} \end{equation} \noindent with $v_{\ve} := \calL(\phi_1)$ and $\calF_{v_{\ve}}: h \mapsto h\cdot v_{\ve}$ where $h$ acts on $v_{\ve}$ according to the module structure on $V_{\ve}$. \end{Conjecture} Of course, such an $\calH$-map $\calL$ is guaranteed to exist since $\calF_{v_{\ve}}$ and $\eta$ are isomorphisms; rather, the dotted line is meant to reiterate the point made earlier that we are looking for a subgroup such that the transformation properties of $\calL$ under this subgroup imply that $\calL$ is an $\calH$-map to $V_{\ve}$. One promising set of models that appear to fit into this picture are the ``generalized Gelfand-Graev representations'' introduced by Kawanaka in \cite{Kaw} - these models are classified by nilpotent elements of the Lie algebra of $U$, and the subgroup under which $\calL$ transforms is connected to the associated nilpotent element via the Kirillov orbit method. The particular appeal of this family of representations lies in their conjectured low-multiplicity properties.
In particular, we are inspired by Furusawa's use of the Bessel model on $\SO(2n+1)$ in his construction of the standard $L$-function on $\SO(2n+1) \times \GL(n)$ in Section 6 of \cite{BFF} and believe that we will be able to use these models to construct new integral representations of $L$-functions. We provide a more detailed description of generalized Gelfand-Graev representations and their connection to the program described above in Section \ref{sec:gggr}. In this paper, we will realize the Bessel model on $\GSp(2n)$ as a generalized Gelfand-Graev model in Section \ref{sec:besselmodel}, and, assuming that $\ind_B^G (\chi_{\textup{univ}}^{-1})$ embeds uniquely into the model, we will show in Section \ref{sec:intertwiner} that the associated Bessel functional provides another example of an $\calH$-map $\calL$ as described in Conjecture \ref{conjecture}. We now explicitly state the uniqueness assumption that we are placing on the model: \begin{Theorem/Conjecture}\label{conjecture2} There is a unique embedding of $\ind_B^G (\chi_{\textup{univ}}^{-1})$ into the generalized Bessel model, $\Ind_{\overline{U}_AZ_L}^G \widetilde{\psi}_A$ (as defined in Section \ref{sec:bessel}). \end{Theorem/Conjecture} This uniqueness condition was verified in the rank 2 case in \cite{NPS}; we need it only in order for the Theorem to hold in rank $n$. We will prove existence in the rank $n$ case in Section \ref{sec:mackey}. While the symplectic version of the Bessel model has received little attention since \cite{NPS}, the Bessel model in the odd-orthogonal case was shown to satisfy this uniqueness condition in \cite{GGP} (Corollary 15.3). Indeed, in addition to the conjecture above, we suspect that the Bessel model for $\GSp(2n)$ ($n>2$) has a similar multiplicity one property.
Our main theorem in this paper is the following: \begin{Theorem}\label{thm:main} Let $G = \GSp(2n)$ and let $\ve$ be the character of $\calH_0$ that sends the generators $T_s$ corresponding to long simple roots to $-1$ and those corresponding to short simple roots to $q$. Let $V_{\ve} = \Ind_{\calH_0}^{\calH} \ve$. Then, assuming Theorem/Conjecture \ref{conjecture2}, the diagram \eqref{diagram} commutes by taking $v_{\ve} = \pi^{\rho_{\ve}^{\vee}}$, where $\rho_{\ve}$ is half of the sum of the long positive roots; and by taking $\calL= \calB$, the non-split Bessel functional (originally defined on $\GSp(4)$ by Piatetski-Shapiro and Novodvorsky). \end{Theorem} It should be noted that the split Bessel model should also give rise to a functional fitting into Conjecture \ref{conjecture}; however, in this case we suspect that one can show that this model is related to the sign character of $\calH_0$. As mentioned above, before we prove this theorem, we will discuss our generalization of the Bessel model of Novodvorsky and Piatetski-Shapiro from $\GSp(4)$ to $\GSp(2n)$ in Section \ref{sec:besselmodel}, and then we will use Mackey theory to prove the existence of a Bessel model for $\ind_B^G (\chi_{\textup{univ}}^{-1})$ in Section \ref{sec:mackey}. We conclude Section \ref{sec:mackey} with an explicit realization of the Bessel functional as an integral. Then, once we prove Theorem \ref{thm:main} in Section \ref{sec:intertwiner}, we will use that result in Section \ref{sec:besspherical} to calculate the images of the Iwahori-fixed vectors $\{\phi_w\}_{w \in W}$ on torus elements in the model $V_{\ve}$, a calculation which has not previously appeared in the literature, even for $n=2$.
In particular, we prove the following theorem: \begin{Theorem}\label{thm:iwahori} For dominant $\lambda$ and fixed $w$, $$\calB(\pi^{-\lambda}\cdot \phi_w) = \frac{1}{m(J\pi^{\lambda}J)} T_w \pi^{\lambda} \cdot v_{\ve},$$ where the action of $T$ on $\ind_B^G(\chi_{\textup{univ}}^{-1})$ is by right translation and where the action of $T_w\pi^{\lambda}$ on $v_{\ve}$ is the left action on $v_{\ve}$ appearing in the definition of $\calB$. \end{Theorem} Using Theorem \ref{thm:iwahori}, we will also be able to calculate the image of the spherical function in the model, which, in the case when $n=2$, gives a new proof of the same result from \cite{BFF} (in what follows, let $\Phi^+$ denote a choice of positive roots of $\Phi$): \begin{Theorem}\label{thm:spherical} Let $\rho$ be the half-sum of the positive roots of $\Phi$, and let $\rho_{\ve}$ be as defined in Theorem \ref{thm:main}. Then, for any dominant coweight $\lambda$, $$\calB(\pi^{-\lambda}\cdot \phi^{\circ}) = \frac{\pi^{-\rho_{\ve}^{\vee}}\displaystyle\prod_{\alpha \in \Phi^+,\, \alpha \textup{ long}} (1-q\pi^{\alpha^{\vee}})} {\pi^{\rho^{\vee}}\displaystyle\prod_{\alpha \in \Phi^+} (1-\pi^{-\alpha^{\vee}})} \A\left(\left( \prod_{\alpha \in \Phi^+,\,\alpha \textup{ short}} (1-q\pi^{\alpha^{\vee}})\right) \pi^{2\rho_{\ve}^{\vee}-\rho^{\vee}+\lambda}\right),$$ where the action of $T$ on $\ind_B^G (\chi_{\textup{univ}}^{-1})$ is by right translation and where $\A$ denotes the standard alternator expression $\A(\pi^{\mu}) = \sum_{w \in W}(-1)^{\ell(w)}w\pi^{\mu}$ with $W$ acting on $X_{\ast}(T)$ in the usual way. \end{Theorem} We note here that our proof of Theorem \ref{thm:main} does not rely on prior knowledge of the image of the spherical function in the model - in this way our method of proof differs from the proofs of similar results in \cite{BBF2}. Instead, we will calculate the relevant intertwining constants directly. 
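To fix the meaning of the alternator notation in the simplest case (a rank-one illustration that we supply for orientation, not needed elsewhere):

```latex
W = \{1, s\}: \qquad
\A(\pi^{\mu}) = \pi^{\mu} - \pi^{s\mu},
\qquad\text{and in general}\qquad
w\,\A(\pi^{\mu}) = (-1)^{\ell(w)}\,\A(\pi^{\mu})
\quad (w \in W),
```

so that $\A(\pi^{\mu})$ transforms by the sign character under the $W$-action, the standard antisymmetry property of such sums.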
In Section \ref{sec:womodels}, we discuss the fourth character, $\sigma$, of the finite Hecke algebra of $\GSp(2n)$, which sends the generators $T_s$ corresponding to long simple roots to $q$ and those corresponding to short simple roots to $-1$, specifically in the case where $n=2$. At this time we do not have a realization of the intertwiner $\calL$ satisfying the diagram \eqref{diagram} that is also defined by a transformation property under a subgroup, but we have matched the image of the spherical function under $\calF_{v_\sigma}$ to the image of the spherical function in the Whittaker-Orthogonal models defined by Bump, Friedberg and Ginzburg in \cite{BFG}. In particular, we prove the following proposition: \begin{Proposition}\label{prop:shalika} Let $\WO$ be the Whittaker-Orthogonal functional on an unramified principal series representation $\tau$ of $\SO(6)$, such that $\tau$ is a local lifting of an unramified principal series representation of $\Sp(4)$. Then $\calF_{v_{\sigma}}(\pi^{-\lambda}\cdot 1_{T(\goth{o})UK})$ and $\WO(z^{-\lambda}\cdot \phi^{\circ})$ agree, for any dominant coweight $\lambda$. \end{Proposition} Piatetski-Shapiro and Novodvorsky do not provide an explicit integral formula for their functional, so part of our task in proving Theorem \ref{thm:main} is deriving the correct integral formula for $\calB$. Our method for doing this follows what we believe to be the general method for connecting models of the form $\Ind_S^G \psi$ to characters of $\calH_0$. We will say a bit about this in the next section before moving on to the main sections of the paper, which will be focused on the theorems mentioned above. In Section \ref{sec:springer}, we will give further details on this conjectured construction of unique models for characters of $\calH_0$. We thank Ben Brubaker for many helpful conversations and communications. \section{Generalized Gelfand-Graev Representations} \label{sec:gggr} Let $G$ be as in Section \ref{sec:intro}.
With notation carried over from Section \ref{sec:intro}, we will let $\goth{g}$ denote the Lie algebra of $G$, and $\goth{u}$ denote the Lie algebra of $U$. Let $f$ denote the bijective $F$-morphism from $U$ to $\goth{u}$.\footnote{Explicit choices of the ``Springer's morphism'' $f$ for classical type and exceptional type are given in Section 1.2 of \cite{Kaw2}.} Following Yamashita in \cite{Yam}, we let $\theta$, given by $^{\theta}X = -X^{\top}$, denote the Cartan involution of $\goth{g}$, and let $\goth{u}^{\ast}$ denote the dual space of $\goth{u}$. Then, for $X \in \goth{u}$, we define $X^{\ast} \in \goth{u}^{\ast}$ by \begin{equation} \langle X^{\ast}, Y \rangle = B(Y, {^{\theta}X}), \quad \textup{for } Y \in \goth{u},\end{equation} where $B$ denotes the Killing form of $\goth{g}$. We believe that the unique models that give rise to an $R$-homomorphism $\calL$ as described in Section \ref{sec:intro} are related to Kawanaka's construction of the ``generalized Gelfand-Graev representations'' of $G$ (gGGr) in \cite{Kaw}. Although Kawanaka's results are given in the context of finite groups of Lie type, we believe that they can be suitably adapted for the $p$-adic setting. To construct a gGGr, we begin with a nilpotent $\Ad(G)$-orbit in $\goth{u}$ with representative $A$. One can define a $\Z$-grading of $\goth{g}$ according to $A$, \begin{equation} \goth{g} = \bigoplus_i \goth{g}(i)_A, \end{equation} such that $A \in \goth{g}(2)_A$, $\goth{p}_A = \oplus_{i\geq 0} \goth{g}(i)_A$ is the Lie algebra of a parabolic subgroup $P_A$ of $G$, and $\goth{u}_{i,A} = \oplus_{j\geq i} \goth{g}(j)_A$ ($i\geq 1$) is the Lie algebra of the unipotent subgroup $U_{i,A}$ of $P_A$. Note that $\goth{u}_{i,A}^{\ast}$ can be identified with $\oplus_{j\geq i} \goth{g}(-j)_A$ via $\langle \cdot, \cdot \rangle$.
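For orientation, here is the grading in the smallest example, the regular nilpotent in $\goth{sl}_2$; this illustration is ours:

```latex
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \in \goth{sl}_2:
\qquad
\goth{g}(-2)_A = F\!\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},
\quad
\goth{g}(0)_A = F\!\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},
\quad
\goth{g}(2)_A = FA,
```

so that $\goth{p}_A$ is the Borel subalgebra of upper-triangular matrices, $U_{1,A} = U_{2,A}$ is the upper-triangular unipotent subgroup, and $\goth{u}_{2,A}^{\ast}$ is identified with $\goth{g}(-2)_A$.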
We then use Kirillov's orbit method to form the attached representation $\eta_A$ on $U_A = U_{1,A}$; this is done by taking the character \begin{equation}\xi_{A}(\exp(Y)) = \xi_0(\langle A^{\ast},Y\rangle),\quad Y \in \goth{u}_{2,A}\end{equation} defined on $U_{2,A}$ and extending it to a character of an intermediate subgroup before inducing to $U_{A}$ (here $\xi_0$ is a non-trivial additive character of $F$). Let $L_A$ denote the Levi subgroup of $P_A$. This subgroup acts on $U_A$ via conjugation, and hence acts on the unitary dual $\widehat{U}_A$ of $U_A$ via \begin{equation} \ell \cdot [\eta] = [\ell \cdot \eta], \quad (\ell \cdot \eta)(u) = \eta(\ell^{-1}u\ell) \quad (u \in U_A),\end{equation} where $\ell \in L_A$ and $[\eta] \in \widehat{U}_A$ is the equivalence class of the irreducible representation $\eta$ of $U_A$. We denote by $Z_L(\eta_A)$ the stabilizer subgroup of the equivalence class of the representation $\eta_A$ in $L_A$. As the following lemma shows, this subgroup is equal to the centralizer, $Z_L(A)$, of $^{\theta}A$ in $L_A$: \begin{Lemma}[\cite{Yam}, Lemma 2.1]\label{stabilizer} The subgroup $Z_L(\eta_A)$ coincides with $Z_L(A)$. \end{Lemma} \begin{proof} By the Ad-invariance of the Killing form, we can see that $\ell^{-1} \cdot [\eta_A] = [\eta_{\Ad( ^{\theta}\ell)A}]$ for any $\ell \in L_A$. If we let $\nu$ denote the Kirillov correspondence $\nu: \goth{u}_A^{\ast}/U_A \to \widehat{U}_A$, then the previous statement is equivalent to the statement $\ell^{-1}\cdot \nu([A^{\ast}]) = \nu([(\Ad( ^{\theta}\ell)A)^{\ast}])$, where $[X^{\ast}]$ denotes the $\Ad^{\ast}(U_A)$-orbit through $X^{\ast}$ in $\goth{u}_A^{\ast}$. Thus, $\ell^{-1}$ (and hence $\ell$) is in $Z_L(\eta_A)$ if and only if $[A^{\ast}] = [(\Ad( ^{\theta}\ell)A)^{\ast}]$.
The result will follow if we can show that \begin{equation} \label{eqn:orbit} [A^{\ast}] = [(\Ad( ^{\theta}\ell)A)^{\ast}] \textup{ if and only if } ^{\theta}A = \Ad(\ell) ({^{\theta}A}).\end{equation} In order to prove this final statement, we first show that \begin{equation}\label{eqn:coadjoint} [X^{\ast}] = X^{\ast} + \goth{g}(-1)_A \textup{ for any } X \in \goth{g}(2)_A,\end{equation} where we are thinking of $\goth{g}(-1)_A$ as being identified with the subspace of $\goth{u}^{\ast}$ consisting of elements that vanish on $\goth{u}_{2,A}$ (\eqref{eqn:coadjoint} is Lemma 1.2.4~in \cite{Kaw2}). The identity \eqref{eqn:coadjoint} is essentially a consequence of the identity $\Ad(u)X = f^{-1}(\ad f(u))X$, where $u\in U_A$ and $X \in \mathfrak{g}$; in order to prove \eqref{eqn:coadjoint}, it will be useful to rewrite the previous identity as in Lemma 1.2.1~in \cite{Kaw2}: \begin{equation}\label{eqn:kaw} \Ad(u)X - (X+d[f(u),X]) \in \bigoplus_{\ell \geq 2i+j} \goth{g}(\ell)_A, \end{equation} where $u \in U_{i,A}$, $X \in \goth{g}(j)_A$, and $d \in F-\{0\}$. Now, if $X \in \goth{u}_A$, then, using the identification of $\goth{u}_A^{\ast}$ with $\oplus_{i>0} \goth{g}(-i)_A$ and the $\Ad$-invariance of the Killing form, we see that $\Ad^{\ast}(u)X^{\ast} = p(u^{-1}X^{\ast}u)$, where $p$ denotes projection onto $\oplus_{i>0} \goth{g}(-i)_A$.\footnote{The projection map shows up here because $B(Y,W) = 0$ if $Y \in \goth{u}_A$ and $W \in \oplus_{i\geq 0} \goth{g}(i)_A$.} Putting the preceding discussion together with \eqref{eqn:kaw}, we see that $$[X^{\ast}] \subset X^{\ast}+ \goth{g}(-1)_A \textup{ if } X \in \goth{g}(2)_A.$$ It remains to show that this containment is actually an equality. By Theorem 2 in \cite{Ros}, we know that $[X^{\ast}]$ is closed, so to prove \eqref{eqn:coadjoint} it suffices to check the dimensions of each side.
To find $\dim [X^{\ast}]$, we first note that \eqref{eqn:kaw} implies that $\{g \in G \mid g^{-1}X^{\ast}g = X^{\ast}\} \subset \overline{P}_A$, where $\overline{P}_A$ is the opposite parabolic associated to $A$. Thus, if $u \in U_A$ is such that $u^{-1}X^{\ast}u = X^{\ast}$, then $u$ is the identity element. Now, since $\Ad^{\ast}(u)X^{\ast} = p(u^{-1}X^{\ast}u)$, we see that, if $u \in U_A$, then $\Ad^{\ast}(u)X^{\ast} = X^{\ast}$ if and only if $[f(u),X^{\ast}] = 0$ or $f(u) \in \mathfrak{u}_{2,A}$ (i.e.~$u \in U_{2,A}$). But, if $[f(u),X^{\ast}] = 0$, then $u^{-1}X^{\ast}u = X^{\ast}$, and hence $u$ is the identity. Hence, $$U_{2,A} = \{ u \in U_A \mid \Ad^{\ast}(u)X^{\ast} = X^{\ast}\},$$ which tells us that $$\dim [X^{\ast}] = \dim U_A - \dim U_{2,A} = \dim \goth{g}(1)_A = \dim (X^{\ast} + \goth{g}(-1)_A),$$ and we have proved \eqref{eqn:coadjoint}. Using \eqref{eqn:coadjoint}, we see that $[A^{\ast}] = [(\Ad(^{\theta}\ell)A)^{\ast}]$ if and only if $A^{\ast} = (\Ad(^{\theta}\ell)A)^{\ast}$; this last equality holds if and only if $\ell \in Z_L(A)$, proving \eqref{eqn:orbit}. \end{proof} It is natural to extend $\eta_A$ to a representation of $U_A \rtimes Z_L(A)$; our next step, then, is to build a representation $\widetilde{\eta}_{A,\alpha}$ on $U_A\rtimes Z_L(A)$ by taking the tensor product of $\eta_A$ with a representation $\alpha$ of $Z_L(A)$. For each irreducible representation $\alpha$ of $Z_L(A)$, we say that the gGGr associated to the pair $(A,\alpha)$ is $\Gamma_{A,\alpha} := \Ind_{U_AZ_L(A)}^G \widetilde{\eta}_{A,\alpha}$. If the group $G$ is defined over a finite field instead of a $p$-adic field, Kawanaka's construction conjecturally produces gGGr's that contain each unipotent representation with multiplicity one (Conjecture 2.4.5 in \cite{Kaw}).
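As a sanity check on the construction, consider the smallest example (an illustration we supply, with standard identifications, not taken from \cite{Kaw}): for $G = \GL(2)$ and $A = \left(\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\right)$,

```latex
U_A = U_{2,A} = \left\{ \begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix} \right\},
\qquad
\xi_A\!\begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix} = \xi_0(cx)
\ \text{for a non-zero constant } c,
\qquad
Z_L(A) = Z(G),
```

where $c$ comes from the normalization of the Killing form; here $\Gamma_{A,\alpha}$ recovers the classical Gelfand-Graev representation, with $\alpha$ prescribing the central character.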
Since the principal series representations are precisely those representations containing a $B$-fixed vector, Kawanaka's conjecture implies that, for each irreducible $\calH_0$-module, there should be a unique gGGr containing it with multiplicity one. Kawanaka's notes after the conjecture suggest that the nilpotent element $A$ used in the construction of a gGGr $\Gamma_{A,\alpha}$ and the irreducible representation of $\calH_0$ contained inside the $B$-fixed vectors of $\Gamma_{A,\alpha}$ are linked via the Springer correspondence. Shifting back to the $p$-adic setting, we note that, in \cite{MW}, M\oe glin and Waldspurger give a treatment of those representations (also referred to as gGGr's in \cite{Kaw}) that are constructed by inducing $\eta_A$ from $U_A$ up to $G$ directly. However, one of our goals is to find useful models (for example, as mentioned in Section \ref{sec:intro}, we expect that the gGGr's as defined in the previous paragraph will find applications in the construction of integral representations of $L$-functions, in a sense similar to the application of the Bessel model on $\SO(2n+1)$ discussed in Section 6 of \cite{BFF}), and gGGr's of the form $\Gamma_A := \Ind_{U_A}^G \eta_A$ will not have the low-multiplicity properties that we desire. The idea, then, is to decompose $\Gamma_A$ into a direct sum of gGGr's of the form $\Gamma_{A,\alpha}$ which will have the desired low-multiplicity properties. In the finite field setting, this is exactly what happens, since the stabilizer $Z_L(A)$ is reductive. It is also true that $Z_L(A)$ is reductive when $G$ is defined over a $p$-adic field; we state this result without proof: \begin{Lemma}[\cite{Car}, Proposition 5.5.9] The subgroup $Z_L(A)$ is reductive. \end{Lemma} A proof of this result can be found in Section 5.5 in \cite{Car}.
It should be noted that Carter's proof is given for $G$ defined over an algebraically closed field, and relies on a proof of the Jacobson-Morozov Lemma given in this context. That the Jacobson-Morozov Lemma holds over a field of characteristic 0 seems to be a well-known result (cf.~Section 2.4 in \cite{KL}), and a proof of a closely-related result can be found in Section 8 of \cite{BT} (more recently, a proof of this exact result can be found in Section 2 of \cite{Wit}). The rest of Carter's proof applies to this context without alteration. However, in contrast to what we observe in the finite field setting, the representation $\widetilde{\eta}_{A,\alpha}$ is not guaranteed to be a genuine representation if $G$ is instead defined over a $p$-adic field; in general, we are only guaranteed that it is a projective representation of $U_A\rtimes Z_L(A)$. With that said, if $\eta_A$ is a character and $\widetilde{\eta}_{A,\alpha}$ is formed by tensoring with a character of $Z_L(A)$, as is the case for the Bessel model on $\GSp(4)$, then $\widetilde{\eta}_{A,\alpha}$ will be a genuine representation. Unlike the Whittaker model, which served as the inspiration for the definition of a Gelfand-Graev representation (see \cite{GG}), the spherical model and Bessel model are not realized directly as gGGr's. Instead, we realize these models by extending $\eta_A$ from $U_A$ to $U_A\rtimes (Z_L(A)\cap G(\goth{o}))$, and then inducing to $G$. Note that this choice to induce from $Z_L(A) \cap G(\goth{o})$ means that the central character of a given representation will not play a role in whether or not that representation appears in the model. We also note that, in the case of the Whittaker model, $Z_L(A)$ is trivial, so it appears that this method of extending $\eta_A$ to the semidirect product of $U_A$ and $Z_L(A) \cap G(\goth{o})$ is a step towards understanding the general construction of gGGr's over local fields.
As mentioned in Section \ref{sec:intro}, in Section \ref{sec:springer} we will expand on the conjectured connection between nilpotent orbits and unique models for characters of the Hecke algebra. \section{The Bessel Model and the Bessel Functional} \label{sec:bessel} We return now to the setting where $G = \GSp(2n)$, and show how the Bessel model as formulated in \cite{NPS} fits into the narrative described in Section \ref{sec:intro} before we move on to establishing our main results. We carry all of our notation through from the previous section. We will need to realize specific elements of $G$, and so we explicitly define $G$ as $$G:= \{g \in M_{2n}(F) \mid g^{\top}\Omega g = k\Omega, k \in F^{\times}\},$$ where $$\Omega = \begin{pmatrix} & -\Omega' \\ \Omega' & \end{pmatrix}$$ and $\Omega'$ is the $n\times n$ matrix with 1's on the antidiagonal. As in Section \ref{sec:intro}, we let $\Phi$ denote the root system of $G$, with short simple roots $\alpha_1,\hdots, \alpha_{n-1}$ and long simple root $\alpha_n$. Let $s_1,\hdots, s_n$ and $w_0$ denote the corresponding simple reflections and long element, respectively, in $W$. Let $\rho$ denote the half-sum of the positive roots of $\Phi$, and let $\Phi^+$ and $\Phi^-$ denote the sets of positive and negative roots of $\Phi$, respectively. \subsection{The Bessel Model as a Generalized Gelfand-Graev Representation} \label{sec:besselmodel} The transformation property satisfied by the Bessel model depends on the parabolic subgroup $P_A$ of $G$ containing the root subgroups corresponding to the negative short simple roots. We can factor $P_A = L_AU_A$, where $L_A$ is the Levi component of $P_A$ and $U_A$ is the unipotent radical of $P_A$, as described in Section \ref{sec:gggr}. In this case, the nilpotent element $A$ can be chosen to be a sum of non-zero elements of the root subalgebras $\goth{g}_{\alpha}$, where the sum is taken over the long roots $\alpha \in \Phi^+$.
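For concreteness, in the case $n = 2$ (the matrices below simply unwind the definitions above in our coordinates):

```latex
\Omega' = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
\qquad
\Omega = \begin{pmatrix} 0 & 0 & 0 & -1 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix},
```

and the long roots in $\Phi^+$ are $\alpha_2$ and $2\alpha_1 + \alpha_2$, so that $A$ is a sum of non-zero root vectors attached to these two roots.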
Let $\overline{U}_A$ denote the opposite unipotent of $U_A$. Let $\psi_0$ be a non-degenerate additive character on $F^+$, and let $\psi_A(u) = \psi_0(\Tr(ru'))$ for $u \in \overline{U}_A$, where $u'$ is the lower left $(n\times n)$-block of $u$ and $r$ is the upper right $(n\times n)$-block of $A$. The linear character $\psi_A$ is the representation of $\overline{U}_A$ that we denoted as $\eta_A$ in Section \ref{sec:gggr}. We wish to extend $\psi_A$ to a character, $\widetilde{\psi}_A$, of $\overline{U}_A \rtimes Z_L(\psi_A)$, where $Z_L(\psi_A)$ is the stabilizer of the equivalence class of $\psi_A$. From Lemma \ref{stabilizer}, we know that $Z_L(\psi_A) = Z_L(A)$, where $Z_L(A)$ is the centralizer of $A$ in $L_A$. We choose $A$ so that $$r = \begin{pmatrix} & & & -\omega_{n-1} \\ & & \iddots & \\ & -\omega_1 & & \\ 1 & & & \end{pmatrix}.$$ In order to have a unique model in the rank $n$ case, it is likely that we will need to impose some condition on $\omega_1,\hdots, \omega_{n-1}$, much like we do in the case where $G = \SO(2n+1)$ (cf.~the discussion of Bessel models in Section 1 of \cite{BFF}). Indeed, we see such a condition arise already in the rank 2 case, namely that $\omega_1 \in F^{\ast}\bs (F^{\ast})^2$ (cf.~the proof of Theorem \ref{thm:functional}). From our choice of $A$, we see that $Z_L = Z_L(A)$ is the subgroup of $L_A$ with $\GSO(n)$ blocks on the diagonal according to the symmetric bilinear form $$\begin{pmatrix} -\omega_{n-1} & & & \\ & \ddots & & \\ & & -\omega_1 & \\ & & & 1 \end{pmatrix}.$$ We pause here to note that the simple reflections $s_1,\hdots, s_{n-1}$, corresponding to the short simple roots, are contained in $Z_L$.
We will denote the subgroup of $W$ generated by these simple reflections as $W_L$, and we define $$\Phi_L := \left\{\alpha \in \Phi \mid \alpha = \sum_{i=1}^{n-1} c_i\alpha_i\right\}.$$ Note that $\alpha \in \Phi_L$ if and only if the root subgroup corresponding to $\alpha$, denoted $x_{\alpha}(F)$, is a subgroup of $L$. Finally, we define $\Phi_L^+ := \Phi_L \cap \Phi^+$, and we define $\Phi_L^-$ analogously. Turning our attention back to our realization of the Bessel model as a gGGr, we define $\widetilde{\psi}_A(ut) = \psi_A(u)$ for $u \in \overline{U}_A$ and $t \in Z_L$. Note that this representation is the one that would be denoted by $\widetilde{\eta}_{A,1}$ in the previous section, constructed from $\psi_A$ and the trivial representation of $Z_L$. Then, following the previous section, we define the Bessel model to be $\Ind_{\overline{U}_AZ_L}^G(\widetilde{\psi}_A)$. The Bessel functional for an irreducible admissible representation $\theta$ of $G$ is defined to be a linear functional $\calB$ on the representation space $V_{\theta}$ of $\theta$ such that $$\calB(\theta(ut) v) = \widetilde{\psi}_A(ut)\calB(v),$$ for $v \in V_{\theta}$, $t \in Z_L$ and $u \in \overline{U}_A$. In particular, note that this means that $\widetilde{\psi}_A$ must agree with the central character of $\theta$ on the center of $G$. Following \cite{BFF}, let $Z_L(\goth{o}) = Z_L \cap \SL(2n,\goth{o})$, so that $Z_L$ is the semidirect product of the compact group $Z_L(\goth{o})$ and the center of $G$. We want the character $\widetilde{\psi}_A$ to have $\goth{o}$ as its conductor, so we choose $r \in \textup{Mat}(n,\goth{o})$. We will discuss questions of existence and uniqueness of a Bessel functional for $\ind_B^G(\chi_{\textup{univ}}^{-1})$ in depth in the next section.
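In the case $n = 2$, the character $\psi_A$ can be written out directly. In the coordinates fixed above (where, as one checks, the symplectic condition forces the lower-left block of $u \in \overline{U}_A$ to have equal diagonal entries):

```latex
r = \begin{pmatrix} 0 & -\omega_1 \\ 1 & 0 \end{pmatrix},
\qquad
u' = \begin{pmatrix} a & b \\ c & a \end{pmatrix},
\qquad
\psi_A(u) = \psi_0\big(\Tr(ru')\big) = \psi_0(b - \omega_1 c),
```

recovering the familiar shape of the $\GSp(4)$ Bessel character.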
We end this section with the statement of Novodvorsky and Piatetski-Shapiro's theorem regarding the (more general) uniqueness of the Bessel functional for $\GSp(4)$: \begin{Theorem}[\cite{NPS}, Theorem 1]\label{thm:uniqueness} Let $\theta$ be an irreducible admissible representation of the group $G = \GSp(4)$ in a complex space $V$. Then the dimension of the space of all linear functionals $\calB$ on $V$ for which $$\calB(\theta(ut) v) = \widetilde{\psi}_A(ut)\calB(v), \textup{ for all $t \in Z_L$, $u \in \overline{U}_A$, $v \in V$}$$ does not exceed one. \end{Theorem} \subsection{Existence and Uniqueness of Bessel Functionals for Principal Series Representations}\label{sec:mackey} In this section, we will describe how we arrive at an integral realization of the Bessel functional. Explicitly, this section is dedicated to explaining how we arrive at the following result: \begin{Theorem}\label{thm:functional} Let $G = \GSp(2n)$. The functional, $\calB$, on $\ind_B^G(\chi_{\textup{univ}}^{-1})$ whose restriction to functions supported on the big cell $P_A\overline{U}_A$ is given by $$\calB(\phi) = \pi^{\rho_{\ve}^{\vee}} \int_{Z_L(\goth{o})}\int_{\overline{U}_A} \psi_A(u)\phi(uz)\,du\,dz$$ is a Bessel functional. \end{Theorem} Essentially, we use Bruhat's extension of Mackey theory as described in \cite{Rod} to arrive at this integral realization of the functional in the rank 2 case, and then generalize. In particular, much of the argument used to prove the analogous statement for $\SO(2n+1)$ given in \cite{FG} can be applied to the rank 2 case without significant alteration, so, in the discussion to follow, we will refer the reader to the relevant results in \cite{FG} where appropriate. 
Before we begin, we note, per \cite{HKP}, that while the treatment in \cite{FG} ultimately yields a $\C$-valued functional on principal series representations, the method of proof applies equally well to a functional taking values in any commutative $\C$-algebra, and so the fact that $\chi_{\textup{univ}}^{-1}$ takes values in $R$ does not introduce any new complications when translating results from \cite{FG}. Fix $G = \GSp(4)$ for the following discussion. As mentioned above, the argument that we will use to show that $\ind_B^G (\chi_{\textup{univ}}^{-1})$ admits a Bessel model, or in other words, that $$\dim \calHom_G\left(\ind_B^G(\chi_{\textup{univ}}^{-1}), \Ind_{\overline{U}_AZ_L(\goth{o})}^G \widetilde{\psi}_A\right) = 1$$ originated with Rodier in \cite{Rod}, and it makes use of the following theorem of Bruhat: \begin{Theorem}[\cite{Bru}]\label{thm:mackey} Let $G$ be a locally compact, totally disconnected unimodular group. Let $H_1$ and $H_2$ be two closed subgroups of $G$, and let $\delta_i$ denote the modular function of $H_i$. Let $\tau_i$ be a smooth representation of $H_i$ in the vector space $E_i$, and let $\pi_i$ be the induced representation $\Ind_{H_i}^G \tau_i$ in the Schwartz space of $\tau_i$. Then the space of all intertwining forms $I$ of $\pi_1$ and $\pi_2$ is isomorphic to the space of $(E_1 \otimes E_2)$-distributions $\Delta$ on $G$ such that \begin{equation} \lambda(h_1) \ast \Delta \ast \lambda(h_2^{-1}) = (\delta_1(h_1)\delta_2(h_2))^{1/2}\Delta \circ(\tau_1(h_1)\otimes \tau_2(h_2))\end{equation} where $h_i \in H_i$ and $\lambda(x)$ is the Dirac distribution at $x$. The correspondence between $I$ and $\Delta$ is given by \begin{equation}\label{eqn:intertwiningform}I(p_1(f_1), p_2(f_2)) = \int_G dg_2\int_G f_1(g_1g_2)\otimes f_2(g_2)\,d\Delta(g_1),\end{equation} where $f_i$ are compactly supported locally constant functions on $G$ with values in $E_i$, and $p_i$ is the projection from this space of functions to the Schwartz space of $\tau_i$.
\end{Theorem} Let $\calD(X,R)$ denote the space of $R$-distributions on a locally compact, totally disconnected space $X$. Following \cite{FG}, we begin by noting that $$\calHom_G\left(\ind_B^G \chi_{\textup{univ}}^{-1}, \Ind_{\overline{U}_AZ_L(\goth{o})}^G \widetilde{\psi}_A\right) \simeq \calHom_G\left(\ind_{\overline{U}_AZ_L(\goth{o})}^G \widetilde{\psi}_A^{\ast}, \ind_B^G (\chi_{\textup{univ}}^{-1})^{\ast}\right),$$ where $\widetilde{\psi}_A^{\ast}$ and $(\chi_{\textup{univ}}^{-1})^{\ast}$ are the smooth contragredients of $\widetilde{\psi}_A$ and $\chi_{\textup{univ}}^{-1}$, respectively. Then, by Theorem \ref{thm:mackey}, this latter space is isomorphic to the subspace $\calD_{\widetilde{\psi}_A,\chi_{\textup{univ}}^{-1}}(G,R)$ of $\calD(G,R)$ of $R$-distributions $\Delta$ on $G$ satisfying \begin{equation}\label{mackey}\lambda(b) \ast \Delta \ast \lambda(h^{-1}) = \delta_B^{1/2}(b)\chi_{\textup{univ}}^{-1}(b)\widetilde{\psi}_A^{\ast}(h)\Delta\end{equation} for all $h \in \overline{U}_AZ_L(\goth{o})$ and $b \in B$. With this condition in mind, we will use the double-coset decomposition of $G$ suggested in the following lemma to analyze $\calD_{\widetilde{\psi}_A,\chi_{\textup{univ}}^{-1}}(G,R)$: \begin{Lemma}\label{lemma:bruhat} Let $g \in G=\GSp(4)$. Then $g \in Bwx_{-\alpha_1}(F)\overline{U}_AZ_L(\goth{o})$, where $w$ can be chosen from $\{1,s_2,s_1s_2,s_2s_1s_2\}$. \end{Lemma} \begin{proof} Using the Bruhat decomposition, we can write $g =bwu$, where $b \in B$, $w \in W$, and $u \in \overline{U}$. Note that a representative of $s_1 \in W$ can be written as the product of a diagonal matrix, $d$, and the matrix $$\omega = \begin{pmatrix} & \omega_1 & & \\ 1 & & & \\ & & & -\omega_1 \\ & & -1 & \end{pmatrix} \in Z_L.$$ Hence, if $w = w's_1$, where $\ell(w')< \ell(w)$ (here $\ell(w)$ denotes the length of $w$), then $g$ can be written as $bw'd\omega u = (bd')w'\omega u$, where $bd' = bw'd(w')^{-1} \in B$ and $w' \in \{1,s_2,s_1s_2, s_2s_1s_2\}$.
Factoring $u = x_{-\alpha_1}(t)u_A$ for some $t \in F$ and $u_A \in \overline{U}_A$, we can see that $\omega x_{-\alpha_1}(t)u_A = x_{\alpha_1}(t')u_A'\omega$, for some $t' \in F$ and $u_A' \in \overline{U}_A$. Then $w'x_{\alpha_1}(t') = b'w'$ for some $b' \in B$, so that we have $g = (bd'b')w'u_A'\omega \in Bw'\overline{U}_AZ_L(\goth{o})$, as desired. \end{proof} In a series of results in \cite{FG}, starting with Proposition 2.4, Friedberg and Goldberg show that, for a given non-zero $\Delta \in \calD_{\widetilde{\psi}_A,\chi_{\text{univ}}^{-1}}(G,R)$, $\Delta$ can only be supported on one specific double coset, and that, in addition, $\Delta$ is completely determined by its restriction to that double coset. The same thing is true in our case, and we will show that the only double coset in the refined decomposition of Lemma \ref{lemma:bruhat} that can serve as the support of $\Delta$ is $B\overline{U}_AZ_L(\goth{o})$. \begin{proof}[Proof of Theorem \ref{thm:functional}] Following the proof of Proposition 2.4 in \cite{FG}, we will start by showing that many double cosets in Lemma \ref{lemma:bruhat} fail to satisfy the following compatibility criterion (Theorem 1.9.5~in \cite{Sil}): For a given double coset $Bwu_{-\alpha_1}\overline{U}_AZ_L(\goth{o})$ (where $u_{-\alpha_1} \in x_{-\alpha_1}(F)$), if there exists $b \in B$ such that $w^{-1}bw \in \overline{U}_AZ_L(\goth{o})$ and \begin{equation}\label{compatibility} \chi_{\textup{univ}}^{-1}(b) \neq \widetilde{\psi}_A(w^{-1}bw),\end{equation} then the double coset in question is not part of the support of any distribution in $\calD_{\widetilde{\psi}_A,\chi_{\textup{univ}}^{-1}}(G,R)$. To begin, let $u_A \in \overline{U}_A$ be such that $u_{-\alpha_1}u_Au_{-\alpha_1}^{-1} \in x_{-\alpha_2}(F)$. Then, since $w(-\alpha_2) \in \Phi^+$ for $w\in \{s_2, s_1s_2, s_2s_1s_2\}$, we know that $wu_{-\alpha_1}u_Au_{-\alpha_1}^{-1}w^{-1}$ is contained in some positive root subgroup in $U$. Back in Section \ref{sec:besselmodel}, we chose $A$ such that $\omega_1 \in F^{\ast}\bs (F^{\ast})^2$; under this assumption, we can pick $u_A$ such that $\widetilde{\psi}_A(u_A)\neq 1$, as verified by some routine root subgroup calculations. Then, since $\chi_{\text{univ}}^{-1}(u) = 1$ for all $u \in U$, we see that \eqref{compatibility} holds for a suitable $b$ on $Bwu_{-\alpha_1}\overline{U}_AZ_L(\goth{o})$ for any $w \in \{s_2, s_1s_2, s_2s_1s_2\}$ and $u_{-\alpha_1} \in x_{-\alpha_1}(F)$, so that none of these double cosets can be part of the support of a distribution in $\calD_{\widetilde{\psi}_A,\chi_{\textup{univ}}^{-1}}(G,R)$. At this point, the remainder of the proof that $$\dim \calHom_G\left(\ind_B^G(\chi_{\textup{univ}}^{-1}), \Ind_{\overline{U}_AZ_L(\goth{o})}^G \widetilde{\psi}_A\right) \leq 1$$ is analogous to the end of the proof of Theorem 2.1 in \cite{FG}.
We defer the proof of the existence of a non-zero Bessel functional to Section \ref{sec:intertwiner}. The Bessel functional is realized as an integral using Theorem \ref{thm:mackey}. In particular, Theorem \ref{thm:mackey} tells us that, if $\Delta$ is a non-zero element of $\calD_{\widetilde{\psi}_A,\chi_{\textup{univ}}^{-1}}(G,R)$, then the corresponding intertwining form, $I$, of $\ind_B^G (\chi_{\textup{univ}}^{-1})$ and $\Ind_{\overline{U}_AZ_L(\goth{o})}^G \widetilde{\psi}_A$, is given by \eqref{eqn:intertwiningform}. Hence, the corresponding Bessel functional is realized as the inner integral of $I$, which in this case is \begin{align*} \calB(\phi)(g) &= \int_G \phi(hg)\,d\Delta(h) \\&= \int_{Z_L(\goth{o})}\int_{\overline{U}_A} \psi_A(u)\phi(uzg)\,du\,dz, \end{align*} with $g$ set equal to $1$. It is readily verified that generalizing this integral to $\GSp(2n)$ yields a Bessel functional for $\Ind_B^G(\chi_{\textup{univ}}^{-1})$, as defined in Section \ref{sec:besselmodel}. As mentioned above, we will show that this integral is non-zero in the proof of Lemma \ref{besselvalue}. \end{proof} \begin{remark} Note that, in the statement of Theorem \ref{thm:functional}, we have normalized the Bessel functional so that the diagram \eqref{diagram} will commute with $v_{\ve} = \pi^{\rho_{\ve}^{\vee}}$ as in Theorem \ref{thm:main}. \end{remark} Letting $G = \GSp(2n)$ once more, we conclude this section with the following proposition regarding the convergence of $\calB$: \begin{Proposition} If $\phi \in \ind_B^G (\chi_{\textup{univ}}^{-1})$, then $\calB(\phi)$ converges in $R$. \end{Proposition} \begin{proof} Following Section 6.2 of \cite{HKP}, we begin by showing that $\calB(\phi)$ converges in a particular completion of $R$. Let $\calJ = \{-\alpha^{\vee} \mid \alpha \not\in \Phi_L^+\}$, and let $\C[\calJ]$ denote the subalgebra of $R$ generated by $\calJ$.
Denote the completion of $\C[\calJ]$ with respect to the maximal ideal generated by $\calJ$ by $R_{\calJ}$. Our initial claim is that $\calB(\phi) \in R_{\calJ}$. Note that since $\phi \in \ind_B^G \chi_{\textup{univ}}^{-1}$ is compactly supported $\textup{mod}\, B$, there is no need to include any positive coroots in $\calJ$ to ensure convergence of the functional in $R_{\calJ}$. Additionally, since $\calB(\phi)$ is an integral over $\overline{U}_AZ_L(\goth{o})$, we can see that there is no need to include $\{-\alpha^{\vee} \mid \alpha \in \Phi_L^+\}$ in $\calJ$ either. Then, in order to see that $\calB(\phi)$ actually converges in $R_{\calJ}$, we apply the following lemma from \cite{HKP}: \begin{Lemma}[\cite{HKP}, Lemma 1.10.1] Let $\mu \in X_{\ast}(T)$. Then the set $\overline{U} \cap \pi^{\mu}UK$ is compact. \end{Lemma} Finally, we observe that, due to the oscillation of the character $\psi_A$, all but finitely many of the coefficients of the Laurent series $\calB(\phi)$ will vanish, which means that $\calB(\phi)$ is, in fact, an element of $R$, not just $R_{\calJ}$. \end{proof} \begin{remark} Note that, if we were to specialize $\chi_{\textup{univ}}^{-1}$ to a $\C$-valued character on $B$, we could show that the resulting functional converges in $\C$ on elements of the corresponding principal series representation using an argument analogous to that presented in Section 3 of \cite{FG} (cf.~Proposition 3.5). \end{remark} \section{The Bessel Functional as a Hecke Algebra Intertwiner} \label{sec:intertwiner} In this section we will prove Theorem \ref{thm:main}. This proof relies on exploiting the connection between the generators $T_s$ of $\calH_0$ and the principal series intertwining operators $A_s$. In Sections \ref{sec:psio} and \ref{sec:intercalcn} we introduce these intertwining operators and describe how they interact with $T_s$ and the Bessel functional before offering the proof of Theorem \ref{thm:main} in Section \ref{sec:proof}. 
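We pause to record an elementary, standard observation that will serve as an informal guide to the computations of this section (it is not needed for any of the proofs below): \begin{remark} Each generator $T_{s_{\alpha}}$ of $\calH_0$ satisfies the quadratic relation $$(T_{s_{\alpha}} - q)(T_{s_{\alpha}} + 1) = 0,$$ so any one-dimensional character of $\calH_0$ must send each $T_{s_{\alpha}}$ to $q$ or to $-1$. The character $\ve$ appearing in Theorem \ref{thm:main} sends $T_{s_i} \mapsto q$ for $i < n$ and $T_{s_n} \mapsto -1$; the intertwining constants computed in Section \ref{sec:intercalcn} are exactly what single out this assignment, distinguishing the short simple roots from the long one. \end{remark}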
\subsection{Principal Series Intertwining Operators}\label{sec:psio} As mentioned above, the principal series intertwining operators turn out to be closely connected to the left action of the elements of the finite Hecke algebra on $\ind_B^G(\chi_{\textup{univ}}^{-1})^J$, and we will exploit this connection in order to show that our functional acts as a Hecke algebra intertwiner in the way predicted in Theorem \ref{thm:main}. Our initial goal is to define a family of intertwining operators, one for each $w \in W$, that take $\M$ to itself. Our first guess at such an operator $$\calI_w : \phi \mapsto \int_{U \cap w\overline{U}w^{-1}} \phi(w^{-1}ug)\,du,$$ does not quite work, because it does not preserve $\M$. As shown in Section 1.10 of \cite{HKP}, one can extend $\M$ by scalars to a completion of $R$ according to the roots $$\Phi_{w}^+ := \{\alpha \in \Phi^+ \mid w^{-1}(\alpha) \in \Phi^-\},$$ such that this extension of $\M$ is preserved by $\calI_w$. Instead of doing this, we choose to use normalized versions of these intertwiners, $A_w$, where $$A_w := \left( \prod_{\alpha \in \Phi^+} (1-\pi^{\alpha^{\vee}}) \right)\calI_w,$$ since, using basic properties of $\calI_w$ recorded in Lemma 1.13.1 in \cite{HKP}, we can see that $A_w$ preserves $\M$. Now, since $A_w \in \End_{\calH}(\M)$, we can regard $A_w$ as an element of $\calH$ acting on the left of $\M$. In particular, for a simple reflection $s_{\alpha}$, one can show that the desired relation between $A_{s_{\alpha}}$ and $T_{s_{\alpha}}$ is \begin{equation} \label{heckeintertwiner} A_{s_{\alpha}} = (1-q^{-1})\pi^{\alpha^{\vee}} + q^{-1}(1-\pi^{\alpha^{\vee}})T_{s_{\alpha}}.\end{equation} We pause here to note that it was Rogawski in \cite{Rog} who first used \eqref{heckeintertwiner} to recover earlier results of Rodier and others on the structure of the unramified principal series representations. 
However, Rogawski was using \eqref{heckeintertwiner} to recover information about the intertwining operators from his knowledge of the Hecke algebra action, which is the opposite of what we will do. \subsection{Calculating Intertwining Factors}\label{sec:intercalcn} In order to prove Theorem \ref{thm:main}, we will use \eqref{heckeintertwiner} to reduce the problem to understanding the interaction between the principal series intertwiners and the functional. In particular, if we make the assumption that the Bessel functional is unique, then, since $\calB \circ A_{s_{\alpha}}$ is a Bessel functional on $\ind_B^G(s_{\alpha}\cdot \chi_{\textup{univ}}^{-1})$, we know that it must be a constant multiple of $s_{\alpha} \circ \calB$. Hence, for each simple root $\alpha$, we want to calculate $c_{\alpha} \in R$ such that $$\calB \circ A_{s_{\alpha}} = c_{\alpha}(s_{\alpha} \circ \calB).$$ This turns out to be a tractable calculation, yielding the following results: \begin{Proposition} \label{intertwiners} Assume Theorem/Conjecture \ref{conjecture2}. With notation as above, we have that \begin{equation} \label{intertwiner1} \calB \circ A_{s_i} = (1-q^{-1}\pi^{\alpha_i^{\vee}})(s_{i}\circ \calB), \,\textup{if $i<n$,} \end{equation} and \begin{equation} \label{intertwiner2} \calB \circ A_{s_n} = (\pi^{\alpha_n^{\vee}}-q^{-1})(s_n\circ \calB). \end{equation} \end{Proposition} \begin{remark} Note that we need Theorem/Conjecture \ref{conjecture2} in order to assert that $\calB \circ A_{s_{\alpha}}$ is a scalar multiple of $s_{\alpha} \circ \calB$ in the rank $n>2$ case. After this point, the rest of the proof of Theorem \ref{thm:main} proceeds with no caveats.
\end{remark} In order to prove \eqref{intertwiner1}, we will need to calculate the images of the Iwahori-fixed vectors $\phi_1$ and $\phi_{s_i}$, for $i<n$, in the model: \begin{Lemma}\label{besselvalue} The Bessel functional takes the following values on the indicated Iwahori-fixed vectors: \begin{equation} \label{phi1} \calB(\phi_1) = \pi^{\rho_{\ve}^{\vee}} m(\overline{U}_A Z_L(\goth{o})\cap BJ), \end{equation} and \begin{equation} \label{phis} \calB(\phi_{s_i}) = \pi^{\rho_{\ve}^{\vee}} m(\overline{U}_AZ_L(\goth{o}) \cap Bs_iJ), \text{ if $i < n$}. \end{equation} Moreover, these values are non-zero, as the sets $\overline{U}_AZ_L(\goth{o}) \cap BJ$ and $\overline{U}_AZ_L(\goth{o}) \cap Bs_iJ$ have non-zero measure. \end{Lemma} \begin{remark} We already know that the integrals $\calB(\phi_1)$ and $\calB(\phi_{s_i})$ converge in $R$ from Section \ref{sec:mackey}; however, it will be important for the proof of Proposition \ref{intertwiners} that we show they are non-zero and invariant under the reflection $s_i$. \end{remark} Before we can prove Lemma \ref{besselvalue}, we must first prove the following lemma: \begin{Lemma}\label{blockiwahori} $\overline{U}_A \cap P_AJ_A = \overline{U}_A \cap J_A$. \end{Lemma} \begin{proof} Let $u \in \overline{U}_A \cap P_AJ_A$. We see that the standard argument for the rank 1 Iwahori factorization $J = (J\cap B)(J\cap \overline{U})$ can be adapted here to give $J_A = (J_A \cap P_A)(J_A \cap \overline{U}_A)$. Using this, we see that we can factor $u = pj$, with $p \in P_A$ and $j \in J_A \cap \overline{U}_A$. Rewriting this as $uj^{-1} = p$, we see that $uj^{-1} \in \overline{U}_A \cap P_A = \{1\}$, so $u = j \in J_A\cap \overline{U}_A$.
\end{proof} \begin{proof}[Proof of Lemma \ref{besselvalue}] Consider the Iwahori-Bruhat-like decomposition $$G = P_AJ_A \sqcup P_As_nJ_A,$$ where $J_A$ is the preimage of $P_A(k)$ under the canonical homomorphism $G(\goth{o}) \to G(k)$ (note that $P_A$ is the parabolic subgroup generated by $B$ and the root subgroups $x_{-\alpha_i}(F)$ for $i < n$). In order to see that \begin{equation} \label{eqn3} \calB(\phi_1) = \int_{Z_L(\goth{o})} \int_{\overline{U}_A} \psi_A(u) \phi_1(uz) \,du\,dz = m(\overline{U}_AZ_L(\goth{o}) \cap BJ),\end{equation} we must first show that \begin{equation}\label{eqn1} \overline{U}_AZ_L(\goth{o}) \cap BJ \subset (\overline{U}_A \cap J_A)J_A. \end{equation} Now, if $u \in \overline{U}_A$ and $z \in Z_L(\goth{o})$, then $uz \in BJ$ only if $u$ has an Iwahori-Bruhat decomposition $u = bwj$ with $b \in B$, $j \in J$, and $w \in W_L$, since \begin{equation} Z_L(\goth{o}) \subset J_A = \bigsqcup_{w\in W_L} JwJ.\end{equation} Additionally, we see that $\overline{U}_A \cap BwJ \subset \overline{U}_A\cap P_AJ_A$ whenever $w \in W_L$, so that we have \begin{equation} \label{eqn2} \overline{U}_AZ_L(\goth{o}) \cap BJ \subset (\overline{U}_A \cap P_AJ_A)J_A. \end{equation} Equation \eqref{eqn1} now follows from \eqref{eqn2} by Lemma \ref{blockiwahori}. Since the conductor of $\psi_A$ is $\goth{o}$, \eqref{eqn1} tells us that \begin{equation} \int_{Z_L(\goth{o})} \int_{\overline{U}_A} \psi_A(u) \phi_1(uz) \,du\,dz = \int_{Z_L(\goth{o})} \int_{\overline{U}_A} \phi_1(uz) \,du\,dz,\end{equation} and so we see that \eqref{eqn3} holds. Finally, we note that $$(\overline{U}_A \cap J)(Z_L(\goth{o}) \cap J) \subset \overline{U}_AZ_L(\goth{o}) \cap BJ,$$ which means that $\calB(\phi_1) \neq 0$. Making suitable adjustments to the argument given above gives us \eqref{phis}. \end{proof} \begin{remark} The proof of Theorem \ref{thm:functional} is now complete as well.
\end{remark} We now choose to normalize our Haar measure so that $m(\overline{U}_AZ_L(\goth{o}) \cap BJ) = 1$. We are ready to prove Proposition \ref{intertwiners}: \begin{proof}[Proof of Proposition \ref{intertwiners}] In order to make our calculation of $c_{\alpha_i}$ easier, for $i<n$, we will evaluate $\calB \circ A_{s_i}$ on the Iwahori-fixed vector $\phi_1+\phi_{s_i}$. From Lemma 1.13.1 in \cite{HKP}, we know that $$\calB(A_{s_i}(\phi_1 + \phi_{s_i})) = (1-q^{-1}\pi^{\alpha_i^{\vee}})\calB(\phi_1 + \phi_{s_i}).$$ Note that, if we can show that $\calB(\phi_1)$ and $\calB(\phi_{s_i})$ are both invariant under the reflection $s_i$, then we will have proved \eqref{intertwiner1}. By Lemma \ref{besselvalue}, we know that $\calB(\phi_1) = \pi^{\rho_{\ve}^{\vee}}$, and hence $\calB(\phi_1)$ is invariant under the reflection $s_i$. Similarly, from \eqref{phis}, we know that $\calB(\phi_{s_i})$ is a non-zero multiple of $\pi^{\rho_{\ve}^{\vee}}$, and so we see that $\calB(\phi_{s_i})$ is also invariant under the reflection $s_i$. Next, we calculate $c_{\alpha_n}$. Finding this intertwining constant is similar to the corresponding calculation for the Whittaker functional on $\GL(2)$. Let $\phi$ be an element of $\ind_B^G(\chi_{\textup{univ}}^{-1})^J$ on which $\calB$ is non-zero. A priori, we do not know that such an element exists; however, in our proof of \eqref{intertwiner1} we showed that $\phi_{1}$ is such a function. We see that $$\calB(A_{s_{n}}\phi)(1) = \pi^{\rho_{\ve}^{\vee}} \int_{Z_L(\goth{o})} \int_{\overline{U}_A} \int_{F}\psi_A(u) \phi(s_nx_{\alpha_n}(\tau)uz)\,d\tau\,du\,dz;$$ note that we only need to evaluate the functional at 1 in order to determine the intertwining constant.
Using the rank 1 Bruhat decomposition $$s_nx_{\alpha_n}(\tau) = h_{\alpha_n}(\tau^{-1})x_{\alpha_n}(\tau) x_{-\alpha_n}(\tau^{-1}),$$ where $h_{\alpha_n}$ denotes the one-parameter semisimple subgroup of the $\SL(2)$ embedded along $\alpha_n$, and excluding the point $\tau = 0$, we can rewrite this integral as $$\int_{Z_L(\goth{o})}\int_{\overline{U}_A} \int_{F^{\times}}\psi_A(u)\chi_{\textup{univ}}^{-1}(h_{\alpha_n}(\tau^{-1}))\phi(x_{-\alpha_n}(\tau^{-1})uz)\,d\tau\,du\,dz.$$ After factoring $u$ into root subgroups and performing a linear change of variables, we find that \begin{align*} \calB(A_{s_{\alpha_n}}\phi)(1) &= \pi^{\rho_{\ve}^{\vee}} \int_{Z_L(\goth{o})} \int_{\overline{U}_A} \psi_A(u)\phi(uz) \int_{F^{\times}}\psi_A(-\tau^{-1})\chi_{\textup{univ}}^{-1}(h_{\alpha_n}(\tau^{-1}))\,d\tau\,du\,dz \\&= c_{\alpha_n}(s_{\alpha_n}\circ \calB(\phi))(1), \end{align*} where $$c_{\alpha_n} = \int_{F^{\times}}\psi_A(-\tau^{-1})\chi_{\textup{univ}}^{-1}(h_{\alpha_n}(\tau^{-1}))\,d\tau.$$ This last integral can be evaluated shell by shell; after normalizing the Haar measure so that $m(x_{\alpha_n}(\goth{o}))=1$, we obtain the familiar Whittaker intertwining constant $$c_{\alpha_n} = (\pi^{\alpha_n^{\vee}}-q^{-1}).$$ \end{proof} \begin{remark} Note that we were able to verify the long root intertwining constant, \eqref{intertwiner2}, on an arbitrary Iwahori-fixed vector without invoking the uniqueness of the model. Thus, in the proof of Theorem \ref{thm:main}, we only make use of Theorem/Conjecture \ref{conjecture2} when we prove \eqref{intertwiner1} (in the rank $n>2$ case). \end{remark} \subsection{Proof of Theorem \ref{thm:main}} \label{sec:proof} In order to show that $\calB$ is an $\calH$-intertwiner as claimed in Theorem \ref{thm:main}, we will need to know the action of $T_{s_{\alpha}}$ on $V_{\ve} \simeq R$ explicitly for simple reflections $s_{\alpha}$.
The calculation of this action follows easily from the Bernstein relation \eqref{bernstein}: for a basis element $\pi^{\mu}v_{\ve}$, where, as before, $v_{\ve}$ denotes the eigenvector of $\calH_0$ corresponding to $\ve$, we see that \begin{align*} T_{s_{\alpha}} \cdot \pi^{\mu}v_{\ve} &= \pi^{s_{\alpha}(\mu)} \ve(T_{s_{\alpha}})v_{\ve} + (1-q)\frac{\pi^{s_{\alpha}(\mu)}-\pi^{\mu}}{1-\pi^{-\alpha^{\vee}}}v_{\ve} \\&= \left(\ve(T_{s_{\alpha}}) + \frac{1-q}{1-\pi^{-\alpha^{\vee}}}\right)\pi^{s_{\alpha}(\mu)}v_{\ve} + \frac{q-1}{1-\pi^{-\alpha^{\vee}}}\pi^{\mu}v_{\ve}; \end{align*} in the second equality, we have rearranged terms so that we can see how $T_{s_{\alpha}}\cdot \pi^{\mu}v_{\ve}$ is expressed as a linear combination of $\pi^{\mu}v_{\ve}$ and $\pi^{s_{\alpha}(\mu)}v_{\ve}$ over $R$. Thus, regarding $T_{s_{\alpha}}$ as an operator on $R$, we see that $T_{s_{\alpha}}$ acts on $f \in R$ by \begin{equation}\label{heckeoperator} T_{s_{\alpha}}: f \mapsto \left(\ve(T_{s_{\alpha}}) + \frac{1-q}{1-\pi^{-\alpha^{\vee}}}\right)f^{s_{\alpha}} + \frac{q-1}{1-\pi^{-\alpha^{\vee}}}f.\end{equation} \begin{proof}[Proof of Theorem \ref{thm:main}] The main result we need to prove is that $\calB$ is indeed a left $\calH$-module intertwiner from $\ind_B^G(\chi_{\textup{univ}}^{-1})^J$ to $V_{\ve}$, where $\ve$ is the character that acts by multiplication by $-1$ on the generator attached to the long simple root and by $q$ on those attached to the short simple roots. Once we have done this and checked that $\calF(1_{T(\goth{o})UJ}) = \calB(\phi_1) = \pi^{\rho_{\ve}^{\vee}}$, we can see that the diagram commutes since $\ind_B^G(\chi_{\textup{univ}}^{-1})^J \simeq \M \simeq \calH$.
That the diagram commutes on $1_{T(\goth{o})UJ}$ is immediate: we know that $\calB(\phi_1) = \pi^{\rho_{\ve}^{\vee}}$ from Lemma \ref{besselvalue}, and we observe that $\calF(1_{T(\goth{o})UJ}) = \calF(1_{T(\goth{o})UJ} \ast 1_J) = \pi^{\rho_{\ve}^{\vee}}$. In order to prove that $\calB$ is a left $\calH$-module intertwiner, it suffices to show, on a set of generators $\{h\}$ for $\calH$, that $$\calB(h\cdot \phi) = h\cdot \calB(\phi), \textup{ for any $\phi \in \ind_B^G(\chi_{\textup{univ}}^{-1})^J.$}$$ In particular, we will choose our set of generators to be those elements of the form $\pi^{\mu}T_{s_{\alpha}}$ where $\mu \in X_{\ast}(T)$ and $s_{\alpha}$ is a simple reflection. Since $\pi^{\mu}$ acts by translation on both $V_{\ve}$ and $\ind_B^G(\chi_{\textup{univ}}^{-1})^J$, we can reduce to checking the equality on $T_{s_{\alpha}}$. From \eqref{heckeintertwiner}, we immediately see that $$q^{-1}(1-\pi^{\alpha^{\vee}})\calB(T_{s_{\alpha}}\cdot \phi) = \calB(A_{s_{\alpha}}\phi)- (1-q^{-1})\pi^{\alpha^{\vee}}\calB(\phi).$$ Applying Proposition \ref{intertwiners}, we see that $$q^{-1}(1-\pi^{\alpha^{\vee}})\calB(T_{s_{\alpha}}\cdot \phi) = \left\{\begin{array}{ll} (1-q^{-1}\pi^{\alpha^{\vee}})(s_{\alpha}\circ \calB)(\phi) + (q^{-1}-1)\pi^{\alpha^{\vee}}\calB(\phi) & \textup{if $\alpha = \alpha_i$ $(i<n)$} \\ (\pi^{\alpha^{\vee}}-q^{-1})(s_{\alpha}\circ \calB)(\phi) + (q^{-1}-1)\pi^{\alpha^{\vee}}\calB(\phi) & \textup{if $\alpha = \alpha_n$.} \end{array}\right.$$ Dividing by $q^{-1}(1-\pi^{\alpha^{\vee}})$, we see that the operator acting on $\calB(\phi)$ is $$f \mapsto \frac{q}{1-\pi^{\alpha^{\vee}}}\left\{\begin{array}{ll} (1-q^{-1}\pi^{\alpha^{\vee}}) f^{s_{\alpha}} + (q^{-1}-1)\pi^{\alpha^{\vee}}f & \textup{if $\alpha = \alpha_i$ $(i<n)$} \\ (\pi^{\alpha^{\vee}}-q^{-1})f^{s_{\alpha}} + (q^{-1}-1)\pi^{\alpha^{\vee}}f & \textup{if $\alpha = \alpha_n$.} \end{array}\right.$$ If we compare this with the operator in \eqref{heckeoperator} that described the action of
$T_{s_{\alpha}}$ on $R$, we see that the two agree exactly in both cases (recall that $\ve(T_{s_i}) = q$, if $i<n$, and $\ve(T_{s_n}) = -1$). Thus, $\calB(T_{s_{\alpha}}\cdot \phi) = T_{s_{\alpha}}\cdot \calB(\phi)$ for any $\phi \in \ind_B^G(\chi_{\textup{univ}}^{-1})^J$ and simple reflection $s_{\alpha}$. \end{proof} \section{Calculating Distinguished Vectors at Torus Elements} \label{sec:spherical} In this section, we will focus on calculating the images of distinguished vectors in unique models of the universal principal series of $\GSp(2n)$. In Section \ref{sec:besspherical}, we will conclude our discussion of the Bessel functional with a proof of Theorem \ref{thm:spherical}. Then, in Section \ref{sec:womodels}, we will move on to discussing the connection between the Whittaker-Orthogonal models defined in \cite{BFG} and the proposed $\calH$-intertwiner corresponding to the fourth character, $\sigma$, of the finite Hecke algebra of $\GSp(4)$. \subsection{Calculating Distinguished Vectors in the Bessel Model} \label{sec:besspherical} We will use Theorem \ref{thm:main} to calculate the images of certain distinguished vectors under $\calB$ on anti-dominant, integral torus elements, culminating in a proof of Theorem \ref{thm:spherical}. Before we can prove Theorem \ref{thm:spherical}, we will calculate the images of the Iwahori-fixed vectors $\phi_w = \eta(1_{T(\goth{o})UwJ})$ in terms of the action of $\calH$ on $v_{\ve} = \pi^{\rho_{\ve}^{\vee}}$, as described in Theorem \ref{thm:iwahori}. Using the linearity of $\calB$ along with the alternator formula developed in \cite{BBF2}, we will arrive at a proof of Theorem \ref{thm:spherical}, which we will show matches the expression obtained in Corollary 1.8 in \cite{BFF}.
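\begin{remark} As a first, essentially trivial, consistency check (recorded here only for orientation): for $w = 1$, Lemma \ref{besselvalue} and our normalization of the Haar measure give $$\calB(\phi_1) = \pi^{\rho_{\ve}^{\vee}} = v_{\ve},$$ which is the image of the identity element of $\calH$ acting on $v_{\ve}$, in agreement with the commutativity of the diagram \eqref{diagram} on $1_{T(\goth{o})UJ}$. \end{remark}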
In order to prove Theorem \ref{thm:iwahori}, we must first prove the following Iwahori factorization: \begin{Proposition} \label{prop:iwahori} $J = (J\cap B)(J\cap \overline{U}_AZ_L(\goth{o}))$. \end{Proposition} The proof of this proposition relies on the same result in the rank 2 case, which we prove now as a separate lemma: \begin{Lemma} \label{iwahoria} If $G=\GSp(4)$, then $J = (J\cap B)(J\cap \overline{U}_AZ_L(\goth{o}))$. \end{Lemma} \begin{proof} Using the usual Iwahori factorization, we can see that it suffices to show that the subgroup $x_{-\alpha_1}((\pi))$ of $J$ is contained in $(J \cap B)(J \cap \overline{U}_AZ_L(\goth{o}))$.
To see that this is the case, observe that, for $\tau = u\pi^j$ with $u \in \goth{o}^{\times}$ and $j > 0$, we can factor $x_{-\alpha_1}(\tau) = bh$, where $$b = \begin{pmatrix} g & 0 \\ 0 & \det g \cdot (g')^{-1} \end{pmatrix}\textup{ with } g = \begin{pmatrix} (1-\omega_1 \tau^2)^{-1} & -\omega_1 \tau(1-\omega_1 \tau^2)^{-1}\\ 0 & 1\end{pmatrix},$$ and $$h = \begin{pmatrix} \gamma & 0 \\ 0 & \det \gamma\cdot (\gamma')^{-1} \end{pmatrix} \textup{ with } \gamma = \begin{pmatrix} 1 & \omega_1 \tau \\ \tau & 1 \end{pmatrix}.\footnote{Recall that the parameter $-\omega_1$ was defined to be an entry of $A$ back in Section \ref{sec:besselmodel}.}$$ \end{proof} \begin{proof}[Proof of Proposition \ref{prop:iwahori}] We begin by noting that, using the usual Iwahori factorization (as in Lemma \ref{iwahoria}), it suffices to show that every element in $J \cap \overline{U} \cap L_A$ is contained in $(J \cap B)(J\cap \overline{U}_AZ_L(\goth{o}))$. Let $\overline{u} \in J \cap \overline{U} \cap L_A$. We can factor $\overline{u}$ into a product of elements from the root subgroups contained in $L_A$, so that $$\overline{u} = \prod_{\alpha \in \Phi_L^+} u_{-\alpha},$$ where $u_{-\alpha} \in x_{-\alpha}((\pi))$. Note that, if $\overline{u}$ has no nontrivial factors when it is factored into root subgroups, then $\overline{u} = 1$. Now, suppose that $\overline{u}$ has $k$ distinct nontrivial factors when it is factored into root subgroups as $\overline{u} = \prod_{\alpha \in \Phi_L^+} u_{-\alpha}$. Observe that, since each of the roots in $\Phi_L$ is a short root, if $\alpha \in \Phi_L^+$, then $\alpha = \sum_{i=1}^{n-1} c_i\alpha_i$ where $c_i \in \{0,1\}$; let $c(\alpha) := \sum_{i=1}^{n-1} c_i$. Write $\overline{u}$ as a product where the $u_{-\alpha}$'s are ordered from left to right by increasing $c(\alpha)$. Let $\beta \in \Phi_L^+$ be the root such that $u_{-\beta}$ is the rightmost factor in $\overline{u}$ as described above.
Then, by Lemma \ref{iwahoria}, we can write $u_{-\beta} = t_{\beta}u_{\beta}z_{\beta}$, with $t_{\beta} \in T\cap J$, $u_{\beta} \in x_{\beta}(\goth{o})$, and $z_{\beta} \in J \cap \overline{U}_AZ_L(\goth{o})$. Observe that $t_{\beta}^{-1}u_{-\alpha}t_{\beta} \in x_{-\alpha}((\pi))$, so that moving $t_{\beta}$ all the way to the left in this factorization of $\overline{u}$ leaves us with a factorization of $t_{\beta}^{-1}\overline{u}(u_{\beta}z_{\beta})^{-1}$ into elements from the same root subgroups, minus $x_{\beta}$, in the same order as in the initial factorization of $\overline{u}$. Let $u_{-\alpha}$ now refer to the element of $x_{-\alpha}((\pi))$ in the factorization of $t_{\beta}^{-1}\overline{u}(u_{\beta}z_{\beta})^{-1}$ into root subgroups. We would like to show that we can move $u_{\beta}$ across $\prod_{\alpha \neq \beta} u_{-\alpha}$ and end up with $b\left(\prod_{\alpha \neq \beta} u_{-\alpha}\right)z_{\beta}$. To this end, we will make use of the following properties (assume $\alpha,\alpha' \in \Phi_L^+$): \begin{enumerate} \item \label{fact1} If $c(\alpha)< c(\alpha')$ then $x_{-\alpha}(t)x_{\alpha'}(s) = x_{\alpha'}(s)x_{\alpha'-\alpha}(ts)x_{-\alpha}(t)$, and if $\alpha'-\alpha \in \Phi$ then $\alpha'-\alpha \in \Phi_L^+$ and $c(\alpha'-\alpha) < c(\alpha')$; otherwise $x_{-\alpha}(t)$ and $x_{\alpha'}(s)$ commute. \item If $c(\alpha)> c(\alpha')$ then $x_{-\alpha}(t)x_{\alpha'}(s) = x_{\alpha'}(s)x_{\alpha'-\alpha}(ts)x_{-\alpha}(t)$, and if $\alpha'-\alpha \in \Phi$ then $\alpha'-\alpha \in \Phi_L^-$ and $c(\alpha'-\alpha) < c(\alpha)$; otherwise $x_{-\alpha}(t)$ and $x_{\alpha'}(s)$ commute. \item If $c(\alpha) = c(\alpha')$ but $\alpha'\neq \alpha$, then $x_{-\alpha}(t)$ and $x_{\alpha'}(s)$ commute.
\item Thinking of $x_{\alpha}(t)$ as a subgroup of $\GL(2)$ embedded in $G$, $$x_{-\alpha}(t)x_{\alpha}(s) = \begin{pmatrix} (1+ts)^{-1} & \\ & 1+ts\end{pmatrix} \begin{pmatrix} 1 & s(1+ts) \\ & 1 \end{pmatrix} \begin{pmatrix} 1 & \\ t(1+ts)^{-1} & 1 \end{pmatrix}.$$ \end{enumerate} From these properties, we see that when we move $u_{\beta}$ across $\prod u_{-\alpha}$ and next to $t_{\beta}$, we are left with an element that we can factor into root subgroups where the roots in question may be in $\Phi_L^+$ or $\Phi_L^-$, but we know that for each such $\alpha$, $c(\alpha) < c(\beta)$ (or $c(-\alpha) < c(\beta)$). At this point, we move each factor of $x_{\alpha}(t)$ with $\alpha \in \Phi_L^+$ left until there are no factors of the form $x_{\alpha'}(s)$ with $\alpha' \in \Phi_L^-$ to its left, starting with the leftmost such factor. Additionally, we observe from these properties that commuting $x_{\alpha}(s)$ past $x_{\alpha'}(t)$ (with $\alpha,\alpha'$ as in the previous sentence) will produce either (a) a factor of $x_{\alpha+\alpha'}(st)$ where either $\alpha+\alpha' \in \Phi_L^-$ or $\alpha+\alpha' \in \Phi_L^+$ with $c(\alpha+\alpha')< c(\alpha)$, (b) a factor $t_{\alpha}$ in $h_{\alpha}(\goth{o})$ and a factor of $x_{\alpha}(s(1+ts))$, or (c) nothing, if the two factors commute. Note that, in case (b), we can move $t_{\alpha}$ left across all factors of the form $x_{\alpha''}(z)$ with $\alpha'' \in \Phi_L^-$ without creating any additional factors of root subgroups, as described earlier. Thus, we see that the process of moving factors of the form $x_{\alpha}(s)$ with $\alpha \in \Phi_L^+$ will ultimately terminate, leaving us with a factorization of $\overline{u}$ as $$\overline{u} = b \left(\prod_{\alpha \in \Phi_L^+} x_{-\alpha}(r_{\alpha})\right) z_{\beta},$$ where $b \in J \cap B$ and where we know that there is no nontrivial factor of $x_{-\beta}((\pi))$ in this product. 
We repeat this process on $\prod_{\alpha \in \Phi_L^+} x_{-\alpha}(r_{\alpha})$, and then again until we are left with a factorization $\overline{u} = bz$ with $b \in J \cap B$ and $z \in J \cap \overline{U}_AZ_L(\goth{o})$ - note that we can be sure that this process will terminate since (a) at each step, we are removing the representative from a specific root subgroup from the product; (b) the maximum value of $c(\alpha)$ in a given factorization is no larger than the maximum value in the previous factorization; (c) the value of $c(\alpha)$ for each new factor $u_{-\alpha}$ introduced in the process of moving $u_{\beta}$ across the product is strictly smaller than $c(\beta)$; and (d) $|\Phi_L^+|< \infty$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:iwahori}] We begin by looking at the right-hand side, $T_w\pi^{\lambda} \cdot v_{\ve}$. We will use the commutativity of the diagram \eqref{diagram} and the dominance of $\lambda$ to show that $$\calB(\phi_w \ast 1_{J\pi^{\lambda}J}) = T_w\pi^{\lambda} \cdot v_{\ve},$$ so that it suffices to show that \begin{equation}\label{eqn:torus} \calB(\pi^{-\lambda}\cdot \phi_w) = \frac{1}{m(J\pi^{\lambda}J)}\calB(\phi_w \ast 1_{J\pi^{\lambda}J}).\end{equation} In order to see that this second equality holds, first note that $\pi^{-\lambda}\cdot \phi_w = \eta(1_{T(\goth{o})UwJ\pi^{\lambda}})$ by definition (here we emphasize the definition of $\eta$ as a vector-space isomorphism from $C_c(T(\goth{o})U \bs G)$ to $\ind_B^G(\chi_{\textup{univ}}^{-1})$).
Now, if we look at $\calB(\phi_w \ast 1_{J\pi^{\lambda}J}) = \calB(\eta(1_{T(\goth{o})UwJ} \ast 1_{J\pi^{\lambda}J}))$, we see from the definition of the convolution that $$\calB(\phi_w \ast 1_{J\pi^{\lambda}J}) = \int_{\overline{U}_AZ_L(\goth{o})} \int_{J \bs J\pi^{-\lambda}J} \int_J \widetilde{\psi}(h)\eta(1_{T(\goth{o})UwJ})(hj\gamma)\,dj\,d\gamma\,dh.$$ Using Proposition \ref{prop:iwahori} and making the change of variables $h \mapsto hj^{-1}$, the integral above simplifies to $$\calB(\phi_w \ast 1_{J\pi^{\lambda}J}) = m(J\pi^{\lambda}J) \int_{\overline{U}_AZ_L(\goth{o})} \widetilde{\psi}(h)\eta(1_{T(\goth{o})UwJ\pi^{\lambda}})(h)\,dh,$$ since the conductor of $\psi$ is $\goth{o}$. Thus, we have established \eqref{eqn:torus}. To see that $\calB(\phi_w \ast 1_{J\pi^{\lambda}J}) = T_w\pi^{\lambda}\cdot v_{\ve}$, we first note that $$\phi_w \ast 1_{J\pi^{\lambda}J} = \eta((1_{T(\goth{o})UJ} \ast T_w)\ast 1_{J\pi^{\lambda}J}) = \eta((T_w\pi^{\lambda})\cdot 1_{T(\goth{o})UJ}),$$ where the second equality follows because $\lambda$ is dominant. Thus, by Theorem \ref{thm:main}, we see that $$\calB(\phi_w \ast 1_{J\pi^{\lambda}J}) = T_w\pi^{\lambda} \cdot v_{\ve}.$$ \end{proof} As noted at the beginning of the section, the linearity of $\calB$ gives us the following immediate corollary regarding $\phi^{\circ}$: \begin{Corollary}\label{spherical} For dominant $\lambda$, $$\calB(\pi^{-\lambda} \cdot \phi^{\circ}) = \frac{1}{m(J\pi^{\lambda}J)} \sum_{w \in W} T_w\pi^{\lambda}\cdot v_{\ve}.$$ \end{Corollary} In order to prove Theorem \ref{thm:spherical}, we will need to make use of an identity of operators on $\Frac(R)$. Recall that when we recorded the action of $T_{s_{\alpha}}$ as an operator on $R$ in \eqref{heckeoperator}, it was with $R$ regarded as a left $\calH$-module with eigenvector 1. 
Our goal is to calculate the image of the spherical function in the model $V_{\ve}$, and, as noted previously, $R$ is isomorphic to $V_{\ve}$ under the isomorphism $1 \mapsto \pi^{\rho_{\ve}^{\vee}}$. Then, extending the action of $\calH$ to $\Frac(R)$, we realize the operator associated to $T_{s_{\alpha}}$ via this isomorphism (now regarded as an isomorphism of $\Frac(R)$) as $$\goth{T}_{s_{\alpha}}:= \pi^{\rho_{\ve}^{\vee}}T_{s_{\alpha}}\pi^{-\rho_{\ve}^{\vee}},$$ so that we can rewrite Corollary \ref{spherical} as $$\calB(\pi^{-\lambda}\cdot \phi^{\circ}) = \frac{\pi^{-\rho_{\ve}^{\vee}}}{m(J\pi^{\lambda}J)}\sum_{w \in W} \goth{T}_w\pi^{\lambda+2\rho_{\ve}^{\vee}}.$$ Explicitly, the action of $\goth{T}_{s_{\alpha}}$ on $\Frac(R)$ for a simple root $\alpha$ is given by $$\goth{T}_{s_{\alpha}}: f \mapsto \frac{q}{1-\pi^{\alpha^{\vee}}}\left\{\begin{array}{ll} (\pi^{\alpha^{\vee}}-q^{-1})\pi^{\alpha^{\vee}} f^{s_{\alpha}} + (q^{-1}-1)\pi^{\alpha^{\vee}}f & \textup{if $\alpha = \alpha_i$ $(i<n)$} \\ (1-q^{-1}\pi^{\alpha^{\vee}})f^{s_{\alpha}} + (q^{-1}-1)\pi^{\alpha^{\vee}}f & \textup{if $\alpha = \alpha_n$.} \end{array}\right.$$ The operator identity that we will use is a deformed version of the Weyl character formula, established in \cite{BBF2} in a more general setting where $G$ is only assumed to be split, connected and reductive.
Let $\Omega$ denote the operator on $\Frac(R)$ given by the Weyl character formula-like expression $$\Omega:= \pi^{-\rho^{\vee}}\prod_{\alpha \in \Phi^+} (1-\pi^{-\alpha^{\vee}})^{-1}\A(\pi^{-\rho^{\vee}}).$$ The deformation depends on the choice of character of the Hecke algebra, as described in the following theorem: \begin{Theorem}[\cite{BBF2}, Theorem 13]\label{thm:alternator} If we have a character $\tau$ of $\calH_0$ for $G$, then $$\sum_{w\in W} \goth{T}_w = \left(\prod_{\alpha \in \Phi_{-1}^+}(1-q\pi^{\alpha^{\vee}})\right) \Omega \left(\prod_{\alpha \in \Phi_{q}^+}(1-q\pi^{\alpha^{\vee}})\right),$$ where $\Phi_{-1}^+$, resp.~$\Phi_{q}^+$, are those positive roots that are the same length as the simple roots $\alpha$ such that $\tau(\alpha) = -1$, resp.~$\tau(\alpha)=q$. \end{Theorem} In the case of $\ve$, the set $\Phi_{-1}^{+}$ consists of the long positive roots, and $\Phi_{q}^{+}$ consists of the short positive roots, which leads us to Theorem \ref{thm:spherical}. The image of the spherical function in the Bessel model evaluated on torus elements was previously calculated in the case when $n=2$ by Bump, Friedberg and Furusawa in Corollary 1.8 in \cite{BFF}, and, indeed, it can be confirmed by observation that our formula matches theirs, up to normalization.\footnote{In \cite{BFF}, they work with a choice of unramified principal series $\ind_B^G \chi$ instead of the universal principal series. The parameters denoted $\alpha_1,\alpha_2$ in \cite{BFF} can be expressed as $\alpha_1^{2} = \chi(\pi^{-(\alpha_1+\alpha_2)^{\vee}})$ and $\alpha_2^2 = \chi(\pi^{\alpha_1^{\vee}})$.} \subsection{Whittaker-Orthogonal Models and the Shalika character}\label{sec:womodels} In this section, we consider the character $\sigma$ of $\calH_0$, which was defined in Section \ref{sec:intro} to be the character which acts by $q$ on long simple roots and $-1$ on short simple roots.
For each of the other three characters of $\calH_0$ on $\GSp(4)$, we have found a subgroup $S \subset G$ such that the model formed by inducing from $S$ to $G$ contains that character with multiplicity one - $\sigma$ is the only character for which we have not found an explicit integral realization of $\calL$ as in the diagram \eqref{diagram}. However, even without this information, we can still say what the image of the spherical function under $\calL$ in $V_{\sigma}$, evaluated on torus elements, would have to be, by the commutativity of \eqref{diagram} combined with Theorem \ref{thm:alternator}. As stated in Proposition \ref{prop:shalika}, we will show that the result matches the image of the spherical function in the Whittaker-Orthogonal model (WO-model) defined by Bump, Friedberg, and Ginzburg in \cite{BFG}. The WO-model is defined for representations of $\SO(2n+2)$. Let $\overline{U}$ be the opposite unipotent radical of the parabolic subgroup of $\SO(2n+2)$ with a Levi component that is diagonal except for a central $\SO(4)$ block and let $\psi$ be a character of $\overline{U}$ defined as $$\psi(\overline{u}) := \psi_0(\overline{u}_{21} + \overline{u}_{32} + \cdots + \overline{u}_{n-1,n-2} + \overline{u}_{n+1,n-1} + \overline{u}_{n+2,n-1}),$$ where $\psi_0$ is a nontrivial additive character of $F$ with conductor $\goth{o}$. Let $Z(\psi) \simeq \SO(3)$ be the stabilizer of this character contained in the Levi. Then, for an irreducible admissible representation $\theta$ of $\SO(2n+2)$, we say that $\theta$ has a WO-model if there exists a nonzero linear functional $\WO$ on the representation space $V_{\theta}$ of $\theta$ such that $$\WO(\theta(\overline{u}h)v) = \psi(\overline{u})\WO(v),$$ for $\overline{u} \in \overline{U}$, $h \in Z(\psi)$, and $v \in V_{\theta}$. The uniqueness of WO-models is established in Theorem 4.1 in \cite{BFG}.
The authors then show, in Theorem 4.2, that, if $\hat{\chi} = \ind_B^G(\chi)$ is an irreducible unramified principal series representation, then $\hat{\chi}$ admits a WO-model if and only if $\hat{\chi}$ is a \emph{local lifting} of an unramified principal series representation of $\Sp(2n)$. We call $\hat{\chi}$ a local lifting from $\Sp(2n)$ if one of the Langlands' parameters of $\hat{\chi}$ is 1. The authors note that this is in conformity with Langlands' functoriality since the L-group of $\Sp(2n)$ is $\SO(2n+1)$, and that if one of the Langlands' parameters is 1 then the given conjugacy class is in the image of the inclusion of L-groups $\SO(2n+1) \into \SO(2n+2)$.\footnote{Recall that the L-group of $\SO(2n+2)$ is $\SO(2n+2)$.} Now, suppose that we have an unramified principal series representation of $\SO(6)$, $\hat{\chi} = \ind_B^G \chi$, where $\chi = (\chi_1,\hdots,\chi_{n+1})$ with $\chi_1,\hdots,\chi_{n+1}$ quasicharacters of $F^{\times}$. Let $z_i = \chi_i(\pi)$ for each $i$, and let $z = \chi(\pi)$. Then, if $z_{n+1} = 1$, we have the following formula from \cite{BFG} for the image of the spherical vector of $\hat{\chi}$ under $\WO$ evaluated at torus elements $\pi^{\lambda}$, with $\lambda = (\lambda_1,\lambda_2,0)$: \begin{Theorem}[\cite{BFG}, Theorem 4.3]\label{thm:BFG} Let $\WO$ be the WO-functional on $V_{\hat{\chi}}$ such that $\WO(\phi^{\circ}) = 1$. For $\lambda \in X_{\ast}(T)$, we have $$\WO(z^{-\lambda}\cdot \phi^{\circ}) = z_1^{-2\lambda_1}\cdot \frac{\A(z^{\rho^{\vee}} z_1^{\lambda_1}(1-q^{-1}z_1^{-1}))}{\A(z^{\rho^{\vee}})}.$$ \end{Theorem} \begin{remark} Note that all of the root data here are given with respect to the root system for $\SO(2n+2)$, with $\rho$ denoting the half-sum of the positive roots. \end{remark} We will conflate $\hat{\chi}$ with the representation of $\GSp(4)$ of which it is a local lifting (and, hence, we will also conflate the spherical functions for both representations). 
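To make the local lifting condition concrete, note that in coordinates in which the maximal torus of the dual group $\SO(2n+2)$ consists of the elements $$t = \textup{diag}(z_1,\hdots,z_n,z_{n+1},z_{n+1}^{-1},z_n^{-1},\hdots,z_1^{-1}),$$ the condition $z_{n+1} = 1$ says precisely that the Satake parameter of $\hat{\chi}$ fixes a nonisotropic vector in the hyperbolic plane spanned by the two middle coordinates, and hence lies in a copy of $\SO(2n+1) \into \SO(2n+2)$, in accordance with the functoriality heuristic above.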
In the following proof, let $\lambda_1,\lambda_2 \in X_{\ast}(T)$ with $\lambda_1 = (1,0)$ and $\lambda_2 = (0,1)$. \begin{proof}[Proof of Proposition \ref{prop:shalika}] We begin by evaluating the two functionals at $z^{\lambda_1}$ and $\pi^{\lambda_1}$, respectively, where $\lambda_1$ (resp.~$\lambda_2$) is embedded in the cocharacter group of the torus of $\SO(6)$ as $(1,0,0)$ (resp.~$(0,1,0)$). In this case, we have that $$\WO(z^{-\lambda}\cdot \phi^{\circ}) = \frac{\A(z^{\rho^{\vee}}z_1)-q^{-1}\A(z^{\rho^{\vee}})}{\A(z^{\rho^{\vee}})}.$$ In order to give explicit expressions for these alternators, we will need to make a choice of quasicharacters $\mu_i$ such that $\mu_i^2 = \chi_i$ for each $i$ - there are two choices for each $\mu_i$, and we fix one arbitrarily. Let $\xi_i = \mu_i(\pi)$ for each $i$. We find that \begin{align*} \A(z^{\rho^{\vee}}) = \A(\xi_1^3\xi_2) &= \frac{(\xi_1^2 + 1)(\xi_1\xi_2 + 1)(\xi_1\xi_2 - 1)(\xi_2^2 + 1)(\xi_1 + \xi_2)(\xi_1 - \xi_2)}{\xi_1^3\xi_2^3}, \textup{ and} \\ \A(z^{\rho^{\vee}}z_1) = \A(\xi_1^5\xi_2) &= \frac{\xi_1^4\xi_2^2 + \xi_1^2\xi_2^4 - \xi_1^2\xi_2^2 + \xi_1^2 + \xi_2^2}{\xi_1^2\xi_2^2}\cdot\A(\xi_1^3\xi_2). \end{align*} Simplifying, we see that \begin{equation}\label{wospherical} \WO(z^{-\lambda}\cdot \phi^{\circ}) = \frac{\xi_1^4\xi_2^2 + \xi_1^2\xi_2^4 - \xi_1^2\xi_2^2 + \xi_1^2 + \xi_2^2 - q^{-1}\xi_1^2\xi_2^2}{\xi_1^2\xi_2^2} = z_1+z_2-1+z_2^{-1}+z_1^{-1}-q^{-1}.\end{equation} On the other hand, in order to calculate $\calF(\pi^{-\lambda}\cdot 1_{T(\goth{o})UK})$, where $\calF$ is the functional from the universal principal series $M$ to $V_{\sigma}$ defined in the diagram \eqref{diagram}, we can use Theorem \ref{thm:alternator} with $\Phi_{-1}^{+} = \{\alpha_1,\alpha_1+\alpha_2\}$ and $\Phi_{q}^{+} = \{\alpha_2,2\alpha_1+\alpha_2\}$, along with the commutativity of \eqref{diagram}.
Hence, we have that \begin{align*} \calF(\pi^{-\lambda}\cdot 1_{T(\goth{o})UK}) &= \sum_{w \in W} \goth{T}_w \cdot \pi^{\lambda} \\&= N\cdot \frac{\pi^{2\lambda_1+\lambda_2} + \pi^{\lambda_1+2\lambda_2} - q^{-1}\pi^{\lambda_1+\lambda_2} - \pi^{\lambda_1+\lambda_2} + \pi^{\lambda_1}+ \pi^{\lambda_2}}{\pi^{\lambda_1+\lambda_2}}, \\\textup{where } N &= \frac{-(q^{-1} + 1)(q^{-1}\pi^{\lambda_2} - \pi^{\lambda_1})(\pi^{\lambda_1+\lambda_2} - q^{-1})}{\pi^{2\lambda_1+\lambda_2}}. \end{align*} As defined in \eqref{diagram}, $\calF$ is not normalized so that $\calF(1_{T(\goth{o})UK}) = 1$, as $\WO$ is in Theorem \ref{thm:BFG}. Indeed, we see that \begin{align*} \calF(1_{T(\goth{o})UK}) &= \sum_{w \in W} \goth{T}_w \cdot 1 \\&= N. \end{align*} So, if we normalize $\calF$ so that $\calF(1_{T(\goth{o})UK}) = 1$, we see that $$\calF(\pi^{-\lambda}\cdot 1_{T(\goth{o})UK}) = \pi^{\lambda_1} + \pi^{\lambda_2}-1+\pi^{-\lambda_2}+\pi^{-\lambda_1}-q^{-1},$$ which agrees with \eqref{wospherical}, indicating that the WO-functional is a lift of the proposed intertwiner corresponding to $\sigma$. \end{proof} \section{Unique Models and the Springer Correspondence} \label{sec:springer} In this section, we will describe how we expect to construct a gGGr containing $V_{\tau}$ in its $J$-fixed vectors for a given irreducible representation $\tau$ of $\calH_0$. As described in Section \ref{sec:intro}, the trivial character and the sign character of $\calH_0$ are connected to the spherical model and the Whittaker model, respectively, and the character $\ve$ of $\calH_0$ that acts by $-1$ on long simple roots and by $q$ on short simple roots is similarly connected to the Bessel model for $G = \SO(2n+1)$ or $G = \Sp(2n)$. As mentioned in Section \ref{sec:gggr}, we believe that the Springer correspondence plays a major role in this connection.
The Springer correspondence is a bijection between irreducible representations of $W$ and pairs $(\calO, \mu)$, where $\calO$ is a nilpotent orbit of the Lie algebra and $\mu$ is a character of $A(\calO)$, a subgroup of the $G$-equivariant fundamental group. Geometrically, this bijection arises from the realization of the irreducible representations of $W$ in the top degree cohomology group of partial flag varieties. If $G$ is defined over a finite field, Kawanaka suggests in Conjecture 2.4.5 in \cite{Kaw} that $V_{\tau}$ should appear, with multiplicity one, in the $B$-fixed vectors of the gGGr $\Gamma_{A,\alpha}$, where the orbit containing $A$ is associated to $\tau$ primarily using the Springer correspondence. Taking inspiration from Kawanaka's conjecture, it is believed that the analogous picture for $G$ defined over a $p$-adic field is the following: $$\begin{tikzpicture}[>=angle 90] \matrix(a)[matrix of math nodes, row sep=3em, column sep=4em, text height=1.5ex, text depth=0.25ex] {\ind_B^G(\chi_{\textup{univ}}^{-1})^{J} & \Gamma_{A,\alpha}^J \\& V_{\tau} \simeq R\\}; \path[->,font=\scriptsize] (a-1-1) edge node[above]{$\calF$} (a-1-2); \path[->,font=\scriptsize] (a-1-2) edge node[right]{(evaluate at 1)} (a-2-2); \path[dashed,->,font=\scriptsize] (a-1-1) edge node[below]{$\calF_1$} (a-2-2); \end{tikzpicture}$$ In this diagram, $\calF$ is a left $\calH$-intertwiner of the universal principal series and the gGGr $\Gamma_{A,\alpha}$, and $\calF_1$ is the functional obtained by evaluation at 1, i.e.~$\phi \mapsto \calF(\phi)(1) \in R$, where $\phi \in \ind_B^G (\chi_{\textup{univ}}^{-1})^J$. It should be noted that $A$ is not simply the nilpotent orbit associated to $\tau$ under the bijection described by the Springer correspondence - as we will describe below, the connection between $A$ and $\tau$ goes a bit deeper than this. 
We emphasize here that the exact nature of this connection is still under investigation, in part due to the limited number of data points currently available - we hope to find explicit examples that fit into this program beyond the three mentioned above. In what follows, we will restrict ourselves to the setting of $\GSp(2n)$. In this case, we can regard the Springer correspondence as a combinatorial recipe relating the two relevant sets, so that we can quickly get to the heart of our proposed connection between gGGr's and characters of $\calH_0$. Recalling that for type $C_n$, $W$ is the semidirect product of $S_n$ and $(\Z/2)^n$, we can parametrize the irreducible representations of $W$ as well as the nilpotent orbits of $\sp(2n)$ using the following theorems, which can be found in \cite{CM}: \begin{Theorem}[\cite{CM}, Theorem 10.1.2] The irreducible representations of the Weyl group $W$ of type $C_n$ are parametrized by ordered pairs $({\bf p},{\bf q})$ of partitions such that $|{\bf p}| + |{\bf q}| = n$. The resulting representation has dimension $$\dim\pi_{({\bf p},{\bf q})} = {n \choose |{\bf p}|}(\dim \pi_{{\bf p}})(\dim \pi_{{\bf q}}).$$ We also have $$\pi_{(\overline{{\bf p}},\overline{{\bf q}})} \simeq \pi_{({\bf q},{\bf p})} \otimes sgn,$$ where $\overline{{\bf p}}$ denotes the conjugate partition of ${\bf p}$, and $sgn$ denotes the sign character. The representation $\pi_{({\bf p},{\bf q})}$ is characterized by the following property. Let $V$ be the subspace of $\pi_{({\bf p},{\bf q})}$ consisting of all vectors on which the first $|{\bf p}|$ copies of $\Z/2$ act trivially while the remaining $|{\bf q}|$ copies act by $-1$. Then $S_{|{\bf p}|} \times S_{|{\bf q}|}$ acts on $V$ according to the representation $\pi_{{\bf p}}\times \pi_{{\bf q}}$. \end{Theorem} Using this parametrization, we can see that the four characters of $\calH_0$ correspond to the pairs of partitions $([n],\emptyset)$, $(\emptyset,[1^n])$, $(\emptyset,[n])$, and $([1^n],\emptyset)$.
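As a quick check of this parametrization in the case $n = 2$: the ordered pairs with $|{\bf p}| + |{\bf q}| = 2$ are $([2],\emptyset)$, $([1^2],\emptyset)$, $(\emptyset,[2])$, $(\emptyset,[1^2])$, and $([1],[1])$, and the dimension formula gives $$\dim \pi_{([2],\emptyset)} = \dim \pi_{([1^2],\emptyset)} = \dim \pi_{(\emptyset,[2])} = \dim \pi_{(\emptyset,[1^2])} = 1, \qquad \dim \pi_{([1],[1])} = {2 \choose 1}\cdot 1 \cdot 1 = 2,$$ and indeed $4\cdot 1^2 + 2^2 = 8 = |W|$ for $W$ of type $C_2$.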
Using Tits' deformation theorem, we can show that the first two pairs correspond to the trivial and sign characters, respectively. The ordered pair $(\emptyset,[n])$ corresponds to the character of $\calH_0$ acting by $q$ on short simple roots and $-1$ on long simple roots, and $([1^n],\emptyset)$ corresponds to the character acting by $-1$ on short simple roots and $q$ on long simple roots. For the nilpotent orbits and their corresponding component groups, we find that we have the following parametrizations in type $C_n$: \begin{Theorem}[\cite{CM}, Theorem 5.1.3]\label{thm:orbits2} Nilpotent orbits in $\sp(2n)$ are in one-to-one correspondence with the set of partitions of $2n$ in which odd parts occur with even multiplicity. \end{Theorem} \begin{Theorem}[\cite{CM}, Corollary 6.1.6]\label{thm:orbits} $$A(\calO_{{\bf d}}) = \left\{\begin{array}{ll} (\Z/2)^b & \textup{if all even parts have even multiplicity} \\ (\Z/2)^{b-1} & \textup{otherwise,}\end{array}\right.$$ where $b$ is the number of distinct even parts of ${\bf d}$. \end{Theorem} In the case of $\sp(4)$, this means we have four nilpotent orbits corresponding to the partitions $[4],[2^2],[2,1^2]$, and $[1^4]$, and for each of these partitions, the group $A(\calO_{{\bf d}})$ is trivial except for the partition $[2^2]$, for which $A(\calO_{{\bf d}}) \simeq \Z/2$.
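To illustrate Theorem \ref{thm:orbits2} in the next case, $\sp(6)$, the partitions of $6$ in which odd parts occur with even multiplicity are $$[6],\quad [4,2],\quad [4,1^2],\quad [3^2],\quad [2^3],\quad [2^2,1^2],\quad [2,1^4],\quad [1^6],$$ giving eight nilpotent orbits; partitions such as $[5,1]$ and $[3,2,1]$ are excluded because their odd parts occur with multiplicity one.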
Using these parametrizations, the Springer correspondence gives us the following associations between the irreducible representations of $W$ and pairs of nilpotent orbits and characters of the component group:\\ \begin{center} \begin{tabular}{|l|l|} \hline $({\bf p},{\bf q})$ & $({\bf d},\mu_{\calO})$ \\ \hline $(\emptyset,[1^2])$ & $([1^4], 1)$ \\ \hline $([2],\emptyset)$ & $([4], 1)$ \\ \hline $([1^2],\emptyset)$ & $([2,1^2], 1)$ \\ \hline $(\emptyset,[2])$ & $([2^2], 1)$ \\ \hline $([1],[1])$ & $([2^2], sgn)$ \\ \hline \end{tabular} \end{center} If we use Kawanaka's gGGr construction to build the Whittaker functional, we see that the nilpotent element $A$ that we start with lives in the orbit $[4]$ according to Theorem \ref{thm:orbits2}. If we do the same thing for the spherical functional, we see that we begin with an element of the orbit $[1^4]$. However, Brubaker, Bump, and Licata showed in \cite{BBL} that the Whittaker functional is an intertwiner for the sign character of $\calH_0$, which is associated to $[1^4]$ via the Springer correspondence, and Brubaker, Bump, and Friedberg showed in \cite{BBF} that the spherical functional is an intertwiner for the trivial character of $\calH_0$, which is associated to $[4]$ via the Springer correspondence. This led to the conjecture that, if one starts with an irreducible representation of $\calH_0$, then one should be able to construct a gGGr realizing this representation with multiplicity one from an element of the nilpotent orbit whose partition is the transpose of the partition attached to the original $\calH_0$-representation via the Springer correspondence. In the example established in this paper, we see that the Bessel functional is associated to the character $\ve$ of $\calH_0$, which, via the Springer correspondence, is associated to the nilpotent orbit parametrized by $[2,1^2]$.
However, the transpose of this partition is $[3,1]$, which does not appear in the parametrization of the nilpotent orbits of $\sp(4)$ given in Theorem \ref{thm:orbits2}. The issue here is that while the transpose gives an order-reversing involution of the Hasse diagram of the nilpotent orbits of $\sl(n)$ in type $A_{n-1}$, the analogous involution for type $C_n$ is a bit more complicated. In particular, if the partition ${\bf d}$ is associated to a given nilpotent orbit, but ${\bf d}^{\top}$ is not associated to any nilpotent orbit, then we follow further instructions in \cite{CM} for how to manipulate ${\bf d}^{\top}$ in order to find the image of ${\bf d}$ under the order-reversing involution; these manipulations are referred to as the $C$-collapse of ${\bf d}^{\top}$. In the case of the partition $[3,1] = [2,1^2]^{\top}$, the $C$-collapse of this partition is $[2^2]$, which is exactly the orbit containing our original nilpotent element $A$ in Section \ref{sec:besselmodel}. When we generalize this conjecture to $\sp(2n)$, we see that it correctly associates the sign character with $[2n]$ and the trivial character with $[1^{2n}]$, but that it incorrectly associates $\ve$ with $[2,1^{2n-2}]$; we know from Section \ref{sec:besselmodel} that the gGGr whose $J$-fixed vectors contain $\ve$ is constructed from a representative from the orbit $[2^n]$. With this in mind, we now believe that the path from an irreducible representation of the Hecke algebra to its associated gGGr goes through the Langlands dual group, $^LG$, of $G$ (recall that both $G$ and $^LG$ have the same Weyl group, so having this correspondence go through the dual group versus through $G$ is not something that would be detectable from $\calH$). Inspired by \cite{Ginz}, our idea is that, in order to determine from which nilpotent orbit $A$ should be chosen, we start with an irreducible representation $\tau$ of $\calH_0$ and apply the Springer correspondence to $^LG$ to get the pair $({\bf d}, \mu)$.
We then take $\overline{\bf d}$ to be the image of ${\bf d}$ under the appropriate order-reversing involution, $\iota$, of the set of nilpotent orbits, and pick $A$ from the special orbit of $G$ corresponding to $\overline{\bf d}$ under the bijection, $\beta$, between the set of special nilpotent orbits of $G$ and the set of special nilpotent orbits of $^LG$. In types $B_n$ and $C_n$, a special nilpotent orbit is simply a nilpotent orbit ${\bf d}$ such that ${\bf d}^{\top}$ is also a nilpotent orbit. The partial order on the set of nilpotent orbits is defined as follows: geometrically, if $\calO$ and $\calO'$ are two nilpotent orbits, then we say that $\calO \leq \calO'$ if $\overline{\calO} \subset \overline{\calO}'$, where $\overline{\calO}$ refers to the Zariski closure of $\calO$; translated to our parametrizations, we have that ${\bf d} \leq {\bf d}'$ if $$\sum_{1 \leq j \leq k} d_j \leq \sum_{1 \leq j \leq k} d_j' \textup{ for $1 \leq k \leq N$},$$ where ${\bf d} = [d_1,\hdots, d_N]$ and ${\bf d}' = [d_1',\hdots, d_N']$ are padded with zeros to a common length $N$. This partial order on partitions is referred to as \emph{dominance order}. The emphasis that is placed on special orbits in this program is due to a result of M\oe glin in \cite{Moeg} regarding the Fourier coefficients of smooth, irreducible representations of $G$ - to summarize, if $\pi$ is such a representation, then the Fourier coefficients of $\pi$ are associated to unipotent orbits. Looking at the set of unipotent orbits associated to the non-zero Fourier coefficients of $\pi$, M\oe glin proved in Theorem 1.4 in \cite{Moeg} that the maximal orbits in this set are special orbits. We end this paper by offering a formal conjecture regarding the connection between the characters of $\calH_0$ and the gGGr's, along with the evidence that has been compiled so far that supports this conjecture. In the following conjecture, $G$ will denote a split, connected reductive group over $F$.
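To illustrate the specialness criterion in type $C_n$, consider $\sp(6)$: among the partitions of $6$ with odd parts occurring with even multiplicity, $[4,1^2]$ and $[2,1^4]$ fail to be special, since $[4,1^2]^{\top} = [3,1^3]$ and $[2,1^4]^{\top} = [5,1]$ each have an odd part of odd multiplicity and so do not parametrize nilpotent orbits of $\sp(6)$; the remaining six partitions, $[6]$, $[4,2]$, $[3^2]$, $[2^3]$, $[2^2,1^2]$, and $[1^6]$, are special.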
\begin{Conjecture}\label{conj:springer} Let $\tau$ be a linear character of $\calH_0$, and let ${\bf d}$ be the nilpotent orbit of $^LG$ associated to $\tau$ via the Springer correspondence and Tits' deformation theorem. Let ${\bf d}'$ be the special nilpotent orbit ${\bf d}' := \beta(\iota({\bf d}))$ of $G$. If $A$ is a representative of ${\bf d}'$, then the gGGr $\Gamma_{A,1}$ (as defined in Section \ref{sec:intro}) is a model for the universal principal series $\ind_B^G\chi_{\textup{univ}}^{-1}$ satisfying the diagram \eqref{diagram}. \end{Conjecture} \begin{remark} Note that the representation $\mu$ in the pair $({\bf d}, \mu)$ associated to $\tau$ via the Springer correspondence does not play a significant role in identifying $A$. \end{remark} \begin{remark} For $G = \Sp(4)$, there are only three special orbits, implying that the spherical, Whittaker, and Bessel models are the only models that are realized via this program in this case. In particular, this case can be considered degenerate, as we cannot associate a gGGr to the remaining character of $\calH_0$. \end{remark} As an example, consider the case where $G = \Sp(6)$, whence $^LG = \SO(7)$. In this case, we have the following list of special orbits, listed according to the partial order described above: \begin{equation}\label{table:orbits} \begin{tabular}{|l|l|} \hline $\Sp(6)$ & $\SO(7)$ \\ \hline $[6]$ & $[7]$ \\ \hline $[4,2]$ & $[5,1^2]$ \\ \hline $[3^2]$ & $[3^2,1]$ \\ \hline $[2^3]$ & $[3,2^2]$ \\ \hline $[2^2,1^2]$ & $[3,1^4]$ \\ \hline $[1^6]$ & $[1^7]$ \\ \hline \end{tabular} \end{equation} The bijection between orbits of $G$ and $^LG$ mentioned above is simply the one suggested by the partial ordering, which is depicted in Table \eqref{table:orbits}. Thus, according to Conjecture \ref{conj:springer}, we see that $\ve$ corresponds to the pair $([3^2,1],1)$ for $^LG = \SO(7)$. 
Since $[3^2,1]$ is a special orbit, its transpose $[3,2^2]$ is its image under the usual order-reversing involution, and we see that $[3,2^2]$ corresponds to $[2^3]$ under the bijection between special nilpotent orbits of $G$ and special nilpotent orbits of $^LG$, as desired. We also point out that the trivial character of $\calH_0$ still corresponds to $[1^{2n}]$ under this new conjecture, and the sign character still corresponds to $[2n]$. One can check that this new conjecture is also compatible with our results in this paper regarding $\ve$ for $n=2$. Additionally, one can check that this conjecture is compatible with the results of \cite{BBF2}, in which $G = \SO(2n+1)$ and $^LG = \Sp(2n)$. \begin{comment} Note that the gGGr, $\Gamma_{A,\alpha}$, whose $J$-fixed vectors contain $\Ind_{\calH_0}^{\calH} \tau$ in Conjecture \ref{conj:springer} is built, in part, from the trivial representation of $Z_L(A)$. That the gGGr's that function as models of the universal principal series must be induced from representations that are trivial on $Z_L(A)$ is suggested in part by results such as Proposition \ref{sgngggr}, which we prove using Mackey theory: \begin{proof}[Proof of Proposition \ref{sgngggr}] Let $\Gamma_{A,\alpha} = \ind_{\overline{U}_AZ_L(\goth{O})}^G \widetilde{\eta}_{A,\alpha}$. Mimicking our argument for the uniqueness of the Bessel functional, $\calHom_G(\ind_B^G \chi_{\textup{univ}}^{-1}, \Gamma_{A,\alpha})$ is isomorphic to the subspace $\calD_{\widetilde{\eta}_{A,\alpha}, \chi_{\textup{univ}}^{-1}}(G,R)$ of $R$ distributions $\Delta$ satisfying \eqref{mackey}. As in the case of the Bessel functional, we can see that the dimension of this subspace is determined by the number of double cosets in the decomposition of $G$ given in \ref{prop:bruhat2n} that can support such a distribution $\Delta$. 
As before, we see that a double coset $Bw\overline{U}_AZ_L(\goth{o})$ with $w \in W/W_L$ is only able to support such a distribution if the compatibility condition $$\chi_{\textup{univ}}^{-1}(b) = \widetilde{\eta}_{A,\alpha}(w^{-1}bw)$$ holds for all $b \in B$ with $h= w^{-1}bw \in \overline{U}_AZ_L(\goth{o})$. However... \end{proof} \end{comment} \newpage \bibliographystyle{amsplain}
\section{X-ray emission in early-type stars: winds (and magnetic fields)} \label{sec:hotstars} Early findings of approximately constant $L_{\rm X}/L_{\rm bol}$ for early-type stars, and the low variability of X-ray emission, were well explained by a model in which X-rays originate in shocks produced by instabilities in the radiatively driven winds of these massive stars (e.g.,\cite{LucyWhite80,Owocki88}). These models yield precise predictions for the shapes and shifts of X-ray emission lines, and can therefore be tested in detail by deriving information on the line formation radius, overall wind properties, and absorption of overlying cool material. The high spectral resolution of \cha\ and \xmm, and especially the High Energy Transmission Grating Spectrometer (\hetgs, \cite{Canizares05}) onboard \cha, has revealed a much more complex scenario than the standard model described above. In particular, deviations from the standard model seem to suggest that magnetic fields likely play a significant role in some early-type stars. Magnetic fields have in fact recently been detected in a few massive stars (e.g., \cite{Donati02}) -- most likely fossil fields, because no dynamo mechanism of magnetic field production is predicted to exist for these massive stars since they lack a convective envelope. High resolution spectra of several massive stars are mostly consistent with the standard wind-shock model, with soft spectra, and blue-shifted, asymmetric and broad ($\sim 1000$~km~s$^{-1}$) emission lines: e.g., $\zeta$~Pup \cite{Cassinelli01}, $\zeta$~Ori \cite{Cohen06}. Other sources, while characterized by the soft emission predicted by wind-shock models, have spectral line profiles that are rather symmetric, unshifted and narrow with respect to model expectations: e.g., $\delta$~Ori \cite{Miller02}, $\sigma$~Ori \cite{Skinner08}. 
Furthermore, a few sources have strong hard X-ray emission with many lines narrower than wind-shock model predictions: e.g., $\theta^1$~Ori~C \cite{Gagne05}, $\tau$~Sco \cite{Cohen03}. For this last class of X-ray sources the presence of magnetic fields provides a plausible explanation for the observed deviations from the wind-shock model: the magnetic field can confine the wind, which yields hotter plasma and narrower lines, as shown for instance for the case of $\theta^1$~Ori~C by Gagn{\'e} et al.\ through detailed magneto-hydrodynamic simulations which successfully reproduce the observed plasma temperature, $L_{\rm X}$, and rotational modulation \cite{Gagne05}. An important diagnostic for early-type stars is provided by the He-like triplets (comprising $r$ resonance, $i$ intercombination, and $f$ forbidden lines): the metastable upper level of the $f$ line can be depopulated, populating the upper level of the $i$ transition, through absorption of UV photons. Therefore, the $f/i$ ratio depends on the intensity of the UV field produced by the hot photosphere, i.e.\ on the distance from the photosphere of the region where the lines form. The $f/i$ ratio is also density sensitive and can be expressed as $R = f/i = R_0 / [1 + \phi/\phi_c + n_e/n_c]$, where $\phi_c$ is a critical value of the UV intensity at the energy coupling the $f$ and $i$ upper levels, and $n_c$ is the critical density; we note, however, that densities are generally expected to be below $n_c$. The observed He-like line intensities appear to confirm the wind-shock model when the spatial distribution of the X-ray emitting plasma is properly taken into account \cite{Leutenegger06}. However, there are still unresolved issues. For instance, X-ray observations imply opacities that are low and incompatible with the mass loss rates derived by other means (see e.g., \cite{Owocki01}). 
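The behavior of the $f/i$ diagnostic just described can be made concrete with a few lines of code. This is an illustrative sketch only: the function implements the $R = R_0/[1 + \phi/\phi_c + n_e/n_c]$ relation quoted above, but the numerical values of $R_0$ and of the two ratios are placeholders chosen for illustration, not atomic data for any particular ion.

```python
# Sketch of the He-like f/i line-ratio diagnostic, R = R0 / (1 + phi/phi_c + n_e/n_c).
# R0, phi_c, n_c are ion-dependent atomic parameters; the numbers used below
# are illustrative placeholders, not measured or tabulated values.

def fi_ratio(R0, phi_over_phic, ne_over_nc):
    """Forbidden-to-intercombination line ratio of a He-like triplet."""
    return R0 / (1.0 + phi_over_phic + ne_over_nc)

R0 = 3.5  # low-density, weak-UV-field limiting value (placeholder)

# Far from a hot photosphere and at low density, R approaches R0:
print(fi_ratio(R0, phi_over_phic=0.0, ne_over_nc=0.0))  # -> 3.5

# Close to a hot photosphere the UV term suppresses R even at low density,
# which is why f/i traces the line formation radius in early-type stars:
print(fi_ratio(R0, phi_over_phic=6.0, ne_over_nc=0.0))  # -> 0.5

# In cool stars the UV term is negligible and R instead traces density:
print(fi_ratio(R0, phi_over_phic=0.0, ne_over_nc=6.0))  # -> 0.5
```

The same suppression of $R$ can thus arise from either a strong UV field or a high density, which is why the two regimes (hot stars vs.\ cool-star coronae) use the same line ratio as two different diagnostics.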
\section{Cool stars and the solar analogy} The Sun, thanks to its proximity, is at present the only star that can be studied at a very high level of detail, with high spatial and temporal resolution, and it is usually used as a paradigm for the interpretation of the X-ray emission of other late-type stars. However, while the solar analogy certainly seems to apply to some extent to other cool stars, it is not yet well understood how different the underlying processes are in stars with significantly different stellar parameters and X-ray activity levels. \subsection{X-ray activity cycles} The $\sim 11$~yr cycle of activity is one of the most manifest characteristics of the X-ray emission of the Sun, and yet in other stars it is very difficult to observe. This is because it is intrinsically challenging to carry out regular monitoring of stellar X-ray emission over long enough time scales, and to confidently distinguish long term cyclic variability from short term variations that are not unusual in cool stars (e.g., flares, rotational modulation). Long term systematic variability similar to the Sun's cycle has now been observed in three solar-like stars: HD~81809 (G5V, \cite{Favata08}), 61~Cyg~A (K5V, \cite{Hempelmann06}), $\alpha$~Cen~A (G2V, \cite{Ayres09}). The existence of X-ray cycles in other stars nicely confirms the solar-stellar analogy, and it is also potentially useful in order to better understand the dynamo activity on the Sun, which remains a significant challenge. \subsection{The Sun in time} Studies of large samples of solar-like stars at different evolutionary stages help investigate the evolution of the dynamo processes that are mainly responsible for the X-ray production in these cool stars. 
In particular, studies of this type carried out with high resolution spectroscopy, while requiring a large investment of time and therefore focusing necessarily on small samples of stars, have nonetheless provided very important insights into the response of the corona to the decline in rotation-powered magnetic field generation and dissipation, and into how the X-ray emission of the Sun has evolved over time, as shown for instance by Telleschi et al.\ \cite{Telleschi05}. This in turn could be relevant to the evolution of the solar system and the Earth's atmosphere (see Feigelson's paper in this issue). Within relatively short timescales, during the post T Tauri through early main sequence phase, the efficient mass loss spins down the star significantly. This affects the dynamo process because the stellar rotation rate is one of the most important parameters driving the dynamo. As a consequence, the X-ray activity decreases, with coronal temperature, $L_{\rm X}$, and flare rate all decreasing, as shown in fig.~\ref{fig:Sunlike} for three solar-like stars spanning ages from $\sim 100$~Myr to $\sim 6$~Gyr. \subsection{Element abundances} The study of element abundances has important implications in the wider astrophysical context and also for stellar physics. For instance, chemical composition is a fundamental ingredient for models of stellar structure since it significantly impacts the opacity of the plasma. Spectroscopic studies of the solar corona have provided a robust body of evidence for element fractionation with respect to the photospheric composition (see e.g., \cite{Feldman92} and references therein). Furthermore, this fractionation effect appears to be a function of the element First Ionization Potential (FIP), with low FIP elements such as Fe, Si, Mg, found to be enhanced in the corona by a factor of a few, while high FIP elements such as O have coronal abundances close to their photospheric values (e.g., \cite{Feldman92}). 
This ``FIP effect'' has strong implications for the physical processes at work in the solar atmosphere (see e.g., \cite{Laming04,Laming09} and references therein). Spectroscopic studies in the extreme ultraviolet have provided the first indication that in other stars as well the chemical composition of coronal plasma is different from that of the underlying photosphere, although with a dependence on FIP that is likely significantly different from that on the Sun (e.g., \cite{Drake96}). High resolution X-ray spectroscopy with \cha\ and \xmm\ has for the first time provided robust and detailed information on the chemical composition patterns of hot coronal plasma. Stellar coronae at the high end of the X-ray activity range appear characterized by an {\em inverse} FIP effect (IFIP), i.e.\ with Fe significantly depleted in the corona, compared to the high FIP oxygen (e.g., \cite{Brinkman01}). Investigations of element abundances in large samples of stars spanning a wide range of activity ($L_{\rm X}/L_{\rm bol} \sim 10^{-6}$--$10^{-3}$) find a systematic, gradual increase of the IFIP effect with activity level (e.g.,\cite{GAlvarez08}). This trend is shown in fig.~\ref{fig:abund} for the abundance ratio of the low FIP element Mg to the high FIP element Ne, derived from \cha\ \hetgs\ spectra for the same sample of stars for which Drake \& Testa studied the Ne/O abundance ratio \cite{DrakeTesta05}. An important caveat to keep in mind is that the stellar photospheric chemical composition is often unknown for the elements of interest, and the {\bf solar} photospheric composition is instead used as a reference \cite{Sanz04}. In this context an interesting result is the behavior of Ne/O, which remains rather constant over almost the whole observed range of activity \cite{DrakeTesta05}, and, interestingly, this almost constant value is about 2.7 times higher than the adopted solar photospheric value. This might help to shed light on an outstanding puzzle in our understanding of our own Sun. 
Since Ne cannot be measured in the photosphere -- no photospheric Ne lines are present in the solar spectrum -- the solar photospheric Ne/O is not constrained. The remarkably constant Ne/O observed in stellar coronae, despite the significantly different properties of these stars, suggests that the observed coronal Ne/O actually reflects the underlying photospheric abundances. If the same value is assumed for the solar photosphere as well, this would help resolve a troubling inconsistency between solar models and data from helioseismology observations \cite{AntiaBasu05}. It remains unresolved, though, why the solar coronal Ne/O is found to be systematically lower than in other coronae (e.g., \cite{Young05}), although it is likely similar to that of other low activity stars \cite{Robrade08}. However, Laming \cite{Laming09} suggests that the low coronal Ne abundance on the Sun might be explained by the same fractionation processes that yield the general FIP effect. \subsection{Spatial structuring of X-ray emitting plasma and dynamic events} High spectral resolution in X-rays has made accessible a whole new range of possible diagnostics for the spatial structuring of stellar coronae, for example: \begin{itemize} \item {\bf opacity} effects in strong resonance lines yield estimates of path length, and therefore of the spatial extent of X-ray emitting structures. Only a handful of sources show scattering effects in their strongest lines, and the derived lengths are very small when compared to the stellar radii, analogous to solar coronal structures \cite{Testa04b,Matranga05,Testa07b}. \item {\bf velocity modulation} derived from line shifts allows us to estimate the spatial distribution of the X-ray emitting plasma at different temperatures, or the contribution of multiple system components to the total observed emission (e.g., \cite{Brickhouse01,Chung04,Ishibashi06,Huenemoerder06,Hussain07}). 
The unprecedented high spectral resolution of \cha\ is crucial for these studies with a velocity resolution down to $\sim 30$~km~s$^{-1}$ (e.g., \cite{Hoogerwerf04,Ishibashi06,Huenemoerder06}). \item {\bf plasma density}, $n_e$, can be derived from the ratios of He-like triplets ($R = f/i \sim R_0 / [1 + n_e/n_c]$; \cite{GabrielJordan69}) \footnote{For cool stars the UV field is typically too weak to affect the He-like lines (which it does for hot stars as mentioned above) and therefore the $f/i$ ratio is mainly sensitive to the plasma density, above a critical density value which depends on the specific triplet (see \cite{GabrielJordan69}).}, therefore providing an estimate of the emitting volumes, since the observed line intensity is proportional to $n_e^2 V$. Several He-like triplet lines lie in the \cha\ and \xmm\ spectral range covering a wide range of temperatures ($\sim 3-10$~MK from \ovii\ to \sixiii), and densities ($\log (n_c[$cm$^{-3}]) \sim 10.5-13.5$ from \ovii\ to \sixiii). We note that the unmatched resolving power of \cha\ \hetgs\ is crucial to resolve the numerous blends that affect the Ne and Mg triplets that cover the important $\sim 3-6 \times 10^6$~K range. Studies of plasma densities from He-like triplets in large samples of stars (\cite{Testa04a} studied \ovii, \mgxi, \sixiii, and \cite{Ness04} \ovii\ and \neix) yield estimates of coronal filling factors which are remarkably small, especially for hotter plasma (typically $\ll 1$), but increase with X-ray surface flux \cite{Testa04a}. \item {\bf flares} can provide clues on the size of the X-ray emitting structures and on the underlying physical processes that produce very dynamic events. The timescale of evolution of the flaring plasma (T, $n_e$) is related to the size of the flaring structure(s), and can be modeled to provide constraints on the loop size (see e.g., \cite{Reale07} and references therein). 
The flares we observe in active stars involve much larger amounts of energy than observed on the Sun, with X-ray luminosities reaching values of $10^{32}$~erg~s$^{-1}$ and above, i.e.\ more than two orders of magnitude larger than the most powerful solar flares. It is therefore not obvious that these powerful stellar flares are simply scaled up ($L_{\rm X}$, T, characteristic timescales of evolution) versions of solar flares, which we can study and model with a much higher level of detail. Novel diagnostics are provided by high resolution spectra, and time-resolved high resolution spectroscopy of stellar flares is now possible with \cha\ and \xmm, at least for large flares in bright nearby sources. G{\"u}del et al.\ \cite{Guedel02} have studied a large flare observed on Proxima Centauri, observing phenomena analogous to solar flaring events: density enhancement during the flare, supporting the scenario of chromospheric evaporation, and the Neupert effect, i.e.\ proportionality between the soft X-ray emission and the time integral of the non-thermal emission (e.g., \cite{Hudson92}). \\ An interesting, and potentially powerful new diagnostic is provided by {\bf Fe~K$\alpha$ }(6.4~keV, 1.94~\AA) emission, which can be observed in \cha\ and \xmm\ spectra. On the Sun Fe~K$\alpha$ emission has been observed during flares (e.g., \cite{Parmar84}) and it is interpreted as {\bf fluorescence} emission following inner shell ionization of {\em photospheric} neutral Fe due to hard X-ray coronal emission ($> 7.11$~keV). In this scenario, the efficiency of Fe~K$\alpha$ production depends on the geometry, i.e.\ on the height of the source of hard ionizing continuum, through the dependence on the solid angle subtended and the average depth of formation of Fe~K$\alpha$ photons (e.g., \cite{Bai79,Drake08}). 
In cool stars other than the Sun, Fe~K$\alpha$ has now been detected in young stars with disks (see next section) where the fluorescent emission is thought to come from the cold disk material, and in only two, supposedly diskless, sources during large flares: the G1 yellow giant HR~9024 \cite{Testa08a}, and the RS~CVn system II~Peg \cite{Osten07,Ercolano08}. For HR~9024 the \cha\ \hetgs\ observations can be matched in detail with a hydrodynamic model of a flaring loop yielding an estimate for the loop height $h \sim 0.3 R_{\star}$ \cite{Testa07a}, and an {\em effective height} for the fluorescence production of $\sim 0.1 R_{\star}$ ($R_{\star}$ being the stellar radius). These values compare well with the value derived from the analysis of the measured Fe~K$\alpha$ emission, $h \lesssim 0.3 R_{\star}$. \end{itemize} \section{Young stars: powerful coronae, accretion, jets, magnetic fields and winds} X-ray emission from young stars is presently one of the hot topics in X-ray astrophysics. Stellar X-rays are thought to significantly affect the dynamics, heating and chemistry of protoplanetary disks, influencing their evolution (see article by E.~Feigelson in this same issue). Also, irradiation of close-in planets increases their mass loss rates, possibly to the extent of complete evaporation of their atmospheres (e.g., \cite{Penz08}). Young stars are typically characterized by strong and variable X-ray emission (e.g., \cite{Preibisch05}), and many recent \cha\ and \xmm\ studies have been investigating whether their coronae might just be powered-up versions of their evolved main sequence counterparts, or whether other processes might be at work in these early evolutionary stages. For example, the observations have addressed the issue of accretion-related X-ray emission processes in accreting (classical) T Tauri stars (CTTS), on which material from a circumstellar disk is channeled onto the central star by its magnetic field. 
CTTS have observed X-ray luminosities that are systematically smaller, by about a factor of 2, than those of non-accreting TTS (WTTS) (e.g., \cite{Preibisch05}). It is not yet clear, however, whether accretion might suppress or obscure coronal X-rays, or instead, whether higher X-ray emission levels might increase photoevaporation of the accreting material, modulating the accretion rate \cite{Drake09}. \subsection{Accretion related X-ray production} High resolution spectroscopy has proved crucial for probing the physics of X-ray emission processes in young stars. The first high resolution X-ray spectrum of an accreting TTS, TW~Hya, has revealed obvious peculiarities \cite{Kastner02} with respect to the coronal spectra of main sequence cool stars: \begin{itemize} \item {\bf very soft emission}: the X-ray spectrum of TW~Hya is characterized by a temperature of only a few MK ($\sim 3$~MK) whereas coronae with comparable X-ray luminosities ($L_{\rm X} \sim 10^{30}$~erg~s$^{-1}$) typically have strong emission at temperatures $\gtrsim$ 10~MK. \item {\bf high $n_e$}: the strong cool He-like triplets of Ne and O have line ratios that imply very high densities ($n_e \gtrsim 10^{12}$~cm$^{-3}$), whereas in non-accreting sources typical densities are about two orders of magnitude lower. \item {\bf abundance anomalies}: the X-ray spectrum of TW~Hya is characterized by very low metal abundances, while Ne is extremely high \cite{Stelzer04,Drake05} when compared to other stellar coronae. \end{itemize} These peculiar properties strongly suggest that the X-ray emission of TW~Hya originates from shocked accreting plasma. Indeed, the observed X-ray spectra of some of these sources have been successfully modeled as accretion shocks \cite{Gunther07,Sacco08}. 
High resolution spectra subsequently obtained for other CTTS have confirmed unusually high $n_e$ from the \ovii\ lines \cite{Schmitt05, Gunther06,Argiroffi07,Robrade07}, indicating that in these stars at least some of the observed X-rays are most likely produced through accretion-related mechanisms. We note that TW~Hya is the CTTS for which the cool X-ray emission produced in the accretion shocks is the most prominent with respect to the coronal emission, while all other CTTS for which high resolution spectra have been obtained have a much stronger coronal component. For these latter sources we are able to probe accretion related X-rays only thanks to the high spectral resolution which allows us to separate the two components. Recent studies of optical depth effects in strong resonance lines in CTTS provide confirmation of the high densities derived from the He-like diagnostics \cite{Argiroffi09}. Another diagnostic of accretion related X-ray production mechanisms is offered by the \oviii/\ovii\ ratio which, in accreting TTS, is much larger than in non-accreting TTS or main sequence stars \cite{Guedel07} (see also fig.~\ref{fig:HAe}). Herbig~AeBe stars, young intermediate mass analogs of TTS, appear to share the same properties \cite{Robrade07}. \subsection{Flaring activity and coronal geometry} X-ray emission of young stars is characterized by very high levels of X-ray variability pointing to very intense flaring activity in the young coronae of TTS. This is beautifully demonstrated by the \cha\ Orion Ultradeep Project (COUP) of almost uninterrupted (spanning about 13~days) observations of the Orion Nebula Cluster star forming region\footnote{Movies of this dataset are available at http://www.astro.psu.edu/coup/.}. Hydrodynamic modeling of some of the largest flares of TTS implies, for some of these sources, very large sizes for the flaring structures ($L \gtrsim 10 R_{\star}$). This may provide evidence of a star-disk connection \cite{Favata05}. 
However, follow-up studies of these flares indicate that the largest structures seem to be associated with non-accreting sources, consistent with the idea that in accreting sources, the inner disk, reaching close to the star, might truncate the otherwise very large coronal structures \cite{Getman08}. In a few of these sources, with strong hard X-ray spectra, Fe~K$\alpha$ emission has been observed (see e.g., \cite{Tsujimoto05} for a survey of Orion stars). The Fe~K$\alpha$ emission is generally interpreted as fluorescence from the circumstellar disk; however, in a few cases the observed equivalent widths are extremely high and apparently incompatible with fluorescence models (see e.g., \cite{Czesla07,Giardino07}). This apparent discrepancy could either be due to partial obscuration of the X-ray emission of the flare \cite{Drake08} or could instead point to different physical processes at work, for instance impact excitation \cite{Emslie86}. \subsection{Herbig Ae stars} In their pre-main sequence phase, intermediate mass stars appear to be moderate X-ray sources (e.g., \cite{Stelzer09} and references therein). Their X-ray emission characteristics are overall similar to the lower mass TTS (hot, variable), possibly implying that the same X-ray emission processes are at work in the two classes of stars, or that the emission is due to unseen TTS companions. However, a handful of Herbig Ae stars show unusually soft X-ray emission: e.g., AB~Aur \cite{Telleschi07}, HD~163296 \cite{Swartz05,Gunther09}. High resolution spectra have been obtained for these stars, together with HD~104237 \cite{Testa08}. One similarity with the high resolution spectra of CTTS appears to be the presence of a soft excess (\oviii/\ovii), compared to coronal sources, as shown by \cite{Gunther09} (see figure~\ref{fig:HAe}). 
However, their He-like triplets are generally compatible with low density, at odds with the accreting TTS (with the possible exception of HD~104237, which may indicate higher $n_e$). AB~Aur and HD~104237 have X-ray emission that seems to be modulated on timescales comparable with the rotation period of the A-type star, therefore rendering the hypothesis that the X-ray emission originates from low-mass companions less plausible. \section{Conclusions} The past decade of stellar observations has led to exciting progress in our understanding of the X-ray emission processes in stars, shifting in the process the perspective of stellar studies, which are now much more focused on star and planet formation. In particular high resolution X-ray spectroscopy, available for the first time with \cha\ and \xmm, is now playing a crucial role in constraining and developing models of X-ray emission, e.g., for early-type stars, late-type stellar coronae, and in the case of young stars, by providing a unique means for probing accretion related X-ray emission processes, as well as the opportunity to examine the effects of X-rays on the circumstellar environment. \subsection{Progress and some open issues} X-ray emission processes in early-type stars now present a much more complex scenario, in which magnetic fields also likely play a key role. Some puzzling results found for several massive stars concern the hard, variable X-ray spectra with relatively narrow lines, which cannot be explained by existing models. Spectroscopic studies of large samples of stars have provided robust findings on chemical fractionation in X-ray emitting plasma, which now require improved models to understand the physical processes yielding the observed abundance anomalies. A satisfactory understanding of activity cycles is lacking even for our own Sun, and recent discoveries of X-ray cycles on other stars can provide further constraints for dynamo models. 
We are now taking the first steps in studying flares with temporally resolved high resolution spectra, and this will greatly help constrain our models and really test whether the physics of these dynamic events, in the extreme conditions seen in some cases (e.g., $T \gtrsim 10^8$~K), is still the same as for solar flares. At present, the effective areas are often insufficient to obtain good S/N at high spectral resolution on the typical timescales of the plasma evolution during these very dynamic events. The International X-ray Observatory (IXO), in the planning stages for a launch about a decade from now, will make a large number of stars accessible for this kind of study. In young stars, a very wide range of phenomena are observed to occur, and while this young field has already offered real breakthroughs there is still a long way to go to understand the details of accretion, jets, extremely large X-ray emitting structures, the influence of X-rays on disks and planets, and the interplay between accretion and X-ray activity. \begin{acknowledgments} This work has benefited greatly from discussions with several people, and, in particular, I would like to warmly thank Jeremy Drake and Manuel G{\"u}del. I would also like to thank Hans Moritz G{\"u}nther for permission to use original figure material. This work has been supported by NASA grant GO7-8016C. \end{acknowledgments}
\section{Distances to Open Clusters} The results of the Hipparcos mission have recently appeared (\cite{ESA97}, 1997), and they provide unprecedented astrometric precision and accuracy for a very large sample of stars. Analyses of these results are just beginning, of course, but to us some of the most intriguing Hipparcos measurements are those of nearby open clusters, such as the Hyades, Pleiades, Praesepe, and $\alpha$ Persei. Open clusters are critical laboratories for testing stellar evolution models since they provide large samples of stars of a single age and composition (as near as we can tell, anyway) over a broad range of mass. Those models are calibrated against the Sun, the one star for which we know fundamental properties with very high accuracy. Thus we construct stellar models, adjust them to match the Sun, and then test them against open clusters because those clusters have near-solar composition, making it possible to work differentially. Having passed those tests, we have some confidence the models can then be applied to significantly different conditions, such as globular clusters, which are vital for establishing the cosmic distance scale. This paper and the companion paper by \cite{Pins98} (1998) are motivated by concern over the accuracy of the Hipparcos results. Nearly all the Hipparcos cluster distances disagree with conventionally-determined values, although in most cases the differences do not conflict with the estimated uncertainties. But the Pleiades is an especially egregious case. The Hipparcos estimates of the Pleiades parallax range from 8.54 to 8.65 mas, depending on the solution used: Robichon gets $8.54\pm0.22$ (see Table XXVI of \cite{FVL97} 1997); \cite{Mer97} (1997) get $8.60\pm0.24$; \cite{vL97} (1997) quote $8.61\pm0.23$ as their solution A [this value also appears in \cite{vLE97} (1997) and \cite{FVL97} (1997)]; and \cite{vL97} (1997) cite $8.65\pm0.24$ as their solution B\@. 
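The conversion behind these numbers is straightforward to verify: a parallax $\pi$ in milliarcseconds gives a distance $d = 1000/\pi$ pc, and the distance modulus is $m - M = 5\log_{10}(d) - 5$. A minimal numerical check, using the parallax values quoted in the text:

```python
import math

def distance_pc(parallax_mas):
    """Distance in parsecs from a parallax in milliarcseconds."""
    return 1000.0 / parallax_mas

def distance_modulus(d_pc):
    """m - M = 5 log10(d / 1 pc) - 5."""
    return 5.0 * math.log10(d_pc) - 5.0

d = distance_pc(8.60)     # ~116 pc for the Hipparcos Pleiades parallax
dm = distance_modulus(d)  # ~5.33 mag
print(round(d, 1), round(dm, 2))

# A 0.3 mag offset in distance modulus corresponds to a flux ratio of
# 10**(0.3/2.5) ~ 1.32, i.e. roughly a 30% difference in luminosity.
print(round(10 ** (0.3 / 2.5), 2))
```

The quoted parallaxes of 8.54--8.65 mas all yield distance moduli near 5.3, about 0.3 mag short of the traditional value of 5.6.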
These correspond to a distance of about 116 pc or a distance modulus of 5.33 magnitudes. Traditional determinations of the cluster's distance (e.g., \cite{VdB84} 1984; \cite{SSHJ} 1993, hereafter SSHJ) are based on comparing the cluster's main sequence to that of nearby stars, and these lead to a distance modulus of about 5.6. The same value of 5.6 has been derived by fitting a spectroscopic binary to isochrones (\cite{Gia95} 1995). \cite{BF90} (1990) show that the Pleiades has [Fe/H] = $-0.034\pm0.024$, i.e., it has essentially solar metallicity. Thus the Hipparcos results suggest that Pleiades members are about 0.3 magnitude fainter than we have thought up to now. Can these different estimates be reconciled? Can a Zero-Age Main Sequence (ZAMS) star with solar metallicity be 30\% fainter than our current models predict? These are the essential questions that we address here. The Hipparcos parallax of van Leeuwen \& Hansen Ruiz is based on measurements of 54 Pleiades members, ranging in $m_V$ from about 3 to 11, so it represents a good cross-section of the cluster. Hipparcos observations are reduced to an absolute reference frame, but the measurements are correlated within a limited region of the sky as the satellite swept out great circles. These correlated measures have been corrected for (\cite{vL97} 1997) as part of the effort to reduce all the Hipparcos observations in a consistent and systematic way. Reconciling the Hipparcos distance with the traditional estimate would imply systematic errors larger than the quoted uncertainties. There is, therefore, no obvious reason to believe the Hipparcos distance to the Pleiades contains a systematic error that is large enough to bring it into accord with the traditional distance. The traditional distance measure, on the face of it, appears to be just as sound. 
It is based on comparing a Pleiades color-magnitude diagram (CMD) -- corrected for a small amount of reddening -- to a CMD created from nearby stars with large parallaxes, or to a CMD of the Hyades, suitably corrected for the difference in metallicity. Theoretical isochrones can also be converted to observational coordinates using a color calibration, and the offset between the isochrone and the cluster can be used to infer the distance modulus. This technique is used in the companion paper by Pinsonneault et al.\ and yields similar results. In this paper we reexamine the comparison of the Pleiades to nearby stars. Our hypothesis is that the stars of the Pleiades cannot be completely unique in our Galaxy and that there must be nearby examples of stars that share the same unknown stellar physics or unusual parameters that result in the Pleiades stars being so faint. It should therefore be possible to find examples of anomalously-faint ZAMS stars that are so close to the Sun that errors in parallax cannot account for their faintness. If no such stars exist, as we will show, then either we have failed to account for some fundamental aspects of stellar physics adequately, or there are unappreciated errors in the Hipparcos parallaxes. \section{An Observational ZAMS Using Nearby Solar-Type Stars} We start by showing that nearby solar-type stars that are known to be young do not lie below the usual ZAMS. The idea of comparing a cluster main sequence to one constructed from nearby stars with large parallaxes is not new, but the nearby stars are of many ages and evolutionary states, which spreads the apparent main sequence considerably. The appropriate comparison, of course, is to very young nearby stars, since the clusters in question are essentially ZAMS themselves. In this case by young we mean very active, as determined from observations of the \ion{Ca}{2} H and K lines. Table 1 lists our sample. 
The northern stars have been observed as part of the Mount Wilson survey of chromospheric emission in late-type dwarfs (\cite{VP80} 1980; \cite{Sod85} 1985; \cite{SM93} 1993), from which we have taken the $R_{\rm HK}^\prime$ index of HK emission. To the extent they have been measured, these stars have metallicities near solar (\cite{Cay92} 1992). The photometry of the northern stars is from \cite{MM94} (1994). We divided these northern stars into two subsets. The first consists of the most active of the stars, those with \mbox{$\log R_{\rm HK}^\prime$} $>-4.40$, to which we added a few others which are slightly less active but which are so well studied that there is no ambiguity about their youth (HD 39587 = $\chi^1$ Ori is an example). The second subset of northern stars is also active, but not as much so or not as well-studied; they have \mbox{$\log R_{\rm HK}^\prime$}\ values from $-4.41$ to $-4.44$. We have also included some southern stars from the HK survey of \cite{Hen96} (1996) that have \mbox{$\log R_{\rm HK}^\prime$}\ values from $-4.20$ to $-4.40$; that paper provides the photometry. The parallaxes in Table 1 are from Hipparcos (\cite{ESA97}, 1997). We kept only those stars with $\sigma_\pi/\pi \la 0.1$ so that parallax error could not accidentally place a star significantly below the ZAMS. We also excluded stars with known companions unless we were confident that the companion is not influencing the HK observations or the photometry. Our young stars are shown in Figure 1. The large dots represent the first subset; i.e., the stars most likely to be {\it bona fide} ZAMS objects. The small dots represent the other northern stars and the open circles are the southern stars. The solid line is a theoretical ZAMS from VandenBerg (1997, private communication). It has been calibrated to reproduce the solar temperature and luminosity (represented by the diamond) at the Sun's age, and to fit the M67 cluster main sequence at its age. 
The dashed line is the same ZAMS transformed to the CMD using the color-temperature relation of \cite{Be79} (1979). For reference, the long-dashed line shows the same ZAMS (for 100 Myr age and [Fe/H] = 0.0) used in the companion paper by Pinsonneault et al. About half the difference between the VandenBerg and Pinsonneault isochrones arises in the color-temperature relations used. Their zero points are close (the VandenBerg isochrone is, on average, 0.04 magnitude fainter in the range of 0.5 to 0.9 in \bv), and there is a slight difference in the slopes of the main sequences. Differences in the color-temperature relations are a larger source of uncertainty for the cooler stars, as the increasing difference between the VandenBerg and Bessell lines indicates. The theoretical isochrones are clearly an excellent representation of the observations. We anticipate finding stars above the ZAMS by modest amounts because they are photometric binaries, but we note that none of the young stars falls below the ZAMS\@. Thus there is no hint in this small sample of there being any nearby young stars that are 0.3 magnitude below the usual ZAMS. Figure 2 shows a similar CMD for the Pleiades, taken from SSHJ and corrected for reddening of 0.04 magnitude in \bv\ and 0.12 magnitude in $V$. The lines are the same ones as in Figure 1, but displaced by 5.6 magnitudes. This comparison shows that different isochrones can differ from one another and from the cluster by 0.1 magnitude or more for \bv $\ga0.7$. The Bessell relation is clearly too blue, while both the VandenBerg and Pinsonneault isochrones are too red for \bv $\ga0.8$. Note, however, that these theoretical ZAMS lines deviate from the Pleiades in the same way that they deviate from the field stars of Figure 1, underscoring the comparability of the two samples. 
To emphasize that the traditional distance to the Pleiades does not depend on assumptions of age, in Figure 3 we show a CMD for nearby stars and the Pleiades, for $(m-M) = 5.6$. The color used in Figure 3 is $(V-I)$ in the Cousins system, in order to have an index that is less sensitive to metallicity than is \bv, and field stars of all ages are represented. The nearby star parallaxes and colors are from the Hipparcos catalog, and we used only stars with measured $(V-I)$, excluding those where $(V-I)$ had been estimated from \bv\ or other colors. The Pleiades data are from Stauffer (1997, private communication), who transformed his observations of Pleiads in the Kron $(V-I)$ color (\cite{St84} 1984) to Cousins $(V-I)$ using the relation of \cite{BW87} (1987), correcting for reddening in the process. The Pleiades $V$ magnitudes have been shifted by 5.6 for distance and 0.12 to correct for extinction. Both main sequences overlap for $(V-I) \la 1.7$. The Pleiads redder than this depart from the field star sequence simply because they are so young that they lie above the main sequence. There are essentially no nearby stars below the ZAMS defined by the Pleiades. \section{Sub-Luminous Stars} We have just shown that nearby young stars lie on or above the usually accepted ZAMS and that none lie below. We now show that those stars that do lie below the ZAMS are old stars of low metallicity, not young stars analogous to Pleiads. We began by extracting from the Hipparcos catalog all stars within 60 pc. We kept only those stars with $\sigma_\pi/\pi < 0.050$ and $\sigma(B-V) < 0.025$. The subset of these stars that lies below the ZAMS is shown in Figure 4. We observed six of these stars, which are marked with squares in Figure 4. We used the Hamilton spectrograph on the Lick 3 m Shane reflector, reducing the data in the usual way within IRAF (see SSHJ for details). The stars and the spectroscopic results are listed in Table 2.
[Fe/H] was determined from the strength of the \ion{Fe}{1} 6750 \AA\ line in comparison to a solar spectrum of similar high resolution. This is a small number of stars due to poor observing conditions, but they were chosen randomly from the stars that lie about 0.3 magnitude below the ZAMS. As we anticipated, most of these stars have unresolved rotation, and have metallicities that are sub-solar, which accounts for their locations in the CMD. (\cite{Car94} 1994 show [Fe/H] = $-0.61$ for HIP 23431, in accord with our value.) There is one star, HIP 25127, that has obvious filling-in of the H$\alpha$ line (Fig. 5). This star also has relatively strong Li, the indicated equivalent width implying $\log N{\rm (Li)}\approx 2.35$. Also, we estimate \mbox{$v\,\!\sin\,\!i$}\ for HIP 25127 to be approximately 7 \mbox{${\rm km~s}^{-1}$}, based on a comparison of line breadths in this star to others in the sample. All these factors suggest youth, but this star's position in the CMD is due to its low metallicity of $-0.3$, and so HIP 25127 validates models of ZAMS stars by confirming that low-metal stars appear to lie well below the solar-metallicity ZAMS, even if they may be young. The symbols in Figure 4 indicate the transverse velocities of the subluminous stars, calculated from the Hipparcos proper motions and parallaxes. Small filled circles indicate $v_{\rm trans} < 30$ \mbox{${\rm km~s}^{-1}$} (the median velocity for all stars within 50 pc). Small circles indicate $30 \leq v_{\rm trans} < 100$ \mbox{${\rm km~s}^{-1}$}\ (the 95th percentile), while the large circles have transverse velocities that exceed 100 \mbox{${\rm km~s}^{-1}$}. The scarcity of low-velocity stars and the higher velocities of the more subluminous stars strongly suggest that the objects in Figure 4 represent an old population, lying below the ZAMS because of low metallicity. The lack of subluminous stars with \bv $\la 0.5$ is also indicative of an old population.
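The transverse velocities used to classify these stars follow from the standard relation $v_{\rm trans}=4.74\,\mu/\pi$ km s$^{-1}$, with the proper motion $\mu$ in arcsec yr$^{-1}$ and the parallax $\pi$ in arcsec. A minimal sketch (the sample star is hypothetical, not one from Table 2):

```python
def v_transverse(mu_arcsec_yr, parallax_arcsec):
    """Transverse velocity in km/s: v_t = 4.74 * mu / parallax."""
    return 4.74 * mu_arcsec_yr / parallax_arcsec

# hypothetical star: mu = 0.20 arcsec/yr at 20 pc (parallax = 0.050 arcsec)
print(round(v_transverse(0.20, 0.050), 2))  # 18.96 km/s: low-velocity class (< 30 km/s)
```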
A more detailed examination of the kinematics of these stars requires radial velocities to provide the third dimension, and a more strictly limited sample to minimize the effects of observational errors. For this purpose, we extracted stars within 50 pc from the Hipparcos catalog, accepting only those stars with $\sigma_\pi/\pi < 0.05$ and $\sigma(B-V) < 0.025$. Binaries and stars with other astrometric problems were rejected using flag H59 (ESA 1997, vol. 1, p. 126). This left a clean sample with 3,345 stars. Of these, we found radial velocities in the Hipparcos Input Catalog for 1,799 of them, and these were used to calculate Galactic space motions $U$, $V$, and $W$. Correction to the Local Standard of Rest (LSR) was done using the new solar motion $(U, V, W)_\odot^{\rm LSR} = (+10, +5, +7)$ \mbox{${\rm km~s}^{-1}$}\ from Hipparcos data (\cite{DB97} 1997). Figure 6 shows the $(U, V)_{\rm LSR}$ and $(V, W)_{\rm LSR}$ diagrams for these 1,799 stars. The sample has been divided into 1,598 stars lying on or above the ZAMS (left panels) and 201 stars falling 0.1 or more magnitudes below the ZAMS (right panels). Table 3 summarizes the kinematic properties of these stars. The net range of velocities is roughly the same for both samples, but the ZAMS-and-above sample is highly concentrated near the LSR, and its core shows vertex deviation and clumpiness, which are characteristics of a young, low-velocity population. By contrast, these characteristics are completely absent in the diffuse velocity distribution of the subluminous stars. The conclusions to be drawn from Figure 6 and Table 3 are clear: The 201 stars that appear to lie below the ZAMS are dispersed in velocity space and chiefly represent the Galaxy's thick disk population, with a small admixture of halo stars. These stars are subluminous simply because this old population has metallicities substantially below solar. 
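The correction to the LSR amounts to adding the adopted solar motion to each star's heliocentric velocity components; a minimal sketch of that bookkeeping (the sample velocity is hypothetical):

```python
# solar motion (U, V, W) = (+10, +5, +7) km/s, as adopted in the text
SOLAR_MOTION = (10.0, 5.0, 7.0)

def to_lsr(u, v, w):
    """Heliocentric (U, V, W) -> LSR frame by adding the solar motion."""
    return tuple(x + s for x, s in zip((u, v, w), SOLAR_MOTION))

# hypothetical heliocentric velocity of a nearby star
print(to_lsr(-12.0, -7.0, -3.0))  # (-2.0, -2.0, 4.0): close to the LSR
```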
It is still possible, of course, that some small fraction of these subluminous stars are, in fact, young. Such stars should have low space motions; to attempt to identify any young, subluminous stars we selected 30 stars with \bv $<1.2$ that lie within 1 $\sigma$ of the LSR in all three coordinates. Ten of these stars, listed in Table 4, fall 0.2 magnitude or more below the ZAMS. We used SIMBAD to search for additional information which might indicate the ages of these 10 stars or reveal the reasons for their apparent subluminosity. As indicated in Table 4, most of these stars have low [Fe/H] or some other spectroscopic indication of old age (such as weak H and K emission or a low Li abundance). Two stars appear to have significant errors in their colors, including the only star of the 10 with any evidence of youth. \section{Conclusions} We have been unable to find any nearby stars with solar metallicity that are very young and which are below the traditional ZAMS, despite the Hipparcos results which suggest that Pleiades stars are 0.3 magnitude fainter than that ZAMS. We have also shown that those nearby stars that do lie below the ZAMS show evidence of old age, as expected. This leaves two possibilities. The first is that the Hipparcos parallaxes for the Pleiades and other clusters are correct but that Pleiades-like stars are rare in the immediate solar neighborhood or we have just been unlucky in finding them. Surveys of activity in nearby stars have been comprehensive enough to not have missed any significant number of genuinely young stars, and we cannot accept the {\it ad hoc} explanation that the Pleiades is simply bizarre. We should note here that \cite{Gat95} (1995) measured a parallax for the Coma cluster and found those stars to be subluminous to an extent similar to what is found for the Pleiades. However, Gatewood's result depends on only three stars in Coma, one of which had especially large uncertainty. 
Moreover, the proper motions of those two remaining stars differ significantly. Thus Gatewood's measurements are intriguing but are not sufficient to substantiate Hipparcos. (Also, Gatewood's Coma results differ from the traditional measures in a sense opposite to that seen by Hipparcos, meaning that they conflict with each other.) If stars in the Pleiades are indeed 0.3 magnitude fainter than we have thought up to now, then there are significant aspects of stellar physics that have so far gone unappreciated. The companion paper by \cite{Pins98} (1998) shows how difficult this notion is to accept, but if this is true, then we surely cannot trust our inferences of distances to the globular clusters if we cannot reproduce the behavior of stars that are nearly identical to the Sun. The second possibility is that the Hipparcos parallaxes have small systematic errors. The correction needed to bring the Hipparcos Pleiades distance into agreement with the traditional value is almost exactly one milliarcsec. As shown in the companion paper by \cite{Pins98} (1998), the Hipparcos parallaxes of the brightest Pleiads in the core of the cluster are the most discrepant and weigh most heavily in the net cluster parallax because of their low formal errors. For this reason we suspect that the Hipparcos net parallax for the Pleiades is wrong. The detection and measurement of visual binary orbits for Pleiades stars could provide an independent estimate of the cluster's distance. Such binaries would be difficult to observe, but are within the capabilities of the Fine Guidance Sensors on the {\it Hubble Space Telescope}, for example. \acknowledgments This work was supported, in part, by NASA grant NAGW-4837 to DS\@. RBH and BFJ acknowledge partial support from NASA Grant NAG5-4830 and NSF Grant AST 9530632. This research made use of the SIMBAD database, operated by the CDS, Strasbourg, France. We thank the anonymous referee for his or her remarks.
\section{Introduction} In a non-relativistic theory where one allows for a tunable potential (or other parameters, like the mass $m$ of the particles), we refer to the unitary window as the range of those parameters for which the scattering length(s) $a$ attain a value close to infinity (the unitary limit). This is a relevant limit because the physics becomes universal~\cite{braaten:2006_PhysicsReports} and a common description can be used for totally different systems, ranging from nuclear physics up to atomic physics or down in scale to hadronic systems. For instance, in the two-body sector there is the appearance of a shallow (real or virtual) bound state whose energy is governed by the scattering length, $E_2=-\hbar^2/ma^2$. This state is shallow when compared with the energy related to the typical interaction length $\ell$, defined as $-\hbar^2/m\ell^2$, and in the limit $a/\ell\rightarrow\infty$ it becomes resonant. This limit can be understood either as the scattering length going to infinity or as the range of the interaction going to zero; in the latter case one talks of the zero-range limit or scaling limit. In the scaling limit, the two-body sector displays continuous scale invariance due to the fact that the only dimensionful parameter is the scattering length. As soon as we change the number of particles, this symmetry is dynamically broken to a discrete scale invariance (DSI); for example, for three equal bosons at the unitary limit, an infinite tower of bound states appears, the Efimov effect~\cite{efimov:1970_Phys.Lett.B,efimov:1971_Sov.J.Nucl.Phys.}, with the states related by a discrete scale transformation $r\rightarrow \exp(\pi/s_0) \,r \approx 22.7\,r$, where the scaling factor $s_0=1.0062\dots$ is a universal transcendental number that does not depend on the actual physical system.
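The discrete scaling factor quoted above follows directly from the universal constant $s_0$; a quick numerical check (assuming only $s_0\simeq1.00624$):

```python
import math

s0 = 1.00624  # universal Efimov constant
scale = math.exp(math.pi / s0)
print(round(scale, 1))    # 22.7: size ratio of successive Efimov states
print(round(scale ** 2))  # 515: binding-energy ratio of successive states at unitarity
```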
The anomalous breaking of the symmetry gives rise to an emergent scale at the three-body level, usually referred to as the three-body parameter $\kappa_*$, giving the binding energy $\hbar^2\kappa_*^2/m$ of a reference state of the above tower of states. The fine tuning of the parameters that brings a system inside the unitary window can be realized artificially, as has been the case in the field of cold atoms with Feshbach resonances~\cite{chin:2010_Rev.Mod.Phys.}, or can be provided by nature, as in the case of atomic $^4$He, where the $^4$He$_2$ molecule has a binding energy several orders of magnitude smaller than the typical interaction energy~\cite{luo:1993_J.Chem.Phys.}. Nuclear physics is another example of a tuned-by-nature system; the binding energy of the deuteron, $B_d=2.22456$~MeV, is much smaller than the typical nuclear energy $\hbar^2/m\ell^2 \approx 20$~MeV, considering that the interaction length is given by the inverse of the pion mass $m_\pi$, $\ell\sim 1/m_\pi \approx 1.4$~fm. The fact that nuclear physics resides inside such a window was already exploited in the pioneering works of the thirties, where the binding energies of light nuclei were calculated using either boundary conditions~\cite{wigner:1933_Phys.Rev.,fermi:1936_Ricercasci.} or pseudopotentials~\cite{huang:1957_Phys.Rev.}. Nuclear physics is the low-energy aspect of the strong interaction, namely Quantum Chromodynamics (QCD); in this limit, QCD is a strongly interacting quantum field theory and only non-perturbative approaches can be used to describe the spectrum of nuclear physics. Such non-perturbative approaches are starting to appear, one example being Lattice QCD (LQCD); however, a complete calculation of nuclear properties seems at present not yet feasible using these techniques.
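These scales can be checked numerically (a sketch; the value $\hbar^2/m=41.47$~MeV~fm$^2$ is the one fixed later in the text, and the zero-range estimate $|E_2|=\hbar^2/ma_1^2$ necessarily differs somewhat from the actual $B_d$ because of effective-range corrections):

```python
HBAR2_M = 41.47  # hbar^2/m for nucleons, MeV fm^2
ELL = 1.4        # interaction length ~ 1/m_pi, fm

# typical nuclear energy scale hbar^2 / (m ell^2)
print(round(HBAR2_M / ELL ** 2, 1))  # 21.2 MeV, i.e. the ~20 MeV quoted above

# zero-range estimate of the deuteron energy from the triplet scattering length
a1 = 5.42  # fm
print(round(HBAR2_M / a1 ** 2, 2))   # 1.41 MeV: same order as B_d = 2.22 MeV, << 20 MeV
```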
Historically, the description of light nuclear systems was based on potential models constructed to reproduce a selected number of observables; first attempts were based on the expansion of the potential on the most general operator basis compatible with the symmetries observed in the spectrum. Later, it was realized that a potential could be constructed starting from the symmetries of QCD in an Effective Field Theory (EFT) approach. One of the important symmetries of QCD in the limit of zero-mass light quarks is chiral symmetry; this symmetry is spontaneously broken and its Goldstone boson is the $\pi$-meson. The mass of the pion $m_\pi$ is not exactly zero because of the soft breaking term introduced by the explicit masses of the up and down quarks, but it is still much lower than the typical hadronic masses. The chiral limit is not the only interesting limit in QCD; the actual mass of the pion is probably close to a value for which the nucleon-nucleon scattering lengths diverge~\cite{braaten:2003_Phys.Rev.Lett.}. In fact, one can study the variation of the $^1S_0$ (singlet) $a_0$ and $^3S_1$ (triplet) $a_1$ scattering lengths as a function of the masses of the up and down quarks, or equivalently of $m_\pi$, which is related to the quark mass by the Gell-Mann--Oakes--Renner relation. It has been shown that for $m_\pi\approx 200$~MeV both scattering lengths go to infinity~\cite{epelbaum:2006_Eur.Phys.J.C,beane:2002_Nucl.Phys.A}. At the physical point, $m_\pi\approx138$~MeV, the values of the two scattering lengths, $a_0 \approx -23.7$~fm and $a_1\approx 5.4$~fm, are still (much) larger than the typical interaction length $\ell\approx 1.4$~fm; this is a further indication that nuclear physics is close to the unitary limit and well inside the universal window.
A model-independent description of the physics inside the unitary window is given by an EFT based on the clear separation of scales between the typical momenta~$Q \sim 1/a$ of the system and the underlying high-momentum scale $\sim 1/\ell$~\cite{vankolck:1999_NuclearPhysicsA,bedaque:1999_Phys.Rev.Lett.,bedaque:1999_Nucl.Phys.A}. Using EFT, if the power counting is correct~\cite{griesshammer:2005_Nucl.Phys.A}, one can systematically improve the predictions of observables. For instance, in the two-body sector the usual effective range expansion (ERE) can be reproduced by such an expansion~\cite{vankolck:1999_NuclearPhysicsA}; the leading order (LO), which is just a two-body contact interaction, captures all the information encoded in the scattering length $a$, while the next-to-leading order (NLO) term, which contains derivatives, captures the information encoded in the effective range $r_e$. Starting from the three-body sector, a LO three-body interaction is necessary~\cite{bedaque:1999_Phys.Rev.Lett.,bedaque:1999_Nucl.Phys.A,kievsky:2017_Phys.Rev.C}, which introduces the emergent three-body scale. It is possible to investigate the universal window by using potential models; this approach allows one to follow the behaviour of two- and three-particle binding energies inside the window of universality. Larger numbers of particles can also be considered, as in Refs.~\cite{gattobigio:2011_Phys.Rev.A,kievsky:2014_Phys.Rev.A,kievsky:2017_Phys.Rev.A}, where it has been shown that a simple Gaussian potential gives a good description of bosonic systems like helium droplets in this regime. In this paper we want to explore the window of universality for nucleons, that is, for fermions with spin and isospin $1/2$ degrees of freedom; the idea is to follow, as a function of the interaction range, the states which represent light nuclei in the region of universality and to observe which part of the nuclear spectrum is in fact governed by universality.
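For reference, the expansion reproduced order by order by this EFT is the standard effective range expansion of the $S$-wave phase shift $\delta_0$, \begin{equation} k\cot\delta_0(k) = -\frac{1}{a} + \frac{1}{2}\,r_e\,k^2 + \mathcal{O}(k^4)\,, \end{equation} with the $-1/a$ term captured by the LO contact interaction and the $r_e$ term by the NLO derivative operator.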
The major difference with respect to the bosonic case is the presence of two scattering lengths. There have been previous studies of Efimov physics with two scattering lengths~\cite{bulgac:1976_Sov.J.Nucl.Phys.,kievsky:2016_Few-BodySyst,kievsky:2018_J.Phys.Conf.Ser.,konig:2017_Phys.Rev.Lett.,vankolck:2018_J.Phys.Conf.Ser.,hammer:2018_Few-BodySyst.,beane:2002_Nucl.Phys.A}, and there are different ways to explore the space of parameters; one possible choice is to keep the ratio between the scattering lengths, $a_0/a_1$, constant, selecting particular cuts in that space. Accordingly, we explore the nuclear cut, that is $a_0/a_1\approx -4.3$, moving from the unitary point, $a_0,a_1\rightarrow\infty$, to the physical point; at a more fundamental level, this is equivalent to changing the mass of the pion $m_\pi$ (or the sum of the up- and down-quark masses in QCD), as was shown in Refs.~\cite{beane:2001_Phys.Rev.A,epelbaum:2006_Eur.Phys.J.C}. Interestingly, we observe that at unitary, in addition to the $A=5$ gap, an $A=6$ gap appears. The paper is organized as follows. In Section~\ref{sec:efimovPlot} we show how the spectrum of $A=2,3,4,6$ nucleons representing light nuclei depends on the scattering lengths when we change them from the unitary limit to their physical values, and we discuss which aspects of the universality of Efimov physics still remain. In Section~\ref{sec:physicalPoint} we concentrate our study on the physical point, where a three-body force, as well as the Coulomb interaction, is introduced. In Section~\ref{sec:pwaves} we investigate the possible r\^ole of $p$-waves in the binding of $A=6$ nuclei. Finally, in Section~\ref{sec:conclusions} we give our conclusions.
\section{$1/2$ Spin-Isospin energy levels close to unitary}\label{sec:efimovPlot} In this section, we describe the discrete spectrum of spin-$1/2$, isospin-$1/2$ particles from the unitary limit to the point where nuclear physics is located, the physical point, defined as the point at which the scattering lengths take their observed values. To this end we construct the Efimov plot, a plot in the plane $(K,1/a)$ defined by the energy momentum $K$ of the bound state with energy $\hbar^2K^2/m$, as a function of the inverse of the two-body scattering length $a$. In the case of two nucleons there are two different scattering lengths, $a_0$ and $a_1$, in the spin-isospin channels with $S,T=0,1$ and $S,T=1,0$ respectively. Accordingly, following Refs.~\cite{kievsky:2016_Few-BodySyst,kievsky:2018_J.Phys.Conf.Ser.}, the plane is defined with the triplet scattering length, $(K,1/a_1)$, taking care that for each value of $a_1$, $a_0$ is varied so as to maintain the ratio $a_0/a_1$ constant. In Refs.~\cite{kievsky:2016_Few-BodySyst,kievsky:2018_J.Phys.Conf.Ser.}, the main characteristics of the Efimov plot for three spin-isospin $1/2$ particles have been studied. In particular, it was shown that for the ratio $a_0/a_1\approx-4.3$, corresponding to the nuclear physics case, the infinite tower of states at unitary disappears very fast as $a_1$ decreases and, at $a_1<20\;$fm, only one state survives. This simple analysis explains the existence of only one bound state for $^3$H and $^3$He. Conversely, in the case of three identical bosons, calculations using finite-range potentials have shown that the first excited state survives along the unitary window. \subsection{The potential model} In order to explore the unitary window through the Efimov plot, we calculate the binding energies of $A$ nucleons for different values of the two-body scattering lengths.
In the case of a zero-range interaction, the $A=2$ energy of the real (virtual) state for $a>0$ ($a<0$) is simply $E_2=-\hbar^2/m a^2$. For three particles, and using a zero-range interaction, the binding energies can be obtained by solving the Faddeev zero-range equations encoded in the Skorniakov--Ter-Martirosian (STM) equations (see Ref.~\cite{braaten:2006_PhysicsReports} and references therein for details). It is well known that the contact interaction can be represented by different functional forms introducing finite-range effects. In particular, as has been shown in Ref.~\cite{alvarez-rodriguez:2016_Phys.Rev.A}, inside the unitary window a Gaussian potential captures the main characteristics of the dynamics, confirming the universal behavior of the system in this particular region. Considering that, for two nucleons, there are four different spin-isospin channels with quantum numbers $ST=01,10,00,11$, we define the following spin-isospin-dependent potential of Gaussian type \begin{equation} V(r) = \sum_{ST} V_{ST}\,e^{-(r/r_{ST})^2} {\cal P}_{ST} \,, \label{eq:twoBody} \end{equation} where we have introduced the spin-isospin projectors ${\cal P}_{ST}$. The minimal requirement to construct a fully antisymmetric two-body wave function with the lowest value of the angular momentum $L$ is to consider the spin-isospin channels $S=0,T=1$ and $S=1,T=0$. Therefore, in this first analysis, the other two components of the potential are set to zero: $V_{00}=V_{11}=0$. In each of the two remaining terms there are two parameters, the strength of the Gaussian and its range; we fix both ranges to be the same, $r_{10} = r_{01} = r_0 = 1.65$~fm, of the order of the nuclear interaction range. With this choice an acceptable description of the two-body low-energy data is obtained (a refinement of the model will be discussed in the next section).
The tuning of the two strengths allows us to control the scattering lengths; the value of $V_{01}$ defines the singlet scattering length $a_0$, while the value of $V_{10}$ defines the triplet one, $a_1$. In all our calculations we fix the value of the nucleon mass $m$ so that $\hbar^2/m=41.47$~MeV~$\text{fm}^2$. In some of the following tables and figures, as unit of length we use $r_0=1.65$~fm and as unit of energy we use $E_0 = \hbar^2/mr_0^2 = 15.232$~MeV. In order to calculate the binding energies for the nuclear systems having $A=3,4,6$, we have solved the Schr\"odinger equation using two different variational methods. One method is based on the Hyperspherical Harmonic (HH)~\cite{kievsky:1997_Few-BodySyst} basis in its unsymmetrized version~\cite{gattobigio:2009_Phys.Rev.A,gattobigio:2009_Few-BodySyst.,gattobigio:2011_Phys.Rev.C}. We have used this approach mainly for $A=3,4$, since it is very accurate for states far from thresholds. Close to a threshold, as for $A=6$ or for excited states in $A=3,4$, the dimension of the basis tends to become too large to reach good precision. To overcome this problem we implemented a version of the stochastic variational method (SVM)~\cite{varga:1995_Phys.Rev.C} with correlated-Gaussian functions as basis set; this method allows for a more economical description of the excited states close to the threshold. By giving values to $V_{10}$ and $V_{01}$, the values of the scattering lengths vary along the nuclear cut defined by the ratio $a_0/a_1 = -4.3066$. Along this path we have calculated the binding energies of the $A=2,3,4,6$ nucleon systems. The calculations, for selected values of the potential strengths, are reported in Table~\ref{tab:nuclearPlane} in the case of positive triplet-scattering-length values, for which a two-body bound state in the $^3S_1$ channel exists.
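As a cross-check of the two-body sector of this model (a sketch under the stated parameters, not the variational codes used for $A\ge3$), the scattering length of each Gaussian channel can be obtained by integrating the zero-energy radial equation $u''=(m/\hbar^2)V(r)\,u$ outward and matching to the asymptotic form $u\propto r-a$:

```python
import math

HBAR2_M = 41.47  # hbar^2/m, MeV fm^2, as fixed in the text
R0 = 1.65        # fm, common Gaussian range r_0

def scattering_length(V0, r0=R0, r_max=20.0, h=1e-3):
    """Integrate the zero-energy radial equation u'' = (m/hbar^2) V(r) u
    outward with RK4; asymptotically u ~ C (r - a), so a = r - u/u'."""
    def f(r):
        return (V0 / HBAR2_M) * math.exp(-(r / r0) ** 2)
    u, up, r = 0.0, 1.0, 0.0
    for _ in range(int(r_max / h)):
        k1u, k1p = up, f(r) * u
        k2u, k2p = up + 0.5 * h * k1p, f(r + 0.5 * h) * (u + 0.5 * h * k1u)
        k3u, k3p = up + 0.5 * h * k2p, f(r + 0.5 * h) * (u + 0.5 * h * k2u)
        k4u, k4p = up + h * k3p, f(r + h) * (u + h * k3u)
        u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
        up += h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0
        r += h
    return r - u / up

# strengths of the first row of Table 1 (nuclear cut at the physical point)
print(round(scattering_length(-60.575), 2))  # triplet a_1, ~5.48 fm
print(round(scattering_length(-37.9), 2))    # singlet a_0, ~-23.6 fm
```

With these strengths the integration should reproduce, within the numerical accuracy, the values $a_1\approx5.48$~fm and $a_0\approx-23.6$~fm quoted below.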
The calculations cover a region between the unitary point, for which both scattering lengths attain an infinite value, and the physical point, for which the value of the two-body state is $E_2 = -2.2255$~MeV (the experimental binding energy of the deuteron is $2.224575(9)$~MeV), and the two scattering lengths have the values $a_1 = 5.4802$~fm and $a_0=-23.601$~fm, with the experimental values $a_1=5.424(3)$~fm and $a_0=-23.74(2)$~fm, respectively. \begin{table*}[ht] \caption{Calculations belonging to the nuclear cut, $a_0/a_1 = -4.3066 $ for selected values of the strengths $V_{01}$ and $V_{10}$. The ground-state energy $E_A$ and, if it exists, the excited-state energy $E_A^*$ of the $A$-particle system are reported. In the $A=6$ case we distinguish between the total isospin $T=1$ and total spin $S=0$ case, $^6$He, and the $T=0$, $S=1$ case, $^6$Li. The Coulomb interaction is not taken into account.} \label{tab:nuclearPlane} \begin{tabular} {@{}l l c c c c c c c c@{}} \hline\hline $V_{10}$(MeV)& $V_{01}$(MeV) & $a_1$(fm) & $E_2$(MeV) & $E_3$(MeV) & $E_3^*$(MeV) & $E_4$(MeV) & $E_4^*$(MeV) & $^6$He(MeV) & $^6$Li(MeV)\\ \hline -60.575 & -37.9 & 5.4802 & -2.2255 & -10.2455 & - & -39.843 & -11.19 & -41.60 & -46.74 \\ -60. & -37.95858685 & 5.5980 & -2.1098 & -10.0056 & - & -39.221 & -10.93 & -40.87 & -45.82 \\ -58. & -38.17113668 & 6.0683 & -1.72703 & -9.19027 & - & -37.093 & -10.01 & -38.36 & -42.71 \\ -56. & -38.39860618 & 6.6607 & -1.37621 & -8.40544 & - & -35.017 & -9.14 & -35.95 & -39.67 \\ -54. & -38.64295075 & 7.4310 & -1.05929 & -7.65258 & - & -32.997 & -8.31 & -33.58 & -36.77 \\ -52. 
& -38.90658498 & 8.4756 & -0.77842 & -6.93330 & - & -31.035 & -7.52 & -31.31 & -33.95 \\ -50.0 & -39.19224 & 9.97497 & -0.53599 & -6.24929 & - & -29.135 & -6.78 & - & -31.23 \\ -48.0 & -39.50320907 & 12.31255 & -0.334659 & -5.60235 & - & -27.300 & -6.08 & - & -28.62 \\ -46.0 & -39.8434712 & 16.47151 & -0.17735880 & -4.99446 & - & -25.536 & -5.43 & - & -26.17 \\ -45.0 & -40.026055 & 20.06376 & -0.1163 & -4.7058 & -0.116853 & -24.682 & -5.13 & - & -24.96 \\ -44.5 & -40.120751 & 22.6040702 & -0.0903760 & -4.5654 & -0.091991 & -24.262 & -4.98 & - & -24.41 \\ -44.0 & -40.217947 & 25.95893 & -0.067559 & -4.4278 & -0.07054 & -23.847 & -4.83 & - & - \\ -43.5 & -40.3174375 & 30.5953 & -0.047939 & -4.2927 & -0.053034 & -23.437 & -4.69 & - & - \\ -43.0 & -40.4196253 & 37.421571 & -0.0315796 & -4.1605 & -0.03853 & -23.032 & -4.55 & - & - \\ -42.5 & -40.52452499 & 48.46985 & -0.0185467 & -4.0311 & -0.02697 & -22.633 & -4.42 & - & - \\ -42.0 & -40.63225234 & 69.413066 & -0.0089083 & -3.9044 & -0.01816 & -22.238 & -4.28 & - & - \\ -41.5 & -40.74293099 & 124.3314 & -0.00273453 & -3.7807 & -0.01186 & -21.850 & -4.15 & - & - \\ -40.88363& -40.88363 & $\infty$ & 0 & -3.6322 & -0.00678 & -21.378 & -4.00 & - & - \\ \hline \hline \end{tabular} \end{table*} With the potential of Eq.~(\ref{eq:twoBody}), in all cases the lowest state corresponds to total orbital angular momentum $L=0$. Moreover, in this first analysis the Coulomb interaction between protons was disregarded, so isospin is conserved. In the three-body sector, the quantum numbers of $^3$H and of $^3$He are $S=1/2$ and $T=1/2$. In this exploration we also disregarded other charge-symmetry-breaking terms; accordingly, the two nuclei have the same energy. We refer to their ground-state energy as $E_3$ and to their excited-state energy as $E_3^*$. The total wave function is antisymmetric, with a mostly symmetric spatial part.
We would like to stress a big difference, already mentioned above, between the bosonic case and the nuclear cut: in the bosonic case the first excited state never disappears into the particle-dimer continuum whereas, in the nuclear case, the excited state disappears into the continuum and becomes a virtual state already at a large value of $a_1$ ($a_1\approx 20$~fm). The reason is the following: at unitary, since we are using the same range for both Gaussians, the system is equivalent to a bosonic system and an infinite set of excited states appears, showing the Efimov effect (in Table~\ref{tab:nuclearPlane} only the first one is reported). Moreover, the system is completely symmetric; no other symmetry component is present. As the strengths of the potentials start to vary, keeping the ratio $a_0/a_1$ constant, the three-body wave function develops a spatially mixed-symmetry component, making the energy gain slower than in the bosonic case. The two-body system is not affected by the singlet potential (which is weaker) and its energy gain is the same as in the bosonic case; as a consequence, the first excited state crosses the particle-dimer continuum, becoming a virtual state. \begin{figure}[!tbp] \begin{center} \includegraphics[width=\linewidth]{fourBodyEfimov.pdf} \end{center} \caption{Efimov plot for $N=2,3,4$ particles along the nuclear cut $a_0/a_1=-4.3066$. The triplet scattering length $a_1$ is in units of $r_0=1.65$~fm and the energies are expressed in units of $E_0=\hbar^2/mr_0^2 = 15.232$~MeV.} \label{fig:efimovPlot} \end{figure} From the results reported in Table~\ref{tab:nuclearPlane} we also observe that the three-body binding energy at the physical point is much larger than the experimental value of $-8.48$~MeV; this is a well known fact related to the necessity of including a three-body force, a point we discuss in the next section.
In Fig.~\ref{fig:efimovPlot} we show the Efimov plot up to four particles; we clearly see the three-body excited state disappearing into the continuum. We also observe the usual feature of two four-body states attached to the three-body ground state. The four-body calculations are done for the same quantum numbers as $^4$He, i.e., $S=0$ and $T=0$; thus the two states have a mostly symmetric spatial wave function. The ratio between the ground-state energy of the four-body system and that of the three-body system, $E_4/E_3$, is not constant along the path: it varies from $E_4/E_3 = 5.89$ at the unitary point to $E_4/E_3 = 3.89$ at the physical point, close to the realistic case of $3.67$. As far as the excited state of the four-body system is concerned, the ratio between its energy and that of the three-body state is more or less constant along the path, $E_4^*/E_3 = 1.09 - 1.1$; the finite-range corrections result in a bigger value of this ratio with respect to the zero-range limit~\cite{deltuva:2013_Few-BodySyst}. \subsection{Universal behavior} To analyse the universal behavior of the few-nucleon systems we start by recalling the Efimov radial law for three equal bosons~\cite{braaten:2006_PhysicsReports} \begin{subequations} \begin{equation} E_3/E_2 = \tan^2\xi \label{eq:zenergy} \end{equation} \begin{equation} \kappa_*a = \text{e}^{(n-n_*)\pi/s_0}\frac{\text{e}^{-\Delta(\xi)/2s_0}}{\cos\xi} \,, \label{eq:zkstara} \end{equation} \label{eq:ZeroRange} \end{subequations} where, due to its zero-range character, $E_2=-\hbar^2/ma^2$ and the three-body binding energy of level $n_*$ at unitarity is $\hbar^2\kappa_*^2/m$. The function $\Delta(\xi)$ is universal in the sense that it is the same for all the energy levels. It can be calculated by solving the STM equations, as explained for example in Ref.~\cite{braaten:2006_PhysicsReports}, and its expression can be given in a parametric form~\cite{gattobigio:2019_arXiv}.
It should be noticed that the spectrum given by the above equations is not bounded from below. For a real three-boson system located close to the unitary limit and interacting through short-range forces with a typical length $\ell$, the discrete spectrum is bounded from below, with the number of levels roughly approximated by $(s_0/\pi)\ln(|a|/\ell)$. The extension of Eqs.~(\ref{eq:ZeroRange}) to describe finite-range interactions, considering more particles and eventually spin-isospin degrees of freedom, is given in a series of papers, Refs.~\cite{gattobigio:2014_Phys.Rev.A,gattobigio:2014_JournalofPhysics:ConferenceSeries,kievsky:2014_Phys.Rev.A,kievsky:2016_Few-BodySyst,kievsky:2018_J.Phys.Conf.Ser.}, and it reads \begin{subequations} \begin{equation} E^m_A/E_2 = \tan^2\xi \label{eq:energy} \end{equation} \begin{equation} \kappa^m_A a_B +\Gamma^m_A= \frac{\text{e}^{-\Delta(\xi)/2s_0}}{\cos\xi} \,, \label{eq:kstara} \end{equation} \label{eq:FiniteRange} \end{subequations} where for three particles, $E^m_3$, $m=0,1,\ldots$, is the energy of the different branches; in Fig.~\ref{fig:efimovPlot} the first two branches ($m=0,1$) are shown. For four particles, $E^m_4$, $m=0,1$, is the energy of the two states attached to the lowest three-body branch, $E^0_3$. The length $a_B$ is defined from the two-body energy as $E_2=-\hbar^2/m a_B^2$. Finally, we have introduced the shift parameter $\Gamma^m_A$, which is almost constant along the unitary window. A recent analysis of the shift parameter for three equal bosons is given in Ref.~\cite{gattobigio:2019_arXiv}, where it is related to the running three-body parameter introduced in Ref.~\cite{ji:2015_Phys.Rev.A}.
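The zero-range systematics recalled above can be made concrete in a few lines; the sketch below uses the universal constant $s_0\simeq 1.00624$ (a standard value, not quoted in the text) to evaluate the discrete scaling factor and the rough level-count estimate:

```python
import math

s0 = 1.00624                            # universal Efimov constant (assumed)
scaling = math.exp(math.pi / s0)        # geometric factor between successive kappa's
print(f"discrete scaling factor exp(pi/s0) = {scaling:.2f}")   # ~22.7

def n_levels(a_over_ell):
    """Rough number of Efimov states for scattering length |a| and range ell."""
    return (s0 / math.pi) * math.log(a_over_ell)

print(f"|a|/ell = 100 -> about {n_levels(100):.1f} levels")
```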
Eq.~(\ref{eq:kstara}) can also be written as \begin{equation} \kappa^m_A a_B = \frac{\text{e}^{-\Delta_A^m(\xi)/2s_0}}{\cos\xi} \,, \label{eq:kstara1} \end{equation} where the shift $\Gamma^m_A$ is absorbed in the level function $\Delta^m_A(\xi)$; in the present work it is calculated from a Gaussian potential as in the bosonic case~\cite{alvarez-rodriguez:2016_Phys.Rev.A}. In Ref.~\cite{alvarez-rodriguez:2016_Phys.Rev.A} it has been shown that the level function $\Delta^m_A(\xi)$, which incorporates the finite-range corrections given by a Gaussian potential, is about the same for different potentials close to the unitary limit. Accordingly, a Gaussian potential can be thought of as a universal representation of potential models inside the universality window. Moreover, the level function $\Delta^m_A(\xi)$ is unique for all Gaussian potentials: it does not depend on the particular range $r_0$ used for the actual calculations because, as shown in Fig.~\ref{fig:efimovPlot}, this parameter is just used to make the scattering length and the energy dimensionless. This is an important point because the limit $r_0/a_1\rightarrow 0$ can be read either as $a_1\rightarrow \infty$ or as $r_0\rightarrow 0$. In the limit $r_0/a_1=0$ the unitary point coincides with the finite-range-regularized scaling-limit point and the dimensionless values of the binding momenta are the same for all Gaussian potentials. They are given in Table~\ref{tab:purenumber} for $A=3,4$ and $m=0,1$.
\begin{table}[t] \caption{In this table we report the universal-Gaussian values of the binding momenta of the $A=3,4$ systems at the unitary point for the branches $m=0,1$. We also summarise the values of the physical angles, of the Gaussian two-body lengths $a_B/r_0$ corresponding to the same angles, and of the momenta and energies at the unitary limit predicted for the real nuclear systems using Eq.~(\ref{eq:equalxi}).} \label{tab:purenumber} \begin{tabular} {@{}c c c c c c c @{}} \hline\hline \rule[-1.2ex]{0pt}{0pt} $A$ & $m$ & $r_0\kappa^m_A\bigr|_G$ & $\tan^2\xi\bigr|_{\text{exp}}$ & $a_B/r_0\bigr|_G$ & $\kappa^m_A\bigr|_{\text{exp}}$(fm$^{-1}$) & $E^m_A\bigr|_{\text{exp}}$(MeV)\\ \hline
3 & 0 & 0.4883 & 3.81 & 2.1866 & 0.2473 & 2.536\\
3 & 1 & 0.0211 \\
4 & 0 & 1.1847 & 13.13 & 2.0774 & 0.570 & 13.474\\
4 & 1 & 0.5124 \\
\hline \hline \end{tabular} \end{table} The uniqueness of the Gaussian level functions and the fact that the Gaussian potential is a universal representation of potential models close to the unitary limit allow us to use the Gaussian potential to predict the values of the energies at the unitary limit for real systems, which in principle are described by more realistic potentials. We proceed in the following way: from Eq.~(\ref{eq:ZeroRange}) we observe that the product $\kappa_* a$ is a function of the angle $\xi$ only, through the universal function $\Delta(\xi)$. This property is related to the DSI and is well verified for real systems, which, close to the unitary limit, are well represented by the Gaussian level functions as given in Eq.~(\ref{eq:kstara1}).
Therefore, the product $\kappa^m_A a_B$ is a function of the angle $\xi$ alone, verifying the following equality \begin{equation} \kappa^m_A a_B\Bigr|_{\text{exp}} = \kappa^m_A a_B\Bigr|_{G} \,, \label{eq:equalk} \end{equation} where $\kappa^m_A a_B\bigr|_{\text{exp}}$ is the function evaluated at the angle given by the experimental values, and $\kappa^m_A a_B\bigr|_G$ is the same function evaluated at the same angle but calculated with the Gaussian potential. From Eq.~(\ref{eq:equalk}) the binding momentum at unitarity for the real systems is \begin{equation} \kappa^m_A\Bigr|_{\text{exp}} = \frac{1}{a_B}\Bigr|_{\text{exp}}\kappa^m_A a_B\Bigr|_G = \frac{1}{a_B}\Bigr|_{\text{exp}}(r_0 \kappa^m_A) \frac{a_B}{r_0}\Bigr|_G\,, \label{eq:equalxi} \end{equation} where the universal Gaussian values $r_0 \kappa^m_A\bigr|_G$ are reported in Table~\ref{tab:purenumber}. We can apply Eq.~(\ref{eq:equalxi}) to predict the value of the three- and four-body energies at the unitary limit in nuclear physics. For the three-body case, the experimental binding energies of the deuteron (corresponding to $a_B\bigr|_{\text{exp}}=4.3176$~fm) and of $^3$H fix the experimental value of the angle $\xi$ to be $\tan^2\xi\bigr|_{\text{exp}} = 3.81$. Using the range value $r_0=1.65\;$fm, this angle is reproduced by the Gaussian strengths $V_{10}=-64.96$~MeV and $V_{01}=-37.4855$~MeV, which correspond to a deuteron energy of $E_2=-3.1858639$~MeV or, equivalently, $a_B/r_0\bigr|_G = 2.1866$. Using the Gaussian value $r_0\kappa_3^0\bigr|_G = 0.4883$, from Eq.~(\ref{eq:equalxi}) we obtain $\kappa^0_3\bigr|_{\text{exp}}=0.2473$~fm$^{-1}$, corresponding to a three-nucleon binding energy at unitarity of $E_3^0\bigr|_\text{exp} = 2.536$~MeV. We proceed in the same way for the four-body case.
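The arithmetic of Eq.~(\ref{eq:equalxi}) can be verified directly with the Gaussian values collected in Table~\ref{tab:purenumber}; the conversion to MeV below assumes $\hbar^2/m = 41.47$~MeV\,fm$^2$ (a standard value, not quoted in the text):

```python
HBAR2_OVER_M = 41.47        # MeV fm^2 (assumed)
aB_exp = 4.3176             # fm, from the deuteron binding energy

# (A, m) -> (r0*kappa|_G, a_B/r0|_G), Gaussian values from Table [tab:purenumber]
cases = {(3, 0): (0.4883, 2.1866), (4, 0): (1.1847, 2.0774)}
results = {}
for (A, m), (r0kappa, aB_over_r0) in cases.items():
    kappa = r0kappa * aB_over_r0 / aB_exp           # fm^-1, Eq. (equalxi)
    results[(A, m)] = (kappa, HBAR2_OVER_M * kappa**2)
    print(f"A={A}: kappa = {kappa:.4f} fm^-1, E = {results[(A, m)][1]:.3f} MeV")
```

The output reproduces the values quoted in the table: $\kappa^0_3 = 0.2473$~fm$^{-1}$, $E^0_3 = 2.536$~MeV, $\kappa^0_4 = 0.570$~fm$^{-1}$, $E^0_4 = 13.474$~MeV.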
We take $E_4^0=29.1$~MeV as the experimental value of $^4$He without Coulomb interaction~\cite{pudliner:1995_Phys.Rev.Lett.}; with this value and that of the deuteron we obtain $\tan^2\xi\bigr|_{\text{exp}} = 13.1$, which can be reproduced using the Gaussian potential with $V_{10} = -66.4$~MeV and $V_{01}=-37.36047$~MeV, which also gives $a_B/r_0\bigr|_G = 2.0774$. Using Eq.~(\ref{eq:equalxi}) and the universal-Gaussian value $r_0\kappa_4^0\bigr|_G=1.1847$ we obtain $\kappa^0_4\bigr|_{\text{exp}}=0.570$~fm$^{-1}$ or, equivalently, $E_4^0\bigr|_\text{exp} = 13.474$~MeV. All the results are summarised in Table~\ref{tab:purenumber}; it should be noted that predictions of the same order exist for $A=3$~\cite{epelbaum:2006_Eur.Phys.J.C}. In order to study further the close relation between the zero- and finite-range descriptions we look at the behavior of \begin{equation} y(\xi) = \text{e}^{-\Delta(\xi)/2s_0}/{\cos\xi} \label{eq:yofxi} \end{equation} as a function of $\kappa^m_A a_B$. For zero range this function is a straight line through the origin at $45$ degrees. As already observed for bosons~\cite{gattobigio:2014_Phys.Rev.A,alvarez-rodriguez:2016_Phys.Rev.A}, if the shift parameter $\Gamma^m_A$ is almost constant, the three- and four-particle results should give a linear relation between $y(\xi)$ and $\kappa^m_A a_B$, though not through the origin. The results are given in Fig.~\ref{fig:ylines}, showing the expected behavior in a very extended range of $\kappa^m_A a_B$ values. \begin{figure}[!tbp] \begin{center} \includegraphics[width=\linewidth]{ylines.pdf} \end{center} \caption{Efimov plot for the nuclear cut in the form of $y(\xi)$, Eq.~(\ref{eq:yofxi}), as a function of $\kappa_A^m a_B$.
The zero-range limit is given by the straight line $y(\xi) = \kappa_A^m a_B$.} \label{fig:ylines} \end{figure} \subsection{Including the $A=6$ energies} In the following we study the six-body bound states as a function of the triplet scattering length along the nuclear cut; we expect a bigger deviation from the bosonic case because the totally symmetric spatial component can no longer be present: with only four internal spin-isospin states available, only mixed-symmetry components appear. In the $A=6$ case we distinguish two different states, one with quantum numbers $S=0$ and $T=1$, to which we refer as $^6$He even in the absence of the Coulomb interaction, and one with quantum numbers $S=1$ and $T=0$, to which we refer as $^6$Li. The results of Table~\ref{tab:nuclearPlane} are reported in Fig.~\ref{fig:sixBodyEfimov}; clearly, we can observe the absence of these states close to the unitary limit. This is a big difference with respect to the bosonic case, where, for $6\ge A>3$, the $A$-boson system at unitarity has two states, one deep and one shallow, attached to the $A-1$ ground state~\cite{gattobigio:2012_Phys.Rev.A,gattobigio:2014_Phys.Rev.A,kievsky:2014_Phys.Rev.A}. Instead, the two fermionic $A=6$ states are not bound below the $^4$He threshold (at unitarity the $^4$He and $^4$He+d thresholds coincide since the two-body system has zero energy). This is clearly a sign of the absence of the symmetric component in the spatial wave function. From the previous discussion we notice the interesting result that, at the unitary limit, there is a mass gap for $A=5,6$. This gap survives only for $A=5$ at the physical point. In fact, following the behavior of the $A=6$ states in Fig.~\ref{fig:sixBodyEfimov} we observe that, as the two-body system acquires energy, there is a point around $r_0/a_1\approx0.07$ at which $^6$Li emerges from the $^4$He+d threshold and, at $r_0/a_1\approx0.2$, $^6$He emerges from the $^4$He threshold.
The difference in energy between the two states at this last point is $2.64$~MeV, of the order of the experimental mass difference; it becomes of the order of $5.14$~MeV at the physical point. We can conclude that this is a subtle effect of the finite-range character of the force, as we are going to discuss in the next sections. Finally, we investigate the universal character of the fermionic $A=6$ states using Eq.~(\ref{eq:yofxi}). A linear behavior of the function $y(\xi)$ indicates a behavior controlled by the scattering lengths and the three-body parameter. In Fig.~\ref{fig:ylines6} we plot the value of $y(\xi)$, calculated using the $A=6$ energies, as a function of $\kappa_4^0a_B$; the latter has been chosen because, at the unitary point, it is the energy representing the threshold. We find a dominant linear relation close to the thresholds, where the structure of the states is dominated by the $^4$He. For $^6$Li deviations from the universal behavior appear close to the physical point, whereas the $^6$He energies follow nicely the linear behavior, showing a strong universal character. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{sixBodyEfimov.pdf} \end{center} \caption{Efimov plot in the nuclear cut for $A=6$ particles. The scattering length is in units of $r_0=1.65$~fm and the energies in units of $E_0=\hbar^2/mr_0^2=15.232$~MeV. We distinguish between the six-body state that has the quantum numbers of $^6$He and the one with the quantum numbers of $^6$Li. We also report the energy of the $A=4$ state, which has the quantum numbers of $^4$He and represents the threshold for $^6$He, and the energy of the $^4$He+d system, which represents the threshold for $^6$Li.
In the present calculations the Coulomb interaction has not been taken into account.} \label{fig:sixBodyEfimov} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{ylines6.pdf} \end{center} \caption{Efimov plot in the nuclear cut for $A=6$ particles, the same as in Fig.~\ref{fig:sixBodyEfimov}, in the form of $y(\xi)$ as a function of $\kappa_4^0a_B$. With respect to the $A=3,4$ cases, we observe a bigger deviation from the universal prediction of Efimov physics.} \label{fig:ylines6} \end{figure} \section{The physical point}\label{sec:physicalPoint} From the calculations of Table~\ref{tab:nuclearPlane} we clearly see that the two-body potential of Eq.~(\ref{eq:twoBody}) is too simple to describe the spectrum of light nuclei. On the other hand, it captures some important aspects, such as the one-level three-nucleon spectrum, the $E_4/E_3$ ratio and the $A=5$ mass gap. As discussed in the introduction, the two-body Gaussian potential has to be supplemented with a three-body potential devised to reproduce the $^3$H energy. This corresponds, in Efimov physics, to fixing the three-body parameter or, following EFT concepts, to the promotion of the three-body interaction to LO in order to take into account the unnaturally large values of the scattering lengths. Here we use a hyper-central three-body potential of the following form \begin{equation} W(\rho) = W_0\,e^{- (r_{12}^2+r_{13}^2+r_{23}^2)/R_3^2} \,, \label{eq:threeBody} \end{equation} where $r_{ij}$ is the relative distance between particles $i$ and $j$. In this potential there are two independent parameters, the strength $W_0$ and the range $R_3$. In order to reproduce the $^{3}$H binding energy, $E_{^{3}\text{H}} = -8.482$~MeV, an infinite number of pairs $(W_0,R_3)$ can be chosen.
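The hyper-central force of Eq.~(\ref{eq:threeBody}) is simple to evaluate from Cartesian coordinates; as a minimal sketch (not from the paper), using the $(W_0,R_3)$ pair quoted below for the best fit:

```python
import numpy as np

def three_body_W(positions, W0=7.6044, R3=3.035):
    """Hyper-central Gaussian three-body force of Eq. (threeBody); MeV, positions in fm."""
    r12 = np.linalg.norm(positions[0] - positions[1])
    r13 = np.linalg.norm(positions[0] - positions[2])
    r23 = np.linalg.norm(positions[1] - positions[2])
    return W0 * np.exp(-(r12**2 + r13**2 + r23**2) / R3**2)

# Equilateral triangle of side d = 2 fm: W = W0 * exp(-3 d^2 / R3^2)
d = 2.0
pos = np.array([[0.0, 0.0, 0.0], [d, 0.0, 0.0], [0.5 * d, 0.5 * d * np.sqrt(3.0), 0.0]])
print(f"W = {three_body_W(pos):.3f} MeV")
```

With $W_0>0$ the force is repulsive everywhere, consistent with its role of reducing the three-body binding.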
However, only a very small number of such pairs (in fact only two~\cite{kievsky:2016_Few-BodySyst}) reproduce other physical inputs like the energy of the four-body system or the neutron-deuteron scattering length $a_{nd}$. \begin{table*} \caption{Calculation for $A=3,4$ at the physical point, $V_{10}=-60.575$~MeV, $V_{01} = -37.9$~MeV, and $E_2=-2.2255$~MeV, for selected three-body force parameters. In the left part, calculations without the Coulomb interaction are reported for $^3$H, $E_4$, and $E_4^*$. In the right part of the table the Coulomb interaction has been included to calculate $^3$He, $^4$He, and the excited state $^4$He$^*$. The latter disappears as a bound state when the three-body force and the Coulomb interaction are considered together. The experimental values are reported in the last row.} \label{tab:physicalPoint} \begin{tabular} {@{}c c c c c| c c c@{}} \hline\hline $W_{0}$(MeV) & $R_3$(fm) & $^3$H(MeV) & $E_4$(MeV) & $E_4^*$(MeV)& $^3$He(MeV) & $^4$He(MeV) & $^4$He$^*$(MeV) \\ \hline
0 & - & -10.2455 & -39.843 & -11.193 & -9.426 & -38.789 & -10.655 \\
11.922 & 2.5 & -8.48 & -28.670 & -8.75 & -7.722 & -27.754 & - \\
9.072 & 2.8 & -8.48 & -29.014 & -8.79 & -7.718 & -28.060 & - \\
7.8 & 3.0 & -8.48 & -29.223 & -8.80 & -7.715 & -28.258 & - \\
7.638 & 3.03 & -8.48 & -29.255 & -8.80 & -7.714 & -28.290 & - \\
7.612 & 3.035 & -8.48 & -29.260 & -8.80 & -7.714 & -28.295 & - \\
7.6044 & 3.035 & -8.482 & -29.269 & -8.80 & -7.716 & -28.305 & - \\
\cline{3-7} \multicolumn{2}{c}{Experimental Values} & -8.482 & & &-7.718 & -28.296& \\
\hline \hline \end{tabular} \end{table*} In Table~\ref{tab:physicalPoint} we show selected parameters of the three-body force used to reproduce the energy of $^3$H. In the left part of the table we report calculations without the Coulomb interaction; we observe the repulsive nature of the three-body force. Without the Coulomb interaction $^3$He is degenerate with $^3$H and the four-body state has an energy $E_4$ lower than that of $^4$He.
Moreover, the four-body system shows an unphysical excited state $E_4^*$; this is the universal Efimov aspect of nuclear physics: for each three-body state there are two attached four-body states. In the right part of Table~\ref{tab:physicalPoint} we show calculations where the Coulomb interaction has been taken into account. We observe that there are values of $R_3$ that allow us to reproduce the $^3$He binding energy, and for these values the description of $^4$He is close to the experimental values. \begin{table}[!tbp] \caption{Calculation for $A=3,4$ for the case $W_0=7.6044$~MeV and $R_3 = 3.035$~fm with a slow switching-on of the Coulomb interaction controlled by the parameter $\epsilon$. The threshold of $^3$H+p is $E_{^3\text{H}}=-8.482$~MeV, which implies that the four-body excited state $^4$He$^*$ is no longer bound for $\epsilon\approx 0.75$.} \label{tab:coulomb} \begin{tabular} {@{}c c c c@{}} \hline\hline $\epsilon$ & $^3$He(MeV) & $^4$He(MeV) & $^4$He$^*$(MeV) \\ \hline
0 & -8.482 & -29.269 & -8.804 \\
0.2 & -8.327 & -29.076 & -8.706 \\
0.4 & -8.173 & -28.882 & -8.618 \\
0.6 & -8.020 & -28.689 & -8.536 \\
0.65 & -7.982 & -28.641 & -8.520 \\
0.7 & -7.944 & -28.593 & -8.501 \\
0.75 & -7.906 & -28.545 & - \\
0.8 & -7.868 & -28.497 & - \\
1 & -7.716 & -28.305 & - \\
\hline \hline \end{tabular} \end{table} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{coulomb.pdf} \end{center} \caption{Evolution of the energies for $A=3,4$ as a function of a smooth switching-on of the Coulomb interaction via a multiplicative parameter $\epsilon$. The three-body parameters have been fixed to $W_0=7.6044$~MeV and $R_3=3.035$~fm. The full Coulomb interaction corresponds to $\epsilon=1$. The four-body excited state disappears for a critical value $\epsilon^*=0.754$, while the energies of $^3$He and $^4$He go to the experimental values to better than 0.1\%.} \label{fig:coulomb} \end{figure} The presence of the Coulomb interaction makes the four-body excited state disappear.
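A quadratic fit to the $^4$He$^*$ column of Table~\ref{tab:coulomb} makes this quantitative; the following sketch (assuming \texttt{numpy}, not part of the original calculation) locates the crossing of the $^3$H+p threshold at $-8.482$~MeV and the naive extrapolation to $\epsilon=1$:

```python
import numpy as np

# epsilon values and 4He* energies (MeV) from Table [tab:coulomb]
eps = np.array([0.0, 0.2, 0.4, 0.6, 0.65, 0.7])
E4s = np.array([-8.804, -8.706, -8.618, -8.536, -8.520, -8.501])

c2, c1, c0 = np.polyfit(eps, E4s, 2)
# crossing of E(eps) = -8.482: solve c2 x^2 + c1 x + c0 + 8.482 = 0
roots = np.roots([c2, c1, c0 + 8.482])
eps_star = min(r.real for r in roots if 0 < r.real < 1)
E_at_1 = np.polyval([c2, c1, c0], 1.0)
print(f"eps* = {eps_star:.3f}, extrapolated E(1) = {E_at_1:.2f} MeV")
```

The fit reproduces the threshold crossing at $\epsilon^*\simeq 0.754$ and an extrapolated energy of about $-8.40$~MeV at full Coulomb.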
From the table we select the best value of the three-body force, $W_0=7.6044$~MeV and $R_3=3.035$~fm, to follow the evolution of the $^3$He and $^4$He binding energies as a function of a smooth switching-on of the Coulomb interaction by means of a parameter $\epsilon$. In Table~\ref{tab:coulomb} we report our calculations as a function of $\epsilon$, and the same data are graphically represented in Fig.~\ref{fig:coulomb}. For $\epsilon=0$, that means no Coulomb interaction, there is only one three-body bound state and the two universal attached four-body states. As the Coulomb interaction grows to its full value, $\epsilon=1$, the degeneracy between $^3$H and $^3$He is removed and also the ground- and excited-state energies of $^4$He start to change; for $\epsilon\approx 0.75$ the $^4\text{He}$ excited state crosses the $^3$H+p threshold; a polynomial fit locates the threshold crossing at $\epsilon^*=0.754$. One can probably expect that the fate of this excited state is to become the known $0^+$ resonance of $^4$He; in order to see this, one should follow the state as it enters the continuum and mixes with it. Some preliminary studies do not support this picture, but there are some indications that the state becomes a virtual state. Just as an exercise, we can make a simple extrapolation; the result of such an exercise is reported in Fig.~\ref{fig:extrapolated}, where the extrapolated energy is $-8.40$~MeV, quite far from the experimental energy of the resonance ($-8.0860$~MeV). \begin{figure}[!tbp] \begin{center} \includegraphics[width=\linewidth]{extrapolated.pdf} \end{center} \caption{Energy of the excited four-body state $^4$He$^*$ as a function of the switching-on of the Coulomb interaction. The grey zone indicates the continuum, which the state enters at $\epsilon^*=0.754$.
We extrapolate the state up to full Coulomb, $\epsilon=1$, but this does not mean that the extrapolated energy corresponds to a resonance, because we are not taking into account the mixing with the continuum. The experimental position of the $0^+$ resonance of $^4$He is $-8.086$~MeV.} \label{fig:extrapolated} \end{figure} To summarise this section, a simple Gaussian potential acting mainly on $L=0$ waves, supplemented with a three-body force and the Coulomb interaction, describes quite accurately the spectrum of light nuclei up to four nucleons. The emerging DSI, controlled by the values of the scattering lengths and the three-nucleon binding energy, strongly constrains the spectrum inside the universal window. Here we would like to see to what extent the energies of $^6$He and $^6$Li are correlated with those parameters. Though the thresholds are well determined, our observation is that the $L=0$ force, even without considering the Coulomb interaction, is not able to bind the six-fermion system. \section{R\^ole of $P$-waves}\label{sec:pwaves} From the previous discussion we have seen that the simple version of the nuclear interaction dictated by Efimov physics is not enough to describe the six-body sector of the light-nuclei spectrum. In this section we investigate the possible r\^ole of the two terms of the potential of Eq.~(\ref{eq:twoBody}), $V_{00}$ and $V_{11}$, which in the previous sections have been set to zero. These terms contribute to the description of the $P$-waves through the antisymmetry condition $(-1)^{(L+S+T)}=-1$.
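The antisymmetry selection rule can be enumerated in a couple of lines; this small sketch confirms which $(S,T)$ channels accompany the $S$ and $P$ waves:

```python
# A two-nucleon channel (L, S, T) is allowed only if (-1)**(L + S + T) == -1,
# so S waves (L=0) live in (S,T) = (0,1), (1,0) and P waves (L=1) in (0,0), (1,1).
allowed = {L: [(S, T) for S in (0, 1) for T in (0, 1) if (-1) ** (L + S + T) == -1]
           for L in (0, 1)}
print(allowed)   # {0: [(0, 1), (1, 0)], 1: [(0, 0), (1, 1)]}
```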
At the two-body level the low-energy $P$-wave phase shifts can be described by an effective range expansion which, for single channels, is of the form \begin{equation} S_k=k^3\cot{}^{2S+1}P_J = \frac{-1}{{}^{2S+1}a_J}+ \frac{1}{2}\; {}^{2S+1}r_J \,k^2\,, \label{eq:skk} \end{equation} where $^{2S+1}P_J$ is the $P$-wave phase shift in spin channel $S$ coupled to total angular momentum $J$, ${}^{2S+1}a_J$ is the scattering volume and ${}^{2S+1}r_J$ is the $P$-wave effective range. In Fig.~\ref{fig:pskk} the effective range function $S_k$ is shown for the uncoupled phases calculated using the AV14 nucleon-nucleon interaction~\cite{wiringa:1984_Phys.Rev.C} (circles), together with a fit to those results (solid lines). The linear behavior is well verified in this energy region and allows one to extract the scattering parameters given in Table~\ref{tab:pskk}. \begin{figure}[h] \begin{center} \includegraphics[width=8cm]{skkp.pdf} \end{center} \caption{$P$-wave phase shifts calculated using the AV14 nucleon-nucleon interaction. The points are the calculated values of the effective range function, while the solid lines are fits to those data from which the scattering parameters are extracted, see Table~\ref{tab:pskk}.} \label{fig:pskk} \end{figure} \begin{table}[h] \caption{Scattering parameters of the effective range expansion of Eq.~(\ref{eq:skk}) for the $P$-wave phase shifts.} \label{tab:pskk} \begin{tabular} {@{}l c | l c @{}} \hline\hline ${}^{2S+1}a_J$ & [fm$^{-3}$] & ${}^{2S+1}r_J$ & [fm$^{-1}$] \\ \hline
${}^{1}a_1$ & 1.437 & ${}^{1}r_1$ & -6.308 \\
${}^{3}a_1$ & 1.231 & ${}^{3}r_1$ & -7.786 \\
${}^{3}a_0$ & -1.457 & ${}^{3}r_0$ & 3.328 \\
\hline \hline \end{tabular} \end{table} From the above analysis we can observe that the interaction in the channel $S,T=0,0$ is repulsive whereas the interaction in the channel $S,T=1,1$ is slightly attractive in the $J=0$ wave.
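This sign statement can be read off directly from Eq.~(\ref{eq:skk}) with the parameters of Table~\ref{tab:pskk}; a minimal sketch (not from the paper) evaluating the low-energy phase shifts:

```python
import math

def p_wave_phase(k, a, r):
    """Phase shift (rad) from k^3 cot(delta) = -1/a + r k^2 / 2; k in fm^-1,
    a in fm^3 (scattering volume), r in fm^-1 (P-wave effective range)."""
    Sk = -1.0 / a + 0.5 * r * k * k
    return math.atan(k**3 / Sk)

k = 0.3  # fm^-1
delta_1P1 = p_wave_phase(k, a=1.437, r=-6.308)    # S,T = 0,0 channel
delta_3P0 = p_wave_phase(k, a=-1.457, r=3.328)    # S,T = 1,1 channel
print(f"1P1: {math.degrees(delta_1P1):+.2f} deg, 3P0: {math.degrees(delta_3P0):+.2f} deg")
```

The negative $^1P_1$ and positive $^3P_0$ phase shifts reflect, respectively, the repulsive $S,T=0,0$ and attractive $S,T=1,1$ interactions.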
In the first case, we reproduce the scattering data with the interaction \begin{equation} V_{00} = +1.625~\text{MeV}\quad r_{00} = 4.03~\text{fm}\,; \label{eq:v00} \end{equation} with this choice, even the ${}^1P_1$ phases are well described. The ${}^3P_0$ phases are well described with the interaction \begin{equation} V_{11} = -3.857~\text{MeV}\quad r_{11} = 3.35~\text{fm}\,. \label{eq:v11} \end{equation} However, the interaction defined in Eq.~(\ref{eq:twoBody}) cannot distinguish between the different two-body $J$-states. Accordingly, for the $S,T=1,1$ channel we use a Gaussian interaction with range $r_{11}=3.35$~fm and we allow variations of the strength around the value $-3.857$~MeV. We go one step further and optimize the interactions $V_{10}$ and $V_{01}$ to describe the $L=0$ singlet and triplet scattering lengths and the corresponding effective ranges. The choice for the potentials is the following \begin{equation} \begin{gathered} V_{01} = -30.545885~\text{MeV}\quad r_{01} = 1.8310~\text{fm} \\ V_{10} = -66.5824776~\text{MeV}\quad r_{10} = 1.5579~\text{fm}\,. \end{gathered} \label{eq:newSwaves} \end{equation} The potential of Eq.~(\ref{eq:twoBody}) is now defined in all four $S,T$ components and, as in the previous calculations, we introduce a three-body force to fix the value of the $^3$H binding energy. We use two different ranges $R_3$ to explore how the six-body energies depend on it. \begin{table*}[!tbp] \caption{For each choice of the $V_{11}$ potential the three-body force has been tuned to reproduce the energy of $^3$H.
The range of the potential has been fixed to $r_{11}= 3.35$~fm.} \label{tab:pwaves} \begin{tabular} {@{}c c c c c c c @{}} \hline\hline $V_{11}$ (MeV) & $W_0$ (MeV) & $R_3$ (fm) & $^3$He (MeV) & $^4$He (MeV) & $^6$He (MeV) & $^6$Li (MeV)\\ \hline
-3.857 & 7.8375 & 1.4 & -7.746 & -28.32 & -30.93 & -34.86 \\
-3.0 & 7.8104 & 1.4 & -7.746 & -28.34 & -29.90 & -33.67 \\
-3.0 & 13.461 & 1.2 & -7.749 & -28.20 & -30.43 & -34.35 \\
-2.5 & 7.7940 & 1.4 & -7.745 & -28.35 & -29.25 & -33.07 \\
-2.5 & 13.433 & 1.2 & -7.749 & -28.21 & -29.81 & -33.63 \\
-2.0 & 13.405 & 1.2 & -7.749 & -28.22 & -29.16 & -32.93 \\
-1.78 & 13.392 & 1.2 & -7.749 & -28.23 & -28.87 & -32.64 \\
\cline{4-7} \multicolumn{3}{c}{Experimental Values}& -7.718 & -28.296& -29.268 & -31.9938\\
\hline \hline \end{tabular} \end{table*} In Table~\ref{tab:pwaves} we report our calculations for different choices of the strength $V_{11}$ and the corresponding three-body strength $W_0$. In all cases the binding energies of $^3$He and $^4$He are well described, considering that the only charge-symmetry-breaking component of the force taken into account is the Coulomb interaction. It is interesting to notice that the inclusion of the very weak attraction in the channel $S,T=1,1$ is enough to bind $^6$He and $^6$Li, though their bindings are a little bit overpredicted (see the first row of the table). By decreasing the $V_{11}$ strength it is possible to describe better the $^6$He binding energy, for example using the strength $-2.5$~MeV, but $^6$Li remains overbound by around $1$~MeV. This is a consequence of the lack of flexibility of the force defined in Eq.~(\ref{eq:twoBody}), which cannot distinguish between the different states in the two-body $P$-channels. This can be achieved by a spin-orbit term, which can remove the degeneracy between the three $^3P_J$ phase shifts. In fact, the present interaction predicts a mass difference between $^6$Li and $^6$He that is more or less constant and about 1~MeV greater than the experimental value.
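The near-linear dependence of the $A=6$ energies on $V_{11}$ can be quantified from the $R_3=1.2$~fm rows of Table~\ref{tab:pwaves}; the following sketch (assuming \texttt{numpy}, not part of the original calculation) performs the linear fits:

```python
import numpy as np

# R3 = 1.2 fm rows of Table [tab:pwaves]
V11 = np.array([-3.0, -2.5, -2.0, -1.78])                        # MeV
energies = {"6He": np.array([-30.43, -29.81, -29.16, -28.87]),   # MeV
            "6Li": np.array([-34.35, -33.63, -32.93, -32.64])}   # MeV

slopes = {}
for label, E in energies.items():
    slope, intercept = np.polyfit(V11, E, 1)
    slopes[label] = slope
    resid = np.max(np.abs(E - (slope * V11 + intercept)))
    print(f"{label}: slope = {slope:.2f}, max |residual| = {resid:.3f} MeV")
```

The residuals stay at the $10$~keV level, confirming the linear behavior, with slopes of roughly $1.3$--$1.4$~MeV of binding per MeV of $V_{11}$ strength.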
In Fig.~\ref{fig:pwavesN6} the results for the six-body sector are represented; on top of each data point we write the range of the three-body force used. We observe a linear dependence of the energies on the strength of the $V_{11}$ potential, while the different three-body ranges just shift the linear dependence. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{pwavesN6.pdf} \end{center} \caption{Energy of $^6$He and $^6$Li as a function of the potential strength $V_{11}$. The number on top of each point represents the value of the three-body range $R_3$ that has been used. For the sake of comparison, we also draw the experimental values.} \label{fig:pwavesN6} \end{figure} \section{Conclusions}\label{sec:conclusions} The fact that the two $s$-wave scattering lengths, $a_0$ and $a_1$, are large with respect to the natural size of the NN interaction locates nuclear physics inside the universal window. In this context it is of interest to analyze the spectrum of spin-$1/2$, isospin-$1/2$ fermions controlled by these two parameters. This very simplified picture has been studied in the first part of the present work, up to six fermions, using a Gaussian potential model with variable strength. Assigning values to the Gaussian strengths in the spin-isospin channels $S,T=0,1$ and $1,0$, the two scattering lengths, $a_0$ and $a_1$, were allowed to vary from infinity to their physical values following a path, called the nuclear cut, in which the ratio $a_0/a_1=-4.3066$ was kept constant. Considering only two-body Gaussians and disregarding the Coulomb interaction, the main results of this analysis are shown in Fig.~\ref{fig:efimovPlot} and Fig.~\ref{fig:sixBodyEfimov}, where the main characteristics of the six-fermion spectrum can be seen. At unitarity the $A=5,6$ nuclei are not bound with respect to the $A=4$ threshold. As the system moves from unitarity toward the physical point, the infinite tower of three-body states disappears, with only one state surviving.
At the same time the six-body system becomes bound, first the state having the $^6$Li quantum numbers and then the state having the $^6$He quantum numbers. Moreover, all along the path the excited state of $^4$He is bound with respect to the three-nucleon threshold. Though the values of the energies are not well reproduced using a two-body Gaussian interaction, the spectrum at the physical point is formed by one two-nucleon state, one three-nucleon state, two four-nucleon states and two six-nucleon states. Two ingredients are missing in this analysis. The first one is trivial and consists in the inclusion of the Coulomb interaction. The second ingredient is dictated by EFT concepts and consists in the consideration of a three-body force. Accordingly, in the second part of the study we concentrate on the physical point, considering those terms in the interaction. The main results are given in Table~\ref{tab:physicalPoint}, where selected parametrizations of the three-body force, tuned to describe the triton binding energy, are shown. It should be noticed that considering the Coulomb interaction without including the three-body force or, vice versa, considering the three-body force without including the Coulomb interaction produces a four-nucleon spectrum with two bound states. The three- and four-nucleon spectra go to the correct place after including both interactions. A detailed study of how the $^4$He$^*$ excited state crosses the threshold to the $^3$H-$p$ continuum is given in Fig.~\ref{fig:extrapolated}. Preliminary studies indicate that, with the simple interaction used here, the $^4$He$^*$ excited state becomes a virtual state. Furthermore, when both the Coulomb interaction and the three-body force are taken into account, the two six-fermion states become unbound.
The repulsive character of the three-body force, needed to fix the triton binding energy, produces a delicate cancellation between the different energy terms, promoting both $^6$Li and $^6$He above their respective thresholds. In order to see how these two nuclei emerge from their thresholds, in the final part of this study we extend the Gaussian potential model to include interactions in the spin-isospin channels $S,T=0,0$ and $1,1$. The strengths and ranges of these terms have been fixed to reproduce the NN $P$-wave effective range expansion, as given for example by the AV14 interaction. Our observation was that a very weak attractive force in the $S,T=1,1$ channel is sufficient to bind $^6$Li and $^6$He, although their mass difference is overpredicted by 1 MeV. The present analysis supports the picture of a universal window in which the light nuclear systems are located. In this respect the three control parameters, the two scattering lengths and the triton binding energy, fix the spectrum of $A\le4$ nuclei, explain the number of levels and the $A=5$ mass gap, and locate the $A=6$ thresholds. The very weak binding of the $A=6$ nuclei below the $^4$He and $^4$He$+d$ thresholds is due to a weakly attractive $P$-wave interaction. A more quantitative description of these weakly bound states necessitates the consideration of a more complex set of operators in the interaction, such as a spin-orbit force. For a similar analysis in the context of chiral perturbation theory we refer to the recent work~\cite{lu:2018_arXiv}.
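As a purely illustrative sketch of the kind of tuning described above (not the code used in this work; the units, step sizes and strengths below are all assumptions chosen for demonstration), the zero-energy $s$-wave scattering length of a single two-body Gaussian well can be obtained by outward integration of the radial equation; tuning a strength along the nuclear cut then amounts to root-finding on such a function.

```python
import math

def scattering_length(v0, r3=1.0, rmax=20.0, n=20000):
    """Zero-energy s-wave scattering length of V(r) = v0*exp(-(r/r3)**2),
    in (assumed) units where hbar^2/(2*mu) = 1.  Integrates the radial
    equation u''(r) = V(r)*u(r) outward with RK4 from u(0) = 0; outside
    the well u(r) ~ C*(r - a), so a = r - u/u' at r = rmax."""
    def rhs(r, u, up):
        return up, v0 * math.exp(-(r / r3) ** 2) * u

    h = rmax / n
    r, u, up = 0.0, 0.0, 1.0
    for _ in range(n):
        k1u, k1p = rhs(r, u, up)
        k2u, k2p = rhs(r + h / 2, u + h / 2 * k1u, up + h / 2 * k1p)
        k3u, k3p = rhs(r + h / 2, u + h / 2 * k2u, up + h / 2 * k2p)
        k4u, k4p = rhs(r + h, u + h * k3u, up + h * k3p)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        up += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        r += h
    return r - u / up

# A weak attractive well (no bound state) has a < 0; past the binding
# threshold the scattering length becomes positive, diverging at the
# threshold itself, which is how the unitary limit is approached.
print(scattering_length(-0.5), scattering_length(-4.0))
```

In such a scheme, requiring a prescribed value of $a_0$ or $a_1$ (or the infinite value of the unitary limit) fixes the strength for a given range, which is the kind of one-parameter tuning performed along the nuclear cut.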
\section{Introduction and motivation}\label{sec1} \indent The many-worlds interpretation (MWI) proposed by H. Everett in 1957~\cite{Everett1957,Barrett2012,DeWitt1973} is an attempt to provide a complete unitary ontology for quantum mechanics. However, the MWI has some problematic features concerning the role of probability which plague attempts to recover the standard quantum predictions (i.e., the famous Born rule). During the last two decades new proposals have been discussed in order to make sense of quantum probability in the MWI by using subjectivist and personalist concepts such as self-locating uncertainty, degrees of belief, the Laplacian principle of indifference, and `envariance'. However, none of these `Bayesian' and `decision-theoretic' scenarios has successfully and objectively recovered the Born rule, or been able to unambiguously interpret the concept of probability. Nevertheless, all these interesting ideas provide clues which strongly motivate the present work.\\ \indent As we show in this work, everything is tied to the meaning given to quantum measurements in the MWI, i.e., as experienced and memorized by the observers. One of the great ideas of Everett (besides the central idea of taking unitarity seriously for the evolution of the whole Universe) was to treat the observer quantum mechanically, as a memory device or automaton, in order to develop a self-consistent theory of measurement within the MWI. As Everett indeed wrote in 1957: \begin{quote} \textit{As models for observers we can, if we wish, consider automatically functioning machines, possessing sensory apparatus and coupled to recording devices capable of registering past sensory data and machine configurations.}~\cite{Everett1957} \end{quote} This strategy in turn had a huge impact on the interpretation of probabilities by observers located in branches of the evolving wave-function. In this work we are going to exploit this idea further, preserving unitarity in the description of the observers.
Here, we will emphasize the role of some old ideas about minds and brains elaborated in the late 1980's and early 1990's in connection with the so-called many-minds interpretation (MMI) of Albert and Loewer \cite{AlbertLoewer1988,Albert1994}, which was mainly stochastic. Our work is also motivated by past attempts made by Lockwood~\cite{Lockwood1989,Lockwood1996a} and Donald~\cite{Donald1,Donald2,Donald3} to preserve the unitarity of the quantum evolution in a sense similar to the original MWI. Here, based on a new unitary formulation of the MMI, we develop a completely self-consistent toy model for a quantum observer that recovers Born's probabilities (in that sense the MMI is indeed a particular application of the MWI solving the probability problems). \\ \indent The paper is structured as follows: In Sections \ref{sec2a} and \ref{sec2b} we will review the probability conundrum in the MWI by summarizing the main past attempts at solving the problem. We will start with the original Everett interpretation in Section \ref{sec2a} and also present the MMI of Albert and Loewer, which plays a key role in our work, in Section \ref{sec2b}. In Section \ref{sec2d} we will discuss the quantitative problem of justifying Born's rule in the MWI and focus our analysis on the work made by Deutsch \cite{Deutsch1999}, Wallace \cite{Wallace2012} and Zurek \cite{Zurek2005} in recent years using Bayesian deductions and `envariance'. We stress that our work doesn't directly rely on the Wallace, Deutsch and Zurek deductions, but a clear description of their analysis helps the discussion and provides clues for introducing our own reasoning. Finally, in Section \ref{sec3} we will propose a unitary toy model for minds in the context of the MWI. In turn this unitary MMI scenario will be used for making sense of quantum probabilities.
The proposal strongly differs from the previous attempts by introducing a classical-like molecular chaos hypothesis for describing the distribution of initial conditions for qubits interacting with the minds. In other words, we will show that taking into account the problem of the observer's mind (i.e., described in the context of the all-unitary MWI, together with some ingredients of randomness coming from quantum entanglement with the local environment) helps us to decipher the still ambiguous concept of `self-locating uncertainty' probability in the MWI~\cite{Sebens2016,McQueen}, i.e., assuming our idea of a unitary version of the MMI. Therefore, we obtain a physical and dynamical model for minds justifying why our subjective notion of probability (i.e., credence) must equal the objective one associated with the gas of qubits interacting with the minds. Ultimately, we justify and recover the Born rule and propose a physical interpretation of the probabilities present in this model. \section{Everett and the meta-theorem: the incoherence problem} \label{sec2a} \indent The MWI has been the subject of intense and recurrent debates concerning the meaning and role to be given to quantum probabilities in this theory. Indeed, the MWI is a purely deterministic theory admitting strict unitary Schr\"odinger evolution as the only rule.
In this framework the usual Max Born probability law \begin{eqnarray}\mathcal{P}^{(\textrm{Born})}_\alpha=\lVert\Psi_\alpha\rVert^2=\lVert\langle\alpha|\Psi\rangle\rVert^2 \end{eqnarray} for observing an outcome $\alpha$ during a quantum measurement seems to conflict with pure unitarity.\\ \indent Everett~\cite{Everett1957,Barrett2012,DeWitt1973} introduced an additive measure $\mathcal{M}(\lVert\Psi_\alpha\rVert^2)\equiv\lVert\Psi_\alpha\rVert^2$ and subsequently identified it with a probability for the outcome $\alpha$ in the Hilbert space~\footnote{The deduction of Everett must be compared with the famous Gleason theorem~\cite{Gleason1957,Lubkin1979} which is not valid for a Hilbert space of dimension 2.}. Moreover, identifying the Everett measure $\mathcal{M}(\lVert\Psi_\alpha\rVert^2)$ with a probability seems \textit{a priori} paradoxical since the MWI doesn't contain chancy events or randomness which could allow us to speak about the probability of being this or that. In the MWI all events occur in parallel and the only certainty is that, after an experiment, we will end up with probability $\mathcal{P}=1$ in a superposed quantum state including many branches. In order to solve this contradiction and establish Born's rule Everett based his reasoning on the law of large numbers and the notion of `typicality' advocated by Boltzmann for statistical mechanics \cite{Goldstein2012} (this notion is also assumed in `Bohmian' mechanics \cite{Durr1992}). Indeed, as explained in Everett's PhD thesis \cite{Barrett2012} (and as more rigorously demonstrated by Hartle \cite{Hartle1968}, DeWitt and Graham~\cite{DeWitt1973,DeWitt1971} and several others~\cite{Farhi1989,Aharonov2002}), a long-run experiment reproducing a multinomial sequence leads in the infinite limit, i.e., by a direct application of the law of large numbers, to the empirical statistical Born's rule.
In other words, consider \begin{eqnarray} |\Psi\rangle=\sum_\alpha\Psi_\alpha |\alpha\rangle \end{eqnarray} a quantum state with outcomes labeled by $\alpha$ which is analyzed in a multi-gate Stern-Gerlach experiment. By taking a long-run sequence of the same experiment (i.e., by using a tensor product state like $|\Psi_N\rangle:=|\Psi^{(1)}\rangle\otimes...\otimes|\Psi^{(N)}\rangle$) Everett was able to obtain the relation \begin{eqnarray}\mathcal{M}(\lVert\Psi_\alpha\rVert^2)=\lim_{N \to +\infty}\frac{N_\alpha}{N}:=\mathcal{P}_\alpha\end{eqnarray} i.e., he was able to identify his measure with the relative frequency of occurrence $\frac{N_\alpha}{N}$, where $N_\alpha$ is the number of times the outcome $\alpha$ occurred in the long-run sequence with $N\rightarrow +\infty$ repetitions. Everett here used a standard frequentist definition of probability relying on an infinite ensemble, but he also motivated his reasoning with Bayesian and epistemic concepts. In \cite{Barrett2012} he wrote: \begin{quote} \textit{We are then led to the novel situation in which the formal theory is objectively continuous and causal, while subjectively discontinuous and probabilistic.}~\cite{Barrett2012}, p.~9. \end{quote} To understand his reasoning suppose that $h:=[\alpha_1$,... $\alpha_N]$ is a sequence of outcomes, i.e., a `history'\footnote{ The notion of history used here is reminiscent of Gell-Mann and Hartle's work in the context of the consistent/decoherent histories interpretation \cite{GellMann}. Here, we use history for either a chronological series or for describing a large ensemble of $N$ identical subsystems at a given time. The results would be the same since the subsystems are factorized and non-interacting.}. For instance, consider an observer (named Alex) participating in the unitary evolution of a quantum measurement with $N$ repetitions.
We have: \begin{eqnarray} |\Psi_N\rangle\otimes|\textrm{Alex}_0,E_0\rangle\rightarrow \sum_h|\Psi(h)\rangle \otimes|\textrm{Alex}_h,E_h\rangle \label{alex} \end{eqnarray} where Alex$_h$ is Alex$_0$'s successor having a memory of the particular $h$-history in the whole sum, and where $|\Psi_N\rangle=\sum_h|\Psi(h)\rangle $ is the sum of the quantum histories $h$. By subjective probability Everett actually meant something which is directly measurable by the observer in the branch $h$ where (s)he is located, i.e., completely ignoring the existence of the other decohered branches. In other words, for Everett the natural subjective probability is the limit frequency $\frac{N_\alpha(h)}{N}$ where $N_\alpha(h)$ is the number of times the event $\alpha$ was repeated in the history $h$. In the $N\rightarrow +\infty$ limit Everett showed that the total measure $\delta\mathcal{M}$ associated with histories $h$ not fulfilling Born's rule is `overwhelmingly' smaller than the measure associated with the set of histories satisfying Born's rule. In the limit $N\rightarrow +\infty$ the fraction goes to zero \footnote{If we use the sequence $|\Psi_N\rangle=|\Psi^{(1)}\rangle\otimes...\otimes|\Psi^{(N)}\rangle=\otimes_{i=1}^{N}|\Psi^{(i)}\rangle$ with $|\Psi^{(i)}\rangle=\sum_\alpha\Psi_\alpha |\alpha^{(i)}\rangle$ we can define a frequency operator as \begin{eqnarray}\hat{Q}_\alpha=\sum_{i=1}^{N}\frac{\hat{\Pi}^{(i)}_\alpha}{N}\label{hartle}\end{eqnarray} with the projectors $\hat{\Pi}^{(i)}_\alpha=|\alpha^{(i)}\rangle\langle\alpha^{(i)}|$ associated with the eigenvalue $\alpha$ for the $i^{th}$ subsystem.
We can expand the total state $|\Psi_N\rangle$ as a sum over the different histories $h=[\alpha_1,...,\alpha_N]$, i.e., $|\Psi_N\rangle=\sum_h|\Psi(h)\rangle$ with the history quantum state: \begin{eqnarray} |\Psi(h)\rangle=\bigotimes_{i=1}^{N}\hat{\Pi}^{(i)}_{\alpha_i}|\Psi_N\rangle=\Pi_{\alpha}\Psi_\alpha^{N_\alpha(h)}\bigotimes_{i=1}^{N}|\alpha_i^{(i)}\rangle \label{truc} \end{eqnarray} where $N_\alpha(h)$ is the number of times the outcome $\alpha$ occurs for the specific history $h$. Applying $\hat{Q}_\alpha$ on $|\Psi(h)\rangle$ leads directly to \begin{eqnarray} \hat{Q}_\alpha|\Psi(h)\rangle=\sum_{i=1}^{N}\frac{\delta_{\alpha,\alpha_i}}{N}|\Psi(h)\rangle=\frac{N_\alpha(h)}{N}|\Psi(h)\rangle \end{eqnarray} where $\sum_{i=1}^{N}\frac{\delta_{\alpha,\alpha_i}}{N}=\frac{N_\alpha(h)}{N}$ appears as an eigenvalue. From the point of view of the observer memory having access to only one of the various histories $h$, the number $N_\alpha(h)$ is all that is empirically and `subjectively' available. However, for comparing the various histories and objectively defining a criterion for the `likelihood' we still need the measure $\lVert|\Psi(h)\rangle\rVert^2$. For example the average over the whole ensemble leads to $\langle\Psi_N|\hat{Q}_\alpha|\Psi_N\rangle=\sum_h\langle\Psi(h)|\hat{Q}_\alpha|\Psi(h)\rangle=\sum_h\frac{N_\alpha(h)}{N}\lVert|\Psi(h)\rangle\rVert^2 =\lVert\Psi_\alpha\rVert^2$ which is the standard quantum result. This could be made even more precise by reintroducing the notion of typicality in the $N\rightarrow +\infty$ limit. Then a typical history $\bar{h}$ seen by a typical observer will confirm the record $\frac{N_\alpha(\bar{h})}{N}\simeq\lVert\Psi_\alpha\rVert^2 $ with an error going like $\Delta N_\alpha/N_\alpha(\bar{h})\simeq \frac{1}{\sqrt{N}}\sqrt{\frac{1-\lVert\Psi_\alpha\rVert^2}{\lVert\Psi_\alpha\rVert^2}}\rightarrow 0$.\label{foot1}}.
Moreover, a typical history $\bar{h}$ seen by a typical observer entangled with the system confirms the record $\frac{N_\alpha(\bar{h})}{N}\simeq\lVert\Psi_\alpha\rVert^2 $ with an error going like $\Delta N_\alpha/N_\alpha(\bar{h})\simeq \frac{1}{\sqrt{N}}\sqrt{\frac{1-\lVert\Psi_\alpha\rVert^2}{\lVert\Psi_\alpha\rVert^2}}\rightarrow 0$. Therefore, the Born rule is `typical' in the Boltzmann sense since the overwhelming majority of the history space (i.e., weighted with Everett's measure $\mathcal{M}$) is filled by terms satisfying the probability rule of quantum mechanics. This great result has been called the meta-theorem by DeWitt \cite{DeWitt1971} but, as previously explained, it was already discussed by Everett (this notion of `almost all except for a set of measure nearly equal to zero' was considered by Everett as the core of his thesis). The theorem relies critically on the definition of an actually infinite sequence which is never encountered in the lab, and therefore the deduction was often criticized for being circular~\cite{Ballentine1973,Kent1990,Squires1990}. However, this problem is not so fundamental and is actually generic to the application of the law of large numbers in statistical mechanics through the introduction of collectives or Gibbs ensembles (recently some attempts have been made at making sense of such an infinite sequence $N\rightarrow +\infty$ in the MWI, i.e., by linking the problem with the notion of Multiverses used in cosmology~\cite{Aguire2011}).\\ \indent Moreover, the real issue in Everett's reasoning concerns the status and unicity of the Everett measure $\mathcal{M}(\lVert\Psi_\alpha\rVert^2)$ for this quantum branching. To paraphrase Wallace~\cite{Wallacevideo}, all that is proven by the Everett `law of large numbers' is that the relative frequency tends to the weight with high weight...
Therefore justifying the choice $\mathcal{M}(\lVert\Psi_\alpha\rVert^2)\equiv\lVert\Psi_\alpha\rVert^2$ is central in order to avoid circularity. Yet, it is known in the context of the pilot-wave interpretation (PWI) \cite{deBroglie,BohmHiley}, i.e., in de Broglie-Bohm (aka Bohmian) mechanics, that Everett's choice for the measure is far from being univocal (this point was already stressed by Pauli \cite{Pauli} as an objection to Bohm's theory and it became the core of Valentini's recent studies~\cite{Valentini} about quantum non-equilibrium in the PWI). Moreover, changing the measure also changes the notion of typicality and the convergence to a different probability rule \footnote{In Bohmian mechanics one shows~\cite{Valentini} that the density of probability $\rho(X,t)$ in the configuration space reads generally $f(X,t)\lvert\Psi(X,t)\rvert^2$ where $f(X,t)$ satisfies the relation $\frac{d}{dt}f(X_\Psi(t),t)=0$ along the Bohmian paths $X_\Psi(t)$. This result keeps its importance in the MWI since it shows that any derivation of the unicity of Born's rule is necessarily circular.}. This problem also occurs in the context of the MWI and the Everett weight is clearly not the unique possibility for defining a probability measure. The most natural weight in the context of the MWI would perhaps be simple branch counting, but it has been known since Graham \cite{DeWitt1973} that this measure generally conflicts with Born's rule \footnote{In particular a simple branch counting is not time invariant \cite{Wallace2012,Bricmont2016}. Additionally it requires a preferred basis which must be chosen, perhaps in relation to decoherence or the observer memory states.}. Importantly, in the PWI the ontology of the theory concerns (at least in the non-relativistic regime) the particle positions $X_t$ in the configuration space at time $t$. The distribution of particles defines an additional ontological structure absent in the MWI~\cite{DrezetIJQF}.
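The eigenvalue bookkeeping of the frequency operator $\hat{Q}_\alpha$ recalled in the footnote above can be checked numerically for small $N$. The following sketch is only an illustration (the amplitudes, the value $N=6$ and all variable names are our assumptions): it builds $\hat{Q}_0$ for $N$ identical qubits and verifies the ensemble average $\langle\Psi_N|\hat{Q}_0|\Psi_N\rangle=\lVert\Psi_0\rVert^2$ quoted there.

```python
import numpy as np

# Assumed single-qubit amplitudes: |psi> = a|0> + b|1> with |a|^2 = 1/3.
a, b = np.sqrt(1 / 3), np.sqrt(2 / 3)
psi1 = np.array([a, b])

N = 6                        # repetitions, kept small so the 2^N vector is exact
psiN = psi1
for _ in range(N - 1):       # |Psi_N> = |psi> tensored N times
    psiN = np.kron(psiN, psi1)

# Frequency operator Q_0 = sum_i Pi_0^(i)/N, Pi_0 the projector on |0> at site i.
pi0 = np.diag([1.0, 0.0])
eye = np.eye(2)
Q0 = np.zeros((2 ** N, 2 ** N))
for i in range(N):
    term = np.array([[1.0]])
    for j in range(N):
        term = np.kron(term, pi0 if i == j else eye)
    Q0 += term / N

# The ensemble average reproduces the Born weight |a|^2 = 1/3 for any N.
expval = psiN @ Q0 @ psiN
print(expval)
```

Each basis state of the product space is an eigenvector of `Q0` with eigenvalue $N_0(h)/N$, so the same matrix also exhibits the history-by-history frequencies discussed in the footnote.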
\\ \indent Moreover, in recent years the persistent difficulty in defining what probability \emph{actually means} in the MWI has been called the \textit{incoherence problem} and it still plagues any serious discussion about probability in this theory. For instance Albert wrote: \begin{quote} \textit{The questions to which this program is addressed are questions of what we would do if we believed that the fission hypothesis were correct. But the question at issue here is precisely whether to believe that the fission hypothesis is correct!}~\cite{Albert2015} \end{quote} From this perspective the Everett measure is at best interpreted as an intensity of the ontological state: `a measure of existence' as it is often called by Vaidman~\cite{Vaidman1998}, and the physical interpretation is still contentious more than 60 years after Everett's work. Vaidman \cite{Vaidman1998,Vaidman2012,Vaidman2014,Vaidman2020} and Tappenden~\cite{Tappenden2010,Tappenden2019}, for instance, propose to introduce Born's rule as an added postulate (under the name \emph{probability postulate} or Born-Vaidman rule~\cite{Tappenden2010,Vaidman2020}) for assigning a degree of \emph{subjective location uncertainty} to the observer in her/his history $h$ (Greaves \cite{Greaves2004} also speaks about a caring measure but the meaning is actually a bit different). However, the exact physical and empirical meaning of the word probability used in this interpretation has been strongly criticized by Albert~\cite{Albert2015}, Kent~\cite{Kent1990} and Maudlin~\cite{Maudlin2019} (see also the discussions in \cite{bookMWI})\footnote{For similar reasons (and many others that we will not review) J.S. Bell \cite{Bell2004}, p.~192 dubbed the MWI a `romantic counterpart of the pilot wave picture' since, despite all its glamorous aspects at first sight, it apparently cannot be developed into a sharp theoretical framework avoiding internal physical contradictions.}.
As emphasized by Albert~\cite{Albert2015} this notion of subjective self-locating uncertainty contrasts and conflicts with the absence of objective uncertainty in the MWI (indeed everything is unitary so there is nothing fundamentally uncertain). For Albert, it is only by confusing these two notions of subjective (epistemic) and objective (statistical) probability that one could hope to see an apparent solution where probably there is none. Indeed, the subjective self-locating uncertainty is an internal `pattern'~\cite{Wallace2012} of the observer history $h$. There is thus \emph{a priori} no reason for introducing an objective `caring measure' \cite{Greaves2004} or a `measure of existence' \cite{Vaidman1998,Vaidman2014} in order to weight this subjective notion of uncertainty. However, we stress that the Born-Vaidman axiom can be justified a posteriori by using locality and symmetry for branches with equal weights~\cite{Vaidman2012,Vaidman2020} (i.e., in order to agree with empirical evidence). Furthermore, it provides an objective structure to the branches of the wave-function and at the same time (i.e., by using the `principal principle' of Lewis~\footnote{The general idea behind a subjectivist approach of probability is to define a degree of belief or credence $\mathcal{C}_\alpha$ for the occurrence of the outcome $\alpha$. Moreover, following the philosopher D.~Lewis we might use the so-called `\emph{principal principle}' for equating this subjective likelihood with an objective weight also playing the role of a probability $\mathcal{P}_\alpha$. More precisely, the credence assigned to the realization of the outcome $\alpha$ and conditioned on the knowledge of the objective probability $\mathcal{P}_\alpha$ equals $\mathcal{P}_\alpha$: i.e., $\mathcal{C}(\alpha|\mathcal{P}_\alpha)\equiv\mathcal{P}_\alpha$.}) it supplies a subjective probability assignment (or degree of belief) matching the objective measure.
Yet, if the Born-Vaidman rule is a postulate that must be added to the MWI in order to recover standard quantum mechanics, it also implies that unitarity alone is not enough to explain and justify probability. This motivates the present work since our proposal is to supplement the bare Everett theory, assuming only unitarity, with a (toy) model for quantum observers and minds: a unitary MMI. This model will provide a dynamical structure that in turn legitimizes the use of the Born-Vaidman rule. In the next section we will first describe the original stochastic MMI. \section{About many-minds} \label{sec2b} \indent A very different strategy, which goes back to the late Zeh in the 1970's~\cite{Zeh} and was subsequently developed by Albert and Loewer in 1988~\cite{AlbertLoewer1988,Albert1994,Barrett1995}, is the so-called MMI which we will shortly describe below. In brief, the idea is to include the role of states of consciousness or awareness in the quantum game. Unlike older attempts in the same vein, such as the von Neumann~\cite{vonNeumann}, London and Bauer~\cite{LondonBauer} and Wigner~\cite{Wigner} approaches, the MMI involves several mind states $\mathcal{O}^{(1)},\mathcal{O}^{(2)},...$ associated with a single observer. In the approach advocated by Albert and Loewer such mind states do not obey the unitary Schr\"{o}dinger equation but are nevertheless guided by solutions $\Psi_t$ of such an equation. Again, in complete analogy with the PWI, the mind states associated with the brain structure surf on the $\Psi_t$ associated with the entangled wave-functions coupling the observer to the measurement apparatus and the quantum object under study. By surfing on the pilot-wave the many mind states $\mathcal{O}^{(i)}$, which are associated with a given observer and which are unaware of each other, are stochastically driven into the distinct grooves and channels associated with the wave-function branches.
If the wave-function for the observed system reads as before $|\Psi\rangle=\sum_\alpha\Psi_\alpha |\alpha\rangle$, the MMI of Albert and Loewer postulates that a fraction $\mathcal{P}_\alpha=\lVert\Psi_\alpha\rVert^2$ of the mind states, given by Born's rule, is stochastically driven into the groove, i.e., the world corresponding to the outcome $\alpha$. Consider for example a simple non-symmetric beam splitter experiment where the quantum state of, let us say, a single photon or electron evolves as \begin{eqnarray} |\Psi_0\rangle\rightarrow|\Psi_t\rangle=\sqrt{\frac{1}{3}}|\uparrow\rangle+\sqrt{\frac{2}{3}}|\downarrow\rangle\label{Albertstate0} \end{eqnarray} where $\uparrow$ and $\downarrow$ describe the two states of the single particle transmitted or reflected by the beam splitter. In a more realistic description of the experiment we must include an observer (Alex) and an experimental environment (E) in the unitary evolution, which now reads: \begin{eqnarray} |\Psi_0\rangle\otimes |E_0,\textrm{Alex}_0\rangle\rightarrow\sqrt{\frac{1}{3}}|\uparrow\rangle\otimes |E_\uparrow,\textrm{Alex}_\uparrow\rangle+\sqrt{\frac{2}{3}}|\downarrow\rangle\otimes |E_\downarrow,\textrm{Alex}_\downarrow\rangle.\label{Albertstate} \end{eqnarray} Here, the observer has a memory or record of the experimental outcome as indicated by the $\uparrow\downarrow$ label. In the MMI proposed by Albert and Loewer we add mind states moving stochastically.
For example with one single mind state we have either \begin{eqnarray} |\Psi_0\rangle\otimes |E_0,\textrm{Alex}_0(\mathcal{O}_0^{(1)})\rangle\rightarrow\sqrt{\frac{1}{3}}|\uparrow\rangle\otimes |E_\uparrow,\textrm{Alex}_\uparrow(\mathcal{O}_\uparrow^{(1)})\rangle \nonumber\\ +\sqrt{\frac{2}{3}}|\downarrow\rangle\otimes |E_\downarrow,\textrm{Alex}_\downarrow\rangle \label{first} \end{eqnarray} if the mind state moves randomly to the brain of Alex seeing the $\uparrow$-photon, or alternatively \begin{eqnarray} |\Psi_0\rangle\otimes |E_0,\textrm{Alex}_0(\mathcal{O}_0^{(1)})\rangle\rightarrow\sqrt{\frac{1}{3}}|\uparrow\rangle\otimes |E_\uparrow,\textrm{Alex}_\uparrow\rangle \nonumber\\ +\sqrt{\frac{2}{3}}|\downarrow\rangle\otimes |E_\downarrow,\textrm{Alex}_\downarrow(\mathcal{O}_\downarrow^{(1)})\rangle \label{second} \end{eqnarray} if the mind state moves along the second groove or branch of the wave-function. The probability of the first alternative (i.e., Eq.~\ref{first}) is $\mathcal{P}_\uparrow=\frac{1}{3}$ whereas for the second alternative (i.e., Eq.~\ref{second}) we have $\mathcal{P}_\downarrow=\frac{2}{3}$. By repeating the same experiment many times (and assuming the information about the results of previous experiments is saved) Alex experimentally obtains the Born law by a direct application of the law of large numbers. More precisely, considering a Bernoulli sequence where we repeat $M$ times the previous experiment, we define the probability for having $M_\uparrow$ times the outcome $\uparrow$ (similarly $M_\downarrow=M-M_\uparrow$) by the binomial formula: \begin{eqnarray} \mathcal{P}(M_\uparrow,M_\downarrow)=\frac{M!}{M_\uparrow !M_\downarrow !}\mathcal{P}_\uparrow^{M_\uparrow}\mathcal{P}_\downarrow^{M_\downarrow}\label{retrucmuca} \end{eqnarray} where $\mathcal{P}_\uparrow^{M_\uparrow}\mathcal{P}_\downarrow^{M_\downarrow}$ is the probability of a single alternative.
By maximizing $\mathcal{P}(M_\uparrow,M_\downarrow)$ in the $M\rightarrow+\infty $ limit we deduce \begin{eqnarray} \mathcal{P}_\uparrow\simeq \frac{\tilde{M}_\uparrow}{M}, \mathcal{P}_\downarrow\simeq \frac{\tilde{M}_\downarrow}{M}\label{retrucmucaproba} \end{eqnarray} where $\tilde{M}_\uparrow$ and $\tilde{M}_\downarrow$ are the typical numbers for branches where Born's rule holds.\\ \indent Importantly, in this theory there is no supervenience of the mental states on the brain and, more generally, all branches but one of the quantum evolution tree contain no mind state (even though the observer brain exists in all these mindless branches). In order to avoid endless discussions about the various issues raised by `mindless-Hulks' (e.g., if a mindless Alex state is discussing with a second observer), Albert and Loewer suggested the introduction of several mind states existing in parallel in the observer brain and also moving randomly. For example with two mind states $\mathcal{O}^{(1)}$ and $\mathcal{O}^{(2)}$ the initial quantum state reads $|\Psi_0\rangle\otimes |E_0,\textrm{Alex}_0(\mathcal{O}_0^{(1)},\mathcal{O}_0^{(2)})\rangle$ and it evolves into one of the four following alternatives: \begin{eqnarray} \sqrt{\frac{1}{3}}|\uparrow\rangle\otimes |E_\uparrow,\textrm{Alex}_\uparrow(\mathcal{O}_\uparrow^{(1)},\mathcal{O}_\uparrow^{(2)})\rangle+\sqrt{\frac{2}{3}}|\downarrow\rangle\otimes |E_\downarrow,\textrm{Alex}_\downarrow\rangle, \nonumber \\ \sqrt{\frac{1}{3}}|\uparrow\rangle\otimes |E_\uparrow,\textrm{Alex}_\uparrow(\mathcal{O}_\uparrow^{(1)})\rangle+\sqrt{\frac{2}{3}}|\downarrow\rangle\otimes |E_\downarrow,\textrm{Alex}_\downarrow(\mathcal{O}_\downarrow^{(2)}) \rangle,\nonumber \\ \sqrt{\frac{1}{3}}|\uparrow\rangle\otimes |E_\uparrow,\textrm{Alex}_\uparrow(\mathcal{O}_\uparrow^{(2)}) \rangle+\sqrt{\frac{2}{3}}|\downarrow\rangle\otimes |E_\downarrow,\textrm{Alex}_\downarrow(\mathcal{O}_\downarrow^{(1)})\rangle,\nonumber \\ \sqrt{\frac{1}{3}}|\uparrow\rangle\otimes
|E_\uparrow,\textrm{Alex}_\uparrow \rangle+\sqrt{\frac{2}{3}}|\downarrow\rangle\otimes |E_\downarrow,\textrm{Alex}_\downarrow(\mathcal{O}_\downarrow^{(1)},\mathcal{O}_\downarrow^{(2)})\rangle.\label{trucmuc} \end{eqnarray} with probabilities respectively equal to $\mathcal{P}_\uparrow^2=\frac{1}{9}$ for the first alternative, $\mathcal{P}_\uparrow\mathcal{P}_\downarrow=\frac{2}{9}$ for the second and third alternatives, and $\mathcal{P}_\downarrow^2=\frac{4}{9}$ for the last one. It is clear that if we have $N$ mind states $\mathcal{O}^{(1)},...,\mathcal{O}^{(N)}$ we now have $2^N$ combinations. The total probability for having $N_\uparrow$ mind states in the upper branch and $N_\downarrow=N-N_\uparrow$ mind states in the lower branch is given (again) by the binomial formula: \begin{eqnarray} \mathcal{P}(N_\uparrow,N_\downarrow)=\frac{N!}{N_\uparrow !N_\downarrow !}\mathcal{P}_\uparrow^{N_\uparrow}\mathcal{P}_\downarrow^{N_\downarrow}\label{retrucmuc} \end{eqnarray} where $\mathcal{P}_\uparrow^{N_\uparrow}\mathcal{P}_\downarrow^{N_\downarrow}$ is the probability of a single alternative. By maximizing $\mathcal{P}(N_\uparrow,N_\downarrow)$ in the $N\rightarrow+\infty $ limit we obtain \begin{eqnarray} \mathcal{P}_\uparrow\simeq \frac{\tilde{N}_\uparrow}{N}, \mathcal{P}_\downarrow\simeq \frac{\tilde{N}_\downarrow}{N}\label{retrucmubis} \end{eqnarray} where $\tilde{N}_\uparrow$ and $\tilde{N}_\downarrow$ are the typical numbers of mind states in the upper and lower branch respectively. Therefore, the minds are distributed according to Born's rule. Furthermore, the relative fluctuation around this optimum (written $\frac{\Delta N_\uparrow}{\tilde{N}_\uparrow}=\frac{1}{\sqrt{N}}\sqrt{\frac{\mathcal{P}_\downarrow}{\mathcal{P}_\uparrow}}$) goes to zero as $N\rightarrow+\infty $ and the probability to have maverick branches without mind states tends to vanish.
This implies that a second observer (Boris) discussing with Alex about his experimental results will always be in contact with $\tilde{N}_\uparrow \gg 1$ Alex minds if the result $\uparrow$ occurred in his branch (or similarly $\tilde{N}_\downarrow \gg 1$ Alex minds if the result $\downarrow$ occurred). Therefore, Boris will typically never meet a mindless hulk (as required). Formally, it means that the probability for the $j^{th}$ mind of Boris recording $\uparrow$ to meet $\tilde{N}_\uparrow=N\mathcal{P}_\uparrow$ Alexs is $\mathcal{P}(\mathcal{O}_\uparrow^{(Boris,j)} \textrm{ to meet } \tilde{N}_\uparrow \mathcal{O}_\uparrow^{(Alex)})\simeq 1$.\\ \indent Moreover, consider a pair of observers Alex and Boris for the same experiment. If one mind of Boris during a long Bernoulli sequence records a history like $h=[\uparrow,\downarrow...]$ (where, in agreement with Eq.~\ref{retrucmucaproba}, we have $\mathcal{P}_\uparrow\simeq \frac{\tilde{M}_\uparrow}{M}$, $\mathcal{P}_\downarrow\simeq \frac{\tilde{M}_\downarrow}{M}$) then we expect at least one mind in Alex's brain recording the same sequence $h$ (since otherwise Boris and Alex would have no mutual agreement about the result). The constraints for this correspondence to happen are however very stringent. Indeed, now we have to consider an ensemble of $M$ repetitions for $N$ Alex-minds, and the probability for each of these minds to have a history like $h$ reads $\mathcal{P}_h=\mathcal{P}_\uparrow^{\tilde{M}_\uparrow}\mathcal{P}_\downarrow^{\tilde{M}_\downarrow}$. Across the $N$ minds we now get a multinomial probability law \begin{eqnarray} \mathcal{P}(\{N_h\})=\frac{N!}{\Pi_h N_h !}\Pi_h\mathcal{P}_h^{N_h} \end{eqnarray} where $N_h$ is the number of minds having observed the sequence $h$. The law of large numbers gives the result \begin{eqnarray} \mathcal{P}_h\simeq \frac{\tilde{N}_h}{N} \end{eqnarray} which is in general a very small number.
For instance, if $\mathcal{P}_\downarrow=1/2$ we have $\mathcal{P}_h=1/2^M$ and therefore we get $\tilde{N}_h=\frac{N}{2^M}$. If we want $\tilde{N}_h$ to be finite then at least $N\sim 2^M$. This requires a gigantic number of minds since $M$ can be quite large! This is an issue that a good MMI should clarify one day if the model is to be taken seriously. \\ \indent The previous example can easily be generalized to a quantum state like $|\Psi_t\rangle=\sum_\alpha\Psi_\alpha |\alpha\rangle$ which (together with the environment+observer) evolves as \begin{eqnarray} |\Psi_0\rangle\otimes |E_0,\textrm{Alex}_0\rangle\rightarrow \sum_\alpha\Psi_\alpha |\alpha\rangle\otimes |E_\alpha,\textrm{Alex}_\alpha \rangle. \end{eqnarray} After including $N$ observer mind states and repeating the previous reasoning we have a multinomial probability for having the set of $\{N_\alpha\}$ mind states (i.e., with $N=\sum_\alpha N_\alpha$): \begin{eqnarray} \mathcal{P}(\{N_\alpha\})=\frac{N!}{\Pi_\alpha N_\alpha !}\Pi_\alpha\mathcal{P}_\alpha^{N_\alpha} \end{eqnarray} which again leads to the relative frequency of mind states $\mathcal{P}_\alpha:=\lVert\Psi_\alpha\rVert^2 \simeq \frac{\tilde{N}_\alpha}{N}$ for the typical configuration, in agreement with Born's rule (the fluctuation now reads $\frac{\Delta N_\alpha}{\tilde{N}_\alpha}=\frac{1}{\sqrt{N}}\sqrt{\frac{1-\mathcal{P}_\alpha}{\mathcal{P}_\alpha}}$, which also vanishes in the $N\rightarrow +\infty$ limit). By keeping the psycho-physical parallelism at the statistical average level (i.e., as a very good approximation in the $N\rightarrow +\infty$ limit) the MMI is thus remarkably able to reproduce standard quantum mechanical results.
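The exponential cost in the number of minds required by long Bernoulli sequences, discussed above, can be made concrete in a short counting sketch (hypothetical numbers, not taken from the text):

```python
import math

# Illustrative counting: with P_up = P_down = 1/2, a given length-M
# history h has probability P_h = 2**-M, so the typical number of minds
# recording h is N_h = N * P_h.  Demanding even a single such mind
# (N_h >= 1) already forces N >= 2**M.
M = 20
P_h = 0.5 ** M
N_min = math.ceil(1.0 / P_h)   # smallest N with N * P_h >= 1
N_h = N_min * P_h              # typical number of minds recording h
```

Already for $M=20$ repetitions one needs $N\geq 2^{20}\simeq 10^6$ minds per observer, and the requirement grows exponentially with $M$.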
\\ \indent However, even if the introduction of the observer memory and mind states has an old and respectable tradition in the interpretation of quantum mechanics (it also played a role in the work of Everett himself), it is fair to say that such an odd approach has been viewed with suspicion by many, in part because the theory is dualistic in spirit (separating minds from the rest of the unitary evolution in the Universe), i.e., it breaks the functionalist or psycho-physical parallelism which is generally accepted (see e.g. von Neumann \cite{vonNeumann}) for discussing the quantum mechanics of the observer. In other words, in the MMI the mind states do not supervene on the brain state, even though the psycho-physical parallelism holds at the statistical level as explained before. Furthermore, a multiplicity of minds is required for solving the `mindless-hulk' problem, at the price of introducing a form of schizophrenic many-worlds. Attempts have been made to eliminate this unwarranted feature by reinstating psycho-physical realism at the mind level (see Lockwood~\cite{Lockwood1989,Lockwood1996a} and Donald~\cite{Donald1,Donald2,Donald3}). In other words, it has been proposed to re-establish the supervenience of the mind state $\mathcal{O}$ on the brain state described quantum mechanically. The main difficulty with this new amendment of the MWI (see the interesting discussion following \cite{Lockwood1996a}: \cite{Brown1996,Butterfield1996,Deutsch1996,Loewer1996,Saunders1996,Papineau1996,Lockwood1996b}, see also \cite{Papineau}) is that the mind now becomes a deterministic function of the wave-function state $\Psi_t$, i.e., $\mathcal{O}(\Psi_t)$ (instead of having $\Psi_t(\{\mathcal{O}^{(i)}\})$ in agreement with the theory of Albert and Loewer~\cite{AlbertLoewer1988,Loewer1996}). While this would seem a natural property in a quantum Universe, the proponents of the MMI and MWI following this path have not yet been able to justify Born's rule unambiguously.
In Section \ref{sec3} we will develop a unitary version of the MMI which is free from these contradictions. \section{Subjective versus objective probabilities: the quantitative problem}\label{sec2d} \indent Proponents of the MWI generally split the whole discussion concerning probability into two issues. The first, which we called the incoherence issue, is tied to the mere existence of probability. The second, the \emph{quantitative problem}, is connected with the specific mathematical and physical justification of the Born rule in the MWI. Even granting that the incoherence problem has been solved, the quantitative problem is fundamentally interesting in itself and has motivated most research in recent decades.\\ \indent Here, we discuss the issue by giving a brief introduction to the remarkable works of Deutsch~\cite{Deutsch1999}, Wallace~\cite{Wallace2012} and Zurek \cite{Zurek2005} and to more recent works by Carroll, Sebens~\cite{Sebens2016} and Vaidman~\cite{McQueen}. In his seminal article D. Deutsch started from decision theory and attempted to derive the Born rule from non-probabilistic axioms of quantum mechanics. As he wrote: \begin{quote} \textit{Thus we see that quantum theory permits what philosophy would hitherto have regarded as a formal impossibility, akin to `deriving an ought from an is', namely deriving a probability statement from a factual statement. This could be called deriving a `tends to' from a `does'.}~\cite{Deutsch1999} \end{quote} Clearly this is a very strong claim, touching both sides of the difficulty, i.e., the incoherence and quantitative issues. Deutsch's proof has been attacked on philosophical and mathematical grounds (see for example \cite{Barnum,Pitowsky}). The incoherence problem will not be commented on further~\footnote{The claim has a long tradition in the MWI community.
DeWitt for example famously wrote `The mathematical formalism of the quantum theory is capable of yielding its own interpretation'~\cite{DeWitt1971}.}. The formal part of the proof used the notion of a value function $\mathcal{V}_\Psi $ and of a utility assigned to a quantum `game', i.e., a quantum experiment. The semantics of classical decision theory leads to the definition $\mathcal{V}_\Psi(\{x_\alpha \})=\sum_\alpha x_\alpha \mathcal{P}_\alpha$ where $x_\alpha$ are eigenvalues of the Hermitian operator $\hat{X}^{(S)}=\sum_\alpha x_\alpha \hat{\Pi}^{(S)}_{\alpha}$ acting on the quantum state $|\Psi^{(S)}\rangle=\sum_\alpha\Psi_\alpha |\alpha^{(S)}\rangle$ associated with system S. Assuming a set of decision-theoretic axioms which are not intrinsically probabilistic, Deutsch built the probability function $\mathcal{P}_\alpha:=\lVert\Psi_\alpha\rVert^2$, which is identical to Born's rule. The set of axioms was criticized in particular by Barnum et al. \cite{Barnum}, who emphasized the existence of an additional permutation symmetry in the derivation (this is strongly connected with the role of entanglement between S and the observer, as we show below). This prompted further important works by Wallace and Saunders~\cite{Wallace2003a,Wallace2003b,Wallace2007,Saunders2005,Saunders2008} (see also~\cite{Greaves2004} and \cite{bookMWI} p.~181 and p.~227) who progressively clarified the whole analysis. It led Wallace to his simple and elegant proof \cite{Wallace2012}, which purges the reasoning of the unwarranted technical sophistication present in the original derivations. Remarkably, in the meantime Zurek \cite{Zurek2003a,Zurek2003b,Zurek2005,Zurek2014} proposed an alternative proof of Born's rule based on \emph{envariance}, a neologism for environment-assisted invariance, a purely quantum symmetry based on the entanglement of a system with its environment. What is key here, however, is that Wallace's and Zurek's proofs are actually isomorphic to one another.
I will briefly summarize Zurek's proof, which is crucial for my own deduction, and then turn to Wallace's semantics. \\ \indent Zurek starts with a Schmidt symmetric quantum state \begin{eqnarray} |\Psi^{(SE)}\rangle=\sqrt{\frac{1}{N}}\sum_{\alpha\in\Delta}|\alpha^{(S)}\rangle\otimes|\varepsilon_\alpha^{(E)}\rangle \label{Zurek} \end{eqnarray} where $S$ denotes the system and $E$ its environment (the basis vectors are orthogonal). The label of the $\alpha$-mode belongs to a set $\Delta$ with cardinality $N$. Zurek introduces swapping operators acting locally on S and reading $\hat{U} ^{(S)}(\alpha\leftrightarrow \beta)=|\alpha^{(S)}\rangle\langle\beta^{(S)}|+H.c.+\hat{R}^{(S)}$ (with $\hat{R}^{(S)}=\hat{I}^{(S)}-|\alpha^{(S)}\rangle\langle\alpha^{(S)}|-|\beta^{(S)}\rangle\langle\beta^{(S)}|$). We introduce similar operators $\hat{U} ^{(E)}(\alpha\leftrightarrow \beta)$ for the environment. Now, as emphasized in \cite{Zurek2003a}, applying successively a swap on S and a counterswap on E leaves the state invariant, i.e., \begin{eqnarray} \hat{U} ^{(E)}(\alpha\leftrightarrow \beta)\hat{U} ^{(S)}(\alpha\leftrightarrow \beta)|\Psi^{(SE)}\rangle=|\Psi^{(SE)}\rangle.\label{Zurek2} \end{eqnarray} It is a matter of fact (e.g., from the no-signalling theorem~\cite{Barnum2}) that a local action on S should have no effect on E; therefore, assigning \emph{a priori} probability $\mathcal{P}_\Psi(\alpha^{(S)},\varepsilon_\alpha^{(E)})$ to the branch $|\alpha^{(S)}\rangle\otimes|\varepsilon_\alpha^{(E)}\rangle$ in Eq.
\ref{Zurek} we must have, after application of $\hat{U} ^{(S)}(\alpha\leftrightarrow \beta)$ on $|\Psi^{(SE)}\rangle$ and by an application of Laplace's principle of indifference, the symmetry relation: \begin{eqnarray} \mathcal{P}_\Psi(\alpha^{(S)},\varepsilon_\alpha^{(E)})=\mathcal{P}_{\hat{U}^{(S)}\Psi}(\beta^{(S)},\varepsilon_\alpha^{(E)})\nonumber \\ \mathcal{P}_\Psi(\beta^{(S)},\varepsilon_\beta^{(E)})=\mathcal{P}_{\hat{U}^{(S)}\Psi}(\alpha^{(S)},\varepsilon_\beta^{(E)}).\label{Zurek3} \end{eqnarray} Here, we have the strong correlations $\mathcal{P}_\Psi(\varepsilon_\alpha^{(E)}|\alpha^{(S)})=1$, $\mathcal{P}_{\hat{U}^{(S)}\Psi}(\varepsilon_\alpha^{(E)}|\beta^{(S)})=1$ and thus Eq. \ref{Zurek3} actually reads \begin{eqnarray} \mathcal{P}_\Psi(\varepsilon_\alpha^{(E)})=\mathcal{P}_{\hat{U}^{(S)}\Psi}(\varepsilon_\alpha^{(E)}) \nonumber \\ \mathcal{P}_\Psi(\varepsilon_\beta^{(E)})=\mathcal{P}_{\hat{U}^{(S)}\Psi}(\varepsilon_\beta^{(E)}) \label{Zurek3b} \end{eqnarray} which is a statement of Laplacian indifference, for the subsystem E, about what is occurring at S. By the same token a subsequent application of $\hat{U} ^{(E)}(\alpha\leftrightarrow \beta)$ yields \begin{eqnarray} \mathcal{P}_{\hat{U}^{(S)}\Psi}(\beta^{(S)},\varepsilon_\alpha^{(E)})=\mathcal{P}_{\hat{U}^{(E)}\hat{U}^{(S)}\Psi}(\beta^{(S)},\varepsilon_\beta^{(E)})\nonumber \\ \mathcal{P}_{\hat{U}^{(S)}\Psi}(\alpha^{(S)},\varepsilon_\beta^{(E)}) =\mathcal{P}_{\hat{U}^{(E)}\hat{U}^{(S)}\Psi}(\alpha^{(S)},\varepsilon_\alpha^{(E)}).\label{Zurek4} \end{eqnarray} The basis of the reasoning is that in Eq. \ref{Zurek3} a hypothetical observer attached to E is indifferent to what is occurring at S (i.e., a swap) whereas in Eq. \ref{Zurek4} a hypothetical observer attached to S is indifferent to the counterswap acting on E \cite{Zurek2003a}.
Moreover, this indifference is both subjective (degree of belief $\mathcal{C}$) and objective (physical probability $\mathcal{P}\equiv \mathcal{C}$) and is defined by some properties of the system (in agreement with the principal principle). Therefore, rational agents should here conform their credences to physical probabilities. The objectivity is here linked to the Schmidt form of the state, which makes the phases of the different branches locally inoperative for S or E (i.e., we have locally a `mixture'). This would not occur without entanglement because interference between branches is in principle possible, so that a swap would break the symmetry \cite{Zurek2003a}. Regrouping Eqs. \ref{Zurek3} and \ref{Zurek4} and using the fundamental global envariance Eq. \ref{Zurek2} directly implies \begin{eqnarray} \mathcal{P}_\Psi(\alpha^{(S)},\varepsilon_\alpha^{(E)})=\mathcal{P}_{\Psi}(\beta^{(S)},\varepsilon_\beta^{(E)}). \label{Zurek5} \end{eqnarray} Moreover, the pair of modes $\alpha$ and $\beta$ was arbitrary in the set $\Delta$ and consequently, by generalizing to every pair, we deduce the equiprobability condition $\mathcal{P}_\Psi(\alpha^{(S)},\varepsilon_\alpha^{(E)})=Const.$. Finally, by normalization we have Born's rule for this special state $|\Psi^{(SE)}\rangle$, i.e., \begin{eqnarray} \mathcal{P}_\Psi(\alpha^{(S)},\varepsilon_\alpha^{(E)})=\frac{1}{N}=\lVert\langle\alpha^{(S)},\varepsilon_\alpha^{(E)}|\Psi^{(SE)}\rangle\rVert^2. \label{Zurek6} \end{eqnarray} What is remarkable about this reasoning is its simplicity, relying only on quantum symmetries. As stated by Zurek, envariance results `from coexistence between perfect knowledge of the whole and complete ignorance of the parts' \cite{Zurek2003a}. Contrary to classical Laplacian indifference, which is based on ignorance about information that could in principle be recorded and recovered, here the indifference is more fundamental and linked to the entanglement of the system~\cite{Zurek2005}.
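The swap/counterswap symmetry at the heart of envariance can be illustrated numerically. In the toy sketch below (our construction, not from the cited works) the Schmidt state of Eq. \ref{Zurek} is stored as an amplitude matrix $\Psi_{\alpha\varepsilon}$; a swap on S permutes rows, a counterswap on E permutes columns:

```python
import numpy as np

# Toy illustration of envariance: the Schmidt state of Eq. (Zurek) is the
# diagonal amplitude matrix Psi[alpha, eps] = delta_{alpha,eps}/sqrt(N).
N = 4
Psi = np.eye(N, dtype=complex) / np.sqrt(N)

def swap(M, i, j, on_system=True):
    """Swap modes i and j on S (rows) or on E (columns)."""
    M = M.copy()
    if on_system:
        M[[i, j], :] = M[[j, i], :]
    else:
        M[:, [i, j]] = M[:, [j, i]]
    return M

swapped = swap(Psi, 0, 1, on_system=True)        # U^(S): changes the global state
restored = swap(swapped, 0, 1, on_system=False)  # counterswap U^(E) undoes it
# yet the reduced state of E is blind to the swap acting on S alone
rho_E = Psi.conj().T @ Psi
rho_E_after_swap = swapped.conj().T @ swapped
```

The composition of swap and counterswap restores the original state (Eq. \ref{Zurek2}), while the reduced density matrix of E is unchanged by the swap on S alone, which is the no-signalling fact motivating the indifference postulate.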
Indeed, there is no hidden variable in this approach and nothing more fundamental to find out than the quantum symmetry of the system under swaps and counterswaps, which are local operations acting on S or E. \\ \indent This point was also emphasized by Wallace, who explained that there is perfect symmetry between the outcomes and thus that the ignorance considered here must be genuinely quantum~\cite{Wallace2012}. A quantum gambler (observer) attached to S or E, acting on her/his own subsystem, will bet rationally on the different outcomes by using Laplace's indifference as explained previously. Therefore, the decision-theoretic scenario proposed by Wallace and Deutsch reduces to the one made by Zurek (it is interesting to point out that the hidden symmetry contained in Deutsch's proof, discovered by Barnum et al. \cite{Barnum}, is precisely envariance). For the sake of clarity we postpone to the end of this section a `derivation' of Wallace's proof using the semantics and logic of Zurek's formalism.\\ \indent We stress that the previous analysis assumed some basic physical properties which may at first look innocuous but actually play a crucial role. Indeed, the indifference postulate is motivated by three `bare facts' concerning unitarity and locality, i.e., facts about the irrelevance of a swap on a subsystem S (or E) for the physical properties of the subsystem E (or S) (for a discussion of how the conjunction of Zurek's facts leads to Born's rule see~\cite{Zurek2005}). This is reminiscent either of no-signalling (as already briefly alluded to and discussed in \cite{Barnum2}) or of a `natural' postulate concerning `knowledge about the whole versus ignorance of the parts' \cite{Zurek2003a}. Actually, this axiom hides the notions of mixture and reduced density matrix, which already assume the notion of probability to be derived (i.e., the incoherence problem still holds in this formulation).
However, this issue is not so harmful if we consider only the quantitative problem independently of the incoherence one, i.e., if we are only interested in recovering Born's rule assuming that probabilities already exist. From this perspective Zurek's axiom only says that, beyond assuming the mere existence of probability, one must additionally postulate the `strong'\footnote{Such conditions are clearly stronger than mere no-signalling, which only requires $\langle \hat{\mathcal{O} }^{(E)}\rangle_\Psi=\langle \hat{\mathcal{O} }^{(E)}\rangle_{\hat{U}^{(E)}\Psi} $ where $\hat{\mathcal{O} }^{(E)}$ is any local Hermitian operator acting on E solely and $\hat{U}^{(E)}$ is any unitary transformation acting on the environment E (a similar equation with the roles of E and S reversed also holds true). } symmetry which in standard quantum mechanics reads: \begin{eqnarray} \langle \hat{\Pi }_\alpha^{(S)}\otimes\hat{\Pi}_{\varepsilon_\alpha}^{(E)}\rangle_\Psi=\langle \hat{\Pi }_{\hat{U}^{(S)}\alpha}^{(S)}\otimes\hat{\Pi }_{\varepsilon_\alpha}^{(E)}\rangle_{\hat{U}^{(S)}\Psi}\nonumber \\ \label{Zurek0} \end{eqnarray} where $\hat{\Pi }_{\hat{U}^{(S)}\alpha}^{(S)}$ is a `causal' notation for the projector $\hat{U}^{(S)}|\alpha^{(S)}\rangle\langle\alpha^{(S)}|\left .\hat{U}^{(S)}\right.^\dagger=|\beta^{(S)}\rangle\langle\beta^{(S)}|$. This leads directly to Eq. \ref{Zurek3}, and similar expressions could be used to obtain Eq. \ref{Zurek4}. These strong symmetries naturally allow us to recover equiprobability, which is indeed Zurek's reasoning. Therefore, while in the orthodox interpretation Eq. \ref{Zurek0} follows from the symmetries of the Schmidt quantum state Eq. \ref{Zurek} together with the already assumed Born's rule (i.e., here equiprobability), in Zurek's axiomatics it is enough to use Eqs.
\ref{Zurek3}, \ref{Zurek4} to recover equiprobability (i.e., Born's rule), thus avoiding circularity.\\ \indent The previous analysis focused on the simple equiprobable case where $|\Psi^{(SE)}\rangle$ is given by Eq. \ref{Zurek}. In order to generalize the deduction to any Schmidt state Zurek used a `trick' \cite{Zurek1998} (see also Deutsch \cite{Deutsch1999}) that consists in applying a fine-graining procedure. We start with an S state $|\Psi^{(S)}\rangle=\sum_{a}\sqrt{\mathcal{P}_a}|a^{(S)}\rangle$ where $\mathcal{P}_a=\frac{N_a}{N}$ is a rational number. Entanglement with the environment E leads to \begin{eqnarray} |\Phi^{(SE)}\rangle=\sum_{a}\sqrt{\mathcal{P}_a}|a^{(S)}\rangle \otimes|e_a^{(E)}\rangle. \label{Zurek7} \end{eqnarray} We then introduce the new vectors \begin{eqnarray} |a^{(S)}\rangle=\frac{1}{\sqrt{N_a}}\sum_{\alpha\in\Delta_a}|\alpha^{(S)}\rangle \label{Zurek8} \end{eqnarray} where the cardinality of $\Delta_a$ equals $N_a$ (we also have $\Delta_a\cap\Delta_b=\emptyset$ if $a\neq b$ and $\cup_a\Delta_a=\Delta$, with $\Delta$ the set of all vectors $|\alpha^{(S)}\rangle$ with cardinality $N$). We thus have \begin{eqnarray} |\Phi^{(SE)}\rangle=\frac{1}{\sqrt{N}}\sum_{\alpha\in\Delta}|\alpha^{(S)}\rangle \otimes|e_{a_\alpha}^{(E)}\rangle \label{Zurek9} \end{eqnarray} where $a_\alpha=a$ if $\alpha\in\Delta_a$. The last step~\footnote{As remarked by an anonymous referee, Zurek's original derivation uses an ancillary system $(C)$ and assumes a new basis in the $(E+C)$ subspace, before performing swaps between $(E)$ and $(C)$ to derive probabilities~\cite{Zurek2003a,Zurek2003b,Zurek2005,Zurek2014}.} consists in a global transformation in the SE system reading $|\alpha^{(S)}\rangle \otimes|e_{a_\alpha}^{(E)}\rangle \rightarrow |\alpha^{(S)}\rangle \otimes|\varepsilon_\alpha^{(E)}\rangle $ with $|\varepsilon_\alpha^{(E)}\rangle $ a new environmental basis.
We finally obtain \begin{eqnarray} |\Psi^{(SE)}\rangle=\frac{1}{\sqrt{N}}\sum_{\alpha\in\Delta}|\alpha^{(S)}\rangle \otimes|\varepsilon_\alpha^{(E)}\rangle \label{Zurek10} \end{eqnarray} which is a Schmidt symmetric state identical to Eq. \ref{Zurek}. Therefore, Eq. \ref{Zurek6} obtains and we finally get, by additivity and application of Laplace's indifference principle: \begin{eqnarray} \mathcal{P}_\Phi(a^{(S)},e_a^{(E)})=\sum_{\alpha\in\Delta_a}\mathcal{P}_\Psi(\alpha^{(S)},\varepsilon_\alpha^{(E)})=\frac{N_a}{N}=\lVert\langle a^{(S)},e_a^{(E)}|\Phi^{(SE)}\rangle\rVert^2. \label{Zurek11} \end{eqnarray} Continuity establishes the generality of the result for the case where $\mathcal{P}_a$ is a real number \cite{Zurek2003a,Zurek2003b,Zurek2005}.\\ \indent The most important part of this proof is the fine-graining procedure, which can easily be implemented with beam splitters and unitary gates as shown for example by Vaidman \cite{McQueen,Vaidman2020}. However, observe first that this trick requires a high-dimensional Hilbert space for the S subsystem (which is in general the case). Second, there is here a form of conspiratorial preparation. Why indeed should the distinct beams $|a^{(S)}\rangle$ (which could be located in remote regions of space) be separated in such a way (i.e., Eq. \ref{Zurek8}) as to have equiprobability at the end? Such a choice is clearly motivated by the desire to rely on a simple branch-counting argument to define histories for the observers. Indeed, going back to 1985, this solution naturally avoids the problem existing with the original Deutsch approach \cite{Deutsch1985} (which postulated a density of worlds proportional to $\lVert\Psi_\alpha\rVert^2$, i.e., different from a naive branch-counting reasoning \footnote{While we cannot discuss this issue here we emphasize that the idea of many diverging branches is also advocated in the so-called `many-Bohmian-worlds' theory \cite{Tipler2014,Bostrom2014,Sebens2014,Hall2014}.}).
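The fine-graining procedure for rational weights can be mimicked with exact arithmetic. The sketch below (hypothetical weights $\mathcal{P}_a=1/3$, $\mathcal{P}_b=2/3$, chosen for illustration) splits each coarse branch $a$ into $N_a$ sub-branches of equal weight $1/N$ and recovers $\mathcal{P}_a$ by mere branch counting:

```python
from fractions import Fraction
import math

# Sketch of the fine-graining trick (hypothetical weights): each coarse
# branch of Born weight P_a = N_a/N is split into N_a sub-branches of
# equal weight 1/N; additivity then gives back P_a by counting.
P = {"a": Fraction(1, 3), "b": Fraction(2, 3)}
N = math.lcm(*(p.denominator for p in P.values()))      # common grain N
counts = {label: int(p * N) for label, p in P.items()}  # N_a per branch
fine = {label: [Fraction(1, N)] * n for label, n in counts.items()}
recovered = {label: sum(subs) for label, subs in fine.items()}
```

Summing the equal sub-branch weights returns exactly the original rational probabilities, which is the additivity step leading to Eq. \ref{Zurek6}.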
In return, the price to be paid consists in dividing the intensity of the original beams $\mathcal{P}_a$ into many sub-beams of equal intensity $1/N$: a procedure which must be defined in advance by an agent knowing the full properties of the entangled SE system. \\ \indent Zurek's fine-graining trick has been accepted by the proponents of the decision-theoretic approach \cite{Deutsch1999,Wallace2012} as well as by Carroll and Sebens \cite{Sebens2016}, who all strongly rely on the methodology offered by Zurek with envariance and also involve the notion of self-location uncertainty (with an interpretation different from the one by Vaidman~\footnote{McQueen and Vaidman \cite{McQueen,Vaidman2020} also rely on the no-signalling theorem in order to prohibit faster-than-light communication. A different interpretation of no-signalling was previously used by Barnum \cite{Barnum2} to recover Zurek's interpretation. We stress that Vaidman \cite{Vaidman2012,Vaidman2014,McQueen,Vaidman2020} also uses Zurek's fine-graining trick but, contrary to the others, does not consider entanglement and envariance as important or relevant. The key feature for Vaidman is the symmetry between branches with equal weight, and Zurek's fine graining is a natural method for reaching this goal. Additionally, Vaidman does not really use the word `proof' in his application of Zurek's trick~\cite{McQueen,Vaidman2020}.}). In particular, Carroll and Sebens \cite{Sebens2016} developed a narrative in which an external observer interacting with an SE system like the one described by $|\Psi^{(SE)}\rangle$ in Eq. \ref{Zurek} is going to assign probabilities to the various outcomes depending on the subsystem S or E (s)he is considering and whether or not the unitary swap $\hat{U}^{(S)}$ or counterswap $\hat{U}^{(E)}$ operations are applied.
This interesting narrative (based on a principle named the Epistemic Separability Principle, or ESP) leads directly, through Zurek's and Wallace's argument, to the equiprobability condition and then ultimately recovers Born's rule as explained above. We emphasize that there is a disagreement between Carroll and Sebens \cite{Sebens2016} on the one side, and Kent \cite{Kent2015}, McQueen and Vaidman \cite{McQueen,Vaidman2020} on the other side concerning the role of self-location uncertainty in this analysis (see also \cite{Albert2015}).\\ \indent This finally leads us to Wallace's scenario, which we analyze using Zurek's semantics. Wallace~\cite{Wallace2012}, like Deutsch, was interested in the observation of the system S by an observer Alex whom we can directly identify with the state of the environment E. Wallace thus considers the following state \begin{eqnarray} |\Psi^{(SE)}\rangle=\sqrt{\frac{1}{N}}\sum_{\alpha\in\Delta}|\alpha^{(S)}\rangle\otimes|\textrm{Alex}_\alpha^{(E)}\rangle \label{Wallace} \end{eqnarray} as well as the counterswapped state $\hat{U} ^{(E)}(\alpha\leftrightarrow \beta)|\Psi^{(SE)}\rangle$. If the original state contains the terms \begin{eqnarray} |\Psi^{(SE)}\rangle=|\alpha^{(S)}\rangle\otimes|\textrm{Alex}_\alpha^{(E)}\rangle+|\beta^{(S)}\rangle\otimes|\textrm{Alex}_\beta^{(E)}\rangle+|R\rangle\label{Wallace1} \end{eqnarray} the new counterswapped state contains instead the terms \begin{eqnarray} \hat{U}^{(E)}(\alpha\leftrightarrow \beta)|\Psi^{(SE)}\rangle=|\alpha^{(S)}\rangle\otimes|\textrm{Alex}_\beta^{(E)}\rangle+|\beta^{(S)}\rangle\otimes|\textrm{Alex}_\alpha^{(E)}\rangle+|R\rangle\label{Wallace1b} \end{eqnarray} where $|R\rangle$ is the `rest', which is irrelevant here.
Now, by direct application of Laplace's indifference principle we obtain (see Eq. \ref{Zurek3}) \begin{eqnarray} \mathcal{P}_\Psi(\alpha^{(S)},\textrm{Alex}_\alpha^{(E)})=\mathcal{P}_{\hat{U}^{(E)}\Psi}(\alpha^{(S)},\textrm{Alex}_\beta^{(E)})\nonumber \\ \mathcal{P}_\Psi(\beta^{(S)},\textrm{Alex}_\beta^{(E)})=\mathcal{P}_{\hat{U}^{(E)}\Psi}(\beta^{(S)},\textrm{Alex}_\alpha^{(E)}).\label{Wallace2} \end{eqnarray} Unlike Zurek, Wallace did not use a swap on the S subsystem. Instead, he used a trick, supposing that (i) the subsystem S also has a ground state $|\emptyset^{(S)}\rangle$ and that (ii) we apply the erasing operation $\hat{U_e}^{(S)}|\alpha^{(S)}\rangle=|\emptyset^{(S)}\rangle$, $\hat{U_e}^{(S)}|\beta^{(S)}\rangle=|\emptyset^{(S)}\rangle$. After application of the erasing process on the states given by Eqs. \ref{Wallace1}, \ref{Wallace1b} we obtain \begin{eqnarray} \hat{U_e}^{(S)}|\Psi^{(SE)}\rangle=|\emptyset^{(S)}\rangle\otimes(|\textrm{Alex}_\alpha^{(E)}\rangle+|\textrm{Alex}_\beta^{(E)}\rangle)+|R\rangle\nonumber \\ \hat{U_e}^{(S)}\hat{U}^{(E)}(\alpha\leftrightarrow \beta)|\Psi^{(SE)}\rangle=|\emptyset^{(S)}\rangle\otimes(|\textrm{Alex}_\beta^{(E)}\rangle+|\textrm{Alex}_\alpha^{(E)}\rangle)+|R\rangle. \label{Wallace3} \end{eqnarray} This motivates the set of equations: \begin{eqnarray} \mathcal{P}_\Psi(\alpha^{(S)},\textrm{Alex}_\alpha^{(E)})=\mathcal{P}_{\hat{U}_e^{(S)}\Psi}(\emptyset^{(S)},\textrm{Alex}_\alpha^{(E)})\nonumber \\ \mathcal{P}_\Psi(\beta^{(S)},\textrm{Alex}_\beta^{(E)})=\mathcal{P}_{\hat{U}_e^{(S)}\Psi}(\emptyset^{(S)},\textrm{Alex}_\beta^{(E)})\nonumber \\ \mathcal{P}_{\hat{U}^{(E)}\Psi}(\alpha^{(S)},\textrm{Alex}_\beta^{(E)})=\mathcal{P}_{\hat{U}_e^{(S)}\hat{U}^{(E)}\Psi}(\emptyset^{(S)},\textrm{Alex}_\beta^{(E)})\nonumber \\ \mathcal{P}_{\hat{U}^{(E)}\Psi}(\beta^{(S)},\textrm{Alex}_\alpha^{(E)})=\mathcal{P}_{\hat{U}_e^{(S)}\hat{U}^{(E)}\Psi}(\emptyset^{(S)},\textrm{Alex}_\alpha^{(E)}).\label{Wallace4} \end{eqnarray} Finally, we use the
fact that by branch indifference~\cite{Wallace2012} we have \begin{eqnarray} \mathcal{P}_{\hat{U}_e^{(S)}\Psi}(\emptyset^{(S)},\textrm{Alex}_\alpha^{(E)})=\mathcal{P}_{\hat{U}_e^{(S)}\hat{U}^{(E)}\Psi}(\emptyset^{(S)},\textrm{Alex}_\alpha^{(E)}) \nonumber \\ \mathcal{P}_{\hat{U}_e^{(S)}\Psi}(\emptyset^{(S)},\textrm{Alex}_\beta^{(E)})=\mathcal{P}_{\hat{U}_e^{(S)}\hat{U}^{(E)}\Psi}(\emptyset^{(S)},\textrm{Alex}_\beta^{(E)}) \label{Wallace5} \end{eqnarray} to get, after combining with Eqs. \ref{Wallace2}, \ref{Wallace4}, the result \begin{eqnarray} \mathcal{P}_\Psi(\alpha^{(S)},\textrm{Alex}_\alpha^{(E)})=\mathcal{P}_\Psi(\beta^{(S)},\textrm{Alex}_\beta^{(E)})\label{Wallace6} \end{eqnarray} which is Zurek's Eq.~\ref{Zurek5}. By proceeding as in Zurek's case we again obtain equiprobability and thus Born's rule Eq. \ref{Zurek6}.\\ \indent Moreover, to conclude this section, it is key to understand that the different strategies of Zurek, Deutsch and Wallace all rely on subjectivist and epistemic approaches to probability. Strategies of that kind have an old and respectable tradition going back at least to Bernoulli, Laplace and Poisson. In the 20$^{th}$ century they were strongly advocated by Borel, de Finetti, Ramsey and Keynes (who named the `indifference principle'), and by many others like Jeffreys, Savage and Lewis. In particular, the principal principle of Lewis `$\mathcal{C}(\alpha|\mathcal{P}_\alpha)\equiv\mathcal{P}_\alpha$', equating objective probabilities to subjective credences, often assumes that we can define `chances' objectively (i.e., in so-called objective Bayesianism, where probabilities are assigned on the basis of the maximum entropy principle). The strategies of Zurek, Deutsch and Wallace also require such an axiomatics but, unfortunately, it is not very clear what an objective chance or probability could be in the MWI (this is the source of the incoherence problem).
Therefore, even if we consider the formal proofs of Zurek, Wallace, Deutsch and others as very interesting for the quantitative problem, we still believe that they do not unambiguously clarify the incoherence problem. This is the motivation for the MMI model presented in the next section. \section{A deterministic and quantum version of the many-minds interpretation}\label{sec3} \indent It is often assumed by proponents of the MWI that the introduction of probability is not worse (and perhaps not better) than it is in other interpretations of quantum mechanics or even in other fields of physical science. The claim goes back to Everett \cite{Everett1957}, who saw his measure-theoretic deduction as being as good as the one used in classical statistical physics. More recently, Papineau \cite{Papineau1996,Papineau} and Wallace \cite{Wallace2012} repeated the same claim that probabilities are very obscure concepts and that the MWI is not in a worse position than, for example, GRW collapse or Bohmian models for discussing randomness and chance. Wallace \cite{Wallace2003a} and Zurek \cite{Zurek2003a}, following Deutsch \cite{Deutsch1999}, went further by claiming that the genuinely quantum Laplacian indifference, i.e., related to envariance and self-locating uncertainty, provides within the MWI framework an even better basis for a clean foundation of probability than in collapse or Bohmian approaches.\\ \indent As we saw, there are serious reasons to doubt the validity of such strong claims. First, in collapse interpretations such as GRW, or in the Copenhagen interpretation, the notion of infinite sequences is not problematic, and positing a frequency law like $\mathcal{P}_\alpha\equiv \lim_{N \to +\infty}\frac{N_\alpha}{N}$ (interpreted probabilistically) means that the systems know stochastically, i.e., in the long run, how to behave.
The stochastic rules in this single-world approach can be axiomatized and the (weak) law of large numbers allows us to connect probabilities and statistics. Second, the MMI of Albert and Loewer \cite{AlbertLoewer1988} is also based on a stochastic approach to probability and the model is self-consistent even though strongly dualistic. \\ \indent The same is true for the PWI, where probability arises from the initial conditions of similar systems typically distributed over a space-like surface. In the PWI, which like classical mechanics is fully deterministic, one must impose an `equivariant' distribution of particles and fields at one time in order to recover Born's rule at all other times. As for classical statistical mechanics, there are persistent debates about the probabilistic foundations of the PWI, but these debates are not about incoherence \emph{per se} (which is actually irrelevant in the de Broglie-Bohm framework); they are instead focused on the uniqueness of Born's rule and on the status of quantum equilibrium versus quantum non-equilibrium particle distributions \cite{Durr1992,Valentini}. The problems are very similar to those existing in statistical thermodynamics for justifying microcanonical and canonical ensembles and for describing the tendency to reach thermal equilibrium. In particular, we emphasize that there exists what could be called a minimalist PWI, advocated by Bell \cite{Bell2004} p.~129 and by Goldstein, D\"urr and Zangh\`{i} \cite{Durr1992}, where the Boltzmannian notion of typicality plays a central role for recovering Born's rule. This approach starts with the same methodology as Everett's work \cite{Barrett2012}, i.e., by introducing the preferred Everett measure $\mathcal{M}(h)$ assigned to histories $h$ (here defined in the coordinate configuration space for point-like particles or in field variables for continuous Bosonic fields).
From the weak law of large numbers we deduce, as in the MWI, that Born's rule holds with a near-unit Everett weight in the limit $N\rightarrow +\infty$.\\ \indent Moreover, unlike in the MWI, we can precisely and unambiguously define what we mean by an actual configuration, and this even for finite $N$. Indeed, taking as an example the non-relativistic de Broglie-Bohm dynamics for a system of $N$ electrons, we can write the actual density of Bohmian electrons at the spatial point $\mathbf{q}\in\mathbb{R}^{3}$: \begin{eqnarray} \rho_N(\mathbf{q},t)=\frac{1}{N} \sum_{i=1}^{N}\delta^{3}(\mathbf{q}^{(i)}_\Psi(t)-\mathbf{q})\simeq\lVert\Psi(\mathbf{q},t)\rVert^2\label{axiom2} \end{eqnarray} with $\mathbf{q}^{(i)}_\Psi\in\mathbb{R}^{3} $ some `typical' Bohmian paths for the electrons. The second, approximate equality means that Born's rule is accurately valid for this `history'\footnote{This relation is equivalent to the frequency relation $\frac{N_\alpha(h)}{N}\simeq\lVert\Psi_\alpha\rVert^2$ defined for some histories $h$ and which is valid even for finite $N$. Note that the accuracy increases with $N$ since the highly discrete sum of Dirac peaks approaches a continuous fluid with density $\lVert\Psi(\mathbf{q},t)\rVert^2$. A faster convergence is obtained by limiting our analysis to coarse-grained probability functions in some elementary but finite spatial cells.} and the Everett measure provides a quantitative figure of merit for that accuracy in the regime $N\gg 1$. We emphasize, however, that Everett's measure is not the only possibility in the PWI. This is indeed a measure-dependent problem (associated with the choice of the Universe's initial conditions) and constitutes the recurrent issue debated by Valentini on the one side \cite{Valentini} and Goldstein, D\"urr and Zangh\`{i} on the other side~\cite{Durr1992}.
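The sense in which the empirical density of Eq. \ref{axiom2} approaches $\lVert\Psi\rVert^2$ for large $N$ can be illustrated numerically. The sketch below assumes a 1D Gaussian $\lVert\Psi\rVert^2$ (a stand-in, not a system discussed in the text) and compares a coarse-grained histogram of $N$ sampled configurations with the Born density:

```python
import numpy as np

# Illustration of Eq. (axiom2) for an assumed 1D Gaussian |Psi|^2: the
# coarse-grained empirical density of N sampled configurations converges
# to the Born density as N grows (error shrinking roughly like 1/sqrt(N)).
rng = np.random.default_rng(0)

def coarse_grained_error(N, bins=20):
    q = rng.normal(size=N)                         # N typical configurations
    hist, edges = np.histogram(q, bins=bins, range=(-4, 4), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    born = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)   # |Psi(q)|^2
    return np.mean(np.abs(hist - born))

err_small = coarse_grained_error(10**3)
err_large = coarse_grained_error(10**6)
```

The coarse-grained deviation shrinks as $N$ grows, illustrating the footnote's remark that the discrete sum of Dirac peaks approaches the continuous fluid of density $\lVert\Psi(\mathbf{q},t)\rVert^2$.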
Therefore, in this framework, changing the measure for a non-equivariant one $\neq\lVert\Psi(\mathbf{q},t)\rVert^2$ selects a different set of typical histories in which Born's rule will not hold, corresponding to a different choice for the Universe's initial conditions. In other words, in the framework of the PWI, Wallace's formulation of the law of large numbers, `relative frequency tends to weight with high weight' \cite{Wallacevideo}, makes physical sense since the weight can be selected in order to agree with a physical quantity $\rho_N(\mathbf{q},t)$ as defined by Eq.~\ref{axiom2} and associated with an actual state of the Universe, i.e., $\mathbf{q}^{(1)}_\Psi(t),...,\mathbf{q}^{(N)}_\Psi(t)$ and specific initial conditions. The problem in the MWI is that such a choice is not univocally accepted and therefore (contrary to the PWI) the notion of probability is not clearly defined.\\ \indent We will not pursue the discussion about the meaning of probability in the PWI (for a `balanced' review of the general problem see \cite{Drezet}). Here, we will instead consider Bohmian and classical statistical mechanics as a motivation for new models applied to the MWI and MMI. In the following we will develop a speculative although mathematically precise and unitary toy model for the MMI (for related ideas see Lockwood~\cite{Lockwood1989,Lockwood1996a} and Donald~\cite{Donald1,Donald2,Donald3}).\\ \indent In our approach we first assume that the `all-is-wave' ontology advocated by Everett is true. Therefore, the quantum state $\Psi_t$ of the Universe has the same physical meaning as in the MWI. However, in order to recover Born's rule and objective probabilities we propose here to mix the theory with ingredients of the MMI. This assumes a completely unitary version of the mind, which we will describe with an idealized model.
We stress that our speculative toy model of quantum mind states (assuming a purely unitary ontology \`a la Everett) is introduced only in order to recover Born's rule. We don't claim that the Universe should necessarily be populated with such quantum observers, but only that a clear probabilistic interpretation of measurements is possible if we accept their existence. For this purpose, we start with the Albert and Loewer MMI \cite{AlbertLoewer1988} and go back to the example of Eq.~\ref{Albertstate}. However, now we replace Alex by some collective excitation of a memory device which we write $|E_0,\mathcal{O}_0^{(1)}\rangle$ before the interaction. $\mathcal{O}^{(i)}$ (with here $i=1$) is going to play the role of a single mind in the MMI. We consider a symmetric beam splitter and during the measurement operation we postulate the unitary evolution: \begin{eqnarray} |\Psi_0\rangle\otimes |E_0,\mathcal{O}_0^{(1)}\rangle\otimes |\spadesuit^{(1)}\rangle\rightarrow\sqrt{\frac{1}{2}}\left( |\uparrow\rangle\otimes |E_\uparrow,\mathcal{O}_\uparrow^{(1)}\rangle+|\downarrow\rangle\otimes |E_\downarrow,\emptyset^{(1)}\rangle\right) \otimes |\spadesuit^{(1)}\rangle, \nonumber \\ |\Psi_0\rangle\otimes |E_0,\mathcal{O}_0^{(1)}\rangle\otimes |\heartsuit^{(1)}\rangle\rightarrow\sqrt{\frac{1}{2}}\left( |\uparrow\rangle\otimes |E_\uparrow,\emptyset^{(1)}\rangle+|\downarrow\rangle\otimes |E_\downarrow,\mathcal{O}_\downarrow^{(1)}\rangle\right) \otimes |\heartsuit^{(1)}\rangle, \nonumber \\ \label{Drezetstate} \end{eqnarray} where $|\spadesuit^{(1)}\rangle$ and $|\heartsuit^{(1)}\rangle$ are two orthogonal normalized states of a single qubit taking part in the interaction. The physical meaning of this qubit is to act as an environment for the mind states, creating a form of `distinguishability' between the different alternatives in Eq. \ref{Drezetstate}. The superscript $i$ labels the qubit `family' which is related to $\mathcal{O}^{(i)}$.
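The deterministic branching rule encoded in Eq.~\ref{Drezetstate} can be summarized by a simple lookup: the qubit value decides which branch hosts the active mind state and which hosts the empty state. The following toy sketch (our illustrative names, not part of the formalism) makes this explicit:

```python
def branch_contents(qubit):
    """Read off the unitary evolution: a 'spade' qubit puts the active mind
    in the up branch, a 'heart' qubit puts it in the down branch; the other
    branch carries the empty state."""
    if qubit == "spade":
        return {"up": "active", "down": "empty"}
    if qubit == "heart":
        return {"up": "empty", "down": "active"}
    raise ValueError("qubit must be 'spade' or 'heart'")

def memorized_outcome(qubit):
    """The outcome the single mind actually memorizes for a given qubit."""
    branches = branch_contents(qubit)
    return next(b for b, content in branches.items() if content == "active")

# A sequence of qubits heart, spade, spade, heart then yields a definite record:
record = [memorized_outcome(q) for q in ["heart", "spade", "spade", "heart"]]
```

Everything here is deterministic: the only source of randomness will come from the statistical distribution of the qubits themselves.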
Here we start with only one family but we will have to introduce as many families $i=1,...,N$ as we have observer minds $\mathcal{O}^{(i)}$ (see Eqs. \ref{Drezetstaten0}-\ref{Drezetstaten3} below). The state $|E_\downarrow,\emptyset^{(1)}\rangle$ is a specific quantum state of the brain where the memory associated with the mind has been lost or destroyed, i.e., it is a kind of ground state. Yet, the brain and environment $E_\downarrow$ could keep some persistent information about the result $\downarrow$ (this is indicated by the state $E_\downarrow$). A similar comment could be made for $|E_\uparrow,\emptyset^{(1)}\rangle$. We emphasize that the presence of the empty states $|E_\downarrow,\emptyset^{(1)}\rangle,|E_\uparrow,\emptyset^{(1)}\rangle$ restores the psychophysical parallelism which was lost in the original MMI~\cite{AlbertLoewer1988}. Indeed, here the empty states are imprinted in the wave-function like empty particle modes in quantum electrodynamics. Here (unlike in~\cite{AlbertLoewer1988}) the active mind and empty states are not dualistically separated from the material world. This is clearly an improvement. \\ \indent Moreover, note that (in agreement with the MWI) everything is deterministic and unitary in this model: depending on the value of the qubit $|\spadesuit^{(1)}\rangle$ or $|\heartsuit^{(1)}\rangle$ the evolution follows one or the other of the two alternatives. Now comes the trick: we can easily introduce some randomness or molecular chaos concerning the state of the qubit, defined at, let us say, the beginning of the Universe. \begin{figure}[h] \begin{center} \includegraphics[width=10cm]{fig1.pdf} \caption{Principle of the quantum single-mind experiment using (a) a random sequence of $\heartsuit,\spadesuit,\spadesuit,\heartsuit...$ to drive the decision of a quantum machine (`observer') memorizing only one of the two outcomes, i.e., $|\uparrow\rangle$ or $|\downarrow\rangle$ of a quantum experiment (c or d).
This situation contrasts with the usual quantum observer, i.e., insensitive to the qubits $\heartsuit,\spadesuit,\spadesuit,\heartsuit...$, which would be in a state of quantum schizophrenia and unable to solve the incoherence problem (b).} \label{figure1} \end{center} \end{figure} More precisely, we assume that we have a large ensemble of $M$ such qubits in a product state like $|h^{(1)}\rangle=|\spadesuit^{(1,1)}\rangle\otimes...\otimes|\heartsuit^{(1,M)}\rangle$, where the number of $\heartsuit^{(1)}$ typically equals the number of $\spadesuit^{(1)}$, i.e., $\tilde{M}_{\spadesuit^{(1)}}\simeq\tilde{M}_{\heartsuit^{(1)}}$. We have here a `random' but classical distribution of the two states. By repeating the same experiment the observer will interact with one exemplar of the product state (i.e., $\heartsuit^{(1,j)}$ if the j$^{\textrm{th}}$ qubit is a heart or $\spadesuit^{(1,j)}$ if it is a spade). Therefore, for an observer mind $\mathcal{O}^{(1,j)}$ taken in the ensemble we can objectively define the probability for interacting with a spade or a heart as: \begin{eqnarray} \mathcal{P}_{\spadesuit^{(1)}}:=\frac{\tilde{M}_{\spadesuit^{(1)}}}{M}\simeq \frac{1}{2},&& \mathcal{P}_{\heartsuit^{(1)}}:=\frac{\tilde{M}_{\heartsuit^{(1)}}}{M}\simeq \frac{1}{2}.\label{thermal} \end{eqnarray} This probability law can be justified, as in classical or Bohmian mechanics, by using a typicality approach with equal measures for the two outcomes and by application of the Bernoulli/Laplace law of large numbers in the limit $M\rightarrow +\infty$.\\ \indent We emphasize that the typicality reasoning here is classical, unlike Everett's.
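The typicality statement of Eq.~\ref{thermal} can be checked numerically with a minimal sketch (illustrative only; a seeded pseudo-random generator plays the role of the `typical' product state): draw a history of $M$ qubits with equal measure for spade and heart and verify that the relative frequencies are close to one half:

```python
import random

def oven_sequence(m, seed=1):
    """A 'typical' history h: M qubits drawn independently with equal
    measure for spade and heart (the classical typicality assumption)."""
    rng = random.Random(seed)
    return [rng.choice(["spade", "heart"]) for _ in range(m)]

# One typical history of M = 200000 qubits and its relative frequencies:
h = oven_sequence(200_000)
p_spade = h.count("spade") / len(h)
p_heart = h.count("heart") / len(h)
```

By the Bernoulli/Laplace law of large numbers the deviation from $1/2$ is of order $1/\sqrt{M}\approx 2\times 10^{-3}$ here.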
Considering all the different possible histories $|h^{(1)}\rangle=\bigotimes_{k=1}^{M}|s_{k}^{(1,k)}\rangle$ (with $s_{k}=$ spade or heart) we need a density matrix~\footnote{Introducing the frequency operator $\hat{Q}_{s^{(1)}}=\sum_{k=1}^{M}\frac{\hat{\Pi}^{(k)}_{s^{(1)}}}{M}$ (see Eq.~\ref{hartle} in footnote \ref{foot1}) we have $\textrm{Tr}[\hat{Q}_{s^{(1)}}|h^{(1)}\rangle\langle h^{(1)}|]=\frac{M_{s^{(1)}}(h^{(1)})}{M}$ which for typical histories gives us: $\frac{M_{s^{(1)}}(\bar{h}^{(1)})}{M}\simeq \frac{1}{2}$. For the whole ensemble we also have $\textrm{Tr}[\hat{Q}_{s^{(1)}}\hat{\rho}^{(1)}]=\frac{1}{2}$.} \begin{eqnarray} \hat{\rho}^{(1)}=\sum_{h^{(1)}}\mathcal{M}(h^{(1)})|h^{(1)}\rangle\langle h^{(1)}| \end{eqnarray} with the probability measure $\mathcal{M}(h^{(1)})=\frac{1}{2^M}$. Note that here probabilities have an objective meaning associated with a distribution of particles (i.e., as for a thermal gas), and this approach doesn't require a subjective/epistemic ignorance \`a la Laplace. The actual state of our Universe is one of the typical $|h^{(1)}\rangle$ in the density matrix $\hat{\rho}^{(1)}$. As for a Gibbs ensemble in statistical mechanics, the other alternatives could also actually exist in separated Universes (forming a Gibbs multiverse~\cite{Aguire2011}). Therefore, the introduction of a mixture like $\hat{\rho}^{(1)}$ should not be thought of as a breaking of unitarity but rather as a structure added to the MWI in order to give a classical-like objective meaning to probabilities (i.e., as in the PWI). \\ \indent Moreover, from Eq. \ref{Drezetstate} we know that if a qubit is in the state $\spadesuit$ the unitary evolution forces the active observer state to be $|E_\uparrow,\mathcal{O}_\uparrow^{(1)}\rangle$, associated with a memory of the $\uparrow$ outcome. Of course there is also an empty state like $|E_\downarrow,\emptyset^{(1)}\rangle$ but this is not associated with a memory of the observer.
We can thus define an objective probability $\mathcal{P}(\mathcal{O}_\uparrow^{(1)})$ that must correspond to $\mathcal{P}_{\spadesuit^{(1)}}$. In other words we have: \begin{eqnarray} \mathcal{P}_{\spadesuit^{(1)}}=\mathcal{P}(\mathcal{O}_\uparrow^{(1)})\simeq \frac{1}{2},&& \mathcal{P}_{\heartsuit^{(1)}}=\mathcal{P}(\mathcal{O}_\downarrow^{(1)})\simeq \frac{1}{2}, \end{eqnarray} meaning that the probability (i.e., the relative frequency) for the observer mind $\mathcal{O}^{(1)}$ taken in the ensemble to memorize the $\uparrow$ (or $\downarrow$) quantum state is one half. This functionalist approach also gives a clear physical meaning to the subjective concept of `degree of belief'. Here, the principal principle identifying objective probabilities and subjective credences is physically justified if we identify the credence $\mathcal{C}(\uparrow|\mathcal{P}(\mathcal{O}_\uparrow^{(1)}))$ with $\mathcal{P}(\mathcal{O}_\uparrow^{(1)})$. This constitutes the pivotal result of our deduction for a single mind.\\ \indent This suggests the following narrative (see Fig. \ref{figure1}). Imagine a thermal source or oven of (distinguishable) quantum particles with two internal states $\spadesuit$ and $\heartsuit$. The distributions $\frac{\tilde{M}_{\spadesuit}}{M}$ and $\frac{\tilde{M}_{\heartsuit}}{M}$ are given by Eq. \ref{thermal} and justified by using an infinite Gibbs collective or ensemble as in thermodynamics. Assuming a hole in the oven walls (see Fig. \ref{figure1}(a)) we can define a random sequence of escaping particles in the typical ensemble of qubits belonging to the oven (mathematically this corresponds to a fair typical sample taken from a large population). This typical sequence $h\in [\heartsuit,\spadesuit,\spadesuit,\heartsuit...]$ satisfying Eq. \ref{thermal} will be used by a quantum observer to decide deterministically whether the mind will be aware of the recording of $|\uparrow\rangle$ or $|\downarrow\rangle$ (see Fig. \ref{figure1} (c,d)).
The model with the oven here clearly mirrors the discussion of the Bernoulli process in the usual MMI of Albert and Loewer (see the binomial formula Eq.~\ref{retrucmuca}). In the absence of such a mechanism (see Fig.~\ref{figure1}(b)) the observer mind would be in a schizophrenic superposition of $\uparrow/\downarrow$ information, leading to the incoherence difficulty of the MWI. Here we solve the issue for the particular case of a unitary model including minds, i.e., a unitary MMI. With the new mechanism proposed here (see Eq. \ref{Drezetstate}) we can give an objective meaning to randomness in the MMI. Therefore, here the fundamental quantum randomness of our Universe is revealed in the brain after interaction with qubits $\heartsuit,\spadesuit,\spadesuit,\heartsuit...$ deterministically prepared in a pseudo-random way. This is done while preserving the full unitarity of the quantum evolution (in contrast with the original MMI \cite{AlbertLoewer1988,Albert1994,Barrett1995}). The key feature in this model is the use of molecular chaos associated with the oven. Ultimately this is associated with a specific choice for the initial conditions of our Universe selecting what is typical or not. As in classical and Bohmian statistical mechanics, the notion of typicality is clearly connected with such initial conditions.\\ \indent In a second stage of our analysis we easily extend the pivotal result to an ensemble of many minds. Consider for example two minds (the generalization to $N$ minds being obvious, as we see below). We suppose the initial state in Eq. \ref{Drezetstate} transformed into $|\Psi_0\rangle\otimes |E_0,\mathcal{O}_0^{(1)},\mathcal{O}_0^{(2)}\rangle\otimes |s^{(1)}\rangle\otimes|s'^{(2)}\rangle$ where $\mathcal{O}_0^{(1)}$ and $\mathcal{O}_0^{(2)}$ are two mind states (collective excitations) unaware of each other and $|s^{(1)}\rangle\otimes|s'^{(2)}\rangle$ some spin states with $s=$ spade or heart, and $s'=$ spade or heart.
After the interaction we obtain four possible outcomes: \begin{eqnarray} |\Psi_0\rangle\otimes |E_0,\mathcal{O}_0^{(1)},\mathcal{O}_0^{(2)}\rangle\otimes |\spadesuit^{(1)}\rangle\otimes |\spadesuit^{(2)}\rangle\rightarrow\nonumber \\ \sqrt{\frac{1}{2}}\left( |\uparrow\rangle\otimes |E_\uparrow,\mathcal{O}_\uparrow^{(1)},\mathcal{O}_\uparrow^{(2)}\rangle+|\downarrow\rangle\otimes |E_\downarrow,\emptyset^{(1)},\emptyset^{(2)}\rangle\right) \otimes |\spadesuit^{(1)}\rangle\otimes |\spadesuit^{(2)}\rangle, \label{Drezetstaten0}\\ |\Psi_0\rangle\otimes |E_0,\mathcal{O}_0^{(1)},\mathcal{O}_0^{(2)}\rangle\otimes |\spadesuit^{(1)}\rangle\otimes |\heartsuit^{(2)}\rangle\rightarrow\nonumber \\ \sqrt{\frac{1}{2}}\left( |\uparrow\rangle\otimes |E_\uparrow,\mathcal{O}_\uparrow^{(1)},\emptyset^{(2)}\rangle+|\downarrow\rangle\otimes |E_\downarrow,\emptyset^{(1)},\mathcal{O}_\downarrow^{(2)}\rangle\right) \otimes |\spadesuit^{(1)}\rangle\otimes |\heartsuit^{(2)}\rangle, \label{Drezetstaten1}\\ |\Psi_0\rangle\otimes |E_0,\mathcal{O}_0^{(1)},\mathcal{O}_0^{(2)}\rangle\otimes |\heartsuit^{(1)}\rangle\otimes |\spadesuit^{(2)}\rangle\rightarrow\nonumber \\ \sqrt{\frac{1}{2}}\left( |\uparrow\rangle\otimes |E_\uparrow,\emptyset^{(1)},\mathcal{O}_\uparrow^{(2)}\rangle+|\downarrow\rangle\otimes |E_\downarrow,\mathcal{O}_\downarrow^{(1)},\emptyset^{(2)}\rangle\right) \otimes |\heartsuit^{(1)}\rangle\otimes |\spadesuit^{(2)}\rangle, \label{Drezetstaten2}\\ |\Psi_0\rangle\otimes |E_0,\mathcal{O}_0^{(1)},\mathcal{O}_0^{(2)}\rangle\otimes |\heartsuit^{(1)}\rangle\otimes |\heartsuit^{(2)}\rangle\rightarrow\nonumber \\ \sqrt{\frac{1}{2}}\left( |\uparrow\rangle\otimes |E_\uparrow,\emptyset^{(1)},\emptyset^{(2)}\rangle+|\downarrow\rangle\otimes |E_\downarrow,\mathcal{O}_\downarrow^{(1)},\mathcal{O}_\downarrow^{(2)}\rangle\right) \otimes |\heartsuit^{(1)}\rangle\otimes |\heartsuit^{(2)}\rangle.
\label{Drezetstaten3} \end{eqnarray} This situation clearly mirrors the result discussed in Section \ref{sec2b} surrounding Eq. \ref{trucmuc}. Now, as for the single-mind problem, we can introduce product states $\bigotimes_{k=1}^{M}(|s_{k}^{(1,k)}\rangle\otimes|s_{k}^{(2,k)}\rangle)$ with $s_{k}=$ spade or heart. Again we suppose a `typical' random distribution of spades and hearts such that in the product state the population is equally distributed. We have a kind of `\emph{Stosszahlansatz}' (molecular chaos hypothesis), as discussed in Boltzmann's statistical mechanics, which involves a hypothesis about the absence of correlation (i.e., independence) between the $|s_{k}^{(1,k)}\rangle$ and $|s_{k}^{(2,k)}\rangle$. Therefore, the probabilities read \begin{eqnarray} \mathcal{P}_{s^{(1)},s'^{(2)}}=\mathcal{P}_{s^{(1)}}\mathcal{P}_{s'^{(2)}}\simeq \frac{1}{4},\label{mini1} \end{eqnarray} and with the observer mind states: \begin{eqnarray} \mathcal{P}_{\spadesuit^{(1)},\spadesuit^{(2)}}=\mathcal{P}(\mathcal{O}_\uparrow^{(1)},\mathcal{O}_\uparrow^{(2)}),& \mathcal{P}_{\spadesuit^{(1)},\heartsuit^{(2)}}=\mathcal{P}(\mathcal{O}_\uparrow^{(1)},\mathcal{O}_\downarrow^{(2)}) \nonumber \\ \mathcal{P}_{\heartsuit^{(1)},\spadesuit^{(2)}}=\mathcal{P}(\mathcal{O}_\downarrow^{(1)},\mathcal{O}_\uparrow^{(2)}),& \mathcal{P}_{\heartsuit^{(1)},\heartsuit^{(2)}}=\mathcal{P}(\mathcal{O}_\downarrow^{(1)},\mathcal{O}_\downarrow^{(2)}).\label{mini2} \end{eqnarray} The previous formalism can be generalized to $N$ minds by introducing states like $|E_0,\mathcal{O}_0^{(1)},...,\mathcal{O}_0^{(N)}\rangle$ and using product spin-states $\bigotimes_{k=1}^{M}\bigotimes_{i=1}^{N}|s_{k}^{(i,k)}\rangle$.
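The Stosszahlansatz for the two qubit families can likewise be illustrated numerically (an illustrative sketch only; the generator seed stands for a typical choice of initial conditions): drawing the two families independently, the four joint relative frequencies of Eq.~\ref{mini1} all approach $1/4$:

```python
import random

def joint_frequencies(m, seed=2):
    """Two independent qubit families drawn with no mutual correlation (the
    molecular-chaos assumption); returns the relative frequency of each of
    the four joint outcomes."""
    rng = random.Random(seed)
    counts = {(s1, s2): 0
              for s1 in ("spade", "heart") for s2 in ("spade", "heart")}
    for _ in range(m):
        pair = (rng.choice(("spade", "heart")), rng.choice(("spade", "heart")))
        counts[pair] += 1
    return {pair: c / m for pair, c in counts.items()}

joint = joint_frequencies(400_000)
```

The factorization $\mathcal{P}_{s^{(1)},s'^{(2)}}=\mathcal{P}_{s^{(1)}}\mathcal{P}_{s'^{(2)}}$ is exactly the independence built into the two draws.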
The unitary evolution together with the Stosszahlansatz allows us to define the binomial probability \begin{eqnarray} \mathcal{P}(N_\uparrow,N_\downarrow)=\frac{N!}{N_\uparrow !N_\downarrow !}\mathcal{P}(\mathcal{O}_\uparrow)^{N_\uparrow}\mathcal{P}(\mathcal{O}_\downarrow)^{N_\downarrow},\label{retrucmucb} \end{eqnarray} where $N_\uparrow$ and $N_\downarrow$ are the numbers of observer mind states $\mathcal{O}_\uparrow$ and $\mathcal{O}_\downarrow$ in the two branches $|\uparrow\rangle$ and $|\downarrow\rangle$ respectively~\footnote{We here used the notations $\mathcal{P}(\mathcal{O}_{\uparrow}^{(i)})=\mathcal{P}(\mathcal{O}_\uparrow)$ and $\mathcal{P}(\mathcal{O}_{\downarrow}^{(i)})=\mathcal{P}(\mathcal{O}_\downarrow)$ and the fact that (from Eqs.~\ref{mini1},\ref{mini2}) we have $\mathcal{P}(\mathcal{O}_\alpha^{(i)},\mathcal{O}_\beta^{(j)})=\mathcal{P}(\mathcal{O}_\alpha^{(i)})\mathcal{P}(\mathcal{O}_\beta^{(j)})$ with $\alpha,\beta\in\{\uparrow,\downarrow\}$.}. This relation is obviously the same as Eq. \ref{retrucmuc} in the MMI of Albert and Loewer. Therefore, by a direct application of the law of large numbers we get: \begin{eqnarray} \lim_{N \to +\infty} \frac{N_\uparrow}{N}=\mathcal{P}(\mathcal{O}_\uparrow)=\frac{1}{2}\nonumber \\ \lim_{N \to +\infty} \frac{N_\downarrow}{N}=\mathcal{P}(\mathcal{O}_\downarrow)=\frac{1}{2} \label{drezet} \end{eqnarray} which shows that for typical configurations the minds are equiprobably distributed. As in the MMI of Albert and Loewer, the probability of a `mindless hulk' is vanishing. Here the deduction is very similar to the one made by Albert and Loewer~\cite{AlbertLoewer1988,Albert1994} (see the discussion in Section \ref{sec2b} after Eq.~\ref{retrucmubis} concerning the typicality of never meeting a mindless hulk).\\ \begin{figure}[h] \begin{center} \includegraphics[width=8cm]{fig2.pdf} \caption{A quantum experiment involving an observer with several minds (compare with Fig.~\ref{figure1}).
The observer is detecting a typical sequence of qubits $\spadesuit,\heartsuit,\spadesuit,...,\heartsuit$ prepared using the source (`oven') of Fig.~\ref{figure1} (a). This corresponds to a configuration for the different minds of the observer. } \label{figure2} \end{center} \end{figure} \indent The previous narrative of Fig.~\ref{figure1} can be used to illustrate the idea. Again the source or oven prepares a typical sequence like $h_1:= [\spadesuit^{(1,1)},...,\heartsuit^{(N,1)}]$ in a large ensemble. This sequence is used by the quantum observer to drive each of the $N$ different minds into an `aware of the result' state or a `not aware of the result' state (see Fig.~\ref{figure2}). At the statistical level the number of minds observing the outcome $\uparrow$ (or $\downarrow$) approaches $N/2$ for large $N$, as explained before. This means that typically the probability of a maverick state including a mindless hulk vanishes. Moreover, repeating the same experiment $M$ times (involving typical sequences of qubits $h_k=[s_{k}^{(1,k)},...,s_{k}^{(N,k)}]$ with $k=1,...,M$) leads to Born's rule for each mind following a random but typical path. \\ \indent In order to further generalize the previous procedure we can use the formal methodology of Zurek \cite{Zurek1998,Zurek2003a,Zurek2005}. For this we consider the time evolution $|\Psi_0\rangle\otimes |E_0\rangle \rightarrow \sqrt{\frac{1}{T}}\sum_{\alpha\in\Delta}|\alpha\rangle\otimes |E_\alpha\rangle $ for the symmetric Schmidt state ($T$ is an integer that defines the cardinality of $\Delta$). In the presence of the observer and environment, for a single mind we now assume, instead of Eq.
\ref{Drezetstate}: \begin{eqnarray} |\Psi_0\rangle\otimes |E_0,\mathcal{O}_0^{(1)}\rangle\otimes |\tilde{\beta}^{(1)}\rangle\rightarrow \sqrt{\frac{1}{T}} \sum_{\alpha\in\Delta, \alpha\neq\beta}|\alpha\rangle\otimes |E_\alpha,\emptyset^{(1)}\rangle \otimes |\tilde{\beta}^{(1)}\rangle\nonumber \\ +\sqrt{\frac{1}{T}}|\beta\rangle\otimes |E_\beta,\mathcal{O}_\beta^{(1)}\rangle \otimes |\tilde{\beta}^{(1)}\rangle \label{Drezetstaten} \end{eqnarray} where we introduced a $T$-level quantum system $|\tilde{s}^{(1)}\rangle$ (i.e., with $s\in\Delta$) to drive the system. If $s=\beta$ the mind state $\mathcal{O}_0^{(1)}$ evolves into the $\mathcal{O}_\beta^{(1)}$ channel and the other channels are empty. Naturally, we have $T$ relations like Eq. \ref{Drezetstaten} corresponding to the $T$ different levels $|\tilde{s}^{(1)}\rangle$ and to the $T$ different branches. In complete analogy with the previous case we define probabilities for the spin states and the related mind states $\mathcal{O}_\beta^{(i)}$ as: \begin{eqnarray} \mathcal{P}_{\beta^{(i)}}=\mathcal{P}(\mathcal{O}_\beta^{(i)})\simeq \frac{1}{T}. \end{eqnarray} From independence and the Stosszahlansatz we have, for two mind states: \begin{eqnarray} \mathcal{P}_{\alpha^{(i)},\beta^{(j)}}=\mathcal{P}(\mathcal{O}_\alpha^{(i)},\mathcal{O}_\beta^{(j)})=\mathcal{P}_{\alpha^{(i)}}\mathcal{P}_{\beta^{(j)}}=\mathcal{P}(\mathcal{O}_\alpha^{(i)})\mathcal{P}(\mathcal{O}_\beta^{(j)})\simeq \frac{1}{T^2}. \end{eqnarray} Finally, for a many-minds system (with $N$ mind states $\mathcal{O}^{(1)},...,\mathcal{O}^{(N)}$) we can define in analogy with Eq.
\ref{retrucmucb} the multinomial probability for having the distribution $\{N_\alpha\}$, where $N_\alpha$ is the number of mind states $\mathcal{O}_\alpha$ in the branch $|\alpha\rangle$ (with the constraint $\sum_{\alpha\in\Delta}N_\alpha=N$): \begin{eqnarray} \mathcal{P}(\{N_\alpha\})=\frac{N!}{\Pi_\alpha N_\alpha!} \Pi_\alpha\mathcal{P}(\mathcal{O}_\alpha)^{N_\alpha}\simeq \frac{N!}{\Pi_\alpha N_\alpha!}\frac{1}{T^N}. \label{drezetA}\end{eqnarray} In the typical regime we deduce from the law of large numbers the relation \begin{eqnarray} \lim_{N \to +\infty}\frac{N_\alpha}{N}=\mathcal{P}(\mathcal{O}_\alpha)=\frac{1}{T} \label{drezetbb} \end{eqnarray} which shows that with the state $\sqrt{\frac{1}{T}}\sum_{\alpha\in\Delta}|\alpha\rangle$ and for typical configurations the minds are equally distributed among the branches (i.e., the mindless hulk problem is again avoided). \\ \indent The last step of our deduction strictly follows the fine-graining trick of Zurek discussed in Section \ref{sec2d}. Indeed, starting with a quantum state $|\Psi_0\rangle=\sum_{a}\sqrt{\mathcal{P}_a}|a\rangle$ with $\mathcal{P}_a=\frac{T_a}{T}$ a rational number, we formally repeat the steps from Eq.~\ref{Zurek7} to Eq. \ref{Zurek10} and obtain, after entanglement with a specifically designed environment, the Schmidt state $|\Psi_0\rangle\otimes |E_0\rangle \rightarrow \sqrt{\frac{1}{T}}\sum_{\alpha\in\Delta}|\alpha\rangle\otimes |E_\alpha\rangle $. The rest of the reasoning is similar to the one leading to Eq. \ref{drezetA} and Eq. \ref{drezetbb}: we deduce the probability \begin{eqnarray} \lim_{N \to +\infty} \sum_{\alpha\in\Delta_a}\frac{N_\alpha}{N}=\sum_{\alpha\in\Delta_a}\mathcal{P}(\mathcal{O}_\alpha)=\frac{T_a}{T}:=\mathcal{P}_a \label{drezetc} \end{eqnarray} where $\Delta_a$ is the subset of $\Delta$ associated with the quantum state $|a\rangle$ (see Eq.~\ref{Zurek8}). This closes our derivation of Born's rule.
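The two quantitative ingredients of this last step, namely the concentration of the multinomial distribution around $N_\alpha/N=1/T$ and the exact recovery of a rational weight $\mathcal{P}_a=T_a/T$ by fine graining, can be checked with a short numerical sketch (the function names and the $T=3$ example are our illustrative choices, not part of the formalism):

```python
from fractions import Fraction
from math import exp, lgamma, log

def binomial_pmf(n, k, p):
    """Weight of configurations with N_alpha = k in one branch of the
    multinomial distribution; computed in log space so that it stays
    finite even for large n."""
    logc = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(logc + k * log(p) + (n - k) * log(1 - p))

def prob_fraction_near(n, p, eps):
    """Total weight of configurations with |N_alpha/n - p| <= eps; by the
    weak law of large numbers this tends to 1 as n grows."""
    return sum(binomial_pmf(n, k, p) for k in range(n + 1)
               if abs(k / n - p) <= eps)

def coarse_probabilities(branch_counts):
    """Fine graining: rational weights P_a = T_a/T become T equiprobable
    fine branches, T_a of them in Delta_a; summing 1/T over Delta_a gives
    back P_a exactly."""
    t = sum(branch_counts.values())
    return {a: Fraction(t_a, t) for a, t_a in branch_counts.items()}

# T = 3 equiprobable fine branches grouped into Delta_a1 (one branch) and
# Delta_a2 (two branches) recover P_a1 = 1/3 and P_a2 = 2/3 exactly:
probs = coarse_probabilities({"a1": 1, "a2": 2})
```

The concentration of `prob_fraction_near` toward unity as $N$ increases is precisely the typicality statement behind the limit relations above.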
We stress that even if the formal deduction is similar to the one made by Zurek with his coarse-graining trick, we don't use envariance as a fundamental property here. Indeed, envariance is seen by Zurek as a quantum version of the Keynes-Laplace indifference principle. Here, however, we want to ground the MMI (and thus the MWI) on an objective definition of probability based on molecular chaos in the initial conditions of quantum systems (like our qubits $\spadesuit/\heartsuit$). This is a strategy used in classical statistical mechanics (or in the PWI) that doesn't require subjective ignorance at a fundamental level (even if ignorance can play a role in the application of the probability calculus).\\ \indent We emphasize that the role of the environment is central since it allows us to define unambiguously decohered Worlds evolving independently. Therefore, the theory agrees for all practical purposes with the standard quantum mechanical interpretation of decoherence. As in Zurek's existential interpretation \cite{Zurek1998,Zurek2003b} the unitary evolution is all that is required for the theory to hold. However, here Born's rule results as a contingent consequence of the dynamics, relying on the Stosszahlansatz hypothesis. As in Everett's (but unlike in Zurek's) work, the role of the observer interacting with the gas of $T$-level systems is here central. Also, we stress that the theory we propose here is naturally generalized to systems of several observers with huge numbers of minds $N\rightarrow +\infty $. This condition is mandatory, as stressed by Albert and Loewer, in order to recover a common experience agreement between the separate perspectives of the various observers (i.e., in order to avoid the mindless hulk dilemma discussed in \cite{AlbertLoewer1988,Albert1994}). Finally, we emphasize once more that the model is fully unitary and doesn't require a mind/brain dualism (i.e., unlike some readings of the original MMI \cite{AlbertLoewer1988}).
Here, the minds are physically linked to the brain and define some quantum excitations of a mechanical structure. \section{Conclusion and comments}\label{secconc} \indent To conclude this work several remarks are necessary~\footnote{An interesting issue that we don't consider in this article (for reasons of space) concerns Bell's theorem and the notion of non-locality~\cite{Bell2004,Bricmont2016,Maudlin2019} studied in connection with the MWI \cite{Tipler2014} and the MMI \cite{AlbertLoewer1988,Albert1994}.}. First, observe that the present derivation is strongly related to Zurek's work about the Laplace indifference principle. Going back to Eq. \ref{Drezetstaten1} and Eq. \ref{Drezetstaten2} we see that the main difference concerns the permutation of the two observer mind states $\mathcal{O}^{(1)}$ and $\mathcal{O}^{(2)}$ between the two branches corresponding to the observable $|\uparrow\rangle$ or $|\downarrow\rangle$. Removing the irrelevant degrees of freedom we see that at the end of the unitary evolution we get either \begin{eqnarray} \sqrt{\frac{1}{2}}\left( |\uparrow\rangle\otimes |E_\uparrow,\mathcal{O}_\uparrow^{(1)},\emptyset^{(2)}\rangle+|\downarrow\rangle\otimes |E_\downarrow,\emptyset^{(1)},\mathcal{O}_\downarrow^{(2)}\rangle\right), \label{Drezetstaten1B} \end{eqnarray} or \begin{eqnarray} \sqrt{\frac{1}{2}}\left( |\uparrow\rangle\otimes |E_\uparrow,\emptyset^{(1)},\mathcal{O}_\uparrow^{(2)}\rangle+|\downarrow\rangle\otimes |E_\downarrow,\mathcal{O}_\downarrow^{(1)},\emptyset^{(2)}\rangle\right), \label{Drezetstaten2B} \end{eqnarray} depending on which state (i.e., $|\spadesuit^{(1)}\rangle\otimes |\heartsuit^{(2)}\rangle$ or $|\heartsuit^{(1)}\rangle\otimes |\spadesuit^{(2)}\rangle$) Nature `randomly' assigned to the full system. Now, the objective probability (i.e., relative frequency) of each evolution is typically $\frac{1}{2}$, in agreement with our previous discussion of the Stosszahlansatz.
Therefore, we get \begin{eqnarray} \mathcal{P}(\mathcal{O}_\uparrow^{(i)})=\mathcal{P}(\mathcal{O}_\downarrow^{(i)})=\frac{1}{2} \label{DrezetEND} \end{eqnarray} for $i=1,2$. There is complete indifference for the observer mind state $i$ about where it will finish its journey. This conveys the spirit of the Laplace indifference principle, which is here driven by symmetry, as for a classical die toss. On the one hand, this clean self-locating uncertainty for each mind state is completely classical since it is driven by statistical distributions associated with a Stosszahlansatz for the heart and spade permutations. On the other hand, this result is fully quantum and unitary. Unlike in the original MMI of Albert and Loewer, no genuine stochastic process breaking the unitarity of the quantum evolution has to be invoked. Compared to the self-locating discussions given by Zurek, Deutsch, Wallace, and Carroll and Sebens, our procedure doesn't suffer from the incoherence problem associated with the standard MWI. Clearly, this means that we don't try to stick rigorously to the usual debate among proponents of the MWI (who probably will not agree with our solution~\footnote{In the usual MWI the incoherence problem explicitly excludes initial-condition uncertainty and molecular chaos as a source of randomness. This is not the case in our MMI, where we introduce ingredients of statistical mechanics often used in hidden-variable theories like the PWI.}). The price to be paid is of course high, since we need to define a unitary version of the MMI and thus have to introduce several mind states to avoid the schizophrenic quantum superposition of the Schr\"odinger cat. \\ \indent The previous analysis prompts at least two potential criticisms. The first criticism concerns the causal structure of the model used in this work, which is very conspiratorial.
Indeed, the model used by Zurek is already conspiratorial since in order to recover Born's rule for a general quantum state we must include a fine-graining procedure (which we named a trick) that looks physically superdeterministic. In this approach the fine-graining procedure can be experimentally implemented by using logical gates and beam splitters (see for instance the discussions and proposals made by Vaidman \cite{Vaidman2020}). The idea is to introduce states like $|a^{(S)}\rangle=\frac{1}{\sqrt{N_a}}\sum_{\alpha\in\Delta_a}|\alpha^{(S)}\rangle$ to transform a general wave-function given by Eq. \ref{Zurek7} into a symmetric Schmidt state as given by Eq. \ref{Zurek9}. However, this fine graining is necessarily wave-function dependent, and by changing the probability coefficients in the initial state we would have to completely modify the fine grains for future experiments. Therefore, from the point of view of causality, where the observer is selecting the beam splitters and other apparatus, it looks as if the fine grains were decided in advance, in a conspiratorial way, to reproduce Born's rule for a very specific problem. This is of course not necessarily a fatal objection if we are ready to accept such features. Many other interpretations of quantum mechanics involve superdeterministic or even retrocausal properties, but we should at least be aware of the problem. \\ \indent The second criticism concerns of course the notion of minds introduced in the present work. We fully agree that this notion is speculative. In fact, Albert recently presented negative comments concerning the value of this whole business about many minds and about his own work with Loewer.
Albert wrote: \begin{quote} \textit{The many-minds interpretation of quantum mechanics that Barry Loewer and I discussed twenty-five or so years ago, and which is rehearsed in chapter of \cite{Albert1994}, was a (bad, silly, tasteless, hopless, explicitely dualist) attempts of coming to terms with that realization.}~\cite{Albert2015}, p.~163. \end{quote} The model discussed here is presented as a solution for making sense of the MMI and MWI, or maybe, should we say, for saving the whole MWI enterprise. Yet, in recent years several authors have become interested in the possible role of minds and consciousness in the interpretations of quantum mechanics. So maybe this is a good strategy to try. The `toy' model considered here is far from being complete. However, our model is fully deterministic and preserves unitarity together with the core ideas of the functionalist approach to the mind/body problem. It preserves all the advantages of the original MMI without its bad features, such as dualism and stochasticity. Moreover, our approach is physically appealing from the statistical point of view since it relies on the `typical' distribution of qubits $\heartsuit,\spadesuit,...$ for driving the various minds. Therefore, as in classical and Bohmian statistical mechanics, this randomness is associated with a typical choice of initial conditions for our Universe. This shows that in order to recover Born's rule or the Born-Vaidman postulate in the fully unitary MWI and MMI we have to introduce random elements associated with the cosmological initial conditions. Like in classical or Bohmian mechanics, these initial conditions are contingent elements which must be added to the theory in order to justify the statistical observations. We hope that the model suggested here will open new perspectives and motivate further work in this fascinating area.\\
\section{Introduction} The cover time $t_{\mathrm{cov}}(G)$ of a graph $G$ is the expected number of steps a simple random walk takes to visit every vertex of the graph $G$, starting from the worst possible vertex. It has been studied extensively by computer scientists, due to its intrinsic appeal and its applications to designing universal traversal sequences \cite{AKLLR, Bridgland, Broder}, testing graph connectivity \cite{AKLLR, KR}, and protocol testing \cite{MP}; see \cite{aldous2} for an introduction to cover times. Sophisticated methods to estimate the cover time have been developed \cite{Feige-up, Feige-lower, Matthews, KKLV}. One of the most precise bounds was obtained by Kahn, Kim, Lov{\'a}sz and Vu \cite{KKLV}. They gave polynomially computable upper and lower bounds that differ by a factor of order $(\log\log n)^2$. This breakthrough left several questions open: \begin{description} \item[(i)] Can the bounds in \cite{KKLV} be represented by an explicit formula for concrete graphs of interest? \item[(ii)] For such graphs, can the $(\log\log n)^2$ factor be removed? \end{description} In this work we improve the upper bound from \cite{KKLV} and show the resulting estimate is sharp up to a constant factor, and explicitly computable, for a large variety of graphs, in particular random graphs. Let $G=(V,E)$ be a simple graph and write $R_{{\bf{eff}}}(x,y)$ and $d(x,y)$ for the {\em effective resistance} and graph distance between two vertices $x,y\in V$, respectively. See e.g. \cite{LP} or \cite{LPW} for definitions and properties of effective resistance. It is known that $R_{{\bf{eff}}}(x,y) \leq d(x,y)$ and that $R_{{\bf{eff}}}(\cdot, \cdot)$ forms a metric on $G$. 
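These quantities are directly computable on small examples. As an illustration (not part of the formal argument; the function names and the test graph are our own choices), the following Python sketch computes effective resistances from the Moore--Penrose pseudoinverse of the graph Laplacian and checks, on a $6$-cycle, the two properties just stated: $R_{{\bf{eff}}}(x,y) \leq d(x,y)$ and the fact that $R_{{\bf{eff}}}(\cdot,\cdot)$ is a metric.

```python
import itertools
import numpy as np

def effective_resistance_matrix(adj):
    """R_eff(x, y) = L+[x,x] + L+[y,y] - 2 L+[x,y], where L+ is the
    Moore-Penrose pseudoinverse of the combinatorial Laplacian."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

def graph_distances(adj):
    """All-pairs shortest-path distances via Floyd-Warshall."""
    n = len(adj)
    D = np.where(np.asarray(adj) > 0, 1.0, np.inf)
    np.fill_diagonal(D, 0.0)
    for k in range(n):
        D = np.minimum(D, D[:, k][:, None] + D[k, :][None, :])
    return D

# Cycle on 6 vertices: R_eff between vertices at graph distance k is k(n-k)/n.
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
R = effective_resistance_matrix(adj)
D = graph_distances(adj)
assert np.allclose(R[0, 2], 2 * (n - 2) / n)  # two parallel paths: 2*4/6
assert np.all(R <= D + 1e-9)                  # R_eff(x,y) <= d(x,y)
# Triangle inequality: R_eff is a metric.
for x, y, z in itertools.permutations(range(n), 3):
    assert R[x, z] <= R[x, y] + R[y, z] + 1e-9
```

The closed form $k(n-k)/n$ for the cycle follows from the series/parallel laws, which is what the first assertion checks.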
For $x\in V$ and a real number $R>0$ we write $B_{{\bf{eff}}}(x, R)$ for the ball of radius $R$ in the resistance metric, that is, $$ B_{{\bf{eff}}}(x, R) = \{ v \in G : R_{{\bf{eff}}}(x,v) \leq R \} \, .$$ \begin{theorem}\label{thm-cover} Let $G=(V,E)$ be a finite graph with diameter $R$ in the resistance metric. For $i\in \mathbb{N}$, let $A_i =A_i(G)$ be a set of minimal size such that \begin{eqnarray}\label{def-a-cov} G \subset \bigcup_{ v \in A_i} B_{{\bf{eff}}} \big (v, {R \over 2^i}\big )\, ,\end{eqnarray} and write $\alpha_i = 2^{-i}\log |A_i|$. Then there exists a universal constant $C>0$ such that \begin{eqnarray} \label{dudley} t_{\mathrm{cov}} \le C\big(\mbox{$\sum_{i=1}^{\log_2 \log n}$} \sqrt{\alpha_i}\big)^2 R |E| \,.\end{eqnarray} \end{theorem} The right-hand side can be approximated up to constant factors in polynomial time; see Remark \ref{approx}. Theorem \ref{thm-cover} is a refinement of \cite{KKLV}, in which it is shown that \begin{eqnarray} \max_i \alpha_i R |E|\leq t_{\mathrm{cov}}(G) \leq C (\log\log n)^2 \cdot \max_i \alpha_i R |E|.\end{eqnarray} The lower bound is a variant of Matthews' estimate for cover times \cite{Matthews}, and the upper bound is the main contribution of \cite{KKLV}. We refine the methods of \cite{KKLV} to deduce the stronger statement of Theorem \ref{thm-cover} (clearly, $(\sum_{i=1}^{\log_2 \log n}\sqrt{\alpha_i})^2 \leq (\log_2 \log n)^2 \max_i \alpha_i$). This new bound turns out to be sharp in many concrete examples where we can show that $\max_i \alpha_i$ and $(\sum_{i=1}^{\log_2 \log n}\sqrt{\alpha_i})^2$ are of the same order. Such examples are presented in Theorems \ref{evolution} and \ref{covercrit} below. \\ Cooper and Frieze \cite{CF} studied the cover time of the largest component of the Erd\H{o}s-R\'enyi \cite{ER59} random graph model $G(n,p)$, that is, the random graph obtained from the complete graph $K_n$ by retaining each edge with probability $p$ independently.
It is well known that if $p={c\over n}$ for some $c>1$, then the largest connected component, ${\mathcal{C}}_1$, is of size about $xn$ with probability tending to $1$, where $x=x(c)$ is the unique solution in $(0,1)$ of $x=1-e^{-cx}$. Cooper and Frieze \cite{CF} established the asymptotics for the cover time in this regime, $$ t_{\mathrm{cov}}({\mathcal{C}}_1) \sim \varphi(c) n \log^2 n \quad \mbox{ with } \quad \varphi(c) = {cx(2-x) \over 4(cx - \log c)}$$ with probability tending to $1$ as $n \to \infty$. Since $\varphi(c)$ tends to $1$ as $c\to 1$, one might be tempted to guess that $t_{\mathrm{cov}}({\mathcal{C}}_1)$ for $G(n,1/n)$ is of order $n \log^2 n$. However, it is known~\cite{NP1} that the maximal hitting time between two vertices in ${\mathcal{C}}_1$ is typically of order $n$, so Matthews' bound \cite{Matthews} shows that $t_{\mathrm{cov}}({\mathcal{C}}_1)$ is $O(n \log n)$. In fact, in $G(n,1/n)$ the largest component ${\mathcal{C}}_1$ is roughly of size $n^{2/3}$ \cite{ERDREN, bollo, Lucz}, and with probability uniformly bounded away from $0$ it is a tree. Aldous \cite{Aldous} proved that a random tree on $k$ vertices has cover time of order $k^{3/2}$ (see Theorem \ref{aldoustree} for a precise statement and an alternative proof). Combining these facts yields that $t_{\mathrm{cov}}({\mathcal{C}}_1)$ in $G(n,{1 \over n})$ is of order $n$ with probability uniformly bounded away from $0$. In the following theorem we show that this probability tends to $1$, and moreover, we show how the order of the cover time continuously evolves from the critical regime $c=1$ to the supercritical regime $c>1$. \begin{theorem} \label{evolution} Let $t_{\mathrm{cov}}({\mathcal{C}}_1)$ denote the cover time of the largest component of $G(n,p)$ and let $\lambda\in \mathbb{R}$ be fixed and $\varepsilon(n)>0$ be a sequence such that $\varepsilon(n)\to 0$ but $n^{1/3} \varepsilon(n) \to \infty$.
Then \begin{enumerate} \item[(a)] If $p={1 - \varepsilon(n) \over n}$, then for any $\delta>0$ there exists $B>0$ such that $$ \P\Big ( B^{-1} \varepsilon^{-3} \log^{3/2}(\varepsilon^3 n) \leq t_{\mathrm{cov}}({\mathcal{C}}_1) \leq B \varepsilon^{-3} \log^{3/2}(\varepsilon^3 n) \Big ) \geq 1-\delta \, .$$ \item[(b)] If $p={1 + \lambda n^{-1/3} \over n}$, then for any $\delta>0$ there exists $B>0$ such that $$ \P\Big ( B^{-1} n \leq t_{\mathrm{cov}}({\mathcal{C}}_1) \leq B n \Big ) \geq 1-\delta \, .$$ \item[(c)] There exists a constant $C>0$ such that if $p={1 + \varepsilon(n) \over n}$, then $$ \P \Big ( C^{-1} n \log^2 (\varepsilon^3 n) \leq t_{\mathrm{cov}}({\mathcal{C}}_1) \leq C n \log^2 (\varepsilon^3 n) \Big ) \to 1 \, .$$ \end{enumerate} \end{theorem} Theorem \ref{thm-cover} also allows us to prove sharp bounds on the cover time for critical percolation clusters, even when the underlying graph is {\em not} the complete graph. Given a graph $G$ on $n$ vertices and $p\in[0,1]$, the random graph $G_p$ is obtained from $G$ by retaining each edge with probability $p$ independently. In the special case of $G = K_n$, this yields the Erd\H{o}s-R\'enyi graph $G(n,p)$. For a vertex $v\in G$ we write ${\mathcal{C}}(v)$ for the connected component in $G_p$ containing $v$, and denote by ${\mathcal{C}}_1$ the largest connected component of $G_p$. We are interested in critical percolation in which $|{\mathcal{C}}_1|\approx n^{2/3}$. This occurs in numerous underlying graphs $G$.
A partial list of examples is: \begin{enumerate} \item The complete graph on $n$ vertices \cite{ERDREN, bollo, Lucz} with $p={1 + \Theta(n^{-1/3}) \over n}$, \item A random $d$-regular graph \cite{NP2, Pittel} with $p={1 + \Theta(n^{-1/3}) \over d-1}$, \item Expanders of high girth and degree $d$ \cite{Nachmias} with $p={1 + \Theta(n^{-1/3}) \over d-1}$, \item The Hamming hypercube $\{0,1\}^m$ \cite{BCHSS} with $p$ satisfying $\mathbb{E}_p|{\mathcal{C}}(v)|=\Theta(n^{1/3})$, \item Discrete tori $\mathbb{Z}_m^d$ for large but fixed dimension $d$ with $p=p_c(\mathbb{Z}^d)$ or $p$ satisfying $\mathbb{E}_p|{\mathcal{C}}(v)|=\Theta(n^{1/3})$ \cite{BCHSS, HH0, HH}. \end{enumerate} In all the examples above it is known that for any $\delta>0$ there exists $B=B(\delta)>0$ such that $$ \P_p\big ( B^{-1} n^{2/3} \leq |{\mathcal{C}}_1| \leq Bn^{2/3} \big ) \geq 1-\delta \, .$$ The following theorem is a generalization of part (b) of Theorem \ref{evolution}, and states that in these cases $t_{\mathrm{cov}}({\mathcal{C}}_1)$ has order $n$. This means that the cover time of the largest component has the same order as the cover time of a random tree on the same number of vertices. We note that unlike the $G(n,p)$ case, in examples 4 and 5, the probability that the largest component is a tree tends to zero as the volume grows, so the Aldous estimate \cite{Aldous} does not apply. \begin{theorem} \label{covercrit} In examples $1-5$ above, we have that for any $\delta>0$ there exists $B=B(\delta)>0$ such that $$ \P_p\big ( B^{-1} n \leq t_{\mathrm{cov}}({\mathcal{C}}_1) \leq Bn \big ) \geq 1-\delta \, .$$ \end{theorem} \noindent In fact, in Section \ref{criticalsec} we provide a general criterion for the conclusion of Theorem \ref{covercrit} to hold, which applies to examples $1-5$, see Theorem \ref{criticalgraphs}. 
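Aldous's $k^{3/2}$ bound for the cover time of a random tree is easy to probe numerically. The sketch below is our own illustration (the sampler and walk are not taken from this paper): it draws a uniform labelled tree from a random Pr\"ufer sequence and runs a simple random walk until every vertex has been visited.

```python
import heapq
import random

def random_tree(n, rng):
    """Uniform labelled tree on {0,...,n-1} via a random Pruefer sequence."""
    if n <= 2:
        return {i: [j for j in range(n) if j != i] for i in range(n)}
    seq = [rng.randrange(n) for _ in range(n - 2)]
    degree = [1] * n
    for v in seq:
        degree[v] += 1
    adj = {i: [] for i in range(n)}
    leaves = [i for i in range(n) if degree[i] == 1]
    heapq.heapify(leaves)
    for v in seq:
        leaf = heapq.heappop(leaves)      # smallest current leaf
        adj[leaf].append(v); adj[v].append(leaf)
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    u, w = heapq.heappop(leaves), heapq.heappop(leaves)
    adj[u].append(w); adj[w].append(u)
    return adj

def cover_time_walk(adj, start, rng):
    """Number of steps until a simple random walk has visited every vertex."""
    visited, v, steps = {start}, start, 0
    while len(visited) < len(adj):
        v = rng.choice(adj[v])
        visited.add(v)
        steps += 1
    return steps

rng = random.Random(42)
k = 400
tree = random_tree(k, rng)
steps = cover_time_walk(tree, 0, rng)
assert steps >= k - 1  # trivial bound: at most one new vertex per step
```

With $k=400$ the walk typically needs a few thousand steps, consistent with the $k^{3/2}$ scaling; the assertion checks only the trivial lower bound, since the cover time of one run is random.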
\\ \noindent {\bf Remark.} The {\em blanket\/} time $B$ is the expected first time when the local times at all vertices are within a factor of $2$ of each other (the local time at a vertex $v$ is the number of visits to $v$ divided by the degree of $v$). This quantity was introduced by Winkler and Zuckerman \cite{WZ} (we use the definition of \cite{KKLV}) who conjectured that $B = O(t_{\mathrm{cov}})$ for any graph. The bounds in Theorems $1.1-1.3$ also apply to $B$ in place of $t_{\mathrm{cov}}$. This will be clear from the proofs. \\ Finally, it is natural to guess that adding edges to a graph can only decrease the cover time. However, this is not the case, as shown by the following example. Let $K_n^*$ be the graph obtained from $K_n$ (the complete graph on $n$ vertices) by adding a new vertex $v$ and connecting it to one vertex of $K_n$. The cover time of $K_n^*$ is easily seen to be of order $n^2$. On the other hand, if we replace $K_n$ by $H_n$, a bounded degree expander on $n$ vertices, and construct $H_n^*$ by adding a new vertex $v$ and connecting it to one vertex of $H_n$, then the cover time of $H_n^*$ is of order $n \log n$. Since $H_n^*$ is a subgraph of $K_n^*$ on the same $n+1$ vertices, we conclude that adding an edge to a graph may increase the cover time. The increase is at most by a constant factor: \begin{proposition}\label{addedge} Let $G$ be a connected graph and let $u,v\in G$ be two vertices. Let $G^+$ be the graph obtained from $G$ by adding the edge $\{u,v\}$ (if an edge connecting these two vertices already exists, then we add a multiple edge, and if $u=v$, then we add a loop).
Then we have $$ t_{\mathrm{cov}}(G^+) \leq 4 t_{\mathrm{cov}}(G) \, .$$ \end{proposition} \section{Proof of Theorem \ref{thm-cover}} Let $S_t$ be a simple random walk on $G$, and for an integer $t\geq 0$, define the {\em local time} $L^v_t$ of a vertex $v\in V$ by \begin{equation} L^v_t \stackrel{\scriptscriptstyle\triangle}{=} \frac{1}{d_v} \sum_{k=0}^t 1_{\{S_k = v\}}\,, \mbox{ for all } v\in V \mbox{ and } t\in \mathbb{N}\,, \end{equation} where $d_v$ is the degree of vertex $v$. Furthermore, let $\tau^v_k = \min \{t\in \mathbb{N}: L^v_t = k/d_v\}$ be the time of the $k$-th visit of the random walk to $v$. The following lemma of \cite{KKLV} implies that if the local time at a vertex $u$ is large, then with high probability, the local time is also large at vertices $v$ that are close to $u$ in the resistance metric. \begin{lemma}\cite{KKLV}*{Lemma 5.2} \label{lem:ltbnd} For all $u,v\in V$, numbers $\lambda \geq 0$ and $t\in \mathbb{N}$ we have \begin{equation*} \P_u\big( L^u_{\tau^u_t} - L^v_{\tau^u_{t}} \geq \lambda \big) \leq \exp \big(-\tfrac{\lambda^2}{4 t R_{\mathrm{eff}}(u, v)}\big)\,. \end{equation*} \end{lemma} We use an idea of Kolmogorov \cite{stroock}*{page 91}. For all $i\geq 1$ and for each $u \in A_{i}$, we can always select $v\in A_{i-1}$ such that $R_{\bf{eff}}(u,v) \leq 2^{-(i-1)} R$ (see (\ref{def-a-cov})). Write $v = h(u)$. 
Set $\alpha'_i = \alpha_i \vee 2^{-i/2}$ and define \[\Psi = 128 (\mbox{$\sum_{i=1}^\infty $} \mbox{$\sqrt{\alpha'_i}$})^2, \mbox{ and } \beta_i = \frac{\sqrt{\alpha'_i}}{2\sum_{i=1}^\infty \sqrt{\alpha'_i}}\mbox{ for all } i\in \mathbb{N}\,.\] For $i\in \mathbb{N}$, let $t_i = (1 - \sum_{j=1}^{i} \beta_j) \Psi$, and for $u\in A_{i}$ define $M_i(u)$ to be the difference of the local times of vertices $h(u)$ and $u$ at time $\tau^{h(u)}_{t_{i-1} R}$, by \[ M_i(u) = t_{i-1} R - L^u_{\tau^{h(u)}_{t_{i-1}R}} \, .\] Lemma~\ref{lem:ltbnd} then gives that \[ \P( M_i(u) \geq \beta_i \Psi R) \le \exp\Big( -\frac{(\beta_i \Psi R)^2 }{ 4 R 2^{-(i-1)} \cdot t_{i-1} R} \Big) \leq e^{- 2^{i+1} \alpha'_i }\,.\] Define $\displaystyle M_i = \max_{u\in A_{i}} M_i(u)$. Recalling the definition of $\alpha_i$ and $\alpha'_i$, we apply a union bound and get \[\P(M_i \geq \beta_i \Psi R) \leq |A_{i}| e^{- 2^{i+1} \alpha'_i } \leq \mathrm{e}^{- 2^i \alpha'_i} \,.\] It follows that \begin{eqnarray}\label{hitall} \P \Big ( \mbox{$\bigcup_{i \geq 1}$} \{M_i \geq \beta_i \Psi R\} \Big ) \leq \sum _{i \geq 1} \mathrm{e}^{ - 2^i \alpha'_i} \leq \sum_{i\geq 1} \mathrm{e}^{-2^{i/2}} \leq \frac{2}{3} \, .\end{eqnarray} Now, take $v \in V$ and write $\tau_{\mathrm{cov}}$ for the cover time of the random walk. Provided that the event $\mathcal{M} \stackrel{\scriptscriptstyle\triangle}{=} \bigcap_{i\geq 1}\{M_i \leq \beta_i \Psi R\}$ occurs, we have that $L^u_{\tau^v_{\Psi R}} \geq (1- \sum_{j=1}^{i} \beta_j) \Psi R $ for all $u\in A_i$ and hence $L^u_{\tau^v_{\Psi R}} \geq \Psi R/2$ by the definition of $\beta_i$. In particular, on the event $\mathcal{M}$ every vertex in the graph has been visited at least once by time $\tau^v_{\Psi R}$. Combined with \eqref{hitall}, it follows that $$\P_v(\tau_{\mathrm{cov}} \geq \tau^v_{\Psi R}) \leq {2 \over 3} \, ,$$ and hence $\mathbb{E}_v \tau_{\mathrm{cov}} \leq 3 \mathbb{E}_v \tau^v_{\Psi R}$.
The expected return time to $v$ satisfies $\mathbb{E}_v T^+_v = {2 |E| \over d_v}$ whence \begin{equation}\label{eq-cover-ball}\mathbb{E}_v \tau_{\mathrm{cov}} \leq 3 \Psi R d_v \mathbb{E}_v T^+_v = 6 \Psi R |E| \, .\end{equation} Since the above holds for all $v\in V$, we have $t_{\mathrm{cov}} \leq 6 \Psi R |E|$. Note that $|A_i| \leq n$ for all $i\in \mathbb{N}$ and hence $\sum_{i\geq \log_2 \log n} \sqrt{\alpha_i} = O(1)$. Observing also that $\alpha'_i \leq \alpha_i + 2^{-i/2}$, we get $\Psi \leq 256 ( (\sum_{i=1}^{\log_2 \log n} \sqrt{\alpha_i})^2 + 16)$. This completes the proof of the theorem, together with the fact that $\alpha_1 \geq \frac{1}{2} \log 2$ (since $|A_1|$ has to be at least $2$). \begin{remark}\label{approx} Note that the sum $\sum_i \sqrt{\alpha_i}$ can be easily approximated up to a constant factor. To see this, one can use a greedy algorithm to find a maximal collection of centers $\tilde{A}_i$ such that $\{B_{\mathbf{eff}}(v, 2^{-(i+1)} R) : v\in \tilde{A}_i\}$ forms a collection of disjoint balls. Thus, $|\tilde{A}_{i-1}| \leq |A_{i}| \leq |\tilde{A}_i|$ and \[\tfrac{1}{\sqrt{2}} \sum_i \sqrt{2^{-i}\log |\tilde{A}_i|} \leq \sum_{i} \sqrt{\alpha_i} \leq \sum_i \sqrt{2^{-i}\log|\tilde{A}_i|}\,.\] \end{remark} \section{Cover time of critical percolation clusters}\label{criticalsec} We are interested in critical percolation in which $|{\mathcal{C}}_1|\approx n^{2/3}$. This occurs in numerous underlying graphs $G$ as listed in the introduction (examples $1-5$). Recall the definition of $G_p$, and write $d_{G_p}(x,y)$ for the length of the shortest path between $x$ and $y$ in $G_p$, or $\infty$ if there is no such path. We call $d$ the {\em intrinsic metric} on $G_p$. Define the random sets \begin{align*} B_p(x,r;G) = \{ u : d_{G_{p}}(x,u) \leq r \} \, , \quad \partial B_p(x,r;G) = \{ u : d_{G_{p}}(x,u) = r \} \, , \end{align*} and the event $H_p(x,r;G) = \Big \{ \partial B_p(x,r;G) \neq \emptyset \Big \} \, . 
$ Finally, define $$\Gamma_p(x,r; G)=\sup_{G' \subset G } \P (H_p(x,r;G')) \, , $$ where $\P$ here is the percolation probability measure over subgraphs of $G'$. The reason for taking a supremum in the definition of $\Gamma_p$ is that the event $H_p(x,r; G)$ is {\em not} monotone with respect to edge addition (indeed, adding an edge can potentially shorten a shortest path and make $\partial B_p(x,r;G)$ empty even if it were not empty before). The quantity $\Gamma_p$ is called the intrinsic metric {\em arm exponent}; it was introduced in \cite{NP1}, see Theorem $2.1$ of that paper for further details. \begin{theorem}\label{criticalgraphs} Let $G=(V,E)$ be a graph and let $p \in [0,1]$. Suppose that for some constants $c_1, c_2>0$, all vertices $x \in V$ and all $r\geq 1$ the following two conditions are satisfied: $$ (i) \,\, \mathbb{E} |B_p(x,r;G)| \leq c_1 r \, , \qquad (ii) \,\, \Gamma_p(x,r; G) \leq c_2/r \, . $$ Then for any $\beta,\delta>0$ there exists $B > 0$ such that $$ \P \Big ( \exists {\mathcal{C}} \mbox{ with } |{\mathcal{C}}|\geq \beta n^{2/3} \mbox{ and } t_{\mathrm{cov}}({\mathcal{C}}) \not \in [B^{-1} n, Bn] \Big ) \leq \delta \, .$$ \end{theorem} \begin{proof} The fact that there exists $B>0$ such that $$ \P \Big ( \exists {\mathcal{C}} \mbox{ with } |{\mathcal{C}}|\geq \beta n^{2/3} \mbox{ and } t_{\mathrm{cov}}({\mathcal{C}}) \leq B^{-1} n \Big ) \leq \delta/2 \, ,$$ follows immediately from the corresponding lower bound on the maximal hitting time, see part (c.2) of Theorem $2.1$ of \cite{NP1} and Lemma $4.1$ in that paper. Also from \cite{NP1} we have that for any $\beta, \delta'>0$ there exists $D = D(\beta, \delta')>0$ such that \begin{eqnarray}\label{diambound} \P \Big ( |{\mathcal{C}}(v)|\geq \beta n^{2/3} \mbox{ and } {\mathop {{\rm diam\, }}}({\mathcal{C}}(v)) \not \in [D^{-1} n^{1/3}, Dn^{1/3}] \Big ) \leq \delta' n^{-1/3} \, .\end{eqnarray} To see this, combine (3.1) and (3.3) of \cite{NP1}.
Write ${\rm diam}_{{\bf{eff}}}({\mathcal{C}}(v))$ for the diameter of ${\mathcal{C}}(v)$ in the resistance metric. We first show that with high probability components of size $n^{2/3}$ have ${\rm diam}_{{\bf{eff}}}$ of order $n^{1/3}$. Indeed, the upper bound follows immediately from Theorem 2.1 of \cite{NP1} and the fact that $R_{{\bf{eff}}}(x,y)\leq d(x,y)$. For the lower bound, we use Proposition 5.6 of \cite{NP1}, the Nash-Williams inequality and (\ref{diambound}) to deduce that for large enough $D = D(\beta, \delta')>0$ we have \begin{equation}\label{eq-resistance-diam} \P \big ( |{\mathcal{C}}(v)| \geq \beta n^{2/3} \mbox{ and } {\rm diam}_{{\bf{eff}}}({\mathcal{C}}(v)) \leq D^{-1}n^{1/3} \big ) \leq \delta' n^{-1/3} \, .\end{equation} We now proceed to construct covering sets of $G$ on different scales. Fix an integer $i \geq 0$; we define a sequence of radii $\{r_j\}_{j \leq 2D^2 2^i}$ which have the following properties: \begin{eqnarray*} (a) \,\,r_0 = 0 \, , \quad (b)\,\, {(j-1/2)n^{1/3} \over 2 D 2^i} \leq r_j \leq {jn^{1/3} \over 2 D 2^i} \, , \quad (c) \,\, \mathbb{E} |\partial B_p(v,r_j;G)| \leq 4D^2 c_1 2^i \, .\end{eqnarray*} This is possible by condition (i) of the theorem, which implies that for each $j \leq 2D^2 2^i$ $$ \sum _{\ell={(j-1/2)n^{1/3}/(2D 2^{i}) }}^{{jn^{1/3}/(2D 2^{i})}} \mathbb{E} |\partial B_p(v,\ell;G)| \leq c_1D n^{1/3}\, ,$$ and so there must exist $\ell \in [(j-1/2)n^{1/3}/(2D 2^{i}), jn^{1/3}/(2D 2^{i})]$ such that $r_j=\ell$ satisfies condition (c). Given such radii $\{r_j\}$ we say that a vertex $u \in \partial B_p(v,r_j;G)$ is {\em $i$-good} if there exists a path between $u$ and $\partial B_p(v,r_{j+1};G)$ which does not go through $B_p(v,r_j;G)$. We now construct a sequence of sets $\{A'_i\}$ which will serve as a covering.
Define $$ A'_i = \bigcup_{j \leq 2D^2 2^i} \big \{ u \in \partial B_p(v,r_j;G) \, : \, u \mbox{ is } \mbox{$i$-good} \big \} \, .$$ Observe that if ${\mathop {{\rm diam\, }}}({\mathcal{C}}(v))\leq Dn^{1/3}$ then we have that $${\mathcal{C}}(v) \subset \bigcup_{u \in A'_i} B_p \big (u, {2^{-i} D^{-1}n^{1/3}}; G \big )\, .$$ Furthermore, if in addition $R={\rm diam}_{{\bf{eff}}}({\mathcal{C}}(v))\geq D^{-1} n^{1/3}$, then we have that ${\mathcal{C}}(v) \subset \bigcup_{u \in A'_i} B_p \big (u, 2^{-i} R; G\big ) $. Given these two events and the fact that $B_p \big (u,r; G) \subset B_{{\bf{eff}}}(u,r; {\mathcal{C}}(v))$, we deduce that \[{\mathcal{C}}(v) \subset \bigcup_{u \in A'_i} B_{\bf{eff}}\big (u, 2^{-i} R ; {\mathcal{C}}(v) \big ) \,,\] and therefore $|A_i| = |A_i({\mathcal{C}}(v))| \leq |A'_i|$ for all $i\in \mathbb{N}$ (see (\ref{def-a-cov})). By (\ref{diambound}) and (\ref{eq-resistance-diam}), we get that \begin{equation}\label{eq-A-A'} \P(|{\mathcal{C}}(v)| \geq \beta n^{2/3}, \exists i \in \mathbb{N}: |A'_i|< |A_i|) \leq 2\delta' n^{-1/3}\,. 
\end{equation} Now, by condition (ii) of our theorem and our construction of $\{r_j\}$ we get that $$ \mathbb{E}|A'_i| \leq \sum _{j \leq 2D^2 2^i} \mathbb{E} |\partial B_p(v,r_j;G)| \cdot {4D c_2 2^i \over n^{1/3}} \leq 16 D^3 c_1 c_2 2^{2i} n^{-1/3} \, .$$ So we can choose a large integer $m=m(c_1, c_2, D, \delta')$ such that \begin{eqnarray}\label{fromhere} \sum _{i=1}^\infty \P\big(|A'_i| \geq \mathrm{e}^{m \cdot 2^{i/2}}\big) \leq \sum_{i=1}^\infty \frac{ 16 D^3c_1 c_2 2^{2i} n^{-1/3}}{ \mathrm{e}^{m \cdot 2^{i/2}}} \leq \delta' n^{-1/3}\, .\end{eqnarray} Recalling that (see Theorem~\ref{thm-cover}) $\alpha_i = 2^{-i}\log |A_i|$ and combining the above estimate with \eqref{eq-A-A'}, we obtain that \begin{align}\label{eq-alpha-s}\P\big(|{\mathcal{C}}(v)| \geq \beta n^{2/3}, \mbox{$\sum_{i=1}^\infty$} \sqrt{\alpha_i} \geq 4m \big) \leq& \P\big(|{\mathcal{C}}(v)| \geq \beta n^{2/3}, \exists i\in\mathbb{N} : |A'_i| < |A_i| \big)\nonumber \\ & + \sum _{i=1}^\infty \P\big(|A'_i| \geq \mathrm{e}^{m \cdot 2^{i/2}}\big) \leq 3 \delta' n ^{-1/3}\,.\end{align} We say that ${\mathcal{C}}(v)$ is {\em bad} if $|{\mathcal{C}}(v)| \geq \beta n^{2/3}$ and one of the following holds: \begin{itemize} \item $\sum_{i=1}^\infty \sqrt{\alpha_i} \geq 4m$, or \item ${\rm diam}_{{\bf{eff}}}({\mathcal{C}}(v)) \geq D n ^{1/3}$, or \item $|E({\mathcal{C}}(v))| \geq Dn^{2/3}$. \end{itemize} Let $X$ denote the number of vertices $v$ for which ${\mathcal{C}}(v)$ is bad. By \eqref{eq-alpha-s} and Theorem $2.1$ of \cite{NP1} we learn that we can choose $D$ large enough so that the probability that ${\mathcal{C}}(v)$ is bad is at most $5\delta'n^{-1/3}$, whence $\mathbb{E} X \leq 5\delta' n^{2/3}$. Note that if there exists $v$ such that ${\mathcal{C}}(v)$ is bad, then $X \geq \beta n^{2/3}$.
By Theorem \ref{thm-cover} we learn that there exists some large constant $B=B(D,m)$ such that if $|{\mathcal{C}}(v)|\geq \beta n^{2/3}$ and $t_{\mathrm{cov}}({\mathcal{C}}(v)) \geq Bn$, then ${\mathcal{C}}(v)$ is bad (taking $B=16 C m^2 D^2$, where $C$ is the constant of Theorem \ref{thm-cover}, suffices). Hence, by Markov's inequality $$ \P \Big ( \exists {\mathcal{C}} \mbox{ with } |{\mathcal{C}}|\geq \beta n^{2/3} \mbox{ and } t_{\mathrm{cov}}({\mathcal{C}}) \geq Bn \Big ) \leq \P( X \geq \beta n^{2/3}) \leq 5\delta'/\beta \, ,$$ which concludes the proof of the theorem by setting $\delta' = \delta/(10\beta)$. {\hfill $\square$ \bigskip} \end{proof} \noindent {\bf Proof of Theorem \ref{covercrit}.} We only need to show that the conditions of Theorem \ref{criticalgraphs} hold in examples $1-5$. Indeed, it is shown in \cite{NP1} that the conditions hold for examples $1-3$, and in \cite{KN1} and \cite{KN2} it is shown that they hold for examples $4-5$. In \cite{HH, HH0} it is shown for example $5$ that at $p=p_c(\mathbb{Z}^d)$ the largest cluster size is of order $n^{2/3}$. {\hfill $\square$ \bigskip} We will require the following result of Aldous \cite{Aldous}. For the reader's convenience we provide a simpler proof of this theorem based on Theorem \ref{thm-cover}. \begin{theorem} \label{aldoustree} Let $T$ be a Galton-Watson tree with progeny mean $1$ and variance $\sigma^2<\infty$. Then for any $\delta>0$ there exists $A=A(\delta, \sigma^2)>0$ such that $$ \P \big ( t_{\mathrm{cov}}(T) \not \in [A^{-1} k^{3/2}, A k^{3/2}] \, \big | \, |T| \in [k,2k] \big )\leq \delta \, .$$ \end{theorem} \begin{proof} This is very similar to the proof of Theorem \ref{criticalgraphs}. First, we claim that there exists $D>0$ such that $$ \P \big ( {\mathop {{\rm diam\, }}}(T) \not \in [D^{-1} k^{1/2}, Dk^{1/2}] \, , |T| \in [k,2k] \big ) \leq k^{-1/2} \delta /2 \, .$$ Indeed, it is a classical fact \cite{KSN} that $\P({\mathop {{\rm diam\, }}}(T) \geq Dk^{1/2}) = O(D^{-1}k^{-1/2})$.
Furthermore, the expected number of particles in $T$ up to level $D^{-1} k^{1/2}$ is precisely $D^{-1} k^{1/2}$, and the event $\{{\mathop {{\rm diam\, }}}(T)\leq D^{-1}k^{1/2} \, , |T| \geq k\}$ implies that this quantity is at least $k$. Hence by Markov's inequality we have that $\P({\mathop {{\rm diam\, }}}(T) \leq D^{-1} k^{1/2}, |T| \geq k) \leq D^{-1} k^{-1/2}$. Now, for each $i$ we define $r_j = j 2^{-i-1} D^{-1} \sqrt{k}$ for $j=0, \ldots, 2^{i+1} D^2$ and define $A_i'$ to be the set of particles at level $r_j$, for some $j$, which survive up to level $r_{j+1}$. As in the proof of Theorem \ref{criticalgraphs}, if ${\mathop {{\rm diam\, }}}(T) \in [D^{-1} k^{1/2}, Dk^{1/2}]$, then $$ T \subset \bigcup_{u \in A'_i} B_{\bf{eff}}\big (u, 2^{-i} R ; T \big ) \, ,$$ where $R$ is the diameter of $T$ with respect to the resistance metric. Now, for each $j$ the expected number of particles in level $r_j$ is precisely $1$, and each of these survives up to level $r_{j+1}$ with probability of order $(r_{j+1}-r_j)^{-1}$ (see \cite{KSN} again); hence $\mathbb{E} |A_i'| \leq C 2^{2i+2} D^3 k^{-1/2}$ and the proof continues as in (\ref{fromhere}) to show using Theorem \ref{thm-cover} that there exists $A$ such that $$ \P \big ( t_{\mathrm{cov}}(T) \geq A k^{3/2} \, , |T| \in [k,2k] \big ) \leq k^{-1/2} \delta/2 \, .$$ Let $L$ be the offspring random variable of $T$. We have that $|T|$ is distributed as the first hitting time of $0$ of a random walk starting at $1$ with increments distributed as $L-1$ (see exercise $5.26$ of \cite{LP}). We use this and Theorem 1a of chapter XII.7 in \cite{Feller2} to deduce that $$ \P(|T|\in[k,2k]) = (1+o(1))Ck^{-1/2} \, ,$$ for some constant $C>0$. This gives the required upper bound on the cover time. The corresponding lower bound follows immediately from the lower bound on the maximal hitting time, which we obtain via the $\sqrt{k}$ lower bound on the diameter of $T$ together with the commute time identity.
{\hfill $\square$ \bigskip} \\ \end{proof} \noindent {\bf Proof of part (a) and (b) of Theorem \ref{evolution}.} Part (b) of the theorem follows immediately from Theorem \ref{criticalgraphs}, so we are only left to prove part (a). In this case it is known that the largest cluster is a uniform random tree of size of order $\varepsilon^{-2} \log(\varepsilon^3 n)$ (see \cite{Jansonbook}). It is a classical fact (see chapter $2.2$ of \cite{Kolchin}) that a uniform random tree of size $k$ is distributed as a Poisson($1$) Galton-Watson tree $T$ conditioned on $|T|=k$. Hence the following statement concludes the proof: let $T$ be a Poisson($1$) Galton-Watson tree, then for any $\delta>0$ there exists $A>0$ such that \begin{eqnarray} \label{aldouswinkler} \P \big ( t_{\mathrm{cov}}(T) \not \in [A^{-1} k^{3/2}, A k^{3/2}] \, \big | \, |T| = k \big ) \leq \delta \, .\end{eqnarray} Note that this assertion does not immediately follow from Theorem \ref{aldoustree}. To fill in the gap, we will infer from a result of Luczak and Winkler \cite{LW} that there exists a coupling between a random tree $T_k$ of size $k$ and a random tree $T_{k+1}$ of size $k+1$ such that $T_k \subset T_{k+1}$. This together with Theorem \ref{aldoustree} shows the upper bound on the cover time in (\ref{aldouswinkler}) and concludes the proof (the lower bound on the cover time is easier and follows, as in the remark above, from the easy lower bound on the maximal hitting time). To see that such a coupling exists, write $T^{(d)}_k$ for a Bin($d,1/d$) Galton-Watson tree conditioned on being of size $k$. Theorem $4.1$ in \cite{LW} shows that there exists a coupling between $T^{(d)}_k$ and $T^{(d)}_{k+1}$ such that $T^{(d)}_k \subset T^{(d)}_{k+1}$. Now, for any fixed $k$ we may take $d \to \infty$ and we get the required coupling between Poisson($1$) Galton-Watson trees. This concludes the argument, since the latter trees are uniform random trees.
{\hfill $\square$ \bigskip} \section{Cover time for mildly supercritical Erd\H{o}s-R\'enyi graph} In this section, we prove Part (c) of Theorem~\ref{evolution}, which establishes the order of the cover time for the largest component of the Erd\H{o}s-R\'enyi graph $G(n, p)$ with $p = \frac{1+\varepsilon}{n}$, where $\varepsilon = o(1)$ and $\varepsilon^3 n \to \infty$. Our proof makes use of the following structure result of \cite{DKLP1}. \begin{theorem}\cite{DKLP1}\label{mainthm-struct-gen} Let $\GC$ be the largest component of $G(n,p)$ for $p = \frac{1 + \varepsilon}{n}$, where $\varepsilon^3 n\to \infty$ and $\varepsilon\to 0$. Let $\mu<1$ denote the conjugate of $1+\varepsilon$, that is, $\mu\mathrm{e}^{-\mu} = (1+\varepsilon) \mathrm{e}^{-(1+\varepsilon)}$. Then $\GC$ is contiguous to the model ${\tilde{\mathcal{C}}_1}$ constructed in the following 3 steps: \begin{enumerate} \item[(a)]\label{item-struct-gen-degrees} Let $\Lambda\sim \mathcal{N}\left(1+\varepsilon - \mu, \frac1{\varepsilon n}\right)$ and assign i.i.d.\ variables $D_u \sim \mathrm{Poisson}(\Lambda)$ ($u \in [n]$) to the vertices, conditioned on $\sum_u D_u \mathbf{1}_{D_u\geq 3}$ being even. Let $N_k = \#\{u : D_u = k\}$ and $N= \sum_{k\geq 3}N_k$. Select a random graph $\mathcal{K}$ on $N$ vertices, uniformly among all graphs with $N_k$ vertices of degree $k$ for $k\geq 3$. \item[(b)]\label{item-struct-gen-edges} Replace the edges of $\mathcal{K}$ by paths of lengths i.i.d.\ $\mathrm{Geom}(1-\mu)$. \item[(c)]\label{item-struct-gen-bushes} Attach an independent $\mathrm{Poisson}(\mu)$-Galton-Watson tree (PGW tree in what follows) to each vertex. \end{enumerate} That is, $\P({\tilde{\mathcal{C}}_1} \in \mathcal{A}) \to 0$ implies $\P(\GC \in \mathcal{A}) \to 0$ for any set of graphs $\mathcal{A}$. \end{theorem} By the above theorem, it suffices to analyze the cover time of ${\tilde{\mathcal{C}}_1}$.
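The three-step construction above can be turned into a straightforward sampler. The sketch below is our own rough illustration and simplifies the theorem in two ways that we flag explicitly: the kernel is drawn from the configuration-model stub pairing (a multigraph) rather than uniformly among simple graphs with the given degrees, and $\Lambda$ is frozen at its mean $1+\varepsilon-\mu$, with $\mu$ replaced by its first-order approximation $1-\varepsilon$.

```python
import math
import random

def poisson(lam, rng):
    """Poisson sample via Knuth's product-of-uniforms method."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < limit:
            return k
        k += 1

def geometric(p, rng):
    """Geometric sample on {1, 2, ...} with success probability p."""
    k = 1
    while rng.random() >= p:
        k += 1
    return k

def sample_tilde_C1(n, eps, rng):
    """Rough sketch of the 3-step construction (simplified, see text):
    (a) i.i.d. Poisson degrees; stub pairing on vertices of degree >= 3,
    (b) each kernel edge replaced by a path of Geom(1 - mu) length,
    (c) an independent PGW(mu) tree hung on every vertex."""
    mu = 1.0 - eps                 # conjugate of 1 + eps, to first order
    lam = (1.0 + eps) - mu
    while True:                    # resample until the stub count is even
        degs = [poisson(lam, rng) for _ in range(n)]
        stubs = [v for v, d in enumerate(degs) if d >= 3 for _ in range(d)]
        if stubs and len(stubs) % 2 == 0:
            break
    rng.shuffle(stubs)
    adj = {v: [] for v in set(stubs)}
    counter = [n]
    def fresh():
        counter[0] += 1
        adj[counter[0]] = []
        return counter[0]
    def link(u, v):
        adj[u].append(v)
        adj[v].append(u)
    for u, v in zip(stubs[0::2], stubs[1::2]):    # steps (a) + (b)
        prev = u
        for _ in range(geometric(eps, rng) - 1):  # internal path vertices
            w = fresh()
            link(prev, w)
            prev = w
        link(prev, v)
    for v in list(adj):                           # step (c): attached trees
        stack = [v]
        while stack:
            x = stack.pop()
            for _ in range(poisson(mu, rng)):
                c = fresh()
                link(x, c)
                stack.append(c)
    return adj

rng = random.Random(7)
g = sample_tilde_C1(2000, 0.3, rng)
# Every edge is recorded at both endpoints, so the degree sum is even.
assert all(g[u].count(w) == g[w].count(u) for u in g for w in g[u])
assert sum(len(g[v]) for v in g) % 2 == 0
```

The snapshot `list(adj)` in step (c) ensures trees are attached only to kernel and path vertices, not recursively to tree vertices; since $\mu<1$ the attached trees are subcritical and the sampler terminates almost surely.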
In what follows we will repeatedly use some known facts about ${\tilde{\mathcal{C}}_1}$; see \cite{DKLP1, DKLP2} for references. \subsection{Lower bound} We first show that w.h.p.~there are at least $(\varepsilon^3 n)^{1/4}$ attached trees, as in part (c) of the construction of ${\tilde{\mathcal{C}}_1}$, of height at least $\frac{1}{2}\varepsilon^{-1} \log (\varepsilon^3 n)$. To this end, note that the height $H$ of a PGW($\mu$) tree satisfies the following for some constant $c>0$ (see, e.g., \cite{DKLP2}*{Lemma 4.2}) \begin{eqnarray}\label{treebound}\P \big (H \geq \tfrac{1}{2} \varepsilon^{-1} \log(\varepsilon^3 n) \big) \geq c \varepsilon (\varepsilon^3 n)^{-1/2 + o(1)}\,, \end{eqnarray} where we used the fact that $\mu = (1 - (1+o(1)) \varepsilon)$. It is an immediate consequence of parts (a) and (b) of the construction of ${\tilde{\mathcal{C}}_1}$ that w.h.p.~there are $(2+o(1)) \varepsilon^2 n$ i.i.d.\ attached PGW($\mu$) trees. Hence, by (\ref{treebound}), we learn that with high probability there are at least $(\varepsilon^3 n)^{1/4}$ PGW trees of height at least $\tfrac{1}{2} \varepsilon^{-1} \log(\varepsilon^3 n)$. Now, take exactly one leaf in the bottom level from each of these trees and denote by $B$ the set of these leaves. We will use the following lemma (see, e.g., \cite{Tetali}, and also \cite{LP}*{Proposition 2.19}) to bound the hitting time between vertices in $B$. \begin{lemma}\label{lem-network-hitting-time} Consider a finite network with a vertex $v$ and a subset of vertices $Z$ such that $v\not \in Z$. Let $vol(\cdot)$ be the voltage when a unit current flows from $v$ to $Z$ and $vol(Z) = 0$. Then we have that $\mathbb{E}_v[\tau_Z] =\sum_{x \in V} c(x) vol(x)$, where $c(x) = \sum_{x \sim y} c(x, y)$ and $c(x, y)$ is the conductance of the edge $(x, y)$. \end{lemma} In our setting, $c(x, y) = 1$ if $(x, y)$ is an edge of ${\tilde{\mathcal{C}}_1}$, and otherwise $c(x, y) = 0$.
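Lemma~\ref{lem-network-hitting-time} is easy to verify numerically on a small network. The following sketch is our own illustration (the function names are our choices, not from the references): it computes $\mathbb{E}_v[\tau_Z]$ in two independent ways, from the voltage formula of the lemma and by solving the harmonic equations for the hitting time directly, and checks that both give $4$ on the path $0$--$1$--$2$ with unit conductances.

```python
import numpy as np

def hitting_time_via_voltages(C, v, Z):
    """E_v[tau_Z] = sum_x c(x) vol(x), where vol is the voltage when a
    unit current flows from v into the grounded set Z (the lemma above)."""
    n = len(C)
    L = np.diag(C.sum(axis=1)) - C        # weighted Laplacian
    free = [x for x in range(n) if x not in Z]
    b = np.zeros(n)
    b[v] = 1.0                            # unit current injected at v
    vol = np.zeros(n)
    vol_free = np.linalg.solve(L[np.ix_(free, free)], b[free])
    for i, x in enumerate(free):
        vol[x] = vol_free[i]
    return float(C.sum(axis=1) @ vol)

def hitting_time_direct(C, v, Z):
    """Solve the harmonic system h(x) = 1 + sum_y P(x,y) h(y), h|Z = 0."""
    P = C / C.sum(axis=1, keepdims=True)  # transition matrix of the walk
    free = [x for x in range(len(C)) if x not in Z]
    h = np.linalg.solve(np.eye(len(free)) - P[np.ix_(free, free)],
                        np.ones(len(free)))
    return float(h[free.index(v)])

# Path 0 - 1 - 2 with unit conductances, v = 0, Z = {2}:
# voltages are (2, 1, 0), so sum c(x) vol(x) = 1*2 + 2*1 + 1*0 = 4.
C = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
assert abs(hitting_time_via_voltages(C, 0, {2}) - 4.0) < 1e-9
assert abs(hitting_time_direct(C, 0, {2}) - 4.0) < 1e-9
```

Grounding $Z$ and solving the reduced Laplacian system is the standard way to obtain the voltages for a unit current flow; the agreement of the two computations is exactly the content of the lemma.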
Let $u, v\in B$, and let $T(v)$ be the attached PGW tree that contains $v$. It is clear that for all $w\not\in T(v)$ the effective resistance between $w$ and $v$ satisfies $R_{\bf{eff}}(w, v) \geq (2\varepsilon)^{-1} \log(\varepsilon^3 n)$. Now, if a unit current flows from $u$ to $v$ and the voltage at $v$ is set to be $0$, we can then deduce that the voltage at vertex $w$ is at least $(2\varepsilon)^{-1} \log(\varepsilon^3 n)$, for all $w\not\in T(v)$. Note that w.h.p.\ simultaneously for all $v\in B$ we have $|{\tilde{\mathcal{C}}_1}\setminus T(v)| = (2+o(1))\varepsilon n$ (see \cite{DKLP1}) and we then assume this. Lemma~\ref{lem-network-hitting-time} then yields that for all $u, v\in B$ \[\mathbb{E}_{u} \tau_v \geq (2\varepsilon)^{-1} \log(\varepsilon^3 n) (2+o(1)) \varepsilon n = (1+o(1)) n \log (\varepsilon^3 n)\,.\] At this point, an application of the Matthews lower bound \cite{Matthews} (see also, e.g., \cite{LPW}), which states that for any subset $A \subset G$ we have $t_{\mathrm{cov}}(G) \geq \log|A| \min_{u,v\in A} \mathbb{E}_u\tau_v$, completes the proof of the lower bound. {\hfill $\square$ \bigskip} \subsection{Upper bound} In this section we establish the upper bound on the cover time. In light of Theorem~\ref{thm-cover}, it suffices to show that w.h.p.\ for ${\tilde{\mathcal{C}}_1}$ we have that $|A_i| \leq (\varepsilon^3 n)^{ 2i} $ simultaneously for all $i\geq 1$. Let $R$ be the diameter of ${\tilde{\mathcal{C}}_1}$ in the resistance metric. As shown in \cite{DKLP2}, with high probability the diameter in the graph metric is $(3+o(1))\varepsilon^{-1} \log (\varepsilon^3 n)$ and also the two highest attached trees have height $(1+o(1))\varepsilon^{-1} \log (\varepsilon^3 n)$ each. This implies that $(2+o(1))\varepsilon^{-1} \log (\varepsilon^3 n) \leq R \leq (3+o(1)) \varepsilon^{-1} \log (\varepsilon^3 n)$ w.h.p., and we assume this in what follows.
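The covering sets required by Theorem~\ref{thm-cover} will be produced below from maximal packings, via the standard fact that a maximal packing by balls of radius $r$ yields a covering by balls of radius $2r$. In the graph metric this is easy to check in code (a toy sketch on a cycle; helper names ours):

```python
from collections import deque

def bfs_dist(adj, src):
    # single-source graph distances by breadth-first search
    dist = {src: 0}
    q = deque([src])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def greedy_packing(adj, r):
    # centers whose radius-r balls are pairwise disjoint, i.e. centers at
    # pairwise distance > 2r; the greedy scan makes the packing maximal
    centers = []
    for v in sorted(adj):
        if all(bfs_dist(adj, c).get(v, float("inf")) > 2 * r for c in centers):
            centers.append(v)
    return centers

def covering_radius(adj, centers):
    return max(
        min(bfs_dist(adj, c).get(v, float("inf")) for c in centers)
        for v in adj
    )

n, r = 20, 2
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
centers = greedy_packing(cycle, r)
```

By maximality, any vertex not chosen as a center lies within distance $2r$ of one, so the covering radius of the centers is at most $2r$, exactly as in the argument for $A'_{i,1}$ below.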
Fix $i\in \mathbb{N}$; we now construct $A'_i$ such that balls of radius $2^{-i}R$ around vertices in $A'_i$ form a covering of ${\tilde{\mathcal{C}}_1}$. We first cover the 2-core $\mathcal{H}$ of ${\tilde{\mathcal{C}}_1}$ by balls of radius $2^{-(i+1)} R$. To this end, consider the disjoint balls of radius $2^{-(i+2)}R$ that can be packed in $\mathcal{H}$. Take such a maximal packing and denote by $A'_{i, 1}$ the set of these centers. Since the packing is maximal, we have that \[\mathcal{H} \subseteq \bigcup_{v\in A'_{i,1}} B_{{\bf{eff}}}(v, 2^{-(i+1)} R)\,.\] Since $R_{{\bf{eff}}}(x,y)\leq d(x,y)$, it follows that $|B_{{\bf{eff}}}(v, 2^{-(i+2)} R) \cap \mathcal{H}| \geq 2^{-(i+2)}R$ for all $v\in A'_{i,1}$. Therefore, since the balls $B_{{\bf{eff}}}(v, 2^{-(i+2)} R)$ for $v\in A'_{i,1}$ are disjoint, we conclude that $|A'_{i,1}| \leq 4 \cdot 2^i|\mathcal{H}|/R$. We now turn to covering the attached trees. For a rooted tree $T$, let $H(T)$ be the height of $T$. For $v\in T$, denote by $T_v$ the subtree of $T$ rooted at $v$ that contains all the descendants of $v$. Also, denote by $L_k$ the vertices in level $k 2^{-(i+1)} R$ of $T$. Define \[F_T \stackrel{\scriptscriptstyle\triangle}{=} \cup_{k=1}^\infty\{v\in L_k: H(T_v) \geq 2^{-(i+1)} R\}\,. \] Let $\mathcal{T}$ be the collection of attached PGW trees in ${\tilde{\mathcal{C}}_1}$ and let $A'_{i,2} = \cup_{T \in \mathcal{T}} F_T$. Defining $A'_i = A'_{i,1} \cup A'_{i,2}$, we deduce from the definition that ${\tilde{\mathcal{C}}_1} \subseteq \bigcup_{v\in A'_i} B_{{\bf{eff}}}(v, 2^{-i}R)$. It remains to bound $|A'_{i,2}|$. Using \cite{DKLP2}*{Lemma 4.2} again, we obtain that for a PGW$(\mu)$ tree $T$ and some absolute constant $C$, \begin{eqnarray}\label{estimate}\P(H(T) \geq 2^{-(i+1)} R) \leq \begin{cases} C \varepsilon\,, &\mbox{ if } 2^{i} \leq \log(\varepsilon^3 n)\,,\\ \frac{C}{2^{-(i+1)}R}\,, & \mbox{ if } 2^{i} \geq \log(\varepsilon^3 n)\,.
\end{cases} \end{eqnarray} Also, it is immediate that $\mathbb{E}[|L_k|] = \mu^{k2^{-(i+1)} R}$. Furthermore, by the Markov property, given $|L_k|$ the set $\{T_v: v\in L_k\}$ is distributed as $|L_k|$ independent copies of $T$. By this and (\ref{estimate}) we get that for some absolute constant $C>0$ \begin{align*}\mathbb{E}[|F_T|]& = \sum_{k\geq 1} \mathbb{E} |\{v \in L_k: H(T_v) \geq 2^{-(i+1)}R\}| = \sum_{k\geq 1} \mathbb{E} [|L_k|] \P(H(T_v) \geq 2^{-(i+1)} R) \\ &\leq \begin{cases} \sum_{k\geq 1} \mu^{k 2^{-i} R /2} \cdot C \varepsilon \leq C^2 \varepsilon\,, &\mbox{ if } 2^{i} \leq \log(\varepsilon^3 n)\,,\\ \sum_{k\geq 1} \mu^{k 2^{-(i+1)}R} \cdot \frac{C}{2^{-(i+1)}R} \leq C^2 2^{2i} /R\,, & \mbox{ if } 2^{i} \geq \log(\varepsilon^3 n)\,. \end{cases} \end{align*} Hence, in both cases $\mathbb{E}[|F_T|] \leq C^2 \varepsilon 2^{2i}$. Furthermore, it is known that $|\mathcal{H}|= (2+o(1))\varepsilon^2 n$ with high probability so we may assume this. By Markov's inequality and the fact that $|A'_{i,1}| \leq 4 \cdot 2^i|\mathcal{H}| /R = o((\varepsilon^3 n)^{ 2i})$ we have that \begin{align*}\P(|A'_i| \geq (\varepsilon^3 n)^{ 2i}) &= \P(|A'_{i,2}| \geq (\varepsilon^3 n)^{ 2i} - |A'_{i,1}|) \leq \frac{\mathbb{E}[|A'_{i,2}|]}{(\varepsilon^3 n)^{ 2i}- |A'_{i,1}|} = \frac{|\mathcal{H}| \mathbb{E}[|F_T|]}{(1+o(1))(\varepsilon^3 n)^{ 2i}}\\ & \leq (2+o(1)) C^2 \varepsilon^3 n 2^{2i} (\varepsilon^3 n)^{-2i} \leq o(1)C^2 (\varepsilon^3 n /8)^{-2(i-1)}\,.\end{align*} A simple union bound gives that with high probability $|A'_i| \leq (\varepsilon^3 n)^{ 2i}$ simultaneously for all $i\geq 1$. Recalling the facts that $|E({\tilde{\mathcal{C}}_1})| = (2+o(1)) \varepsilon n$ and $R \leq (3+o(1)) \varepsilon^{-1} \log (\varepsilon^3 n)$, we conclude the proof of the upper bound by an application of Theorem~\ref{thm-cover}. {\hfill $\square$ \bigskip} \section{Proof of Proposition \ref{addedge}} We may assume that $|E(G)| \geq 2$.
Let $\pi$ be the stationary distribution of $G$ and let $\{S^+_t\}_{t \geq 0}$ be a random walk on $G^+$ starting from the initial distribution $\pi$ (note that $\pi$ is {\em not} the stationary distribution for $G^+$). Let $\tau_0 = \tau'_0 = 0$ and for all $i\geq 1$ define \[\tau_{i}\stackrel{\scriptscriptstyle\triangle}{=} \min \big\{t \geq \tau'_{i-1}: \{S^+_{t}, S^+_{t+1}\} = \{u, v\}\big\}\,, \, X_{i} \stackrel{\scriptscriptstyle\triangle}{=} S^+_{\tau_i}\,, \mbox{ and } \tau'_i \stackrel{\scriptscriptstyle\triangle}{=} \min \{t > \tau_i: S^+_t = X_i\}\,.\] Write $T_i = \{t: \tau_i < t\leq \tau'_i\}$ and for all $t\in \mathbb{N}$ further define \[\Phi(t) = \min\{k : |[0, k]\setminus \cup_{i=1}^\infty T_i| = t\}\,.\] Now let $S_t = S^+_{\Phi(t)}$. We first claim that $S_t$ is a simple random walk on the graph $G$. In order to see that, one just needs to note that $S_t$ is obtained from $S^+_t$ by omitting all the excursions that start by traversing the edge $(u, v)$. Let $\tau_{\mathrm{cov}}$ be the first time when $S_t$ has visited every vertex of $G$; it then remains to bound $\mathbb{E}[\Phi(\tau_{\mathrm{cov}})]$. To this end, it is more convenient to consider the first time $\tau_{\mathrm{cov}}^*$ when $S_t$ has visited every vertex of $G$ and returned to the starting point. We wish to bound the number of steps spent on the above-defined excursions before $\tau_{\mathrm{cov}}^*$.
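The splicing construction can be illustrated by simulation: delete from a sample path of the walk on $G^+$ every excursion that starts by crossing the extra edge, and check that what remains is a path in $G$. (A toy sketch; the graphs, seed, and helper names are ours.)

```python
import random

def simulate_walk(adj, start, steps, rng):
    walk = [start]
    for _ in range(steps):
        walk.append(rng.choice(adj[walk[-1]]))
    return walk

def splice_out_excursions(walk, u, v):
    # cut every time interval (tau_i, tau'_i]: from the first crossing of the
    # extra edge {u, v} until the walk returns to the endpoint X_i it left from
    excluded = set()
    t = 0
    while t + 1 < len(walk):
        if {walk[t], walk[t + 1]} == {u, v}:
            xi = walk[t]                 # X_i, the endpoint being left
            s = t + 1
            while s < len(walk) and walk[s] != xi:
                excluded.add(s)
                s += 1
            if s < len(walk):
                excluded.add(s)          # the return time tau'_i is cut as well
            t = s                        # a new excursion may start at tau'_i
        else:
            t += 1
    return [walk[i] for i in range(len(walk)) if i not in excluded]

# G is the path 0-1-2-3; G+ adds the edge {0, 3} and becomes the 4-cycle
G_plus = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
spliced = splice_out_excursions(simulate_walk(G_plus, 0, 500, random.Random(7)), 0, 3)
```

Every remaining transition uses an edge of the path $G$, i.e.\ the spliced process is a walk on $G$, as claimed.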
Define \begin{align*}L_u(\tau_{\mathrm{cov}}^*)& = \big | \big \{t\leq \tau_{\mathrm{cov}}^*: S_t = u \big\}\big|\,\, \mbox{ and } \,\,L_v(\tau_{\mathrm{cov}}^*) = \big | \big \{t\leq \tau_{\mathrm{cov}}^*: S_t = v \big \}\big| \,,\\ N_u(\tau_{\mathrm{cov}}^*) &= \big | \big \{i: T_i \subseteq [0, \Phi(\tau_{\mathrm{cov}}^*)], X_{i} = u \big\} \big| \,\, \mbox { and }\,\, N_v(\tau_{\mathrm{cov}}^*) = \big | \big\{i: T_i \subseteq [0, \Phi(\tau_{\mathrm{cov}}^*)], X_{i} = v \big \} \big| \,.\end{align*} Note that every time $S_t = u$, the corresponding random walk $S^+_{\Phi(t)}$ is also at $u$ and has chance $\frac{1}{d_u+1}$ to travel to $v$ and thus start an excursion; moreover, once an excursion starts, the total number of consecutive excursions has law $\mathrm{Geom}(d_u/(d_u+1))$ independent of $\{S_t\}$. Therefore, we have $$N_u(\tau_{\mathrm{cov}}^*) = \sum_{i=1}^{L_u(\tau_{\mathrm{cov}}^*)} Y_i Z_i \, ,$$ where $\{(Y_i, Z_i)\}$ are independent and $Y_i \sim \mathrm{Ber}(1/(d_u+1))$ and $Z_i \sim \mathrm{Geom}(d_u/(d_u + 1))$. Thus, $\mathbb{E}[N_u(\tau_{\mathrm{cov}}^*)] = \frac{1}{d_u}\mathbb{E}[L_u(\tau_{\mathrm{cov}}^*)]$. By \cite{AF}*{Chapter 2, Proposition 3}, we know that $\mathbb{E}[L_u(\tau_{\mathrm{cov}}^*)] = \frac{d_u}{2 |E(G)|} \mathbb{E}[\tau_{\mathrm{cov}}^*]$ and therefore $\mathbb{E}[N_u(\tau_{\mathrm{cov}}^*)] = \frac{1}{2|E(G)|} \mathbb{E} [\tau_{\mathrm{cov}}^*]$. Given $X_{i} = u$, each $|T_i|$ is distributed as $1 + \tau^+_u$, where $\tau^+_u$ is the hitting time of $u$ by the walk $S^+_t$ started at $v$.
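The hitting times $\mathbb{E}_v[\tau^+_u]$ and $\mathbb{E}_u[\tau^+_v]$ appearing here combine into the commute time, which the commute time identity expresses as $2|E| R_{\mathrm{eff}}(u,v)$. A quick Monte Carlo sanity check on the $4$-cycle, where the identity gives $2\cdot 4\cdot 1 = 8$ (a toy sketch; names ours):

```python
import random

def mc_commute_time(adj, u, v, trials, rng):
    # Monte Carlo estimate of E_u[tau_v] + E_v[tau_u]
    total = 0
    for _ in range(trials):
        for a, b in ((u, v), (v, u)):
            x = a
            while x != b:
                x = rng.choice(adj[x])
                total += 1
    return total / trials

# on the 4-cycle, |E| = 4 and R_eff(0, 2) = 1, so the commute time is 8
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
estimate = mc_commute_time(cycle, 0, 2, 20000, random.Random(0))
```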
Observing that $\{|T_i|\}$ are independent of $N_u(\tau_{\mathrm{cov}}^*)$, we then obtain that \[\mathrm{Exc}(u)\stackrel{\scriptscriptstyle\triangle}{=}\mathbb{E}[|\cup_i\{T_i \subseteq [0, \Phi(\tau_{\mathrm{cov}}^*)]: X_{i} = u\}|] = \frac{1}{2|E(G)|} \mathbb{E} [\tau_{\mathrm{cov}}^*] (1 + \mathbb{E}_v[\tau^+_u])\,.\] In the same manner, we derive that \[\mathrm{Exc}(v)\stackrel{\scriptscriptstyle\triangle}{=}\mathbb{E}[|\cup_i\{T_i \subseteq [0, \Phi(\tau_{\mathrm{cov}}^*)]: X_{i} =v\}|] = \frac{1}{2|E(G)|} \mathbb{E} [\tau_{\mathrm{cov}}^*] (1 + \mathbb{E}_u[\tau^+_v])\,.\] Note that $\mathbb{E}_v [ \tau^+_u] + \mathbb{E}_u [\tau^+_v]$ is the expected commute time between $u$ and $v$, and hence by the commute time identity \cite{CRRST} we have $\mathbb{E}_v [ \tau^+_u] + \mathbb{E}_u [\tau^+_v] = 2 |E(G^+)| R^+(u, v)$, where $R^+(u, v)$ is the effective resistance between $u$ and $v$ in $G^+$. Since $G$ is connected, we get $R^+(u, v) \leq \frac{|E(G)|}{|E(G)| + 1}$. Altogether, \[t_{\mathrm{cov}}(G^+) = \mathbb{E}[\Phi(\tau_{\mathrm{cov}})] \leq t_{\mathrm{cov}}(G) + \mathrm{Exc}(u) + \mathrm{Exc}(v) \leq 3 t_{\mathrm{cov}}(G) + \frac{2}{|E(G)|} t_{\mathrm{cov}}(G) \leq 4 t_{\mathrm{cov}}(G)\,,\] where we used the inequality $\mathbb{E}[\tau_{\mathrm{cov}}^*] \leq 2 t_{\mathrm{cov}}$ and the assumption that $|E(G)| \geq 2$. {\hfill $\square$ \bigskip} \begin{remark} If $G^+$ is obtained from a connected graph $G$ by adding $k$ extra edges, a similar argument gives that \[t_{\mathrm{cov}}(G^+) \leq \big(2k+1 + \tfrac{2k^2}{|E|}\big)t_{\mathrm{cov}}(G)\,.\] \end{remark} \section{A concluding remark} The bound (\ref{dudley}) is reminiscent of Dudley's entropy bound for Gaussian processes \cite{Dudley}. Motivated by this, Ding, Lee and Peres \cite{DLP} showed that the link to Gaussian processes is much tighter. In particular, Talagrand's majorizing measures bound for Gaussian processes (see \cite{Talagrand}) can be used to estimate the cover time up to a multiplicative constant.
\newpage \begin{bibdiv} \begin{biblist} \bib{aldous2}{article}{ author={Aldous, David}, title={An introduction to covering problems for random walks on graphs}, journal={J. Theoret. Probab.}, volume={2}, date={1989}, number={1}, pages={87--89}, } \bib{Aldous}{article}{ author={Aldous, David J.}, title={Random walk covering of some special trees}, journal={J. Math. Anal. Appl.}, volume={157}, date={1991}, number={1}, pages={271--283}, } \bib{AF}{book}{ AUTHOR = {Aldous, David}, AUTHOR = {Fill, James Allen}, TITLE = {Reversible {M}arkov Chains and Random Walks on Graphs}, note = {In preparation, \texttt{http://www.stat.berkeley.edu/\~{}aldous/RWG/book.html}}, } \bib{AKLLR}{article}{ author={Aleliunas, Romas}, author={Karp, Richard M.}, author={Lipton, Richard J.}, author={Lov{\'a}sz, L{\'a}szl{\'o}}, author={Rackoff, Charles}, title={Random walks, universal traversal sequences, and the complexity of maze problems}, conference={ title={20th Annual Symposium on Foundations of Computer Science (San Juan, Puerto Rico, 1979)}, }, book={ publisher={IEEE}, place={New York}, }, date={1979}, pages={218--223}, } \bib{Barlow}{article}{ author={Barlow, M. T.}, title={Continuity of local times for L\'evy processes}, journal={Z. Wahrsch. Verw. Gebiete}, volume={69}, date={1985}, number={1}, pages={23--35}, } \bib{BCHSS}{article}{ AUTHOR = {Borgs, Christian}, author= {Chayes, Jennifer T.}, author = {van der Hofstad, Remco}, author = {Slade, Gordon}, author = {Spencer, Joel}, TITLE = {Random subgraphs of finite graphs. {I}. {T}he scaling window under the triangle condition}, JOURNAL = {Random Structures Algorithms}, VOLUME = {27}, YEAR = {2005}, NUMBER = {2}, PAGES = {137--184}, } \bib{bollo}{article} { AUTHOR = {Bollob{\'a}s, B{\'e}la}, TITLE = {The evolution of random graphs}, JOURNAL = {Trans. Amer. Math. 
Soc.}, VOLUME = {286}, YEAR = {1984}, NUMBER = {1}, PAGES = {257--274}, } \bib{Bridgland}{article}{ author={Bridgland, Michael F.}, title={Universal traversal sequences for paths and cycles}, journal={J. Algorithms}, volume={8}, date={1987}, number={3}, pages={395--404}, issn={0196-6774}, } \bib{Broder}{article}{ author={Broder, Andrei}, title={Universal sequences and graph cover times. A short survey}, conference={ title={Sequences}, address={Naples/Positano}, date={1988}, }, book={ publisher={Springer}, place={New York}, }, date={1990}, pages={109--122}, } \bib{CRRST}{article}{ author={Chandra, Ashok K.}, author={Raghavan, Prabhakar}, author={Ruzzo, Walter L.}, author={Smolensky, Roman}, author={Tiwari, Prasoon}, title={The electrical resistance of a graph captures its commute and cover times}, journal={Comput. Complexity}, volume={6}, date={1996/97}, number={4}, pages={312--340}, } \bib{CF}{article}{ author={Cooper, Colin}, author={Frieze, Alan}, title={The cover time of the giant component of a random graph}, journal={Random Structures Algorithms}, volume={32}, date={2008}, number={4}, pages={401--439}, } \bib{DKLP1}{article}{ author = {Ding, Jian}, author = {Kim, Jeong Han}, author = {Lubetzky, Eyal}, author = {Peres, Yuval}, title = {Anatomy of a young giant component in the random graph}, journal={Random Structures Algorithms, to appear}, note = {Available at \texttt{http://arxiv.org/abs/0906.1839}}, } \bib{DKLP2}{article}{ author = {Ding, Jian}, author = {Kim, Jeong Han}, author = {Lubetzky, Eyal}, author = {Peres, Yuval}, title = {Diameters in supercritical random graphs via first passage percolation}, journal = {Combinatorics, Probability and Computing, to appear}, note = {Available at \texttt{http://arxiv.org/abs/0906.1840}}, } \bib{DLP}{article}{ author={Ding, Jian}, author={Lee, James R.}, author={Peres, Yuval}, title={Cover times, blanket times, and majorizing measures}, note={Preprint.
Available at \texttt{http://arxiv.org/abs/1004.4371}} } \bib{Dudley}{article} { AUTHOR = {Dudley, R. M.}, TITLE = {The sizes of compact subsets of Hilbert space and continuity of Gaussian processes}, JOURNAL = {J. Functional Analysis}, VOLUME = {1}, YEAR = {1967}, PAGES = {290--330}, } \bib{ER59}{article}{ author={Erd{\H{o}}s, P.}, author={R{\'e}nyi, A.}, title={On random graphs. I}, journal={Publ. Math. Debrecen}, volume={6}, date={1959}, pages={290--297}, } \bib{ERDREN}{article} { AUTHOR = {Erd\H{o}s, P.}, author = {R\'enyi, A.}, TITLE = {On the evolution of random graphs}, JOURNAL = {Bull. Inst. Internat. Statist.}, VOLUME = {38}, YEAR = {1961}, PAGES = {343--347}, } \bib{Feige-up}{article}{ author={Feige, Uriel}, title={A tight upper bound on the cover time for random walks on graphs}, journal={Random Structures Algorithms}, volume={6}, date={1995}, number={1}, pages={51--54}, } \bib{Feige-lower}{article}{ author={Feige, Uriel}, title={A tight lower bound on the cover time for random walks on graphs}, journal={Random Structures Algorithms}, volume={6}, date={1995}, number={4}, pages={433--438}, } \bib{Feller2}{book} { AUTHOR = {Feller, William}, TITLE = {An introduction to probability theory and its applications. {V}ol. {II}. }, SERIES = {Second edition}, PUBLISHER = {John Wiley \& Sons Inc.}, ADDRESS = {New York}, YEAR = {1971}, } \bib{HH0}{article} { author = {Heydenreich, Markus}, author = {van der Hofstad, Remco}, TITLE = {Random graph asymptotics on high-dimensional tori}, JOURNAL = {Comm. Math. Phys.}, VOLUME = {270}, YEAR = {2007}, NUMBER = {2}, PAGES = {335--358}, } \bib{HH}{article} { author={Heydenreich, Markus}, author={van der Hofstad, Remco}, title={Random graph asymptotics on high-dimensional tori II. 
Volume, diameter and mixing time}, note = {preprint}, } \bib{Jansonbook}{book}{ AUTHOR = {Janson, Svante}, author = {{\L}uczak, Tomasz}, author = {Rucinski, Andrzej}, TITLE = {Random graphs}, SERIES = {Wiley-Interscience Series in Discrete Mathematics and Optimization}, PUBLISHER = {Wiley-Interscience, New York}, YEAR = {2000}, PAGES = {xii+333}, ISBN = {0-471-17541-2}, } \bib{Jonasson}{article}{ author={Jonasson, Johan}, title={Lollipop graphs are extremal for commute times}, journal={Random Structures Algorithms}, volume={16}, date={2000}, number={2}, pages={131--142}, } \bib{KKLV}{article}{ author={Kahn, J.}, author={Kim, J. H.}, author={Lov{\'a}sz, L.}, author={Vu, V. H.}, title={The cover time, the blanket time, and the Matthews bound}, conference={ title={41st Annual Symposium on Foundations of Computer Science (Redondo Beach, CA, 2000)}, }, book={ publisher={IEEE Comput. Soc. Press}, place={Los Alamitos, CA}, }, date={2000}, } \bib{KR}{article}{ author={Karlin, Anna R.}, author={Raghavan, Prabhakar}, title={Random walks and undirected graph connectivity: a survey}, conference={ title={Discrete probability and algorithms}, address={Minneapolis, MN}, date={1993}, }, book={ series={IMA Vol. Math. Appl.}, volume={72}, publisher={Springer}, place={New York}, }, date={1995}, pages={95--101}, } \bib{KSN} {article} { AUTHOR = {Kesten, H.}, author ={Ney, P.}, author ={Spitzer, F.}, TITLE = {The {G}alton-{W}atson process with mean one and finite variance}, JOURNAL = {Teor. Verojatnost. i Primenen.}, VOLUME = {11}, YEAR = {1966}, PAGES = {579--611} } \bib{Kolchin}{book} { AUTHOR = {Kolchin, Valentin F.}, TITLE = {Random mappings}, SERIES = {Translation Series in Mathematics and Engineering}, NOTE = {Translated from the Russian, With a foreword by S. R. S. Varadhan}, PUBLISHER = {Optimization Software Inc. 
Publications Division}, ADDRESS = {New York}, YEAR = {1986}, PAGES = {xiv + 207}, } \bib{KN1}{article}{ author = {Kozma, Gady}, author={Nachmias, Asaf}, title={The Alexander-Orbach conjecture holds in high dimensions}, journal={Inventiones Mathematicae}, volume={178}, date={2009}, number={3}, pages={635--654}, } \bib{KN2}{article}{ author = {Kozma, Gady}, author={Nachmias, Asaf}, title={A note about critical percolation on finite graphs}, journal={J. of Theoretical Probability, to appear}, } \bib{LPW}{book}{ author={Levin, David A.}, author={Peres, Yuval}, author={Wilmer, Elizabeth L.}, title={Markov chains and mixing times}, note={With a chapter by James G. Propp and David B. Wilson}, publisher={American Mathematical Society}, place={Providence, RI}, date={2009}, pages={xviii+371}, } \bib{Lucz}{article} { AUTHOR = {{\L}uczak, Tomasz}, TITLE = {Component behavior near the critical point of the random graph process}, JOURNAL = {Random Structures Algorithms}, VOLUME = {1}, YEAR = {1990}, NUMBER = {3}, PAGES = {287--310}, } \bib{LP}{book}{ author = {{R. Lyons with Y. Peres}}, title = {Probability on Trees and Networks}, publisher = {Cambridge University Press}, date = {2008}, note = {In preparation. Current version available at \texttt{http://mypage.iu.edu/\~{}rdlyons/prbtree/book.pdf}}, } \bib{LW}{article}{ author={Luczak, Malwina}, author={Winkler, Peter}, title={Building uniformly random subtrees}, journal={Random Structures Algorithms}, volume={24}, date={2004}, number={4}, pages={420--443}, } \bib{Matthews}{article}{ author={Matthews, Peter}, title={Covering problems for Markov chains}, journal={Ann. Probab.}, volume={16}, date={1988}, number={3}, pages={1215--1228}, } \bib{MP}{article}{ author={Mihail, Milena}, author={Papadimitriou, Christos H.}, title={On the random walk method for protocol testing}, conference={ title={Computer aided verification}, address={Stanford, CA}, date={1994}, }, book={ series={Lecture Notes in Comput. 
Sci.}, volume={818}, publisher={Springer}, place={Berlin}, }, date={1994}, pages={132--141}, } \bib{Nachmias}{article}{ author={Nachmias, Asaf}, title={Mean-field conditions for percolation on finite graphs}, journal={Geometric and Functional Analysis}, volume={19}, date={2009}, pages={1171--1194}, } \bib{NP2}{article}{ author={Nachmias, Asaf}, author={Peres, Yuval}, title={Critical percolation on random regular graphs}, journal={Random Structures and Algorithms}, note = {to appear}, } \bib{NP1}{article}{ author={Nachmias, Asaf}, author={Peres, Yuval}, title={Critical random graphs: diameter and mixing time}, journal={Ann. Probab.}, volume={36}, date={2008}, number={4}, pages={1267--1286}, } \bib{NashWilliams}{article}{ author={Nash-Williams, C. St. J. A.}, title={Random walk and electric currents in networks}, journal={Proc. Cambridge Philos. Soc.}, volume={55}, date={1959}, pages={181--194}, } \bib{Pittel}{article} { AUTHOR = {Pittel, Boris}, TITLE = {Edge percolation on a random regular graph of low degree}, JOURNAL = {Ann. Probab.}, VOLUME = {36}, YEAR = {2008}, NUMBER = {4}, PAGES = {1359--1389}, } \bib{stroock}{book} { AUTHOR = {Stroock, Daniel W.}, author = {Varadhan, S. R. Srinivasa}, TITLE = {Multidimensional diffusion processes}, SERIES = {Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]}, VOLUME = {233}, PUBLISHER = {Springer-Verlag}, ADDRESS = {Berlin}, YEAR = {1979}, PAGES = {xii+338}, ISBN = {3-540-90353-4}, } \bib{Talagrand}{book} { AUTHOR = {Talagrand, Michel}, TITLE = {The generic chaining}, SERIES = {Springer Monographs in Mathematics}, NOTE = {Upper and lower bounds of stochastic processes}, PUBLISHER = {Springer-Verlag}, ADDRESS = {Berlin}, YEAR = {2005}, PAGES = {viii+222} } \bib{Tetali}{article}{ author={Tetali, Prasad}, title={Random walks and the effective resistance of networks}, journal={J. Theoret. 
Probab.}, volume={4}, date={1991}, number={1}, pages={101--109}, } \bib{WZ}{article} { AUTHOR = {Winkler, Peter}, author = {Zuckerman, David}, TITLE = {Multiple cover time}, JOURNAL = {Random Structures Algorithms}, VOLUME = {9}, YEAR = {1996}, NUMBER = {4}, PAGES = {403--411}, } \end{biblist} \end{bibdiv} \end{document}
\section{Standalone Optimization Experiment Setup} \label{appendix:cost-details} \noindent We consider three different cost models on the same workload: \vspace{0.25em} \noindent \textbf{CM1: } In the first cost model (inspired by~\citep{leis2015good}), we model a main-memory database that performs two types of joins: index joins and in-memory hash joins. Let $O$ describe the current operator, $O_l$ be the left child operator, and $O_r$ be the right child operator. The costs are defined with the following recursions: \[ \textsf{c}_{ij}(O) = \textsf{c}(O_l) + \textsf{match}(O_l, O_r) \cdot |O_l| \] \[ \textsf{c}_{hj}(O) = \textsf{c}(O_l) + \textsf{c}(O_r) + |O| \] where \textsf{c} denotes the cost estimation function, $|\cdot|$ is the cardinality function, and \textsf{match} denotes the expected cost of an index match, i.e., the fraction of records that match the index lookup (always greater than 1) multiplied by a constant factor $\lambda$ (we chose $1.0$). We assume indexes on the primary keys. In this cost model, if an eligible index exists it is generally desirable to use it, since $\textsf{match}(O_l, O_r) \cdot |O_l|$ rarely exceeds $\textsf{c}(O_r) + |O|$ for foreign key joins. Even though the cost model is nominally ``non-linear'', the primary tradeoff between the index join and the hash join is due to index eligibility and does not depend on properties of the intermediate results. For the JOB workload, unless $\lambda$ is set to be very high, hash joins occur rarely compared to index joins. \vspace{0.25em} \noindent \textbf{CM2: } In the next cost model, we remove index eligibility from consideration and consider only hash joins and nested loop joins with a memory limit $M$.
The model charges a cost when data requires additional partitioning, and falls back to a nested loop join when the smaller input exceeds the squared memory limit: \scriptsize \[ \textsf{c}_{join} = \begin{cases} \textsf{c}(O_l) + \textsf{c}(O_r) + |O| ~~ \text{ if } |O_r| + |O_l| \le M\\ \textsf{c}(O_l) + \textsf{c}(O_r) + 2(|O_r|+|O_l|) + |O| ~~ \text{ if } \min(|O_r|, |O_l|) \le M^2\\ \textsf{c}(O_l) + \textsf{c}(O_r) + (|O_r| + \left \lceil{\frac{|O_r|}{M}}\right \rceil |O_l|) \end{cases} \] \normalsize The non-linearities in this model are size-dependent, so controlling the size of intermediate relations is important in the optimization problem. We set the memory limit $M$ to $10^5$ tuples in our experiments. This limit is low in real-world terms due to the small size of the benchmark data. However, we intend for the results to be illustrative of what happens in the optimization problems. \vspace{0.25em} \noindent \textbf{CM3: } In the next cost model, we model a database that accounts for the reuse of already-built hash tables. We use the Gamma database convention, where the left operator is the ``build'' operator and the right operator is the ``probe'' operator~\citep{gerber1986data}. If the previous join has already built a hash table on an attribute of interest, then the hash join does not incur another cost. \[ \textsf{c}_{nobuild} = \textsf{c}(O_l) + \textsf{c}(O_r) - |O_r| + |O| \] We also allow for index joins as in CM1. This model makes hash joins substantially cheaper in cases where re-use is possible. This model favors right-deep subplans, which maximize the reuse of the built hash tables. Therefore, optimal solutions have both left-deep and right-deep segments. \iffalse \vspace{0.25em} \noindent \textbf{CM3: } Finally, in the next cost model, we model temporary tables and memory capacity constraints.
There is a budget of tuples that can fit in memory and an additional physical operator that allows for materialization of a join result if memory exists \jmh{what do you mean if memory exists?}. Then, the downstream cost of reading from a materialized operator is 0. \[ \textsf{c}(O) = 0 \text{ if materialized} \] This model requires bushy plans due to the inherent non-linearity of the cost function and memory constraints. The cost model encourages plans that group tables together in ways that the join output can fit in the available memory. \fi \vspace{0.5em} In our implementation of these cost models, we use true cardinalities on single-table predicates, and we leverage standard independence assumptions to construct more complicated cardinality estimates. (This is not a fundamental limitation of \textsf{DQ}\xspace. Results in~\secref{eval:real-systems} have shown that when Postgres and SparkSQL provide their native cost model and cardinality estimates, \textsf{DQ}\xspace is as effective.) The goal of this work is to evaluate the join ordering process independently of the strength or weakness of the underlying cardinality estimation. We consider the following baseline algorithms. These algorithms are not meant to be a comprehensive list of heuristics but rather representative of a class of solutions.
\begin{table*}[ht] \begin{tabular}{@{} l l l l l l l l l l l l l l @{}} \toprule {\bf Optimizer} && \multicolumn{3}{c}{\bf Cost Model 1} & & \multicolumn{3}{c}{\bf Cost Model 2} & & \multicolumn{3}{c}{\bf Cost Model 3}\\ && {Min} & {Mean} & {Max} && {Min} & {Mean} & {Max} && {Min} & {Mean} & {Max}\\ \midrule QuickPick (QP)&& 1 & 23.87 & 405.04 && 7.43 & 51.84 & 416.18 && 1.43 & 16.74 & 211.13 \\ IK-KBZ (KBZ)&& 1 & 3.45 & 36.78 && 5.21 & 29.61 & 106.34 && 2.21 & 14.61 & 96.14 \\ Right-deep (RD)&& 4.7 & 53.25 & 683.35 && 1.93 & 8.21 & 89.15 && 1.83 & 5.25 & 69.15 \\ Left-deep (LD) && 1 & 1.08 & 2.14 && 1.75 & 7.31 & 65.45 && 1.35 & 4.21 & 35.91 \\ Zig-zag (ZZ) && 1 & 1.07 & 1.87 && 1 & 5.07 & 43.16 && 1 & 3.41 & 23.13 \\ Exhaustive (EX) && 1 & 1 & 1 && 1 & 1 & 1 && 1 & 1 & 1 \\ DQ && 1 & 1.32 & 3.11 && 1 & 1.68 & 11.64 && 1 & 1.91 & 13.14 \\ Minimum Selectivity (MinSel) && 2.43 & 59.86 & 1083.12 && 23.46 & 208.23 & 889.7 && 9.81 & 611.1 & 2049.13 \\ IK-KBZ+DP (LDP)&& 1 & 1.09 & 2.72 && 2.1 & 10.03 & 105.32 && 2.01 & 3.99 & 32.19 \\ \hline \end{tabular} \caption{Extended results including omitted techniques for all three cost models. \label{table:full-results}} \end{table*} \begin{enumerate}[noitemsep] \item Exhaustive (\textbf{EX}): This is a dynamic program that exhaustively enumerates all join plans avoiding Cartesian products. \item left-deep (\textbf{LD}): This is a dynamic program that exhaustively enumerates all left-deep join plans. \item Right-Deep (\textbf{RD}): This is a dynamic program that exhaustively enumerates all right-deep join plans. \item Zig-Zag (\textbf{ZZ}): This is a dynamic program that exhaustively enumerates all zig-zag trees (every join has at least one base relation, either on the left or the right)~\citep{ziane1993parallel}. 
\item IK-KBZ (\textbf{KBZ}): This is a polynomial-time algorithm that decomposes the query graph into chains and orders the chains based on a linear approximation of the cost model~\citep{krishnamurthy1986optimization}. \item QuickPick-1000 (\textbf{QP}): This algorithm randomly selects 1000 join plans and returns the best of them. 1000 was selected to be roughly equivalent to the planning latency of \textsf{DQ}\xspace~\citep{waas2000join}. \item Minimum Selectivity (\textbf{MinSel}): This algorithm selects the join ordering based on the minimum selectivity heuristic~\citep{neumann2018adaptive}. While MinSel was fast, it performed poorly on the three cost models used in the paper. \item Linearized Dynamic Program (\textbf{LDP}): This approach applies a dynamic program in the inner loop of IK-KBZ~\citep{neumann2018adaptive}. Not surprisingly, LDP's results were highly correlated with those of IK-KBZ and Left-Deep enumeration, so we chose to omit them from the main body of the paper. \end{enumerate} All of the algorithms consider join ordering without Cartesian products, so \textbf{EX} is an optimal baseline. We report results in terms of the suboptimality w.r.t.\ \textbf{EX}, namely $cost_{algo}/cost_{\textbf{EX}}$. We present results on all 113 JOB queries. We train on 80 queries and test on 33 queries. We do 4-fold cross validation to ensure that every test query is excluded from the training set at least once. The performance of \textsf{DQ}\xspace is only evaluated on queries not seen in the training workload. Our standalone experiments are integrated with Apache Calcite~\citep{calcite}. Apache Calcite provides libraries for parsing SQL, representing relational algebraic expressions, and a Volcano-based query optimizer~\citep{graefe1991volcano,graefe1995cascades}. Calcite does not handle physical execution or storage and uses JDBC connectors to a variety of database engines and file formats.
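For concreteness, the left-deep enumeration (\textbf{LD}) can be sketched as a dynamic program over relation subsets under a simple $C_{out}$-style cost (the sum of all join output cardinalities), with cardinalities derived from base sizes and pairwise selectivities under the independence assumption. The relations, sizes, and selectivities below are purely illustrative:

```python
def left_deep_dp(sizes, sel):
    """Best left-deep join order avoiding Cartesian products.
    sizes: {relation: cardinality}; sel: {(r, s): selectivity}.
    Cost is the sum of all join output cardinalities (C_out)."""
    rels = sorted(sizes)

    def join_card(subset_card, subset, r):
        # independence assumption: multiply all applicable selectivities
        factor = 1.0
        for x in subset:
            factor *= sel.get((x, r), sel.get((r, x), 1.0))
        return subset_card * sizes[r] * factor

    def connected(subset, r):
        return any((x, r) in sel or (r, x) in sel for x in subset)

    # best[subset] = (cost so far, cardinality of subset, left-deep order)
    best = {frozenset([r]): (0.0, sizes[r], (r,)) for r in rels}
    for k in range(1, len(sizes)):
        for subset, (cost, card, order) in list(best.items()):
            if len(subset) != k:
                continue
            for r in rels:
                if r in subset or not connected(subset, r):
                    continue  # skip Cartesian products
                out = join_card(card, subset, r)
                key = frozenset(subset | {r})
                if key not in best or cost + out < best[key][0]:
                    best[key] = (cost + out, out, order + (r,))
    return best[frozenset(rels)]

# three relations with predicates A-B and B-C (a chain query)
sizes = {'A': 100, 'B': 10, 'C': 1000}
sel = {('A', 'B'): 0.01, ('B', 'C'): 0.0001}
cost, card, order = left_deep_dp(sizes, sel)
```

Here joining $B$ and $C$ first is far cheaper than starting from $A \Join B$, and the dynamic program recovers that ordering.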
We implemented a package inside Calcite that allowed us to leverage its parsing and plan representation, but also augment it with more sophisticated cost models and optimization algorithms. Standalone \textsf{DQ}\xspace is written in single-threaded Java. The extended results including omitted techniques are described in Table~\ref{table:full-results}. \section{$C_{out}$ Cost Model} We additionally ran experiments with a simplified cost model that only searches over join orders and ignores physical operator selection. We fed in true cardinalities to estimate the selectivity of each of the joins, which is a perfect version of the ``$C_{out}$'' model. We omitted these results from the main body as we did not see differences between the techniques, and the goal of the study was to understand the performance of DQ on cost models that cause the heuristics to fail. In particular, we found that threshold non-linearities as in CM3 cause the most problems. \begin{table}[ht!] \centering \begin{tabular}{@{} l c @{}} \toprule $C_{out}$ & {\bf Mean}\\ \midrule QP & 1.02 \\ IK-KBZ & 1.34 \\ LD & 1.02 \\ ZZ & 1.02 \\ Ex & 1 \\ DQ & 1.03 \\ MinSel & 1.11 \\ \hline \end{tabular} \end{table} \section{Additional Standalone Experiments} In the subsequent experiments, we try to characterize when \textsf{DQ}\xspace is expected to work and how efficiently. \subsection{Sensitivity to Training Data} Classically, join optimization algorithms have been deterministic. Except for \textbf{QP}, all of our baselines are deterministic as well. Randomness in \textsf{DQ}\xspace (besides floating-point computations) stems from which training data is seen. We run an experiment where we provide \textsf{DQ}\xspace with 5 different training datasets and evaluate on a set of 20 hold-out queries. We report the max range (worst factor over optimal minus best factor over optimal) in performance over all 20 queries in Table~\ref{table:plan-variance}.
For comparison, we do the same with \textbf{QP} over 5 trials (with a different random seed each time). \begin{table}[h]\centering \small% \begin{tabular}{@{} l c c c @{}} \toprule & {\bf CM1} & {\bf CM2} & {\bf CM3} \\ \midrule {\bf QP} & 2.11$\times$ & 1.71$\times$ & 3.44$\times$ \\ \textsf{DQ}\xspace & 1.59$\times$ & 1.13$\times$ & 2.01$\times$\\ \bottomrule \end{tabular} \vspace{0.25em} \caption{\small{Plan variance over trials. \label{table:plan-variance}}} \vspace{-0.6cm} \end{table} We found that while the performance of \textsf{DQ}\xspace does vary due to training data, the variance is relatively low. Even if we were to account for this worst case, \textsf{DQ}\xspace would still be competitive in our macro-benchmarks. It is also substantially lower than that of \textbf{QP}, a true randomized algorithm. \subsection{Sensitivity to Faulty Cardinalities} In general, the cardinality/selectivity estimates computed by the underlying RDBMS are often inaccurate or stale. All query optimizers, to varying degrees, are exposed to this issue, since using faulty estimates during optimization may yield plans that are in fact suboptimal. It is therefore worthwhile to investigate this sensitivity and try to answer, ``is the neural network more or less sensitive than classical dynamic programs and heuristics?'' In this microbenchmark, the optimizers are fed \emph{perturbed} base relation cardinalities (explained below) during optimization; after the optimized plans are produced, they are scored by an \emph{oracle} cost model. This means, in particular, \textsf{DQ}\xspace only sees noisy relation cardinalities during training and is tested on true cardinalities. The workload consists of 20 queries randomly chosen out of all JOB queries; the join sizes range from 6 to 11 relations. The final costs reported below are the average from 4-fold cross validation. The perturbation of base relation cardinalities works as follows.
We pick $N$ random relations, the true cardinality of each is multiplied by a factor drawn uniformly from $\{2, 4, 8, 16\}$. As $N$ increases, the estimate noisiness increases (errors in the leaf operators get propagated upstream in a compounding fashion). Table~\ref{table:sensitivity-1} reports the final costs with respect to estimate noisiness. \begin{table}[h]\centering \small% \begin{tabular}{@{} l c c c c @{}} \toprule & {$N=0$} & {$N=2$} & {$N=4$} & {$N=8$} \\ \midrule {\bf KBZ} & 6.33 & 6.35 & 6.35 & 5.85 \\ {\bf LD} & 5.51 & 5.53 & 5.53 & 5.60 \\ {\bf EX} & 5.51 & 5.53 & 5.53 & 5.60 \\ \textsf{DQ}\xspace & 5.68 & 5.70 & 5.96 & 5.68\\ \bottomrule \end{tabular} \vspace{0.25em} \caption{\small{Costs ($\log_{10}$) when $N$ relations have perturbed cardinalities.}} \label{table:sensitivity-1} \vspace{-0.6cm} \end{table} Observe that, despite a slight degradation in the $N=4$ execution, \textsf{DQ}\xspace is not any more sensitive than the \textbf{KBZ} heuristic. It closely imitates exhaustive enumeration---an expected behavior since its training data comes from \textbf{EX}'s plans computed with the faulty estimates. \subsection{Ablation Study} Table~\ref{table:feat-ablation} reports an ablation study of the featurization described earlier (\secref{subsec:featurization}): \begin{table}[h]\centering \small% \begin{tabular}{@{} l c c c @{}} \toprule & {\bf Graph Features} & {\bf Sel. Scaling} & {\bf Loss} \\ \midrule {\bf No Predicates} & No & No & 0.087 \\ & Yes & No & 0.049 \\ & Yes & Yes & 0.049 \\ {\bf Predicates} & No & No & 0.071\\ & Yes & No & 0.051\\ & Yes & Yes & 0.020\\ \bottomrule \end{tabular} \vspace{0.25em} \caption{\small{Feature ablation. \label{table:feat-ablation}}} \vspace{-0.6cm} \end{table} Without features derived from the query graph (Figure~\ref{fig:query-graph-feat}) and selectivity scaling (Figure~\ref{fig:feat-sel-scaling}) the training loss is 3.5$\times$ more. 
These results suggest that all of the different features contribute positively to performance. \begin{figure} \centering \includegraphics[width=\columnwidth,keepaspectratio]{exp/fine-tuning2.png} \caption{\small{We plot the runtime in milliseconds of a single query (q10c) with different variations of DQ (fully offline, fine tuning, and fully online). We found that the fine-tuned approach was the most effective one.} \label{exp:fine-tuning2}} \end{figure} \section{Discussion about Postgres Experiment} We also run a version of DQ where the model is only trained with online data (effectively the setting considered in ReJOIN~\citep{marcus2018deep}). Even on an idealized workload of optimizing a single query (Query 10c), we could not get that approach to converge. We believe that the discrepancy from~\citep{marcus2018deep} is due to physical operator selection. In that work, the Postgres optimizer selects the physical operators \textit{given} the appropriate logical plans selected by the RL policy. With physical operator selection, the learning problem becomes significantly harder (Figure~\ref{exp:fine-tuning2}). We initially hypothesized that \textsf{DQ}\xspace outperforms the native Postgres optimizer in terms of execution times because it considers bushy plans. This hypothesis only partially explains the results. We ran the same experiment with \textsf{DQ}\xspace restricted to producing left-deep plans; in other words, \textsf{DQ}\xspace considers the same plan space as the native Postgres optimizer. We found that there was still a statistically significant speedup: \begin{table}[h]\centering \small% \begin{tabular}{@{} l c c @{}} \toprule & {Mean} & {Max} \\ \midrule \textsf{DQ}\xspace {\bf :LD} & 1.09$\times$ & 2.68$\times$ \\ \textsf{DQ}\xspace {\bf :EX} & 1.14$\times$ & 2.72$\times$ \\ \bottomrule \end{tabular} \vspace{0.25em} \caption{\small{Execution time speedup over Postgres with different plan spaces considered by \textsf{DQ}\xspace.
Mean is the average speedup over the entire workload and max is the best-case single-query speedup.}} \label{table:postgres-speedup} \vspace{-0.6cm} \end{table} We speculate that the speedup is caused by imprecision in the Postgres cost model. As a learning technique, \textsf{DQ}\xspace may smooth out inconsistencies in the cost model. Finally, we compare with Postgres' genetic optimizer (GEQ) on the 10 largest joins in JOB. \textsf{DQ}\xspace is about 7\% slower in planning time, but nearly 10$\times$ faster in execution time. The difference in execution is mostly due to one outlier query on which GEQ is 37$\times$ slower. \section{Optimizer Architecture} \label{sec:optimizer-arch} Selinger's optimizer design separated the problem of plan search from cost/selectivity estimation~\citep{selinger1979access}. This insight allowed independent innovation on each topic over the years. In our initial work, we follow this lead and intentionally focus on learning a search strategy only. Even within the search problem, we focus narrowly on the classical select-project-join kernel. This too is traditional in the literature, going back to Selinger~\citep{selinger1979access} and continuing through Neumann et al.'s recent experimental work~\citep{neumann2018adaptive}. It is also particularly natural for illustrating the connection between dynamic programming and Deep RL, and the implications of that connection for query optimization. We intend for our approach to plug directly into a Selinger-based optimizer architecture like that of PostgreSQL, DB2, and many other systems. In terms of system architecture, \textsf{DQ}\xspace can simply be integrated as a learning-based replacement for prior algorithms for searching a plan space. Like any non-exhaustive query optimization technique, our results are heuristic. The new concerns raised by our approach have to do with limitations of training, including overfitting and avoiding high-variance plans.
We use this section to describe the extensibility of our approach and what design choices the user has at her disposal. \def \treeA {\tikz[inner sep=1pt, baseline=(T3.north)] { \node (T1) at (0,0) {$T_1$}; \node (T2) at (1,0) {$T_2$}; \node (INLJ) at (0.5,0.5) {\small{\sffamily IndexJoin}}; \node (T3) at (1.5,0.5) {$T_3$}; \node (SMJ) at (1,1) {\small{\sffamily HashJoin}}; \node (T4) at (2,1) {$T_4$}; \node (HJ) at (1.5,1.5) {\small{\sffamily HashJoin}}; \draw (T1)--(INLJ)--(T2); \draw (INLJ)--(SMJ)--(T3); \draw (SMJ)--(HJ)--(T4);}} \def \treeB {\tikz[inner sep=1pt, baseline=(T3.south)] { \node (T1) at (0,0) {$T_1$}; \node (T2) at (1,0) {$T_2$}; \node (INLJ) at (0.5,0.5) {\small{\sffamily IndexJoin}}; \node (T3) at (1.5,0.5) {$T_3$}; \node (SMJ) at (1,1) {\small{\sffamily HashJoin}}; \draw (T1)--(INLJ)--(T2); \draw (INLJ)--(SMJ)--(T3);}} \def\treeC{\tikz[inner sep=1pt, baseline=(T1.north)] { \node (T1) at (0,0) {$T_1$}; \node (T2) at (1,0) {$T_2$}; \node (INLJ) at (0.5,0.5) {\small{\sffamily IndexJoin}}; \draw (T1)--(INLJ)--(T2);}}% \begin{figure*}[!h] \begin{tikzpicture}[inner sep=1pt,text centered] \node (TA) at (0,0) {$(~~~\{$ \treeA , \treeB , \treeC $\}$,~~~ $\{T1,\cdots,T4\}$;~~~ $\quad V^*)$}; \node[left = of TA] (orig) {\treeA}; \draw [double,->, >=stealth] (orig)--(TA); \node[text width=2.5cm] (plan) at (-7,-1.5) {\footnotesize Plan from Native Optimizer}; \node (plan) at (-1.3,-1.5) {\footnotesize Optimal Sub-plans}; \node[text width=1.5cm] (plan) at (2.75, -1.5) {\footnotesize Relations to Join}; \node[text width=1.5cm] (plan) at (4.35, -1.5) {\footnotesize Optimal Cost}; \node[rectangle, left = of orig, rounded corners=5, text width=2.0cm, draw] (opt) {\small Native Optimizer}; \draw[double,->, >=stealth] (opt) -- (orig); \end{tikzpicture} \vspace{-.3cm} \caption{\small Training data collection is efficient (\secref{sec:data-collection}). 
Here, by leveraging the principle of optimality, three training examples are emitted from a single plan produced by a native optimizer. These examples share the same long-term cost and relations to join (i.e., making these local decisions eventually leads to joining $\{T1, \cdots, T4\}$ with optimal cumulative cost $V^*$).} \label{fig:data-collection} \end{figure*} \subsection{Overview} Now, we describe what kind of training data is necessary to learn a Q-function. In supervised regression, we collect data of the form \texttt{(feature, values)}. The learned function maps from features to values. One can think of this as a \emph{stateless} prediction, where the underlying prediction problem does not depend on some underlying process state. On the other hand, in the Q-learning setting, there is state. So we have to collect training data of the form \texttt{(state, decision, new state, cost)}. Therefore, a training dataset has the following format (in Java notation): \vspace{-0.3cm} \begin{lstlisting}[aboveskip=0pt,language=java] List<Graph, Join, Graph', Cost> dataset \end{lstlisting} In many cases like robotics or game-playing, RL is used in a live setting where the model is trained on-the-fly based on concrete moves chosen by the policy and measured in practice. Q-learning is known as an ``off-policy'' RL method. This means that its training is independent of the process that collects the data, which may even be suboptimal---as long as the training data sufficiently covers the decisions to be made. \subsection{Architecture and API} \textsf{DQ}\xspace collects training data sampled from a cost model and a native optimizer, and builds a model which improves future planning instances. \textsf{DQ}\xspace makes relatively minimal assumptions about the structure of the optimizer. Below are the API hooks that it requires to be implemented. \vspace{.5em} \noindent \emph{Workload Generation.} A function that returns a list of training queries of interest.
\textsf{DQ}\xspace requires a relevant workload for training. In our experiments, we show that this workload can be taken from query templates or sampled from the database schema. \vspace{-0.3cm} \begin{lstlisting}[aboveskip=0pt,language=java] sample(): List<Queries> \end{lstlisting} \vspace{0.5em} \noindent \emph{Cost Sampling.} A function that, given a query, returns a list of join actions and their resultant costs. \textsf{DQ}\xspace requires the system to have its own optimizer to generate training data. This means generating feasible join plans and their associated costs. Our experiments evaluate integration with deterministic enumeration, randomized, and heuristic algorithms. \vspace{-0.3cm} \begin{lstlisting}[aboveskip=0pt,language=java] train(query): List<Graph,Join,Graph',Cost> \end{lstlisting} \vspace{0.5em} \noindent \emph{Predicate Selectivity Estimation.} A function that returns the selectivity of a particular single-table predicate. \textsf{DQ}\xspace leverages the optimizer's own selectivity estimate for featurization (\secref{subsec:featurization}). \vspace{-0.3cm} \begin{lstlisting}[aboveskip=0pt,language=java] selectivity(predicate): Double \end{lstlisting} \vspace{0.5em} In our evaluation (\secref{sec:eval}), we will vary these exposed hooks to experiment with different implementations for each (e.g., comparing training on highly relevant data from a desired workload vs. randomly sampling join queries directly from the schema). \subsection{Efficient Training Data Generation} \label{sec:data-collection} Training data generation may seem onerous, but in fact, useful data is \emph{automatically generated} as a consequence of running classical planning algorithms. For each join decision that the optimizer makes, we can get the incremental cost of the join. Suppose we run a classical bushy dynamic programming algorithm to optimize a $k$-way join: we not only get a final plan but also an optimal plan for every single subplan enumerated along the way.
Each query generates an optimal query plan for all of the subplans that compose it, as well as observations of suboptimal plans that did not make the cut. This means that a single query generates a large number of training examples. Figure~\ref{fig:data-collection} shows how the principle of optimality helps enhance a training dataset. This data collection scheme differs from that of several popular RL algorithms such as PPO and Policy Gradients~\citep{schulman2017proximal} (and used in~\citep{marcus2018deep}). These algorithms train their models ``episodically'', where they apply an entire sequence of decisions and observe the final cumulative reward. An analogy would be a graph search algorithm that does not backtrack but resets to the starting node and tries the whole search again. While general, this scheme is not suited for the structure of join optimization, where an optimal plan is composed of optimal substructures. Q-learning, an algorithm that does not rely on episodic data and can learn from offline data consisting of a hierarchy of optimal subplans, is a better fit for join optimization.
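For concreteness, the emission of training tuples from a single final plan can be sketched as follows. This is a minimal illustration with our own class and field names, not \textsf{DQ}\xspace's actual implementation; it shows how a post-order walk over one $k$-way plan yields $k-1$ \texttt{(state, decision, new state, cost)} tuples.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: extract Q-learning training tuples from a single final join plan.
public class DataCollection {
    // A plan is either a base relation or a join of two subplans with an incremental cost J(c).
    static class Plan {
        final String name;
        final Plan left, right;
        final double joinCost; // 0 for base relations

        Plan(String name) { this.name = name; this.left = null; this.right = null; this.joinCost = 0.0; }

        Plan(Plan left, Plan right, double joinCost) {
            this.left = left; this.right = right; this.joinCost = joinCost;
            this.name = "(" + left.name + "+" + right.name + ")";
        }

        boolean isLeaf() { return left == null; }
    }

    // One training tuple: (state, decision, new state, cost).
    static class Example {
        final String state, join, newState;
        final double cost;
        Example(String state, String join, String newState, double cost) {
            this.state = state; this.join = join; this.newState = newState; this.cost = cost;
        }
    }

    // Post-order walk: every internal node of the plan tree yields one example,
    // so a single optimized k-way query emits k-1 training tuples "for free".
    static List<Example> emit(Plan p) {
        List<Example> out = new ArrayList<>();
        if (p.isLeaf()) return out;
        out.addAll(emit(p.left));
        out.addAll(emit(p.right));
        out.add(new Example(p.left.name + " , " + p.right.name, "join", p.name, p.joinCost));
        return out;
    }

    // The running example: (E join P) join S with J(EP)=100 and J((EP)S)=10.
    static List<Example> runningExample() {
        Plan ep = new Plan(new Plan("E"), new Plan("P"), 100.0);
        return emit(new Plan(ep, new Plan("S"), 10.0));
    }

    static double totalCost(List<Example> xs) {
        double c = 0.0;
        for (Example x : xs) c += x.cost;
        return c;
    }

    public static void main(String[] args) {
        List<Example> data = runningExample();
        System.out.println(data.size() + " examples, total cost " + totalCost(data));
    }
}
```

A bushy dynamic program would emit such tuples for every enumerated subplan, not just those on the final plan, which is what makes data collection inexpensive.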
\begin{figure*}[!htp] \captionsetup[subfigure]{justification=centering} \begin{subfigure}[b]{.23\textwidth} \raisebox{10mm} \centering \small \begin{verbatim} SELECT * FROM Emp, Pos, Sal WHERE Emp.rank = Pos.rank AND Pos.code = Sal.code \end{verbatim} \caption{Example query} \end{subfigure}\hspace{\fill} \begin{subfigure}[b]{.23\textwidth} \centering \small \begin{equation*} \begin{split} A_G = [ & \text{E.id, E.name, E.rank,} \\ & \text{P.rank, P.title, P.code,} \\ & \text{S.code, S.amount} ] \\ = [ & 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1 ] \end{split} \end{equation*} \vspace{-.2cm} \caption{Query graph featurization \label{fig:query-graph-feat}} \end{subfigure}\hspace{\fill} \begin{subfigure}[b]{.23\textwidth} \centering \small \begin{equation*} \begin{split} A_L &= [ \text{E.id, E.name, E.rank} ] \\ &= [ 1\ 1\ 1\ 0\ 0\ 0\ 0\ 0 ] \\ A_R &= [ \text{P.rank, P.title, P.code} ] \\ &= [ 0\ 0\ 0\ 1\ 1\ 1\ 0\ 0 ] \end{split} \end{equation*} \vspace{-.2cm} \caption{Features of $E \bowtie P$} \end{subfigure}\hspace{\fill} \begin{subfigure}[b]{.23\textwidth} \centering \small \begin{equation*} \begin{split} A_L = [ & \text{E.id, E.name, E.rank,} \\ & \text{P.rank, P.title, P.code} ] \\ = [ & 1\ 1\ 1\ 1\ 1\ 1\ 0\ 0 ] \\ A_R = [ & \text{S.code, S.amount} ] \\ = [& 0\ 0\ 0\ 0\ 0\ 0\ 1\ 1 ] \end{split} \end{equation*} \vspace{-.2cm} \caption{Features of $(E \bowtie P) \bowtie S$} \end{subfigure} \vspace{-.2cm} \caption{\small{ {\bf A query and its corresponding featurizations (\secref{subsec:featurization}).} One-hot vectors encode the visible attributes in the query graph ($A_G$), the left side of a join ($A_L$), and the right side ($A_R$). Such encoding allows for featurizing both the query graph and a particular join. A partial join and a full join are shown. The example query covers all relations in the schema, so $A_G = A$.
\label{fig:feat} }} \end{figure*} In our experiments, we bootstrap planning with a bushy dynamic program until the number of relations in the join exceeds 10. Then, the data generation algorithm switches to a greedy scheme for the last $k-10$ joins for efficiency. Ironically, the data collected from such an optimizer might be ``too good'' (or too conservative) because it does not measure or learn from a diverse enough space of (costly, hence risky) subplans. If the training data only consisted of optimal subplans, then the learned Q-function may not accurately learn the downside of poor subplans. Likewise, if purely random plans are sampled, the model might not see very many instances of good plans. To encourage more ``exploration'', during data collection noise can be injected into the optimizer to force it to enumerate more diverse subplans. We control this via a parameter $\epsilon$, the probability of picking a random join as opposed to a join with the lowest cost. As the algorithm enumerates subplans, if $\textsf{rand()} < \epsilon$ then a random (valid) join is chosen on the current query graph; otherwise it proceeds with the lowest-cost join as usual. This is an established technique to address such ``covariate shift'', a phenomenon extensively studied in prior work~\citep{laskey2017dart}. \section{Background} \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{exp/teaser.png} \caption{\small We consider 3 cost models for the Join Order Benchmark: (1) one with inexpensive index lookups, (2) one where the only physical operator is a hybrid hash join with limited memory, and (3) one that allows for the reuse of previously built hash tables. The figure plots the cost suboptimality w.r.t. optimal plans. The classical left-deep dynamic program fails on the latter two scenarios. We propose a reinforcement learning based optimizer, \textsf{DQ}\xspace, which can adapt to a specific cost model given appropriate training data.
\label{teaser}} \end{figure} The classic join ordering problem is, of course, NP-hard, and practical algorithms leverage heuristics to make the search for a good plan efficient. The design and implementation of optimizer search heuristics are well-understood when the cost model is roughly linear, i.e., the cost of a join is linear in the size of its input relations. This assumption underpins many classical techniques as well as recent work~\citep{selinger1979access,krishnamurthy1986optimization, trummer2017solving,neumann2018adaptive}. However, many practical systems have relevant non-linearities in join costs. For example, an intermediate result exceeding the available memory may trigger partitioning, or a relation may cross a size threshold that leads to a change in physical join implementation. It is not difficult to construct reasonable scenarios where classical heuristics dramatically fail (Figure~\ref{teaser}). Consider the query workload and dataset in the Join Order Benchmark~\citep{leis2015good}. A popular heuristic from the original Selinger optimizer is to prune the search space to only include left-deep join orders. Prior work showed that left-deep plans are extremely effective on this benchmark for cost models that prefer index joins~\citep{leis2015good}. Experimentally, we found this to be true as well: the worst-case cost over the entire workload is only 2$\times$ higher than the true optimum (for an exponentially smaller search space). However, when we simply change the cost model to be more non-linear, consisting of (1) hybrid hash join operators that spill partitions to disk when data size exceeds available memory, or (2) hash join operators that can re-use previously built hash tables, suddenly the left-deep heuristic is no longer a good idea---it is almost 50$\times$ more costly than the true optimum.
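As a toy illustration of this kind of threshold non-linearity, consider a hybrid hash join whose cost is roughly linear while the build side fits in memory but jumps discontinuously once partitions must spill to disk. The constants below are our own assumptions for illustration, not the cost models used in our experiments.

```java
// Toy threshold-non-linear join cost (all constants are illustrative assumptions).
public class ThresholdCost {
    static final double MEMORY_TUPLES = 1_000_000; // assumed memory budget for the build side
    static final double SPILL_FACTOR = 3.0;        // assumed penalty for spilling partitions to disk

    // Roughly linear while the build side fits in memory; discontinuous once it spills.
    static double hybridHashJoinCost(double buildSide, double probeSide) {
        double linear = buildSide + probeSide;
        return (buildSide <= MEMORY_TUPLES) ? linear : SPILL_FACTOR * linear;
    }

    public static void main(String[] args) {
        // A tiny increase in the build side across the threshold triples the cost:
        // exactly the kind of cliff that breaks heuristics tuned for linear costs.
        System.out.println(hybridHashJoinCost(1_000_000, 500_000));
        System.out.println(hybridHashJoinCost(1_000_001, 500_000));
    }
}
```

Under such a cost function, the relative order of two candidate joins can flip depending on intermediate result sizes, which is why a fixed pruning heuristic such as left-deep enumeration can be arbitrarily far from optimal.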
These results illustrate that in a practical sense, the search problem is unforgiving: various heuristics have different weak spots where they fail by orders of magnitude relative to optimal. For example, success on such atypical or non-linear cost models may require searching over ``bushy'' plans, not just left-deep ones. With new hardware innovations~\citep{arulraj2017build} and a move towards serverless RDBMS architectures~\citep{aurora}, it is not unreasonable to expect a multitude of new query cost models that significantly differ from existing literature, which might require a complete redesign of standard pruning heuristics. Ideally, instead of a fixed heuristic, we would want a strategy to guide the search space in a more data-driven way---tailoring the search to a specific database instance, query workload, and observed join costs. This sets up the main premise of the paper: would it be possible to use data-driven machine learning methods to identify such a heuristic from data? \subsection{Example} We focus on the classical problem of searching for a query plan made up of binary join operators and unary selections, projections, and access methods. We will use the following database of three relations denoting employee salaries as a running example throughout the paper: \[ \text{Emp}(id, name, rank) ~~ \text{Pos}(rank, title, code) ~~ \text{Sal}(code, amount) \] Consider the following join query: \vspace{-0.7cm} \begin{lstlisting} SELECT * FROM Emp, Pos, Sal WHERE Emp.rank = Pos.rank AND Pos.code = Sal.code \end{lstlisting} There are many possible orderings to execute this query. For example, one could execute the example query as $Emp \bowtie (Sal \bowtie Pos)$, or as $Sal \bowtie (Emp \bowtie Pos)$. \subsection{Reinforcement Learning} Bellman's ``Principle of Optimality'' and the characterization of dynamic programming is one of the most important results in computing~\citep{bellman2013dynamic}. 
In addition to forming the basis of relational query optimization, it has a deep connection to a class of stochastic processes called Markov Decision Processes (MDPs), which formalize a wide range of problems from path planning to scheduling. In an MDP model, an agent makes a sequence of decisions with the goal of optimizing a given objective (e.g., improving performance or accuracy). Each decision is dependent on the current state, and typically leads to a new state. The process is ``Markovian'' in the sense that the system's current state completely determines its future progression. Formally, an MDP consists of a five-tuple: \[ \langle S, A, P(s,a), R(s,a), s_0 \rangle \] where $S$ describes a set of states that the system can be in, $A$ describes the set of actions the agent can take, $s' \sim P(s,a)$ describes a probability distribution over new states given a current state and action, and $s_0$ defines a distribution of initial states. $R(s,a)$ is the reward of taking action $a$ in state $s$. The reward measures the performance of the agent. The objective of an MDP is to find a decision policy $\pi: S \mapsto A$, a function that maps states to actions, with the maximum expected reward: \begin{equation*} \begin{aligned} & \underset{\pi}{\argmax} & & \mathbf{E}\left[\sum_{t=0}^{T-1} R(s_t,a_t)\right] \\ & \text{subject to} & & s_{t+1} \sim P(s_t,a_t), a_t = \pi(s_t). \end{aligned} \end{equation*} As with dynamic programming in combinatorial problems, most MDPs are difficult to solve exactly. Note that the greedy solution, eagerly maximizing the reward at each step, might be suboptimal in the long run. Generally, analytical solutions to such problems scale poorly in the time horizon. Reinforcement learning (RL) is a class of stochastic optimization techniques for MDPs~\citep{sutton1998reinforcement}.
An RL algorithm uses sampling, taking randomized sequences of decisions, to build a model that correlates decisions with improvements in the optimization objective (cumulative reward). The extent to which the model is allowed to extrapolate depends on how the model is parameterized. One can parameterize the model with a table (i.e., exact parameterization) or one can use any function approximator (e.g., linear functions, nearest neighbors, or neural networks). Using a neural network in conjunction with RL, or Deep RL, is the key technique behind recent results like learning how to autonomously play Atari games~\citep{mnih2015human} and the game of Go~\citep{silver2016mastering}. \subsection{Markov Model of Enumeration} Now, we will review standard ``bottom-up'' join enumeration, and then, we will make the connection to a Markov Decision Process. Every join query can be described as a query graph, where edges denote join conditions between tables and vertices denote tables. Any dynamic programming join optimizer implementation needs to keep track of its progress: what has already been done in a particular subplan (which relations were already joined up) and what options remain (which relations--whether base or the result of joins--can still be ``joined in'' with the subplan under consideration). The query graph formalism allows us to represent this state. \begin{definition}[Query Graph] A query graph $G$ is an undirected graph, where each relation $R$ is a vertex and each join predicate $\rho$ defines an edge between vertices. Let $\kappa_G$ denote the number of connected components of $G$. \end{definition} Making a decision to join two subplans corresponds to picking two vertices that are connected by an edge and merging them into a single vertex. Let $G=(V,E)$ be a query graph. 
Applying a join $c=(v_i, v_j)$ to the graph $G$ defines a new graph with the following properties: (1) $v_i$ and $v_j$ are removed from $V$, (2) a new vertex $(v_i+v_j)$ is added to $V$, and (3) the edges of $(v_i+v_j)$ are the union of the edges incident to $v_i$ and $v_j$. Each join reduces the number of vertices by $1$. Each plan can be described as a sequence of such joins $c_1 \circ c_2 \circ \cdots \circ c_T$ until $|V| = \kappa_G$. The above description embraces another System R heuristic: ``avoiding Cartesian products''. We can relax that heuristic by simply adding edges to $G$ at the start of the algorithm, to ensure it is fully connected. Going back to our running example, suppose we start with a query graph consisting of the vertices $(Emp, Pos, Sal)$. Let the first join be $c_1 = (Emp, Pos)$; this leads to a query graph where the new vertices are $(Emp+Pos, Sal)$. Applying the only remaining possible join, we arrive at a single remaining vertex $Sal+(Emp+Pos)$ corresponding to the join plan $Sal \bowtie (Emp \bowtie Pos)$. The join optimization problem is to find the best possible join sequence---i.e., the best query plan. Also note that this model can be simply extended to capture physical operator selection as well. The set of allowed joins can be typed with an eligible join type, e.g., $c=(v_i, v_j, \textsf{HashJoin})$ or $c=(v_i, v_j, \textsf{IndexJoin})$. We assume access to a cost model $J(c) \mapsto \mathbb{R}_+$, i.e., a function that estimates the incremental cost of a particular join. \begin{problem}[Join Optimization Problem] Let $G$ define a query graph and $J$ define a cost model. Find a sequence $c_1 \circ c_2 \circ \cdots \circ c_T$ terminating in $|V| = \kappa_G$ to minimize: \begin{equation*} \begin{aligned} & \min_{c_1,...,c_T} & & \sum_{i=1}^T J(c_i) \\ & \text{subject to} & & G_{i+1} = c_i(G_i).
\end{aligned} \end{equation*} \label{joinopt} \end{problem} \vspace{-.25cm} Note how this problem statement exactly defines an MDP (albeit by convention a minimization problem rather than maximization). $G$ is a representation of the \textbf{state}, $c$ is a representation of the \textbf{action}, the vertex merging process defines the state transition $P(G,c)$, and the reward function is the negative cost $-J$. The solution to an MDP is a function that maps a given query graph to the best next join. Before proceeding, we summarize our notation in Table~\ref{table:notation}. \subsection{Long Term Reward of a Join} \begin{table}[t]\centering \small% \ra{1.3} \begin{tabular}{@{} l l @{}} \toprule \textbf{\emph{Symbol}} & \textbf{\emph{Definition}} \\ \midrule $G$ & A query graph. This is a \emph{state} in the MDP. \\ $c$ & A join. This is an \textit{action}. \\ $G'$ & The resultant query graph after applying a join. \\ $J(c)$ & A cost model that scores joins. \\ \bottomrule \end{tabular} \vspace{0.25em} \caption{\small{Notation used throughout the paper.}\label{table:notation}} \vspace{-0.8cm} \end{table} To introduce how RL gives us a new perspective on this classical database optimization problem, let us first examine the greedy solution. A naive solution is to optimize each $c_i$ independently (also called Greedy Operator Optimization~\citep{neumann2018adaptive}). The algorithm proceeds as follows: (1) start with the query graph, (2) find the lowest-cost join, (3) update the query graph, and repeat until only one vertex is left. The greedy algorithm, of course, does not consider how local decisions might affect future costs. For illustration, consider our running example query with the following simple costs (assume a single join method with symmetric cost): \[J(EP)= 100,~J(SP)= 90,~J((EP)S)= 10,~J((SP)E)= 50\] The greedy solution would result in a cost of 140 (because it neglects the future effects of a decision), while the optimal solution has a cost of 110.
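These two totals can be checked mechanically. The sketch below is a toy computation using only the four illustrative costs above: greedy commits to $S \bowtie P$ for 90 and must then pay 50, while the optimal order pays 100 up front and only 10 afterwards.

```java
// Checking the running example's arithmetic: greedy pays 90+50=140, optimal pays 100+10=110.
public class GreedyExample {
    // Illustrative join costs from the text (symmetric, single join method).
    static final double J_EP = 100, J_SP = 90, J_EPS = 10, J_SPE = 50;

    // Greedy takes the locally cheapest first join (S join P at 90), then must pay the follow-up.
    static double greedyCost() {
        return (J_SP < J_EP) ? J_SP + J_SPE : J_EP + J_EPS;
    }

    // Exhaustive enumeration of the two possible join orders.
    static double optimalCost() {
        return Math.min(J_EP + J_EPS, J_SP + J_SPE);
    }

    public static void main(String[] args) {
        System.out.println("greedy=" + greedyCost() + " optimal=" + optimalCost());
    }
}
```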
However, there is an upside: this greedy algorithm has a computational complexity of $O(|V|^3)$, despite the super-exponential search space. The greedy solution is suboptimal because the decision at each step fails to consider the long-term value of its action. One might have to sacrifice a short-term benefit for a long-term payoff. Consider the optimization problem for a particular query graph $G$: \begin{equation} V(G) = \min_{c_1,...,c_T} \sum_{i=1}^T J(c_i) \label{eq:main} \end{equation} In classical treatments of dynamic programming, this function is termed the \emph{value function}. It is noted that optimal behavior over an entire decision horizon implies optimal behavior from any starting index $t>1$ as well, which is the basis for the idea of dynamic programming. Conditioned on the current join, we can rewrite Equation~\ref{eq:main} in the following form: \[ V(G) = \min_{c} Q(G,c) \] \[ Q(G,c) = J(c) + V(G') \] leading to the following recursive definition of the \emph{Q-function} (or cost-to-go function): \begin{equation} Q(G,c) = J(c) + \min_{c'} Q(G',c') \label{eq:q} \end{equation} Intuitively, the Q-function describes the long-term value of each join: the cumulative cost if we act optimally for all subsequent joins after the current join decision. Knowing $Q$ is equivalent to solving the problem, since the local optimization $\min_{c'} Q(G',c')$ is sufficient to derive an optimal sequence of join decisions. If we revisit the greedy algorithm and revise it hypothetically as follows: (1) start with the query graph, (2) find the lowest \emph{Q-value} join, (3) update the query graph, and repeat, then this algorithm has the same computational complexity of $O(|V|^3)$ but is provably optimal. To sketch out our solution, we will use Deep RL to approximate a global Q-function (one that holds for all query graphs in a workload), which gives us a polynomial-time algorithm for join optimization.
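For the running example, the Q-values of the two candidate first joins can be computed by hand from the recursion in Equation~\ref{eq:q}. The sketch below is a toy check with our own names: the last join has no successor, so its Q-value is just its cost, and greedily minimizing Q-values (rather than immediate costs) recovers the optimal plan with purely local decisions.

```java
// Q-values of the two first joins in the running example,
// from the recursion Q(G,c) = J(c) + min_{c'} Q(G',c').
public class QValueExample {
    static final double J_EP = 100, J_SP = 90, J_EPS = 10, J_SPE = 50;

    // After the first join, only one join remains, so its Q-value equals its cost.
    static double qJoinEPFirst() { return J_EP + J_EPS; } // 100 + 10
    static double qJoinSPFirst() { return J_SP + J_SPE; } // 90 + 50

    // Greedy-on-Q selects E join P first (110 < 140) and attains the optimal total cost,
    // whereas greedy-on-J would have selected S join P and paid 140.
    static double greedyOnQ() { return Math.min(qJoinEPFirst(), qJoinSPFirst()); }

    public static void main(String[] args) {
        System.out.println(greedyOnQ());
    }
}
```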
\subsection{Applying Reinforcement Learning} \label{sec:apply-rl} An important class of reinforcement learning algorithms, called Q-learning algorithms, allows us to approximate the Q-function from samples of data~\citep{sutton1998reinforcement}. What if we could regress from features of $(G,c)$ to the future cumulative cost based on a small number of observations? Practically, we can observe samples of decision sequences containing $(G,c, J(c), G')$ tuples, where $G$ is the query graph, $c$ is a particular join, $J(c)$ is the cost of the join, and $G'$ is the resultant graph. Such a sequence can be extracted from any final join plan by evaluating the cost model on its subplans. Let us further assume we have a parameterized model for the Q-function, $Q_\theta$: \[ Q_\theta(f_G,f_c) \approx Q(G,c) \] where $f_G$ is a \emph{feature vector} representing the query graph and $f_c$ is a feature vector representing a particular join. $\theta$ denotes the model parameters, which are randomly initialized at the start. For each training tuple $i$, one can calculate the following label, or the ``estimated'' Q-value: \[ y_i = J(c_i) + \min_{c'} Q_\theta(G'_i,c') \] The $\{y_i\}$ can then be used as labels in a regression problem. If $Q_\theta$ were the true Q-function, then the following recurrence would hold: \[ Q_\theta(G,c) = J(c) + \min_{c'} Q_\theta(G',c') \] So, the learning process, or \emph{Q-learning}, defines a loss at each iteration: \[ L(\theta) = \sum_{i} \|y_i - Q_\theta(G_i,c_i)\|_2^2 \] The parameters of the Q-function can then be optimized with gradient descent until convergence. RL yields two key benefits: (1) the search cost for a single query relative to traditional query optimization is radically reduced, since the algorithm has the time-complexity of greedy search, and (2) the parameterized model can potentially learn across queries that have ``similar'' but non-identical subplans.
This is because the similarity between subplans is determined by the query graph and join featurizations, $f_G$ and $f_c$; thus, if they are designed in a sufficiently expressive way, the neural network can be trained to extrapolate the Q-function estimates to an entire workload. The specific choice of Q-learning is important here (compared to other RL algorithms). First, it allows us to take advantage of optimal substructures during training and greatly reduce the data needed. Second, compared to policy learning~\citep{marcus2018deep}, Q-learning outputs \emph{a score for each join that appears in any subplan} rather than simply selecting the best join. This is more amenable to deep integration with existing query optimizers, which have additional state like interesting orders and their own pruning of plans. Third, the scoring model allows for top-k planning rather than just getting the best plan. We note that the design of Q-learning variants is an active area of research in AI~\citep{hester2017deep, van2016deep}, so we opted for the simplicity of a Deep Q-learning approach and defer incorporation of advanced variants to future work. \subsection{Reinforcement Learning vs. Supervised Learning} Reinforcement Learning and Supervised Learning can seem very similar, since the underlying inference methods in RL algorithms are often similar to those used in supervised learning and statistical estimation. Here is how we justify our terminology. In supervised learning, one has paired training examples with ground-truth labels (e.g., an image with a labeled object). For join optimization, this would mean a dataset where the example is the current join graph and the label is the next best join decision from an oracle. In the context of sequential planning, this problem setting is often called Imitation Learning~\citep{osa2018algorithmic}, where one imitates an oracle as closely as possible.
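The difference shows up concretely in how training targets are formed. As a minimal, stdlib-only sketch (illustrative names; a simple linear model stands in for the neural network $Q_\theta$), the Q-learning regression targets are bootstrapped from the current model via the Bellman recurrence rather than supplied by an oracle:

```python
# A minimal sketch of one Q-learning iteration (illustrative; DQ's actual
# model is a neural network, not this linear stand-in).

def q_value(theta, feat):
    # Linear Q-model: Q_theta(f) = theta . f
    return sum(t * f for t, f in zip(theta, feat))

def q_learning_step(theta, transitions, lr=0.01):
    """transitions: list of (feat, cost, next_feats), where `feat` encodes
    (G, c), `cost` is J(c), and `next_feats` are the features of the joins
    available in the resultant graph G' (empty when G' is terminal)."""
    for feat, cost, next_feats in transitions:
        # Bootstrapped target: y = J(c) + min_{c'} Q_theta(G', c')
        target = cost + (min(q_value(theta, nf) for nf in next_feats)
                         if next_feats else 0.0)
        # Gradient step on the squared loss (y - theta.f)^2
        err = target - q_value(theta, feat)
        theta = [t + lr * 2 * err * f for t, f in zip(theta, feat)]
    return theta
```

Here `transitions` corresponds to the $(G, c, J(c), G')$ tuples described earlier, with graphs and joins already featurized; no ground-truth "next best join" ever appears.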
As in~\citep{levine2018learning}, the term ``Reinforcement Learning'' refers to a class of empirical solutions to Markov Decision Process problems where we do \textit{not} have the ground-truth, optimal next steps; instead, learning is guided by numeric ``rewards'' for next steps. In the context of join optimization, these rewards are subplan costs. RL rewards may be provided by a real-world experiment, a simulation model, or some other oracular process. In our work below, we explore different reward functions including both real-world feedback (\secref{subsec:feedback}) and simulation via traditional plan cost estimation (\secref{sec:data-collection}). RL purists may argue that access to any optimization oracle moves our formulation closer to supervised learning than classical RL. We maintain this terminology because we see the pre-training procedure as a useful prior. Rather than expensive, \emph{ab initio} learning from executions, we learn a useful (albeit imperfect) join optimization policy offline. This process bootstraps a more classical ``learning-by-doing'' RL process online that avoids executing grossly suboptimal query plans. There is additional subtlety in the choice of algorithm. Most modern RL algorithms collect data episodically (execute an entire query plan and observe the final result). This makes sense in fields like robotics or autonomous driving where actions may not be reversible or decomposable. In query optimization, every query consists of subplans (each of which is its own ``query''). Episodic data collection ignores this compositional structure. \section{Discussion, Limitations, and Conclusion} \label{sec:extensions} We presented our method with a featurization designed for inner joins over foreign key relations, as these were the major join queries in our benchmarks. This is not a fundamental restriction and is designed to ease exposition.
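As a reminder of the kind of encoding involved, the following is a minimal sketch of attribute-level 1-hot featurization (illustrative only; the schema and helper names are hypothetical): the query graph is encoded by the attributes it involves, and a candidate join by the attributes on its two sides.

```python
# An illustrative sketch of 1-hot attribute featurization (hypothetical
# schema). The concatenation f_G ++ f_c is the input to Q_theta(f_G, f_c).

ATTRS = ["t1.a", "t1.b", "t2.c", "t3.d"]   # stands in for A, all attributes

def one_hot(attrs):
    return [1.0 if a in attrs else 0.0 for a in ATTRS]

def featurize(graph_attrs, left_attrs, right_attrs):
    f_G = one_hot(graph_attrs)                         # query graph features
    f_c = one_hot(left_attrs) + one_hot(right_attrs)   # candidate join features
    return f_G + f_c
```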
It is relatively straightforward to extend this model to join conditions composed of conjunctions of binary expressions. Assume the maximum number of expressions in the conjunction is capped at $\mathcal{N}$. As before, let $A$ be the set of all attributes in the database. Each expression has two attributes and an operator. As with featurizing the vertices, we can 1-hot encode the attributes present. We additionally have to 1-hot encode the binary operators $\{=,\neq,<,>\}$. For each expression in the conjunctive predicate, we concatenate the 1-hot feature vectors of its operator and attributes. Since the maximum number of expressions in the conjunction is capped at $\mathcal{N}$, we can get a fixed-size feature vector for all predicates. More broadly, we believe \textsf{DQ}\xspace is a step towards a learning query optimizer. As illustrated by the Cascades optimizer~\citep{graefe1995cascades} and follow-on work, cost-based dynamic programming---whether bottom-up or top-down with memoization---need not be restricted to select-project-join blocks. Most query optimizations can be recast into a space of algebraic transformations amenable to dynamic programming, including asymmetric operators like outer joins, cross-block optimizations such as order optimizations and ``sideways information passing'', and even non-relational operators like PIVOT. The connection between RL and Dynamic Programming presented in this paper can be easily leveraged in those scenarios as well. Of course, this blows up the search space, and large spaces are ideal for solutions like the one we proposed. It is popular in recent AI research to try ``end-to-end'' learning, where problems that were traditionally factored into subproblems (e.g., self-driving cars involve separate models for localization, obstacle detection and lane-following) are learned in a single unified model.
One can imagine a similar architectural ambition for an end-to-end learning query optimizer, which simply maps subplan features to measured runtimes. This would require a significant corpus of runtime data to learn from, and changes to the featurization and perhaps the deep network structure we used here. \textsf{DQ}\xspace is a pragmatic middle ground that exploits the structure of the join optimization problem. Further exploring the extremes of learning and query optimization in future work may yield additional insights. \section{Evaluation} \label{sec:eval} We extensively evaluate \textsf{DQ}\xspace to investigate the following major questions: \begin{itemize} \item How effective is \textsf{DQ}\xspace in producing plans, how good are they, and under what conditions (\secref{sec:eval-cm1}, \secref{sec:eval-cm2}, \secref{sec:eval-cm3})? \item How efficient is \textsf{DQ}\xspace at producing plans, in terms of runtimes and required data (\secref{eval:plan-latency}, \secref{eval:data-quantity}, \secref{eval:data-relevance})? \item Do \textsf{DQ}\xspace's techniques apply to real-world scenarios, systems, and workloads (\secref{eval:real-systems}, \secref{sec:eval-feedback})? \end{itemize} To address the first two questions, we run experiments on standalone \textsf{DQ}\xspace. The last question is evaluated with end-to-end experiments on \textsf{DQ}\xspace-integrated Postgres and SparkSQL. \subsection{Standalone Optimization Experiments} \label{subsec:standalone} We implemented \textsf{DQ}\xspace and a wide variety of optimizer search techniques previously benchmarked in Leis et al.~\citep{leis2015good} in a standalone Java query optimizer harness. Apache Calcite is used for parsing SQL and representing the SQL AST. We first evaluate standalone \textsf{DQ}\xspace and other optimizers for final plan costs; unless otherwise noted, exploration (\secref{sec:data-collection}) and real-execution feedback (\secref{subsec:feedback}) are turned off.
We use the Join Order Benchmark (JOB)~\citep{leis2015good}, which is derived from the real IMDB dataset (3.6GB in size; 21 tables). The largest table has 36 million rows. The benchmark contains 33 templates and 113 queries in total. The joins have between 4 and 15 relations, with an average of 8 relations per query. We revisit a motivating claim from earlier: heuristics are well-understood when the cost model is linear, but non-linearities can lead to significant suboptimality. The experiments are intended to illustrate that \textsf{DQ}\xspace offers a form of \emph{robustness to the cost model}, meaning that it prioritizes plans tailored to the structure of the cost model, workload, and physical design---even when these plans are bushy. We consider 3 cost models: CM1 is a model for a main-memory database; CM2 additionally considers limited-memory hash joins, where after a threshold the costs of spilling partitions to disk are considered; CM3 additionally considers the re-use of already-built hash tables by upstream operators. We compare with the following baselines: QuickPick-1000 (\textbf{QP})~\citep{waas2000join} selects the best of 1000 random join plans; IK-KBZ (\textbf{KBZ})~\citep{krishnamurthy1986optimization} is a polynomial-time heuristic that decomposes the query graph into chains and orders them; the dynamic programs Right-deep (\textbf{RD}), Left-deep (\textbf{LD}), Zig-zag (\textbf{ZZ})~\citep{ziane1993parallel}, and Exhaustive (\textbf{EX}) exhaustively enumerate join plans with the indicated plan shapes. Details of the setup are listed in Appendix~\secref{appendix:cost-details}. Results of this set of experiments are shown in Table~\ref{table:standalone-combined}. \subsubsection{Cost Model 1} \label{sec:eval-cm1} Our results on CM1 reproduce the conclusions of Leis et al.~\citep{leis2015good}, where left-deep plans are generally good (utilize indexes well) and there is little need for zigzag or exhaustive enumeration.
\textsf{DQ}\xspace is competitive with these optimal solutions without \emph{a priori} knowledge of the index structure. In fact, \textsf{DQ}\xspace significantly outperforms the other heuristic solutions \textbf{KBZ} and \textbf{QP}. While it is true that \textbf{KBZ} also restricts its search to left-deep plans, it is suboptimal for cyclic join graphs---its performance is hindered since almost all JOB queries contain cycles. We found that \textbf{QP} struggles with the physical operator selection, and a significant number of random samples are required to find a narrow set of good plans (ones that use indexes effectively). Unsurprisingly, these results show that \textsf{DQ}\xspace, a learning-based solution, reasonably matches performance on cases where good heuristics exist. On average, \textsf{DQ}\xspace is within 22\% of the \textbf{LD} solution and in the worst case only 1.45$\times$ worse. \subsubsection{Cost Model 2} \label{sec:eval-cm2} By simply changing to a different, yet realistic, cost model, we can force the left-deep heuristics to perform poorly. CM2 accounts for disk usage in hybrid hash joins. In this cost model, none of the heuristics match the exhaustive search over the entire workload. Since the costs are largely symmetric for small relation sizes, there is little benefit to either left-deep or right-deep pruning. Similarly, zig-zag trees are only slightly better, and the heuristic methods fail by orders of magnitude on their worst queries. \textsf{DQ}\xspace still comes close to the quality of exhaustive enumeration ($1.68\times$ on average). It does not perform as well as in CM1 (with its worst query about $12\times$ the optimal cost) but is still significantly better than the alternatives. Results on CM2 suggest that as memory becomes more limited, heuristics begin to diverge more from the optimal solution. We explored this phenomenon further and report results in Table~\ref{table:mem-limit}.
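The memory sensitivity underlying CM2 can be illustrated with a toy hybrid-hash cost function (our illustrative sketch, not the exact model used in the experiments): once the build side exceeds the memory budget, both inputs pay an extra write-and-reread pass for spilled partitions, which penalizes plan shapes that accumulate large build-side intermediates.

```python
# A toy cost function in the spirit of CM2 (illustrative only). Costs are in
# units of tuples processed; `mem_tuples` is the in-memory budget M.

def hash_join_cost(build_card, probe_card, mem_tuples):
    cost = build_card + probe_card                 # build + probe, one pass each
    if build_card > mem_tuples:                    # hybrid hash spills to disk:
        cost += 2 * (build_card + probe_card)      # write + re-read partitions
    return cost
```

Under such a model the cost is no longer symmetric in memory: shrinking $M$ abruptly triples the cost of joins whose build side spills, which is consistent with the divergence of the heuristics in Table~\ref{table:mem-limit}.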
\begin{table}[t!]\centering \small% \begin{tabular}{@{} l l l l l @{}} \toprule & {$M=10^8$} & {$M=10^6$} & {$M=10^4$} & {$M=10^2$} \\ \midrule {\bf KBZ} & 1.0 & 3.31 & 30.64 & 41.64 \\ {\bf LD} & 1.0 & 1.09 & 6.45 & 6.72 \\ {\bf EX} & 1.0 & 1.0 & 1.0 & 1.0 \\ \textsf{DQ}\xspace & 1.04 & 1.42 & 1.64 & 1.56 \\ \bottomrule \end{tabular} \vspace{0.25em} \caption{\small{Cost Model 2: mean relative cost vs. memory limit (number of tuples in memory).} \label{table:mem-limit}} \vspace{-0.6cm} \end{table} \subsubsection{Cost Model 3} \label{sec:eval-cm3} Finally, we illustrate results on CM3, which allows for the reuse of hash tables. Right-deep plans are no longer inefficient in this model, as they facilitate reuse of the hash table (note that ``right'' and ``left'' are simply conventions and there is nothing important about the labels). The challenge is that now plans have to contain a mix of left-deep and right-deep structures. The zig-zag tree pruning heuristic was designed exactly for cases like this. Surprisingly, \textsf{DQ}\xspace is significantly better than zig-zag enumeration ($1.7\times$ on average and in the worst case). We observed that bushy plans were necessary in a small number of queries, and \textsf{DQ}\xspace found such lower-cost solutions. In summary, results in Table~\ref{table:standalone-combined} show that \textsf{DQ}\xspace is robust against different cost model regimes, since it learns to adapt to the workload at hand. \subsubsection{Planning Latency} \label{eval:plan-latency} Next, we report the planning (optimization) time of \textsf{DQ}\xspace and several other optimizers across the entire 113 JOB queries. The same model in \textsf{DQ}\xspace is used to plan all queries. Implementations are written in Java, single-threaded\footnote{To ensure fairness, for \textsf{DQ}\xspace we configure the underlying linear algebra library to use 1 thread.
No GPU is used.}, and reasonably optimized at the algorithmic level (e.g., QuickPick would short-circuit a partial plan already estimated to be more costly than the current best plan)---but no significant efforts are spent on low-level engineering. Hence, the relative magnitudes are more meaningful than the absolute values. Experiments were run on an AWS EC2 c5.9xlarge instance with a 3.0GHz CPU and 72GB memory. Figure~\ref{exp:planning-latency} reports the runtimes grouped by number of relations. In the small-join regime, \textsf{DQ}\xspace's overheads are attributed to interfacing with a JVM-based deep learning library, \textsf{DL4J} (creating and filling the featurization buffers; JNI overheads due to native CPU backend execution). These could have been optimized away by targeting a non-JVM engine and/or GPUs, but we note that when the number of joins is small, exhaustive enumeration would be the ideal choice. In the large-join regime, \textsf{DQ}\xspace achieves drastic speedups: for the largest joins, \textsf{DQ}\xspace runs up to 10,000$\times$ faster than exhaustive enumeration and $>10\times$ faster than left-deep. \textsf{DQ}\xspace upper-bounds the number of neural net invocations by the number of relations in a query, and additionally benefits from the batching optimization (\secref{sec:inference}). We believe this is a profound performance argument for a learned optimizer---it would have an even more unfair advantage when applied to larger queries or executed on specialized accelerators~\citep{jouppi2017datacenter}. \begin{figure} \centering \includegraphics[width=0.9\columnwidth,keepaspectratio]{exp/standalone-planning-new-1.pdf} \vspace{-0.5cm} \caption{\small{ Optimization latency (log-scale) on all JOB queries grouped by number of relations in each query (\secref{eval:plan-latency}).
A total of 5 trials are run; standard deviations are negligible and hence omitted.% } \label{exp:planning-latency}} \vspace{-.3cm} \end{figure} \subsubsection{Quantity of Training Data} \label{eval:data-quantity} How much training data does \textsf{DQ}\xspace need to become effective? To study this, we vary the number of training queries given to \textsf{DQ}\xspace and plot the mean relative cost using the cross-validation technique described before. Figure~\ref{exp:plot2} shows the relationship. \textsf{DQ}\xspace requires about 60-80 training queries to become competitive and about 30 queries to match the plan costs of QuickPick-1000. \begin{figure} \centering \includegraphics[width=0.9\columnwidth,keepaspectratio]{exp/exp2_plot1.png} \vspace{-0.4cm} \caption{\small{Mean relative cost (in log-scale) as a function of the number of training queries seen by \textsf{DQ}\xspace. We include QuickPick-1000 as a baseline. Cost Model 1 is used.} \label{exp:plot2}} \vspace{-0.1cm} \end{figure} Digging deeper, we found that the break-even point of 30 queries roughly corresponds to seeing all relations in the schema at least once. In fact, we can train \textsf{DQ}\xspace on small queries and test it on larger ones---as long as the relations are covered well. To investigate this generalization power, we trained \textsf{DQ}\xspace on all queries with at most 9 and at most 8 relations, respectively, and tested on the remaining queries (out of a total of 113). For comparison, we include a baseline scheme of training on 80 random queries and testing on 33; see Table~\ref{table:generalization}. The table shows that even when trained only on smaller joins, \textsf{DQ}\xspace performs relatively well and generalizes to larger joins (recall, the workload contains up to 15-way joins). This indicates that \textsf{DQ}\xspace indeed learns \emph{local structures}---efficient joining of small combinations of relations.
When those local structures do not sufficiently cover the cases of interest during deployment, we see degraded performance. \subsubsection{Relevance and Quality of Training Data} \label{eval:data-relevance} Quantity of training data matters, and so do \textit{relevance} and \textit{quality}. We first study relevance, i.e., the degree of similarity between the sampled training data and the test queries. This is controlled by changing the training data sampling scheme. Figure~\ref{exp:plot3} plots the performance of different data sampling techniques, each with 80 training queries. It confirms that the more relevant the training queries are to the test workload, the less data is required for good performance. \begin{figure}[t!] \centering \includegraphics[width=0.9\columnwidth,keepaspectratio]{exp/exp2_plot2.png} \vspace{-0.4cm} \caption{\small{ Relevance of training data vs. \textsf{DQ}\xspace's plan cost. {\sf R80} is a dataset sampled independently of the JOB queries with random joins/predicates from the schema. {\sf R80wp} has random joins as before but contains the workload's predicates. {\sf WK80} includes 80 actual queries sampled from the workload. {\sf T80} describes a scheme where each of the 33 query templates is covered at least once in sampling. These schemes are increasingly ``relevant''. Costs are relative w.r.t. \textbf{EX}. }\label{exp:plot3}} \vspace{-0.2cm} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{exp/exp2_plot3.png} \vspace{-0.7cm} \caption{ \small{Quality of training data vs. \textsf{DQ}\xspace's plan cost. \textsf{DQ}\xspace is trained on data collected from QuickPick-1000, left-deep, or the bushy (exhaustive) optimizer. Data variety boosts convergence speed and final quality. Costs are relative w.r.t. \textbf{EX}.
}\label{exp:plot4}} \vspace{-0.4cm} \end{figure} \begin{table}[t!]\centering \small% \begin{tabular}{@{} l c c @{}} \toprule & {\bf \# Training Queries} & {\bf Mean Relative Cost} \\ \midrule {\bf Random} & 80 & 1.32 \\ {\bf Train} $\leq$ 9-way & 82 & 1.61 \\ {\bf Train} $\leq$ 8-way & 72 & 9.95 \\ \bottomrule \end{tabular} \vspace{0.25em} \caption{\small{\textsf{DQ}\xspace trained on small joins and tested on larger joins. Costs are relative to optimal plans. }\label{table:generalization}} \vspace{-0.7cm} \end{table} Notably, it also shows that even synthetically generated random queries ({\sf R80}) are useful. \textsf{DQ}\xspace still achieves a lower relative cost compared to QuickPick-1000 even with random queries (4.16 vs. 23.87). This experiment illustrates that \textsf{DQ}\xspace does not actually require \emph{a priori} knowledge of the workload. Next, we study the \textit{quality} of training data, i.e., the optimality of the native planner \textsf{DQ}\xspace observes and gathers data from. We collect a varying amount of data sampled from the native optimizer, which we choose to be QuickPick-1000, left-deep, or bushy ({\bf EX}). Figure~\ref{exp:plot4} shows that all methods allow \textsf{DQ}\xspace to quickly converge to good solutions. The DP-based methods, left-deep and bushy, converge faster as they produce final plans and optimal subplans per query. In contrast, QuickPick yields only 1000 random full plans per query. The optimal subplans from the dynamic programs offer data variety valuable for training, and they better cover the space of relation combinations that might be seen in testing. \subsection{Real Systems Execution} \label{eval:real-systems} It is natural to ask: how difficult and effective is it for a production-grade system to incorporate \textsf{DQ}\xspace?
We address this question by integrating \textsf{DQ}\xspace into two systems, PostgreSQL and SparkSQL.\footnote{Versions: Spark 2.3; Postgres master branch checked out on 9/17/18.} The integrations were found to be straightforward: Postgres and SparkSQL each took less than 300 LoC of changes; in total about two person-weeks were spent. \subsubsection{Postgres Integration} \label{sec:postgres-integration} \textsf{DQ}\xspace integrates seamlessly with the bottom-up join ordering optimizer in Postgres. The original optimizer's DP table lookup is replaced with the invocation of \textsf{DQ}\xspace's TensorFlow (TF) neural network through the TF C API. As discussed in \secref{eval:plan-latency}, plans are batch-evaluated to amortize the TF invocation overhead. We run the Join Order Benchmark experiments on the integrated artifact and present the results below. All of the learning utilizes the cost model and cardinality estimates provided by Postgres. \vspace{0.5em} \noindent \textbf{Training.} \textsf{DQ}\xspace observes the native cost model and cardinality estimates from Postgres. We configured Postgres to consider bushy join plans (the default is to only consider left-deep plans). These plans generate traces of joins and their estimated costs in the form described in~\secref{sec:data-collection}. We \emph{do not} apply any exploration and execute the native optimizer as is. Training data is collected via Postgres' logging interface. Table~\ref{table:postgres-collection-overhead} shows that \textsf{DQ}\xspace can collect training data from an existing system with relatively minimal impact on its normal execution. The overhead can be further minimized if training data is logged asynchronously rather than synchronously. \vspace{0.5em} \noindent \textbf{Runtimes on JOB (Figure~\ref{exp:real}).} We allow the Postgres query planner to plan over 80 of the 113 training queries. We use a 5-fold cross-validation scheme to hold out different sets of 33 queries.
Therefore, each query has at least one validation set in which it was unseen during training. We report the worst-case planning time and execution time for queries that have multiple such runs. In terms of optimization latency, \textsf{DQ}\xspace is significantly faster than Postgres for large joins, up to $3\times$. For small joins there is a substantial overhead due to neural network evaluations (even though \textsf{DQ}\xspace needs to score far fewer join orders). These results are consistent with the standalone experiment in Section~\ref{eval:plan-latency}, and the same comments there on small-join regimes apply. In terms of execution runtimes, \textsf{DQ}\xspace is significantly faster on a number of queries; averaging over the entire workload, \textsf{DQ}\xspace yields a 14\% speedup. \subsubsection{SparkSQL Integration} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{exp/job-postgres-combined.pdf} \vspace{-0.8cm} \caption{\small {Execution and optimization latencies of \textsf{DQ}\xspace and Postgres on JOB. Each point is a query executed by native Postgres (x-axis) and \textsf{DQ}\xspace (y-axis). Results below the $y=x$ line represent a speedup. Optimization latency is the time taken for the full planning pipeline, not just join ordering.} \label{exp:real}} \vspace{-0.2cm} \end{figure} \begin{table}[t]\centering \small% \begin{tabular}{@{} l c c @{}} \toprule & {Median} & {Max} \\ \midrule {\bf Postgres, no collection} & 19.17 ms & 149.53 ms \\ {\bf Postgres, with collection} & 35.98 ms & 184.22 ms \\ \bottomrule \end{tabular} \vspace{0.25em} \caption{\small{Planning latency with collection turned off/on.}} \label{table:postgres-collection-overhead} \vspace{-0.7cm} \end{table} \textsf{DQ}\xspace is also integrated into SparkSQL, a distributed data analytics engine. To show that \textsf{DQ}\xspace's effectiveness applies to more than one workload, we evaluate the integrated result on TPC-DS.
\vspace{0.5em} \noindent \textbf{Training.} SparkSQL 2.3 contains a cost-based optimizer which enumerates bushy plans for queries whose number of relations falls under a tunable threshold. We set this threshold high enough so that all queries are handled by this bushy dynamic program. To score plans, the optimizer invokes \textsf{DQ}\xspace's trained neural net through TensorFlow Java. We use the native SparkSQL cost model and cardinality estimates. All algorithmic aspects of training data collection remain the same as in the Postgres integration. \vspace{0.5em} \noindent \textbf{Effectiveness on TPC-DS (Figure~\ref{fig:spark-tpcds}).} We collect data from and evaluate on 97 out of all 104 queries in TPC-DS v2.4. The data files are generated with a scale factor of 1 and stored as columnar Parquet files. In terms of execution runtimes, \textsf{DQ}\xspace matches SparkSQL over the 97 queries (a mean speedup of 1.0$\times$). In terms of optimization runtimes, \textsf{DQ}\xspace has a mean speedup of 3.6$\times$ but a max speedup of 250$\times$ on the query with the largest number of joins (Q64). Note that the mean optimization speedup here is less drastic than on JOB because TPC-DS queries contain far fewer relations to join. \vspace{0.5em} \noindent \textbf{Discussion.} In summary, the results above show that \textsf{DQ}\xspace is effective not only on the one workload designed to stress-test joins, but also on a well-established decision support workload. Further, we demonstrate the ease of integration into production-grade systems including an RDBMS and a distributed analytics engine. We hope these results provide motivation for developers of similar systems to incorporate \textsf{DQ}\xspace's learning-based join optimization technique.
\subsection{Fine-Tuning With Feedback} \label{sec:eval-feedback} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{exp/tpcds-sf1-36cores-combined.pdf} \vspace{-.8cm} \caption{\small{ Execution and optimization latencies of \textsf{DQ}\xspace and SparkSQL on TPC-DS (SF1). We use an EC2 c5.9xlarge instance with 36 vCPUs. SparkSQL's bushy dynamic program takes 1000 seconds to plan the largest query (Q64, 18-relation join); we include a zoomed-in view of the rest of the planning latencies. Results below the $y=x$ line represent a speedup. Across the workload, \textsf{DQ}\xspace's mean speedup over SparkSQL for execution is 1.0$\times$ and that for optimization is 3.6$\times$. }} \label{fig:spark-tpcds} \vspace{-0.3cm} \end{figure} Finally, we illustrate how \textsf{DQ}\xspace can overcome an inaccurate cost model by fine-tuning with feedback data (\secref{subsec:feedback}). We focus on a specific JOB query, Q10c, where the cost model particularly deviates from the true runtime. Baseline \textsf{DQ}\xspace is trained, as usual, on data collected over the other 112 queries (i.e., every query except Q10c; the values are costs from Postgres' native cost model). For fine-tuning, we execute a varying number of these queries and collect their actual runtimes. To encourage observing a variety of physical operators, we use an exploration parameter of $\epsilon=0.1$ when observing runtimes (recall from \secref{sec:data-collection} that exploration means forming a random intermediate join with probability $\epsilon$). Figure~\ref{exp:fine-tuning} shows the results as a function of the number of queries observed for real execution. Postgres emits a plan that executes in $70.0$s, while baseline \textsf{DQ}\xspace emits a plan that executes in $60.1$s. After fine-tuning, \textsf{DQ}\xspace emits a plan that executes in $20.3$s, outperforming both Postgres and its original performance.
This shows that true runtimes are useful in correcting a faulty cost model and/or cardinality estimates. Interestingly, training a version of \textsf{DQ}\xspace using only real runtimes failed to converge to a reasonable model---this suggests that learning high-level features from inexpensive cost-model samples is beneficial. \begin{figure}[tp] \centering \includegraphics[width=\columnwidth]{exp/fine-tuning.png} \vspace{-.7cm} \caption{Effects of fine-tuning \textsf{DQ}\xspace on JOB Q10c. A modest amount of real execution using around 100 queries allows \textsf{DQ}\xspace to surpass both its original performance (by $3\times$) as well as Postgres (by $3.5\times$). \label{exp:fine-tuning}} \vspace{-0.3cm} \end{figure} \iffalse \subsection{Real Execution} \begin{figure*}[htp!] \centering \includegraphics[width=\textwidth,keepaspectratio]{exp/real.png} \caption{We execute the plans on a PostgreSQL database and measure the runtimes. \textsf{DQF} denotes a learned plan with 50 queries of fine-tuning. Query 1a, 10a, 12a, and 21a from JOB are shown. Runtimes that greatly exceeded the best optimizers are clipped. \label{exp:real}} \end{figure*} Lastly, we use our Apache Calcite connector to execute the learned plans on a real Postgres database. We force join plans constructed by our optimizer suite by setting \textsf{join\_collapse\_limit = 1} in the database and we also set \textsf{enable\_material = false} to disable any intra-query materialization. We load the IMDB data into Postgres and create primary key indexes. All experiments here were run on an EC2 t2.xlarge instance with PostgreSQL 9.2. Figure \ref{exp:real} illustrates the results on 4 queries from JOB. In this physical design, left-deep plans (KBZ, ZZ, LD) are preferred and are almost as effective as exhaustive plans. RD plans are predictably slow as they don't take advantage of the indexes. QP plans are good only when they by chance sample a left-deep strategy.
Learning is competitive with the good plans in all of the real queries. Additionally, we evaluate fine-tuning the network based on past execution data (\secref{subsec:feedback}). We execute 50 random join plans and collect data from their execution; the network is then fine-tuned (bars denoted as ``DQF''). On two of the queries (10a, 21a), we see significant speedups of up to $2\times$; in the other two, there was no statistically significant change due to fine-tuning. \fi \section{Feedback From Execution} \label{subsec:feedback} \begin{table*}[th!]\centering \small% \begin{tabular}{@{} l l l l l l l l l l l l l l @{}} \toprule {\bf Optimizer} && \multicolumn{3}{c}{\bf Cost Model 1} & & \multicolumn{3}{c}{\bf Cost Model 2} & & \multicolumn{3}{c}{\bf Cost Model 3}\\ && {Min} & {Mean} & {Max} && {Min} & {Mean} & {Max} && {Min} & {Mean} & {Max}\\ \midrule QuickPick ({\bf QP}) && 1.0 & 23.87 & 405.04 && 7.43 & 51.84 & 416.18 && 1.43 & 16.74 & 211.13 \\ IK-KBZ ({\bf KBZ}) && 1.0 & 3.45 & 36.78 && 5.21 & 29.61 & 106.34 && 2.21 & 14.61 & 96.14 \\ Right-deep ({\bf RD}) && 4.70 & 53.25 & 683.35 && 1.93 & 8.21 & 89.15 && 1.83 & 5.25 & 69.15 \\ Left-deep ({\bf LD}) && 1.0 & 1.08 & 2.14 && 1.75 & 7.31 & 65.45 && 1.35 & 4.21 & 35.91 \\ Zig-zag ({\bf ZZ}) && 1.0 & 1.07 & 1.87 && 1.0 & 5.07 & 43.16 && 1.0 & 3.41 & 23.13 \\ Exhaustive ({\bf EX}) && 1.0 & 1.0 & 1.0 && 1.0 & 1.0 & 1.0 && 1.0 & 1.0 & 1.0 \\ \textsf{DQ}\xspace && 1.0 & 1.32 & 3.11 && 1.0 & 1.68 & 11.64 && 1.0 & 1.91 & 13.14 \\ \bottomrule \end{tabular} \vspace{0.25em} \caption{\small{\textsf{DQ}\xspace is robust and competitive under all three cost models (\secref{subsec:standalone}). Plan costs are relative to optimal plans produced by exhaustive enumeration, i.e., $cost_{algo}/cost_{\textbf{EX}}$. 
Statistics are calculated across the entire Join Order Benchmark.} \label{table:standalone-combined}} \vspace{-0.6cm} \end{table*} We have described how \textsf{DQ}\xspace learns from sampling the cost model native to a query optimizer. However, it is well-known that a cost model (costs) may fail to correlate with reality (runtimes), due to poor cardinality estimates or unrealistic rules used in estimation. To correct these errors, the database community has seen proposals for leveraging feedback from execution~\citep{chaudhuri2008pay, markl2003leo}. We can perform an analogous operation on learned Q-functions. Readers might be familiar with the concept of fine-tuning in the deep learning literature~\citep{yosinski2014transferable}, where a network is trained on one dataset and ``transferred'' to another with minimal re-training. \textsf{DQ}\xspace can optionally apply this technique to re-train itself on real execution runtimes to correlate better with the operating environment. \subsection{Fine-tuning \textsf{DQ}\xspace} Fine-tuning \textsf{DQ}\xspace consists of two steps: pre-training as usual and re-training. First, \textsf{DQ}\xspace is pre-trained to convergence on samples from the optimizer's cost model; these are inexpensive to collect compared to real execution. Next, the weights of the first two layers of the neural network are frozen, and the output layer's weights are re-initialized randomly. Re-training then proceeds on samples of real execution runtimes, which changes only the output layer's weights. Intuitively, the process can be thought of as first using the cost model to learn relevant features about the general structure of subplans (e.g., ``which relations are generally beneficial to join?''). The re-trained output layer then projects the effect of these features onto real runtimes. Due to its inexpensive nature, partial re-training is a common strategy applied in many machine learning applications.
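As a concrete illustration, the freeze-and-re-train procedure can be sketched with a tiny NumPy regressor. This is a simplified sketch, not \textsf{DQ}\xspace's actual implementation: it uses a single hidden layer for brevity, and the layer sizes, learning rate, and synthetic cost/runtime labels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(net, X):
    h = np.maximum(X @ net["W1"] + net["b1"], 0.0)  # hidden ReLU features
    return h, h @ net["w2"] + net["b2"]             # scalar cost prediction

def sgd_step(net, X, y, lr=0.02, train_hidden=True):
    h, pred = forward(net, X)
    err = (pred - y) / len(y)                       # gradient of 0.5 * MSE
    if train_hidden:                                # frozen during fine-tuning
        dh = np.outer(err, net["w2"]) * (h > 0)
        net["W1"] -= lr * X.T @ dh
        net["b1"] -= lr * dh.sum(axis=0)
    net["w2"] -= lr * h.T @ err
    net["b2"] -= lr * err.sum()

net = {"W1": rng.normal(0, 0.5, (8, 16)), "b1": np.zeros(16),
       "w2": rng.normal(0, 0.5, 16), "b2": 0.0}

# Step 1: pre-train on cheap cost-model samples (synthetic stand-ins here).
X = rng.normal(size=(256, 8))
cost = X @ rng.normal(size=8)
for _ in range(1000):
    sgd_step(net, X, cost, train_hidden=True)

# Step 2: freeze the feature layer, re-initialize the output layer, and
# re-train only the output weights on (expensive) real runtimes.
runtime = 2.0 * cost + rng.normal(0, 0.1, 256)      # "reality" differs from the model
net["w2"], net["b2"] = rng.normal(0, 0.5, 16), 0.0
for _ in range(1000):
    sgd_step(net, X, runtime, train_hidden=False)
```

The pre-trained hidden layer supplies the high-level features; the re-trained linear output maps them onto observed runtimes.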
\subsection{Collecting Execution Data} For fine-tuning, we collect a list of real-execution data, \texttt{(Graph, Join, Graph', OpTime)}, where instead of the cost of the join, the real runtime attributed to the particular join operator is recorded. Per-operator runtimes can be collected by instrumenting the underlying system, or by using the system's native analysis functionality (e.g., \textsf{EXPLAIN ANALYZE} in Postgres). \section{A General Framework: Related Work} The formulation of join optimization as a search over sequences of graph contractions allows us to pose many common algorithms in a single unified framework. A \emph{greedy} solution to this problem is to optimize each $c_i$ independently. The algorithm proceeds as follows: (1) start with the query graph, (2) find the lowest cost contraction, (3) update the query graph and repeat. This greedy algorithm has a computational complexity of $O(|V|^3)$, and is described in \cite{d}. The greedy algorithm, of course, does not consider how local decisions might affect future costs. To find a globally optimal solution, one must consider the long-term value of a decision. Classical sequential decision making theory formalizes this concept with the characterization of the cost-to-go function. \subsection{Existing Enumeration Algorithms} This Q-function formulation gives us a way to describe many different join enumeration methods, heuristics, and robustness techniques in the same mathematical framework. We can think of it as an estimation problem where the optimizer first has to construct an estimate of the Q-function, $\hat{Q} \approx Q$, whether through enumeration, trial execution, heuristics, or all of the above. Then, there is a selection phase where $\hat{Q}$ is realized into a join plan. In some algorithms, these two phases are not explicit and happen simultaneously, but conceptually this is the process for optimal join ordering.
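The greedy contraction procedure described above can be sketched concretely. The relation names, cardinalities, and the product-of-cardinalities cost function below are illustrative assumptions; any local cost model can be substituted.

```python
import itertools
from math import prod

def greedy_join_order(relations, edges, cost):
    """Repeatedly contract the cheapest pair of connected vertices.

    relations: iterable of base relation names
    edges:     set of frozensets {a, b} -- join predicates between base relations
    cost:      function (left_vertex, right_vertex) -> local contraction cost
    """
    graph = {frozenset({r}) for r in relations}
    plan = []
    while len(graph) > 1:
        # Candidate contractions: vertex pairs connected by some join edge.
        cands = [(l, r) for l, r in itertools.combinations(graph, 2)
                 if any(frozenset({a, b}) in edges for a in l for b in r)]
        if not cands:  # disconnected graph: fall back to Cartesian products
            cands = list(itertools.combinations(graph, 2))
        l, r = min(cands, key=lambda p: cost(*p))
        graph = (graph - {l, r}) | {l | r}  # contract the chosen pair
        plan.append((sorted(l), sorted(r)))
    return plan

# Toy query graph A - B - C with assumed per-relation cardinalities.
card = {"A": 100, "B": 10, "C": 1000}
cost = lambda l, r: prod(card[x] for x in l | r)  # est. join output size
plan = greedy_join_order(card, {frozenset("AB"), frozenset("BC")}, cost)
```

On this toy graph, the greedy rule joins $A$ and $B$ first (local cost $100 \cdot 10$) even though that choice may not be globally optimal, which is exactly the myopia discussed above.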
Many common join optimization algorithms can be interpreted as manipulating different parts of the Q-function for either more efficient/robust estimation or more efficient optimization. \vspace{0.25em} \noindent \textbf{Greedy Join Enumeration: } A greedy join enumeration strategy can be thought of as using $\hat{Q} = J$ as an approximation for the Q-function. \vspace{0.25em} \noindent \textbf{IK-KBZ: } For acyclic query graphs (common in star schemas), a polynomial-time enumeration algorithm called IK-KBZ was proposed~\cite{?}. The informal insight is that for chain-structured query graphs and linear cost functions, finding the optimal join plan reduces to sorting the chain by ``rank'' (how much it increases or reduces the size of the input relation in a left join). This basic algorithm can be recursively applied to tree-structured query graphs, where branches are converted into chains when possible and then merged. One can interpret this algorithm as imposing restrictions on the structure of the Q-function. First, this procedure will only produce left-deep plans. This is equivalent to saying that for every contraction $c(u,v)$ with $t>1$, $v$ must be a single relation; otherwise $\hat{Q}(G,c) = \infty$. This restriction means that $\hat{Q}(G,c)$ is linear in the cardinality of $v$, independent of what relations are on the left. Thus, the Q-function in this class of problems essentially ranks all single relations by how much they increase the cardinality of a left-deep chain. \vspace{0.25em} \noindent \textbf{Cost-Space Linearization: } Many of the ideas in IK-KBZ are useful as heuristics even if the assumptions are not satisfied~\cite{?}. This can be thought of as approximating the true Q-function with a $\hat{Q}$ that is easier to construct. \vspace{0.25em} \noindent \textbf{System R: } Similar to IK-KBZ, but applicable to all queries, the classic System R optimizer restricts the plan space to left-deep plans and avoids Cartesian products.
As before, one can think of this as $\hat{Q}(G, c) = \infty$ for any contraction that creates a structure that is not a chain. Similarly, avoiding Cartesian products means that any contraction that is not along an edge is assumed to have $\hat{Q}(G,c) = \infty$. However, the System R optimizer exactly calculates the Q-function for all other plans. \vspace{0.25em} \noindent \textbf{QuickPick and other Randomized Algorithms: } Random sampling based join enumeration can also be expressed in this framework: join plans are initially sampled with a $\hat{Q}$ that is random. Then, the true cost of each sample is evaluated and the best is selected. \vspace{0.25em} \noindent \textbf{Summary: } We tend to think about join enumeration algorithms as combinatorial. However, thinking about the Q-function gives us a functional perspective on the problem: it is fundamentally a problem of data collection (through enumeration) and function approximation (to construct the optimal sequence). \subsection{Inaccurate Cost Models} All of the previously presented algorithms assume that the optimizer's internal cost model is accurate. In practice, even with exhaustive enumeration, $\hat{Q}$ always approximates a true $Q^*$ that represents the real query execution performance. Several techniques have been proposed to ``correct'' the cost model based on feedback from execution~\cite{?}. Similarly, there is also a well-studied literature on robust query optimization~\cite{?} to avoid decisions that are sensitive to poor cost estimates. These approaches require careful modeling of the sources of uncertainty and sometimes only work for certain cost models (e.g., Least-Expected Cost optimization works best with linearized costs). Traditionally, the community has divorced the issues of enumeration and cost modeling. This paper argues that they are fundamentally forms of Q-function approximation.
Enumeration strategies place restrictions on the type of Q-function one is allowed to construct, adaptive techniques leverage feedback to correct systematic errors in the Q-function, and robust techniques try to account for uncertainty in the estimates. \section{Introduction}\label{intro}\sloppy Join optimization has been studied for more than four decades~\citep{selinger1979access} and continues to be an active area of research~\citep{trummer2017solving,neumann2018adaptive,marcus2018deep}. The problem's combinatorial complexity leads to the ubiquitous use of \emph{heuristics}. For example, classical System R-style dynamic programs often restrict their search space to certain shapes (e.g., ``left-deep'' plans). Query optimizers sometimes apply further heuristics to large join queries using genetic~\citep{postgres-genetic} or randomized~\citep{neumann2018adaptive} algorithms. In edge cases, these heuristics can break down (by definition), which results in poor plans~\citep{leis2015good}. In light of recent advances in machine learning, a new trend in database research explores replacing programmed heuristics with learned ones~\citep{marcus2018towards,kipf2018learned, ortiz2018learning,marcus2018deep, btree, kraska2018case, mitzenmacher2018model, ma2018query}. Inspired by these results, this paper explores the natural question of synthesizing dataset-specific join search strategies using learning. Assuming a given cost model and plan space, can we optimize the search over all possible join plans for a particular dataset? The hope is to learn tailored search strategies from the outcomes of previous planning instances that dramatically reduce search time for future planning. Our key insight is that join ordering has a deep algorithmic connection with Reinforcement Learning (RL)~\citep{sutton1998reinforcement}. Join ordering's sequential structure is the same problem structure that underpins RL. 
We exploit this algorithmic connection to embed RL deeply into a traditional query optimizer; anywhere an enumeration algorithm is used, a policy learned from an RL algorithm can just as easily be applied. This insight enables us to achieve two key benefits. First, we can seamlessly integrate our solution into many optimizers with the classical System R architecture. Second, we exploit the nested structure of the problem to dramatically reduce the training cost, as compared to previous proposals for a ``learning optimizer''. To better understand the connection with RL, consider the classical ``bottom-up'' dynamic programming solution to join ordering. The principle of optimality leads to an algorithm that incrementally builds a plan from optimal subplans of size two, size three, and so on. Enumerated subplans are \emph{memoized} in a lookup table, which is consulted to construct a sequence of 1-step optimal decisions. Unfortunately, the space and time complexities of exact memoization can be prohibitive. Q-learning, an RL algorithm~\citep{sutton1998reinforcement}, relaxes the requirement of exact memoization. Instead, it formulates optimal planning as a prediction problem: given the costs of previously enumerated subplans, which 1-step decision is most likely optimal? RL views the classic dynamic programming lookup table as a model---a data structure that \emph{summarizes} enumerated subplans and predicts the value of the next decision. In concrete terms, Q-learning sets up a regression from the decision to join a particular pair of relations to the observed benefit of making that join on past data (i.e., impact on the final cost of the entire query plan). To validate this insight, we built an RL-based optimizer \textsf{DQ}\xspace that optimizes select-project-join blocks and performs join ordering as well as physical operator selection. \textsf{DQ}\xspace observes the planning results of previously executed queries and trains an RL model to improve future search. 
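As a toy illustration of this viewpoint, a memoization table from one planning run can be turned directly into a regression dataset. The relations, costs, and linear 1-hot featurization below are invented for exposition and are far simpler than \textsf{DQ}\xspace's actual model.

```python
import numpy as np

# Hypothetical memo entries from one planning run: the set of relations
# joined so far, and the best observed cost-to-go for that subplan.
memo = {
    frozenset({"A", "B"}): 120.0,
    frozenset({"B", "C"}): 300.0,
    frozenset({"A", "C"}): 410.0,
    frozenset({"A", "B", "C"}): 450.0,
}

rels = ["A", "B", "C"]
# Featurize each subplan as a 1-hot vector over participating relations ...
X = np.array([[1.0 if r in s else 0.0 for r in rels] for s in memo])
y = np.array(list(memo.values()))
# ... and fit a model that *predicts* cost-to-go instead of memoizing it.
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda s: sum(theta[i] for i, r in enumerate(rels) if r in s)
```

Q-learning generalizes this idea: the regression target is the observed benefit of a 1-step join decision, and the fitted model replaces the exact lookup table.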
We implement three versions of \textsf{DQ}\xspace to illustrate the ease of integration into existing DBMSes: (1) A standalone version built on top of Apache Calcite~\citep{calcite}, (2) a version integrated with PostgreSQL~\citep{postgres}, and (3) a version integrated with SparkSQL~\citep{armbrust2015spark}. Deploying \textsf{DQ}\xspace into existing production-grade systems (2) and (3) each required changes of less than 300 lines of code, and training data could be collected through the normal operation of the DBMS with minimal overhead. One might imagine that training such a model is extremely data-intensive. While RL algorithms are indeed notoriously data-inefficient (typical RL settings, such as the Atari games~\citep{mnih2013playing}, require hundreds of thousands of training examples), we can exploit the optimal subplan structure specific to join optimization to collect an abundance of high-quality training data. From a single query that passes through a native optimizer, not only are the final plan and its total cost collected as a training example, but so are all of its subplans and, recursively, \textit{everything inside the exact memoization table}. For instance, planning an 18-relation join query in TPC-DS (Q64) through a bushy optimizer can yield up to 600,000 training data points thanks to \textsf{DQ}\xspace's Q-learning formulation. We thoroughly study this approach on two workloads: Join Order Benchmark~\citep{leis2015good} and TPC-DS~\citep{tpcds}. \textsf{DQ}\xspace sees significant speedups in planning times (up to $>200\times$) relative to dynamic programming enumeration while essentially matching the execution times of optimal plans computed by the native enumeration-based optimizers. These planning speedups allow for broadening the plan space to include bushy plans and Cartesian products. In many cases, they lead to improved query execution times as well.
\textsf{DQ}\xspace is particularly useful under non-linear cost models such as memory limits or materialization. On two simulated cost models with significant non-linearities, \textsf{DQ}\xspace improves on the plan quality of the next best heuristic over a set of 6 baselines by $1.7\times$ and $3\times$. Thus, we show \textsf{DQ}\xspace approaches the optimization time efficiency of programmed heuristics \emph{and} the plan quality of optimal enumeration. We are enthusiastic about the general trend of integrating learning techniques into database systems---not simply by black-box application of AI models to improve heuristics, but by the deep integration of algorithmic principles that span the two fields. Such an integration can facilitate new DBMS architectures that take advantage of all of the benefits of modern AI: learn from experience, adapt to new scenarios, and hedge against uncertainty. Our empirical results with \textsf{DQ}\xspace span across multiple systems, multiple cost models, and workloads. We show the benefits (and current limitations) of an RL approach to join ordering and physical operator selection. Understanding the relationships between RL and classical methods allowed us to achieve these results in a data-efficient way. We hope that \textsf{DQ}\xspace represents a step towards a future learning query optimizer. \section{Introduction}\label{intro}\sloppy Join order optimization is a core component of almost all query optimizers. While the first approaches were proposed more than 40 years ago~\citep{selinger1979access}, the problem continues to be an active area of research~\citep{trummer2017solving,neumann2018adaptive,marcus2018deep}. One reason for the perennial interest in the problem is its combinatorial complexity, which leads to the use of heuristic solutions in practice (e.g., the ubiquitous left-deep heuristic). These heuristics can have weak spots leading to very suboptimal plans. 
Database administrators often hand-optimize these corner cases by creating views to force certain query plans and/or tuning optimizer parameters. In light of the recent advances in machine learning and AI, replacing programmed heuristics with learned ones is a new trend in join optimization research~\citep{marcus2018towards,kipf2018learned, ortiz2018learning,marcus2018deep}. Likewise, this paper explores the problem of synthesizing dataset-specific join search strategies using machine learning. We assume a fixed cost model and plan space, and focus on optimal ways to search over the space of possible join plans. The main idea is to learn from the results of previous planning instances and avoid search branches that were previously observed to be suboptimal. Join order optimization is particularly interesting from a machine learning perspective because it has a well-known sequential structure that even classical join optimization algorithms exploit (e.g., Selinger's dynamic program~\citep{selinger1979access}). This is exactly the same problem structure, called a Markov Decision Process, that underpins stochastic sequential decision making problems in reinforcement learning (RL)~\citep{sutton1998reinforcement}. The main insight of this paper is that classical dynamic programming solutions to join optimization are, in a sense, degenerate RL algorithms. To understand the connection, let us consider the classical ``bottom-up'' dynamic programming solution to join ordering. The principle of optimality leads to an algorithm that incrementally builds a plan from optimal subplans of size two, size three, and so on. The algorithm re-uses previously enumerated subplans through exact memoization in a lookup table, and uses this table to construct a sequence of 1-step optimal decisions. RL views this table as a model: the algorithm has access to a data structure that summarizes the results of previous enumerations and makes a future decision using this data structure.
So instead of exact memoization, a family of RL algorithms called Q-learning poses the dynamic programming steps as a generalized prediction problem: given the costs of previously enumerated subplans, which 1-step decision is most likely optimal? Concretely, this corresponds to setting up a regression problem between the decision to join a particular pair of relations and the observed benefit of making that decision in past data (impact on the final cost of the entire query plan). This formulation is particularly compelling if a single model can capture predictions for an entire workload, thus sharing learned experiences across planning instances. Casting join ordering as an RL problem sheds light on an entire spectrum of approaches spanning from pure classical enumeration to pure learning and different ways of leveraging past planning experience. This paper demonstrates that this connection is not simply theoretical, and can actually lead to orders of magnitude faster planning in operational systems today. We present an RL-based optimizer, called \textsf{DQ}\xspace, that does join ordering and physical operator selection. In the initial implementation, we assume a read-only database and a fixed schema. \textsf{DQ}\xspace observes the planning results of previously executed queries and trains an RL model to improve future search instances. While RL algorithms are notoriously inefficient in terms of training data, \textsf{DQ}\xspace exploits the optimal subplan structure present in most join optimizers. From a single query, not only are the final plan and its total cost a training example, but so are all of its subplans and their associated costs. A relatively small number of queries generates a vast amount of examples from which \textsf{DQ}\xspace can learn a search strategy.
We implement three versions of \textsf{DQ}\xspace to illustrate the ease of integration into existing DBMSes: (1) A version built on top of Apache Calcite~\citep{calcite}, (2) a version integrated into the PostgreSQL query optimizer, and (3) a version integrated into Spark SQL. The two deployments into existing systems (2) and (3) each required less than 300 LoC, and training data could be collected through the normal operation of the DBMS. We present experimental results on two benchmark datasets and workloads: Join Order Benchmark (JOB)~\citep{leis2015good} and TPC-DS~\citep{tpcds}. In our experiments, we are restricted to optimizing foreign-key joins due to the schemas of the experimental workloads, and discuss at the end how \textsf{DQ}\xspace could be applied to more general join queries. We observe significant speedups in planning times (up to $>200\times$) while essentially matching the execution times of optimal plans found through their enumeration-based native optimizers. We also show that these speedups can facilitate broadening the plan space (e.g., to bushy plans), leading to improved query execution times as well. \textsf{DQ}\xspace is particularly useful over cost models with significant non-linearities (such as memory limits or materialization). On two simulated cost models with significant non-linearities, \textsf{DQ}\xspace improves on the next best heuristic over a set of 6 baselines by $1.7\times$ and $3\times$. This paper does not merely argue for an application of RL to join optimization; rather, its insight is that classical join optimization is deeply linked to the Q-learning family of RL algorithms. This link allows RL to be applied efficiently to join order optimization problems.
In particular, experiments suggest that \textsf{DQ}\xspace does not suffer from the severe cold-start problem endemic to existing learning-based join optimization work, where the authors report \textbf{tens of thousands} of training queries for each benchmark dataset~\citep{kipf2018learned, ortiz2018learning,marcus2018deep}; in contrast, \textsf{DQ}\xspace requires hundreds of training queries in our experiments. This is fundamentally because the Q-learning algorithm leverages optimal substructure during training, in contrast to other recent work such as~\citep{marcus2018deep}. We believe that addressing such data efficiency problems is the first step towards an ultimate vision of a DBMS that learns to optimize queries on its own. \section{Realizing the Q-learning Model} Next, we present the mechanics of actually training and operating a Q-learning model. \subsection{Featurizing the Join Decision} \label{subsec:featurization} Before getting into the details, we briefly motivate how to think about featurization in a problem like this. The features should be sufficiently rich that they capture all relevant information needed to predict the future cumulative cost of a join decision. This requires knowing what the overall query is requesting, the tables on the left side of the proposed join, and the tables on the right side of the proposed join. It also requires knowing how single-table predicates affect cardinalities on either side of the join. \vspace{0.5em} \noindent \textbf{Participating Relations: } The overall intuition is to use each column name as a feature, because it identifies the distribution of that column. The first step is to construct a set of features to represent which attributes are participating in the query and in the particular join. Let $A$ be the set of all attributes in the database (e.g., $\{Emp.id, Pos.rank,...,Sal.code,Sal.amount\}$).
Each relation $rel$ (including intermediate join results) has a set of \emph{visible attributes}, $A_{rel} \subseteq A$, the attributes present in the output. Similarly, every query graph $G$ can be represented by its visible attributes $A_G \subseteq A$. Each join is a tuple of two relations $(L,R)$ and we can get their visible attributes $A_L$ and $A_R$. Each of the attribute sets $A_G, A_L, A_R$ can then be represented with a \emph{binary 1-hot encoding}: a value $1$ in a slot indicates that particular attribute is present, otherwise $0$ represents its absence. Using $\oplus$ to denote concatenation, we obtain the query graph features, $f_{G} = A_{G}$, and the join decision features, $f_{c} = A_{L} \oplus A_{R}$; finally, the overall featurization for a particular $(G,c)$ tuple is simply $f_{G} \oplus f_c$. Figure \ref{fig:feat} illustrates the featurization of our example query. \vspace{0.5em} \noindent \textbf{Selections: } Selections can change said distribution, i.e., \textsf{(col, sel-pred)} is different than \textsf{(col, TRUE)}. To handle single-table predicates in the query, we have to tweak the feature representation. As with most classical optimizers, we assume that the optimizer eagerly applies selections and projections to each relation. Next, we leverage the table statistics present in most RDBMSs. For each selection $\sigma$ in a query we can obtain the selectivity $\delta_{\sigma}$, which estimates the fraction of tuples present after applying the selection.\footnote{We consider selectivity estimation out of scope for this paper. See discussion in \secref{sec:optimizer-arch} and \secref{sec:relatedwork}.} To account for selections in featurization, we simply scale the slot in $f_G$ that the attribute of $\sigma$ corresponds to by its selectivity $\delta_{\sigma}$. For instance, if selection $\text{Emp.id} > 200$ is estimated to have a selectivity of $0.2$, then the $\text{Emp.id}$ slot in $f_G$ would be changed to $0.2$.
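A minimal sketch of this encoding might look as follows. The attribute list is made up for illustration; a real deployment would enumerate the schema's attributes.

```python
import numpy as np

ATTRS = ["Emp.id", "Emp.name", "Pos.rank", "Sal.code", "Sal.amount"]
IDX = {a: i for i, a in enumerate(ATTRS)}

def one_hot(attrs, selectivities=None):
    """1-hot over the global attribute list; selected slots scaled by selectivity."""
    v = np.zeros(len(ATTRS))
    for a in attrs:
        v[IDX[a]] = (selectivities or {}).get(a, 1.0)
    return v

def featurize(query_attrs, left_attrs, right_attrs, selectivities=None):
    f_G = one_hot(query_attrs, selectivities)   # visible attrs of G (scaled)
    f_c = np.concatenate([one_hot(left_attrs), one_hot(right_attrs)])
    return np.concatenate([f_G, f_c])           # f_G (+) f_L (+) f_R

# Selection Emp.id > 200 with estimated selectivity 0.2 scales that slot in f_G.
f = featurize(ATTRS, ["Emp.id", "Emp.name"], ["Pos.rank"], {"Emp.id": 0.2})
```

Note that, as described above, the selectivity scaling is applied only to the query graph features $f_G$; the join decision features remain binary.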
Figure \ref{fig:feat-sel-scaling} pictorially illustrates this scaling. \vspace{0.5em} \noindent \textbf{Physical Operators: } The next piece is to featurize the choice of physical operator. This is straightforward: we add another one-hot vector that indicates the type of join used, drawn from a fixed set of implementations (Figure~\ref{fig:feat-physical-op}). \vspace{0.5em} \noindent \textbf{Extensibility: } In this paper, we focus only on the basic form of featurization described above and study foreign-key equality joins.\footnote{This is due to our evaluation workloads containing only such joins. \secref{sec:extensions} discusses how \textsf{DQ}\xspace could be applied to more general join types.} An ablation study as part of our evaluation (Table~\ref{table:feat-ablation}) shows that the pieces we settled on all contribute to good performance. That said, there is no architectural limitation in \textsf{DQ}\xspace that prevents it from utilizing other features. Any property believed to be relevant to join cost prediction can be added to our featurization scheme. For example, we can add an additional binary vector $f_{ind}$ to indicate which attributes have indexes built. Likewise, physical properties like sort orders can be handled by indicating which attributes are sorted in an operator's output. Hardware environment variables (e.g., available memory) can be added as scalars if deemed important factors in determining the final best plan. Lastly, more complex join conditions such as inequality conditions can also be handled (\secref{sec:extensions}). \subsection{Model Training} \textsf{DQ}\xspace uses a multi-layer perceptron (MLP) neural network to represent the Q-function. It takes as input the final featurization for a $(G,c)$ pair, $f_G \oplus f_c$. Empirically, we found that a two-layer MLP offered the best performance under a modest training time constraint ($< 10$ minutes). The model is trained with a standard stochastic gradient descent (SGD) algorithm.
\subsection{Execution after Training} \label{sec:inference} After training, we obtain a parameterized estimate of the Q-function, $Q_\theta(f_G,f_c)$. For execution, we simply return to the standard greedy algorithm, but instead of using the local costs, we use the learned Q-function: (1) start with the query graph, (2) featurize each join, (3) find the join with the lowest \emph{estimated Q-value} (i.e., output from the neural net), (4) update the query graph and repeat. This algorithm has the time complexity of greedy enumeration, except that where greedy evaluates the cost model at each iteration, our method evaluates a neural network. One pleasant consequence is that \textsf{DQ}\xspace exploits the abundant vectorization opportunities in numerical computation. In each iteration, instead of invoking the neural net sequentially on each join's feature vector, \textsf{DQ}\xspace \emph{batches} all candidate joins (of this iteration) together, and invokes the neural net once on the batch. Modern CPUs, GPUs, and specialized accelerators (e.g., TPUs~\citep{jouppi2017datacenter}) all offer optimized instructions for such single-instruction multiple-data (SIMD) workloads. The batching optimization amortizes each invocation's fixed overheads and has the most impact on large joins. \section{Reduction factor learning} Classical approaches to reduction factor estimation include (1) histograms and heuristics-based analytical formulae, and (2) applying the predicate under estimation to sampled data, among others. In this section we explore the use of learning in reduction factor estimation. We train a neural network to learn the underlying data distributions of a pre-generated database, and evaluate the network on unseen, randomly generated selections that query the same database.
To gather training data, we randomly generate a database of several relations, as well as random queries each consisting of one predicate of the form ``R.attr $\langle$op$\rangle$ literal''. For numeric columns, the operators are $\{=, \neq, >, \geq, <, \leq\}$, whereas for string columns we restrict them to equality and inequality. Each attribute's values are drawn from a weighted Gaussian. To featurize each selection, we similarly use 1-hot encodings of the participating attribute and of the operator. Numeric literals are then directly included in the feature vector, whereas for strings, we embed ``hash(string\_literal) \% B'' where $B$ is a parameter controlling the number of hash buckets. The labels are the true reduction factors, obtained by executing the queries on the generated database. A fully-connected neural network is used\footnote{Two hidden layers, 256 units each, with ReLU activation.}, which is trained by stochastic gradient descent. {\bf Evaluation.} We compare the neural net to a simple linear regressor. We train on a database of 5 relations with a total of 21 attributes; the train and test sets include 1000 and 2000 queries, respectively. We found that the fitted linear regressor has a mean squared error of $\approx 84$, while the neural network is able to achieve an L1 loss of $< 1$, approximating the true reduction factors with minimal error. \section{Related Work} \label{sec:relatedwork} Application of machine learning in database internals is still the subject of significant debate and will continue to be a contentious question for years to come~\citep{btree, kraska2018case, mitzenmacher2018model, ma2018query}. An important question is which problems are amenable to machine learning solutions. We believe that query optimization is one such sub-area. The problems considered are generally hard, and orders of magnitude of performance are at stake.
In this setting, poor learning solutions will lead to slow but not incorrect execution, so correctness is not a concern. \vspace{0.5em}\noindent \textbf{Cost Function Learning} We are certainly not the first to consider ``learning'' in the query optimizer and there are a number of alternative architectures that one may consider. The precursors to this work are attempts to correct query optimizers through execution feedback. One of the seminal works in this area is the LEO optimizer~\citep{markl2003leo}. This optimizer uses feedback from the execution of queries to correct inaccuracies in its cost model. The underlying cost model is based on histograms. The basic idea inspired several other important works such as~\citep{chaudhuri2008pay}. The sentiment in this research still holds true today; when Leis et al. extensively evaluated the efficacy of different query optimization strategies they noted that feedback and cost estimation errors are still challenges in query optimizers~\citep{leis2015good}. A natural first place to include machine learning would be what we call \emph{Cost Function Learning}, where statistical learning techniques are used to correct or replace existing cost models. This is very related to the problem of performance estimation of queries~\citep{akdere2012learning, wu2013predicting, wu2013towards}. We actually investigated this by training a neural network to predict the selectivity of a single relation predicate. Results were successful, albeit very expensive from a data perspective. To estimate selectivity on an attribute with 10k distinct values, the training set had to include 1000 queries. This architecture suffers from the problem of \emph{featurization of literals}; the results are heavily dependent on learning structure in literal values from the database that are not always straightforward to featurize. This can be especially challenging for strings or other non-numerical data types. 
A recent workshop paper does show some promising results in using deep RL to construct a good feature representation of subqueries, but it still requires $>$ 10k queries to train~\citep{ortiz2018learning}. \vspace{0.5em}\noindent \textbf{Learning in Query Optimization} Recently, there have been several exciting proposals for putting learning inside a query optimizer. Ortiz et al.~\citep{ortiz2018learning} apply deep RL to learn a representation of queries, which can then be used in downstream query optimization tasks. Liu et al.~\citep{liu2015cardinality} and Kipf et al.~\citep{kipf2018learned} use DNNs to learn cardinality estimates. Closer to our work is Marcus et al.'s proposal of a deep RL-based join optimizer, ReJOIN~\citep{marcus2018deep}, which offered a preliminary view of the potential for deep RL in this context. The early results reported in~\citep{marcus2018deep} top out at a 20\% improvement in plan execution time over Postgres (compared to our 3x), and as of that paper they had only evaluated on 10 of the 113 JOB queries that we study here. DQ qualitatively goes beyond that work by offering an extensible featurization scheme supporting physical join selection. More fundamentally, DQ integrates the dynamic programming of Q-learning into that of a standard query optimizer, which allows us to use off-policy learning. Due to its use of on-policy policy-gradient methods, \citep{marcus2018deep} requires about 8,000 training queries to reach native Postgres' cost on the 10 JOB queries. DQ exploits the optimal substructure of the problem and uses off-policy Q-learning to increase data-efficiency by two orders of magnitude: 80 training queries suffice to outperform Postgres' real execution runtimes on the entire JOB benchmark. 
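To make the on-policy/off-policy distinction concrete, the following is a minimal tabular sketch of the off-policy Q-learning update, framed as cost minimization as in join ordering; the states and actions are toy placeholders of our own, not DQ's actual featurization or training loop.

```python
def q_update(Q, state, action, cost, next_state, next_actions, alpha=1.0, gamma=1.0):
    """One off-policy Q-learning step for a cost-minimization problem.

    The bootstrap target uses the *best* (minimum-cost) next action,
    regardless of which action the behavior policy actually took --
    this is what makes the update off-policy and lets logged
    experience be reused.
    """
    best_next = min((Q.get((next_state, a), 0.0) for a in next_actions), default=0.0)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (cost + gamma * best_next - old)

# Replaying a logged two-step episode (joining three toy relations) twice
# lets the terminal cost propagate back to the initial state.
Q = {}
for _ in range(2):
    q_update(Q, "{A,B,C}", "join(A,B)", 3.0, "{AB,C}", ["join(AB,C)"])
    q_update(Q, "{AB,C}", "join(AB,C)", 5.0, "done", [])
print(Q[("{A,B,C}", "join(A,B)")])  # 8.0: the cost-to-go of the full plan
```

Because the update bootstraps from stored Q-values rather than from the policy that generated the data, the same logged plans can be replayed many times, which is the data-efficiency advantage alluded to above.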
\vspace{0.5em}\noindent \textbf{Adaptive Query Optimization} Adaptive query processing~\citep{avnur2000eddies,deshpande2007adaptive}, as well as the related techniques to re-optimize queries during execution~\citep{markl2004robust, babu2005proactive}, is another line of work that we think is relevant to the discussion. Reinforcement learning studies sequential problems, and adaptive query optimization is a sequential decision problem over tuples rather than subplans. We focus our study on optimization in fixed databases, and the adaptivity that \textsf{DQ}\xspace offers is at a workload level. Continuously updating a neural network can be challenging for very fine-grained adaptivity, e.g., processing different tuples in different ways. \vspace{0.5em}\noindent \textbf{Robustness} There are a couple of branches of work that study robustness to different parameters in query optimization. In particular, the field of ``parametric query optimization''~\citep{hulgeri2002parametric,trummer2014multi} studies the optimization of piecewise linear cost models. Interestingly, \textsf{DQ}\xspace is agnostic to this structure. It learns a heuristic from data, identifying different regimes where different classes of plans work. We hope to continue experiments and attempt to interpret how \textsf{DQ}\xspace partitions the feature space into decisions. There is also a deep link between this work and least expected cost (LEC) query optimization~\citep{chu2002least}. Markov Decision Processes (the main abstraction in RL) are by definition stochastic and optimize the LEC objective. \vspace{0.5em}\noindent \textbf{Join Optimization At Scale} Scaling up join optimization has been an important problem for several decades, most recently addressed in~\citep{neumann2018adaptive}. At scale, several randomized approaches can be applied. 
There is a long history of randomized algorithms (e.g., the QuickPick algorithm~\citep{waas2000join}) and genetic algorithms~\citep{bennett1991genetic, steinbrunn1997heuristic}. These algorithms are pragmatic, and it is often the case that commercial optimizers will leverage such a method once the number of tables grows beyond a certain point. The challenge with these methods is that their efficacy is hard to judge. We found that QuickPick's performance often varied quite dramatically on the same query. Another heuristic approach is relaxation, i.e., solving the problem exactly under simplified assumptions. One straightforward approach is to simply consider greedy search avoiding Cartesian products~\citep{fegaras1998new}, which is also the premise of the IK-KBZ algorithms~\citep{ibaraki1984optimal,krishnamurthy1986optimization}. Similar linearization arguments were also made in recent work~\citep{trummer2017solving,neumann2018adaptive}. Existing heuristics do not handle all types of non-linearities well, and this is exactly the situation where learning can help. Interestingly enough, our proposed technique has an $O(n^3)$ runtime, which is similar to the \emph{linearizedDP} algorithm described in~\citep{neumann2018adaptive}. We hope to explore the very large join regime in the future, and an interesting direction is to compare \textsf{DQ}\xspace to recently proposed techniques like~\citep{neumann2018adaptive}.
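To illustrate the greedy, Cartesian-product-avoiding search mentioned above, here is a minimal sketch of our own, with toy cardinalities and a single fixed selectivity; it is not the actual algorithm of~\citep{fegaras1998new} or IK-KBZ, only the shared premise.

```python
def greedy_join_order(relations, sizes, joinable, selectivity=0.1):
    """Greedily merge the pair of subplans with the cheapest estimated join,
    skipping pairs with no join predicate (i.e., avoiding Cartesian products).
    Assumes the join graph is connected.

    relations: list of relation names
    sizes: dict mapping name -> estimated cardinality
    joinable: set of frozensets {r1, r2} sharing a join predicate
    """
    plans = {frozenset([r]): (float(sizes[r]), r) for r in relations}
    while len(plans) > 1:
        best = None
        for left in plans:
            for right in plans:
                if left == right:
                    continue
                if not any(frozenset([a, b]) in joinable
                           for a in left for b in right):
                    continue  # would be a Cartesian product; skip
                cost = plans[left][0] * plans[right][0] * selectivity
                if best is None or cost < best[0]:
                    best = (cost, left, right)
        cost, left, right = best
        _, lplan = plans.pop(left)
        _, rplan = plans.pop(right)
        plans[left | right] = (cost, "(%s JOIN %s)" % (lplan, rplan))
    return next(iter(plans.values()))

size, plan = greedy_join_order(
    ["A", "B", "C"], {"A": 10, "B": 100, "C": 1000},
    {frozenset(["A", "B"]), frozenset(["B", "C"])})
print(plan)  # the cheap A-B join is picked before C is brought in
```

The sketch makes the limitation visible: the locally cheapest merge is chosen at each step, so non-linear cost interactions between later joins are invisible to it, which is precisely where a learned value function can do better.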
\section{Introduction} \label{SI} The entropy of entanglement, famously elucidated in early debates on the nature of quantum mechanics\footnote{The phrase ``entropy of entanglement'' is modern terminology \cite{Srednicki:1993im}, but the physics was understood much earlier.}, has emerged more recently as a powerful unifying tool in quantum field theory. It can act as an order parameter for phase transitions \cite{Kitaev:2005dm} \cite{b} \cite{Amico:2007ag} \cite{Hertzberg:2010uv}, has provocative links with the black hole entropy formula \cite{tHooft:1984kcu} \cite{Solodukhin:2011gn}, has assisted in generalizing the $c$-theorem to higher dimensions \cite{Casini:2012ei} \cite{Casini:2017vbe}, and appears to play a role in the holographic emergence of spacetime geometry \cite{Ryu:2006bv} \cite{Lin:2014hva} \cite{Dong:2016eik} in the $AdS/CFT$ correspondence \cite{Maldacena:1997re} \cite{Witten:1998qj} \cite{Aharony:1999ti}. Despite the utility of the quantity, it remains notoriously difficult to compute. Most examples involve only free fields \cite{Casini:2009sr} or exploit conformal symmetry \cite{Holzhey:1994we} \cite{Calabrese:2009qy}. This is unfortunate, since some highly interesting applications involve the renormalization group flow of the entanglement entropy, as alluded to above. For a spacetime without boundary, the leading contribution to the entanglement entropy of a quantum field theory is the \emph{area law} \cite{Srednicki:1993im}. That is, if one considers the entropy as a function of the scale of the region, the dominant contribution is proportional to the surface area. If one introduces a spacetime boundary, it is possible to have an additional term which scales as the intersection of the surface area with the boundary (this may or may not be subleading depending on the geometry of the spacetime). The difference in entanglement entropy for different choices of boundary physics, $\Delta S$, will appear in this term. See Figure \ref{EntropyBoundary}. 
\begin{figure}[h!] \includegraphics[width=\textwidth]{figures/Regions3.png} \centering \caption{The entanglement entropy of Region A in a spacetime without boundary is proportional to its surface area, $S \sim A$. If one introduces a boundary with associated boundary physics, then the difference in entanglement entropy for different choices of boundary physics scales with the intersection of the region with the boundary, $\Delta S \sim \partial A = A \cap \partial \Omega$.} \label{EntropyBoundary} \end{figure} In this paper we will explore two simple examples in this vein which sidestep the difficulty with broken scale invariance mentioned above by imposing ``mixed'' boundary conditions\footnote{It is common in the $AdS/CFT$ literature to use this terminology, and we will use it here. However, traditionally ``mixed'' refers to imposing different boundary conditions at different \emph{locations} on the boundary, whereas we have in mind the linear mixture of boundary terms traditionally called ``Robin'' boundary conditions.} on a free scalar field. By ``mixed'' we mean a boundary condition which is some linear interpolation between two canonical conformally invariant boundary conditions, controlled by a parameter $f$. See Figure \ref{BulkBoundaryFig}. \begin{figure}[h!] \includegraphics[width=\textwidth]{figures/Pic6.png} \centering \caption{We consider bulk spacetimes with boundary, either half Minkowski space or Anti-de Sitter space. Both have boundaries which are Minkowski space of one lower dimension. Mixed boundary conditions break conformal symmetry of the boundary, where for the latter we are thinking holographically. We are interested in computing the entanglement entropy of a Rindler wedge whose horizon intersects the $z$ axis.} \label{BulkBoundaryFig} \end{figure} In particular, we will be interested in the cases where the bulk geometry is Minkowski space or Anti-de Sitter space. 
For Minkowski space we will include an artificial boundary at $z=0$, so we are working in half Minkowski space. Anti-de Sitter space already has a (conformal) boundary region. In both cases the bulk theory is free and the interesting (conformal symmetry breaking) physics is located at the boundary, which is a $d$-dimensional Minkowski space, where $D=d+1$ is the bulk spacetime dimension. The case of Anti-de Sitter space is especially interesting since the bulk space is holographically dual to a theory associated with the boundary. In either case one may implement the boundary condition via the addition of a boundary action, so one may think of the imposition of mixed boundary conditions as an insertion of a (relevant) boundary operator of dimension $\Delta=d-[f]$ ($ < D$). In the Minkowski case, one may think of this as a mass term localized on the boundary. This generates a renormalization group flow complete with ultraviolet and infrared fixed points, which are the conformally invariant theories. But because the field remains free, the physics is determined entirely by the Green's function, and so the entanglement entropy (of the Rindler wedge) is exactly computable with the usual methods augmented by some standard tools from asymptotic analysis. The main results are the expressions for the half Minkowski Rindler entropy (\ref{Result1}) and the dual conformal field theory Rindler entropy (\ref{Result2}). They will conveniently take the form (exactly in the Minkowski case, to leading order in Anti-de Sitter): \begin{equation} \Delta S^f= \beta(f \epsilon^{d-\Delta}) \Delta S^{\infty} \end{equation} where $\beta$ is some function which interpolates between $0$ and $1$ as $f$ interpolates between $0$ and $\infty$. See also the corresponding graphs in Figure \ref{MinkowskiDS} and Figure \ref{plotAdS}, which illustrate this behavior of the entropy as a function of the boundary coupling $f$. 
The expression (\ref{Result2b}) is also interesting as a distinct, but ultimately subleading, contribution (with the same behavior). The results are interesting for three reasons. First, it is useful to have exact results for the entanglement entropy of theories that are not conformal field theories and which capture the full renormalization group interpolation between ultraviolet and infrared fixed points (see \cite{Berthiere:2016ott} for a related example). We find that the interpolation is monotonic in the parameter $f$. In half Minkowski space, this extends the results of \cite{Berthiere:2016ott} to the case of $m^2=0$ where conformal symmetry is broken \emph{only} by boundary physics. In Anti-de Sitter space this extends the results of \cite{Miyagawa:2015sql} and \cite{Sugishita:2016iel} to finite $f$. Second, the results therefore illustrate the scaling behavior of the entanglement entropy, which can be compared with expectations based on the irreversibility of the renormalization group, as follows. As already mentioned, it has long been known that the dominant contribution to the entanglement entropy\footnote{As long as the theory in question is considered at zero temperature and is not a topological quantum field theory.} obeys an area law \cite{Srednicki:1993im}: \begin{equation} S_{EE} = \mu(g, \epsilon) \frac{A}{\epsilon^{D-2}} \end{equation} where $A$ is the \emph{surface area} of the region in question and $\mu$ is some dimensionless parameter which is a function of the cutoff scale $\epsilon$ and the coupling constants $g$. Note this is divergent. We may understand this term, and its divergence, as emerging from the short distance correlations across the boundary (e.g. in the two point function) which persist to arbitrarily short distances. 
For a conformal field theory in $D=2$ this $\mu(g,\epsilon)$ is just a constant proportional to the central charge, as was found in \cite{Holzhey:1994we}. Heuristically this makes sense, as the central charge is related to the number of degrees of freedom, which we might expect to scale with correlations across the boundary. Given the role of $c$ in establishing the irreversibility of the renormalization group \cite{Zamolodchikov:1986gt}, this is already a hint of the way entanglement entropy may serve as a probe of renormalization group flow. For a spacetime with boundary, one can have an additional term: \begin{equation} \gamma(g, \epsilon) \frac{\partial A}{\epsilon^{d-2}} \end{equation} where $\partial A$ is the area of the intersection of the entangling surface with the boundary. It may or may not be subleading depending on the bulk geometry. It is this term that concerns us. Other subleading terms are possible with or without a boundary. One may collect all terms and define: \begin{equation} \label{gammaterm} \tilde{\mu}(g, \epsilon, r)=\frac{S_{EE}'(r)}{(D-2)r^{(D-3)}} \end{equation} where $S_{EE}(r)$ is the entanglement entropy for a spherical region of radius $r$. 
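As a simple consistency check (ours, not drawn from the references), for a pure area law the definition (\ref{gammaterm}) reduces to the area law coefficient itself: for a sphere of radius $r$ one has $A=\Omega_{D-2}\,r^{D-2}$, so

```latex
S_{EE}(r)=\mu(g,\epsilon)\,\frac{\Omega_{D-2}\,r^{D-2}}{\epsilon^{D-2}}
\quad\Longrightarrow\quad
\tilde{\mu}=\frac{S_{EE}'(r)}{(D-2)\,r^{D-3}}
=\frac{\mu(g,\epsilon)\,\Omega_{D-2}}{\epsilon^{D-2}},
```

which is independent of $r$. Any $r$ dependence of $\tilde{\mu}$ therefore signals a departure from the pure area law, which is what makes it a useful probe of the flow between fixed points.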
It was shown in \cite{Casini:2017vbe} that Strong Subadditivity \cite{SSA}, along with Lorentz invariance and the ``Markov property'' \cite{Casini:2017roe} which applies to the vacuum of a quantum field theory, implies that (among other things) the renormalization group flow of the entanglement entropy must obey: \begin{equation} \label{SSA_Req} \Delta \tilde{\mu}_{IR} \le \Delta \tilde{\mu}_{UV} \end{equation} where the $\Delta$ here refers to subtracting the corresponding entanglement entropy of the pure (unperturbed) ultraviolet theory, and ``infrared'' means $r \to \infty$ while ``ultraviolet'' means $r \to 0$.\footnote{It is worth noting that neither side is necessarily positive definite.} Heuristically, the point is that this generalized area law coefficient $\tilde{\mu}$ is a quantity which depends on scale, interpolating between ultraviolet and infrared fixed points, and this behavior is constrained by the irreversibility of the renormalization group flow. Strictly speaking, for half Minkowski space the boundary breaks the bulk Lorentz invariance, so the result (\ref{SSA_Req}) does not apply. However, the monotonicity of our result is illustrative\footnote{The result of \cite{Casini:2018nym} was actually proven while this work was in the process of publication; our monotonicity was meant to be evidence for the generalization, which in fact has now been proven!} of the $g$-theorem \cite{Casini:2016fgb} found for boundary conformal field theories in $D=2$, which was generalized in \cite{Casini:2018nym} and which applies specifically to the second term (\ref{gammaterm}). This confirms the conventional wisdom that results such as (\ref{SSA_Req}) reflect, more fundamentally, the irreversibility of the renormalization group, which we might expect to apply for more general backgrounds and to appear in whatever way is appropriate given the nature of the physics involved in conformal symmetry breaking. 
Third, the results for Anti-de Sitter space, when combined with a geometric contribution which we also compute, provide a test of the recent proposal \cite{Faulkner:2013ana} for the $\frac{1}{N}$ corrections to the celebrated Ryu-Takayanagi formula \cite{Ryu:2006bv}. The full statement is that the entanglement entropy of the conformal field theory dual to the bulk Anti-de Sitter space, which we think of as embedded in a holographic quantum theory of gravity, is given by: \begin{equation} \label{FLMp} S_{EE}^{CFT, \partial \gamma}=\frac{A_{\gamma}}{4G}+\frac{\delta A_{\gamma}}{4G}+S_{EE}^{AdS, \gamma}+S_{Wald}+S_{ren} \end{equation} The first term ($\sim N^2$) is the Ryu-Takayanagi term; the other terms ($\sim N^0$) include the one-loop correction to the area, the bulk entanglement entropy, and corrections from curvature couplings and renormalization counterterms. In the context of $AdS/CFT$, the mixed boundary conditions have an interesting interpretation as dual to double trace operators in the conformal field theory, as was first pointed out in \cite{Witten:2001ua} and elaborated on in e.g. \cite{Gubser:2002zh}. In particular, the quantity $f$ is dual to a \emph{coupling constant} for a double trace interaction: \begin{equation} \sim \frac{f}{2} \int d^dx\, \hat{\mathcal{O}} \hat{\mathcal{O}} \end{equation} where $\hat{\mathcal{O}}$ is the operator dual to the bulk field in the conformal case $f=0$. This is a very rich topic, which e.g. includes interesting relations with the stability and boundedness \cite{Troost:2003ig} \cite{Casper:2017gcw} \cite{Cottrell:2017gkb} of the operators in $AdS/CFT$. Since the entropy may act as an order parameter for phase transitions, it too probes these issues and we will discuss them, though they will not be our main focus here. 
The important point is that the result (\ref{SSA_Req}) \emph{does} of course apply to the conformal field theory dual, and we will attempt to assess whether the prediction of (\ref{FLMp}) obeys the inequality as expected for any choice of $f$ for which the corresponding operator is relevant. This extends the work of \cite{Miyagawa:2015sql} and \cite{Sugishita:2016iel}, which, along with \cite{Faulkner:2013ana}, partly inspired this work. As a side note, we would also like to point out that this appears to serve in general as a tractable example of a holographic renormalization group flow generated entirely by $\frac{1}{N}$ effects, and also that we are able to use our methods to improve the computation of the vacuum energy found in \cite{Gubser:2002zh}.\footnote{We will actually compute the free energy density in the conformal field theory.} \section{Preliminaries and Methods} \label{SP} The entropy of entanglement is defined as the von Neumann entropy of the ``reduced'' density operator associated with some subsector of the full quantum theory. Traditionally, one imagines separating the Hilbert space into a direct product: \begin{equation} \label{ABsplit} H=H_{A} \otimes H_B \end{equation} and then taking the trace of the density operator $\rho_{AB}$ for the full state over, say, space $B$ to get a ``reduced'' density operator and associated entanglement entropy: \begin{equation} \rho_A=Tr_B \rho_{AB} \end{equation} \begin{equation} S_{EE}^A=-Tr_A(\rho_A \ln(\rho_A)) \end{equation} Strictly speaking, for quantum field theories the splitting in (\ref{ABsplit}) is not possible due to the Reeh-Schlieder theorem \cite{RS1}, and one must instead define the entanglement entropy for a \emph{subring of observables} rather than a subsector of the Hilbert space. This distinction turns out to be important for e.g. 
gauge fields \cite{Casini:2013rba} \cite{Ohmori:2014eia} \cite{Radicevic:2016tlt}, but since we are here interested only in scalar fields we may ignore this.\footnote{In fact this subtlety produces ``boundary effects'' for entanglement of its own sort, which are \emph{different} from those investigated here.} In quantum physics $S_{EE}^A$ can be nonzero even if the full state $\rho_{AB}$ is pure, which is arguably one of the more profound differences from classical theory. In this work, we are interested in the entanglement entropy associated specifically with the \emph{Rindler wedge}, the subregion of spacetime accessible to a uniformly accelerating observer. In the case of half Minkowski space, we are imagining a Rindler wedge associated with an observer accelerating away from the horizon but remaining equidistant from the artificial boundary. In the case of Anti-de Sitter space we are actually imagining a Rindler observer in the \emph{dual conformal field theory}. See again Figure \ref{BulkBoundaryFig}. It is worth noting here that the entanglement entropy should be associated not with a spatial slice but with the entire causal diamond which is the causal development of the slice. The Penrose diagram of the boundary in Figure \ref{BulkBoundaryFig} shows the diamond associated with the bulk spatial slice. We will compute the entanglement entropy using the replica method. In general, this means observing that: \begin{equation} S=-\partial_n |_{n=1}Tr(\rho^n) \end{equation} This is just a mathematical fact, but it can be heuristically interpreted as saying the entropy is a measure of the decrease in the ``coincidence probability'' (the probability that all systems are found in the same state) as the number of systems is increased. 
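To verify this identity, diagonalize $\rho$ with eigenvalues $p_i$ (so $\sum_i p_i = 1$, i.e. $Tr(\rho^n)\big|_{n=1}=1$):

```latex
-\partial_n\big|_{n=1}\,Tr(\rho^n)
=-\partial_n\big|_{n=1}\sum_i p_i^{\,n}
=-\sum_i p_i^{\,n}\ln p_i\,\Big|_{n=1}
=-\sum_i p_i\ln p_i
= S ,
```

which is precisely the von Neumann entropy defined above.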
Specifically in quantum field theory, this method geometrizes nicely because $\rho^n$ can be thought of as ``gluing'' together multiple copies of the space, and the tracing procedure then just computes the partition function on this space: \begin{equation} \label{Seq1} S=-\partial_n|_{n=1}(Z_n/Z_1^n)=\partial_n|_{n=1}(W_n-n W_1) \end{equation} where $W$ refers to the connected generating functional (or free energy in the Euclidean picture). In the case of the Rindler wedge, the replica manifold is the cone with surplus angle $\theta= 2 \pi (n-1)$, and the conical singularity is located at the horizon (which is a single point after we Euclideanize). Within this computational scheme the $\epsilon$ cutoff serves to regulate this singularity, but the origin of the divergence is still better understood from the heuristic description given in the previous section. This quantity is divergent in quantum field theory due to the short distance behavior of the Green's function, which encodes correlations across the horizon at all scales. We will therefore introduce a short distance cutoff $\epsilon$ to regulate the result. The ``area law'' result mentioned in the introduction implies that the Rindler entropy is also infrared divergent since the Rindler horizon area is infinite, so we will include a long distance cutoff $\Lambda$ where necessary as well. In a free theory, even with nontrivial boundary conditions, the whole theory is determined entirely by its Green's function, which may be determined by solving the equation of motion or built from the spectrum of the theory. For example we have that, at one loop: \begin{equation} \label{Seq2} W=\frac{1}{2}\int_{m^2}^{\infty}dm^2\,Tr(G) \end{equation} where the trace is over the spacetime. 
We may obtain results for the replica manifold from those for $n=1$ by means of the Sommerfeld formula \cite{Sommerfeld}: \begin{equation} \label{Seq3} F_{2 \pi \alpha}(z)=F_{2 \pi}(z)-\frac{1}{4 \pi i \alpha}\int_{\Gamma} \cot \Big( \frac{w-z}{2 \alpha} \Big)F(w) dw \end{equation} where the $\Gamma$ contour goes down the line $-\pi$ and back up the line $\pi$, and where $F_{2 \pi}(z)$ is an arbitrary $2 \pi$-periodic function. In effect, then, computing the entanglement entropy is just a matter of composing the linear functions (\ref{Seq1}, \ref{Seq2}, \ref{Seq3}). So the entanglement entropy up to one loop is a linear functional of the Green's function, which is expected given that the theory is free and we are interested in vacuum correlations across the horizon. We will actually be interested here only in the \emph{entropy difference} for different boundary conditions, so this linearity is convenient since it means the \emph{difference in entanglement entropies is just a linear functional of the difference in Green's functions}. For Anti-de Sitter space we will need to additionally include a geometric contribution. This will be found using the linearized Einstein equation and by point splitting the Green's function to obtain the stress tensor; see Section \ref{SAdS}. This too is linear in the Green's function. For all cases we will define the subtracted entropy as: \begin{equation} \label{entropydiff} \Delta S^f=S^f-S^0 \end{equation} where $f$ is a boundary coupling which breaks conformal invariance. In general $\Delta$ will be used for this subtraction while $\delta$ will be used for quantum corrections, although $\Delta$ will also be used for the scaling dimension of some operators where it is not too unclear to do so (we hope). We will be working with a free massive quantum scalar field. 
We can define the theory by specifying the (Euclidean)\footnote{To be clear, $t=-i\tau$.} action and partition function: \begin{equation} \label{ScalarFieldTheory} Z=\int [d \phi]_{\partial \Omega}e^{-I[\phi]} \qquad I[\phi]=\frac{1}{2} \int_{\Omega} \sqrt{g} \big( g^{ab}\partial_a \phi \partial_b \phi+m^2 \phi^2 \big) \end{equation} where $\Omega$ is some spacetime background and the restriction on the path integral must be chosen to implement suitable boundary conditions, of which there will be a one parameter family. The requirement is that the operator associated with the equation of motion resulting from the action: \begin{equation} \label{Dop} \hat{D}=- \nabla^2+m^2= -\frac{1}{\sqrt{g}} \partial_a \big(\sqrt{g}g^{ab}\partial_b \quad \big) \end{equation} is a positive operator on the space of functions satisfying said boundary conditions. This is made manifest if one recognizes that we may integrate by parts to schematically obtain: \begin{equation} Z=\int [d \phi]_{\partial\Omega} e^{- \phi^{\dagger} \cdot \hat{D} \cdot \phi} \end{equation} We will use the variables $D=d+1=n+3$, so that $D$ represents the bulk spacetime dimension, $d$ the boundary dimension, and $n$ the dimension of the intersection of the horizon with the boundary. \section{Half Minkowski Space} \label{SM} The (Euclidean) background geometry for half Minkowski space is given by: \begin{equation} ds^2=dz^2+d \tau^2+d \vec{x}\cdot d \vec{x} \qquad z \geq 0 \end{equation} We will imagine a horizon at, say, $x_1=0$, cutting the bulk spacetime in two and allowing us to compute an associated entanglement entropy. 
The differential operator which defines the scalar field theory reduces from (\ref{Dop}) to: \begin{equation} \hat{D}=-\partial_z^2-\vec{\partial_x} \cdot \vec{\partial_x}+m^2 \end{equation} Meanwhile one may integrate the action by parts to check that the operator is positive on the space of functions which obey: \begin{equation} \partial_z \phi |_{z=0}=f \phi|_{z=0} \qquad f\geq0 \end{equation} for some fixed $f$. The case $f=0$ is traditionally called the ``Neumann'' theory while $f \to \infty$ is the ``Dirichlet'' theory. We may think of nonzero $f$ as inserting a relevant boundary operator of dimension $D-2$. We will be interested in the spectrum of this operator since this can be used to define the quantum field theory and in particular may give us the Green's function. So we seek to solve: \begin{equation} \hat{D} \phi=\lambda \phi \end{equation} One may check that the following functions are an orthonormal set of eigenfunctions which obey the boundary conditions: \begin{equation} \label{eigenfunctionsM} \phi=\frac{1}{(2 \pi)^{D/2}}(\alpha_{\kappa}\psi_{\kappa}+\alpha_{\kappa}^* \psi_{\kappa}^*)e^{i \vec{k} \cdot \vec{x}} \end{equation} where: \begin{equation} \psi_{\kappa}=e^{i \kappa z} \qquad \alpha_{\kappa}=\frac{1}{\sqrt{2}} \frac{\kappa+i f}{\sqrt{f^2+\kappa^2}} \end{equation} The corresponding eigenvalue is: \begin{equation} \lambda= \kappa^2+|\vec{k}|^2+m^2 \end{equation} Notice that the $f=0$ case returns: \begin{equation} \alpha_{\kappa}=\alpha^*_{\kappa} \end{equation} whereas $f \to \infty$ gives: \begin{equation} \alpha_{\kappa}=-\alpha^*_{\kappa} \end{equation} which are precisely the results we would expect (symmetry and antisymmetry) using the method of images to obtain the spectrum with boundary conditions from the spectrum on the whole space. Indeed, one can see from the structure of the eigenfunctions that it is a wavelength-dependent generalization of the method of images, where the phase of the image depends on the scale. 
The boundary condition induces a sort of ``RG flow'' from a free scalar field with a ``Neumann mirror'' in the UV (at large $\kappa$) to the same theory but with a ``Dirichlet mirror'' in the IR (small $\kappa$).\footnote{To be clear, since the phase depends on scale, an object emitting a range of wavelengths would not really perceive this as a mirror.} We may find the Green's function from the spectral theory of $\hat{D}$ using the relation: \begin{equation} \hat{D}^{-1}=\sum_{\lambda}\frac{v_{\lambda}^\dagger v_{\lambda}}{\lambda} \end{equation} where the $v_{\lambda}$ are the eigenfunctions of $\hat{D}$, in this case (\ref{eigenfunctionsM}). We have: \begin{equation} G^{f}=G+\int \frac{d \kappa dk^d}{(2 \pi)^D} \frac{\kappa^2-f^2}{\kappa^2+f^2} \frac{e^{i\vec{k}\cdot(\vec{x}-\vec{x}')}e^{i\kappa(z+z')}}{\kappa^2+|\vec{k}|^2+m^2} \end{equation} where $G$ is the usual Minkowski Green's function. We see that we have: \begin{equation} G^0=G+\hat{P}_zG \qquad G^{\infty}=G-\hat{P}_zG \qquad \hat{P}_z\equiv z' \to -z' \end{equation} We are actually most interested in the ``subtracted'' Green's function since we want to compute differences between the theories: \begin{equation} \label{dGm} \Delta G^{f} \equiv G^f-G^0=-2 \int \frac{d \kappa dk^d}{(2 \pi)^D} \frac{f^2}{\kappa^2+f^2} \frac{e^{i\vec{k}\cdot(\vec{x}-\vec{x}')}e^{i\kappa(z+z')}}{\kappa^2+|\vec{k}|^2+m^2} \end{equation} We may now use the formulas from Section \ref{SP} to compute the entanglement entropy. We will be cursory in the following; see Appendix A for more details. We will choose the case $m^2=0$, since then it is only the boundary that breaks conformal invariance, and $D=4$ for simplicity, but our method is generalizable. Let us start by considering the limiting cases $f \to 0, \infty$. We find: \begin{equation} \label{Minkfinty} \Delta S^{\infty} = -\frac{1}{24 \sqrt{\pi}} \frac{\Lambda}{\epsilon} \qquad \Delta S^{0} =0 \end{equation} where the latter is true by definition. 
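Incidentally, the subtracted Green's function (\ref{dGm}) above follows from a one-line manipulation: setting $f=0$ in $G^f$ gives the coefficient $1$ inside the integral (reproducing $G^0=G+\hat{P}_zG$), so the subtraction only changes the integrand's prefactor,

```latex
\frac{\kappa^2-f^2}{\kappa^2+f^2}-1=-\frac{2f^2}{\kappa^2+f^2},
```

which is exactly the factor appearing in (\ref{dGm}).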
These are the endpoints; we would like to see the full interpolation as a function of $f$. We may try to proceed by expanding the integrand of (\ref{dGm}), which is intractable as it stands, in either small or large $f$: \begin{equation} \frac{f^2/\kappa^2}{1+f^2/\kappa^2}=\sum_{n=0}^{\infty}(-1)^n\big(f^2/\kappa^2\big)^{n+1} \qquad f\ll \kappa \end{equation} \begin{equation} \frac{1}{1+\kappa^2/f^2}=\sum_{n=0}^{\infty}(-\kappa^2/f^2)^n \qquad f\gg \kappa \end{equation} It was pointed out in \cite{Berthiere:2016ott} that the entropy is not an analytic function of $f$ at $f=0$, possibly due to the appearance of a tachyon for $f < 0$ which indicates the onset of a phase transition at this point. Therefore we will expand in large $f$. We obtain: \begin{equation} \Delta S^f = -\frac{1}{24 \sqrt{\pi}} \frac{\Lambda}{\epsilon}\sum_{n=0}^{\infty} \frac{\Gamma(\frac{1}{2}+n)}{\sqrt{\pi}(1+2n)} \Big(\frac{-1}{f^2 \epsilon^2}\Big)^n \end{equation} This is divergent,\footnote{Consider e.g. the ratio test.} but can be interpreted as an asymptotic series.\footnote{A theorem of analysis \cite{AAAMiller} ensures that, given the integrand is analytic and the integration is finite term by term, the expression is the correct asymptotic series.} Indeed, (\ref{dGm}) is not so different from the Eulerian integral: \begin{equation} F(x)=\int_0^{\infty} dt \frac{e^{-t}}{1+xt} \end{equation} which is known to be tractable with asymptotic methods \cite{Euler}. We can even resum the series using the Borel summation method\footnote{Again, see Appendix A for details.} to obtain: \begin{equation} \label{Result1} \Delta S^f = \Big( \frac{G^{22}_{23} \big(\frac{1}{f^2 \epsilon^2}\big|^{1/2,1/2}_{0,1,-1/2} \big)}{2 \sqrt{\pi}} \Big) \Delta S^{\infty} \end{equation} where $\Delta S^{\infty}$ is the same as that in (\ref{Minkfinty}). Although there is no proof ensuring the resummation is unique, we present this as the correct solution for the entanglement entropy as a function of $f$. It is monotonic as expected and has the correct asymptotic expansion. 
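The usefulness of such divergent series can be seen numerically. The following sketch (ours, applied to the Eulerian integral above rather than to the entropy series itself) shows that truncating the divergent expansion $F(x)\sim\sum_n(-1)^n\,n!\,x^n$ near its smallest term approximates the integral well, even though the full series diverges.

```python
import math

def euler_integral(x, steps=200000, t_max=40.0):
    """F(x) = ∫_0^∞ e^{-t}/(1+xt) dt via a simple midpoint rule;
    the e^{-t} factor makes the tail beyond t_max negligible."""
    dt = t_max / steps
    return sum(math.exp(-(i + 0.5) * dt) / (1.0 + x * (i + 0.5) * dt) * dt
               for i in range(steps))

def asymptotic_partial_sum(x, n_terms):
    """Partial sum of the divergent asymptotic series Σ (-1)^n n! x^n."""
    return sum((-1) ** n * math.factorial(n) * x ** n for n in range(n_terms))

x = 0.1
exact = euler_integral(x)
# truncate near the smallest term, n ≈ 1/x; beyond this the terms grow again
truncated = asymptotic_partial_sum(x, 10)
print(abs(exact - truncated))  # small, even though the series diverges
```

Adding many more terms makes the partial sums blow up, which is why a resummation method such as Borel summation is needed to recover the full $f$ dependence.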
One can also check that it \emph{isn't analytic at the origin}, as expected. So the appearance of the tachyon, and therefore the absence of stability, for $f < 0$ appears as non-analyticity of $\Delta S^f$. One can now plot the whole interpolation between theories; see Figure \ref{MinkowskiDS}. Notice $f$ only appears in the combination $f \epsilon$, which encodes the ratio of the renormalization scale to the cutoff. \begin{figure}[h!] \includegraphics[width=\textwidth]{figures/PicMinkowski.png} \centering \caption{This plot shows $\Delta S^f/|\Delta S^{\infty}|$ vs. $f \epsilon$ for $D=4$ and $m^2=0$. Notice that it decreases monotonically.} \label{MinkowskiDS} \end{figure} Notice also that it depends not on the area of the horizon, but on the area of its intersection with the boundary. So it is \emph{subleading} to the usual area law. This, plus its monotonicity, is reminiscent of the $g$-theorem for $D=2$, where we have: \begin{equation} S=\frac{c}{6} \log \Big(\frac{\Lambda}{\epsilon} \Big)+\log(g)+c_0 \end{equation} The first term takes the place of the area law ($c$ is the central charge), the second term is a constant (since $n=0$ here) which depends on the boundary physics, and the last term is a constant which doesn't. The $g$ term has been proven to decrease monotonically, and this has been generalized to higher dimensions more recently \cite{Casini:2018nym}. Our results are illustrative of this generalization, again building on \cite{Berthiere:2016ott}. \section{Anti-de Sitter Space} \label{SAdS} Now we will turn to the free scalar (\ref{ScalarFieldTheory}) in Anti-de Sitter space, with (Euclideanized) metric: \begin{equation} ds^2=\frac{L^2}{z^2}\big(dz^2+d \tau^2+d \vec{x}\cdot d \vec{x} \big) \qquad z \geq 0 \end{equation} Topologically this is the same as the previous section, and we will again use $x_1=0$ to define the Rindler splitting. 
For the theory (\ref{ScalarFieldTheory}) in an Anti-de Sitter background, it is convenient to introduce the quantity: \begin{equation} \label{nudef} \nu=\sqrt{d^2/4+m^2L^2} \qquad \Delta_{\pm}=\frac{d}{2}\pm \nu \end{equation} For example, the small-$z$ limiting behavior of any solution to the equation of motion goes as: \begin{equation} \label{zexp} \phi(z)=p_1 z^{\Delta_-}+\ldots+p_2z^{\Delta_+} \end{equation} For the range of masses given by: \begin{equation} 0\leq\nu\leq1 \end{equation} a one-parameter family of boundary conditions is permissible, just as in the Minkowski case: \begin{equation} p_2=f p_1 \qquad f \ge0 \end{equation} Where we see the mass dimension of $f$ is [$f$]$=2 \nu$.\footnote{Because we will be integrating over $\nu$ at various points, it will be necessary to be careful about this implicit $\nu$ dependence in $f$} Just as before, as long as $f > 0$ the theory is well defined (as was shown in \cite{Ishibashi:2004wx}). The $f=0$ case is traditionally still called the ``Neumann theory'' and the $f \to \infty$ case the ``Dirichlet theory''. As was pointed out in \cite{Gubser:2002zh}, it is important to note that in the limit $\nu \to 0$ the spectra for different $f$ all degenerate and so all give rise to the same quantum theory.\footnote{Nevertheless, there is still a one-parameter family of possibilities due to the appearance of an additional term in the expansion (\ref{zexp}) for $\nu=0$. This is just as with degeneracy for ordinary differential equations, wherein an ``additional solution'' appears. One could consider this family additionally, but we will not pursue this here} As mentioned in Section (\ref{SI}), in the dual conformal field theory the $f$ parameter acts as a coupling constant for a double trace deformation of the Neumann theory, and this deformation generates a renormalization group flow between a theory with operators of scaling dimension $\Delta_-$ and one of $\Delta_+$. 
Unlike the Minkowski case, it is this conformal field theory dual that we have in mind, and we are thinking of our results as a holographic calculation of the (Rindler) entropy in the dual theory. We are able to relate the two using the proposal (\ref{FLMp}). In our case, we are only interested in the entropy difference, so we have: \begin{equation} \label{dFLM} \Delta S_{CFT}^f=\underbrace{\frac{\Delta \delta A^f}{4G}}_{geometric}+\underbrace{\Delta S_{AdS}^f}_{entropic} \end{equation} We obtain (\ref{dFLM}) by noting that the classical contributions cancel (because the classical solution for the theories we consider is $\phi=0$), that we have no curvature couplings (which would give $S_{Wald}$ in (\ref{FLMp})), and by assuming that any renormalization counterterms do not depend on $f$.\footnote{This is a reasonable assumption, but it is not guaranteed, since in principle there may be finite boundary counterterms associated with $f$. We will neglect this possibility here.} This is all precisely the same as in \cite{Miyagawa:2015sql} and \cite{Sugishita:2016iel}, where the $f=0,\infty$ cases were computed. We seek to extend their calculation to general $f$ and compare results to expectations based on the irreversibility of the renormalization group (\ref{SSA_Req}) as a way of checking the consistency of the proposal (\ref{FLMp}). So we must compute two terms, an \emph{entropic} contribution and a \emph{geometric} contribution. Both can be obtained from the Green's function. The Green's function is most easily found not by spectral theory but by solving: \begin{equation} \hat{D} G(x,x')=\frac{1}{\sqrt{g}} \delta(x-x') \end{equation} The metric factor is chosen so that: \begin{equation} \int dV f(x)\Big((\Box-m^2) G(x,x') \Big) =f(x') \end{equation} The solution requires nothing other than standard Sturm-Liouville techniques and resembles finding the classical static electric field Green's function in cylindrical coordinates, as in \cite{Jackson}. 
This task was first accomplished for general $f$ in \cite{Gubser:2002zh}. We have: \begin{equation} \label{AdSG} \Delta G^f=\frac{-\sin(\pi \nu) L }{\pi} \int d \tilde{k} \alpha_k^f (z z')^{d/2}K_{\nu}(k z)K_{\nu}(k z')e^{i k |\Delta \vec{ r}| \cos(\theta)} \end{equation} With: \begin{equation} \alpha_k^f=\frac{2^{2 \nu}f \Gamma(1+\nu)}{k^{2 \nu} \Gamma(1-\nu)+2^{2 \nu}f \Gamma(1+\nu)}=\Big(\frac{k^{2 \nu} \Gamma(1-\nu)}{2^{2 \nu}f \Gamma(1+\nu)}+1 \Big)^{-1} \end{equation} And: \begin{equation} d \tilde{k}= \frac{\Omega_n k^{d-1}\sin(\theta)^n dk d\theta}{(2 \pi L)^d} \end{equation} And where $\Delta \vec{r}$ refers to the boundary directions only. \subsection{Entropic Contribution} \label{entropiccont} We may proceed to get the bulk entanglement entropy just as in Section \ref{SM}, this time by expanding\footnote{And the same theorem guarantees we will get at least an asymptotic series}: \begin{equation} \label{AdSexp} \alpha_k^f=\sum_i\Big(-\frac{k^{2 \nu} \Gamma(1-\nu)}{2^{2 \nu}f \Gamma(1+\nu)}\Big)^{i} \end{equation} For full details, see Appendix B. As it turns out, there is a complication: for Anti-de Sitter space the analogue of (\ref{Seq2}) is \begin{equation} W=\frac{1}{2} \int_0^{\nu} d\nu^2 \,\mathrm{Tr}\, G \end{equation} Where we may integrate from $\nu=0$, since $\Delta W_{\nu=0}=0$ as explained above. The issue is that \emph{this} integral will be intractable. However, if we expand in $\nu$, which is reasonable since $0\le \nu \le 1$, we will be able to proceed, and will even be able to resum the series (\ref{AdSexp}) in $f$ order by order in $\nu$ \emph{without} asymptotic methods. 
The result, which does not converge, can be interpreted as an asymptotic expansion in small $\nu$. For example, for $D=5$ we obtain: \begin{equation} \label{Result2b} \Delta S^f=\Delta S^{\infty} \Big(\frac{f \epsilon^{2 \nu}}{1+f \epsilon^{2 \nu}} \Big)+s_{4,1} \Big(\frac{f \epsilon^{2 \nu}}{(1+f \epsilon^{2 \nu})^2} \Big)+\mathcal{O}(\nu^5) \end{equation} Where: \begin{equation} \Delta S^{\infty}=\frac{1}{72 \pi} \Big(\frac{\Lambda^2}{\epsilon^2} \Big) \nu^3 \end{equation} Which agrees with \cite{Miyagawa:2015sql} and \cite{Sugishita:2016iel}, and where: \begin{equation} \label{Sbulk5} s_{4,1}=\frac{1+\gamma_e+\log(4)+\psi^0(3/2)}{96 \pi} \Big(\frac{\Lambda^2}{\epsilon^2} \Big) \nu^4 \end{equation} This is monotonic, just as in the Minkowski case; however, here it is monotonically \emph{increasing}. See Figure \ref{AdSEntropyPlot}. \begin{figure}[h!] \includegraphics[width=\textwidth]{figures/PicAdSE.png} \centering \caption{This plot shows $\Delta S^f_{AdS}/|\Delta S^{\infty}_{AdS}|$ vs. $f \epsilon$ for $D=5$ and $\nu=\frac{1}{2}$. Notice that it increases monotonically.} \label{AdSEntropyPlot} \end{figure} It is possible to compute the additional terms systematically for any $D \ge 4$.\footnote{For $D=3$ there are additional divergences which prevent the integral expressions from being tractable even after expansion; see \cite{Miyagawa:2015sql}} For all cases the results have the structure: \begin{equation} \label{Sbulkany} \Delta S_{AdS}^f=\sum_{i=3}^{\infty} \sum_{j=0}^{i-3} s_{ij} \nu^i \phi\Big(-\frac{1}{f \epsilon^{2 \nu}}, -j,0\Big) \end{equation} Where $\phi$ is the Hurwitz-Lerch $\phi$ function. Only terms with $j=0$ will be nonzero in the limit $f \to\infty$, and these will only contribute for odd $i$ with $3 \leq i \le D-2$. For $D=5$ this is only the $\nu^3$ term, which is why we were able to write it in the form (\ref{Sbulk5}). 
The expression we provided is written in terms of the dimensionless parameter $f \epsilon^{2 \nu}$, so we have implicitly had in mind fixing this quantity, expanding in it, and then resumming order by order in $\nu$. This form is useful for considering e.g. varying $f$ with $\epsilon$ fixed, which interpolates between the Neumann and Dirichlet theories. However, for some fixed $f$ we would like to take $\epsilon \to 0$, since it is an ultraviolet regulator in the conformal field theory and we are really only interested in finite or divergent terms in this limit. Which terms survive will depend on the choice of $D$ and $\nu$, another reason (\ref{Sbulk5}) and (\ref{Sbulkany}) are useful. For the case $D=5$ and $\nu=\frac{1}{2}$ again, we get for example\footnote{We could not provide a similar expression in the Minkowski case because the expression was not analytic for small $f \epsilon$}: \begin{equation} \Delta S^f_{AdS}= \frac{3(\gamma_e+\log(4)+\psi^0(3/2))+11}{4608 \pi} \Big( \frac{f \Lambda^2}{\epsilon}\Big)-\frac{3(\gamma_e+\log(4)+\psi^0(3/2))+7}{2304 \pi} (f^2 \Lambda^2) +\mathcal{O}(\nu^5) \end{equation} Note the second term is a finite contribution, while both are proportional to the area in the dual conformal field theory. It is worth noting that as a bonus we may obtain the zero-temperature free energy (the Euclidean connected generating functional), which is related to the vacuum energy sought in \cite{Gubser:2002zh}. In \cite{Gubser:2002zh} the authors noted monotonicity of this quantity based on the integral expression (\ref{AdSG}), but did not compute the integral. We may, however, proceed with the same strategy as for the entropy to compute, for example: \begin{equation} \label{FrEnergy} \Delta W^f=- \Big(\frac{f \epsilon^{2 \nu}}{1+f \epsilon^{2 \nu}} \Big) \Big(\frac{\Lambda^d}{\epsilon^d} \Big) \Big(\frac{\Omega_n}{2^{d+1} \pi^{d-1}} \Big) \frac{ \Gamma(\frac{d-1}{2}) \Gamma(\frac{d}{2})^2}{3 d \Gamma(\frac{d+1}{2})}\nu^3+\mathcal{O}(\nu^4) \end{equation} Which in e.g. 
$D=5$ gives: \begin{equation} \Delta W^f=-\frac{1}{144 \pi^2} \Big(\frac{f \epsilon^{2 \nu}}{1+f \epsilon^{2 \nu}} \Big) \Big(\frac{\Lambda^4}{\epsilon^4} \Big) \nu^3 +\mathcal{O}(\nu^4) \end{equation} We note that $\frac{\Lambda^4}{\epsilon^4}$ is just the volume of the conformal field theory in cutoff units, so the coefficient of this may be interpreted as the free energy density (at zero temperature). All the higher-order terms can be computed systematically as described previously. Notice the monotonicity appears as expected. \subsection{Geometric Contribution} In order to make a prediction for the entanglement entropy of the dual conformal field theory using the proposal (\ref{FLMp}), we must also compute the area term, which comes from 1-loop stress tensor backreaction on the geometry. In general, this could result in a different Ryu-Takayanagi minimal surface in the bulk, but since we have chosen the horizon to be defined by $x_1=0$, the unbroken symmetries ensure this will not change in our case, and we need only compute the shift in area for this same surface. We must keep in mind also that we should use the same expansion employed when computing the entropic contribution. Namely, we fix a $z$ cutoff for the space and expand in the $\nu$-independent quantity $f \epsilon^{2 \nu}$. This means we will need to solve the (linearized) Einstein equations with appropriate boundary conditions imposed at $z=\epsilon$.\footnote{Equivalently, we may think of choosing boundary conditions by requiring that in the $\epsilon \to 0$ limit we have the same boundary as the Neumann theory, since this fixes the ultraviolet fixed point} We proceed as follows. First, we must obtain the stress tensor. We may get this by ``point splitting'' the Green's function (see e.g. \cite{Moretti:1998rs}). 
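The dimension-dependent prefactor can be sanity-checked numerically: the general coefficient in (\ref{FrEnergy}) should reduce to $1/(144\pi^2)$ for $d=4$ (i.e. $D=5$). A small Python sketch of this check (ours, using $n=d-2$ and $\Omega_n=2\pi^{(n+1)/2}/\Gamma(\frac{n+1}{2})$):

```python
import math

def free_energy_coefficient(d):
    """Magnitude of the nu^3 prefactor in Delta W^f for boundary dimension d:
    Omega_n/(2^{d+1} pi^{d-1}) * Gamma((d-1)/2) Gamma(d/2)^2 / (3 d Gamma((d+1)/2)),
    with n = d - 2 and Omega_n the area of the unit n-sphere."""
    n = d - 2
    omega_n = 2.0 * math.pi ** ((n + 1) / 2.0) / math.gamma((n + 1) / 2.0)
    return (omega_n / (2.0 ** (d + 1) * math.pi ** (d - 1))
            * math.gamma((d - 1) / 2.0) * math.gamma(d / 2.0) ** 2
            / (3.0 * d * math.gamma((d + 1) / 2.0)))
```

Evaluating `free_energy_coefficient(4)` reproduces the quoted $1/(144\pi^2)$ to machine precision.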
That is, we define: \begin{equation} \label{ptsplit} \langle T_{ab} \rangle =\lim_{x'\to x}\big( \langle \hat{T}_{ab}(x,x') \rangle -Z_{ab}\big)+g_{ab}Q \end{equation} Where the first term is the classical stress tensor promoted to a quantum operator, but ``point split'' so it isn't evaluated at coincident points. The second term is a quantity meant to remove the divergences from the first; it can be rather difficult to determine, but depends only on geometric invariants. The final term $Q$ is meant to enforce: \begin{equation} \nabla^a \langle T_{ab} \rangle =0 \end{equation} For us, the issue is simplified since we are only interested in the \emph{difference} in stress tensors for different boundary conditions. The divergent part $Z_{ab}$ is removed from this automatically.\footnote{In principle it is possible that some finite part remains, since we are comparing two different theories, not two states in the same theory (for which the subtraction is guaranteed to be correct by a theorem \cite{WaldQFT}). We will neglect this possibility since we appear to obtain the correct answer when our results reduce to known results obtained by other methods.} The classical (Hilbert) stress tensor is: \begin{equation} T_{ab} =\frac{2}{\sqrt{g}} \frac{\delta (\sqrt{g}\mathcal{L})}{\delta g^{ab}} =2 \frac{\delta \mathcal{L}}{\delta g^{ab}}+g_{ab} \mathcal{L} \end{equation} Which for our case is: \begin{equation} T_{ab}= \partial_a \phi \partial_b \phi+\frac{1}{2} g_{ab}(g^{cd}\partial_c \phi \partial_d \phi+m^2 \phi^2) \end{equation} Where recall $m^2$ is related to $\nu$ by (\ref{nudef}), which is important for a $\nu$ expansion. The promotion of $T_{ab}$ to a point-split operator and taking its expectation value can be accomplished by replacing the terms in (\ref{ptsplit}) with the Green's function or the appropriate derivative. See Appendix B for the details of this and the rest of the calculation. 
So using (\ref{ptsplit}) we may obtain an expansion for the difference in stress tensors. To lowest order in $\nu$ we get: \begin{equation} \label{stresstensor} \langle T_{ab} \rangle =-\big(\frac{f \epsilon^{2 \nu}}{1+f \epsilon^{2 \nu }} \big) \big( \frac{\Omega_n d^2 \Gamma(\frac{d-1}{2}) \Gamma(\frac{d}{2})^2 }{2^{d+4} \Gamma(\frac{d+3}{2}) \pi^{d-1}} \big) \frac{\nu}{L^D} g_{ab}+\mathcal{O}(\nu^2) \end{equation} Where $\Omega_n$ is the area of the $n$-sphere. For $D=5$ this is: \begin{equation} \langle T_{ab} \rangle =-\frac{f \epsilon^{2 \nu}}{1+f \epsilon^{2 \nu}}\Big(\frac{\nu}{15 L^5 \pi^2}\Big)g_{ab}+\mathcal{O}(\nu^2) \end{equation} In order to obtain (\ref{stresstensor}), one must expand as in the previous subsection. This means cutting off the space at $z=\epsilon$; (\ref{stresstensor}) should therefore be thought of as the boundary value of the stress tensor, which interpolates between Neumann and Dirichlet as a function of $z$. The contribution goes to zero as $\epsilon \to 0$ for fixed $f$, as it should, since all theories approach the Neumann fixed point in the ultraviolet. 
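The numerical prefactor here can be checked in the same spirit as before. The following sketch (our own consistency check, not part of the derivation) confirms that the general coefficient in (\ref{stresstensor}) reduces to $1/(15\pi^2)$ for $d=4$:

```python
import math

def unit_sphere_area(n):
    """Area Omega_n of the unit n-sphere embedded in R^{n+1}."""
    return 2.0 * math.pi ** ((n + 1) / 2.0) / math.gamma((n + 1) / 2.0)

def stress_tensor_coefficient(d):
    """Magnitude of the O(nu) prefactor of g_ab in <T_ab>:
    Omega_n d^2 Gamma((d-1)/2) Gamma(d/2)^2 / (2^{d+4} Gamma((d+3)/2) pi^{d-1}),
    with n = d - 2."""
    n = d - 2
    return (unit_sphere_area(n) * d ** 2
            * math.gamma((d - 1) / 2.0) * math.gamma(d / 2.0) ** 2
            / (2.0 ** (d + 4) * math.gamma((d + 3) / 2.0) * math.pi ** (d - 1)))
```

For $d=4$ ($n=2$, $\Omega_2=4\pi$) this evaluates to $1/(15\pi^2)$, matching the $D=5$ expression.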
Because this contribution is proportional to the metric, at this order in $\nu$ the backreaction is equivalent to a simple shift of the cosmological constant: \begin{equation} \langle T_{ab} \rangle =\lambda g_{ab} \to \delta \Lambda_{c.c.}=-8 \pi G \lambda \end{equation} The cosmological constant is related to the $AdS$ radius $L$ by: \begin{equation} \Lambda_{c.c.}=-\frac{(d)(d-1)}{2 L^2} \end{equation} So: \begin{equation} \delta L = -\frac{8 \pi G L^3 \delta \lambda}{d(d-1)}+\mathcal{O}(G^2) \end{equation} Meanwhile, the classical result for the area is: \begin{equation} A=\Lambda^n\int_{\epsilon}^{\infty} (L/z)^{n+1}\,dz=\frac{L^{n+1}}{n} \frac{\Lambda^n}{\epsilon^n} \end{equation} Putting this all together, we get: \begin{equation} \frac{\Delta \delta A^f}{4G}=-\frac{\Lambda^n}{\epsilon^n} \frac{2 \pi L^{3+n}}{d(d-2)} \Delta\lambda^f \end{equation} And so finally we have: \begin{equation} \label{AdSAterm} \frac{\Delta \delta A^f}{4G}= \big(\frac{f \epsilon^{2 \nu}}{1+f \epsilon^{2 \nu }} \big) \big( \frac{\Omega_n d^2 \Gamma(\frac{d-1}{2}) \Gamma(\frac{d}{2})^2 }{2^{d+3} d(d-2) \Gamma(\frac{d+3}{2}) \pi^{n}} \big) \frac{\Lambda^n}{\epsilon^n} \nu +\mathcal{O}(\nu^2) \end{equation} Which for $D=5$, again for example, is: \begin{equation} \frac{\Delta \delta A^f}{4G}=\frac{f \epsilon^{2 \nu}}{1+f \epsilon^{2 \nu}}\frac{\Lambda^2}{\epsilon^2}\frac{\nu}{60 \pi}+\mathcal{O}(\nu^2) \end{equation} Note that this is a monotonic contribution to the boundary area law. It agrees with \cite{Miyagawa:2015sql} and \cite{Sugishita:2016iel} when $f \to \infty$. Since it is of order $\nu^1$, whereas (\ref{Sbulkany}) was of order $\nu^3$, it is always the \emph{leading} contribution. So the geometric contribution leads the entropic contribution \emph{for any} $f$ (for Rindler space). So (\ref{AdSAterm}) is our prediction for the dual field theory Rindler entropy to lowest order in $\nu$. 
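The chain of steps above (cosmological-constant shift, shift of $L$, variation of the classical area) can be checked numerically against the closed-form expression $\frac{\Delta \delta A^f}{4G}=-\frac{\Lambda^n}{\epsilon^n}\frac{2\pi L^{3+n}}{d(d-2)}\Delta\lambda^f$. A small sketch (ours; $\Lambda/\epsilon$ set to 1 and $G=L=1$ for simplicity):

```python
import math

def area(L, n, ir_over_uv=1.0):
    """Classical minimal-surface area A = L^{n+1} (Lambda/epsilon)^n / n."""
    return L ** (n + 1) * ir_over_uv ** n / n

def delta_area_over_4G(lam, d, L=1.0, G=1.0, eps=1e-7):
    """delta A / 4G induced by <T_ab> = lam g_ab, computed by chaining
    delta L = -8 pi G L^3 lam / (d(d-1)) through a finite-difference dA/dL."""
    n = d - 2
    dL = -8.0 * math.pi * G * L ** 3 * lam / (d * (d - 1))
    dA = (area(L + eps, n) - area(L - eps, n)) / (2.0 * eps) * dL
    return dA / (4.0 * G)

def closed_form(lam, d, L=1.0):
    """Closed-form result: -(Lambda/eps)^n * 2 pi L^{3+n} lam / (d(d-2))."""
    n = d - 2
    return -2.0 * math.pi * L ** (3 + n) * lam / (d * (d - 2))
```

The two routes agree for $d=4,5,6$, which is a useful cross-check that the intermediate $\delta L$ formula is consistent with both the cosmological constant relation and the final area-term coefficient.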
That is, to emphasize: \begin{equation} \label{Result2} \Delta S_{CFT}^f= \frac{\Delta \delta A^f}{4G}+\mathcal{O}(\nu^2)= \big(\frac{f \epsilon^{2 \nu}}{1+f \epsilon^{2 \nu }} \big) \big( \frac{\Omega_n d^2 \Gamma(\frac{d-1}{2}) \Gamma(\frac{d}{2})^2 }{2^{d+3} d(d-2) \Gamma(\frac{d+3}{2}) \pi^{n}} \big) \frac{\Lambda^n}{\epsilon^n} \nu +\mathcal{O}(\nu^2) \end{equation} We can plot this as in Section (\ref{SM}); see Figure \ref{plotAdS}. \begin{figure}[h!] \includegraphics[width=\textwidth]{figures/PicAdS.png} \centering \caption{This plot shows $\Delta S^f_{CFT}/|\Delta S^{\infty}_{CFT}|$ vs. $f \epsilon$ to lowest order in $\nu$ for any $D \ge 4$. Notice that it increases monotonically.} \label{plotAdS} \end{figure} Higher-order contributions can be computed systematically by proceeding with point splitting and solving the bulk Einstein equations. See Appendix B for full details. The results have a structure similar to (\ref{Sbulkany}). We can of course take the $\epsilon \to 0$ limit for some choice of $\nu$, just like in the previous section, and the results are similar. \subsection{Irreversibility} The monotonicity of the interpolation of the entropy of the conformal field theory found in the previous section is reminiscent of the behavior of ``c-functions'', which capture the irreversibility of the renormalization group flow. On the other hand, our entropy apparently \emph{increases} rather than decreases, which is the opposite of what we would expect. In order to clarify this issue we need to compare with explicit results on the irreversibility of the renormalization group flow and its relation to entanglement entropy. The most general result to date is the theorem in \cite{Casini:2017vbe}, which was mentioned in the introduction. To be more explicit, let $\Delta S(r)$ be the difference in entanglement entropy between the deformed theory and the theory at the UV fixed point (just as in (\ref{entropydiff})) for a \emph{spherical} region of radius $r$. 
We can define the following quantity: \begin{equation} \tilde{\mu}(r)=\frac{S ' (r)}{(d-2)r^{d-3}} \end{equation} Where here we are thinking of $d$ as the spacetime dimension, as in the conformal field theory dual to Anti-de Sitter space of dimension $D=d+1$. We can think of this as the coefficient of the area term. The result of \cite{Casini:2017vbe} says that this $\tilde{\mu}(r)$ acts as a c-function in any dimension: \begin{equation} \label{CasiniCond} \tilde{\mu}'(r) \le 0 \end{equation} That is, it is monotonically decreasing as we go from ultraviolet to infrared with increasing spherical radius. If we think of our Rindler result as representing that of a sphere of infinite radius $r \sim \Lambda$, the result in the previous section satisfies (\ref{CasiniCond}) trivially: \begin{equation} \tilde{\mu}'(r)=0 \end{equation} This is because for any value of $f$ the Rindler entropy is in the deep infrared regime. In order to compare with irreversibility expectations nontrivially, we may do one of two things: 1. Because our result to lowest order comes entirely from the shift in the cosmological constant, it would seem like we could extend our results to lowest order in $\nu$ to a spherical entangling surface using \cite{Ryu:2006bv}. If we do this naively, the inequality (\ref{CasiniCond}) will actually be violated. This is because, as was pointed out in \cite{Sugishita:2016iel}, there is a $\nu^1$ term that appears in the entropic contribution for finite $r$ which exactly cancels the geometric contribution, at least for the $f=\infty$ case. The important term is of order $\nu^3$. This resolves the puzzle pointed out in \cite{Miyagawa:2015sql} regarding consistency of higher-order $\nu$ terms with expectations based on the $a$-theorem, but also means we cannot trivially extend our results to spherical surfaces to the necessary order of $\nu$. 
This is because past first order the backreaction on the geometry is not reducible to a shift of the cosmological constant, and we cannot e.g. guarantee the minimal surface remains the same. Nevertheless, if the interpolation found for the Rindler results extends to spherical entangling surfaces anyway, which appears likely since it appears to be inherited almost directly from the form of the Green's function, then the proof of the consistency of the proposal (\ref{FLMp}) with (\ref{CasiniCond}) in the $f \to \infty$ case found in \cite{Sugishita:2016iel} extends immediately to all $f$. So our results are highly suggestive, but \emph{not} a proof, of the consistency at finite $f$. 2. In $d=2$ dimensions ($AdS_3$) the area term is proportional to the central charge, so the interpolation would be directly interpretable. However, we were not able to obtain the entropic contribution in this case. For $f \to \infty$ it was found that a naïve extension of the procedure in Section \ref{entropiccont} gave the correct result. If this holds true for finite $f$, then the interpolation (\ref{Sbulkany}) will remain true and the consistency found in \cite{Sugishita:2016iel} will extend to finite $f$. It is also worth noting that the free energy computation (\ref{FrEnergy}) is consistent with the $c$-theorem, but this was already known in \cite{Gubser:2002zh} \cite{Gubser:2002vv}. So our results are strongly suggestive, but do not concretely prove, that the proposal (\ref{FLMp}) is consistent with irreversibility of the renormalization group at finite $f$. \section{Discussion and Conclusions} \label{SD} In this paper, we computed the entanglement entropy for mixed boundary conditions in half Minkowski space and in the context of the AdS/CFT correspondence. In both cases, the result was a monotonic interpolation between conformal fixed points as a function of a dimensionless combination of the boundary coupling and the cutoff. 
In the case of half Minkowski space, our results build on earlier results and are illustrative of the generalization of the $g$-theorem to higher dimensions found in \cite{Casini:2018nym}. In Anti-de Sitter space our results fill in the interpolation between the Neumann and Dirichlet theories already computed in \cite{Miyagawa:2015sql} and \cite{Sugishita:2016iel}, and offer an opportunity for a consistency check of the proposal for $1/N$ corrections to the holographic entanglement entropy formula. We have also commented on the non-analyticity of the entropy difference at $f=0$ which was observed in \cite{Berthiere:2016ott}, where it was suggested that it may be indicative of a phase transition. Indeed, a tachyon appears precisely at this point. The theory would need to be embedded in a larger theory to determine the new phase, since our free theory simply becomes divergent. It may be possible to extend the AdS/CFT result to higher order using the recent proposal \cite{Engelhardt:2014gca} \cite{Dong:2017xht}: \begin{equation} S_{CFT}=\mathrm{ext}\Big[\frac{ \langle A \rangle }{4G}+S_{AdS}\Big] \end{equation} But it is not entirely clear how to make sense of the gravitational backreaction beyond 1-loop, since gravity is not renormalizable and since the next order would inevitably involve quantum gravitational interactions with the matter field $\phi$ (for us the backreaction involved classical gravity responding to the 1-loop stress tensor, which is a tadpole diagram). It would be interesting to extend the results to higher spin. For spin 1, there is the additional possibility of topological terms, which are supposedly probed by the entanglement entropy. Combined with mixed boundary conditions, there is a full $SL(2,\mathbb{Z})$ space of theories (see \cite{Cottrell:2017gkb} for a summary and elaboration on the possibilities) which can be explored, and it would be interesting to see how the entropy transforms under this group. 
For spin 2, it has been suggested \cite{Cottrell:2018skz} that mixed boundary conditions give rise to quantum gravity on the boundary. In this case it is not even clear what the analogue of the formula (\ref{FLMp}) would be, making it especially interesting, though perhaps problematic. As mentioned, the results confirm expectations based on the irreversibility of the renormalization group flow. As mentioned in \cite{Casini:2018nym}, it would be desirable to extend these results as much as possible, for example to boundaries of different codimension. The fact that the entropy difference (\ref{entropydiff}) is so easily computable in this example may make a comparison with the entropy bounds \cite{Casini:2008cr} \cite{Bousso:2014sda} interesting. The Ryu-Takayanagi formula has been used to derive the linearized Einstein equations in the bulk from the ``first law of entanglement entropy'' on the boundary (see e.g. \cite{Lashkari:2013koa} \cite{Faulkner:2013ica} \cite{Faulkner:2017tkh}). Extending these results to include quantum corrections is important, and the example in this paper may provide an interesting test case for exploratory purposes. The solubility of this model may be useful in general for exploring holographic renormalization group flows generated by $\frac{1}{N}$ suppressed effects.
\section{Introduction} Despite promising results in real-world applications, deep neural networks (DNN) contain hundreds of millions or even hundreds of billions of parameters, making them impossible to deploy on mobile devices. A significant amount of research effort has been made to compress DNN models. Existing model compression algorithms can be generally divided into quantization algorithms \cite{zhou2016dorefa,esser2019lsq,li2016twn,hubara2016binarized,courbariaux2015binaryconnect} and pruning algorithms \cite{han2015deep,molchanov2017variational,srinivas2015data,li2016pruning,hu2016network}. Although pruning and quantization each achieves promising results, few efforts \cite{han2015deep,zhu2016ttq,tung2018clip,ye2018unified} have been made to combine the two. In this paper, we propose a compression method that unifies the pruning and ternary quantization operations on a unit $n$-sphere to create highly compressed ternary weights (30$\times$ to 48$\times$ on ResNet models). We construct special ternary orthonormal bases for the convolutional and linear layers by using weight normalization \cite{salimans2016weight,Liu2021OPT}, pruning, and the Straight-Through Estimator (STE)~\cite{bengio2013estimating}. Inspired by weight normalization \cite{salimans2016weight,wang2019orthogonal} and $n$-sphere projection \cite{Liu2021OPT,deng2019arcface,liu2017sphereface}, PTQ converts the model weights to orthonormal bases and can quantize the linear, convolutional, and 1x1 convolutional layers \cite{lin2013network}. This is achieved by pruning and quantizing the orthonormal model weights (Figure \ref{fig3}). Our hypothesis and strategies of pruning orthonormal bases for quantization (see Section 3) are novel in the field of model compression. The pruning boosts the orthogonality of model weights and encourages them to converge near ternary orthonormal bases on a unit $n$-sphere (see Section 3). 
Finally, we introduce a learned pruning threshold and a refined STE to guide the optimization and finalize the ternary weights. Unlike other works \cite{wu2016quantized,stock2019killthebits} that learn codewords, we simply use three continuous ternary weight values (with zero-padding) as fixed codewords. Besides, unlike pruning \cite{han2015deep} and clustering \cite{stock2019killthebits}, which only reduce the disk footprint, PTQ can compress the model memory footprint to 16$\times$ smaller. Our method compresses ResNet-18 (46 MB) to 1.1 MB and ResNet-50 (99 MB) to 3.3 MB. In the meantime, the top-1 validation accuracy on ImageNet only drops about 3\%, reaching 66.2\% on ResNet-18 (original acc. 69.7\%) and 74.47\% on ResNet-50 (original acc. 76.15\%). Our results are 5\% higher than the leading results \cite{stock2019killthebits}, i.e., a ResNet-18 14-bit model with 1.03 MB and 61.18\% accuracy, or a ResNet-50 14-bit model with 3.19 MB and 68.21\% accuracy. Our method is compatible with full-precision and 2-bit activations, e.g., PACT \cite{choi2018pact} and LSQ \cite{esser2019lsq}. With the full-precision linear layer kept, which is the most common practice for quantization, our method demonstrates accuracy comparable to the state-of-the-art results, while the compression ratio is doubled or even tripled. Our contributions are listed as follows: \begin{itemize} \item We propose PTQ, a novel ternary construction method that unifies the pruning and quantization operations to enhance the efficiency of both. \item We show that the proposed PTQ method achieves state-of-the-art model compression results on ResNet. We also propose a hypothesis as a new perspective on model compression to explain our findings. 
\end{itemize} \section{Related Work} A notable amount of research effort has been devoted to model compression methods that reduce model size and accelerate inference, such as quantization~\cite{courbariaux2015binaryconnect,rastegari2016xnor,zhang2018lqnet} and pruning~\cite{han2015deep,li2016pruning}. However, only a few works combine pruning and quantization for model compression \cite{han2015deep,zhu2016ttq,tung2018clip,ye2018unified}. \subsection{Quantization} Quantization methods aim to reassign the full-precision weights to the closest quantization points. Existing quantization methods can be categorized into uniform quantization and nonuniform quantization. Uniform quantization \cite{zhou2016dorefa} sets up all the quantization points evenly. Hyper-parameters or learned parameters are further introduced to control the quantization procedure \cite{zhou2016dorefa,bai2018proxquant}. Nonuniform quantization applies different quantization intervals across the value space. For example, the fast shift-based multiplication operation is used in powers-of-two quantization interval methods \cite{miyashita2016convolutional,zhou2017incremental,li2019apot}, with higher partition resolution around the mean of the weight distribution. In the extreme cases, binary and ternary quantization ($\{-1, 1\}$ and $\{-1, 0, 1\}$) are introduced \cite{hubara2016binarized,rastegari2016xnor} to reduce both weights and activations to very low bit-widths. The convolution operations can be further accelerated by bit-wise operations \cite{courbariaux2015binaryconnect}, however, at the cost of a significant accuracy drop \cite{wang2014learning,lin2015neural}. \subsection{Pruning} Pruning can be categorized into unstructured and structured pruning. Han et al. \cite{han2015deep,han2015learning} propose magnitude-based pruning to prune network weights with small magnitude, and combine pruning with quantization to obtain highly compressed models. 
Many other magnitude-based pruning methods have been proposed \cite{srinivas2015data,molchanov2017variational}. However, these unstructured pruning methods cannot deliver compression and acceleration benefits without customized hardware. Structured pruning methods, such as channel-wise pruning \cite{li2016pruning,hu2016network,alvarez2017compression} and layer-wise pruning \cite{chin2018layer,dong2017learning}, do not have these drawbacks, as they keep the original weight structures. Only a few efforts have been dedicated to the integration of quantization and pruning~\cite{han2015deep,zhu2016ttq,tung2018clip,ye2018unified}, and most of them have not yet considered the potential conflicts between the pruning and quantization operations. \subsection{N-Sphere Optimization} The use of the $n$-sphere in optimization methods benefits both training efficiency and performance. Several works \cite{Liu2021OPT,salimans2016weight,wang2019orthogonal} show that weight normalization or similar methods \cite{bansal2018can_orthogonal,harandi2016generalized}, which optimize weights on an $n$-sphere surface, can significantly improve convergence \cite{deng2019arcface,liu2017sphereface}. Such methods are designed to penalize the disparity between the identity matrix and the Gram matrix of the layer weights \cite{balestriero2020mad,xie2017all} or gradients \cite{salimans2016weight,harandi2016generalized,ozay2016optimization_norm_cnn}, so as to enhance orthogonality and ultimately increase the observed Fisher information (or improve the condition number/eigenvalues of the Hessian matrix). This benefits model optimization \cite{amari1997neural,martens2010deep,sutskever2013importance} and model quantization \cite{dong2019hawq}.
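The orthogonality penalty referred to above can be sketched numerically (a hedged illustration with made-up names, not any package's API): project each output channel (row) onto the unit n-sphere and measure how far the Gram matrix is from the identity.

```python
import numpy as np

def orthogonality_gap(W):
    """Frobenius norm of (Wn Wn^T - I), where rows of W are projected onto a unit n-sphere."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)  # channel-wise L2 normalization
    gram = Wn @ Wn.T
    return float(np.linalg.norm(gram - np.eye(W.shape[0])))

# Scaled orthogonal rows have zero gap; identical rows are maximally non-orthogonal.
print(orthogonality_gap(np.eye(4) * 3.0))  # 0.0
print(orthogonality_gap(np.ones((4, 4))))  # sqrt(12) ~ 3.46
```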
\section{Preliminaries and Hypothesis} \subsection{Preliminaries} \label{S3.1} \subsubsection{Ternary Quantization} To convert a matrix $W$ into a ternary representation, we apply the ternary operation $Sign(.)$ to each element $w \in W$: \begin{equation} \label{eq_sign} Sign(w)=\left\{\begin{array}{ll} 1, & \textrm{where}\: w> 0, \\ 0, & \textrm{where}\: w=0, \\ -1, & \textrm{where}\: w<0. \end{array}\right. \end{equation} \subsubsection{Pruning} The pruning operation is denoted by: \begin{equation} W=Prune(W,t), \end{equation} where $t$ is either a percentage or a threshold. \subsubsection{N-sphere Projection} Throughout this paper, the $L2$ normalization operation $Norm(.)$ on a vector $\vec{u}$ is defined as: \begin{equation} \label{eq_norm} Norm(\vec{u})=\frac{\vec{u}}{||\vec{u}||}. \end{equation} After $L2$ normalization, $\vec{u}$ is projected onto a unit n-sphere. We consider a general representation of neural network layers: \begin{equation} y=\phi (W\cdot x), \label{eq-ywxb} \end{equation} where $y$ denotes the scalar output of a layer, $x$ is an n-dimensional vector of input features to the layer, $W$ is an n-dimensional weight matrix, and $\phi(.)$ denotes a nonlinear activation function such as ReLU. Based on the properties of the dot product, Equation (\ref{eq-ywxb}) can be rewritten as: \begin{equation} y_{i}=\phi( x_i\cdot W_i ) = \phi(||x_i|| \; ||W_i||\cos\theta_{i}), \label{eq4} \end{equation} where $i$ denotes the layer index, $W_i=[w_{i0}, w_{i1}, ... w_{ij}]$, $j$ denotes the $j$-th output channel of $W_i$, and $w_{ij}$ denotes the corresponding weight vector. Further, let $\cos\theta_i$ denote the cosine of the angle between $w_{ij}$ and $x_i$. After projecting $x_i$ and $W_i$ onto a unit n-sphere, Equation (\ref{eq4}) becomes: \begin{equation} y_{i}= \phi(Norm(x_i) \cdot Norm(W_i))=\phi(\cos\theta_{i}). \label{eq_n_proj} \end{equation} \subsection{Hypothesis} Convolution and linear layers can be seen as dot-product-based operations.
The result of a dot product is determined by the vector magnitudes and directions. Several works \cite{salimans2016weight,bansal2018can_orthogonal,rodriguez2016regularizing_norm_cnn,jia2017improving_norm_cnn,harandi2016generalized,ozay2016optimization_norm_cnn,huang2018orthogonal} show that improving the orthogonality of the gradient can increase the training speed or boost the model performance. Adding orthogonal constraints to the weights \cite{balestriero2020mad,xie2017all,wang2019orthogonal} has a similar effect. Therefore, we make the following deduction: \textbf{For a converged model, the weight covariance matrix of each layer is close to the identity matrix, i.e., the output channels approximate an orthogonal basis. Before model convergence, model orthogonality is positively correlated with its performance \cite{bansal2018can_orthogonal}.} Adding orthogonal constraints can be interpreted as projecting the weights onto a unit n-sphere and forcing the Gram matrix to move close to the identity matrix, i.e., $||WW^T-I||=0$. According to Equation (\ref{eq_n_proj}), the impact of weight magnitude on the layer outputs vanishes, and the projected layers only need vector direction information to perform the affine transformation. Eliminating smaller values of the vectors will not significantly affect the layer outputs, because the normalized dot product result $\cos\theta$ in Equation (\ref{eq_n_proj}) is mostly determined by the larger vector values. In particular, the $i$-th layer outputs will be projected onto a unit n-sphere in the next layer, which further neutralizes the impact of such elimination. Based on the above deduction, once the projected unit n-sphere model has converged, the bases of its weight matrices are nearly orthonormal. Pruning the smaller weight values will increase the sparsity of the weight covariance matrix and thus boost the weight orthogonality.
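The claim that eliminating smaller vector values barely changes the normalized dot product can be checked numerically; a minimal sketch with made-up values:

```python
import numpy as np

def norm(v):
    return v / np.linalg.norm(v)

# A weight vector dominated by a few large entries, and a fixed input direction.
w = np.array([0.9, -0.85, 0.04, 0.03, -0.02, 0.05])
x = norm(np.array([1.0, -1.0, 0.2, 0.1, 0.3, -0.2]))

cos_before = float(norm(w) @ x)

w_pruned = w.copy()
w_pruned[np.abs(w_pruned) < 0.1] = 0.0  # drop the small entries
cos_after = float(norm(w_pruned) @ x)

print(cos_before, cos_after)  # nearly identical
```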
We propose the following hypothesis: \textbf{ Due to the elimination of magnitude impact, pruning orthonormal-basis models on a unit n-sphere increases both model sparsity and orthogonality and therefore improves the performance.} According to our hypothesis, pruning can improve the performance without changing the main direction of the model weights. We can reset the weight direction of a pruned model to a direction that consists of ternary values $\{-1,0, 1\}$ and apply pruning again to encourage the model to converge. The converged model weights will then approximate ternary weights. Finally, applying the STE finalizes the ternary quantization. Our experimental results support this hypothesis. \subsubsection{Intuitive Example} In a hyperspherical coordinate system (Figure \ref{fig3}), each normalized output channel $w_{ij}$ of $W_i$ can be seen as a point $P_s$ on a unit $n$-sphere surface. The number and the indices of the nonzero elements of $w_{ij}$ determine its direction, i.e., the position of $P_s$ on the $n$-sphere. The ternary weight $W^{ter}$ consists of a group of special output channel vectors $w^{ter}_{ij}$ that contain only values in $\{-1,0, 1\}$ and can be projected to a point $T_s$ on the surface of the same unit $n$-sphere.
Therefore, converting the regular weight vector $w_{ij}$ to the ternary weight vector $w^{ter}_{ij}$ is equivalent to moving $P_s$ on the $n$-sphere surface to the ternary position $T_s$ (Figure \ref{fig3}): \begin{equation*} \begin{aligned} \because P_s=Norm(P)=[0.66, \textbf{0.33}, 0.66], \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} \because Norm(Prune(P_s))=Norm([0.66, \textbf{0}, 0.66]) \\ =[0.71, \textbf{0}, 0.71], \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} \because T_s=Norm(T)=[0.71, \textbf{0}, 0.71], \end{aligned} \end{equation*} \begin{equation} \label{eq6} \therefore T_s=Norm(Prune(P_s)). \end{equation} In short, it is possible to obtain ternary weights using only normalization and pruning (Eq.~\ref{eq6}). The direction property of $w_{ij}$ is the key to converting regular weights to ternary weights. The goal of such a conversion is threefold: i) prune the normalized weight and convert the pruned weight to ternary $W^{ter}=Sign(W)$; ii) normalize each output channel vector $w^{ter}_{ij}$ of $W^{ter}$, i.e., $||w^{ter}_{ij}||=1$, in the forward pass; iii) apply the STE to update $W$ in the backward pass in order to obtain $W^{ter}$. \begin{figure}[t] \centering \includegraphics[width=0.6\columnwidth]{figures/sphere_new.png} \caption{Regular weight vector $P$ and ternary vector $T$ in 3D space. $P_s$ and $T_s$ represent the normalized vectors. The regular weight vector $P$ is projected onto the surface of the sphere. Once the model has converged, pruning and $L2$ normalization are applied to eliminate the smaller value (0.33) of $P_s$ so as to obtain the normalized ternary weight $T_s$ (Equation \ref{eq6}). This procedure is the same as a regular pruning operation that keeps larger values and eliminates smaller values.} \label{fig3} \end{figure} \section{Methodology} The overview of our proposed method is shown in Algorithm \ref{alg:training}.
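The worked example above (Equation \ref{eq6}) can be verified numerically; a minimal sketch (illustrative values only):

```python
import numpy as np

def norm(v):
    return v / np.linalg.norm(v)

P = np.array([2.0, 1.0, 2.0])  # regular weight vector, direction ~ [0.66, 0.33, 0.66]
T = np.array([1.0, 0.0, 1.0])  # ternary target direction

P_s = norm(P)                  # [0.666..., 0.333..., 0.666...]
pruned = P_s.copy()
pruned[np.abs(pruned) < 0.5] = 0.0  # drop the smallest component
T_s = norm(T)                  # [0.707..., 0, 0.707...]

# Norm(Prune(P_s)) lands exactly on the ternary direction T_s.
assert np.allclose(norm(pruned), T_s)
```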
Firstly, the layer weights are projected onto a unit $n$-sphere surface to obtain a magnitude-irrelevant orthonormal-basis model. Secondly, smaller values of the projected weights are removed by progressive weight pruning. After each pruning, the weights are reset to $\{-1,0, 1\}$ based on their signs. Thirdly, a refined STE is introduced to quantize the projected weights. Meanwhile, the pruning and quantization operations are constrained to a unit $n$-sphere. Finally, the ternary weights are encoded with a fixed codebook. \begin{algorithm} \caption{PTQ training approach} \label{alg:training} \begin{algorithmic} \STATE \textbf{Input:} Current mini-batch images $X$; pretrained full-precision weights $W$ of all layers; pruning ratio $r$, which increases with the epoch, $r \in [0.4, 0.8]$; learned pruning threshold $t$. \STATE \textbf{Result:} Quantized ternary networks for inference \item[] \STATE\textbf{Ternary Guided Pruning:} \FOR{$epoch \textrm{ in range}(Total\_Epochs)$} \IF{$epoch$\%30==0} \STATE { $W=Prune(W,r)$ \\ $W=Sign(W)$\\ } \ENDIF \STATE{ $y=\phi (Norm(W)\cdot Norm(X))$ \\ Calculate $\frac{\partial L({w})}{\partial {w}}$ and update $W$ } \ENDFOR \item[] \STATE\textbf{Ternary Quantization:} \FOR{$epoch \textrm{ in range}(Total\_Epochs)$} \STATE{ $W=Prune(W,t)$ \\ $W^{ter}=Sign(W)$ \\ $y=\phi (Norm(W^{ter})\cdot Norm(X))$ \\ Calculate $\frac{\partial L({w})}{\partial {w}} $ via the refined STE, Equation (\ref{eq9}) } \STATE{ Update $W$ } \ENDFOR \end{algorithmic} \end{algorithm} \label{S3.2} \subsection{Ternary Guided Pruning} \label{Ternary_Guided_Pruning} In this part, we explain how to approximate ternary weights by periodically pruning and resetting (ternarizing) the model. The pretrained model is first projected onto a unit $n$-sphere by applying $L2$ normalization to the inputs and weights, which gives $||x_i||=1$ and $||w_{ij}||=1$, i.e., a unit-magnitude model (Equation (\ref{eq_n_proj})).
After the projection, the main direction (larger values) of the model weights determines the capability of the model. The work of \cite{liu2017sphereface,deng2019arcface} shows that unit-magnitude deep models, which depend solely on weight direction, can perform as well as regular ones. According to our deduction in Section 3.2, the converged model weights approximate orthonormal bases. Suppose the ternary weights exist in a subspace of the model weights. To approximate such ternary weights, we simply prune the model to a sparse state and then reset the weight values to their signs: \begin{equation} W=Sign(Prune(W,r)). \end{equation} Instead of assigning a fixed pruning ratio, we use progressive unstructured pruning \cite{han2015deep} with a ratio $r \in [0.4, 0.8]$ to remove smaller values in $W$. The model is further finetuned by repeating this process until the resetting stops compromising the model performance, meaning that the orthonormal bases are close to a ternary direction. \subsection{Ternary Quantization on $n$-Sphere} According to our deduction and hypothesis (Section 3.2), we unify pruning and quantization with the STE on a unit n-sphere to boost the orthogonality of the model weights and thus improve the quality of the produced ternary weights. However, unlike in the first stage (Section 4.1), it is hard to define a proper pruning ratio to optimize the weight orthogonality. We therefore introduce a learned threshold for each layer to adjust the weight orthogonality through the pruning operation (Algorithm \ref{alg:training}). We also propose an approach to refine the gradient of the STE during the backward pass to adapt to the n-sphere projection. To further verify our hypothesis, we combine activation quantization, e.g., PACT \cite{choi2018pact} and LSQ \cite{esser2019lsq}, with our work.
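The prune-and-reset step $W=Sign(Prune(W,r))$ of the first stage can be sketched as follows (a simplified NumPy illustration, not the actual training code):

```python
import numpy as np

def prune_and_reset(W, ratio):
    """One prune-reset step: zero the smallest-magnitude fraction, then take signs."""
    flat = np.abs(W).flatten()
    k = int(flat.size * ratio)
    thresh = np.sort(flat)[k - 1] if k > 0 else -np.inf
    W = np.where(np.abs(W) <= thresh, 0.0, W)
    return np.sign(W)  # surviving weights become {-1, +1}; pruned entries stay 0

W = np.array([[0.7, -0.05, 0.3], [-0.9, 0.02, -0.4]])
print(prune_and_reset(W, ratio=0.4))  # ternary matrix with two zeros
```

In the full procedure this step is followed by finetuning on the n-sphere before the next prune-reset round.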
\subsubsection{Refined Straight-Through Estimator} In this work, the STE \cite{bengio2013estimating} is adopted to update the weights as follows: \begin{equation} \textbf { Forward: } {W^{ter}} = Norm(Sign({Prune(W,t)})), \end{equation} \begin{equation} \label{eq9} \textbf {Backward:} \frac{\partial L({\mathbf{w}})}{\partial {\mathbf{w}}}\underset{S \bar{T} E}{\approx} \frac{\partial L\left({\mathbf{w}}^{ter}\right)}{\partial {\mathbf{w}}^{ter}}*M, \end{equation} where $M=(I-\frac{\mathbf{w^{ter}} \mathbf{w^{ter}}^{T}}{\|\mathbf{w^{ter}}\|^{2}})/{\|\mathbf{w^{ter}}\|}$ \cite{salimans2016weight}, $t$ is a learned variable, $L$ denotes the objective function, $Sign(.)$ returns ternary values $\{-1,0, 1\}$, and $Norm(.)$ denotes channel-wise $L2$ normalization, i.e., projecting the weight $W$ onto a unit $n$-sphere. The term $M$ is common in orthogonality optimization works \cite{salimans2016weight,harandi2016generalized,ozay2016optimization_norm_cnn,amari1997neural,martens2010deep,sutskever2013importance} and is derived from the $L2$ normalization operation \cite{salimans2016weight}. In our method, it plays a role similar to pruning in enhancing the weight orthogonality. \subsubsection{Low-bit Activation Quantization} We combine our proposed method with activation quantization methods to examine its performance in convolution layers. The uniform activation quantization operation of PACT \cite{choi2018pact} is defined as: \begin{equation} y_{q}=\operatorname{round}\left(y \cdot \frac{2^{k}-1}{\alpha}\right) \cdot \frac{\alpha}{2^{k}-1} \end{equation} where $\alpha$ is a clipping threshold and the activations $y$ are clipped into $[0, \alpha]$. After clipping, each element of $y$ is projected onto one of the quantization levels. $y_q$ denotes the quantized activations, and $k$ is the quantization bit width.
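A minimal sketch of the PACT-style uniform quantizer above (illustrative code, not the reference implementation):

```python
import numpy as np

def pact_quantize(y, alpha, k):
    """k-bit uniform activation quantization with clipping threshold alpha (PACT-style)."""
    y_clipped = np.clip(y, 0.0, alpha)        # clip activations into [0, alpha]
    levels = 2 ** k - 1
    return np.round(y_clipped * levels / alpha) * alpha / levels

y = np.array([-0.3, 0.1, 0.5, 1.7])
print(pact_quantize(y, alpha=1.0, k=2))  # levels at multiples of 1/3
```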
The LSQ \cite{esser2019lsq} activation quantization is: \begin{equation} y_{q}=round(\operatorname{clip}(y / s,0, 2^{k-1}-1)), \end{equation} where $s$ is a learned quantizer step size and $k$ is the quantization bit width. \subsection{The Fixed Codebook} The ternary values $\{-1,0, 1\}$ are ideal components of a codebook for model compression. In our work, we take every three consecutive ternary weight values as a codeword. For the downsampling and linear layers, we pad zeros to match the length of the codewords. The codebook contains $27$ codewords in total (5-bit indices). Our method needs only 15 bits (three codewords) to store each 3$\times$3 convolution kernel. In contrast, previous work, e.g., \cite{stock2019killthebits}, uses floating-point numbers as codewords, so the memory footprint is not reduced during convolution operations. Since the minimum bit width of a number is 8 bits on common operating systems, we use external compression methods \cite{RFC1952,pavlov_lzma} to reduce the model disk footprint. \section{Experiments} The experiments measure two metrics, namely the classification accuracy and the size of the ternary model. We use A$n$ and W$n$ to represent the bit widths of the activations and weights, e.g., A2/W32 denotes 2-bit activations and 32-bit (full-precision) weights. The pretrained models are provided by PyTorch. Firstly, we evaluate our method on different ResNet \cite{he2016deep} architectures, i.e., ResNet-20 for CIFAR-10 \cite{krizhevsky2009learning} and ResNet-18/50 for the ImageNet ILSVRC12 dataset \cite{ILSVRC15}. Secondly, we compare our model size with that of various other quantization approaches. Lastly, we demonstrate the impact of weight resetting (Section \ref{Ternary_Guided_Pruning}) during training. \textbf{Layer quantization}. Following the best practices of popular quantization methods \cite{choi2018pact,esser2019lsq}, we leave the first and the last fully-connected layers at half-precision (16-bit) in the classification experiments.
For the 2-bit activation models, we first train the low-bit activation model with full-precision weights, and then convert the full-precision weights to ternary weights. We compare our method with other representative low-bit quantization methods: Ternary Weight Networks (TWN) \cite{li2016ternary}, Trained Ternary Quantization (TTQ) \cite{zhu2016trained}, Learned Step Size Quantization (LSQ) \cite{esser2019lsq}, ADMM \cite{leng2018extremelyadmm}, PACT \cite{choi2018pact}, and DoReFa \cite{zhou2016dorefa}. We use the cosine annealing schedule \cite{loshchilov2016sgdr} to adjust the learning rates. \textbf{Model compression}. The convolution and linear layers are pruned and quantized together. The last linear layer and the downsampling layers are ternary as well. We compare our method with ABC-Net \cite{lin2017abcnet}, Deep Compression (DC) \cite{han2015deep}, Hardware-Aware Automated Quantization (HAQ) \cite{wang2019haq}, Hessian AWare Quantization (HAWQ) \cite{dong2019hawq}, ABGD \cite{stock2019killthebits}, LR-Net \cite{shayer2017learning}, and BWN \cite{rastegari2016xnor}. \subsection{Image Classification} \subsubsection{CIFAR-10} The learning rate is set to 9e-3, the number of epochs is 100, and the batch size is 128. Table \ref{baselinemodel} shows the overall experimental results on CIFAR-10 with ResNet-20. To examine our weight quantization method, we use the 2-bit uniform quantization method PACT \cite{choi2018pact} to quantize the activations. The full-precision model is trained from scratch. Comparison with the other methods shows similar performance.
\begin{table}[H] \centering \scalebox{0.8} { \begin{tabular}{llll} \hline Model &Methods &A32/W2 &A2/W2 \\ \hline \multirow{6}{*}{ResNet-20} & TWN &\bf{92.56} &- \\ &TTQ &91.13 &- \\ &LQ-Net &91.8 &90.2 \\ &DoReFa &-&88.2\\ &PACT &-&90.63\\ &\bf{PTQ (Ours)} & 92.23&\bf{90.85}\\ \hline \end{tabular}} \caption{Experimental accuracy (\%) on CIFAR-10 with ResNet-20.} \vspace{-1em} \label{baselinemodel} \end{table} \subsubsection{ImageNet-ILSVRC12} For the ImageNet dataset, we use ResNet-18 and ResNet-50 to evaluate the proposed ternary weight quantization approach. The full-precision model is initialized from the weights provided by PyTorch. The batch size is 256, the L2 weight decay is 0.0001, and the momentum is 0.9. \begin{table}[H] \centering \scalebox{0.8} { \begin{tabular}{lllll} \hline Model &Methods &A32/W2 &A2/W2 &Size$^\dagger$\\ \hline \multirow{7}{*}{ResNet-18} & TWN &61.8 &- &- \\ &TTQ &66.6 &- &- \\ &ADMM &67.0 &- &-\\ &LQ-Net &68.0 &64.9&6 MB \\ &LSQ* &- &64.7 &- \\ &APoT* &- &64.7 & 5 MB\\ &DoReFa &-&62.6 &5.2 MB\\ &PACT &-&64.4 &-\\ &\textbf{PTQ* (Ours)} & \bf{68.0} & \bf{65.8} &\bf{3 MB} \\ \hline \multirow{6}{*}{ResNet-50} & TWN &72.5 &- &- \\ &ADMM &72.5 &- &- \\ &LQ-Net &75.1 &71.5 &13.8 MB \\ &LSQ* &- &71.2 &- \\ &DoReFa &- &67.1&- \\ &PACT &-&72.2&- \\ &\textbf{PTQ* (Ours)} & \bf{75.19} & \bf{71.6} & \bf{4.8 MB} \\ \hline \end{tabular}} \caption{Accuracy (\%) on the ImageNet validation set using different bit widths with ResNet-18 and ResNet-50. W2 denotes 2-bit weights and A32 denotes 32-bit activations, etc. "$\dagger$" indicates the gzip-compressed size of the A2/W2 models without quantization of the first and last fully-connected layers. "*" denotes ternary weights. We do not compare our method with other partial quantization methods, e.g., full-precision shortcuts \cite{choi2018bridging} and mixed-precision weights \cite{dong2019hawq}.
} \vspace{-1em} \label{imgnet} \end{table} The quantization results on ImageNet dataset (Table \ref{imgnet}) show that the proposed method achieves similar validation accuracy as representative quantization methods. However, it produces much smaller models (3MB out of 46MB for ResNet-18 and 4.8MB out of 99MB for ResNet-50). The model in Table \ref{imgnet} includes the first and the last fully-connected layers that are not quantized. Part of the size and accuracy results are obtained from the work of Zhuang \cite{modelquantization} and Chen \cite{chen2020fatnn}. \subsection{Model Compression} \begin{table}[!hbt] \centering \scalebox{0.8} { \begin{tabular}{lllll} \hline Methods &Acc. &Ratio &Size\\ \hline ABC(M=5, A5/W5) &65.0 &6 &- \\ ABC(M=3, A3/W3) &61.0 &10 &- \\ LR-Net(A32/W2) &63.5 &16 &-\\ BWN(A32/W1) &60.8 &32&- \\ ABGD-small(A32/W14) &65.8 &28 &1.6 MB \\ ABGD-large (A32/W15) &\textbf{61.1} &\textbf{43} & 1.03 MB\\ \hline \textbf{PTQ-L (A16/W2) (Ours)} &\textbf{67.0}&35 &1.3 MB\\ \textbf{PTQ-S (A16/W2) (Ours)} &\textbf{66.2}&42 &1.1 MB\\ \textbf{PTQ-ES (A16/W2) (Ours)} & \textbf{65.3} & \textbf{48} &\textbf{955 KB} \\ \hline \end{tabular}} \caption{ Model compression results of ResNet-18 on ImageNet dataset. We compare our approach with representative methods \cite{stock2019killthebits,lin2017abcnet,rastegari2016xnor,shayer2017learning}. PTQ achieves higher accuracy and compression ratio than the leading method, i.e., ABGD \cite{stock2019killthebits}. "-L, -S, -ES" denotes large, small, extra small, respectively. } \vspace{-1em} \label{mode_size_18} \end{table} \begin{table}[!hbt] \centering \scalebox{0.8} { \begin{tabular}{lllll} \hline Methods &Acc. 
&Ratio &Size\\ \hline HAQ (A/W:2-8) &70.6 &16 &6.3 MB \\ DC (A32/W2) &68.9 &16 &6.3 MB \\ DC (A32/W3) &75.1 &10 &9.3 MB\\ DC (A32/W4) &76.1 &8&12.4 MB \\ ABGD-small (A32/W14) &73.7 &20 &5 MB \\ ABGD-large (A32/W15) &68.2 &\textbf{32} & \textbf{3.1 MB}\\ HAWQ (A2/W4) &75.4&12 &7.9 MB\\ \hline \textbf{PTQ (A16/W2) (Ours)} &\textbf{74.4}&\textbf{32} &\textbf{3.1 MB}\\ \textbf{PTQ (A16/W2) (Ours)} & \textbf{73.8} & \textbf{36} &\textbf{2.7 MB} \\ \hline \end{tabular}} \caption{ Model compression results of ResNet-50 on the ImageNet dataset. Compared to works with more than a 20$\times$ compression ratio, PTQ obtains the highest top-1 accuracy of 74.4\% and a disk footprint as small as 2.7 MB. } \vspace{-1em} \label{mode_size_50} \end{table} \begin{figure*} \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/sp_r18.pdf} \caption{ResNet18 on ImageNet.} \label{fig:y equals x} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/sp_r50.pdf} \caption{ResNet50 on ImageNet.} \label{fig:three sin x} \end{subfigure} \caption{The compression ratio and accuracy of ResNet-18/50 on the ImageNet dataset. "-L, -S, -ES" denotes large, small, and extra small. "$*$" denotes the 7-zip \cite{pavlov_lzma} compressed size. "$ ^\wedge $" denotes that the model has a full-precision last fully-connected layer. Our PTQ (SQ) method produces 2-bit (A2) and full-precision (A32) ternary (W2) models.} \label{img:modelsize} \end{figure*} In this part, we show the model compression results of the ResNet models on the ImageNet dataset. Owing to the pruning operation, PTQ can provide a wide range of size-accuracy selections. For the ResNet-18 model, our method significantly outperforms existing works (Figure \ref{img:modelsize}). For example, with a 1 MB disk-footprint budget, the accuracy of the PTQ model is 3\% higher than that of ABGD-large \cite{stock2019killthebits}.
When the compression ratio is higher than 25$\times$, PTQ achieves an accuracy of 67\%, which is 2\% higher than the ABGD-small \cite{stock2019killthebits} model while the model size is smaller (Table \ref{mode_size_18}). Moreover, PTQ can compress a ResNet-18 model by 48$\times$ with an acceptable accuracy of 65.36\%. For ResNet-50, we compare our approach with recent methods, e.g., HAWQ \cite{dong2019hawq} and ABGD \cite{stock2019killthebits}. Table \ref{mode_size_50} and Figure \ref{img:modelsize} show that PTQ achieves both a higher compression ratio and higher accuracy. \subsection{Sparsity and Accuracy} \begin{figure*} \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/stage_1.pdf} \caption{Pruning the orthonormal-basis model.} \label{fig:y equals x} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/stage_2.pdf} \caption{Quantization with a learned pruning threshold.} \label{fig:three sin x} \end{subfigure} \caption{The training trend lines of model accuracy and sparsity for ResNet-18 on ImageNet. $(a)$ shows the first stage, i.e., ternary guided pruning on an orthonormal-basis model. $(b)$ shows the result of quantization combined with pruning and the STE (Equation \ref{eq9}). The line in $(b)$ with only 100 epochs shows the result of not applying the resetting operation; it indicates that resetting benefits the quantization optimization.} \label{img:trend} \end{figure*} To study the impact of pruning and to verify our hypothesis, we plot the training trend lines of the ResNet-18 model on ImageNet (Figure \ref{img:trend}). The results show that PTQ can produce more than 80\% sparsity while maintaining an accuracy above 65\% and up to a 48$\times$ compression ratio. The trend line (Figure \ref{img:trend}$a$) indicates that pruning increases the model sparsity before each resetting operation.
After four pruning-resetting operations, the accuracy near the lower bound rises from 60\% to 65\%. According to our hypothesis, the model weights are then close to ternary orthonormal bases. At the beginning of the quantization stage (Figure \ref{img:trend}$b$), we can see that the unstable learned pruning thresholds lead to accuracy fluctuations. Note that the two gaps in Figure \ref{img:trend}$b$ are caused by training interruptions. During training, the accuracy increases from 50\% to 67\% and the sparsity grows from 40\% to 75\%. The accuracy starts to drop if we keep increasing the model sparsity; similar findings are reported in \cite{zhu2016ttq}. When the sparsity reaches about 75\%, we obtain the best accuracy of 67.03\% with a size of 1.4 MB. After that, PTQ can produce models of various sizes, up to a 48$\times$ compression ratio with a size of 955 KB and 65.37\% accuracy. \subsection{The Impact of Resetting Weights} As mentioned in Sections \ref{S3.1} and \ref{Ternary_Guided_Pruning}, unlike other works that use the STE to optimize the low-bit or ternary models directly, we apply a weight-resetting strategy to boost the STE optimization. We examine its impact on the ImageNet dataset. We finetune the pretrained model into a unit-magnitude model. During the finetuning process, we compare two experimental settings: one is referred to as STE-Only and the other as STE-Reset. The STE-Only experiment uses only the STE to optimize the model, whereas the STE-Reset experiment applies the resetting strategy every 30 epochs. Both experiments gradually prune the model from 30\% to 60\% sparsity. The results in Table~\ref{abstudy} show that model performance improves as we periodically reset the weights to ternary values. Note that without resetting and pruning, our method becomes a variant of TWN \cite{li2016twn}.
\begin{table}[H] \centering \scalebox{0.8} { \begin{tabular}{lll} \hline Model &Methods &Accuracy \\ \hline \multirow{2}{*}{ResNet-18} & STE-Only &65.2 \\ &STE-Reset &68.0 \\ \hline \multirow{2}{*}{ResNet-50} & STE-Only &72.9 \\ &STE-Reset &75.19 \\ \hline \end{tabular}} \caption{The impact of the resetting strategy on the ImageNet dataset. The results further support our hypothesis about pruning and resetting. We leave the last fully-connected layer at half-precision.} \vspace{-1em} \label{abstudy} \end{table} \section{Conclusions} We propose a novel method, PTQ, to construct sparse ternary weights by unifying pruning, $L2$ projection, and quantization on a unit n-sphere. The proposed method achieves state-of-the-art model compression results. One of the advantages of our method is the use of pruning to enhance model sparsity, performance, and compression capability. Pruning provides a range of size-accuracy trade-offs. The use of the balanced ternary weights $\{-1,0, 1\}$ reduces the memory footprint of a 3$\times$3 convolutional kernel from 288 bits to 15 bits. The 2-bit activation ResNet-18 ternary model, with an accuracy of 65.37\% and a size of 1 MB, represents a great step forward for real-world applications. Future work may focus on extending our method to wider weight quantization levels. It is also worthwhile to investigate other optimization methods for higher classification accuracy. In addition, combining our method with ternary activation quantization \cite{chen2020fatnn} and Power-of-Two \cite{li2019apot} methods may further speed up inference. {\small \bibliographystyle{ieee_fullname}
\section{Introduction}\label{section-intro} Generalized Linear Models (GLMs)~\citep{mccullagh1989generalized} are one of the most widely-used classes of models in statistical analysis, and their properties have been thoroughly studied and documented (see, for example, \cite{dunteman2006introduction}). Model training and prediction for GLMs often involve Maximum-Likelihood estimation (frequentist approaches) or posterior density estimation (Bayesian approaches), both of which require the application of optimization or MCMC sampling techniques to the log-likelihood function or some function containing it. Differentiable functions often benefit from optimization/sampling algorithms that utilize the first and/or second derivative of the function~\citep{press2007numerical}. With a proper choice of link functions, many GLMs have log-likelihood functions that are not only twice-differentiable, but also globally concave~\citep{gilks1992adaptive}, making them ideal candidates for optimization/sampling routines that take advantage of these properties. For example, the most common optimization approach for GLMs is Iterative Reweighted Least Squares (IRLS)~\citep[Section~6.8.1]{gentle2007matrix}. IRLS is a disguised form of Newton-Raphson optimization~\citep{wright1999numerical}, which uses both the gradient and Hessian of the function and relies on global concavity for convergence. When the Hessian is too expensive to calculate or lacks definiteness, other optimization techniques such as conjugate gradient~\citep[Section~10.6]{press2007numerical} can be used, which still require the first derivative of the function. Among MCMC sampling algorithms, the Adaptive Rejection Sampler~\citep{gilks1992adaptive} uses the first derivative and requires concavity of the log-density. The Stochastic Newton Sampler~\citep{qi2002hessian,mahani2014sns}, a Metropolis-Hastings sampler using a locally-fitted multivariate Gaussian, uses both first and second derivatives and also requires log-concavity.
Other techniques such as Hamiltonian Monte Carlo (HMC)~\citep{neal2011mcmc} use the first derivative of the log-density, while their recent adaptations can use second and even third derivative information to adjust the mass matrix to the local space geometry~\citep{girolami2011riemann}. Efficient implementation and analysis of GLM derivatives and their properties are therefore key to our ability to build probabilistic models using the powerful GLM framework. The \proglang{R} package \pkg{RegressionFactory} contributes to computational research and development on GLM-based statistical models by providing an abstract framework for constructing, and reasoning about, GLM-like log-likelihood functions and their derivatives. Its modular implementation can be viewed as code factorization using the chain rule of derivatives~\citep{apostol1974mathematical}. It offers a clear separation of generic steps (expander functions) from model-specific steps (base functions). New regression models can be readily implemented by supplying their base-function implementations. Since base functions live in the much lower-dimensional space of the underlying probability distribution (often a member of the exponential family with one or two parameters), implementing their derivatives is much easier than doing so in the high-dimensional space of regression coefficients. A by-product of this code refactoring using the chain rule is an invariance theorem governing the negative definiteness of the log-likelihood Hessian. The theorem allows this property to be studied in the base-distribution space, again a much easier task than doing so in the high-dimensional coefficient space. The modular organization of \pkg{RegressionFactory} also allows performance optimization techniques to be made available across a broad set of regression models.
This is particularly true for optimizations applied to expander functions, but also applies to base functions since they share many concepts and operations across models. \pkg{RegressionFactory} contains a lower-level set of tools compared to the facilities provided by mainstream regression utilities such as the \code{glm} command in \proglang{R}, or the package \pkg{dglm}~\citep{dunn2014dglm} for building double (varying-dispersion) GLM models. Therefore, in addition to supporting research on optimization/sampling algorithms for GLMs as well as research on performance optimization for GLM derivative-calculation routines, exposing the log-likelihood derivatives using the modular framework of \pkg{RegressionFactory} allows modelers to construct composite models from GLM lego blocks, including Hierarchical Bayesian models~\citep{gelman2006data}. The rest of the paper is organized as follows. In Section~\ref{section-theory}, we begin with an overview of GLM models and arrive at our abstract, and expanded, representation of GLM log-likelihoods (\ref{subsection-overview}). We then apply the chain rule of derivatives to this abstract expression to derive two equivalent sets of factorized equations (compact and explicit forms) for computing the log-likelihood gradient and Hessian using their base-function counterparts (\ref{subsection-chain-rule}). We use the explicit forms of the equations to prove a negative-definiteness invariance theorem for the log-likelihood Hessian (\ref{subsection-invariance}). Section~\ref{section-implement} discusses the implementation of the aforementioned factorized code in \pkg{RegressionFactory} using the expander functions (\ref{subsection-expanders}) and the base functions (\ref{subsection-base-dist}). In Section~\ref{section-using}, we illustrate the use of \pkg{RegressionFactory} using examples from single-parameter and multi-parameter base functions. Finally, Section~\ref{section-summary} contains a summary and discussion.
\section{Theory}\label{section-theory} In this section we develop the theoretical foundation for \pkg{RegressionFactory}, beginning with an overview of GLM models. \subsection{Overview of GLMs}\label{subsection-overview} In GLMs, the response variable\footnote{To simplify notation, we assume that the response variable is scalar, but generalization to vector response variables is straightforward.} $y$ is assumed to be generated from an exponential-family distribution, and its expected value is related to the linear predictor $\mathbf{x}^t \boldsymbol\beta$ via the link function $g$: \begin{equation}\label{equation-glm} g(\mathrm{E}(y)) = \mathbf{x}^t \boldsymbol\beta, \end{equation} where $\mathbf{x}$ is the vector of covariates and $\boldsymbol\beta$ is the vector of coefficients. For single-parameter distributions, there is often a simple relationship between the distribution parameter and its mean. Combined with Equation~\ref{equation-glm}, this is sufficient to define the distribution in terms of the linear predictor, $\mathbf{x}^t \boldsymbol\beta$. Many double-parameter distributions can be expressed as \begin{equation}\label{equation-dglm} f_Y(y; \theta, \Phi) = \exp \left\{ \frac{y \theta - B(\theta)}{\Phi} + C(y,\Phi) \right\} \end{equation} where the range of $y$ does not depend on $\theta$ or $\Phi$. This function can be maximized with respect to $\theta$ without knowledge of $\Phi$. The same is true if we have multiple conditionally-independent data points, where the log-likelihood takes a summative form. Once $\theta$ is found, we can find the dispersion parameter $\Phi$ through maximization or the method of moments, as done by \code{glm} in \proglang{R}. Generalization to varying-dispersion models is offered in the \proglang{R} package \pkg{dglm}, where both the mean and the dispersion are assumed to be linear functions of covariates.
In \pkg{dglm}, estimation is done iteratively by alternating between an ordinary GLM and a dual GLM in which the deviance components of the ordinary GLM appear as responses~\citep{smyth1989generalized}. In \pkg{RegressionFactory}, we take a more general approach to GLMs that encompasses both the \code{glm} and \code{dglm} approaches but is more flexible. Our basic assumption is that the log-density for each data point can be written as: \begin{equation} \log {\mathrm{P} (y \,|\, \{ \mathbf{x}^j \}_{j=1,\dots,J})} = f(<\mathbf{x}^1,\boldsymbol\beta^1>, \dots, <\mathbf{x}^J,\boldsymbol\beta^J> , y) \end{equation} where $<a,b>$ denotes the inner product of vectors $a$ and $b$. Note that we have absorbed the nonlinearities introduced through one or more link functions into the definition of $f$. For $N$ conditionally-independent observations $y_1,\dots,y_N$, the log-likelihood as a function of the coefficients $\boldsymbol\beta^j$ is given by: \begin{equation}\label{eq-loglike} L(\boldsymbol\beta^1, \dots, \boldsymbol\beta^J) = \sum_{n=1}^N f_n(<\mathbf{x}_n^1, \boldsymbol\beta^1>, \dots, <\mathbf{x}_n^J, \boldsymbol\beta^J>), \end{equation} where we have absorbed the dependence of each term on $y_n$ into the indexes of the base functions $f_n(u^1,\dots,u^J)$. With proper choice of nonlinear transformations, we can assume that the domain of $L$ is $\mathbb{R}^{\sum_j K^j}$, where $K^j$ is the dimensionality of $\boldsymbol\beta^j$. This view of GLMs naturally unites single-parameter GLMs such as binomial (with a fixed number of trials) and Poisson, constant-dispersion two-parameter GLMs (e.g., normal and gamma), varying-dispersion two-parameter GLMs (e.g., heteroscedastic normal regression), and multi-parameter models such as multinomial logit. It can motivate new GLM models such as geometric (see Section~\ref{subsection-geometric}) and exponential regression, and can even include survival models (see, e.g., the \pkg{BSGW} package~\citep{mahani2014bsgw}).
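As a concrete worked instance of this abstract form (our own example): for Poisson regression with a log link we have $J=1$ and, dropping the $\log (y_n!)$ constant, each base function is
\begin{equation*}
f_n(u) = y_n u - e^u,
\end{equation*}
while heteroscedastic normal regression is a $J=2$ instance with
\begin{equation*}
f_n(u^1, u^2) = -\frac{1}{2} \left\{ u^2 + e^{-u^2} (y_n - u^1)^2 \right\},
\end{equation*}
where $u^1 = <\mathbf{x}_n^1, \boldsymbol\beta^1>$ is the mean and $u^2 = <\mathbf{x}_n^2, \boldsymbol\beta^2>$ is the log-variance (constants again dropped).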
Several examples are discussed in Section~\ref{section-using}. Our next step is to apply the chain rule of derivatives to Equation~\ref{eq-loglike} to express the high-dimensional ($\sum_j K^j$) derivatives of $L$ in terms of the low-dimensional ($J$) derivatives of $f_n$'s. We will see that the resulting expressions offer a natural way for modular implementation of GLM derivatives. \subsection{Application of chain rule}\label{subsection-chain-rule} First, we define our notation for representing derivative objects. We concatenate all $J$ coefficient vectors, $\boldsymbol\beta^j$'s, into a single $\sum_j K^j$-dimensional vector, $\boldsymbol\beta$: \begin{equation} \boldsymbol\beta \equiv (\boldsymbol\beta^{1,t}, \dots, \boldsymbol\beta^{J,t})^t. \end{equation} The first derivative of log-likelihood can be written as: \begin{equation} \mathbf{G}(\boldsymbol\beta) \equiv \frac{\partial L}{\partial \boldsymbol\beta} = ((\frac{\partial L}{\partial \boldsymbol\beta^1})^t, \dots, (\frac{\partial L}{\partial \boldsymbol\beta^J})^t)^t, \end{equation} where \begin{equation} (\frac{\partial L}{\partial \boldsymbol\beta^j})^t \equiv (\frac{\partial L}{\partial \beta_1^j}, \dots, \frac{\partial L}{\partial \beta_{K^j}^j}). 
\end{equation} For second derivatives we have: \begin{equation} \mathbf{H}(\boldsymbol\beta) \equiv \frac{\partial^2 L}{\partial \boldsymbol\beta^2} = \left[ \frac{\partial^2 L}{\partial \boldsymbol\beta^j \partial \boldsymbol\beta^{j'}} \right]_{j,j'=1,\dots,J}, \end{equation} where we have defined $\mathbf{H}(\boldsymbol\beta)$ in terms of $J^2$ matrix blocks: \begin{equation} \frac{\partial^2 L}{\partial \boldsymbol\beta^j \partial \boldsymbol\beta^{j'}} \equiv \left[ \frac{\partial^2 L}{\partial \beta_k^j \, \partial \beta_{k'}^{j'}} \right]_{k=1,\dots,K^j;\: k'=1,\dots,K^{j'}}. \end{equation} Applying the chain rule to the log-likelihood function of Equation~\ref{eq-loglike}, we derive expressions for its first and second derivatives as a function of the derivatives of the base functions $f_1,\dots,f_N$: \begin{equation}\label{eq-gradient} \frac{\partial L}{\partial \boldsymbol\beta^j} = \sum_{n=1}^N \frac{\partial f_n}{\partial \boldsymbol\beta^j} = \sum_{n=1}^N \frac{\partial f_n}{\partial u^j} \mathbf{x}_n^j = \mathbf{X}^{j,t} \mathbf{g}^j, \end{equation} with \begin{equation} \mathbf{g}^j \equiv (\frac{\partial f_1}{\partial u^j}, \dots, \frac{\partial f_N}{\partial u^j})^t, \end{equation} and \begin{equation} \mathbf{X}^j \equiv (\mathbf{x}_1^j, \dots, \mathbf{x}_N^j)^t.
\end{equation} Similarly, for the second derivative we have: \begin{equation}\label{eq-hessian} \frac{\partial^2 L}{\partial \boldsymbol\beta^j \partial \boldsymbol\beta^{j'}} = \sum_{n=1}^N \frac{\partial^2 f_n}{\partial \boldsymbol\beta^j \partial \boldsymbol\beta^{j'}} = \sum_{n=1}^N \frac{\partial^2 f_n}{\partial u^j \partial u^{j'}} \, (\mathbf{x}_n^j \otimes \mathbf{x}_n^{j'}) = \mathbf{X}^{j,t} \mathbf{h}^{jj'} \mathbf{X}^{j'}, \end{equation} where $\mathbf{h}^{jj'}$ is a diagonal matrix of size $N$ with $n$th diagonal element defined as: \begin{equation} h_n^{jj'} \equiv \frac{\partial^2 f_n}{\partial u^j \partial u^{j'}}. \end{equation} We refer to the matrix form of Equations~\ref{eq-gradient} and \ref{eq-hessian} as `compact' forms, and the explicit-sum forms as `explicit' forms. The expander functions in \pkg{RegressionFactory} use the compact form to implement the high-dimensional gradient and Hessian (see Section~\ref{subsection-expanders}), while the definiteness-invariance theorem below utilizes the explicit-sum form of Equation~\ref{eq-hessian}. \subsection{Definiteness invariance of Hessian}\label{subsection-invariance} \newtheorem{theorem:concavity_1}{Theorem} \begin{theorem:concavity_1} \label{theorem:concav_1} If all $f_n$'s in Equation~\ref{eq-loglike} have negative-definite Hessians, and if each of the $J$ matrices $\mathbf{X}^j \equiv (\mathbf{x}^j_1, \dots, \mathbf{x}^j_N)^t$ is full rank, then $L(\boldsymbol\beta^1,\dots,\boldsymbol\beta^J)$ also has a negative-definite Hessian. \end{theorem:concavity_1} \begin{proof} To prove negative-definiteness of $\mathbf{H} (\boldsymbol\beta)$ (hereafter referred to as $\mathbf{H}$ for brevity), we seek to prove that $ \mathbf{p}^t \mathbf{H} \mathbf{p}$ is negative for all non-zero $\mathbf{p}$ in $\mathbb{R}^{\sum_j K^j}$. We begin by decomposing $\mathbf{p}$ into $J$ subvectors of length $K^j$ each: \begin{equation} \label{eq:pp} \mathbf{p} = (\mathbf{p}^{1,t}, \dots, \mathbf{p}^{J,t})^t.
\end{equation} We now have: \begin{eqnarray} \mathbf{p}^t \mathbf{H} \mathbf{p} &=& \sum_{j,j'=1}^J \mathbf{p}^{j,t} \, \frac{\partial^2 L}{\partial \boldsymbol\beta^j \partial \boldsymbol\beta^{j'}} \, \mathbf{p}^{j'} \\ &=& \sum_{j,j'} \mathbf{p}^{j,t} \left( \sum_n \frac{\partial^2 f_n}{\partial u^j \partial u^{j'}} \, (\mathbf{x}_n^j \otimes \mathbf{x}_n^{j'}) \right) \mathbf{p}^{j'} \\ &=& \sum_n \sum_{j,j'} \frac{\partial^2 f_n}{\partial u^j \partial u^{j'}} \: \mathbf{p}^{j,t} \: (\mathbf{x}_n^j \otimes \mathbf{x}_n^{j'}) \: \mathbf{p}^{j'} \end{eqnarray} If we define a set of new vectors $\mathbf{q}_n$ as: \begin{eqnarray} \mathbf{q}_n \equiv \begin{bmatrix} \mathbf{p}^{1,t} \mathbf{x}_n^1 & \cdots & \mathbf{p}^{J,t} \mathbf{x}_n^J \end{bmatrix}^t, \end{eqnarray} and use $\mathbf{h}_n$ to denote the $J$-by-$J$ Hessian of $f_n$: \begin{equation} \mathbf{h}_n \equiv [ h_n^{jj'} ]_{j,j'=1,\dots,J}, \end{equation} we can write: \begin{equation} \mathbf{p}^t \mathbf{H} \mathbf{p} = \sum_n \mathbf{q}_n^t \, \mathbf{h}_n \, \mathbf{q}_n. \end{equation} Since all $\mathbf{h}_n$'s are assumed to be negative definite, all $\mathbf{q}_n^t \, \mathbf{h}_n \, \mathbf{q}_n$ terms must be non-positive. Therefore, $\mathbf{p}^t \mathbf{H} \mathbf{p}$ is non-positive, and it can equal zero only if all its terms are zero, which is possible only if all $\mathbf{q}_n$'s are zero vectors. This, in turn, means we must have $\mathbf{p}^{j,t} \mathbf{x}_n^j = 0,\:\: \forall \, n,j$. In other words, we must have $\mathbf{X}^j \mathbf{p}^j = \mathbf{0},\:\: \forall \, j$. Since $\mathbf{p}$ is non-zero, at least one $\mathbf{p}^j$ must be non-zero, so the corresponding $\mathbf{X}^j$ has a nontrivial null space and cannot be full rank, contradicting our rank assumption. Therefore, $\mathbf{p}^t \mathbf{H} \mathbf{p}$ must be negative. This proves that $\mathbf{H}$ is negative definite. \end{proof} Proving negative-definiteness in the low-dimensional space of base functions is often much easier.
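The mechanics of the theorem are easy to check numerically. The following sketch (our own illustration, for a single-parameter model, $J=1$) builds $\mathbf{H} = \mathbf{X}^t \mathbf{h} \mathbf{X}$ with strictly negative $h_n$'s and confirms that all eigenvalues of $\mathbf{H}$ are negative when $\mathbf{X}$ is full rank, while a rank-deficient $\mathbf{X}$ yields only negative semi-definiteness:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 200, 5
X = rng.normal(size=(N, K))          # full column rank with probability 1
h = -rng.uniform(0.5, 2.0, size=N)   # negative base-function second derivatives

# H = X^t diag(h) X: the compact form of the Hessian for J = 1
H = X.T @ (X * h[:, None])
eigvals = np.linalg.eigvalsh(H)
assert np.all(eigvals < 0)           # negative definite, as the theorem predicts

# With a rank-deficient X the conclusion fails: H gains a zero eigenvalue
X_deficient = np.hstack([X[:, :4], X[:, [3]]])   # duplicated column
H_d = X_deficient.T @ (X_deficient * h[:, None])
eigvals_d = np.linalg.eigvalsh(H_d)
```

The null-space vector of the duplicated-column matrix, $(0,0,0,1,-1)^t$, is exactly a direction $\mathbf{p}$ along which $\mathbf{p}^t \mathbf{H} \mathbf{p} = 0$ in the proof above.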
For single-parameter distributions, we simply have to prove that the second derivative is negative. For two-parameter distributions, according to Sylvester's criterion~\citep{gilbert1991positive}, it is sufficient to show that both diagonal elements of the base-distribution Hessian are negative and that its determinant is positive. Note that negative-definiteness depends not only on the distribution but also on the choice of link function(s). For twice-differentiable functions, negative-definiteness of the Hessian and log-concavity are equivalent~\citep{boyd2009convex}. \cite{gilks1992adaptive} provide a list of log-concave distributions and link functions. \section{Implementation}\label{section-implement} \pkg{RegressionFactory} is a direct implementation of the compact expressions in Equations~\ref{eq-gradient} and \ref{eq-hessian}. These expressions imply a code refactoring that separates model-specific steps (calculation of $\mathbf{g}^j$ and $\mathbf{h}^{jj'}$) from generic steps (calculation of the linear predictors $\mathbf{X}^j \boldsymbol\beta^j$ as well as $\mathbf{X}^{j,t} \mathbf{g}^j$ and $\mathbf{X}^{j,t} \mathbf{h}^{jj'} \mathbf{X}^{j'}$). This decomposition is captured diagrammatically in the system flow diagram of Figure~\ref{figure-flow-diagram}. \begin{figure} \centering \includegraphics[scale=1.5]{regfac_flow_diagram.pdf} \caption[]{System flow diagram for \pkg{RegressionFactory}. The expander function is responsible for calculating the log-likelihood and its gradient and Hessian in the high-dimensional space of regression coefficients. It does so by calculating the linear predictors and supplying them to the base function, which is responsible for calculating the log-likelihood and its gradient and Hessian for each data point, in the low-dimensional space of the underlying probability distribution. The expander function converts these low-dimensional objects into their high-dimensional forms, using generic matrix-algebra operations.}
\label{figure-flow-diagram} \end{figure} \subsection{Expander functions}\label{subsection-expanders} The current implementation of \pkg{RegressionFactory} contains expander and base functions for one-parameter and two-parameter distributions. This covers the majority of interesting GLM cases, and a few more. A notable exception is multinomial regression models (such as logit and probit), which can have an unspecified number of slots. The package can be extended in the future to accommodate such cases. \subsubsection{Single-parameter expander function}\label{subsubsection-expanders-1d} Below is the source code for \code{regfac.expand.1par}:
\begin{Schunk}
\begin{Sinput}
R> regfac.expand.1par <- function(beta, X, y, fbase1, fgh = 2, ...) {
+   # obtain base distribution derivatives
+   ret <- fbase1(X %*% beta, y, fgh, ...)
+   # expand base derivatives
+   f <- sum(ret$f)
+   if (fgh == 0) return (f)
+   g <- t(X) %*% ret$g
+   if (fgh == 1) return (list(f = f, g = g))
+   xtw <- 0*X
+   for (k in 1:ncol(X)) xtw[, k] <- X[, k] * ret$h
+   h <- t(xtw) %*% X
+   return (list(f = f, g = g, h = h))
+ }
\end{Sinput}
\end{Schunk}
\code{beta} is the vector of coefficients, \code{X} is the matrix of covariates, \code{y} is the vector (or matrix) of the response variable, \code{fbase1} is the single-parameter base function being expanded, and \code{fgh} is a flag indicating whether the gradient and Hessian must be returned or not. The \code{dots} argument (\code{...}) is used for passing special, fixed arguments such as the number of trials in a binomial regression. The vectorized function \code{fbase1} is expected to return a list of three vectors: \code{f}, \code{g} and \code{h}, corresponding to the base log-density, its first derivative, and its second derivative (all vectors of length $N$ or \code{nrow(X)}). The second and third elements correspond to $\mathbf{g}^1$ and $\mathbf{h}^{11}$ in our notation.
Several design aspects of the code are noteworthy for computational efficiency: \begin{enumerate} \item Since $\mathbf{h}$ is diagonal, we only need to return the $N$ diagonal elements. \item For the same reason, rather than multiplying $\mathbf{h}$ by $\mathbf{X}$, we only multiply the vector of diagonal elements by each of the $K$ columns of $\mathbf{X}$. \item The flag \code{fgh} controls whether a) only the function value must be returned (\code{fgh==0}), b) only the function and its first derivative must be returned (\code{fgh==1}), or c) the function as well as its first and second derivative must be returned (\code{fgh==2}). This allows optimization or sampling algorithms that do not need the first or second derivative to avoid paying an unnecessary computational penalty. Since most often a higher-level derivative implies the need for lower-level derivative(s) (including the function as the zeroth derivative), and also since the computational cost of higher derivatives is much higher, the \code{fgh} flag works in an incremental fashion (only 3 options) rather than covering all permutations of \code{f,g,h}. \end{enumerate} \subsubsection{Two-parameter expander function}\label{subsubsection-expanders-2d} Below is the source code for \code{regfac.expand.2par}, the 2D expander function in \pkg{RegressionFactory}:
\begin{Schunk}
\begin{Sinput}
R> regfac.expand.2par <- function(coeff, X
+   , Z = matrix(1.0, nrow = nrow(X), ncol = 1)
+   , y, fbase2, fgh = 2, block.diag = FALSE
+   , ...) {
+   # extract coefficients of X and Z
+   K1 <- ncol(X); K2 <- ncol(Z)
+   beta <- coeff[1:K1]
+   gamma <- coeff[K1 + 1:K2]
+   # obtain base distribution derivatives
+   ret <- fbase2(X %*% beta, Z %*% gamma, y, fgh, ...)
+   # expand base derivatives
+   # function
+   f <- sum(ret$f)
+   if (fgh == 0) return (f)
+   # gradient
+   g <- c(t(X) %*% ret$g[, 1], t(Z) %*% ret$g[, 2])
+   if (fgh == 1) return (list(f = f, g = g))
+   # Hessian
+   h <- array(0, dim = c(K1 + K2, K1 + K2))
+   # XX block
+   xtw <- 0 * X
+   for (k in 1:K1) xtw[, k] <- X[, k] * ret$h[, 1]
+   h[1:K1, 1:K1] <- t(xtw) %*% X
+   # ZZ block
+   ztw <- 0 * Z
+   for (k in 1:K2) ztw[, k] <- Z[, k] * ret$h[, 2]
+   h[K1 + 1:K2, K1 + 1:K2] <- t(ztw) %*% Z
+   # XZ and ZX blocks
+   if (!block.diag) {
+     ztw2 <- 0 * Z
+     for (k in 1:K2) ztw2[, k] <- Z[, k] * ret$h[, 3]
+     h[K1 + 1:K2, 1:K1] <- t(ztw2) %*% X
+     h[1:K1, K1 + 1:K2] <- t(h[K1 + 1:K2, 1:K1])
+   }
+   return (list(f = f, g = g, h = h))
+ }
\end{Sinput}
\end{Schunk}
Aside from the same performance optimization techniques used for the one-parameter expander function, the two-parameter expander function has an additional parameter, \code{block.diag}. When \code{TRUE}, it sets the cross-derivative terms between the two slots to zero. It can be useful in two scenarios: 1) when the full Hessian is not negative definite but the Hessian for each parameter block is; block-diagonalization then allows optimization and sampling techniques that rely on this property to be used, at the expense of potentially slower convergence since the block-diagonalized Hessian is only an approximation; and 2) when optimization of one slot can proceed without knowledge of the value of the other slot, as in many two-parameter exponential-family members where the dispersion parameter can be ignored in ML estimation of the mean parameter (e.g., in the normal distribution). \subsection{Base distributions}\label{subsection-base-dist} Corresponding to the one-parameter and two-parameter expander functions, \pkg{RegressionFactory} offers many of the standard base distributions used in GLM models.
Using the nomenclature of \code{glm}, the current version (\code{0.7.1}) contains the following base distributions and link functions (* indicates distributions not included in \code{glm} software): \begin{itemize} \item One-parameter distributions: \begin{itemize} \item Binomial (logit, probit, cauchit, cloglog) \item Poisson (log) \item Exponential (log) (*) \item Geometric (logit) (*) \end{itemize} \item Two-parameter distributions: \begin{itemize} \item Gaussian (identity / log) \item Inverse Gaussian (log / log) \item Gamma (log / log) \end{itemize} \end{itemize} A few points are worth mentioning regarding the choice of base distributions and link functions: \begin{enumerate} \item Naming convention: We generally follow this convention for single-parameter distributions: \code{fbase1.<distribution>.<mean link function>} and this convention for two-parameter distributions: \code{fbase2.<distribution>.<mean link function>.<dispersion link function>} There can be exceptions. For example, in geometric regression \code{fbase1.geometric.logit} the linear predictor is assumed to be the \code{logit} of the success probability, which is the inverse of the distribution mean. Thus, technically the link function is \code{-log(mu-1)}, but for brevity we simply refer to this link function as \code{logit}. Ultimately, naming conventions are less important than the definition of the log-likelihood function, which combines the distribution and the link functions. \item Since the focus of \pkg{RegressionFactory} is on supporting optimization and sampling algorithms for GLM-like models, we are not interested in constant terms in the log-likelihood, i.e., terms that are independent of the regression coefficients. Therefore, we omit them from the base functions for computational efficiency. An example is the log-factorial term in the Poisson base distribution. (Note that such constant terms are automatically differentiated out of the gradient and Hessian.)
If needed, users can implement thin wrappers around the base functions to add the constant terms to the log-likelihood. \item Our preference is to choose link functions that map the natural domain of the distribution parameter to the real space. For example, in the Poisson distribution the natural domain of the distribution mean is the positive real space. The \code{log} link function maps this natural domain to the entire real space. With the \code{identity} or \code{sqrt} link, by contrast, the linear predictor would remain confined to the positive real space. \item We also prefer link functions that produce negative-definiteness for the entire Hessian, or at least for Hessian blocks (corresponding to a subset of the base-distribution parameters). This allows more optimization/sampling algorithms that take advantage of concavity to be applied to the expanded log-likelihood (according to Theorem~\ref{theorem:concav_1}). \item We have chosen to absorb the link functions into the function names and their implementation, rather than making distribution names and link functions parameters of a single base function. Doing the latter is certainly possible, offering convenience at a computational cost. Our current choice is driven by the fact that the primary target of \pkg{RegressionFactory} is developers rather than end-users. \end{enumerate} \section[]{Using \pkg{RegressionFactory}}\label{section-using} The most basic application of \pkg{RegressionFactory} is to use the readily-available log-likelihood functions and derivatives. For example, one might be developing a Bayesian model where the log-likelihood is combined with the prior to form the posterior, which is then supplied to a sampling algorithm. Or one might be working on a new optimization algorithm and would like to test its correctness and performance on regression log-likelihood functions as an important use-case.
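To give a sense of how little code a base function and its expansion require, here is a NumPy transcription of the pattern (our own sketch, not package code; \code{expand\_1par} and \code{fbase1\_geometric\_logit} are hypothetical names, and we parameterize the geometric by the number of failures, which may differ from the package's convention), validated by finite differences:

```python
import numpy as np

def fbase1_geometric_logit(u, y):
    """Geometric base with logit link on the success probability,
    counting failures before the first success (our convention):
    f = u - (1 + y) * log(1 + exp(u))."""
    p = 1 / (1 + np.exp(-u))
    f = u - (1 + y) * np.log1p(np.exp(u))
    g = 1 - (1 + y) * p            # df/du
    h = -(1 + y) * p * (1 - p)     # d2f/du2, always negative
    return f, g, h

def expand_1par(beta, X, y, fbase1):
    """Single-parameter expansion via the compact forms X'g and X'hX."""
    f, g, h = fbase1(X @ beta, y)
    return np.sum(f), X.T @ g, X.T @ (X * h[:, None])

rng = np.random.default_rng(2)
N, K = 300, 4
X = rng.uniform(-0.5, 0.5, size=(N, K))
beta = rng.uniform(-0.5, 0.5, size=K)
p = 1 / (1 + np.exp(-X @ beta))
y = rng.geometric(p) - 1           # failures before the first success

f0, G, H = expand_1par(beta, X, y, fbase1_geometric_logit)
```

The finite-difference check of the hand-coded derivatives (see the accompanying assertions) is a practice worth adopting for any new base function.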
Users can also supply their own base functions to the expander functions of \pkg{RegressionFactory} and readily obtain the log-likelihood and its derivatives. Implementation of functions for calculating base distribution derivatives is often quite simple, which can significantly reduce the time needed for prototyping a new regression model. There are two equivalent approaches for passing the log-likelihood functions to an optimization/sampling routine: 1) pass the expander function as the primary function, with the base function as one of its arguments; or 2) write a thin wrapper that combines the expander and base functions, and pass this wrapper function to the optimization/sampling routine. If the log-likelihood function must be added to another function (such as a prior), then the second approach is the only option; the wrapper then implements the logic for adding the two functions. Due to its greater versatility and code readability, we recommend the second approach. The above point as well as other usage details are illustrated below, with several examples from single-parameter and double-parameter distributions. \subsection{Example 1: Bayesian GLM} The easiest way to take advantage of \pkg{RegressionFactory} is to utilize its standard GLM base functions in custom applications, either for testing the performance of a new optimization/sampling technique, or for composing more complex models from these lego blocks. In the first example, we show how a Bayesian GLM can be constructed in the \pkg{RegressionFactory} framework. We begin with a basic implementation of Bayesian logistic regression using diffuse normal priors on each coefficient.
First we must load the package into our \proglang{R} session:
\begin{Schunk}
\begin{Sinput}
R> library(RegressionFactory)
\end{Sinput}
\end{Schunk}
The log-likelihood for logistic regression can be readily constructed by applying the single-parameter expander function to the binomial base function and setting the number of trials equal to \code{1}:
\begin{Schunk}
\begin{Sinput}
R> loglike.logistic <- function(beta, X, y, fgh) {
+   regfac.expand.1par(beta, X, y, fbase1.binomial.logit, fgh, n=1)
+ }
\end{Sinput}
\end{Schunk}
We also need a prior for \code{beta}, which we assume to be a normal distribution on each of the \code{K} elements of \code{beta} with the same mean (\code{mu.beta}) and standard deviation (\code{sd.beta}):
\begin{Schunk}
\begin{Sinput}
R> logprior.logistic <- function(beta, mu.beta, sd.beta, fgh) {
+   f <- sum(dnorm(beta, mu.beta, sd.beta, log=TRUE))
+   if (fgh==0) return (f)
+   g <- -(beta-mu.beta)/sd.beta^2
+   if (fgh==1) return (list(f=f, g=g))
+   h <- diag(-1/sd.beta^2, nrow=length(beta))
+   return (list(f=f, g=g, h=h))
+ }
\end{Sinput}
\end{Schunk}
We can now combine the likelihood and prior according to Bayes' rule to construct the log-posterior:
\begin{Schunk}
\begin{Sinput}
R> logpost.logistic <- function(beta, X, y, mu.beta, sd.beta, fgh) {
+   ret.loglike <- loglike.logistic(beta, X, y, fgh)
+   ret.logprior <- logprior.logistic(beta, mu.beta, sd.beta, fgh)
+   regfac.merge(ret.loglike, ret.logprior, fgh=fgh)
+ }
\end{Sinput}
\end{Schunk}
In the above, we have taken advantage of the utility function \code{regfac.merge} for combining two lists, each containing a function value and its first two derivatives.
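The likelihood-plus-prior composition above is language-agnostic. As a cross-check of the pattern (our own NumPy sketch; \code{merge} plays the role of \code{regfac.merge}, and all names are hypothetical), the triple $(f, \mathbf{g}, \mathbf{H})$ of the log-posterior is simply the element-wise sum of the likelihood and prior triples:

```python
import numpy as np

def loglike_logistic(beta, X, y):
    """Logistic log-likelihood with its gradient and Hessian."""
    u = X @ beta
    p = 1 / (1 + np.exp(-u))
    f = np.sum(y * u - np.log1p(np.exp(u)))
    g = X.T @ (y - p)
    H = -(X * (p * (1 - p))[:, None]).T @ X
    return f, g, H

def logprior_normal(beta, mu, sd):
    """Independent normal priors on coefficients (constants dropped)."""
    f = -0.5 * np.sum(((beta - mu) / sd) ** 2)
    g = -(beta - mu) / sd ** 2
    H = -np.eye(len(beta)) / sd ** 2
    return f, g, H

def merge(a, b):
    """Analogue of regfac.merge: add (f, g, H) triples element-wise."""
    return tuple(x + y for x, y in zip(a, b))

def logpost_logistic(beta, X, y, mu, sd):
    return merge(loglike_logistic(beta, X, y), logprior_normal(beta, mu, sd))
```

Because both summands have negative-definite Hessians, so does the merged log-posterior, which is the property exploited by the sampler below.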
In order to test the above posterior function, we simulate some data using the generative model for logistic regression and estimate the coefficients using \code{glm} for reference:
\begin{Schunk}
\begin{Sinput}
R> N <- 1000
R> K <- 5
R> X <- matrix(runif(N*K, min=-0.5, max=+0.5), ncol=K)
R> beta <- runif(K, min=-0.5, max=+0.5)
R> y <- rbinom(N, size = 1, prob = 1/(1+exp(-X %*% beta)))
R> beta.glm <- glm(y~X-1, family="binomial")$coefficients
\end{Sinput}
\end{Schunk}
We now draw \code{100} MCMC samples from the posterior of \code{beta} using the Stochastic Newton Sampler (SNS), via the \proglang{R} package \pkg{sns}~\citep{mahani2014sns}. We are taking advantage of the fact that the sum of two negative-definite Hessians is also negative-definite, a condition needed by SNS. Also, we assume that \code{mu.beta} and \code{sd.beta} are both given values that make the prior on \code{beta} effectively non-informative. Finally, we run \code{sns} in non-stochastic mode via the flag \code{rnd=FALSE} to allow for better comparison of output with \code{glm}:
\begin{Schunk}
\begin{Sinput}
R> library(sns)
R> # for more accurate results and better comparison, increase nsmp
R> nsmp <- 100
R> mu.beta <- 0.0
R> sd.beta <- 1000
R> beta.smp <- array(NA, dim=c(nsmp,K))
R> beta.tmp <- rep(0,K)
R> for (n in 1:nsmp) {
+   beta.tmp <- sns(beta.tmp, fghEval=logpost.logistic, X=X, y=y
+     , mu.beta=mu.beta, sd.beta=sd.beta, fgh=2, rnd=FALSE)
+   beta.smp[n,] <- beta.tmp
+ }
R> beta.sns <- colMeans(beta.smp[(nsmp/2+1):nsmp,])
R> cbind(beta.glm, beta.sns)
\end{Sinput}
\begin{Soutput}
      beta.glm    beta.sns
X1  0.01728161  0.01728161
X2 -0.57629725 -0.57629722
X3 -0.55204361 -0.55204359
X4  0.23282848  0.23282846
X5 -0.06941221 -0.06941221
\end{Soutput}
\end{Schunk}
Next, we consider a less-trivial example. We create a hierarchical structure where the coefficients of \code{J} groups are assumed to be pooled from a normal distribution.
This is a simple example of Hierarchical Bayesian models, which, due to the lack of explanatory variables at the upper level, reduces to a random-coefficient model. We begin with data generation to provide the reader with a tangible grasp of the assumed generative model:
\begin{Schunk}
\begin{Sinput}
R> J <- 20
R> mu.beta.hb <- runif(K, min=-0.5, max=+0.5)
R> sd.beta.hb <- runif(K, min=0.5, max=1.0)
R> X.hb <- list()
R> y.hb <- list()
R> beta.hb <- array(NA, dim=c(J,K))
R> for (k in 1:K) {
+   beta.hb[,k] <- rnorm(J, mu.beta.hb[k], sd.beta.hb[k])
+ }
R> for (j in 1:J) {
+   X.hb[[j]] <- matrix(runif(N*K, min=-0.5, max=+0.5), ncol=K)
+   y.hb[[j]] <- rbinom(N, size=1, prob=1/(1+exp(-X.hb[[j]] %*% beta.hb[j,])))
+ }
\end{Sinput}
\end{Schunk}
Again, we generate \code{glm} coefficient estimates for reference. Note that \code{glm} treats the groups completely independently of each other, i.e., without any pooling:
\begin{Schunk}
\begin{Sinput}
R> beta.glm.all <- array(NA, dim=c(J,K))
R> for (j in 1:J) {
+   beta.glm.all[j,] <- glm(y.hb[[j]]~X.hb[[j]]-1
+     , family="binomial")$coefficients
+ }
\end{Sinput}
\end{Schunk}
Again, we draw samples from the posterior on coefficients using SNS, turning the \code{rnd} flag off for better comparison. Also, for code brevity and to maintain focus on how to use \pkg{RegressionFactory}, we ignore sampling from the posterior of \code{mu.beta} and \code{sd.beta}, and assume their values are given. We must first construct the log-posteriors.
Note that we do not need to change the definition of the log-posterior, but the interpretation of \code{mu.beta} and \code{sd.beta} has changed from scalars to vectors of length \code{K} each:
\begin{Schunk}
\begin{Sinput}
R> beta.smp.hb <- array(NA, dim=c(nsmp,J,K))
R> beta.tmp.hb <- array(0.0, dim=c(J,K))
R> for (n in 1:nsmp) {
+   for (j in 1:J) {
+     beta.tmp.hb[j,] <- sns(beta.tmp.hb[j,], fghEval=logpost.logistic
+       , X=X.hb[[j]], y=y.hb[[j]]
+       , mu.beta=mu.beta.hb, sd.beta=sd.beta.hb, fgh=2, rnd=F)
+   }
+   beta.smp.hb[n,,] <- beta.tmp.hb
+ }
R> beta.sns.hb <- apply(beta.smp.hb[(nsmp/2+1):nsmp,,], c(2,3), mean)
\end{Sinput}
\end{Schunk}
We have taken advantage of the conditional independence of the coefficients of each group, given the values of \code{mu.beta} and \code{sd.beta}. We examine the coefficients of the first few groups between the \code{glm} and HB methods:
\begin{Schunk}
\begin{Sinput}
R> head(beta.glm.all)
\end{Sinput}
\begin{Soutput}
            [,1]        [,2]        [,3]       [,4]        [,5]
[1,]  0.02469046 -0.02179051  0.09795783 -0.3686893  0.37422102
[2,]  0.23716478 -0.14741078 -0.64943037 -0.1177934  0.17251167
[3,] -0.08602828  0.07878448  0.16795872  0.1100804  0.03127816
[4,]  0.00178465  0.24283261 -0.24240170 -0.2504084 -0.15572659
[5,]  0.17083027  0.32218840  0.13358916  0.1168579 -0.15588975
[6,]  0.22690633  0.11722254 -0.08157556  0.3395405  0.02823816
\end{Soutput}
\begin{Sinput}
R> head(beta.sns.hb)
\end{Sinput}
\begin{Soutput}
             [,1]        [,2]        [,3]        [,4]        [,5]
[1,] -0.008028756 -0.03689594  0.13273096 -0.36892225  0.27940600
[2,]  0.166836113 -0.15122282 -0.54964963 -0.14652433  0.10439417
[3,] -0.103947649  0.05881464  0.19355822  0.05347266 -0.02005739
[4,] -0.029730149  0.21519779 -0.17819428 -0.26395210 -0.18503945
[5,]  0.122504007  0.29084126  0.15998503  0.06725949 -0.18384662
[6,]  0.169507903  0.10188925 -0.03573334  0.26120405 -0.02598995
\end{Soutput}
\end{Schunk}
Plotting the unpooled (\code{glm}) and pooled (\code{sns}) coefficients shows the typical shrinkage pattern of Bayesian models.
\begin{Schunk}
\begin{Sinput}
R> plot(beta.glm.all[,1], beta.sns.hb[,1]
+   , xlab="Unpooled Coefficients"
+   , ylab="Pooled Coefficients")
R> abline(a=0, b=1)
\end{Sinput}
\end{Schunk}
\begin{figure}
\includegraphics{RegressionFactory-fig1}
\caption{Pooling of logistic regression coefficients using a hierarchical Bayesian framework produces the familiar shrinkage towards the mean effect.}
\label{fig-shrinkage}
\end{figure}
\subsection{Example 2: Double-parameter GLM with varying dispersion}
As a second example, we consider a double-parameter GLM with varying dispersion, i.e., with dispersion dependent on the covariates. As of version \code{0.7.1}, \pkg{RegressionFactory} contains three double-parameter base distributions: Gaussian, inverse Gaussian, and Gamma. These double-parameter distributions can be used in a constant-dispersion or varying-dispersion setting. The constant-dispersion scenario is a special case of the varying-dispersion scenario in which the only covariate used to explain the dispersion parameter of the base distribution is the intercept. This corresponds to the default value of \code{Z} in the function \code{regfac.expand.2par}. First, we load the \proglang{R} package \pkg{dglm}:
\begin{Schunk}
\begin{Sinput}
R> library(dglm)
\end{Sinput}
\end{Schunk}
To use \pkg{RegressionFactory}, as before, we implement a thin wrapper to combine the 2D expander with the normal base distribution:
\begin{Schunk}
\begin{Sinput}
R> loglike.linreg <- function(coeff, X, y, fgh, vd = F) {
+   if (vd) regfac.expand.2par(coeff = coeff, X = X, Z = X, y = y
+     , fbase2 = fbase2.gaussian.identity.log, fgh = fgh, block.diag = F)
+   else regfac.expand.2par(coeff = coeff, X = X, y = y
+     , fbase2 = fbase2.gaussian.identity.log, fgh = fgh, block.diag = F)
+ }
\end{Sinput}
\end{Schunk}
The Boolean flag \code{vd} indicates whether or not we want to use covariates to explain the dispersion. If \code{FALSE}, the model reduces to ordinary linear regression.
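In the varying-dispersion case, with identity link for the mean and log link for the dispersion (variance) of the Gaussian base distribution, the model being fitted can be written as follows, where $\mathbf{z}_i = \mathbf{x}_i$ since the wrapper sets \code{Z = X}:
\begin{align*}
y_i &\sim N\left(\mu_i, \sigma_i^2\right), &
\mu_i &= \mathbf{x}_i^\top \boldsymbol{\beta}, &
\log \sigma_i^2 &= \mathbf{z}_i^\top \boldsymbol{\gamma}.
\end{align*}
The constant-dispersion model is recovered when $\mathbf{z}_i$ contains only the intercept.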
Next, we simulate data according to the assumed generative model:
\begin{Schunk}
\begin{Sinput}
R> N <- 1000
R> K <- 5
R> X <- matrix(runif(N*K, min=-0.5, max=+0.5), ncol=K)
R> beta <- runif(K, min=-0.5, max=+0.5)
R> gamma <- runif(K, min=-0.5, max=+0.5)
R> mean.vec <- X %*% beta
R> sd.vec <- exp(X %*% gamma / 2)
R> y <- rnorm(N, mean.vec, sd.vec)
\end{Sinput}
\end{Schunk}
We now estimate the constant-dispersion and varying-dispersion models using the \proglang{R} commands \code{lm} and \code{dglm}:
\begin{Schunk}
\begin{Sinput}
R> # constant-dispersion model
R> est.glm <- lm(y~X-1)
R> beta.glm <- est.glm$coefficients
R> sigma.glm <- summary(est.glm)$sigma
R> # varying-dispersion model
R> est.dglm <- dglm(y~X-1, dformula = ~X-1, family = "gaussian", dlink = "log")
\end{Sinput}
\begin{Soutput}
family: gaussian
\end{Soutput}
\begin{Sinput}
R> beta.dglm <- est.dglm$coefficients
R> gamma.dglm <- est.dglm$dispersion.fit$coefficients
\end{Sinput}
\end{Schunk}
Finally, we estimate the same models using the expander framework of \pkg{RegressionFactory}:
\begin{Schunk}
\begin{Sinput}
R> # constant-dispersion
R> coeff.smp <- array(NA, dim=c(nsmp, K+1))
R> coeff.tmp <- rep(0, K+1)
R> for (n in 1:nsmp) {
+   coeff.tmp <- sns(coeff.tmp, fghEval=loglike.linreg
+     , X=X, y=y, fgh=2, vd = F, rnd = F)
+   coeff.smp[n,] <- coeff.tmp
+ }
R> beta.sns.cd <- colMeans(coeff.smp[(nsmp/2+1):nsmp, 1:K])
R> sigma.sns.cd <- sqrt(exp(mean(coeff.smp[(nsmp/2+1):nsmp, K+1])))
R> cbind(beta.glm, beta.sns.cd)
\end{Sinput}
\begin{Soutput}
     beta.glm beta.sns.cd
X1 -0.2630246  -0.2630246
X2 -0.4988430  -0.4988430
X3  0.5966928   0.5966928
X4 -0.3255354  -0.3255354
X5  0.1037526   0.1037526
\end{Soutput}
\begin{Sinput}
R> cbind(sigma.glm, sigma.sns.cd)
\end{Sinput}
\begin{Soutput}
     sigma.glm sigma.sns.cd
[1,]  1.031139     1.028558
\end{Soutput}
\begin{Sinput}
R> # varying-dispersion
R> coeff.smp <- array(NA, dim=c(nsmp, 2*K))
R> coeff.tmp <- rep(0, 2*K)
R> for (n in 1:nsmp) {
+   coeff.tmp <- sns(coeff.tmp, fghEval=loglike.linreg
+     , X=X, y=y, fgh=2, vd = T, rnd = F)
+   coeff.smp[n,] <- coeff.tmp
+ }
R> beta.sns.vd <- colMeans(coeff.smp[(nsmp/2+1):nsmp, 1:K])
R> gamma.sns.vd <- colMeans(coeff.smp[(nsmp/2+1):nsmp, K+1:K])
R> cbind(beta.dglm, beta.sns.vd)
\end{Sinput}
\begin{Soutput}
    beta.dglm beta.sns.vd
X1 -0.2702744  -0.2702757
X2 -0.5500612  -0.5500678
X3  0.6352290   0.6352354
X4 -0.3866443  -0.3866514
X5  0.1004077   0.1004075
\end{Soutput}
\begin{Sinput}
R> cbind(gamma.dglm, gamma.sns.vd)
\end{Sinput}
\begin{Soutput}
   gamma.dglm gamma.sns.vd
X1  0.3183887    0.3184326
X2  0.5214325    0.5214858
X3  0.2689971    0.2689735
X4  0.6318339    0.6318852
X5 -0.4145652   -0.4145799
\end{Soutput}
\end{Schunk}
Note that the mean coefficients from \code{lm} and \pkg{RegressionFactory} match exactly in the constant-dispersion case, but the dispersion parameters do not, since the classical estimate of dispersion is based on the method of moments rather than log-likelihood maximization. In the varying-dispersion scenario, since the mean and dispersion coefficients are estimated simultaneously, neither set matches exactly between the two methods, but they are very close, and the discrepancy becomes smaller for larger data sets.
\subsection{Example 3: Geometric regression}\label{subsection-geometric}
In this last example, we illustrate how a new GLM regression can easily be constructed using the \pkg{RegressionFactory} framework. This involves three steps: 1) identify a base distribution, 2) select the link function(s), and 3) combine 1 and 2 to arrive at the log-likelihood function and its derivatives, preferably with a provably negative-definite Hessian. According to Theorem~\ref{theorem:concav_1}, this property can be proven in the base-distribution space, which is often quite easy. Consider the geometric distribution, here parameterized by the number of failures $y \in \{0, 1, 2, \ldots\}$ before the first success (matching \proglang{R}'s \code{rgeom}):
\begin{equation}
P(y=k; p) = (1-p)^{k} \, p.
\end{equation}
Using a logit link function for $p$, we arrive at the following log-likelihood:
\begin{equation}
f(u; y) = - \left( y \, u + (1 + y) \, \log\left(1 + e^{-u}\right) \right).
\end{equation}
Concavity of the above function is easily verified:
\begin{equation}
f_{uu} = -(1 + y) \, e^u / (1 + e^u)^2 < 0.
\end{equation}
The base function \code{fbase1.geometric.logit} implements the above log-likelihood and its first two derivatives. To test the function, we first simulate data from the distribution:
\begin{Schunk}
\begin{Sinput}
R> N <- 1000
R> K <- 5
R> X <- matrix(runif(N*K, min=-0.5, max=+0.5), ncol=K)
R> beta <- runif(K, min=-0.5, max=+0.5)
R> y <- rgeom(N, prob = 1/(1+exp(-X %*% beta)))
\end{Sinput}
\end{Schunk}
We now use SNS in non-stochastic mode (i.e., Newton optimization) to estimate the coefficients. We begin with our usual thin wrapper around the expander function to fully implement the log-likelihood.
\begin{Schunk}
\begin{Sinput}
R> loglike.geometric <- function(beta, X, y, fgh) {
+   regfac.expand.1par(beta, X, y, fbase1.geometric.logit, fgh)
+ }
R> beta.est <- rep(0,K)
R> for (n in 1:10) {
+   beta.est <- sns(beta.est, fghEval=loglike.geometric
+     , X=X, y=y, fgh=2, rnd = F)
+ }
R> cbind(beta, beta.est)
\end{Sinput}
\begin{Soutput}
           beta   beta.est
[1,]  0.3631200  0.4260850
[2,]  0.1219817  0.1695647
[3,] -0.3779823 -0.4790984
[4,] -0.4623841 -0.2790712
[5,] -0.1732932 -0.3219395
\end{Soutput}
\end{Schunk}
\section{Summary}\label{section-summary}
We presented the \proglang{R} package \pkg{RegressionFactory}, a modular framework for evaluating GLM log-likelihood functions and their derivatives. We illustrated its utility in rapidly developing composite GLM models such as Hierarchical Bayesian models, as well as new regression models such as geometric and exponential regression. The accompanying definiteness-invariance theorem allows us to reason about the log-likelihood Hessian in a much lower-dimensional space. Another advantage of our modular implementation is that it allows performance optimization strategies to be readily applied across all GLM models.
For example, the linear algebra steps contained in the expansion functions \code{regfac.expand.1par} and \code{regfac.expand.2par} can be thoroughly studied from the following perspectives:
\begin{itemize}
\item Row-major vs. column-major layout of the covariate matrices $\mathbf{X}^j$, for single-threaded and multi-threaded scenarios.
\item Non-Uniform Memory Access (NUMA) implications of memory allocation for the $\mathbf{X}^j$.
\item Loop and cache fusion strategies.
\item Coarse- vs. fine-grained parallelization in composite models such as HB.
\end{itemize}
While base functions contain model-specific code, they also present broad optimization opportunities. For example, they are all vectorized by definition, suggesting that they can benefit from optimized Single-Instruction, Multiple-Data (SIMD) implementations. In particular, access to vectorized transcendental functions can greatly improve the performance of many base functions. Many of the above issues have been studied in \citep{mahani2013simd}. A natural next step for \pkg{RegressionFactory} would be to implement the expander and base functions in compiled code such as \proglang{C/C++}, which would make many of these advanced optimization techniques applicable.
\chapter{Towards Distributed Petascale Computing}\label{ch01} \section{Introduction}\label{ch01sec01} Recent advances in experimental techniques have opened up new windows into physical and biological processes on many levels of detail. The resulting data explosion requires sophisticated techniques, such as grid computing and collaborative virtual laboratories, to register, transport, store, manipulate, and share the data. The complete cascade from the individual components to the fully integrated multi-science systems crosses many orders of magnitude in temporal and spatial scales. The challenge is to study not only the fundamental processes on all these separate scales, but also their mutual coupling through the scales in the overall multi-scale system, and the resulting emergent properties. These complex systems display endless signatures of order, disorder, self-organization and self-annihilation. Understanding, quantifying and handling this complexity is one of the biggest scientific challenges of our time \cite{Barabasi2005}. In this chapter we will argue that studying such multi-scale multi-science systems gives rise to inherently hybrid models containing many different algorithms best serviced by different types of computing environments (ranging from massively parallel computers, via large-scale special purpose machines to clusters of PC's) whose total integrated computing capacity can easily reach the PFlop/s scale. Such hybrid models, in combination with the by now inherently distributed nature of the data on which the models `feed' suggest a distributed computing model, where parts of the multi-scale multi-science model are executed on the most suitable computing environment, and/or where the computations are carried out close to the required data (i.e. bring the computations to the data instead of the other way around). 
Prototypical examples of multi-scale multi-science systems come from bio-medicine, where we have data from virtually all levels between `molecule and man' and yet we have no models with which we can study these processes as a whole. The complete cascade from the genome, proteome, metabolome, and physiome to health constitutes a multi-scale, multi-science system, and crosses many orders of magnitude in temporal and spatial scales \cite{Finkelstein2004, Sloot2006}. Studying biological modules, their design principles, and their mutual interactions, through an interplay between experiments and modeling and simulation, should lead to an understanding of biological function and to a prediction of the effects of perturbations (e.g. genetic mutations or the presence of drugs) \cite{Ventura2006}. A good example of the power of this approach, in combination with state-of-the-art computing environments, is provided by the study of heart physiology, where a true multi-scale simulation, going from genes, to cardiac cells, to the biomechanics of the whole organ, is now feasible \cite{Noble2002}. This `from genes to health' is also the vision of the Physiome project \cite{Hunter2003, Hunter2006} and the ViroLab \cite{virolab, Sloot2006b}, where a multi-scale modeling and simulation of human physiology is the ultimate goal. The wealth of data now available from many years of clinical and epidemiological research and (medical) informatics, and advances in high-throughput genomics and bioinformatics, coupled with recent developments in computational modeling and simulation, put us in an excellent position to take the next steps towards understanding the physiology of the human body across the relevant $10^9$ range of spatial scales (nm to m) and $10^{15}$ range of temporal scales ($\mu$s to a human lifetime), and to apply this understanding in the clinic.
\cite{Hunter2003, Ayache2005} Examples of multi-scale modeling are increasingly emerging (see, for example, \cite{Davies2005, Iribe2006, Kelly2006, Sloot2005}). In Section \ref{ch01sec02} we will consider the Grid as the obvious choice for a distributed computing framework, and we will then explore the potential of computational grids for Petascale computing in Section \ref{ch01sec03}. Section \ref{ch01sec04} presents the \emph{Virtual Galaxy} as a typical example of a multi-scale multi-physics application requiring distributed Petaflop/s computational power.
\section{Grid Computing}\label{ch01sec02}
The radical increase in the amount of IT-generated data from physical, living and social systems brings about new challenges related to the sheer size of the data. It was this data `deluge' that originally triggered the research into grid computing \cite{Foster2001, Hey2003}. Grid computing is an emerging computing model that provides the ability to share data and instruments and to perform high-throughput computing by taking advantage of many networked computers able to divide process execution across a distributed infrastructure. As the Grid is ever more frequently used for collaborative problem solving in research and science, the real challenge lies in the development of new applications for a new kind of user through virtual organizations. Existing grid programming models are discussed in \cite{Lee2003, bal2004}. Workflow is a convenient way of distributing computations across a grid. A large group of composition languages has been studied for the formal description of workflows \cite{aalst2005}, and they are used for the orchestration, instantiation, and execution of workflows \cite{Ludascher2006}. Collaborative applications are also supported by problem solving environments, which enable users to handle application complexity with web-accessible portals for sharing software, data, and other resources \cite{pse2005}.
Systematic ways of building grid applications are provided through object-oriented and component technology, for instance the Common Component Architecture, which combines the IDL-based distributed framework concept with the requirements of scientific applications \cite{cca2006}. Some recent experiments with computing across grid boundaries, workflow composition of Grid services with semantic description, and the development of collaborative problem solving environments are reported in \cite{malawski2006, wcf2005, cross}. These new computational approaches should transparently exploit the dynamic nature of the Grid and the virtualization of the grid infrastructure. A key challenge is the efficient use of knowledge for the automatic composition of applications \cite{kwfgrid}. Allen et al. in \cite{Allen2003} distinguish four main types of grid applications: (1) Community-centric; (2) Data-centric; (3) Computation-centric; and (4) Interaction-centric. Data-centric applications are, and will continue to be, the main driving force behind the Grid. Community-centric applications are about bringing people or communities together, as e.g. in the Access Grid, or in distributed collaborative engineering. Interaction-centric applications are those that require `a man in the loop', for instance in real-time computational steering of simulations or visualizations (as e.g. demonstrated by the CrossGrid project \cite{cross}). In this chapter we focus on Computation-centric applications. These are the traditional High Performance Computing (HPC) and High Throughput Computing (HTC) applications which, according to Allen et al. \cite{Allen2003}, ``turned to parallel computing to overcome the limitations of a single processor, and many of them will turn to Grid computing to overcome the limitations of a parallel computer.'' In the case of parameter sweep (i.e. HTC) applications this has already happened. Several groups have demonstrated successful parameter sweeps on a computational Grid (see e.g.
\cite{Sudholt2004}). For tightly coupled HPC applications this is not so clear, as common wisdom holds that running a tightly coupled parallel application in a computational grid (in other words, a parallel job actually running on several parallel machines that communicate with each other in a Grid) is of no general use because of the large overheads induced by communication between computing elements (see e.g. \cite{Lee2003}). However, in our opinion this certainly is a viable option, provided the granularity of the computation is large enough to overcome the admittedly large communication latencies that exist between compute elements in a Grid \cite{Hoekstra2005}. For PFlop/s scale computing we can assume that such large granularity will be reached. Recently, a Computation-centric application running in parallel on compute elements located in Poland, Cyprus, Portugal, and the Netherlands was successfully demonstrated \cite{Tirado2005, Gualandris2007}.
\section{Petascale Computing on the Grid}\label{ch01sec03}
Execution of multi-scale multi-science models on computational grids will in general involve a diversity of computing paradigms. On the highest level, functional decompositions may be performed, splitting the model into sub-models that may involve different types of physics. For instance, in a fluid-structure interaction application the functional decomposition leads to one part modeling the structural mechanics, and another part modeling the fluid flow. In this example the models are tightly coupled and exchange detailed information (typically, boundary conditions at each time step). On a lower level one may again find a functional decomposition, but at some point one encounters single-scale, single-physics sub-models, which can be considered the basic units of the multi-scale multi-science model.
For instance, in a multi-scale model for crack propagation, the basic units are continuum mechanics at the macroscale, modeled with finite elements, and molecular dynamics at the microscale \cite{Broughton1999}. Another example is provided by Plasma Enhanced Vapor Deposition, where mutually coupled chemical, plasma-physical and mechanical models can be distinguished \cite{Lera2005}. In principle all basic modeling units can be executed on a single (parallel) computer, but they can also be distributed over several machines in a computational grid. These basic model units will be large-scale simulations by themselves. With an overall performance on the PFlop/s scale, it is clear that the basic units will also be running at impressive speeds. It is difficult to estimate the number of such basic model units. In the example of the fluid-structure interaction there are two, running concurrently. However, in the case of, for instance, a multi-scale system modeled with the Heterogeneous Multiscale Method \cite{E2007}, there could be millions of instances of a microscopic model that in principle can execute concurrently (one on each macroscopic grid point). So, for the basic model units we will find anything between single-processor execution and massively parallel computation. A computational grid offers many options for mapping the computations to computational resources. First, the basic model units can be mapped to the most suitable resources. So, a parallel solver may be mapped to massively parallel computers, whereas for other solvers special purpose hardware may be available, or just single PCs in a cluster. Next, a distributed simulation system is required to orchestrate the execution of the multi-scale multi-science models. A computational grid is an appropriate environment for running functionally decomposed distributed applications.
A good example of research and development in this area is the CrossGrid Project, which aimed at the elaboration of a unified approach to developing and running large-scale interactive, distributed, compute- and data-intensive applications, such as biomedical simulation and visualization for vascular surgical procedures, a flooding crisis team decision support system, distributed data analysis in high energy physics, and air pollution modeling combined with weather forecasting \cite{cross}. The following issues were of key importance in this research and will also play a pivotal role on the road towards distributed PFlop/s scale computing on the Grid: porting applications to the grid environment; development of user interaction services for interactive startup of applications, online output control, parameter study in the cascade, and runtime steering; and on-line, interactive performance analysis based on on-line monitoring of grid applications. The elaborated CrossGrid architecture consists of a set of self-contained subsystems divided into layers of applications, software development tools and Grid services \cite{cro-arch}. Large scale grid applications require on-line performance analysis. The application monitoring system, OCM-G, is a unique online monitoring system in which requests and response events are generated dynamically and can be toggled at runtime. This imposes much less overhead on the application and therefore can provide more accurate measurements for a performance analysis tool like G-PM, which can display (in the form of various metrics) the behavior of Grid applications \cite{ocm-g}. The High Level Architecture (HLA) fulfills many requirements of distributed interactive applications. HLA and the Grid may complement each other to support distributed interactive simulations. The G-HLAM system supports the execution of legacy HLA federates on the Grid without imposing major modifications on applications.
To achieve efficient execution of HLA-based simulations on the Grid, we introduced migration and monitoring mechanisms for such applications. This system has been applied to run two complex distributed interactive applications: an N-body simulation and virtual bypass surgery \cite{cro-hla}. In the next section we explore in some detail a prototypical application where all the aforementioned aspects need to be addressed to obtain distributed Petascale computing.
\section{The Virtual Galaxy}\label{ch01sec04}
A grand challenge in computational astrophysics, requiring \emph{at least} the PFlop/s scale, is the simulation of the physics of formation and evolution of large spiral galaxies like the Milky-way. This requires the development of a hybrid simulation environment to cope with the multiple time scales, the broad range of physics and the sheer number of simulation operations \cite{Makino2005a, Hut2006}. The nearby grand-design spiral galaxy M31 in the constellation Andromeda, as displayed in Fig.\,\ref{fig:M31}, provides an excellent bird's-eye view of how the Milky-way probably looks. This section presents the Virtual Galaxy as a typical example of a multi-physics application that requires PFlop/s computational speeds, and has all the right properties to be mapped to distributed computing resources. We will introduce in some detail the relevant physics and the expected amount of computation (i.e. Flop) needed to simulate a Virtual Galaxy. Solving Newton's equations of motion for any number of stars is a challenge by itself, and performing this in an environment with as many stars as in the Galaxy, over an enormous range of density contrasts, and with the inclusion of additional chemical and nuclear physics, does not make the task any easier. No single computer will be able to perform the resulting multitude of computations, and therefore it provides an excellent example for a hybrid simulation environment containing a wide variety of distributed hardware.
We end this section with a discussion of how a Virtual Galaxy simulation could be mapped to a PFlop/s scale grid computing environment. We believe that the scenarios that we outline are prototypical and also apply to a multitude of other multi-science multi-scale systems, like the ones discussed in Sections~\ref{ch01sec01} and \ref{ch01sec03}.
\begin{figure}
\begin{center}
\psfig{figure=Andromeda.eps,width=0.7\textwidth}
\end{center}
\caption{\label{fig:M31} The Andromeda Nebula, M31. A mosaic of hundreds of Earth-based telescope pointings was needed to make this image.}
\end{figure}
\subsection{A Multi-Physics model of the Galaxy}
The Galaxy today contains a few times $10^{11}$ solar masses (\mbox{${\rm M}_\odot$}) in gas and stars. The life cycle of the gas in the Galaxy is illustrated in Fig.\,\ref{fig:gas2gas}, where we show how gas transforms into star clusters, which in turn dissolve into individual stars. A self-consistent model of the Milky-way Galaxy is based on these same three ingredients: the gas, the star clusters and the field stellar population. The computational cost and physical complexity of simulating each of these ingredients can be estimated based on the adopted algorithms.
\begin{figure}
\begin{center}
\psfig{figure=fig_gas2gas.eps,width=0.5\textwidth}
\end{center}
\caption{\label{fig:gas2gas} Schematic representation of the evolution of the gas content of the Galaxy.}
\end{figure}
\subsubsection{How gas turns into star clusters} \label{Sect:gas2SCs}\label{Sect:GD}
Stars and star clusters form from giant molecular clouds, which collapse when they become dynamically unstable. The formation of stars and star clusters is coupled to the galaxy formation process. The formation of star clusters themselves has been addressed by many research teams, and most of the calculations in this regard are a technical endeavor which is mainly limited by the lack of resources.
Simulations of the evolution of a molecular cloud up to the moment it forms stars are generally performed with adaptive mesh refinement and smoothed particle hydrodynamics (SPH) algorithms. These simulations are complex, and some calculations include turbulent motion of the gas \cite{Bate2005}, solve the full magneto-hydrodynamic equations \cite{Zengin2004,Whitehouse2006}, or include radiative transport \cite{Padoan2002}. All the currently performed dynamical cloud collapse simulations are computed with a relatively limited accuracy in the gravitational dynamics. We adopt the smoothed particle hydrodynamics methodology to calculate the gravitational collapse of a molecular cloud, as it is relatively simple to implement and has scalable numerical complexity. These simulation environments are generally based on the Barnes-Hut tree code \cite{Barnes1986} for resolving the self-gravity between the gas or dust volume or mass elements, and have a ${\cal O}(n_{\rm SPH}\log n_{\rm SPH})$ time complexity \cite{Kawai2004}. Simulating the collapse of a molecular cloud requires at least $\sim 10^3$ SPH particles per star; a star cluster that eventually (after the simulation) consists of ${\cal O}(10^4)$ stars then requires about $n_{\rm SPH} \sim 10^{7}$ SPH particles. The collapse of a molecular cloud lasts for about $\tau_J \simeq 1/\sqrt{G \rho}$, which for a $10^4$\,\mbox{${\rm M}_\odot$}\, molecular cloud with a size of 10\,pc is about a million years. Within this time span the molecular cloud will have experienced roughly $10^4$ dynamical time scales, bringing the total CPU requirement to about ${\cal O}(10^{11})$ Flop for calculating the gravitational collapse of one molecular cloud.
\subsubsection{The evolution of the individual stars} \label{Sect:stars2gas}\label{Sect:SE}
Once most of the gas is cleared from the cluster environment, an epoch of rather clean dynamical evolution, mixed with the evolution of single stars and binaries, starts.
In general, star cluster evolution in this phase may be characterized by a competition between stellar dynamics and stellar evolution. Here we focus mainly on the nuclear evolution of the stars. With the development of shell-based Henyey codes \cite{Eggleton2006}, the nuclear evolution of a single star over its entire lifetime requires about $10^9$ Flop \cite{Makino1990}. Due to efficient step size refinement the performance of the algorithm is independent of the lifetime of the star; a 100\,\mbox{${\rm M}_\odot$}\, star is as expensive in terms of compute time as a 1\,\mbox{${\rm M}_\odot$}\, star. Adopting the mass distribution with which stars are born \cite{Kroupa1990}, about one in six stars requires a complete evolutionary calculation. The total compute time for evolving all the stars in the Galaxy over its full lifetime then turns out to be about $10^{20}$\,Flop. Most ($\gtrsim 99$\%) of the stars in the Galaxy will not do much apart from burning their internal fuel. To reduce the cost of stellar evolution we can therefore parameterize the evolution of such stars. Excellent stellar evolution prescriptions at a fraction of the cost ($\lesssim 10^4$ Flop) are available \cite{Eggleton1989,Hurley2000}, and could be used for the majority of stars (which is also what we adopted in \S\,\ref{Sect:PM}).
\subsubsection{Dynamical evolution} \label{SCs2stars}\label{Sect:SD}
When a giant molecular cloud collapses, one is left with a conglomeration of bound stars and some residual gas. The latter is blown away from the cluster by the stellar winds and supernovae of the young stars. The remaining gas-depleted cluster may subsequently dissolve into the background on a time scale of about $10^8$\,years. The majority (50--90\%) of star clusters formed in the Galaxy dissolve due to the expulsion of the residual gas \cite{Goodwin1997, Boily2003}.
A recent reanalysis of the cluster population of the Large Magellanic Cloud indicates that this process of {\em infant mortality} is independent of the mass of the cluster \cite{Lamers2005}. Star clusters that survive their infancy engage in a complicated dynamical evolution which is quite intricately coupled with the nuclear evolution of the stars \cite{Portegies2001b}. The dynamical evolution of a star cluster is best simulated using direct $N$-body integration techniques, like NBODY4 \cite{Aarseth1975,Aarseth1999} or the {\tt starlab} software environment \cite{Portegies2001b}. For dense star clusters the compute time is completely dominated by the force evaluation. Since each star exerts a gravitational pull on all other stars, this operation scales as ${\cal O}(N^2)$ for one dynamical time step. The good news is that the density contrast between the cluster central regions and the outskirts can cover nine orders of magnitude, and stars far from the cluster center move regularly whereas central stars have less regular orbits \cite{Gemmeke2006}. By applying smart time stepping algorithms one can reduce the ${\cal O}(N^2)$ to ${\cal O}(N^{4/3})$ without loss of accuracy \cite{Makino1988}. In fact, one actually gains accuracy, since taking many unnecessarily small steps for a regularly integrable star suffers from numerical round-off. The GRAPE-6, a special purpose computer for gravitational $N$-body simulations, performs dynamical evolution simulations at a peak speed of about 64\,Tflop/s \cite{Makino2001}, and is extremely suitable for large-scale $N$-body simulations.
\subsubsection{The galactic field stars}\label{Sect:FS}
Stars that are liberated by star clusters become part of the Galactic tidal field. These stars, like the Sun, orbit the Galactic center in regular orbits. The average time scale for one orbital revolution of a field star is about 250\,Myr.
These regularly orbiting stars can be resolved dynamically using a relatively imprecise $N$-body technique; here we adopt the ${\cal O}(N)$ integration algorithm which we introduced in \S\,\ref{Sect:GD}. In order to resolve a stellar orbit in the Galactic potential about 100 integration time steps are needed. Per Galactic crossing time (250\,Myr) this code then requires about $10^6$ operations per star, resulting in a few times $10^7 N$ Flop for simulating the field population. Note that simulating the galactic field population is a trivially parallel operation, as the stars hover around in their self-generated potential. \subsection{A performance model for simulating the Galaxy}\label{Sect:PM} Next we describe the required computer resources as a function of the lifetime of a Virtual Galaxy. The model is relatively simple and the embedded physics is only approximate, but it gives an indication of which type of calculation is most relevant in which stage of the evolution of the Galaxy. In the model we start the evolution of the Galaxy with amorphous gas. We subsequently assume that molecular clouds are formed with a power-law mass function of index $-2$ between $10^3$\,\mbox{${\rm M}_\odot$}\, and $10^7$\,\mbox{${\rm M}_\odot$}, with a formation-time distribution which is flat in $\log t$. We assume that each molecular cloud lives between 10\,Myr and 1\,Gyr (with uniform probability between these limits). The star formation efficiency is 50\%, and a cluster has an 80\% chance of dissolving within 100\,Myr (irrespective of the cluster mass). The other 20\% of the clusters dissolve on a time scale of about $t_{\rm diss} \sim 10 \sqrt{R^3 M}$\,Myr. During this period they lose mass at a constant rate. The field population is enriched with the same amount of mass.
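The bookkeeping of this toy model can be sketched as a small Monte Carlo. The inverse-transform sampling of the $M^{-2}$ mass function follows directly from its cumulative distribution; taking $R$ in pc and $M$ in \mbox{${\rm M}_\odot$}\, in the dissolution-time formula is an assumption of this sketch, not something the model above fixes.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_cloud_masses(n, m_lo=1e3, m_hi=1e7):
    """Draw molecular-cloud masses (Msun) from dN/dM ~ M^-2.

    Inverse-transform sampling: for F(M) = (1/m_lo - 1/M)/(1/m_lo - 1/m_hi)
    the inverse is M = 1 / (1/m_lo - u (1/m_lo - 1/m_hi)).
    """
    u = rng.uniform(size=n)
    return 1.0 / (1.0 / m_lo - u * (1.0 / m_lo - 1.0 / m_hi))

def dissolution_time(m_cluster, r_pc=1.0):
    """Toy-model fate of one cluster: 80% dissolve within 100 Myr
    (infant mortality); the rest on t_diss ~ 10 sqrt(R^3 M) Myr."""
    if rng.uniform() < 0.8:
        return 100.0 * rng.uniform()
    return 10.0 * np.sqrt(r_pc ** 3 * m_cluster)

cloud_masses = sample_cloud_masses(1000)
cluster_masses = 0.5 * cloud_masses      # 50% star formation efficiency
t_diss = np.array([dissolution_time(m) for m in cluster_masses])
```

Histogramming {\tt cluster\_masses} against {\tt t\_diss} over a grid of formation times reproduces the qualitative behaviour shown in the figures below: an early cloud-dominated phase followed by a slowly growing field population.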
\begin{figure} \begin{center} \psfig{figure=fig_Galaxy_History.eps,width=0.7\textwidth} \end{center} \caption{\label{fig:VG} The evolution of the mass content in the Galaxy according to the simple model described in \S\,\ref{Sect:PM}. The dotted curve gives the total mass in giant molecular clouds, the thick dashed curve that in star clusters and the solid curve that in field stars, which come from dissolved star clusters. } \end{figure} \begin{figure} \begin{center} \psfig{figure=fig_FLOPs.eps,width=0.7\textwidth} \end{center} \caption{\label{fig:CPU} The number of floating-point operations expended per million years for the various ingredients of the performance model. The solid, thick short-dashed and dotted curves are as in Fig.\,\ref{fig:VG}. New in this figure are the two dotted and dash-dotted lines near the bottom, which represent the CPU time needed for evolving the field star population (lower dotted curve) and dark matter (bottom curve). } \end{figure} The resulting total mass in molecular clouds, star clusters and field stars is presented in Fig.\,\ref{fig:VG}. At early ages, the Galaxy consists entirely of molecular clouds. After about 10\,Myr some of these clouds collapse to form star clusters and single stars, indicated by the rapidly rising solid (field stars) and dashed (star clusters) curves. The maximum number of star clusters is reached when the Galaxy is about a Gyr old. The field population continues to rise, reaching a value of a few times $10^{11}$\,\mbox{${\rm M}_\odot$}\, at today's age of about 10\,Gyr. By that time the total mass in star clusters has dropped to several $10^9$\,\mbox{${\rm M}_\odot$}, quite comparable with the observed masses of the field population and the star cluster content. In Fig.\,\ref{fig:CPU} we show the evolution of the number of Flop required to simulate the entire Galaxy, as a function of its lifetime.
The Flop count along the vertical axis is given in units of floating-point operations per million years of Galactic evolution. For example, to evolve the Galaxy's population of molecular clouds from 1000\,Myr to 1001\,Myr requires about $10^{16}$ Flop. \subsection{Petascale simulation of a Virtual Galaxy}\label{Sect:VGSim} From Fig.\,\ref{fig:CPU} we see that the most expensive submodels in a Virtual Galaxy are the star cluster simulations, the molecular cloud simulations, and the field star simulations. In the following discussion we neglect the other components. A Virtual Galaxy model, viewed as a multi-scale multi-physics model, can then be decomposed as in Fig.\,\ref{fig:VGdecomp}. \begin{figure} \begin{center} \psfig{figure=VGdecomp.eps,width=0.7\textwidth} \end{center} \caption{\label{fig:VGdecomp} Functional decomposition of the Virtual Galaxy} \end{figure} By far the most expensive operations are the star cluster computations. We have $O(10^4)$ star clusters, and each cluster can be simulated independently of the others. This means that a further decomposition is possible, down to the individual cluster level. A single star cluster simulation, containing $O(10^4)$ stars, still requires computational speeds at the TFlop/s scale (see also below). The cluster simulations require $10^{21}$ Flop per simulated Myr of lifetime of the Galaxy. The molecular clouds plus the field stars need, on average over the full lifetime of the Galaxy, $10^{15}$ Flop per simulated Myr of lifetime, and can be executed on general-purpose parallel machines. A distributed Petascale computing infrastructure for the Virtual Galaxy could consist of one or two general-purpose parallel machines to execute the molecular clouds and field stars at a sustained performance of 1 TFlop/s, and a distributed grid of special-purpose GRAPEs to simulate the star clusters.
We envision for instance 100 next-generation GRAPE-DR systems\footnote{ Currently some 100 GRAPE-6 systems, delivering an average performance of 100 GFlop/s, are deployed all over the world.}, each delivering 10 Tflop/s, providing a sustained 1 PFlop/s for the star cluster computations. We can now estimate the expected runtime of a Virtual Galaxy simulation on this infrastructure. In Table\,\ref{tab:run} we present the estimated wall-clock time needed for simulating the Milky Way Galaxy, a smaller subset and a dwarf galaxy using the distributed Petascale resource described above. Note that for the reduced Galaxies the execution time goes down linearly with the reduction factor, which should be understood as a reduction of the mass in molecular clouds and of the total number of star clusters (but with the same number of stars per star cluster). \begin{table} \tabletitle{Estimated run times of the Virtual Galaxy simulation on a distributed Petascale architecture as described in the main text.} \label{tab:run} \begin{tabular}{|c|c|c|c|} Age & Milky Way Galaxy & Factor 10 reduction & Dwarf Galaxy \\ & & &(factor 100 reduction)\\ \hline 10 Myr & 3 hours & 17 min. & 2 min. \\ 100 Myr & 3 years & 104 days & 10 days \\ 1 Gyr & 31 years & 3 years & 115 days \\ 10 Gyr & 320 years & 32 years & 3 years \\ \end{tabular} \end{table} With such a performance it will be possible to simulate the entire Milky Way Galaxy for about 10\,Myr, an interesting time scale on which stars form, massive stars evolve and infant mortality of newly born star clusters operates. Simulating the entire Milky Way Galaxy on this important time scale will enable us to study these phenomena with unprecedented detail. At the same performance it will be possible to simulate part (1/10th) of the Galaxy on a time scale of 100\,Myr. This time scale is important for the evolution of young and dense star clusters, the major star formation mode in the Galaxy.
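The late-time entries of Table\,\ref{tab:run} follow from simple arithmetic, sketched below under the assumption of a constant cost of $10^{21}$ Flop per simulated Myr (which holds only at late Galactic ages; at early ages the cost per Myr is lower, so the 10 and 100\,Myr rows come out much cheaper than this constant-cost estimate).

```python
def wall_clock_years(age_myr, flop_per_myr=1e21, sustained_flops=1e15):
    """Wall-clock time (years) to simulate `age_myr` of Galactic evolution
    at a sustained speed, assuming a constant cost per simulated Myr."""
    seconds = age_myr * flop_per_myr / sustained_flops
    return seconds / 3.15e7   # seconds per year

# 10 Gyr of star cluster evolution on a sustained PFlop/s:
t10gyr = wall_clock_years(1e4)   # a few hundred years, as in the Table
```

The same function with `sustained_flops=1e16` shows why a factor-10 reduction of the Galaxy (or a tenfold faster machine) brings the 10\,Gyr run down to decades.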
Simulating a dwarf galaxy, like the Large Magellanic Cloud, for its entire lifetime will become possible with a PFlop/s-scale distributed computer. The physiology of this galaxy is largely not understood, as is the intricate coupling between stellar dynamics, gas dynamics, stellar evolution and dark matter. \section{Discussion and Conclusions}\label{ch01sec05} Multi-scale multi-science modeling is the next (grand) challenge in Computational Science, not only in terms of formulating the required couplings across the scales or between multi-science models, but also in terms of the sheer computational complexity of such models. The latter can easily result in requirements on the PFlop/s scale. We have argued that simulating these models involves high-level functional decompositions, finally resulting in some collection of single-scale single-science sub-models that by themselves could be quite large, requiring simulations on e.g. massively parallel computers. In other words, the single-scale single-science sub-models would typically involve some form of High-Performance or High-Throughput Computing. Moreover, they may have quite different demands on compute infrastructure, ranging from supercomputers, via special-purpose machines, to the single workstation. We have illustrated this by pointing to a few models from biomedicine and, in more detail, in the discussion of the Virtual Galaxy. We believe that the Grid provides the natural distributed computing environment for such functionally decomposed models. The Grid has reached a stage of maturity such that, in essence, all the necessary ingredients needed to develop a PFlop/s-scale computational grid for multi-scale multi-science simulations are available. Moreover, in a number of projects grid-enabled functionally decomposed distributed computing has been successfully demonstrated, using many of the tools that were discussed in Section \ref{ch01sec02}.
Despite these successes, experience with computational grids is still relatively limited. Therefore, a real challenge lies ahead in actually demonstrating the feasibility of Grids for distributed Petascale computing, and in realizing Grid-enabled Problem Solving Environments for multi-scale multi-science applications.
\newcommand{\SECTION}[1]{\bigskip{\large\section{\bf #1}}} \newcommand{\SUBSECTION}[1]{\bigskip{\large\subsection{\bf #1}}} \newcommand{\SUBSUBSECTION}[1]{\bigskip{\large\subsubsection{\bf #1}}} \begin{titlepage} \begin{center} \vspace*{2cm} {\large \bf Retarded electric and magnetic fields of a moving charge: Feynman's derivation of Li\'{e}nard-Wiechert potentials revisited} \vspace*{1.5cm} \end{center} \begin{center} {\bf J.H.Field } \end{center} \begin{center} { D\'{e}partement de Physique Nucl\'{e}aire et Corpusculaire, Universit\'{e} de Gen\`{e}ve, 24, quai Ernest-Ansermet, CH-1211 Gen\`{e}ve 4.} \end{center} \begin{center} {e-mail: john.field@cern.ch} \end{center} \vspace*{2cm} \begin{abstract} Retarded electromagnetic potentials are derived from Maxwell's equations and the Lorenz condition. The difference found between these potentials and the conventional Li\'{e}nard-Wiechert ones is explained by the neglect, for the latter, of the motion-dependence of the effective charge density. The corresponding retarded fields of a point-like charge in arbitrary motion are compared with those given by the formulae of Heaviside, Feynman, Jefimenko and other authors. The fields of an accelerated charge given by the Feynman formulae are the same as those derived from the Li\'{e}nard-Wiechert potentials, but not the same as those given by the Jefimenko formulae. A mathematical error concerning partial space and time derivatives in the derivation of the Jefimenko equations is pointed out. \end{abstract} \vspace*{1cm}{\it Keywords}: Special Relativity, Classical Electrodynamics. \newline \vspace*{1cm} PACS 03.30+p 03.50.De \end{titlepage} \SECTION{\bf{Introduction}} The present paper is the fifth in a series written recently by the present author on relativistic classical electrodynamics (RCED).
In the first of the papers~\cite{JHFRCED}, all of the formulae of classical electromagnetism (CEM), up to relativistic corrections of O($\beta^2$), relating to intercharge forces, were derived from Hamilton's Principle, assuming only Coulomb's inverse-square force law of electrostatics and relativistic invariance. In the same paper it was shown that the intercharge force, mediated by the exchange of space-like virtual photons, is predicted by quantum electrodynamics (QED) to be instantaneous in the centre-of-mass frame of the interacting charges. Recently, convincing experimental evidence has been obtained~\cite{Kohletal} for the non-retarded nature of `bound' magnetic fields with $r^{-2}$ dependence (associated in QED with virtual photon exchange) in a modern version, probing small $r$ values, of the Hertz experiment~\cite{Hertz} in which the electromagnetic waves associated with the propagation of real photons (fields with $r^{-1}$ dependence) were originally discovered. \par In two subsequent papers~\cite{JHFRSKO,JHFIND} the predictions of the RCED formulae for intercharge forces derived in Ref.~\cite{JHFRCED} are compared with the predictions of the CEM (Heaviside) formulae~\cite{Heaviside} for the force fields of a uniformly moving charge. Unlike the RCED formulae, the CEM ones correspond to a retarded interaction. If the latter are written in `present time' form~\cite{PPPT} they are found to differ from the RCED formulae by terms of O($\beta^2$). In the first paper~\cite{JHFRSKO}, it is shown that consistent results for small-angle Rutherford scattering in different inertial frames are obtained only for the RCED formulae and that stable, circular, Keplerian orbits of a system consisting of two equal and opposite charges are impossible for the case of the retarded CEM fields. The related `Torque Paradox' of Jackson~\cite{JackTP} is also resolved by use of the instantaneous RCED fields.
The second paper~\cite{JHFIND} considers electromagnetic induction in different reference frames. Again, consistent results are obtained only in the case of RCED fields. It is demonstrated that for a particular spatial configuration of a simple two-charge `magnet' the Heaviside formula for the electric field predicts a vanishing induction effect in the case that the `magnet' is in motion and the test coil is at rest. \par In Ref.~\cite{JHFFT}, the space-time transformation properties of the RCED and CEM force fields were studied in detail and compared with those that provide the classical description of the creation, propagation, and destruction of real photons. It was shown that in the relativistic theory longitudinal (with respect to the direction of motion of the source charge) electric fields contain covariance-breaking terms of O($\beta^2$). The electric Gauss Law and Electrodynamic (Amp\`{e}re Law) Maxwell Equations are also modified by the addition of covariance-breaking terms of O($\beta^4$) and O($\beta^5$) respectively. The retarded fields are re-derived from the Maxwell Equations and the Lorenz condition and an error in the derivation of the retarded Li\'{e}nard-Wiechert (LW)~\cite{LW} potentials was pointed out. The argument leading to this conclusion ---which implies that retarded fields given by the Heaviside formulae are erroneous for this trivial mathematical reason, as well as being inconsistent with QED--- is recalled in Sections 2 and 3 below. \par The aim of the present paper is to present a more detailed discussion of retarded electromagnetic fields with a view to pointing out some of the mathematically erroneous statements on this subject that have appeared in classical research literature, text books and modern pedagogical literature. The correct relativistic formulae for the retarded fields of an accelerated charge have previously been derived in Ref.~\cite{JHFFT}.
These fields actually describe only the production and propagation of real photons whereas in text books and the pedagogical literature it is universally assumed that these fields describe both intercharge forces and radiative effects. Since the present paper is concerned only with the postulates and mathematical arguments used in different derivations of the retarded fields, the physical interpretation of the fields (in particular their relation to the quantum mechanical description of radiation), as discussed in Ref.~\cite{JHFFT}, is not considered. \par The structure of the paper is as follows. In the following section the retarded 4-vector potential is derived from the inhomogeneous d'Alembert equations and the Lorenz condition. The reason for the difference between the potential so obtained and the pre-relativistic LW potentials is explained. In Section 3 Feynman's derivation of the LW potentials is recalled, where the `multiple counting' committed also in the original derivations~\cite{LW} is made particularly transparent. In Section 4 some erroneous `relativistic' derivations of the LW potentials and the Heaviside formulae that are commonly presented in text books on classical electromagnetism are discussed. In Section 5 the retarded fields of a uniformly moving charge are considered and the `present time' formulae for the retarded RCED fields are derived for comparison with the Heaviside formulae of CEM. In Section 6 a comparison is made between different formulae for the retarded fields of an accelerated charge that have appeared in text books and the pedagogical literature including the well-known ones of Feynman and Jefimenko. Section 7 contains a brief summary.
\SECTION{\bf{Derivation of retarded electromagnetic potentials from inhomogeneous d'Alembert equations}} As described in Ref.~\cite{Jack1}, retarded electromagnetic potentials may be derived from the Maxwell equations: \begin{eqnarray} \vec{\nabla} \cdot \vec{{\rm E}} & = & 4 \pi {\rm J}_0, \\ \vec{\nabla} \times \vec{{\rm B}} & - &\frac{1}{c} \frac{\partial \vec{{\rm E}}}{\partial t} = 4 \pi \vec{{\rm J}} \end{eqnarray} and the Lorenz condition \begin{equation} \vec{\nabla} \cdot \vec{{\rm A}} + \frac{1}{c}\frac{\partial {\rm A}_0}{\partial t} = 0 \end{equation} where the current density ${\rm J}$ is a 4-vector: \begin{equation} {\rm J}(\vec{x}_J(t),t) = ({\rm J}_0;\vec{{\rm J}}) \equiv (\gamma_u \rho^*; \gamma_u \vec{\beta}_u \rho^*) = \frac{u \rho^*}{c}. \end{equation} The system of source charges is assumed to be at rest in the frame S$^*$, where the charge density is $\rho^*$, and to move with velocity $\vec{u} = c \vec{\beta}_u$ relative to the frame S in which the potential is defined. The 4-vector velocity of the charge system in this last frame is: \begin{equation} u \equiv (c\gamma_u ; c\gamma_u \vec{\beta}_u ) \end{equation} where \[ \beta_u \equiv \frac{u}{c},~~~\gamma_u \equiv \frac{1}{\sqrt{1-\beta_u^2}}. \] The first step of the calculation is to write the fields in terms of the potentials and to use the Lorenz condition (2.3) to decouple the equations for ${\rm A}_0$ and $\vec{{\rm A}}$, obtaining the inhomogeneous d'Alembert equations: \begin{eqnarray} \nabla^2 {\rm A}_0 -\frac{1}{c^2}\frac{\partial^2 {\rm A}_0}{\partial t^2} & = & -4 \pi {\rm J}_0, \\ \nabla^2 \vec{{\rm A}} -\frac{1}{c^2}\frac{\partial^2 \vec{{\rm A}}}{\partial t^2} & = & -4 \pi \vec{{\rm J}}. \end{eqnarray} These equations are readily solved by introducing appropriate Green functions~\cite{Jack1}.
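For completeness, the intermediate step leading to (2.6) may be sketched. Writing the fields in terms of the potentials, $\vec{{\rm E}} = -\vec{\nabla}{\rm A}_0 - (1/c)\partial\vec{{\rm A}}/\partial t$, and substituting into the Gauss law (2.1) gives

```latex
\vec{\nabla} \cdot \vec{{\rm E}} = -\nabla^2 {\rm A}_0
 - \frac{1}{c}\frac{\partial}{\partial t}
   \left(\vec{\nabla} \cdot \vec{{\rm A}}\right) = 4 \pi {\rm J}_0,
```

after which the Lorenz condition (2.3), in the form $\vec{\nabla}\cdot\vec{{\rm A}} = -(1/c)\partial {\rm A}_0/\partial t$, yields (2.6); Eq. (2.7) follows in the same way from (2.2) with $\vec{{\rm B}} = \vec{\nabla}\times\vec{{\rm A}}$.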
The solutions give the retarded 4-vector potential: \begin{equation} {\rm A}_{\mu}^{ret}(\vec{x}_q,t) = \int dt' \int d^3 x_J(t') \frac{{\rm J}_{\mu}(\vec{x}_J(t'), t')} {|\vec{x}_q -\vec{x}_J(t')|} \delta(t'+\frac{|\vec{x}_q -\vec{x}_J(t')|}{c}-t). \end{equation} Here $\vec{x}_q$ is the position and $t$ the time at which the potential is defined and $\vec{x}_J(t')$ specifies the position of the volume element $d^3 x_J(t')$ at the earlier time $t'$. The $\delta$-function ensures that the volume element lies on the backward light cone of the field point specified by $\vec{x}_q$, as required by causality, since the potentials give the classical description of the propagation, from the source to the field point, of real (on-shell) photons at speed $c$. This is a consequence of the wave-equation-like structure of the terms on the left sides of the d'Alembert equations. The solutions of the corresponding homogeneous d'Alembert equations are progressive waves with phase velocity $c$. \par In the special case of a single point-like source charge the current density in (2.8) is given by the expression: \begin{equation} {\rm J}^Q(\vec{x}_J(t'), t') = \frac{ Q u}{c} \delta (\vec{x}_J(t')-\vec{x}_Q(t')) \end{equation} where $\vec{x}_Q(t')$ is the position of the charge at time $t'$. Inserting (2.9) in (2.8), and integrating over $\vec{x}_J$, gives \begin{equation} {\rm A}_{\mu}^{ret}(\vec{x}_q,t) = \frac{Q u_{\mu}}{c} \int dt' \frac{\delta(t'-t'_Q)} {r'} \end{equation} where \begin{equation} r' \equiv |\vec{x}_q-\vec{x}_Q(t')|,~~~ t'_Q \equiv t - \frac{|\vec{x}_q -\vec{x}_Q(t'_Q)|}{c} = \left. t - \frac{r'}{c} \right|_{t' = t'_Q}. \end{equation} The retarded 4-vector potential is therefore: \begin{equation} ({\rm A}_0^{ret};\vec{{\rm A}}^{ret}) = \left( \left. \frac{Q \gamma_u}{r'} \right|_{t' = t'_Q}; \left. \frac{Q \gamma_u \vec{\beta}_u}{r'}\right|_{t' = t'_Q} \right). 
\end{equation} \par A similar result to (2.12) is obtained for an extended distribution of charge whose dimensions are much less than the separation between the average position of the source charge distribution, $\langle \vec{x}_J\rangle$, and the field point. In this case $\vec{x}_J$ may be replaced in the $\delta$-function and denominator of (2.8) by $\langle \vec{x}_J\rangle$, so that the factor $\langle r' \rangle \equiv |\vec{x}_q -\langle \vec{x}_J \rangle|$ in the denominator may be taken outside the $\vec{x}_J$ integral giving \begin{eqnarray} {\rm A}_{\mu}^{ret}(\vec{x}_q,t) & = & \int\frac{dt'}{\langle r' \rangle} \int d^3 x_J(t') {\rm J}_{\mu}(\vec{x}_J(t'),t') \delta(t'+\frac{|\vec{x}_q - \langle \vec{x}_J\rangle|}{c}-t) \nonumber \\ & = & \int\frac{dt'}{\langle r' \rangle}\frac{u_{\mu}}{c} \int d^3 x_J(t') \rho^*(\vec{x}_J(t'), t') \delta(t'+\frac{|\vec{x}_q -\langle \vec{x}_J\rangle|}{c}-t) \nonumber \\ & = & \int\frac{dt'}{\langle r' \rangle} \frac{u_{\mu} Q }{c} \delta(t'-\langle t'_J \rangle) \end{eqnarray} where $Q$ is the total charge of the distribution: \begin{equation} Q = \int \rho^*(\vec{x}_J(t'), t') d^3 x_J( t') \end{equation} and \begin{equation} \langle t'_J \rangle \equiv t - \frac{|\vec{x}_q -\langle \vec{x}_J \rangle|}{c} = t - \frac{\langle r' \rangle}{c} \end{equation} giving the 4-vector potential: \begin{equation} ({\rm A}_0^{ret};\vec{{\rm A}}^{ret}) = \left( \left. \frac{Q \gamma_u}{\langle r' \rangle} \right|_{t' = \langle t'_J \rangle}; \left. \frac{Q \gamma_u \vec{\beta}_u}{ \langle r' \rangle}\right|_{t' = \langle t'_J \rangle} \right).
\end{equation} \par It is now of interest, in view of understanding the origin of the LW potentials, to recalculate the retarded potentials after inverting the order of the $t'$ and $\vec{x}_J(t')$ integrations in (2.8), so that: \begin{equation} {\rm A}^{ret}_{\mu}(\vec{x}_q,t) = \int d^3 x_J(t') \int dt' \frac{{\rm J}_{\mu}(\vec{x}_J(t'), t')} {|\vec{x}_q -\vec{x}_J(t')|} \delta(t'+\frac{|\vec{x}_q -\vec{x}_J(t')|}{c}-t). \end{equation} Unlike in (2.10), where the insertion of the current density of a point-like charge, (2.9), simply specifies the value of $t'$ in the $\delta$-function to be $t'_Q$, as given by Eq.(2.11), on integrating over $\vec{x}_J$, the argument of the $\delta$-function in (2.17) has a more complicated dependence on $t'$: \begin{equation} \delta [f(t')] = \frac{\delta(t'-t'_J)}{~~~~\left|\frac{\partial f(t')}{\partial t'}\right|_{t' = t'_J}} \end{equation} where $t'_J$ is the solution of the equation $f(t')=0$ and \begin{equation} f(t') \equiv t' +\frac{|\vec{x}_q -\vec{x}_J(t')|}{c}-t. \end{equation} It follows from (2.19), and the definition of $t'_J$, that \begin{equation} t'_J = t - \frac{|\vec{x}_q -\vec{x}_J(t'_J)|}{c}. \end{equation} Differentiating (2.19) gives: \begin{equation} \frac{\partial f(t')}{\partial t'} = 1-\hat{r}'_J \cdot \vec{\beta}_u \end{equation} where: \begin{equation} \hat{r}'_J = \frac{\vec{x}_q -\vec{x}_J(t_J')}{|\vec{x}_q -\vec{x}_J(t_J')|},~~~ \vec{\beta}_u = \frac{1}{c}\frac{d \vec{x}_J(t')}{d t'} \end{equation} so that (2.17) may be written as \begin{equation} {\rm A}^{ret}_{\mu}(\vec{x}_q,t) = \int d^3 x_J(t') \int dt' \frac{{\rm J}_{\mu}(\vec{x}_J(t'), t')} {|\vec{x}_q -\vec{x}_J(t')|(1-\hat{r}'_J \cdot \vec{\beta}_u)} \delta(t'- t_J'). \end{equation} In performing the integral over $t'$, proper account must now be taken of the appropriate current density $J_{\mu}$ to be inserted in (2.23).
The limits of the $t'$ integral are determined by the times at which the backward light cone of the field point coincides with the boundaries of the moving charge distribution. This is illustrated in Fig.1 for a uniform block of charge DEFG, of trapezoidal shape, moving in the plane of the figure towards a distant field point, in this plane, far to the right. The segments AA', BB' and CC' lie on the light front, LF, that coincides with the backward light cone of the field point. It is assumed that the latter is sufficiently far that LF may be approximated by a plane, with normal in the plane of the figure. The block of charge is moving with speed $u$ in the plane of the figure at angle $\theta$ to the direction of motion of LF. The light front starts to overlap the block of charge in the position AA' and ceases to do so in the position CC'. The limits of the $t'$ integral in (2.23) for this case then correspond to the times when the front coincides with AA' (lower limit) and with CC' (upper limit). Inspection of Fig.1 shows that, during the time interval between these limits, the average value of the charge density, $\bar{\rho}$, is less than that when the distribution is at rest, $\rho^*$, by the ratio: \begin{equation} \frac{\ell}{L} = \frac{{ \rm length~of~charge~distribution}}{{\rm length~of~light~cone~overlap~region}}. \end{equation} If $\Delta t'$ is the time during which there is overlap between LF and the block of charge, the geometry of Fig.1 gives: \begin{equation} L = u \Delta t' + \ell = \frac{c \Delta t'}{\cos \theta} \end{equation} so that \begin{equation} \frac{\ell}{L} = 1-\frac{u}{c} \cos \theta = 1-\hat{r}'_J \cdot \vec{\beta}_u.
\end{equation} It can be seen from Fig.1 that the same average charge density is obtained if the uniform block of charge is replaced by a point-like charge, $Q$, equal to the integrated charge of the block and placed at its centre, or if the moving uniform charge distribution is replaced by the fixed one MNOP with density $\bar{\rho}$. For a single point-like charge the appropriate current density in (2.23) is then given by (2.9), (2.24) and (2.26) as: \begin{equation} {\rm J}^Q(\vec{x}_J(t'), t') = (1-\hat{r}'_J \cdot \vec{\beta}_u) \frac{ Q u}{c} \delta (\vec{x}_J(t')-\vec{x}_Q(t')). \end{equation} Inserting (2.27) in (2.23) and performing the integrals over $t'$ and $\vec{x}_J$ recovers the result of Eq.(2.12). The increased overlap time of the light front resulting from the motion of the block of charge is exactly compensated by the reduction of the average charge density resulting from the same motion. The incorrect LW potentials are given by taking into account the time-overlap correction factor but neglecting the corresponding change in the charge density. This gives, instead of (2.12), the potentials \begin{equation} ({\rm A}_0^{ret};\vec{{\rm A}}^{ret}) \equiv (\gamma_u {\rm A}_0({\rm LW})^{ret};\gamma_u\vec{{\rm A}}({\rm LW})^{ret}) = \left( \left. \frac{Q \gamma_u}{r'(1-\hat{r}'_J \cdot \vec{\beta}_u)} \right|_{t' = t'_Q}; \left. \frac{Q \gamma_u \vec{\beta}_u}{r'(1-\hat{r}'_J \cdot \vec{\beta}_u)}\right|_{t' = t'_Q} \right) \end{equation} where ${\rm A}_0({\rm LW})^{ret}$ and $\vec{{\rm A}}({\rm LW })^{ret}$ are the Li\'{e}nard and Wiechert potentials. This mistake in the original Li\'{e}nard and Wiechert~\cite{LW} calculations has been repeated in all text-book treatments of the subject of retarded potentials. Some examples may be found in Refs.~\cite{PPLW,RosserLW,JackLW,SchwartzLW, GriffithsLW}.
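The compensation described above can be displayed in a single line. Inserting (2.27) into (2.23), the density correction factor cancels the $(1-\hat{r}'_J \cdot \vec{\beta}_u)$ factor arising from the $\delta$-function Jacobian:

```latex
{\rm A}^{ret}_{\mu}(\vec{x}_q,t) = \int d^3 x_J(t') \int dt'\,
\frac{(1-\hat{r}'_J \cdot \vec{\beta}_u)\,\frac{Q u_{\mu}}{c}\,
      \delta(\vec{x}_J(t')-\vec{x}_Q(t'))}
     {|\vec{x}_q -\vec{x}_J(t')|\,(1-\hat{r}'_J \cdot \vec{\beta}_u)}\,
\delta(t'- t'_J)
 = \left.\frac{Q u_{\mu}}{c\,r'}\right|_{t' = t'_Q},
```

in agreement with (2.10) and hence with (2.12).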
\par Inspection of (2.23) shows that neglect of the charge density correction factor of (2.27) in evaluating the potentials implies that they are under-estimated when the source is receding ($\hat{r}'_J \cdot \vec{\beta}_u < 0$) and over-estimated when it is approaching ($\hat{r}'_J \cdot \vec{\beta}_u > 0$). On the other hand, the $1/r'$ dependence of the potential implies that the potentials are greater (smaller) when $\hat{r}'_J \cdot \vec{\beta}_u < 0$ ($\hat{r}'_J \cdot \vec{\beta}_u > 0$). It is shown in Section 5 below that for the `present time' LW fields (Eqs.(4.14) and (4.15) below) the neglect of the charge density correction factor results in exact compensation of the $1/r'^2$ dependence so that the magnitudes of the fields are independent of the sign of $\hat{r}'_J \cdot \vec{\beta}_u$, as is the case for an instantaneous intercharge interaction. \par Note that the potentials on the right side of (2.28), derived by neglecting the density correction factor in (2.27), differ from the retarded LW potentials by an overall factor of $\gamma_u$. This factor will be commented on at the end of Section 4 below where alternative `relativistic' derivations of the LW potentials are discussed. \SECTION{\bf{Feynman's derivation of the Li\'{e}nard-Wiechert potentials}} The erroneous nature of the retarded potentials found when the charge density correction factor of Eqs.(2.24) and (2.26) is neglected is made particularly clear by a careful examination of Feynman's derivation~\cite{FeynLW} of the LW potentials for the case of parallel motion of the source distribution and the light front, LF, corresponding to the backward light cone of the field point. 
\begin{figure}[htbp] \begin{center} \hspace*{-0.5cm}\mbox{ \epsfysize9.0cm\epsffile{retardf1c.eps}} \caption{{\sl The plane light front (LF) BB' crosses the block of charge DEFG with uniform charge density $\rho$ while moving from the position AA' to CC'. The light front and the block move in the plane of the figure with speeds $c$ and $u$ respectively in the directions indicated. The average charge density sampled by the light front during its passage over the block is $\bar{\rho} = \rho^* \ell/L$. The retarded potential generated by the charge of the block at a distant field point to the right of the figure is the same as that which would be generated by a block of charge in the form MNOP with the same depth as DEFG, with uniform charge density $\bar{\rho}$, at rest, or by a moving point-like charge $Q =\rho^* V$, where $V$ is the volume of the block DEFG.}} \label{fig-fig1} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \hspace*{-0.5cm}\mbox{ \epsfysize15.0cm\epsffile{retardf2c.eps}} \caption{{\sl Feynman's method of calculating retarded potentials~\cite{FeynLW}. A uniform rectangular block of charge of length {\it l} moves to the right with speed $v$ towards a distant field point. The light front, LF, in causal connection with the field point, overlaps the block for a distance $L$ and a time $T$. In a) LF arrives at the front of the block. The position of the block when LF overtakes it is shown shaded. b) and d) show the positions of LF at times $t = T/5$ and $t = 2T/5$ respectively. The regions of the block sampled by LF in the time intervals $0 < t < T/5$ and $T/5 < t < 2T/5$ are shown by the SW-NE and NW-SE cross-hatched areas, respectively. The similar cross-hatched areas in c) and e) show the charge volumes assigned to the potential integral, during the same time intervals, in Feynman's calculation.
See text for discussion.}} \label{fig-fig2} \end{center} \end{figure} \par Feynman's analysis of the problem of retarded potentials is shown in Fig.2. A rectangular block of charge, of uniform density, moves towards the field point, which is sufficiently far to the right that the variation of $r'_J$ may, as in deriving Eq.(2.16) above, be neglected in evaluating the integral that gives the potential. The light front moves across the charge distribution, sampling it. Each element of charge which is crossed by LF gives a contribution to the potential. The depth of the block of charge is $\ell$ and LF moves over the distance $L$ while crossing the charge distribution. The front overlaps the charge distribution for a time interval $T$. The overlap distance, $L$, is divided into bins of width $w$ and the contribution to the potential of each bin is considered separately. In Fig.2 the dimensions and velocity $u$ are chosen so that: \[ \ell = \frac{2 L}{5},~~~~~w = \frac{L}{5}. \] It then follows that $u = 3c/5$. In this figure, the positions of the charge distribution and the front LF at times 0, $T/5$, $2T/5$ respectively are shown. In Figs.2b,2d the front has crossed charge thicknesses of $0.4w$ and $0.8w$ respectively. The region crossed during the time $0 < t < T/5$ is shown by SW-NE\footnote{The points of the compass: South-West (SW), North-East (NE), North-West (NW) and South-East (SE).} diagonal cross-hatching, that crossed in $T/5< t < 2T/5$ by NW-SE diagonal cross-hatching. Thus the average charge density in each bin is reduced, in comparison with the situation when the charges are at rest, by 60$\%$. Integrating first over the time, as in (2.17), for each bin, then gives: \begin{equation} {\rm A}_{\mu} = \frac{u_{\mu} S}{c r'_J} \sum_{bins}w \bar{\rho} = \frac{u_{\mu} S L \bar{\rho}}{c r'_J} \end{equation} where $S$ is the surface area of the charge distribution normal to its direction of motion and $\bar{\rho}$ is the average charge density.
From the geometry of Fig.2a, $\bar{\rho} = 2 \rho^*/5$, where $\rho^*$ is the rest frame charge density. Since $L = 5 \ell /2$, (3.1) gives: \begin{equation} {\rm A}_{\mu} = \frac{u_{\mu} S \ell \rho^*}{c r'_J} = \frac{u_{\mu} Q}{c r'_J} \end{equation} where $Q$ is the total charge in the block. Allowing for the propagation time delay of the light front with respect to the time of the field point, (3.2) agrees with Eq.(2.16) but not with the LW potentials in (2.28). \par The contributions to the integral given by the first two bins, according to Feynman's original calculation~\cite{FeynLW}, are shown by the SW-NE and NW-SE diagonal hatching in Figs.2c and 2e, respectively. The movement of the charge distribution is neglected, and with it the change in the effective charge density. Feynman's result is given by replacing $\bar{\rho}$ in (3.1) by $\rho^*$, the density of the charge distribution at rest. This gives a result consistent with (2.28), but is evidently wrong, since charge elements are multiply counted during the passage of the light front. For example, a contribution to the integral is assigned proportional to the area of the cross-hatched region to the left of LF in Fig.2c for $t \le T/5$. However, inspection of Fig.2b, showing the actual geometrical configuration at $t = T/5$, shows that, because of the parallel motion of the charge distribution, LF has crossed only the fraction of the region in Fig.2c that is both shaded and cross-hatched, not the entire cross-hatched region. In fact, careful inspection of Fig.~21-6(c) of Ref.~\cite{FeynLW} shows clearly that the contribution due to the passage of the light front over the first bin is overestimated. Only the region of the charge distribution to the left of the light front as shown in this figure has been sampled at this time, not the filled first bin of Fig.~21-6(b) of Ref.~\cite{FeynLW}.
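The sweep kinematics underlying Eqs.(3.1) and (3.2) can be verified numerically. The following Python sketch (not part of the original argument; the numerical values are the illustrative ones of Fig.2, with $u = 3c/5$) computes the sweep distance $L$, the sampled density $\bar{\rho}$, and the product $L\bar{\rho}$ entering (3.1):

```python
# Cross-check of the light-front sweep kinematics of Fig.2 (illustrative
# values). A block of depth ell moves at speed u toward the field point
# while the light front sweeps over it at speed c, so the relative
# closing speed is c - u.
c = 1.0          # units with c = 1
u = 0.6          # u = 3c/5, the value used in Fig.2
ell = 1.0        # depth of the charge block
rho_star = 1.0   # rest-frame charge density

T = ell / (c - u)             # overlap time of front and block
L = c * T                     # sweep distance: L = 5*ell/2 for u = 3c/5
rho_bar = rho_star * ell / L  # average sampled density, rho* ell / L

# The potential integrand per unit transverse area is L * rho_bar,
# which equals ell * rho_star: proportional to the total charge Q,
# with no 1/(1 - u/c) enhancement factor.
print(L, rho_bar, L * rho_bar)
```

The product $L\bar{\rho} = \ell\rho^*$ is what makes (3.2) proportional to $Q$ alone.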
\SECTION{\bf{`Relativistic' derivations of the Li\'{e}nard-Wiechert potentials and the electromagnetic fields of a uniformly moving charge}} As well as the derivation of the LW potentials by consideration of retardation effects, as in the original papers of Li\'{e}nard and Wiechert, text books on classical electromagnetism contain alternative derivations, where no retardation effects are considered, but instead a relativistic `length contraction' effect is invoked. For example, in Ref.~\cite{LLLW1}, the temporal component ${\rm A}_0$ of the 4-vector electromagnetic potential is obtained by Lorentz transformation from the frame S$^*$, in which the point-like source charge $Q$ is at rest, into the frame S where ${\rm A}_0$ is defined: \begin{equation} {\rm A}_0 = \gamma_u {\rm A}_0^* = \gamma_u \frac{Q}{r^*} \end{equation} where \begin{equation} r \equiv |\vec{x}_q-\vec{x}_Q|,~~~r^* \equiv |\vec{x}_q^*-\vec{x}_Q^*|. \end{equation} The vectors $\vec{x}_q$, $\vec{x}_Q$ ($\vec{x}_q^*$,$\vec{x}_Q^*$) give the position of the field point and the source charge, respectively, in the frames S (S$^*$). These coordinates are specified at a fixed time in the frame S --- no retardation effects are considered. It is then assumed that the $x$-coordinate separations in the frames S and S$^*$ are related by the relativistic length contraction relation: \begin{equation} x_q^*- x_Q^* = \frac{x_q- x_Q}{\sqrt{1-\frac{u^2}{c^2}}} \end{equation} while the $y$ and $z$ separations are the same in both frames. It then follows from (4.2) and (4.3) that \begin{equation} (r^*)^2 = \frac{(x_q- x_Q)^2+(1-\frac{u^2}{c^2})[(y_q- y_Q)^2+(z_q- z_Q)^2]}{1-\frac{u^2}{c^2}}. \end{equation} Denoting by $\psi$ the angle between the vectors $\vec{x}_q-\vec{x}_Q$ and $\vec{u}$, (4.4) may be written as: \begin{eqnarray} (r^*)^2 & = & r^2 \frac{[\cos^2 \psi+(1-\frac{u^2}{c^2})\sin^2 \psi]}{1-\frac{u^2}{c^2}} \nonumber \\ & = & r^2 \frac{[1-\beta_u^2\sin^2 \psi]}{1-\frac{u^2}{c^2}}.
\end{eqnarray} Substituting $r^*$ from (4.5) in (4.1) then gives \begin{equation} {\rm A}_0 \equiv {\rm A}_0({\rm LW})^{PT} = \frac{Q}{r(1-\beta_u^2\sin^2 \psi )^{\frac{1}{2}}}. \end{equation} This is the `present time' (PT) formula~\cite{PPPT} for the temporal component of the retarded LW potential ${\rm A}_0({\rm LW})^{ret}$ given in Eq.(2.28) above. All quantities in (4.6) are defined at the instant that the potential is specified. The `present time' form of the 3-vector potential $\vec{{\rm A}}$ is calculated, in a similar manner, to obtain \begin{equation} \vec{{\rm A}} \equiv \vec{{\rm A}}({\rm LW})^{PT} = \frac{Q \vec{\beta}_u}{r(1-\beta_u^2\sin^2 \psi)^{\frac{1}{2}}}. \end{equation} It is interesting to note that the $\gamma_u$ factor in (4.1), manifesting the 4-vector character of ${\rm A}$, is cancelled by a similar factor originating in the `length contraction' effect of Eq.(4.3). \par A similar derivation of ${\rm A}_0({\rm LW})^{PT}$ may be found in Ref.~\cite{PPLW1}, where it is noted that the change of variables \begin{equation} x_q^* = \frac{x_q}{\sqrt{1-\frac{u^2}{c^2}}},~~~ y_q^* = y_q,~~~ z_q^* = z_q \end{equation} transforms the d'Alembert equation (2.6) into a Poisson equation, the solution of which is the Coulomb electrostatic potential $Q/r^*$. Expressing $r^*$ in terms of ($x_q$,$y_q$,$z_q$), neglecting a multiplicative factor $\gamma_u$ (which was cancelled in the derivation of Ref.~\cite{LLLW1} by the similar factor in the numerator of the right side of (4.1)), the potential ${\rm A}_0({\rm LW})^{PT}$ is obtained. It is only mentioned at the end of the calculation that the purely mathematical transformations of Eqs.(4.8) should be interpreted as physical transformations predicted by the Lorentz transformation. Unlike in Ref.~\cite{LLLW1}, the scalar and vector potentials are treated in a non-relativistic manner, necessitating a (tacit) neglect of a factor $\gamma_u$ in order to recover the LW result.
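As a purely algebraic cross-check of the chain (4.1)-(4.6) (a sketch with arbitrary illustrative values of $\beta_u$, $r$ and $\psi$; not part of the cited derivations), one may confirm numerically that the boosted rest-frame Coulomb potential, combined with the ansatz (4.3), collapses to the `present time' form:

```python
import math

# Numerical check of the algebra leading from Eq.(4.1) to Eq.(4.6):
# with the 'length contraction' ansatz of Eq.(4.3), gamma_u * Q / r*
# equals the 'present time' form Q / (r * sqrt(1 - b^2 sin^2 psi)).
Q = 1.0
b = 0.8                      # beta_u = u/c (illustrative)
r, psi = 2.0, 1.1            # present-time separation and angle (illustrative)
gamma = 1.0 / math.sqrt(1.0 - b**2)

# S-frame coordinate separations of field point and source charge
dx = r * math.cos(psi)       # longitudinal separation
dy = r * math.sin(psi)       # transverse separation (take z = 0)

# Eq.(4.3): the longitudinal separation is larger in the rest frame S*
r_star = math.hypot(dx / math.sqrt(1.0 - b**2), dy)

A0_boosted = gamma * Q / r_star                              # Eq.(4.1)
A0_PT = Q / (r * math.sqrt(1.0 - b**2 * math.sin(psi)**2))   # Eq.(4.6)
print(A0_boosted, A0_PT)     # the two expressions agree
```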
\par A `relativistic' derivation of the `present time' formulae for the electric and magnetic fields of a uniformly moving charge, by use of a similar `length contraction' ansatz as in Refs.~\cite{LLLW1,PPLW1}, is found in Jackson's book~\cite{JackLW1}\footnote{A similar derivation is found in the widely-used text book on Electricity and Magnetism by Purcell~\cite{Purcell}.}. The conventional transformation laws of electric and magnetic fields between the frames S$^*$ and S: \begin{equation} {\rm E}_x = {\rm E}_x^*,~~~{\rm E}_y = \gamma_u( {\rm E}_y^*+\beta_u {\rm B}_z^*),~~~{\rm B}_z = \gamma_u( {\rm B}_z^*+\beta_u {\rm E}_y^*) \end{equation} are used to transform the fields in the rest frame of the source charge: \begin{equation} {\rm E}_x^* = \frac{Q(x_q^*- x_Q^*)}{(r^*)^3},~~~{\rm E}_y^* = \frac{Q(y_q^*- y_Q^*)}{(r^*)^3},~~~{\rm E}_z^* = {\rm B}_x^*= {\rm B}_y^*= {\rm B}_z^*=0 \end{equation} into the frame S. Performing this transformation, and using (4.3) to express the result in terms of S frame coordinates\footnote{Actually Jackson used a relativistic time dilatation equation equivalent to Eq.(4.3).} gives \begin{eqnarray} {\rm E}_x & = & \frac{Q(x_q- x_Q)}{\gamma_u^2 \{(x_q- x_Q)^2+(1-\frac{u^2}{c^2})[(y_q- y_Q)^2+(z_q- z_Q)^2]\}^{\frac{3}{2}}} \nonumber \\ & = & \frac{Q \cos \psi}{\gamma_u^2 r^2(1-\beta_u^2\sin^2 \psi)^{\frac{3}{2}}}, \\ {\rm E}_y & = & \frac{Q(y_q- y_Q)}{\gamma_u^2 \{(x_q- x_Q)^2+(1-\frac{u^2}{c^2})[(y_q- y_Q)^2+(z_q- z_Q)^2]\}^{\frac{3}{2}}} \nonumber \\ & = & \frac{Q \sin \psi}{\gamma_u^2 r^2(1-\beta_u^2\sin^2 \psi)^{\frac{3}{2}}}, \\ {\rm B}_z & = & \beta_u {\rm E}_y = \frac{Q \beta_u \sin \psi}{\gamma_u^2 r^2(1-\beta_u^2\sin^2 \psi)^{\frac{3}{2}}}.
\end{eqnarray} Eqs.(4.11)-(4.13) may also be written in 3-vector notation as: \begin{eqnarray} \vec{{\rm E}} & \equiv & \vec{{\rm E}}({\rm H})^{PT} = \frac{Q \vec{r}}{\gamma_u^2 r^3(1-\beta_u^2\sin^2 \psi)^{\frac{3}{2}}}, \\ \vec{{\rm B}} & \equiv & \vec{{\rm B}}({\rm H} )^{PT} = \vec{\beta_u} \times \vec{{\rm E}}. \end{eqnarray} The label `H' stands for `Heaviside', who first obtained these equations~\cite{Heaviside} more than a decade before the advent of special relativity. They may also be obtained from the `present time' potentials in (4.6) and (4.7) and the usual definitions of electric and magnetic fields in terms of derivatives of the 4-vector potential. \par It is easy to show that the `length contraction' ansatz of Eqs.(4.3) and (4.8), used to derive (4.14) and (4.15) as obtained from the retarded LW potential, but without invoking any retardation effect, is inconsistent with a fundamental reciprocity property of special relativity. This was stated in a concise way, and in a manner directly applicable to the problem considered here, by Pauli~\cite{Pauli}: \par {\tt The contraction of lengths at rest in S$^*$ is equal to that of lengths at\newline rest in S and observed in S$^*$.} \begin{figure}[htbp] \begin{center} \hspace*{-0.5cm}\mbox{ \epsfysize9.0cm\epsffile{retardf3c.eps}} \caption{{\sl Spatial configurations in the frames S$^*$ [a)] and S [b)] at corresponding instants in the two frames; for example when the origin of S$^*$, situated at $Q$, coincides with the origin of S.}} \label{fig-fig3} \end{center} \end{figure} \par To make manifest the symmetry of the configurations in the frames S and S$^*$, that is the basis of the applicability of the above reciprocity postulate in the present case, a test charge $q$, at rest, is placed at the field point in S. As shown in Fig.3, the `length at rest in S$^*$' is the separation, $r^*$, of the source and test charges in this frame (Fig.3a). Similarly, the `length at rest in S' is equal to $r$ (Fig.3b).
However, in the case of the `length contraction' ansatz of Eqs.(4.3) and (4.8), $r$ is also the contracted value of the length $r^*$ as observed in S, i.e. \begin{equation} r = \alpha(u)r^* \end{equation} where $\alpha(u)$ is some even function of the relative velocity $u$ of the frames S and S$^*$, with $\alpha(u) < 1$ for $u \ne 0$ and $\alpha(0) = 1$. The above reciprocity postulate states that also \begin{equation} r^* = \alpha(u^*)r. \end{equation} where $\alpha(u) = \alpha(u^*)$. Combining (4.16) and (4.17), \begin{equation} r = \alpha(u)r^* = \alpha(u)\alpha(u^*) r. \end{equation} It follows that if $r \ne 0$, $\alpha(u)\alpha(u^*) = 1$. Since $\alpha$ is less than unity for any non-vanishing argument, this requires that $u = u^*= 0$, contradicting the initial hypothesis that S and S$^*$ are in relative motion. The existence of a `length contraction' effect respecting Pauli's reciprocity condition is therefore excluded by {\it reductio ad absurdum} (self-contradiction). \par The length contraction ansatz of (4.3) and (4.8) is therefore incompatible with the above stated reciprocity property of special relativity. How this universally (until now) accepted length contraction effect results from a misinterpretation of the symbols in the space-time Lorentz transformation has been extensively discussed elsewhere~\cite{JHFSR,JHFSR1,JHFSR2}. In conclusion, the `relativistic' derivation of the field equations (4.14) and (4.15) neglects retardation effects and is in fact incompatible with (correctly interpreted~\cite{JHFSR,JHFSR1,JHFSR2}) special relativity. That the same result is obtained using the incorrect LW potentials (derived by consideration of pre-relativistic retarded fields) must then be regarded, not as confirmation of the correctness of the formulae, but as purely fortuitous. The `present time' formulae derived from the relativistically-correct retarded potentials in (2.12) are presented in the following section.
\par An alternative `relativistic' derivation of the LW potential, this time taking account of retardation, was given by Landau and Lifshitz~\cite{LLLW2}. The retardation condition (2.11) was used to write the temporal component of ${\rm A}$ in the rest frame of the point-like source charge as: \begin{equation} {\rm A}_0^* = \frac{Q}{c(t-t'_Q)}. \end{equation} It was then noticed that the 4-vector: \begin{equation} {\rm A} \equiv \frac{Q u}{x^{ret} \cdot u} \end{equation} where \begin{equation} x^{ret} \equiv (c(t-t'_Q);\vec{x}_q-\vec{x}_Q(t')) \end{equation} reduces to (4.19) in the rest frame of the source charge. The right side of (4.20) is precisely the retarded LW potential in (2.28) of a point-like charge. Although it is true that (4.20) gives (4.19) in the rest frame of the source charge, the same is true of the different 4-vector potential in (2.12). The relation (4.20) is, however, nothing more than a mathematical curiosity, lacking any physical motivation, whereas the potential in (2.12), equally consistent with (4.19), is the solution of the d'Alembert equations (derived from Maxwell's equations and the Lorenz condition) for a point-like charge. The physical meaning and method of derivation of the potential of (2.12), unlike that of (4.20), are therefore quite clear. \par That the retarded LW potentials and the associated fields could be derived in a `relativistic' calculation in which retardation effects are completely neglected, whereas in the original derivations of Li\'{e}nard, Wiechert and Heaviside, performed before the advent of special relativity, the (actually spurious) length contraction effect is neglected, should be serious cause for concern. This unease, however, seemed not to be shared by authors of text books and the pedagogical literature on classical electromagnetism throughout the last century.
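The claim that (4.20) evaluates, in a frame where the charge moves along $x$, to the LW scalar potential of (2.28) can be checked numerically. The sketch below (not part of the cited derivation) assumes the metric convention $x \cdot u = x_0 u_0 - \vec{x}\cdot\vec{u}$ and uses arbitrary illustrative values:

```python
import math

# Numerical check that Eq.(4.20), A = Q u / (x_ret . u), reproduces the
# retarded LW scalar potential Q / (r' (1 - rhat'.beta_u)) of Eq.(2.28).
Q, b, c = 1.0, 0.6, 1.0               # beta_u = u/c (illustrative)
gamma = 1.0 / math.sqrt(1.0 - b**2)

# retarded separation vector r' from source to field point (illustrative)
rx, ry = 1.5, 2.0
rp = math.hypot(rx, ry)               # r' = |r'|
dt = rp / c                           # light-travel time t - t'_Q

u4 = (gamma * c, gamma * b * c)       # 4-velocity components (u0, ux)
x4 = (c * dt, rx)                     # x_ret = (c(t - t'_Q), x_q - x_Q(t'))

dot = x4[0] * u4[0] - x4[1] * u4[1]   # Minkowski product x_ret . u
A0 = Q * u4[0] / dot                  # temporal component of Eq.(4.20)

A0_LW = Q / (rp * (1.0 - (rx / rp) * b))  # LW form Q / (r'(1 - rhat'.beta))
print(A0, A0_LW)
```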
There is now ample experimental verification of the predictions of correctly interpreted~\cite{JHFSR,JHFSR1,JHFSR2} special relativity, and that retardation effects do occur in processes where real photons are radiated, so that the corresponding classical fields must also be retarded. The contradiction posed by the absence of one or the other of two essential, but different, physical phenomena in the two different derivations of the Heaviside formulae was clearly stated by Jefimenko~\cite{JEFrr}, but the obvious doubt shed by this on the correctness of the formulae and/or the derivations was passed over in silence. In fact, as demonstrated in the present paper, both the original 19th century and the 20th century `relativistic' derivations are wrong. The former fails because the variation of the effective charge density of the moving charge distribution was not taken into account; the latter because the `length contraction' effect on which it is based does not exist. It is proposed in the present paper that the correct relativistic retarded potentials of a point-like charge are those given above in Eq.(2.12). The corresponding electric and magnetic fields, for the case of a charge in uniform motion, are derived in the following section. In Section 6 the retarded fields of accelerated charges are considered, and compared with those derived from the LW potentials, as well as with the well-known formulae of Feynman and Jefimenko, and with some others that have appeared in text books and the pedagogical literature. \par The RCED 4-vector potential and current differ from those of CEM by the multiplicative factor $\gamma_u$ (see Eq.(2.28) above). This leads to a breakdown of the Gauss law for the electric field of a moving charge~\cite{JHFRSKO,JHFFT} and covariance-breaking terms in the electrodynamic Maxwell equation (Amp\`{e}re's law)~\cite{JHFFT}.
In text books on CEM, the validity of the electric field Gauss law for both static and moving source charges is justified by invoking the relativistic length contraction effect of Eq.(4.3)~\cite{PPLC}. If the charge density in a moving frame transforms as the temporal component of a 4-vector $\propto \gamma_u$, and a volume element transforms $\propto (\gamma_u)^{-1}$ due to relativistic length contraction, then the charge within the volume element is Lorentz invariant. Since, however, as demonstrated above, the length contraction effect is spurious, the effective charge actually varies as $\gamma_u$, i.e. quadratically in $u/c$ for small $u$. This effect has been experimentally observed in the vicinity of an electrically neutral superconducting magnet~\cite{Edwards}. \SECTION{\bf{Retarded electric and magnetic fields of a point-like charge in uniform motion}} The electric and magnetic fields corresponding to the 4-vector potential (2.12) are obtained by straightforward application of the definitions of electric and magnetic fields in terms of derivatives of the potential: \begin{eqnarray} \vec{{\rm E}} & \equiv & -\vec{\nabla} {\rm A}_0- \frac{1}{c}\frac{\partial \vec{{\rm A}}}{\partial t}, \\ \vec{{\rm B}} & \equiv & \vec{\nabla} \times \vec{{\rm A}} \end{eqnarray} where, without loss of generality, it may be assumed that the electric field is confined to the $x$-$y$ plane, \[ \vec{\nabla} \equiv \hat{\imath}\frac {\partial ~}{\partial x_q} + \hat{\jmath}\frac {\partial ~}{\partial y_q}. \] Unit vectors along the $x$-, $y$- and $z$-axes are denoted as $\hat{\imath}$, $\hat{\jmath}$ and $\hat{k}$. To perform the calculation, the retardation condition \begin{equation} t' = t-\frac{|\vec{x}_q-\vec{x}_Q(t')|}{c} = t-\frac{r'}{c} \end{equation} must be used to express the derivatives with respect to $t$ in (5.1) in terms of $t'$, since the retarded position of the source charge is a function of $t'$, not of $t$. Assuming that $u$ is constant, (2.12) gives: \begin{equation} \left.
\frac{\partial {\rm A}_0}{\partial x_q}\right|_{t} = -\frac{Q \gamma_u}{(r')^2} \left. \frac{\partial r'}{\partial x_q}\right|_{t}. \end{equation} Differentiating the geometrical relation: \begin{equation} (r')^2 = [x_q-x_Q(t')]^2 + y_q^2 \end{equation} with respect to $x_q$ gives \begin{equation} r'\left. \frac{\partial r'}{\partial x_q}\right|_{t} = (x_q-x_Q(t'))\left(1- \frac{d x_Q(t')}{d t'} \left. \frac{\partial t'}{\partial x_q}\right|_{t}\right). \end{equation} Differentiating (5.3) with respect to $x_q$, \begin{equation} \left.\frac{\partial t'}{\partial x_q}\right|_{t} = -\frac{1}{c} \left.\frac{\partial r'}{\partial x_q}\right|_{t}. \end{equation} Combining (5.6) and (5.7), rearranging, and noting that $d x_Q(t')/dt' = c \beta_u$ gives \begin{equation} \left. \frac{\partial r'}{\partial x_q}\right|_{t} = \frac{x_q-x_Q(t')}{r'(1-\hat{r}' \cdot \vec{\beta}_u)} \end{equation} where $ \hat{r}' \equiv \vec{r}'/r'$. Combining (5.4) and (5.8) \begin{equation} \left. \frac{\partial {\rm A}_0}{\partial x_q}\right|_{t} = -\frac{Q \gamma_u [x_q-x_Q(t')]}{(1-\hat{r}' \cdot \vec{\beta}_u)(r')^3}. \end{equation} An analogous relation is obtained for $\partial {\rm A}_0/ \partial y_q$ so that \begin{equation} -\vec{\nabla}{\rm A}_0 = \frac{Q \gamma_u \vec{r}'}{(1-\hat{r}'\cdot \vec{\beta}_u)(r')^3}. \end{equation} Considering now the second term on the right side of (5.1), (2.12) gives \begin{equation} - \frac{1}{c}\frac{\partial \vec{{\rm A}}}{\partial t} = - \frac{\hat{\imath}}{c} \left. \frac{\partial {\rm A}_x}{\partial t}\right|_{x_q,y_q} = \frac{Q \gamma_u \vec{\beta}_u}{c(r')^2} \left. \frac{\partial r'}{\partial t'}\right|_{x_q,y_q} \left. \frac{\partial t'}{\partial t}\right|_{x_q,y_q}. \end{equation} Differentiating (5.5) with respect to $t'$: \begin{equation} r'\left. \frac{\partial r'}{\partial t'}\right|_{x_q,y_q} = -[x_q-x_Q(t')]\frac{d x_Q (t')}{d t'} \end{equation} or \begin{equation} \left.
\frac{\partial r'}{\partial t'}\right|_{x_q,y_q} = -c \hat{r}' \cdot \vec{\beta}_u. \end{equation} Differentiating (5.3) with respect to $t$: \begin{equation} \left. \frac{\partial t'}{\partial t}\right|_{x_q,y_q} = 1-\frac{1}{c} \left. \frac{\partial r'}{\partial t'}\right|_{x_q,y_q} \left. \frac{\partial t'}{\partial t}\right|_{x_q,y_q}. \end{equation} Combining (5.13) and (5.14) and rearranging: \begin{equation} \left. \frac{\partial t'}{\partial t}\right|_{x_q,y_q} = \frac{1}{1-\hat{r}' \cdot \vec{\beta}_u}. \end{equation} Combining (5.1),(5.10),(5.11),(5.13) and (5.15) gives, for the retarded RCED electric field: \begin{equation} \vec{{\rm E}}({\rm RCED})^{ret} = \left.\frac{Q \gamma_u}{(1-\hat{r}'\cdot \vec{\beta}_u)}\left[ \frac{[\vec{r}'-\vec{\beta}_u (\vec{r}'\cdot \vec{\beta}_u)]}{(r')^3} \right]\right|_{t' = t'_Q}. \end{equation} Since $\vec{{\rm A}} = \hat{\imath} {\rm A}_x$, \begin{equation} \vec{\nabla} \times \vec{{\rm A}} = -\hat{k} \left. \frac{\partial {\rm A}_x}{\partial y_q}\right|_{t} = \hat{k} \frac{Q \gamma_u \beta_u}{(r')^2} \left. \frac{\partial r'}{\partial y_q}\right|_{t}. \end{equation} Similarly to (5.8), \begin{equation} \left. \frac{\partial r'}{\partial y_q}\right|_{t} = \frac{y_q}{r'(1-\hat{r}' \cdot \vec{\beta}_u)} \end{equation} so that \begin{equation} \vec{{\rm B}}({\rm RCED})^{ret} = \vec{\nabla} \times \vec{{\rm A}} = \left.\frac{Q \gamma_u \vec{\beta}_u \times \vec{r}' } {(r')^3(1-\hat{r}'\cdot \vec{\beta}_u)}\right|_{t' = t'_Q} = \vec{\beta}_u \times \vec{{\rm E}}({\rm RCED})^{ret}. \end{equation} Apart from the retarded time argument and an overall factor $1/(1-\hat{r}'\cdot \vec{\beta}_u)$ (the Jacobian of the transformation from $t$ to $t'$, see Eq.(5.15)) Eqs.(5.16) and (5.19) are the same as the formulae for the instantaneous force fields of RCED~\cite{JHFRCED}.
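The Jacobian (5.15), on which the above derivation hinges, can be verified independently by solving the retardation condition (5.3) numerically for a uniformly moving charge and finite-differencing $t'(t)$. The following sketch (illustrative values of $\beta_u$ and of the field-point position; not part of the original text) does this:

```python
import math

# Numerical check of the Jacobian dt'/dt = 1/(1 - rhat'.beta_u), Eq.(5.15),
# for a charge moving uniformly along x: x_Q(t') = b*c*t'.
c, b = 1.0, 0.7                      # beta_u = 0.7 (illustrative)
xq, yq = 3.0, 2.0                    # fixed field point (illustrative)

def t_retarded(t, tol=1e-14):
    """Solve t' = t - |x_q - x_Q(t')|/c by fixed-point iteration
    (a contraction, since |d r'/d t'| = c |rhat'.beta| < c)."""
    tp = t - math.hypot(xq, yq) / c  # starting guess
    for _ in range(200):
        r = math.hypot(xq - b * c * tp, yq)
        tp_new = t - r / c
        if abs(tp_new - tp) < tol:
            break
        tp = tp_new
    return tp

t = 1.3
tp = t_retarded(t)
rx, ry = xq - b * c * tp, yq         # retarded separation vector
rp = math.hypot(rx, ry)

# analytic Jacobian of Eq.(5.15)
jac = 1.0 / (1.0 - (rx / rp) * b)

# finite-difference dt'/dt at fixed field-point position
h = 1e-6
jac_num = (t_retarded(t + h) - t_retarded(t - h)) / (2 * h)
print(jac, jac_num)
```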
\begin{figure}[htbp] \begin{center} \hspace*{-0.5cm}\mbox{ \epsfysize9.0cm\epsffile{retardf4c.eps}} \caption{{\sl Geometry for the calculation of the retarded potential of a charge moving with uniform velocity $u$ along the $x$-axis in terms of the `present time' coordinates $r$, $\psi$. R is the position of the charge from which a light signal was emitted so as to arrive at the field point F at the instant shown. P is the position of the charge at this instant. The line segment PN is perpendicular to RF.}} \label{fig-fig4} \end{center} \end{figure} \par For comparison with the Heaviside formulae (4.14) and (4.15), which may be derived from the LW potentials, it is of interest to write $\vec{{\rm E}}({\rm RCED})^{ret}$ and $\vec{{\rm B}}({\rm RCED})^{ret}$ in the `present time' form. \par Consider a point-like charge, $Q$, moving with speed $u$ along the $x$-axis (Fig.4). The field point at which the fields are to be specified is denoted by F, the present position of the charge by P and the retarded position, lying on the backward light cone of F, by R. If N is the foot of the normal to RF passing through P, the geometry of Fig.4 gives: \begin{eqnarray} {\rm NF} & = & r'-\beta_u r'\cos \psi' = r'(1-\hat{r}' \cdot \vec{\beta}_u) \nonumber \\ & = & \sqrt{r^2-\beta_u^2(r' \sin \psi')^2} =\sqrt{r^2-\beta_u^2(r \sin \psi)^2} \nonumber \\ & = & r(1- \beta_u^2 \sin^2 \psi)^{\frac{1}{2}} \equiv r f_u. \end{eqnarray} Solving the quadratic equation obtained by applying the cosine rule to the triangle RFP: \begin{equation} (r')^2 = r^2 + \beta_u^2(r')^2 + 2 \beta_u r r' \cos \psi \end{equation} gives the retarded separation between the source charge and field point, $r'$, in terms of the `present time' parameters $r$ and $\psi$: \begin{equation} r' = r\frac{(\beta_u \cos \psi + f_u)}{1-\beta_u^2}.
\end{equation} Also \begin{equation} \sin \psi' = \frac{r \sin \psi}{r'} = \frac{(1-\beta_u^2) \sin\psi}{\beta_u \cos \psi + f_u} \end{equation} and \begin{eqnarray} \cos \psi' & = & \frac{\beta_u f_u + \cos \psi}{\beta_u \cos \psi + f_u} \\ \hat{r}' & = & \hat{\imath} \cos \psi' + \hat{\jmath} \sin \psi' \\ \hat{r}' \cdot \vec{\beta}_u & = & \beta_u \cos \psi'. \end{eqnarray} Eqs.(5.20)-(5.26) may now be used to express the retarded fields in terms of `present time' coordinates: \begin{eqnarray} \vec{{\rm E}}({\rm RCED})^{PT} & = & \frac{Q [ \hat{\imath}( \beta_u f_u + \cos \psi) + \hat{\jmath} \sin \psi]}{r^2 \gamma_u^3(\beta_u \cos \psi + f_u)^2 f_u}, \\ \vec{{\rm B}}({\rm RCED})^{PT} & = & \frac{Q \beta_u \hat{k} \sin \psi}{r^2 \gamma_u^3(\beta_u \cos \psi + f_u)^2 f_u} = \vec{\beta}_u \times \vec{{\rm E}}({\rm RCED})^{PT}. \end{eqnarray} \begin{figure}[htbp] \begin{center} \hspace*{-0.5cm}\mbox{ \epsfysize14.0cm\epsffile{retardf5c.eps}} \caption{{\sl Scaled retarded present time longitudinal electric field: $E_Lr^2/Q$ as a function of $\psi$ for various values of the source charge velocity $\beta_u = u/c$. The curves of $E_L(H)r^2/Q$ are antisymmetric relative to $\psi = 90^{\circ}$ and so do not display the expected $\psi$ dependence of retarded fields as seen in the $E_L(RCED)r^2/Q$ curves where $|E_L(RCED)|$ for $\psi < 90^{\circ}$ (source charge approaching the field point) is less than $|E_L(RCED)|$ for $\psi > 90^{\circ}$ (source charge receding from the field point).}} \label{fig-fig5} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \hspace*{-0.5cm}\mbox{ \epsfysize14.0cm\epsffile{retardf6c.eps}} \caption{{\sl Scaled retarded present time transverse electric field: $E_Tr^2/Q$ as a function of $\psi$ for various values of the source charge velocity $\beta_u = u/c$.
The curves of $E_T(H)r^2/Q$ are symmetric relative to $\psi = 90^{\circ}$ and so do not display the expected $\psi$ dependence of retarded fields as seen in the $E_T(RCED)r^2/Q$ curves where $|E_T(RCED)|$ for $\psi < 90^{\circ}$ (source charge approaching the field point) is less than $|E_T(RCED)|$ for $\psi > 90^{\circ}$ (source charge receding from the field point).}} \label{fig-fig6} \end{center} \end{figure} These expressions replace, in relativistic classical electrodynamics, the pre-relativistic Heaviside formulae (4.14) and (4.15). Important differences are that (5.27), unlike (4.14), is not radial, and in consequence does not respect Gauss' law, and that it does not revert to a radial Coulomb field on neglecting terms of O($\beta_u^2$). \par The manifestly incorrect physical behaviour of the retarded electric field given by the Heaviside formula (4.14) is evident on comparing it with that given by (5.27). This is done in Figs.~5 and 6, which show curves of ${\rm E}_L^{PT}r^2/Q$ and ${\rm E}_T^{PT}r^2/Q$, respectively, as a function of $\psi$, where $\vec{{\rm E}}^{PT} = \hat{\imath}{\rm E}_L^{PT}+\hat{\jmath}{\rm E}_T^{PT}$, for different values of $\beta_u$, as given by (4.14) and (5.27). Elementary physical considerations require that if $\psi < 90^{\circ}$ (i.e. the source charge is approaching the field point), the magnitude of the retarded field must be less than when $\psi > 90^{\circ}$ and the charge is receding from the field point. This is because in the former case the source charge was further from the field point than its present-time position when the retarded field was emitted, and closer to it in the latter case. Because the strength of the field is inversely proportional to the square of the source-field point separation, at the time of emission of the retarded field, the magnitude of the field must be greater at an angle $\psi_+ = 90^{\circ} + \alpha$ than at an angle $\psi_- = 90^{\circ} - \alpha$, where $\alpha > 0$.
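The geometrical relations (5.20)-(5.22) and the approach/recession asymmetry argued above can both be checked numerically from Eq.(5.16) directly, without appeal to the closed `present time' forms. The sketch below (illustrative value $\beta_u = 0.5$; not part of the original text) constructs the retarded position geometrically and compares field magnitudes at $\psi_\pm = 90^{\circ} \pm \alpha$:

```python
import math

# Cross-check (illustrative, beta_u = 0.5) of the present-time geometry,
# Eqs.(5.20)-(5.22), and of the approach/recession asymmetry of the
# retarded field of Eq.(5.16).
b = 0.5

def f_u(psi):
    # f_u of Eq.(5.20)
    return math.sqrt(1.0 - b**2 * math.sin(psi)**2)

def retarded_geometry(r, psi):
    """Return (r', psi') for present-time coordinates (r, psi)."""
    rp = r * (b * math.cos(psi) + f_u(psi)) / (1.0 - b**2)  # Eq.(5.22)
    # direct construction: the retarded point R lies a distance b*rp
    # behind the present position P along the line of motion
    x = r * math.cos(psi) + b * rp    # x-component of F relative to R
    y = r * math.sin(psi)
    assert abs(math.hypot(x, y) - rp) < 1e-9 * rp  # consistent with (5.21)
    return rp, math.atan2(y, x)

def E_mag(r, psi):
    """|E(RCED)^ret| of Eq.(5.16) at present-time coordinates (r, psi)."""
    rp, psip = retarded_geometry(r, psi)
    K = 1.0 - b * math.cos(psip)          # 1 - rhat'.beta_u
    gamma = 1.0 / math.sqrt(1.0 - b**2)
    Ex = gamma * math.cos(psip) * (1.0 - b**2) / (K * rp**2)
    Ey = gamma * math.sin(psip) / (K * rp**2)
    return math.hypot(Ex, Ey)

# the field of a receding charge (psi > 90 deg) exceeds that of an
# approaching one (psi < 90 deg) at the same present-time distance
alpha = math.radians(30.0)
E_app = E_mag(1.0, math.pi / 2 - alpha)
E_rec = E_mag(1.0, math.pi / 2 + alpha)
print(E_app, E_rec)
```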
The fields given by (5.27) demonstrate this property, whereas $\vec{{\rm E}}({\rm H})^{PT}$ given by (4.14) gives symmetric behaviour for ${\rm E}_T$: \begin{equation} {\rm E}_T({\rm H})^{PT}(\psi_+) = {\rm E}_T({\rm H})^{PT}(\psi_-) \end{equation} and antisymmetric behaviour for ${\rm E}_L$: \begin{equation} {\rm E}_L({\rm H})^{PT}(\psi_+) = -{\rm E}_L({\rm H})^{PT}(\psi_-). \end{equation} These symmetry properties are those of instantaneous~\cite{JHFRCED,JHFRSKO}, not retarded, force fields\footnote{See, for example, the comparison of the `present time' retarded LW fields with the instantaneous RCED fields in Figs.~2 and 3 of Ref.~\cite{JHFRSKO}.}. As explained in Section 2 above, the antisymmetry of the $E_L({\rm H})^{PT}$ curves in Fig.~5 about $\psi = 90^{\circ}$, and the symmetry of the $E_T({\rm H})^{PT}$ curves in Fig.~6 about $\psi = 90^{\circ}$, are a consequence of the neglect of the velocity dependence of the source charge density in deriving the LW potentials of Eq.(2.28). \SECTION{\bf{Retarded electric and magnetic fields of an accelerated point-like charge: the RCED, Li\'{e}nard-Wiechert, Feynman and Jefimenko equations}} The generalisation of the RCED formulae (5.16) and (5.19) to the case of a non-constant value of the source charge velocity $\vec{u}$ is straightforward. The details of the calculation may be found in Ref.~\cite{JHFFT}. Including the terms generated by the acceleration of the source charge gives: \begin{eqnarray} \vec{{\rm E}}({\rm RCED})^{ret} & = & \left. \left\{\frac{Q \gamma_u}{K}\left[ \frac{[\hat{r}'-\vec{\beta}_u (\hat{r}'\cdot \vec{\beta}_u)]}{(r')^2} +\frac{[\gamma_u^2 \beta_u \dot{\beta}_u(\hat{r}'- \vec{\beta}_u)- \dot{\vec{\beta}_u}]} {c r'} \right]\right\}\right|_{t' = t'_Q}, \\ \vec{{\rm B}}({\rm RCED})^{ret} & = & \left.
\left\{ \frac{Q \gamma_u (\vec{\beta}_u \times \hat{r}')}{K} \left[ \frac{1}{(r')^2} + \frac{\gamma_u^2 \dot{\beta}_u}{c \beta_u r'} \right]\right\}\right|_{t' = t'_Q} \end{eqnarray} where \begin{equation} K \equiv (1- \hat{r}' \cdot \vec{\beta}_u),~~~\dot{\beta}_u \equiv |\dot{\vec{\beta}_u}|. \end{equation} It follows from (6.1) that \begin{equation} \vec{\beta}_u \times \vec{{\rm E}}({\rm RCED})^{ret} = \left. \left\{ \frac{Q \gamma_u (\vec{\beta}_u \times \hat{r}')}{K} \left[ \frac{1}{(r')^2} + \frac{\gamma_u^2\beta_u \dot{\beta}_u}{c r'} \right] -\frac{Q\gamma_u(\vec{\beta}_u \times \dot{\vec{\beta}_u})}{Kcr'} \right\}\right|_{t' = t'_Q} \end{equation} and \begin{equation} \hat{r}'\times \vec{{\rm E}}({\rm RCED})^{ret} = \left. \left\{ \frac{Q \gamma_u (\vec{\beta}_u \times \hat{r}')}{K} \left[ \frac{\hat{r}'\cdot \vec{\beta}_u}{(r')^2} + \frac{\gamma_u^2\beta_u \dot{\beta}_u}{c r'} \right] -\frac{Q\gamma_u( \hat{r}' \times \dot{\vec{\beta}_u})}{Kcr'} \right\}\right|_{t' = t'_Q}. \end{equation} The relation $\vec{{\rm B}}({\rm RCED})^{ret} = \vec{\beta}_u \times \vec{{\rm E}}({\rm RCED})^{ret}$ then holds only if $\dot{\beta}_u =0$, as in Eq.(5.19) above, while, in all cases, $\vec{{\rm B}}({\rm RCED})^{ret} \ne \hat{r}' \times \vec{{\rm E}}({\rm RCED})^{ret}$. \par These formulae may be compared with those derived by inserting the LW potentials of Eq.(2.28) into the defining equations (5.1) and (5.2) of the electric and magnetic fields, making use of the Jacobian of (5.15) to relate derivatives with respect to $t$ and $t'$. This calculation is given in Appendix A. It is found that: \begin{eqnarray} \vec{{\rm E}}({\rm LW})^{ret} & = & \left.\left\{\frac{Q}{K^3}\left[\frac{\hat{r}'- \vec{\beta}_u}{\gamma_u^2 (r')^2} + \frac{\hat{r}' \times [(\hat{r}'- \vec{\beta}_u) \times \dot{\vec{\beta}_u}]}{c r'}\ \right]\right\}\right|_{t' = t'_Q}, \\ \vec{{\rm B}}({\rm LW})^{ret} & = & \left.
\left\{\frac{Q (\vec{\beta}_u \times \hat{r}')}{K^3}\left[\frac{1} {\gamma_u^2(r')^2} +\frac{\dot{\beta}_u(1-\hat{r}' \cdot \vec{\beta}_u) + \beta_u(\hat{r}' \cdot \dot{\vec{\beta}_u})}{c r' \beta_u}\right] \right\} \right|_{t' = t'_Q}. \end{eqnarray} The markedly different angular dependence of these fields from that of the RCED fields of (6.1) and (6.2) may be noted. Eq.(6.6) gives \begin{equation} \vec{\beta}_u \times \vec{{\rm E}}({\rm LW})^{ret} = \left. \left\{\frac{Q}{K^3} \left[(\vec{\beta}_u \times \hat{r}')\left(\frac{1}{\gamma_u^2(r')^2} +\frac{\hat{r}' \cdot \dot{\vec{\beta}_u}}{c r'}\right) \right] \right\} \right|_{t' = t'_Q}, \end{equation} \begin{equation} \hat{r}' \times \vec{{\rm E}}({\rm LW})^{ret} = \vec{{\rm B}}({\rm LW})^{ret}. \end{equation} The relation $\vec{{\rm B}}({\rm LW})^{ret} = \vec{\beta}_u \times \vec{{\rm E}}({\rm LW})^{ret}$ then holds only if $\dot{\beta}_u = 0$. \par The RCED retarded fields (6.1) and (6.2) are now compared with those obtained from formulae given by Feynman, Jefimenko and other authors. The consistency of the latter fields with the LW ones of (6.6) and (6.7) will also be considered. In the `Feynman Lectures on Physics', compact formulae for the retarded fields of an accelerated point-like charge are given, but not derived~\cite{FeynFMC1, FeynFMC2}. In the notation of the present paper they are \begin{eqnarray} \vec{{\rm E}}({\rm Feyn})^{ret} & = & Q \left. \left[ \frac{\hat{r}'}{(r')^2}+\frac{r'}{c}\frac{d ~}{d t}\left( \frac{\hat{r}'}{(r')^2}\right) + \frac{1}{c^2} \frac{d^2 \hat{r}'}{d t^2} \right]\right|_{t' = t'_Q}, \\ \vec{{\rm B}}({\rm Feyn})^{ret} & = & \left. \hat{r}'\right|_{t' = t'_Q} \times \vec{{\rm E}}({\rm Feyn})^{ret}. \end{eqnarray} Since (see Eq.(5.3)) $r'$ is a function of the retarded time $t'$, not of the present time $t$, it is necessary to introduce the Jacobian of Eq.(5.15) in order to evaluate the derivatives in Eq.(6.10).
Although Feynman uses the symbol for a total time derivative rather than a partial one in Eq.(6.10), it is clear from the definitions of the fields in terms of potentials in (5.1) and (5.2) that the time derivatives should be understood as partial ones for a fixed position of the field point $\vec{x}_q$ as in (5.15). The straightforward but somewhat lengthy calculation, which is analogous to that shown in the previous section, leading to Eqs.(5.16) and (5.19), is presented in Appendix B. The following formula for the electric field is obtained: \begin{eqnarray} \vec{{\rm E}}({\rm Feyn})^{ret} & = & Q \left\{\frac{\hat{r}'}{(r')^2} + \frac{1}{K} \frac{[3 \hat{r}'(\hat{r}'\cdot \vec{\beta}_u)-\vec{\beta}_u]}{(r')^2}\right. \nonumber \\ & & + \frac{1}{K^2}\left[ \frac{\hat{r}'[3 (\hat{r}'\cdot \vec{\beta}_u)^2 -\beta_u^2] - 2 \vec{\beta}_u (\hat{r}'\cdot \vec{\beta}_u)}{(r')^2}+ \frac{[\hat{r}'(\hat{r}' \cdot \dot{\vec{\beta}_u}) - \dot{\vec{\beta}_u}]}{cr'}\right] \nonumber \\ & & + \left. \left. \frac{[\hat{r}'(\hat{r}'\cdot \vec{\beta}_u)- \vec{\beta}_u]}{K^3} \left[ \frac{(\hat{r}'\cdot \vec{\beta}_u)^2 -\beta_u^2}{(r')^2} +\frac{\hat{r}' \cdot \dot{\vec{\beta}_u}}{cr'} \right]\right\}\right|_{t' = t'_Q}. \end{eqnarray} Collecting terms on the right side of (6.12) proportional to $\hat{r}'$, $\vec{\beta}_u$ and $\dot{\vec{\beta}_u}$ recovers, together with (6.11), the LW formulae (6.6) and (6.7). \par A formula for the retarded potentials, similar to Eq.(2.8) above, is obtained in Ref.~\cite{JPS} by using Green functions to solve the inhomogeneous d'Alembert equations (2.6) and (2.7). However, subsequently, the usual mistake is made of neglecting the motion-dependence of the charge density, as explained in Section 2.
After performing the spatial integration, instead of replacing $r'(t')$ in the argument of the $\delta$-function by $r'(t'_Q)$, (see Eq.(2.10) above) the appropriate retarded position of the point-like source charge at time $t$, the same functional dependence on $t'$ is assumed as before the spatial integration, and the formula (2.18) is then used to transform the argument of the $\delta$-function, leading to the spurious retardation factor $1/K$ of the LW potentials. In this way the formula (19) of Ref.~\cite{JPS} was obtained, which was then shown to give Feynman's formula, Eq.~(6.10) above. It was also correctly stated (but not demonstrated) that the same formula was equivalent to the LW field of Eq.~(6.6) above. \par The Jefimenko formulae for the fields of an accelerated charge distribution are~\cite{Jefimenko}: \begin{eqnarray} \vec{{\rm E}}({\rm Jefi})^{ret} & = & \int \left\{\hat{r}' \left[\frac{[\rho]}{(r')^2}+ \frac{1}{cr'} \frac{\partial[\rho]}{\partial t}\right]-\frac{1}{c^2r'} \frac{\partial[\vec{{\it J}}]}{\partial t}\right\} d^3 x_{{\it J}}, \\ \vec{{\rm B}}({\rm Jefi})^{ret} & = & \frac{1}{c}\int \left[\frac{[\vec{{\it J}}]}{(r')^2}+ \frac{1}{cr'} \frac{\partial[\vec{{\it J}}]}{\partial t}\right] \times \hat{r}' d^3 x_{{\it J}}. \end{eqnarray} The square brackets around the charge density $\rho$ and the non-relativistic current density $\vec{{\it J}}$ indicate that they are evaluated at the retarded time $t' = t-r'/c$, as is also the spatial separation $r'$ of the current element from the field point. The volume element, $d^3 x_{{\it J}}$, is also specified at the retarded time $t'$. Important differences from the RCED, LW and Feynman formulae are that the time derivatives act only on the charge and current densities, not on $r'$, and that, as compared to the Feynman formula, only first order time derivatives appear. However, a time derivative of $\vec{r}'$ is implicit in the definition of the current $\vec{{\it J}}$.
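The Jacobian relation of Eq.(5.15), $\left.\partial t'/\partial t\right|_{\vec{x}_q} = 1/K$, is used repeatedly in this section and in Appendix A. It can be confirmed directly by solving the retardation condition numerically and differentiating $t'(t)$ at a fixed field point. The following sketch does this in natural units ($c=1$); the trajectory and field point are illustrative choices, not taken from the text:

```python
import numpy as np

c = 1.0                                          # natural units (illustrative)

def x_Q(tp):
    # assumed sample trajectory of the source charge, with |u| < c
    return np.array([0.4 * np.sin(tp), 0.0, 0.0])

x_q = np.array([0.0, 3.0, 0.0])                  # fixed field point (illustrative)

def t_ret(t):
    # solve the retardation condition t' = t - r'(t')/c by fixed-point iteration
    tp = t
    for _ in range(200):
        tp = t - np.linalg.norm(x_q - x_Q(tp)) / c
    return tp

t = 2.0
tp = t_ret(t)
r = x_q - x_Q(tp)
rn = np.linalg.norm(r)
beta = np.array([0.4 * np.cos(tp), 0.0, 0.0])    # u(t')/c for this trajectory
K = 1.0 - (r / rn) @ beta

h = 1e-6
dtp_dt = (t_ret(t + h) - t_ret(t - h)) / (2 * h) # numerical dt'/dt at fixed x_q
print(dtp_dt, 1.0 / K)                           # the two values agree
```

The agreement of the two printed numbers is just Eq.(5.15): differentiating $t' + r'(t')/c = t$ at fixed $\vec{x}_q$ gives $dt = K\,dt'$.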
\par In order to discuss the Jefimenko equations for the case of a point-like charge, it will be found convenient to explicitly impose the retardation condition by including an integration over the retarded time $t'$ together with the corresponding $\delta$-function as in Eq.(2.8). Indeed, much confusion about, and misinterpretation of, formulae for retarded fields result from not properly taking into account integrations over both space and time. This must be done to correctly describe the essential physical properties of the problem under consideration ---a spatially extended distribution of charge\footnote{In the real world, consisting of an ensemble of identical point-like charged particles.} in motion that is probed, in time, by the backward light cone of the field point. As will be seen, attempts to simplify formulae by omitting time integrals, and the associated $\delta$-functions, as in (6.13) and (6.14), and in many text-book treatments of retarded potentials, often lead to erroneous results. Specifying explicitly the retardation condition, (6.13) and (6.14) are written as: \begin{eqnarray} \vec{{\rm E}}({\rm Jefi})^{ret} & = & \int dt' \int \left\{ \left[\frac{[\rho(\vec{x}_{{\it J}}(t'),t')} {(r')^2}+ \frac{1}{cr'} \frac{\partial[\rho(\vec{x}_{{\it J}}(t'),t')]}{\partial t}\right]\hat{r}'\right. \nonumber \\ & & \left. -\frac{1}{c^2r'} \frac{\partial[\vec{{\it J}}(\vec{x}_{{\it J}}(t'),t')]}{\partial t}\right\} \delta(t' +\frac{r'(t')}{c}-t) d^3 x_{{\it J}}(t'), \\ \vec{{\rm B}}({\rm Jefi})^{ret} & = & \int dt' \int \frac{1}{c} \left[\frac{[\vec{{\it J}}(\vec{x}_{{\it J}}(t'),t')]}{(r')^2} \right. \nonumber \\ & & \left. + \frac{1}{cr'} \frac{\partial[\vec{{\it J}}(\vec{x}_{{\it J}}(t'),t')]}{\partial t}\right] \times \hat{r}' \delta(t' +\frac{r'(t')}{c}-t) d^3 x_{{\it J}}(t'). 
\end{eqnarray} A point-like charge $Q$ has, in non-relativistic approximation, the following charge and current densities: \begin{eqnarray} \rho_Q(\vec{x}_{{\it J}}(t'),t') = Q \delta(\vec{x}_{{\it J}}(t')-\vec{x} _Q(t')), \\ \vec{{\it J}}_Q(\vec{x}_{{\it J}}(t'),t') = Q \vec{u}(t')\delta(\vec{x}_{{\it J}}(t')-\vec{x} _Q(t')). \end{eqnarray} Substituting (6.17) and (6.18) into (6.15) and (6.16) and performing the spatial integrations gives: \begin{eqnarray} \vec{{\rm E}}({\rm Jefi})^{ret} & = & Q \int dt' \left[\frac{\hat{r}'} {(r')^2} -\frac{1}{c^2r'} \frac{\partial \vec{u}(t')}{\partial t}\right] \delta(t'-t'_Q) \nonumber \\ & = & \left .\left . Q \left[\frac{\hat{r}'}{(r')^2} -\frac{1}{c^2r'}\frac{\partial t'} {\partial t}\right|_{\vec{x}_q} \frac{d \vec{u}(t')}{d t'}\right]\right|_{t'= t'_Q} \nonumber \\ & = & \left. Q \left[\frac{\hat{r}'}{(r')^2} -\frac{1}{K c^2r'} \frac{d\vec{u}(t')}{d t'}\right]\right|_{t' = t'_Q}, \\ \vec{{\rm B}}({\rm Jefi})^{ret} & = & Q \int dt' \left[\frac{\vec{u}(t')}{c (r')^2} + \frac{1}{c^2r'} \frac{\partial \vec{u}(t')}{\partial t}\right] \times \hat{r}' \delta(t'-t'_Q) \nonumber \\ & = & Q\left\{ \left[\frac{\vec{u}(t')}{ c (r')^2} \left. \left.+ \frac{1}{c^2r'}\frac{\partial t'}{\partial t}\right|_{\vec{x}_q} \frac{d \vec{u}(t')}{d t'}\right] \times \hat{r}' \right\} \right|_{t' = t'_Q} \nonumber \\ & = & Q\left\{ \left[\frac{\vec{u}(t')}{c (r')^2} \left.+ \frac{1}{Kc^2r'} \frac{d\vec{u}(t')}{d t'}\right] \times \hat{r}' \right\} \right|_{t'= t'_Q} \end{eqnarray} where \begin{equation} t'_Q \equiv t - \frac{r'(t'_Q)}{c}. \end{equation} For a uniformly moving charge, in contrast to the RCED, LW and Feynman equations, the Jefimenko equations therefore predict the same (Coulombic) electric field as in the electrostatic case.
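The contrast just noted can be made concrete numerically: for uniform motion the point-like Jefimenko field (6.19) reduces to the Coulomb form $Q\hat{r}'/(r')^2$, while the LW field (6.6) retains the $\gamma_u^2$ and $K^3$ factors. A sketch in natural units (the velocity, field point and observation time are illustrative choices):

```python
import numpy as np

c, Q = 1.0, 1.0
b = np.array([0.6, 0.0, 0.0])                  # uniform velocity, u = 0.6c (illustrative)
x_q, t = np.array([0.0, 1.0, 0.0]), 0.0        # field point and present time

def x_Q(tp):
    # straight-line trajectory through the origin
    return c * b * tp

tp = t
for _ in range(200):                           # retardation: t' = t - r'(t')/c
    tp = t - np.linalg.norm(x_q - x_Q(tp)) / c

r = x_q - x_Q(tp)
rn = np.linalg.norm(r)
rh = r / rn
K = 1.0 - rh @ b
gam2 = 1.0 / (1.0 - b @ b)

E_jef = Q * rh / rn**2                         # Eq.(6.19) with du/dt' = 0: Coulombic
E_lw = Q * (rh - b) / (gam2 * K**3 * rn**2)    # Eq.(6.6) with beta-dot = 0

# for these particular choices t' = -1.25 exactly,
# E_jef = (0.384, 0.512, 0) and E_lw = (0, 1.25, 0)
print(E_jef, E_lw)
```

The two predictions differ markedly even though both are evaluated at the same retarded position; only in the limit $\beta_u \rightarrow 0$ do they coincide.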
\par In a paper on time-dependent generalisations of the Biot-Savart and Coulomb laws, where the Jefimenko equations were extensively discussed~\cite{GH}, it was claimed that the Jefimenko, Li\'{e}nard-Wiechert and Feynman formulae for the retarded fields are all consistent with each other. The arguments given in support of this conclusion are critically examined below, but first the claim of the authors of Ref.~\cite{GH} to {\it derive} the Jefimenko equations from the defining formulae (5.1) and (5.2) of electric and magnetic fields and retarded potentials is considered. The assumed form of the potentials in Ref.~\cite{GH}, in the notation and choice of units of the present paper, is: \begin{eqnarray} {\rm A}_0 & = & \int \frac{[\rho]}{r'}d \tau, \\ \vec{{\rm A}} & = & \int \frac{[\vec{{\it J}}]}{r'}d \tau. \end{eqnarray} The volume element $d \tau \equiv d^3x_{{\it J}}(t')$ and the quantities in square brackets, as well as the distance $r'$ between the volume element and the field point are all specified at the retarded time $t' = t-r'/c$. Note that unlike the solutions of the d'Alembert equations in (2.8) there is no integral over the retarded time in these equations and also they do not contain the $1/K$ factor of the LW potentials. Substitution of (6.22) and (6.23) into (5.1) gives\footnote{The field is labelled according to the initials of the authors, Griffiths and Heald, of Ref.~\cite{GH}}: \begin{equation} \vec{{\rm E}}({\rm GH1})^{ret} = -\int \left[ \frac{1}{r'}\vec{\nabla}[\rho] + [\rho]\vec{\nabla}\left(\frac{1}{r'}\right)+ \frac{1}{c^2 r'}\frac{\partial [\vec{{\it J}}]} {\partial t} + \frac{[\vec{{\it J}}]}{c^2}\frac{\partial ~} {\partial t}\left(\frac{1}{r'}\right)\right]d \tau. \end{equation} This already disagrees with the corresponding equation (21) of Ref.~\cite{GH}, where the last term on the right side of (6.24) is omitted. Indeed, this term does not vanish, but the last factor in it is: \begin{equation} \left.
\frac{\partial ~}{\partial t}\left(\frac{1}{r'}\right)\right|_{\vec{x}_q} = \left. \frac{\partial t'}{\partial t}\right|_{\vec{x}_q} \left.\frac{\partial ~} {\partial t'}\left(\frac{1}{r'}\right)\right|_{\vec{x}_q} = -\frac{1}{(r')^2} \left. \frac{\partial t'}{\partial t}\right|_{\vec{x}_q} \left.\frac{\partial r'} {\partial t'}\right|_{\vec{x}_q} = \frac{c(\hat{r'} \cdot \vec{\beta_u})}{K (r')^2} \end{equation} where (5.13) and (5.15) have been used. This result is implicit in the later Eq.(44) of Ref.~\cite{GH}, so the omission of the term in Eq.(21) of this reference is hard to understand. The retarded density $[\rho]$ may be a function of the retarded time $t'$ and the position $\vec{x}_{{\it J}}(t')$ of the volume element $d \tau$, but does not depend on the position $\vec{x}_q$ of the field point. Therefore $ \vec{\nabla}[\rho]$ vanishes. More formally\footnote{The ellipsis in (6.26) and subsequent equations indicates the contribution of the $y$- and $z$-components.} \begin{eqnarray} \vec{\nabla}[\rho] & = & \left. \hat{\imath} \frac{\partial [\rho]}{\partial x_q}\right|_{t} +~.~.~. \nonumber \\ & = & \left. \hat{\imath} \frac{\partial t'}{\partial x_q}\right|_{t} \frac{d [\rho]}{d t'} +~.~.~. \nonumber \\ & = & \left. \left. \left. \hat{\imath} \frac{\partial t'}{\partial x_q}\right|_{t} \frac{\partial t} {\partial t'}\right|_{t} \frac{\partial [\rho]}{\partial t}\right|_{t} +~.~.~. \nonumber \\ & = & 0 \end{eqnarray} to be compared with the relation \begin{equation} \vec{\nabla}[\rho] = -\frac{1}{c}\frac{\partial [\rho]}{\partial t}\hat{r}' \end{equation} given in Ref.~\cite{GH}. The mathematical error leading to the incorrect equation (6.27) is a subtle one concerning the precise definitions of partial derivatives. Combining Eqs.(5.7) and (5.8) gives: \begin{equation} \left.
\frac{\partial t'}{\partial x_q}\right|_{t} = -\frac{1}{c}\frac{(x_q - x_Q)}{K r'} \end{equation} so that the second line of (6.26) may be written as: \begin{equation} \vec{\nabla}[\rho] = -\frac{\hat{\imath}}{c}\frac{(x_q - x_Q)}{K r'}\frac{d [\rho]}{d t'} +~.~.~.~~. \end{equation} Now it seems plausible, in view of Eq.(5.15) above, to make the substitution \begin{equation} \frac{1}{K} \frac {d ~}{d t'} \rightarrow \frac{\partial ~}{\partial t} \end{equation} in (6.29), thus yielding (6.27). But all spatial partial derivatives in (5.1) and hence in (6.26) and (6.29) are evaluated {\it at constant} $t$, whereas the operator relation of (6.30) is (see Eq.(5.15)) valid only {\it at constant} $\vec{x}_q$, and so is inapplicable in relation to derivatives with respect to $x_q$, $y_q$ or $z_q$. \par In fact the spurious relation (6.27) was also used by Jefimenko in the original derivation of Eq.(6.13)~\cite{Jefimenko}. Maxwell's equations and Eq.(6.23), called the `Vector Identity' V-33~\cite{JefiVI}, are used to obtain (6.13) from an integral vector identity: the `Vector wave field theorem' V-31~\cite{JefiVI}. \par The term containing $\vec{\nabla}(1/r')$ in Eq.(6.24) is the same, up to a constant multiplicative factor, as one which has been previously calculated in Section 5 (Eq.(5.10)), so that: \begin{equation} -\vec{\nabla}\left(\frac{1}{r'}\right) = \frac{\hat{r}'}{K(r')^2}. \end{equation} Combining (6.24), (6.25), (6.26) and (6.31) gives: \begin{equation} \vec{{\rm E}}({\rm GH1})^{ret} = \int \left[\frac{[\rho]\hat{r}'-(\hat{r}'\cdot \vec{\beta}_u)[\vec{{\it J}}]} {K(r')^2} - \frac{1}{c^2 r'}\frac{\partial [\vec{{\it J}}]} {\partial t}\right]d \tau. \end{equation} Note that the first term on the right side of (6.32) differs from the corresponding one in Jefimenko's formula by a factor $1/K$. This factor was missed in the calculation of Ref.~\cite{GH}.
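The point at issue, that spatial derivatives taken at constant $t$ pick up the retardation factor of Eq.(6.31) rather than the naive Coulomb gradient, is easily checked numerically: displace the field point, re-solve the retardation condition at each displaced point, and difference $1/r'$. A sketch in natural units (the trajectory, field point and time are illustrative choices):

```python
import numpy as np

c = 1.0

def x_Q(tp):
    # assumed sample trajectory of the source charge, with |u| < c
    return np.array([0.4 * np.sin(tp), 0.0, 0.0])

def t_ret(xq, t):
    # solve t' = t - r'(t')/c by fixed-point iteration
    tp = t
    for _ in range(200):
        tp = t - np.linalg.norm(xq - x_Q(tp)) / c
    return tp

def inv_r(xq, t):
    # 1/r' as a function of the field point and the present time
    tp = t_ret(xq, t)
    return 1.0 / np.linalg.norm(xq - x_Q(tp))

x_q, t, h = np.array([0.0, 2.0, 0.0]), 1.0, 1e-6
grad = np.array([(inv_r(x_q + h * e, t) - inv_r(x_q - h * e, t)) / (2 * h)
                 for e in np.eye(3)])          # gradient at constant t

tp = t_ret(x_q, t)
r = x_q - x_Q(tp)
rn = np.linalg.norm(r)
rh = r / rn
beta = np.array([0.4 * np.cos(tp), 0.0, 0.0])
K = 1.0 - rh @ beta
print(grad, -rh / (K * rn**2))                 # Eq.(6.31): these agree
```

The numerical gradient matches $-\hat{r}'/(K(r')^2)$ and not $-\hat{r}'/(r')^2$, which is the $1/K$ factor whose omission is discussed above.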
In summary, the claimed-to-be-derived Jefimenko formula, Eq.(19) of Ref.~\cite{GH}, is missing a factor $1/K$ on the first term; the second term vanishes, and the fourth term in (6.24) (the second in (6.32)) is omitted. Indeed, only the last term of Eq.(19) of Ref.~\cite{GH} is correct. \par Combining (5.2) and (6.23) gives: \begin{equation} \vec{{\rm B}}({\rm GH1})^{ret} = \frac{1}{c}\int \left[\vec{\nabla} \times \frac{[\vec{{\it J}}]}{r'} - [\vec{{\it J}}] \times \vec{\nabla}\left(\frac{1}{r'}\right)\right]d \tau. \end{equation} Choosing $[\vec{{\it J}}]$ parallel to the $x$-axis \begin{equation} \vec{\nabla} \times [\vec{{\it J}}] = -\hat{k}\left. \frac{\partial |\vec{{\it J}}|}{\partial y_q} \right |_t = -\hat{k}\left.\left.\left. \frac{\partial t'}{\partial y_q} \right |_t \frac{\partial t}{\partial t'} \right |_t \frac{\partial |\vec{{\it J}}|}{\partial t}\right |_t = 0 \end{equation} whereas the authors of Ref.~\cite{GH} state that \begin{equation} \vec{\nabla} \times [\vec{{\it J}}] = \frac{1}{c}\frac{\partial [\vec{{\it J}}]}{\partial t} \times \hat{r}'. \end{equation} This results from a similar misuse of partial derivatives to that described above in connection with Eq.(6.27). Eqs.(6.33),(6.31) and (6.34) give \begin{equation} \vec{{\rm B}}({\rm GH1})^{ret} = \int \frac{[\vec{{\it J}}] \times \hat{r}'}{c K (r')^2} d \tau \end{equation} which differs from the Jefimenko equation (6.15) by an overall factor $1/K$ and the absence of the $\partial [\vec{{\it J}}]/\partial t$ term. Again the `derivation' of the Jefimenko equation in Ref.~\cite{GH}, based now on the incorrect formula (6.35), is fallacious. Jefimenko~\cite{Jefimenko} also assumed this formula in order to derive the second term on the right side of (6.14). \par In Section IV of Ref.~\cite{GH} it is claimed to derive the LW fields of Eqs.(6.6) and (6.7) from the Jefimenko formulae.
However, this derivation starts not from the Jefimenko formulae (6.13) and (6.14) but instead from the different equations~\footnote{Eq.~(6.37) is Eq.~(38) of Ref.~\cite{GH}} (given here the label `GH2'): \begin{eqnarray} \vec{{\rm E}}({\rm GH2})^{ret} & = & \int \left[\frac{[\rho]\hat{r}'}{(r')^2}+ \frac{\partial ~}{\partial t}\left(\frac{[\rho]\hat{r}'}{cr'}\right) - \frac{\partial ~}{\partial t}\left(\frac{[\vec{{\it J}}]}{c^2r'}\right)\right] d^3 x_{{\it J}}, \\ \vec{{\rm B}}({\rm GH2})^{ret} & = & \int \left[\frac{[\vec{{\it J}}] \times \hat{r}'}{c(r')^2}+ \frac{\partial ~}{\partial t}\left(\frac{[\vec{{\it J}}] \times \hat{r}'}{c^2r'}\right)\right] d^3 x_{{\it J}} \end{eqnarray} which differ from the Jefimenko equations in that the time derivatives operate not only on the charge and current densities but also on the reciprocal of the retarded source-field point separation $r'$ and the unit vector $\hat{r}'$. \par The authors of Ref.~\cite{GH} introduce into Eqs.(6.37) and (6.38) point-like non-relativistic charge and current densities according to Eqs.(6.17) and (6.18). Since the integration over $t'$ is omitted, it is then implicit in these equations that $t' = t'_Q$, where the fixed time, $t'_Q$, is the solution of Eq.(6.21), corresponding to a fixed position of the source charge for given values of $\vec{x}_q $ and $t$.
It then follows that for point-like charges (6.37) and (6.38) are written as: \begin{eqnarray} \vec{{\rm E}}({\rm GH2})^{ret} & = & Q \int \left[\frac{\hat{r}'}{(r')^2}+ \frac{\partial ~}{\partial t}\left(\frac{\hat{r}'}{cr'}\right) - \frac{\partial ~}{\partial t}\left(\frac{\vec{\beta}_u}{c r'}\right)\right] \delta(\vec{x}_{{\it J}}(t'_Q)-\vec{x}_Q(t'_Q)) d^3 x_{{\it J}} \nonumber \\ & = & \left.Q \left[\frac{\hat{r}'}{(r')^2}+ \frac{\partial ~}{\partial t}\left(\frac{\hat{r}'}{cr'}\right) - \frac{\partial ~}{\partial t}\left(\frac{[\vec{\beta}_u]}{c r'}\right)\right] \right|_{t' = t'_Q}, \\ \vec{{\rm B}}({\rm GH2})^{ret} & = & Q \int \left[\frac{\vec{\beta}_u \times \hat{r}'}{(r')^2}+ \frac{\partial ~}{\partial t}\left(\frac{\vec{\beta}_u \times \hat{r}'}{c r'}\right)\right] \delta(\vec{x}_{{\it J}}(t'_Q)-\vec{x}_Q(t'_Q)) d^3 x_{{\it J}}\nonumber \\ & = & Q \left. \left\{ \left[\frac{\vec{\beta}_u\times \hat{r}'}{(r')^2}+ \frac{\partial ~}{\partial t}\left(\frac{\vec{\beta}_u\times \hat{r}'}{c r'}\right)\right]\right\}\right|_{t' = t'_Q}. \end{eqnarray} However, these formulae are not the ones obtained from (6.37) and (6.38) in Ref.~\cite{GH}. Instead, a change of variable is introduced into the $\delta$-functions in the first lines of (6.39) and (6.40): \begin{equation} \vec{z}(t'_Q) \equiv \vec{x}_{{\it J}}(t'_Q)-\vec{x}_Q(t'_Q). \end{equation} The Jacobian, $J$, relating the volume elements $d^3\vec{z}$ and $d^3\vec{x}_{{\it J}}$ according to \begin{equation} d^3\vec{z} = J d^3\vec{x}_{{\it J}} \end{equation} is introduced. It is then stated, without derivation, that $J = K$, where $K$ is the Jacobian relating $dt$ to $dt'$, as given by Eqs.(5.15) and (6.3) above. The $x$-component of (6.41) is \begin{equation} z_x(t'_Q) = x_{{\it J}}(t'_Q)-x_Q(t'_Q). \end{equation} Since $x_Q(t'_Q)$ is constant, it follows from (6.43) that $d z_x = dx_{{\it J}}$. Similarly,
$d z_y = dy_{{\it J}}$ and $d z_z = dz_{{\it J}}$ so that the Jacobian $J$ in (6.42) is unity, not $K$, as claimed in Ref.~\cite{GH}. \par Since the authors of Ref.~\cite{GH} nevertheless {\it did} insert a factor $1/K$ multiplying the $\delta$-functions in the first lines of (6.39) and (6.40), before performing the spatial integrations, the equations obtained were not those in the last lines of (6.39) and (6.40) but instead: \begin{eqnarray} \vec{{\rm E}}({\rm GH2})^{ret}_{J = K} & = & \left.Q \left[\frac{\hat{r}'}{K(r')^2}+ \frac{\partial ~}{\partial t}\left(\frac{\hat{r}'}{cKr'}\right) - \frac{\partial ~}{\partial t}\left(\frac{[\vec{\beta}_u]}{cK r'}\right)\right] \right|_{t' = t'_Q}, \\ \vec{{\rm B}}({\rm GH2})^{ret}_{J = K} & = & Q \left. \left\{ \left[\frac{\vec{\beta}_u \times \hat{r}'}{K(r')^2}+ \frac{\partial ~}{\partial t}\left(\frac{\vec{\beta}_u \times \hat{r}'}{cK r'}\right)\right]\right\}\right|_{t' = t'_Q} \end{eqnarray} where the subscript `$J = K$' distinguishes these equations from the formally correct ones (6.39) and (6.40), i.e. the correctly calculated point-like charge versions of the general equations (6.37) and (6.38), claimed to be the Jefimenko equations but actually given, without derivation, in Ref.~\cite{GH}. \par After writing (6.44) and (6.45) (the equivalents, in Gaussian units, of the MKS Eqs.(41) and (42) of Ref.~\cite{GH}) it is stated that they are `essentially in the form made famous by Feynman'. In fact Eq.~(6.44) is equivalent to Eq.~(19) of Ref.~\cite{JPS} on transforming the $t'$ derivatives in the latter into the $t$ derivatives of the former. It is correctly stated, but not demonstrated, in Ref.~\cite{JPS} that their Eq.~(19) is the same as the LW retarded electric field. \par The introduction of the factor $1/K$ inside the time derivatives in (6.44) and (6.45) is equivalent to replacing the retarded potentials in (6.22) and (6.23) by the LW potentials.
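Returning to the change of variable (6.41): since $\vec{x}_Q(t'_Q)$ is a fixed shift once $\vec{x}_q$ and $t$ are given, the Jacobian of (6.42) is trivially unity, as can be confirmed symbolically (a minimal sketch):

```python
import sympy as sp

# coordinates of the volume element d^3x_J; the retarded charge position
# x_Q(t'_Q) acts as a constant shift once x_q and t are fixed
xJ, yJ, zJ = sp.symbols('x_J y_J z_J')
xQ, yQ, zQ = sp.symbols('x_Q y_Q z_Q')

z = sp.Matrix([xJ - xQ, yJ - yQ, zJ - zQ])      # Eq.(6.41)
J = z.jacobian(sp.Matrix([xJ, yJ, zJ]))

print(J, J.det())   # identity matrix, determinant 1 (not K)
```

The Jacobian matrix is the identity, so $d^3\vec{z} = d^3\vec{x}_{{\it J}}$ exactly, in agreement with the component-by-component argument above.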
It is shown in Ref.~\cite{GH} that (6.44) is equivalent to the LW electric field of Eq.~(6.6) and stated (without proof) that the magnetic field is given by the relation $\vec{{\rm B}}({\rm LW})^{ret} = \hat{r}' \times \vec{{\rm E}}({\rm LW})^{ret}$. \par Summarising the results obtained so far in this section: Ref.~\cite{JPS} does demonstrate the consistency of the Feynman and LW formulae for retarded electric fields. The `derivation' of the Jefimenko formulae from the defining equations (5.1) and (5.2) of the electric and magnetic fields and the retarded potentials (6.22) and (6.23) given in Ref.~\cite{GH} is erroneous due to mathematical misinterpretation of spatial partial derivatives. The same remark applies to Jefimenko's original derivation~\cite{Jefimenko} of these equations. The Eqs.(6.44) and (6.45) given in Ref.~\cite{GH} are not the Jefimenko equations but are obtained from them by introducing an overall multiplicative factor $1/K$ in each term and allowing the time derivatives to act on all factors in the terms of the equation instead of uniquely on the charge and current densities as in the Jefimenko formulae. This is tantamount to replacing the potentials of (6.22) and (6.23) by the retarded LW potentials of Eq.~(2.28). Eq.(6.44) does give the LW field of (6.6), as claimed in Ref.~\cite{GH}. \par It was pointed out in Ref.~\cite{McDonald} that an equation for the magnetic field identical to the Jefimenko equation (6.14) and a formula equivalent to the Jefimenko electric field, (6.13), had been given earlier in the second edition of the book `Classical Electricity and Magnetism' by Panofsky and Phillips~\cite{PPJE}. The equivalent electric field formula is: \begin{equation} \vec{E}({\rm PP})^{ret} = \int \left\{\frac{\hat{r}'[\rho]}{(r')^2} + \frac{([\vec{{\rm J}}]\cdot \hat{r}')\hat{r}'+([\vec{{\rm J}}]\times \hat{r}')\times \hat{r}'}{c(r')^2} +\frac{([\dot{\vec{{\rm J}}}]\times \hat{r}')\times \hat{r}'}{c^2 r'}\right\} d^3 x_{{\rm J}}.
\end{equation} A calculation claiming to show the equivalence of (6.13) and (6.46) was given in Section II of Ref.~\cite{McDonald}. The first step was to repeat the erroneous derivation of (6.13) from the defining equation (5.1) of the electric field and the non-relativistic potentials (6.22) and (6.23), previously given in Ref.~\cite{GH} and discussed above. The term proportional to $\partial[\rho]/\partial t$ is manipulated to obtain Eq.(6.46). As shown above, this term actually vanishes. \SECTION{\bf{Summary}} Retarded potentials are derived from inhomogeneous d'Alembert equations for electromagnetic potentials and the Lorenz condition. The potentials so-obtained in Eq.(2.12) differ from the LW potentials of CEM. It is shown that the incorrect LW potentials result from neglect of the dependence of the effective density of a moving charge distribution on its speed. This point is made particularly clear by the careful re-examination of Feynman's derivation of the LW potentials presented in Section 3. In Section 4, several `relativistic' derivations of the LW potentials or the corresponding retarded fields given in text books are reviewed. It is shown that they all contain misapplications of special relativity, in particular by invoking a spurious `length contraction' effect. In all of the relativistic derivations, retardation effects are neglected, whereas in the original 19th Century derivations of the LW potentials or the corresponding retarded fields, no relativistic effects are considered. There are therefore two independent, logically incompatible, and incorrect derivations of retarded potentials and their associated fields. In Section 5 the retarded RCED fields of a uniformly moving charge are derived and expressed in `present time' form. Except for an overall multiplicative factor $1/(1-\hat{r}' \cdot \vec{\beta}_u)$ and the retarded time argument, they are the same as the instantaneous force fields of RCED~\cite{JHFRCED}.
In Section 6 the consistency claimed in the pedagogical literature between various different formulae for the fields of an accelerated charge (LW, Feynman, Jefimenko) is considered. The Feynman formula for the retarded electric field of a charge in arbitrary motion is (as previously shown in Ref.~\cite{JPS}) consistent with the LW field. The electric field of a uniformly moving charge given by the Jefimenko formula is found to be, unlike CEM and RCED fields, Coulombic. \par The considerations of the present paper are of a primarily mathematical nature. The physical interpretation of retarded (radiation) and instantaneous (force) fields in RCED has been discussed in some detail previously~\cite{JHFRCED,JHFFT}. \newpage \par{\bf Appendix A} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} \par In this appendix, retarded electric and magnetic fields are derived from the LW potentials as well as from the equations (6.44) and (6.45) equivalent to those given in Ref.~\cite{GH} and claimed there to be the same as the LW fields. To derive the LW fields the potentials \begin{eqnarray} {\rm A}_0({\rm LW})^{ret} & = & \left.\frac{Q}{K r'}\right|_{t' = t'_Q}, \\ \vec{{\rm A}}({\rm LW})^{ret} & = & \left.\frac{Q \vec{\beta}_u}{K r'}\right|_{t' = t'_Q} \end{eqnarray} where $K \equiv (1-\hat{r}' \cdot \vec{\beta}_u)$, are substituted into the defining equations (5.1) and (5.2) of electric and magnetic fields to give \begin{eqnarray} \vec{{\rm E}}({\rm LW})^{ret}& = & -\vec{\nabla} {\rm A}_0({\rm LW})^{ret} - \frac{1}{c} \frac{\partial \vec{{\rm A}}({\rm LW})^{ret}}{\partial t}, \\ \vec{{\rm B}}({\rm LW})^{ret} & = & \vec{\nabla} \times \vec{{\rm A}}({\rm LW})^{ret}. \end{eqnarray} For simplicity, all labels, superscripts and subscripts on the fields and potentials are omitted in the following.
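Before the term-by-term evaluation, the end result can be checked independently of the algebra: evaluating (A.3) and (A.4) by numerical differentiation of the potentials (A.1) and (A.2), with the retardation condition re-solved at each displaced argument, must reproduce the closed forms (6.6) and (6.7). A sketch of such a check in natural units (the trajectory, field point and time are illustrative choices, not taken from the text):

```python
import numpy as np

c, Q = 1.0, 1.0

def x_Q(tp):
    # assumed sample trajectory of the source charge, with |u| < c
    return np.array([0.3 * np.sin(tp), 0.0, 0.0])

def beta(tp):
    return np.array([0.3 * np.cos(tp), 0.0, 0.0])

def beta_dot(tp):
    return np.array([-0.3 * np.sin(tp), 0.0, 0.0])

def t_ret(xq, t):
    # solve the retardation condition t' = t - r'(t')/c
    tp = t
    for _ in range(300):
        tp = t - np.linalg.norm(xq - x_Q(tp)) / c
    return tp

def potentials(xq, t):
    # LW potentials (A.1) and (A.2) evaluated at the retarded time
    tp = t_ret(xq, t)
    r = xq - x_Q(tp)
    rn = np.linalg.norm(r)
    K = 1.0 - (r / rn) @ beta(tp)
    return Q / (K * rn), Q * beta(tp) / (K * rn)

def fields_numeric(xq, t, h=1e-5):
    # E = -grad A0 - (1/c) dA/dt and B = curl A, by central differences
    Ap = [potentials(xq + h * e, t) for e in np.eye(3)]
    Am = [potentials(xq - h * e, t) for e in np.eye(3)]
    grad_A0 = np.array([(Ap[i][0] - Am[i][0]) / (2 * h) for i in range(3)])
    dA_dt = (potentials(xq, t + h)[1] - potentials(xq, t - h)[1]) / (2 * h)
    dA = [(Ap[i][1] - Am[i][1]) / (2 * h) for i in range(3)]   # dA/dx_i
    E = -grad_A0 - dA_dt / c
    B = np.array([dA[1][2] - dA[2][1], dA[2][0] - dA[0][2], dA[0][1] - dA[1][0]])
    return E, B

def fields_closed(xq, t):
    # closed forms: E from Eq.(6.6), B from Eq.(6.9)
    tp = t_ret(xq, t)
    r = xq - x_Q(tp)
    rn = np.linalg.norm(r)
    rh = r / rn
    b, bd = beta(tp), beta_dot(tp)
    K = 1.0 - rh @ b
    gam2 = 1.0 / (1.0 - b @ b)
    E = (Q / K**3) * ((rh - b) / (gam2 * rn**2)
                      + np.cross(rh, np.cross(rh - b, bd)) / (c * rn))
    return E, np.cross(rh, E)

xq, t = np.array([0.0, 2.0, 0.0]), 1.3
En, Bn = fields_numeric(xq, t)
Ec, Bc = fields_closed(xq, t)
print(En, Ec)   # the numerical and closed-form fields agree; likewise Bn, Bc
```

Such a check confirms the chain-rule bookkeeping of the appendix without touching any of the intermediate expressions.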
\par Taking into account, by the chain rule, the contribution to the fields of each factor in the potentials, (A.3) and (A.4) give: \begin{eqnarray} \vec{{\rm E}} & = & -\frac{Q}{K}\vec{\nabla}\left(\frac{1}{r'}\right)- \frac{Q}{r'}\vec{\nabla}\left(\frac{1}{K}\right) -\frac{Q \vec{\beta}_u}{c K} \frac{\partial ~}{\partial t}\left(\frac{1}{r'}\right) -\frac{Q \vec{\beta}_u}{c r'} \frac{\partial ~}{\partial t}\left(\frac{1}{K}\right) -\frac{Q}{c K r'} \frac{\partial \vec{\beta}_u }{\partial t}, \\ \vec{{\rm B}} & = & -\frac{Q}{K}\vec{\beta}_u \times\vec{\nabla}\left(\frac{1}{r'}\right) -\frac{Q}{r'}\vec{\beta}_u \times\vec{\nabla}\left(\frac{1}{K}\right) + \frac{Q}{K r'}(\vec{\nabla} \times \vec{\beta}_u). \end{eqnarray} In these and the following equations it is understood that all spatial partial derivatives hold $t$ constant and all temporal partial derivatives hold $\vec{x}_q$, the field position, constant. The derivatives in the successive terms on the right sides of these equations are now evaluated. \par The first term on the right side of (A.5) gives: \begin{eqnarray} -\frac{Q}{K}\vec{\nabla}\left(\frac{1}{r'}\right) & = & -\frac{ \hat{\imath} Q}{K} \frac{\partial~}{\partial x_q}\left(\frac{1}{r'}\right)+ .~.~.~. \nonumber \\ & = & \frac{ \hat{\imath} Q}{K(r')^2} \frac{\partial r'}{\partial x_q}+ .~.~.~. \nonumber \\ & = & \frac{ \hat{\imath} Q (x_q-x_Q)}{K^2(r')^3}+ .~.~.~. \nonumber \\ & = & \frac{Q \hat{r}'}{K^2(r')^2} \end{eqnarray} where Eq.(5.8) has been used.
\par Considering the second term on the right side of (A.5), \begin{eqnarray} -\frac{Q}{r'}\vec{\nabla}\left(\frac{1}{K}\right) & = & -\frac{ \hat{\imath} Q}{ r' K^2} \frac{\partial~}{\partial x_q}(\hat{r}' \cdot \vec{\beta}_u)+ .~.~.~ \nonumber \\ & = & -\frac{ \hat{\imath} Q}{ r' K^2} \left[\vec{\beta}_u \cdot \frac{\partial \hat{r}'}{\partial x_q} + \hat{r}' \cdot \frac{\partial \vec{\beta}_u }{\partial x_q}\right]+ .~.~.~ \nonumber \\ & = & -\frac{ \hat{\imath} Q}{ r' K^2} \left[\vec{\beta}_u \cdot \frac{\partial \hat{r}'}{\partial x_q} + \frac{\partial t'}{\partial x_q}(\hat{r}' \cdot \dot{\vec{\beta}_u})\right]+ .~.~.~~. \end{eqnarray} Now, \begin{eqnarray} \frac{\partial \hat{r}'}{\partial x_q} & = & \frac{\partial ~}{\partial x_q} \left(\frac{\vec{r}'}{r'}\right) = \frac{1}{r'} \frac{\partial \vec{r}'}{\partial x_q} - \frac{\vec{r}'}{(r')^2} \frac{\partial r'}{\partial x_q}\nonumber \\ & = & \frac{\hat{\imath}}{r'}\left(1- \frac{d x_Q}{d t'}\frac{\partial t'}{\partial x_q}\right) - \frac{\hat{r}'}{r'} \frac{\partial r'}{\partial x_q}\nonumber \\ & = & \frac{\hat{\imath}}{r'} + \frac{\hat{\imath} \beta_u -\hat{r}'}{r'} \frac{\partial r'}{\partial x_q}\nonumber \\ & = & \frac{\hat{\imath}}{r'} + \frac{(\hat{\imath} \beta_u -\hat{r}')(x_q-x_Q)}{K(r')^2} \end{eqnarray} where Eqs.(5.7) and (5.8) have been used, as well as the assumption that $\vec{u}$ is parallel to the $x$-axis. Substituting (A.9) in (A.8) and again using Eqs.(5.7) and (5.8) gives: \begin{eqnarray} -\frac{Q}{r'}\vec{\nabla}\left(\frac{1}{K}\right) & = & -\frac{Q \vec{\beta}_u}{K^2(r')^2} -Q \hat{\imath}\frac{[\beta_u^2-(\hat{r}' \cdot \vec{\beta}_u)]}{K^3(r')^2} \frac{(x_q-x_Q)}{r'}+ Q \hat{\imath}\frac{(\hat{r}' \cdot \dot{\vec{\beta}_u})}{c K^3r'} \frac{(x_q-x_Q)}{r'}+.~.~.~. \nonumber \\ & = & -\frac{Q \vec{\beta}_u}{K^2(r')^2} -Q\hat{r}'\frac{[\beta_u^2-(\hat{r}' \cdot \vec{\beta}_u)]}{K^3(r')^2} + Q \hat{r}'\frac{(\hat{r}' \cdot \dot{\vec{\beta}_u})}{c K^3r'}+.~.~.~.~~.
\end{eqnarray} The third term on the right side of (A.5) gives \begin{eqnarray} -\frac{Q \vec{\beta}_u}{c K} \frac{\partial ~}{\partial t}\left(\frac{1}{r'}\right) & = & -\frac{Q\vec{\beta}_u}{c K} \frac{\partial t'} {\partial t} \frac{\partial ~}{\partial t'}\left(\frac{1}{r'}\right) \nonumber \\ & = & \frac{Q \vec{\beta}_u}{c K^2(r')^2} \frac{\partial r'} {\partial t'} \nonumber \\ & = & -\frac{Q \vec{\beta}_u (\hat{r}' \cdot \vec{\beta}_u)}{K^2(r')^2} \end{eqnarray} where (5.15) and (5.13) have been used. The fourth term on the right side of (A.5) gives \begin{equation} -\frac{Q \vec{\beta}_u}{c r'} \frac{\partial ~}{\partial t}\left(\frac{1}{K}\right) = -\frac{Q \vec{\beta}_u}{c r'} \frac{\partial t'} {\partial t} \frac{\partial ~}{\partial t'}\left(\frac{1}{K}\right) \end{equation} But \begin{equation} \frac{\partial ~}{\partial t'}\left(\frac{1}{K}\right) = \frac{\partial ~}{\partial t'} \left(\frac{1}{1- (\hat{r}' \cdot \vec{\beta}_u)}\right) = \frac{1}{K^2}\frac{\partial(\hat{r}' \cdot \vec{\beta}_u)}{\partial t'} = \frac{1}{K^2}\left[\vec{\beta}_u \cdot \frac{\partial \hat{r}'}{\partial t'} +\hat{r}' \cdot \dot{\vec{\beta}_u}\right] \end{equation} and also \begin{equation} \frac{\partial \hat{r}'}{\partial t'} = \frac{\partial ~}{\partial t'}\left(\frac{\vec{r}'}{r'}\right) = -c\frac{\vec{\beta}_u}{r'}-\frac{\vec{r}'}{(r')^2}\frac{\partial r'}{\partial t'} = c\frac{[ \hat{r}'(\hat{r}' \cdot \vec{\beta}_u)-\vec{\beta}_u]}{r'} \end{equation} where (5.13) has been used. (A.12)-(A.14) together with (5.15) then give \begin{equation} -\frac{Q \vec{\beta}_u}{c r'} \frac{\partial ~}{\partial t}\left(\frac{1}{K}\right) = -\frac{Q \vec{\beta}_u}{K^3}\left\{ \frac{(\hat{r}' \cdot \vec{\beta}_u)^2-\beta_u^2}{(r')^2} + \frac{\hat{r}' \cdot \dot{\vec{\beta}_u}}{c r'}\right\}.
\end{equation} Collecting together (A.7), (A.10), (A.11) and (A.15) gives, for the electric field derived from the LW potentials: \begin{eqnarray} \vec{{\rm E}} & = & \frac{Q}{K^2(r')^2}\left\{\hat{r}'- \vec{\beta}_u[1+ (\hat{r}' \cdot \vec{\beta}_u)] -\frac{\vec{\beta}_u[(\hat{r}' \cdot \vec{\beta}_u)^2-\beta_u^2]+\hat{r}' [ \hat{r}' \cdot \vec{\beta}_u-\beta_u^2]}{K}\right\} \nonumber \\ & & - \frac{Q}{K^2 c r'}\left[\dot{\vec{\beta}_u} +\frac{(\vec{\beta}_u -\hat{r}')\hat{r}' \cdot \dot{\vec{\beta}_u}}{K}\right] \nonumber \\ & = & \frac{Q}{K^3}\left[\frac{\hat{r}'-\vec{\beta}_u}{\gamma_u^2(r')^2} + \frac{\hat{r}' \times [(\hat{r}'-\vec{\beta}_u) \times\dot{\vec{\beta}_u}]}{c r'} \right]. \end{eqnarray} \par The retarded LW magnetic field given by (A.6) is now calculated. \par Since $\vec{u}$ is assumed to be parallel to the $x$-axis, it follows that \begin{eqnarray} -\frac{Q}{K}\vec{\beta}_u \times\vec{\nabla}\left(\frac{1}{r'}\right) & = & -\frac{Q\hat{k} \beta_u}{K}\frac{\partial ~}{\partial y_q}\left(\frac{1}{r'}\right) \nonumber \\ & = & \frac{Q\hat{k} \beta_u}{K(r')^2}\frac{\partial r'}{\partial y_q} = \frac{Q\hat{k} \beta_u y_q}{K^2(r')^3} \nonumber \\ & = & \frac{Q(\vec{\beta}_u \times \hat{r}')}{K^2(r')^2}. \end{eqnarray} Similarly \begin{eqnarray} -\frac{Q}{r'}\vec{\beta}_u \times\vec{\nabla}\left(\frac{1}{K}\right) & = & -\frac{Q\hat{k} \beta_u}{r'}\frac{\partial ~}{\partial y_q}\left(\frac{1}{K}\right) \nonumber \\ & = & -\frac{Q\hat{k} \beta_u}{K^2 r'}\frac{\partial (\hat{r}' \cdot \vec{\beta}_u)}{\partial y_q} \nonumber \\ & = & -\frac{Q \hat{k} \beta_u}{K^2r'} \left(\hat{r}' \cdot \frac{\partial \vec{\beta}_u}{\partial y_q}+\vec{\beta}_u \cdot \frac{\partial \hat{r}'}{\partial y_q} \right).
\end{eqnarray} Evaluating the first term in brackets on the right side of Eq.(A.18), \begin{equation} \hat{r}' \cdot \frac{\partial \vec{\beta}_u}{\partial y_q} = \frac{\partial t'}{\partial y_q}(\hat{r}' \cdot \dot{\vec{\beta}_u}) = -\frac{y_q(\hat{r}' \cdot \dot{\vec{\beta}_u})}{cK r'} \end{equation} where the relation \begin{equation} \frac{\partial t'}{\partial y_q} = -\frac{1}{c} \frac{\partial r'}{\partial y_q} \end{equation} given by differentiating the retardation condition $t' = t-r'/c$ as well as Eq.(5.18) have been used. The second term in brackets on the right side of (A.18) is \begin{equation} \vec{\beta}_u \cdot \frac{\partial \hat{r}'}{\partial y_q} = \vec{\beta}_u \cdot \frac{\partial ~}{\partial y_q} \left(\frac{\vec{r}'}{r'}\right) = \vec{\beta}_u \cdot \left( \frac{1}{r'} \frac{\partial \vec{r}'}{\partial y_q}- \frac{\vec{r}'}{(r')^2 }\frac{\partial r'}{\partial y_q} \right). \end{equation} Assuming, without loss of generality, that the vector $\vec{r}'$ is confined to the $x$-$y$ plane, \begin{equation} \frac{\partial \vec{r}'}{\partial y_q} = -\hat{\imath}\frac{d x_Q}{d t'}\frac{\partial t'}{\partial y_q} + \hat{\jmath}. \end{equation} Combining (A.21) and (A.22), again using (A.20) and (5.18), gives \begin{equation} \vec{\beta}_u \cdot \frac{\partial \hat{r}'}{\partial y_q} = \frac{[\beta_u^2- \hat{r}' \cdot \vec{\beta}_u]y_q}{K(r')^2}.
\end{equation} Combining (A.18), (A.19) and (A.23), \begin{equation} -\frac{Q}{K}\vec{\beta}_u \times\vec{\nabla}\left(\frac{1}{K}\right) = \frac{Q \vec{\beta}_u \times \hat{r}'}{K^3}\left[ \frac{[ \hat{r}' \cdot \vec{\beta}_u- \beta_u^2]}{(r')^2} +\frac{\hat{r}' \cdot \dot{\vec{\beta}_u}}{cr'} \right]. \end{equation} The third term on the right side of (A.6) is \begin{eqnarray} \frac{Q}{K r'}(\vec{\nabla} \times \vec{\beta}_u) & = & -\frac{Q \hat{k}}{K r'} \frac{\partial \beta_u}{\partial y_q} = -\frac{Q \hat{k}}{K r'} \frac{\partial t'}{\partial y_q} \dot{\beta}_u \nonumber \\ & = & \frac{Q \hat{k}}{c K r'} \frac{\partial r'}{\partial y_q} \dot{\beta}_u = \frac{Q (\vec{\beta}_u \times \hat{r}')}{c K^2 r'\beta_u}\dot{\beta}_u \end{eqnarray} where (A.20) and (5.18) have been used. \par Collecting together (A.17), (A.24) and (A.25), the magnetic field generated by the LW potentials is: \begin{eqnarray} \vec{{\rm B}} & = & \left. \left\{\frac{Q (\vec{\beta}_u \times \hat{r}')}{K^3}\left[\frac{K + \hat{r}' \cdot \vec{\beta}_u -\beta_u^2}{(r')^2} +\frac{K \dot{\beta}_u+ \beta_u(\hat{r}' \cdot \dot{\vec{\beta}_u})}{c r' \beta_u}\right] \right\} \right|_{t' = t'_Q}, \nonumber \\ & = & \left. \left\{\frac{Q (\vec{\beta}_u \times \hat{r}')}{K^3}\left[\frac{1}{\gamma_u^2(r')^2} +\frac{\dot{\beta}_u(1-\hat{r}' \cdot \vec{\beta}_u) + \beta_u(\hat{r}' \cdot \dot{\vec{\beta}_u})}{c r' \beta_u}\right] \right\} \right|_{t' = t'_Q}. \end{eqnarray} (A.16) and (A.26) are the formulae (6.6) and (6.7) of the text. \par The consistency of the fields of Eqs.(6.44) and (6.45) with the LW fields of (6.6) and (6.7) claimed in Ref.~\cite{GH} is now investigated.
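\par As a check on the passage from the first to the second line of (A.26) above, note the elementary identity \[ K + \hat{r}' \cdot \vec{\beta}_u -\beta_u^2 = 1-(\hat{r}' \cdot \vec{\beta}_u)+\hat{r}' \cdot \vec{\beta}_u -\beta_u^2 = 1-\beta_u^2 = \frac{1}{\gamma_u^2}, \] which supplies the factor $1/\gamma_u^2(r')^2$ in the final form of the magnetic field.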
The equations analogous to (A.5) and (A.6) given by using the chain rule to expand the derivatives in (6.44) and (6.45) are: \begin{eqnarray} \vec{{\rm E}} & = & \frac{Q \hat{r}'}{K (r')^2} + \frac{Q \hat{r}'}{c r'} \frac{\partial ~}{\partial t}\left(\frac{1}{K}\right) + \frac{Q \hat{r}'}{c K} \frac{\partial ~}{\partial t}\left(\frac{1}{r'}\right)+ \frac{Q}{c K r'}\frac{\partial \hat{r}'}{\partial t} \nonumber \\ & & -\frac{Q \vec{\beta}_u}{c K} \frac{\partial ~}{\partial t}\left(\frac{1}{r'}\right) -\frac{Q \vec{\beta}_u}{c r'} \frac{\partial ~}{\partial t}\left(\frac{1}{K}\right) -\frac{Q}{c K r'} \frac{\partial \vec{\beta}_u }{\partial t}, \\ \vec{{\rm B}} & = & \frac{Q (\vec{\beta}_u \times \hat{r}')}{K (r')^2}+ \frac{Q(\vec{\beta}_u \times \hat{r}')}{c r'} \frac{\partial ~}{\partial t}\left(\frac{1}{K}\right)+ \frac{Q (\vec{\beta}_u \times \hat{r}')}{c K} \frac{\partial ~}{\partial t}\left(\frac{1}{r'}\right) \nonumber \\ & & +\frac{Q}{c K r'}\left(\vec{\beta}_u \times\frac{\partial \hat{r}'}{\partial t}\right)-\frac{Q}{c K r'} \left( \hat{r}' \times \frac{\partial \vec{\beta}_u }{\partial t}\right). \end{eqnarray} Comparison with (A.5) and (A.6) shows that the last three terms in (A.5) and (A.27) (originating from the time derivative in (A.3)) are the same but all other terms in (A.27) and (A.28) differ from those in (A.5) and (A.6). Thus to compare (A.5) and (A.27) the derivatives in the second, third and fourth terms on the right of (A.27) must be evaluated, while to compare (A.6) and (A.28) all derivatives on the right side of (A.28) must be evaluated. This is readily done using, {\it mutatis mutandis}, the formulae obtained above.
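\par In these reductions, as in the final step of (A.16), the acceleration terms are collected by means of the vector triple-product identity \[ \hat{r}' \times [(\hat{r}'-\vec{\beta}_u) \times\dot{\vec{\beta}_u}] = (\hat{r}'-\vec{\beta}_u)(\hat{r}' \cdot \dot{\vec{\beta}_u}) - \dot{\vec{\beta}_u}[1-(\hat{r}' \cdot \vec{\beta}_u)] = (\hat{r}'-\vec{\beta}_u)(\hat{r}' \cdot \dot{\vec{\beta}_u}) - K\dot{\vec{\beta}_u}, \] so that $-K\dot{\vec{\beta}_u}-(\vec{\beta}_u-\hat{r}')(\hat{r}' \cdot \dot{\vec{\beta}_u}) = \hat{r}' \times [(\hat{r}'-\vec{\beta}_u) \times\dot{\vec{\beta}_u}]$.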
\par The second term in (A.27) is \begin{equation} \frac{Q \hat{r}'}{c r'} \frac{\partial ~}{\partial t}\left(\frac{1}{K}\right) = \frac{Q \hat{r}'}{c K r'} \frac{\partial ~}{\partial t'}\left(\frac{1}{K}\right) = \frac{Q \hat{r}'}{K^3}\left\{ \frac{(\hat{r}' \cdot \vec{\beta}_u)^2-\beta_u^2}{(r')^2} + \frac{\hat{r}' \cdot \dot{\vec{\beta}_u}}{c r'}\right\} \end{equation} by analogy with Eq.(A.15). \par The third term is \begin{equation} \frac{Q \hat{r}'}{c K}\frac{\partial ~}{\partial t}\left(\frac{1}{r'}\right) = \frac{Q \hat{r}'}{c K^2}\frac{\partial ~}{\partial t'}\left(\frac{1}{r'}\right) = -\frac{Q \hat{r}'}{c K^2 (r')^2} \frac{\partial r'}{\partial t'} = \frac{Q \hat{r}' (\hat{r}' \cdot \vec{\beta}_u)}{K^2 (r')^2} \end{equation} where (5.15) and (5.13) have been used. \par The fourth term is \begin{equation} \frac{Q}{c K r'}\frac{\partial \hat{r}'}{\partial t} = \frac{Q}{c K^2 r'}\frac{\partial \hat{r}'}{\partial t'} =\frac{Q [ \hat{r}'(\hat{r}' \cdot \vec{\beta}_u)-\vec{\beta}_u]}{K^2(r')^2}. \end{equation} Substituting (A.29)-(A.31) into (A.27) as well as the previously obtained terms, and performing some algebraic simplification, gives \begin{equation} \vec{{\rm E}} = \frac{Q}{K^3}\left[\frac{\hat{r}'-\vec{\beta}_u}{\gamma_u^2(r')^2} + \frac{\hat{r}' \times [(\hat{r}'-\vec{\beta}_u) \times\dot{\vec{\beta}_u}]}{c r'} \right], \end{equation} which is the LW field of Eq.~(A.16). Noting that the first three terms of (A.28) differ from those of (A.27) by the replacement $\hat{r}' \rightarrow \vec{\beta}_u \times \hat{r}'$ and using the above results for the latter terms, gives, after algebraic simplification, the LW magnetic field of Eq.~(6.7). \newpage \par{\bf Appendix B} \renewcommand{\theequation}{B.\arabic{equation}} \setcounter{equation}{0} \par For clarity the total time derivatives in (6.6) are replaced by the corresponding partial derivatives with respect to the present time, $t$, for a fixed value of the field point position $\vec{x}_q$.
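\par Throughout this appendix such derivatives at fixed $\vec{x}_q$ are converted to retarded-time derivatives by the chain rule together with (5.15): \[ \frac{\partial ~}{\partial t} = \frac{\partial t'}{\partial t}\frac{\partial ~}{\partial t'} = \frac{1}{1-\hat{r}' \cdot \vec{\beta}_u}\frac{\partial ~}{\partial t'}. \]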
\par The second term on the right side of (6.10) contains the derivative: \begin{equation} \frac{\partial~}{\partial t}\left(\frac{\hat{r}'}{(r')^2}\right) = \frac{\partial t'}{\partial t} \frac{\partial~}{\partial t'} \left(\frac{\vec{r}'}{(r')^3}\right) = \frac{\partial t'}{\partial t}\left[\frac{1}{(r')^3} \frac{\partial \vec{r}'}{\partial t'} -\frac{3 \vec{r}'}{(r')^4}\frac{\partial r'}{\partial t'}\right]. \end{equation} Since the vector $\vec{r}'$ is confined to the $x$-$y$ plane, \begin{equation} \frac{\partial \vec{r}'}{\partial t'} = \hat{\imath} \frac{\partial (x_q-x_Q)}{\partial t'} + \hat{\jmath} \frac{\partial y_q}{\partial t'} = -c\vec{\beta}_u \end{equation} since $\partial y_q/\partial t' = 0$ and $c \beta_u = d x_Q/dt'$. (5.13), (5.15), (B.1) and (B.2) give: \begin{equation} \frac{r'}{c}\frac{\partial~}{\partial t}\left(\frac{\hat{r}'}{(r')^2}\right) = \frac{1}{1-\hat{r}' \cdot \vec{\beta}_u}\left[ \frac{3 \hat{r}'( \hat{r}' \cdot \vec{\beta}_u) - \vec{\beta}_u}{(r')^2}\right].
\end{equation} \par Considering now the last term on the right side of (6.10): \begin{equation} \frac{\partial^2 \hat{r}'}{\partial t^2} = \frac{\partial t'}{\partial t}\frac{\partial~}{\partial t'} \left[\frac{\partial t'}{\partial t}\frac{\partial \hat{r}'}{\partial t'}\right] = \frac{\partial t'}{\partial t}\left[\frac{\partial \hat{r}'}{\partial t'} \frac{\partial~}{\partial t'} \left(\frac{\partial t'}{\partial t}\right)+ \frac{\partial t'}{\partial t} \frac{\partial^2 \hat{r}'}{\partial t'^2}\right] \end{equation} where \begin{eqnarray} \frac{\partial~}{\partial t'}\left(\frac{\partial t'}{\partial t}\right) & = & \frac{\partial~}{\partial t'}\left(\frac{1}{1-\hat{r}' \cdot \vec{\beta}_u}\right) = \frac{1}{(1-\hat{r}' \cdot \vec{\beta}_u)^2} \frac{\partial(\hat{r}' \cdot \vec{\beta}_u)}{\partial t'} \nonumber \\ & = & \frac{1}{(1-\hat{r}' \cdot \vec{\beta}_u)^2}\left[ c \frac{[(\hat{r}' \cdot \vec{\beta}_u)^2 -\beta_u^2]} {r'}+ (\hat{r}' \cdot \dot{\vec{\beta}_u})\right] \end{eqnarray} where (A.14) has been used. It also follows from (A.14) that \begin{eqnarray} \frac{\partial^2 \hat{r}'}{\partial t'^2} & = & - \frac{c[\hat{r}'(\hat{r}' \cdot \vec{\beta}_u)-\vec{\beta}_u]}{(r')^2}\frac{\partial r'}{\partial t'} +\frac{c}{r'}\left[\left(\frac{\partial \hat{r}'}{\partial t'}\right)(\hat{r}' \cdot \vec{\beta}_u) + \hat{r}'\left(\frac{\partial \hat{r}'}{\partial t'}\right) \cdot \vec{\beta}_u + \hat{r}'(\hat{r}' \cdot \dot{\vec{\beta}_u})- \dot{\vec{\beta}_u}\right] \nonumber \\ & = & c^2\left[\frac{\hat{r}'[3(\hat{r}' \cdot \vec{\beta}_u)^2-\beta_u^2] - 2\vec{\beta}_u(\hat{r}' \cdot \vec{\beta}_u)}{(r')^2}\right] + \frac{ c[\hat{r}'(\hat{r}' \cdot \dot{\vec{\beta}_u})-\dot{\vec{\beta}_u}]}{r'}.
\end{eqnarray} Combining (5.15), (A.14), (B.5) and (B.6) then gives \begin{eqnarray} \frac{1}{c^2} \frac{\partial^2 \hat{r}'}{\partial t^2} & = & \frac{1}{(1-\hat{r}' \cdot \vec{\beta}_u)} \left\{\frac{[\hat{r}'(\hat{r}'\cdot\vec{\beta}_u)- \vec{\beta}_u]}{(1-\hat{r}' \cdot \vec{\beta}_u)^2} \left[\frac{(\hat{r}'\cdot \vec{\beta}_u)^2-\beta_u^2}{(r')^2}+\frac{\hat{r}' \cdot \dot{\vec{\beta}_u}} {c r'}\right]\right. \nonumber \\ &+&\left. \frac{1}{(1-\hat{r}' \cdot \vec{\beta}_u)}\left[\frac{\hat{r}'[3(\hat{r}' \cdot \vec{\beta}_u)^2 -\beta_u^2] -2\vec{\beta}_u(\hat{r}'\cdot \vec{\beta}_u)}{(r')^2}+ \frac{\hat{r}'( \hat{r}' \cdot \dot{\vec{\beta}_u})- \dot{\vec{\beta}_u}}{c r'}\right]\right\}. \end{eqnarray} Inserting (B.3) and (B.7) into (6.10) yields Eq.(6.12) of the text. \newpage
\section{Introduction and preliminaries} \label{introduction} Let $\operatorname{M}_{\mathbb P^n}(rm+\chi)$ be the moduli space of Gieseker semi-stable sheaves on the complex projective space $\mathbb P^n$ having Hilbert polynomial $P(m) = rm+\chi$. Le Potier \cite{lepotier} showed that $\operatorname{M}_{\mathbb P^2}(rm+\chi)$ is irreducible and, if $r$ and $\chi$ are coprime, is smooth. For low multiplicity the homology of $\operatorname{M}_{\mathbb P^2}(rm+\chi)$ has been studied in \cite{choi_chung_moduli, choi_chung_geometry}, by the wall-crossing method, and in \cite{choi_maican, maican_international, maican_sciences} by the Bia\l{y}nicki-Birula method. When $n > 2$ the moduli space is no longer irreducible. For instance, according to \cite{freiermuth_trautmann}, $\operatorname{M}_{\mathbb P^3}(3m+1)$ has two irreducible components meeting transversally. The focus of this paper is the moduli space $\mathbf M = \operatorname{M}_{\mathbb P^3}(4m+1)$ of stable sheaves on $\mathbb P^3$ with Hilbert polynomial $4m+1$. This has already been investigated in \cite{choi_chung_maican} using wall-crossing, by relating $\mathbf M$ to $\operatorname{Hilb}_{\mathbb P^3}(4m+1)$. The main result of \cite{choi_chung_maican} states that $\mathbf M$ consists of three irreducible components, denoted $\overline{\mathbf R}$, $\overline{\mathbf E}$, $\mathbf{P}$, of dimension $16$, $17$ and $20$, respectively. The generic sheaves in $\overline{\mathbf R}$ are structure sheaves of rational quartic curves. The generic sheaves in $\overline{\mathbf E}$ are of the form $\mathcal O_E(P)$, where $E$ is an elliptic quartic curve and $P$ is a point on $E$. The third irreducible component parametrizes the planar sheaves. The purpose of this paper is to reprove the decomposition of $\mathbf M$ into irreducible components without using the wall-crossing method, see Theorem \ref{main_theorem}. We achieve this as follows.
Using the decomposition of $\operatorname{Hilb}_{\mathbb P^3}(4m+1)$ into irreducible components, found in \cite{chen_nollet}, we show that the subset of $\mathbf M$ of sheaves generated by a global section is irreducible, see Proposition \ref{R_irreducible}. This provides our first irreducible component. We then describe the sheaves having support an elliptic quartic curve, see Section \ref{elliptic}. To show that the set of such sheaves ${\mathcal F}$ is irreducible we use results from \cite{vainsencher} regarding the geometry of $\operatorname{Hilb}_{\mathbb P^3}(4m)$. Given ${\mathcal F}$, we construct in Proposition \ref{E_closure} a variety $\mathbf{W}$ together with a map $\sigma \colon \mathbf{W} \to \Gamma$, the support map, where $\Gamma \subset \operatorname{Hilb}_{\mathbb P^3}(4m)$ is an irreducible quasi-projective curve, such that ${\mathcal F} \in \sigma^{-1}(x)$ for a point $x \in \Gamma$ and such that $\Gamma \setminus \{ x \}$ consists only of smooth curves. Moreover, the fibers of $\sigma$ are irreducible, hence $\mathbf{W}$ is irreducible, and hence ${\mathcal F}$ is contained in the closure of the set of sheaves with support smooth elliptic curves. Thus we obtain the second irreducible component. The set $\mathbf{P}$ of planar sheaves is irreducible because it is a bundle over the Grassmannian of planes in $\mathbb P^3$ with fiber $\operatorname{M}_{\mathbb P^2}(4m+1)$, which is, as mentioned above, irreducible. We also rely on the cohomological classification of sheaves in $\mathbf M$ found in \cite[Theorem 6.1]{choi_chung_maican}, which does not use the wall-crossing method (it uses the Beilinson spectral sequence). We fix a $4$-dimensional vector space $V$ over $\mathbb C$ and we identify $\mathbb P^3$ with $\mathbb P(V)$. We fix a basis $\{ X, Y, Z, W \}$ of $V^*$. We quote below \cite[Theorem 6.1]{choi_chung_maican}: \begin{theorem} \label{homological_conditions} Let ${\mathcal F}$ give a point in $\operatorname{M}_{\mathbb P^3}(4m+1)$.
Then ${\mathcal F}$ satisfies one of the following cohomological conditions: \begin{enumerate} \item[(i)] $\operatorname{h}^0({\mathcal F} \otimes \Omega^2(2)) = 0$, $\operatorname{h}^0({\mathcal F} \otimes \Omega^1(1)) = 0$, $\operatorname{h}^0({\mathcal F}) = 1$; \item[(ii)] $\operatorname{h}^0({\mathcal F} \otimes \Omega^2(2)) = 0$, $\operatorname{h}^0({\mathcal F} \otimes \Omega^1(1)) = 1$, $\operatorname{h}^0({\mathcal F}) = 1$; \item[(iii)] $\operatorname{h}^0({\mathcal F} \otimes \Omega^2(2)) = 1$, $\operatorname{h}^0({\mathcal F} \otimes \Omega^1(1)) = 3$, $\operatorname{h}^0({\mathcal F}) = 2$. \end{enumerate} \end{theorem} Let $\mathbf M_0$, $\mathbf M_1$, $\mathbf M_2 \subset \mathbf M$ be the subsets of sheaves satisfying conditions (i), (ii), respectively, (iii). We will call them \emph{strata}. Clearly, $\mathbf M_0$ is open, $\mathbf M_1$ is locally closed and $\mathbf M_2$ is closed. We also quote the classification of the sheaves in each stratum in terms of locally free resolutions, which was carried out at \cite[Theorem 6.1]{choi_chung_maican}. The sheaves in $\mathbf M_0$ are precisely the sheaves having a resolution of the form \begin{equation} \label{sheaves_in_R} 0 \longrightarrow 3\mathcal O(-3) \stackrel{\psi}{\longrightarrow} 5\mathcal O(-2) \stackrel{\varphi}{\longrightarrow} \mathcal O(-1) \oplus \mathcal O \longrightarrow {\mathcal F} \longrightarrow 0 \end{equation} \[ \varphi = \left[ \begin{array}{ccccc} X & Y & Z & W & 0 \\ q_1 & q_2 & q_3 & q_4 & q_5 \end{array} \right] \] or a resolution of the form \begin{equation} \label{sheaves_in_E} 0 \longrightarrow 3\mathcal O(-3) \stackrel{\psi}{\longrightarrow} 5\mathcal O(-2) \stackrel{\varphi}{\longrightarrow} \mathcal O(-1) \oplus \mathcal O \longrightarrow {\mathcal F} \longrightarrow 0 \end{equation} \[ \varphi = \left[ \begin{array}{ccccc} l_1 & l_2 & l_3 & 0 & 0 \\ q_1 & q_2 & q_3 & q_4 & q_5 \end{array} \right] \] where $l_1$, $l_2$, $l_3$ are linearly independent. 
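As a consistency check on the resolutions (\ref{sheaves_in_R}) and (\ref{sheaves_in_E}), note that, using $\chi(\mathcal O_{\mathbb P^3}(k)) = \binom{k+3}{3}$ and the additivity of the Euler characteristic, \[ P_{{\mathcal F}}(m) = \chi(\mathcal O(m-1)) + \chi(\mathcal O(m)) - 5\chi(\mathcal O(m-2)) + 3\chi(\mathcal O(m-3)) = \frac{24m+6}{6} = 4m+1, \] as required.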
Let $\mathbf R, \mathbf E \subset \mathbf M_0$ be the subsets of sheaves having resolution (\ref{sheaves_in_R}), respectively, (\ref{sheaves_in_E}). Clearly, $\mathbf R$ is an open subset of $\mathbf M$ and consists of structure sheaves of rational quartic curves. The set $\mathbf E$ contains all extensions of $\mathbb C_P$ by $\mathcal O_E$, where $E$ is an elliptic quartic curve and $P$ is a point on $E$. The sheaves in $\mathbf M_1$ are precisely the sheaves having a resolution of the form \begin{equation} \label{sheaves_in_M_1} 0 \longrightarrow 3\mathcal O(-3) \stackrel{\psi}{\longrightarrow} 5\mathcal O(-2) \oplus \mathcal O(-1) \stackrel{\varphi}{\longrightarrow} 2\mathcal O(-1) \oplus \mathcal O \longrightarrow {\mathcal F} \longrightarrow 0 \end{equation} where $\varphi_{12} = 0$ and $\varphi_{11} \colon 5\mathcal O(-2) \to 2\mathcal O(-1)$ is not equivalent to a morphism represented by a matrix of the form \[ \left[ \begin{array}{ccccc} \star & \star & 0 & 0 & 0 \\ \star & \star & \star & \star & \star \end{array} \right] \qquad \text{or} \qquad \left[ \begin{array}{ccccc} \star & \star & \star & \star & 0 \\ \star & \star & \star & \star & 0 \end{array} \right]. \] The sheaves in $\mathbf M_2$ are precisely the sheaves of the form $\mathcal O_C(-P)(1)$, where $\mathcal O_C(-P) \subset \mathcal O_C$ denotes the ideal sheaf of a closed point $P$ in a planar quartic curve $C$. Assume now that ${\mathcal F}$ has resolution (\ref{sheaves_in_R}). Let $S \subset \mathbb P^3$ be the quadric surface given by the equation $q_5 = 0$. From the snake lemma we get the resolution \[ 0 \longrightarrow 3\mathcal O(-3) \longrightarrow \Omega^1(-1) \longrightarrow \mathcal O_S \longrightarrow {\mathcal F} \longrightarrow 0. \] We consider first the case when $S$ is smooth. The semi-stable sheaves on a smooth quadric surface with Hilbert polynomial $4m+1$ have been investigated in \cite{ballico_huh}.
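A similar Euler characteristic check applies to the resolution of ${\mathcal F}$ obtained above from the snake lemma: the Euler sequence $0 \to \Omega^1 \to 4\mathcal O(-1) \to \mathcal O \to 0$ gives $\chi(\Omega^1(m-1)) = 4\chi(\mathcal O(m-2)) - \chi(\mathcal O(m-1)) = m(m+1)(m-2)/2$, while $P_{\mathcal O_S}(m) = (m+1)^2$, hence \[ P_{{\mathcal F}}(m) = (m+1)^2 - \frac{m(m+1)(m-2)}{2} + \frac{m(m-1)(m-2)}{2} = 4m+1. \]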
We cite below the main result of \cite{ballico_huh}: \begin{proposition} \label{smooth_quadric} Let ${\mathcal F}$ be a coherent sheaf on $\mathbb P^1 \times \mathbb P^1$ that is semi-stable relative to the polarization $\mathcal O(1, 1)$ and such that $P_{{\mathcal F}}(m) = 4m +1$. Then precisely one of the following is true: \begin{enumerate} \item[(i)] ${\mathcal F}$ is the structure sheaf of a curve of type $(1,3)$; \item[(ii)] ${\mathcal F}$ is the structure sheaf of a curve of type $(3, 1)$; \item[(iii)] ${\mathcal F}$ is a non-split extension $0 \to \mathcal O_E \to {\mathcal F} \to \mathbb C_P \to 0$ for a curve $E$ in $\mathbb P^1 \times \mathbb P^1$ of type $(2, 2)$ and a point $P \in E$. Such an extension is unique up to isomorphism and satisfies the condition $\operatorname{H}^1({\mathcal F}) = 0$. \end{enumerate} Thus, $\operatorname{M}_{\mathbb P^1 \times \mathbb P^1}(4m+1)$ has three connected components. Two of these, $\mathbb P(\operatorname{H}^0(\mathcal O(1, 3)))$ and $\mathbb P(\operatorname{H}^0(\mathcal O(3, 1)))$, are isomorphic to $\mathbb P^7$. The third one is smooth, has dimension $9$, and is isomorphic to the universal elliptic curve in $\mathbb P(\operatorname{H}^0(\mathcal O(2, 2))) \times (\mathbb P^1 \times \mathbb P^1)$. The sheaves at \emph{(iii)} are precisely the sheaves having a resolution of the form \[ 0 \longrightarrow \mathcal O(-2, -1) \oplus \mathcal O(-1, -2) \stackrel{\varphi}{\longrightarrow} \mathcal O(-1, -1) \oplus \mathcal O \longrightarrow {\mathcal F} \longrightarrow 0 \] with $\varphi_{11} \neq 0$, $\varphi_{12} \neq 0$. \end{proposition} The following well-known lemma provides one of our main technical tools. \begin{lemma} Let $X$ be a projective scheme and $Y$ a subscheme. Let ${\mathcal F}$ be a coherent $\mathcal O_X$-module and let ${\mathcal G}$ be a coherent $\mathcal O_Y$-module. 
Then there is an exact sequence of vector spaces \begin{multline} \label{ext_sequence_1} 0 \longrightarrow \operatorname{Ext}^1_{\mathcal O_Y}({\mathcal F}_{|Y}, {\mathcal G}) \longrightarrow \operatorname{Ext}^1_{\mathcal O_X}({\mathcal F}, {\mathcal G}) \longrightarrow \operatorname{Hom}_{\mathcal O_Y}({\mathcal Tor}_1^{\mathcal O_X}({\mathcal F}, \mathcal O_Y), {\mathcal G}) \\ \longrightarrow \operatorname{Ext}^2_{\mathcal O_Y}({\mathcal F}_{|Y}, {\mathcal G}) \longrightarrow \operatorname{Ext}^2_{\mathcal O_X}({\mathcal F}, {\mathcal G}). \end{multline} In particular, if ${\mathcal F}$ is an $\mathcal O_Y$-module, then the above exact sequence takes the form \begin{multline} \label{ext_sequence_2} 0 \longrightarrow \operatorname{Ext}^1_{\mathcal O_Y}({\mathcal F}, {\mathcal G}) \longrightarrow \operatorname{Ext}^1_{\mathcal O_X}({\mathcal F}, {\mathcal G}) \longrightarrow \operatorname{Hom}_{\mathcal O_Y}({\mathcal F} \otimes_{\mathcal O_X} {\mathcal I}_Y, {\mathcal G}) \\ \longrightarrow \operatorname{Ext}^2_{\mathcal O_Y}({\mathcal F}, {\mathcal G}) \longrightarrow \operatorname{Ext}^2_{\mathcal O_X}({\mathcal F}, {\mathcal G}). \end{multline} \end{lemma} \section{Sheaves supported on rational quartic curves} \label{rational} Let $\mathbf R_0 \subset \mathbf R$ be the set of isomorphism classes of structure sheaves $\mathcal O_R$ of curves $R \subset S$ of type $(1, 3)$ or $(3, 1)$ on smooth quadrics $S \subset \mathbb P^3$. A curve of type $(1, 3)$ on $S$ can be deformed inside $\mathbb P^3$ to a curve of type $(3, 1)$, hence $\mathbf R_0$ is irreducible of dimension $16$. Let $\mathbf E_0 \subset \mathbf E$ be the set of isomorphism classes of non-split extensions of $\mathbb C_P$ by $\mathcal O_E$ for $E \subset S$ a curve of type $(2, 2)$ on a smooth quadric $S \subset \mathbb P^3$ and $P$ a closed point on $E$.
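A dimension count for $\mathbf E_0$ (sketched using the standard fact that an elliptic quartic curve in $\mathbb P^3$ is the base locus of a pencil of quadrics): such curves form a family of dimension \[ \dim \operatorname{Gr}(2, \operatorname{H}^0(\mathcal O_{\mathbb P^3}(2))) = 2(10-2) = 16, \] and the choice of the closed point $P \in E$ contributes one further parameter, for a total of $16 + 1 = 17$.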
From (\ref{ext_sequence_2}) and Proposition \ref{smooth_quadric} (iii) we have the exact sequence \[ 0 \longrightarrow \operatorname{Ext}^1_{\mathcal O_S} (\mathbb C_P, \mathcal O_E) \simeq \mathbb C \longrightarrow \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}} (\mathbb C_P, \mathcal O_E) \longrightarrow \operatorname{Hom}_{\mathcal O_S}^{} (\mathbb C_P, \mathcal O_E) = 0. \] We denote by $\mathcal O_E(P)$ the unique non-split extension of $\mathbb C_P$ by $\mathcal O_E$. Clearly, $\mathbf E_0$ is irreducible of dimension $17$. Let $\mathbf E_{\scriptstyle \operatorname{free}} \subset \mathbf E_0$ denote the open subset of sheaves that are locally free on their schematic support, which is equivalent to saying that $P \in \operatorname{reg}(E)$. Let $\mathbf{P} \subset \operatorname{M}_{\mathbb P^3}(4m+1)$ be the closed set of planar sheaves. It has dimension $20$. Let $\mathbf{P}_{\scriptstyle \operatorname{free}} \subset \mathbf{P}$ be the open subset of sheaves that are locally free on their support. According to \cite{iena}, $\mathbf{P} \setminus \mathbf{P}_{\scriptstyle \operatorname{free}}$ has codimension $2$ in $\mathbf{P}$. \begin{proposition} The closed sets $\overline{\mathbf R}_0$, $\overline{\mathbf E}_0$ and $\mathbf{P}$ are irreducible components of $\operatorname{M}_{\mathbb P^3}(4m+1)$. Moreover, $\mathbf R_0$, $\mathbf E_{\scriptstyle \operatorname{free}}$ and $\mathbf{P}_{\scriptstyle \operatorname{free}}$ are smooth open subsets of the moduli space. \end{proposition} \begin{proof} Let ${\mathcal F} = \mathcal O_R$ give a point in $\mathbf R_0$, where $R \subset S$ is a curve of, say, type $(1, 3)$. From Serre duality we have \[ \operatorname{Ext}^2_{\mathcal O_S}({\mathcal F}, {\mathcal F}) \simeq \operatorname{Hom}_{\mathcal O_S}^{}({\mathcal F}, {\mathcal F}(-2, -2))^* = 0. 
\] From the exact sequence (\ref{ext_sequence_2}) we get the relation \[ \operatorname{ext}^1_{\mathcal O_{\mathbb P^3}}({\mathcal F}, {\mathcal F}) = \operatorname{ext}^1_{\mathcal O_S}({\mathcal F}, {\mathcal F}) + \hom_{\mathcal O_S}^{}({\mathcal F}(-2), {\mathcal F}) = 7 + \operatorname{h}^0(\mathcal O_R(2, 2)) = 16. \] This shows that $\overline{\mathbf R}_0$ is an irreducible component of $\mathbf M$ and that $\mathbf R_0$ is smooth. Consider next ${\mathcal F} = \mathcal O_E(P)$ giving a point in $\mathbf E_0$. As above, we have the relation \[ \operatorname{ext}^1_{\mathcal O_{\mathbb P^3}}({\mathcal F}, {\mathcal F}) = \operatorname{ext}^1_{\mathcal O_S}({\mathcal F}, {\mathcal F}) + \hom_{\mathcal O_S}^{}({\mathcal F}(-2), {\mathcal F}) = 9 + \hom_{\mathcal O_S}^{}({\mathcal F}, {\mathcal F}(2, 2)). \] Assume, in addition, that ${\mathcal F}$ is locally free on $E$. Its rank must be $1$ because $E$ is a curve of multiplicity $4$. Thus \[ \operatorname{Hom}_{\mathcal O_S}^{}({\mathcal F}, {\mathcal F}(2, 2)) \simeq \operatorname{H}^0(\mathcal O_E(2, 2)) \simeq \mathbb C^8, \] hence $\operatorname{ext}^1_{\mathcal O_{\mathbb P^3}}({\mathcal F}, {\mathcal F}) = 17$. This shows that $\overline{\mathbf E}_0$ is an irreducible component of $\mathbf M$ and that $\mathbf E_{\scriptstyle \operatorname{free}}$ is smooth. Assume now that ${\mathcal F}$ is supported on a planar quartic curve $C \subset H$. Using Serre duality and (\ref{ext_sequence_2}) we get the relation \[ \operatorname{ext}^1_{\mathcal O_{\mathbb P^3}}({\mathcal F}, {\mathcal F}) = \operatorname{ext}^1_{\mathcal O_H}({\mathcal F}, {\mathcal F}) + \hom_{\mathcal O_H}^{}({\mathcal F}(-1), {\mathcal F}) = 17 + \hom_{\mathcal O_H}^{}({\mathcal F}, {\mathcal F}(1)). \] Assume, in addition, that ${\mathcal F}$ is locally free on $C$, so a line bundle. 
Thus \[ \operatorname{Hom}_{\mathcal O_H}^{}({\mathcal F}, {\mathcal F}(1)) \simeq \operatorname{H}^0(\mathcal O_C(1)) \simeq \mathbb C^3, \] hence $\operatorname{ext}^1_{\mathcal O_{\mathbb P^3}}({\mathcal F}, {\mathcal F}) = 20$. This shows that $\mathbf{P}$ is an irreducible component of $\mathbf M$ and that $\mathbf{P}_{\scriptstyle \operatorname{free}}$ is smooth. \end{proof} \begin{remark} \label{planar_sheaves} Let ${\mathcal F}$ be a one-dimensional sheaf on $\mathbb P^3$ without zero-dimensional torsion. Let ${\mathcal F}'$ be a planar subsheaf such that ${\mathcal F}/{\mathcal F}'$ has dimension zero. Then ${\mathcal F}$ is planar. Indeed, say that ${\mathcal F}'$ is an $\mathcal O_H$-module for a plane $H \subset \mathbb P^3$. From (\ref{ext_sequence_1}) we have the exact sequence \[ 0 \to \operatorname{Ext}^1_{\mathcal O_H}(({\mathcal F}/{\mathcal F}')_{| H}, {\mathcal F}') \to \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}}({\mathcal F}/{\mathcal F}', {\mathcal F}') \to \operatorname{Hom}_{\mathcal O_H}({\mathcal Tor}_1^{\mathcal O_{\mathbb P^3}}({\mathcal F}/{\mathcal F}', \mathcal O_H), {\mathcal F}'). \] The group on the right vanishes because ${\mathcal Tor}_1^{\mathcal O_{\mathbb P^3}}({\mathcal F}/{\mathcal F}', \mathcal O_H)$ is supported on finitely many points, yet ${\mathcal F}'$ has no zero-dimensional torsion. Thus ${\mathcal F} \in \operatorname{Ext}^1_{\mathcal O_H}(({\mathcal F}/{\mathcal F}')_{| H}, {\mathcal F}')$, so ${\mathcal F}$ is an $\mathcal O_H$-module. \end{remark} \begin{proposition} \label{non-planar_M_1} The non-planar sheaves in $\operatorname{M}_{\mathbb P^3}(4m+1)$ having resolution (\ref{sheaves_in_M_1}) are precisely the non-split extensions of the form \begin{equation} \label{sheaves_in_D'} 0 \longrightarrow \mathcal O_C \longrightarrow {\mathcal F} \longrightarrow \mathcal O_L \longrightarrow 0 \end{equation} where $C$ is a planar cubic curve and $L$ is a line meeting $C$ with multiplicity $1$. 
For such a sheaf, $\operatorname{H}^0({\mathcal F})$ generates $\mathcal O_C$. The set $\mathbf R$ consists precisely of the sheaves generated by a global section. The set $\mathbf E$ consists precisely of the sheaves ${\mathcal F}$ such that $\operatorname{H}^0({\mathcal F})$ generates a subsheaf with Hilbert polynomial $4m$. \end{proposition} \begin{proof} Let $\varphi$ be a morphism as at (\ref{sheaves_in_M_1}). Denote ${\mathcal G} = {\mathcal Coker}(\varphi_{11})$ and let $H \subset \mathbb P^3$ be the plane given by the equation $\varphi_{22} = 0$. From the snake lemma we have the exact sequence \[ \mathcal O_H \longrightarrow {\mathcal F} \longrightarrow {\mathcal G} \longrightarrow 0. \] We examine first the case when \[ \varphi_{11} \nsim \left[ \begin{array}{ccccc} 0 & 0 & \star & \star & \star \\ \star & \star & \star & \star & \star \end{array} \right]. \quad \text{Thus we may write} \quad \varphi_{11} = \left[ \begin{array}{ccccc} X & Y & Z & W & 0 \\ 0 & l_1 & l_2 & l_3 & l_4 \end{array} \right]. \] If $l_4$ is a multiple of $X$, then $P_{{\mathcal G}} = 3$ (see the proof of \cite[Theorem 6.1(iii)]{choi_chung_maican}), hence, by Remark \ref{planar_sheaves}, ${\mathcal F}$ is planar. Assume now that $l_4$ is not a multiple of $X$ and let $L \subset \mathbb P^3$ be the line given by the equations $X = 0$, $l_4 = 0$. Then ${\mathcal G}$ is a proper quotient sheaf of $\mathcal O_L(-1)$, hence it has support of dimension zero, and hence, by Remark \ref{planar_sheaves}, ${\mathcal F}$ is planar. It remains to examine the case when \[ \varphi_{11} = \left[ \begin{array}{ccccc} u_1 & u_2 & u_3 & 0 & 0 \\ 0 & v_1 & v_2 & v_3 & v_4 \end{array} \right]. \] Let $P$ be the point given by the ideal $(u_1, u_2, u_3)$ and let $L$ be the line given by the equations $v_3 = 0$, $v_4 = 0$. We have an exact sequence \[ \mathcal O_L(-1) \longrightarrow {\mathcal G} \longrightarrow \mathbb C_P \longrightarrow 0. 
\] If the first morphism is not injective, then ${\mathcal G}$ has dimension zero, hence ${\mathcal F}$ is planar. If ${\mathcal G}$ is an extension of $\mathbb C_P$ by $\mathcal O_L(-1)$, then this extension does not split, otherwise $\mathcal O_L(-1)$ would be a destabilizing quotient sheaf of ${\mathcal F}$. Thus, ${\mathcal G} \simeq \mathcal O_L$ and we have an exact sequence \[ 0 \longrightarrow {\mathcal E} \longrightarrow {\mathcal F} \longrightarrow \mathcal O_L \longrightarrow 0 \] where ${\mathcal E}$ gives a point in $\operatorname{M}_H(3m)$ and is generated by a global section. Thus ${\mathcal E}$ is the structure sheaf of a cubic curve $C \subset H$. If $L \subset H$, then from (\ref{ext_sequence_2}) we would have the exact sequence \[ 0 \longrightarrow \operatorname{Ext}^1_{\mathcal O_H} (\mathcal O_L, \mathcal O_C) \longrightarrow \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}} (\mathcal O_L, \mathcal O_C) \longrightarrow \operatorname{Hom}_{\mathcal O_H}(\mathcal O_L(-1), \mathcal O_C). \] The group on the right vanishes because $\mathcal O_C$ is stable. We deduce that ${\mathcal F}$ lies in $\operatorname{Ext}^1_{\mathcal O_H} (\mathcal O_L, \mathcal O_C)$, hence ${\mathcal F}$ is planar. Thus far we have shown that if ${\mathcal F}$ is non-planar and has resolution (\ref{sheaves_in_M_1}), then ${\mathcal F}$ is an extension as in the proposition. Conversely, given a non-split extension (\ref{sheaves_in_D'}), ${\mathcal F}$ is semi-stable, because $\mathcal O_C$ and $\mathcal O_L$ are stable. In view of Theorem \ref{homological_conditions}, since ${\mathcal F}$ is non-planar, we have $\operatorname{h}^0({\mathcal F}) = 1$. Thus $\operatorname{H}^0({\mathcal F})$ generates $\mathcal O_C$. It follows that ${\mathcal F}$ cannot have resolutions (\ref{sheaves_in_R}) or (\ref{sheaves_in_E}), otherwise $\operatorname{H}^0({\mathcal F})$ would generate ${\mathcal F}$ or would generate a subsheaf with Hilbert polynomial $4m$.
We conclude that ${\mathcal F}$ has resolution (\ref{sheaves_in_M_1}). The rest of the proposition follows from Theorem \ref{homological_conditions} and from the fact, proved in \cite{drezet_maican}, that for a planar sheaf ${\mathcal F}$ having resolution (\ref{sheaves_in_M_1}), the space of global sections generates a subsheaf with Hilbert polynomial $4m-2$ or it generates the structure sheaf of a cubic curve. \end{proof} \begin{proposition} \label{R_irreducible} The set $\mathbf R$ of sheaves in $\operatorname{M}_{\mathbb P^3}(4m+1)$ generated by a global section is irreducible. \end{proposition} \begin{proof} Let $\operatorname{Hilb}_{\mathbb P^3}(4m+1)^{\scriptstyle \operatorname{s}} \subset \operatorname{Hilb}_{\mathbb P^3}(4m+1)$ be the open subset of semi-stable quotients. The image of the canonical map \[ \operatorname{Hilb}_{\mathbb P^3}(4m+1)^{\scriptstyle \operatorname{s}} \longrightarrow \operatorname{M}_{\mathbb P^3}(4m+1) \] is $\mathbf R$. According to \cite[Theorem 4.9]{chen_nollet}, $\operatorname{Hilb}_{\mathbb P^3}(4m+1)$ has four irreducible components, denoted $H_1$, $H_2$, $H_3$, $H_4$. The generic point in $H_1$ is a rational quartic curve. The generic curve in $H_2$ is the disjoint union of a planar cubic and a line. The generic member of $H_3$ is the disjoint union of a point and an elliptic quartic curve. The generic member of $H_4$ is the disjoint union of a planar quartic curve and three distinct points. Thus, $H_2 \cup H_3 \cup H_4$ lies in the closed subset \[ H = \{ [\mathcal O \twoheadrightarrow \mathcal{S}] \mid \ \operatorname{h}^0(\mathcal{S}) \ge 2 \} \subset \operatorname{Hilb}_{\mathbb P^3}(4m+1). \] According to Theorem \ref{homological_conditions}, $H^{\scriptstyle \operatorname{s}} = \emptyset$. Indeed, any sheaf in $\mathbf M_2$ cannot be generated by a single global section. 
Thus, $\operatorname{Hilb}_{\mathbb P^3}(4m+1)^{\scriptstyle \operatorname{s}}$ is an open subset of $H_1$, hence it is irreducible, and hence $\mathbf R$ is irreducible. \end{proof} \section{Sheaves supported on elliptic quartic curves} \label{elliptic} We will next examine the sheaves ${\mathcal F}$ having resolution (\ref{sheaves_in_E}). Let $P$ be the point given by the ideal $(l_1, l_2, l_3)$. Notice that the subsheaf of ${\mathcal F}$ generated by $\operatorname{H}^0({\mathcal F})$ is the kernel of the canonical map ${\mathcal F} \to \mathbb C_P$. This shows that ${\mathcal F}$ is non-planar because, according to \cite{drezet_maican}, the global sections of a sheaf in $\operatorname{M}_{\mathbb P^2}(4m+1)$ whose first cohomology vanishes generate a subsheaf with Hilbert polynomial $4m - 2$ or the structure sheaf of a planar cubic curve, which is not the case here. We consider first the case when $q_4$ and $q_5$ have no common factor, so they define a curve $E$. Applying the snake lemma to the diagram \[ \xymatrix { & & 0 \ar[d] & 0 \ar[d] \\ 0 \ar[r] & \mathcal O(-4) \ar[r]^-{\tiny \left[ \!\!\! \begin{array}{c} q_5 \\ q_4 \end{array} \!\!\! \right]} & 2\mathcal O(-2) \ar[r]^-{[-q_4 \ q_5]} \ar[d] & \mathcal O \ar[r] \ar[d] & \mathcal O_E \ar[r] & 0 \\ 0 \ar[r] & 3\mathcal O(-3) \ar[r] & 5\mathcal O(-2) \ar[r]^-{\varphi} \ar[d] & \mathcal O(-1) \oplus \mathcal O \ar[r] \ar[d] & {\mathcal F} \ar[r] & 0 \\ 0 \ar[r] & \mathcal{K} \ar[r] & 3\mathcal O(-2) \ar[r]^-{[l_1 \ l_2 \ l_3]} \ar[d] & \mathcal O(-1) \ar[r] \ar[d] & \mathbb C_P \ar[r] & 0 \\ & & 0 & 0 } \] we see that ${\mathcal F}$ is an extension of $\mathbb C_P$ by $\mathcal O_E$. From Serre duality we have \[ \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}}(\mathbb C_P, \mathcal O_E) \simeq \operatorname{Ext}^2_{\mathcal O_{\mathbb P^3}}(\mathcal O_E, \mathbb C_P)^* \simeq \mathbb C. 
\] The group in the middle can be determined by applying $\operatorname{Hom}(\rule{7pt}{.5pt}, \mathbb C_P)$ to the first row of the diagram above. We may write ${\mathcal F} = \mathcal O_E(P)$. \begin{proposition} \label{F_stable} The sheaf $\mathcal O_E(P)$ is stable. \end{proposition} \begin{proof} We will show that $\mathcal O_E$ is stable, forcing $\mathcal O_E(P)$ to be stable. To prove that $\mathcal O_E$ is stable, we must show that it does not contain a stable subsheaf ${\mathcal E}$ having one of the following Hilbert polynomials: $m$, $m+1$ (i.e. the structure sheaf of a line), $2m$, $2m+1$ (i.e. the structure sheaf of a conic curve), $3m$, $3m+1$. The structure sheaf of a line contains subsheaves having Hilbert polynomial $m$ and the structure sheaf of a conic curve contains subsheaves having Hilbert polynomial $2m$. Thus, it is enough to consider only the Hilbert polynomials $m$, $2m$, $3m+1$, $3m$. In the first case, we have a commutative diagram \[ \xymatrix { 0 \ar[r] & \mathcal O(-3) \ar[d]^-{\gamma} \ar[r] & 2\mathcal O(-2) \ar[r] \ar[d]^-{\beta} & \mathcal O(-1) \ar[r] \ar[d]^-{\alpha} & {\mathcal E} \ar[r] \ar[d] & 0 \\ 0 \ar[r] & \mathcal O(-4) \ar[r] & 2\mathcal O(-2) \ar[r] & \mathcal O \ar[r] & \mathcal O_E \ar[r] & 0 } \] in which $\alpha \neq 0$. It follows that $\mathcal O(-3) \simeq {\mathcal Ker}(\gamma) \simeq {\mathcal Ker}(\beta)$, which is absurd. In the second case, we get a commutative diagram \[ \xymatrix { 0 \ar[r] & 2\mathcal O(-3) \ar[d]^-{\gamma} \ar[r] & 4\mathcal O(-2) \ar[r] \ar[d]^-{\beta} & 2\mathcal O(-1) \ar[r] \ar[d]^-{\alpha} & {\mathcal E} \ar[r] \ar[d] & 0 \\ 0 \ar[r] & \mathcal O(-4) \ar[r] & 2\mathcal O(-2) \ar[r] & \mathcal O \ar[r] & \mathcal O_E \ar[r] & 0 } \] in which $\alpha \neq 0$, hence ${\mathcal Ker}(\alpha) \simeq \mathcal O(-1)$ or $\mathcal O(-2)$. 
From the exact sequence \[ 0 \longrightarrow 2\mathcal O(-3) \simeq {\mathcal Ker}(\gamma) \longrightarrow {\mathcal Ker}(\beta) \longrightarrow {\mathcal Ker}(\alpha) \longrightarrow {\mathcal Coker}(\gamma) \simeq \mathcal O(-4) \] we see that ${\mathcal Ker}(\beta) \simeq 3\mathcal O(-2)$ and we get the exact sequence \[ 0 \longrightarrow 2\mathcal O(-3) \longrightarrow 3\mathcal O(-2) \longrightarrow {\mathcal Ker}(\alpha) \longrightarrow 0. \] Such an exact sequence cannot exist: comparing first Chern classes, $c_1(2\mathcal O(-3)) = c_1(3\mathcal O(-2)) = -6$, so ${\mathcal Ker}(\alpha)$ would have degree zero, contradicting the fact that ${\mathcal Ker}(\alpha) \simeq \mathcal O(-1)$ or $\mathcal O(-2)$. In the third case, we use the resolution of ${\mathcal E}$ given at \cite[Theorem 1.1]{freiermuth_trautmann}. We obtain a commutative diagram \[ \xymatrix { 0 \ar[r] & 2\mathcal O(-3) \ar[d]^-{\gamma} \ar[r] & 3\mathcal O(-2) \oplus \mathcal O(-1) \ar[r] \ar[d]^-{\beta} & \mathcal O(-1) \oplus \mathcal O \ar[r] \ar[d]^-{\alpha} & {\mathcal E} \ar[r] \ar[d] & 0 \\ 0 \ar[r] & \mathcal O(-4) \ar[r] & 2\mathcal O(-2) \ar[r] & \mathcal O \ar[r] & \mathcal O_E \ar[r] & 0 } \] in which $\alpha$ is non-zero on global sections, hence ${\mathcal Ker}(\alpha) \simeq \mathcal O(-1)$. We obtain a contradiction from the exact sequence \[ 0 \longrightarrow 2\mathcal O(-3) \simeq {\mathcal Ker}(\gamma) \longrightarrow {\mathcal Ker}(\beta_{11}) \oplus \mathcal O(-1) \longrightarrow {\mathcal Ker}(\alpha) \longrightarrow 0. \] Assume, finally, that ${\mathcal E}$ gives a stable point in $\operatorname{M}_{\mathbb P^3}(3m)$. If $\operatorname{H}^0({\mathcal E}) \neq 0$, then it is easy to see that ${\mathcal E}$ is the structure sheaf of a planar cubic curve, hence we get a commutative diagram \[ \xymatrix { 0 \ar[r] & \mathcal O(-4) \ar[d]^-{\gamma} \ar[r] & \mathcal O(-3) \oplus \mathcal O(-1) \ar[r] \ar[d]^-{\beta} & \mathcal O \ar[r] \ar[d]^-{\alpha} & {\mathcal E} \ar[r] \ar[d] & 0 \\ 0 \ar[r] & \mathcal O(-4) \ar[r] & 2\mathcal O(-2) \ar[r] & \mathcal O \ar[r] & \mathcal O_E \ar[r] & 0 } \] in which $\alpha$ is injective.
We get a contradiction from the fact that $\mathcal O(-1)$ is a subsheaf of ${\mathcal Ker}(\beta) \simeq {\mathcal Ker}(\gamma)$. If $\operatorname{H}^0({\mathcal E}) = 0$, then we get a commutative diagram of the form \[ \xymatrix { 0 \ar[r] & 3\mathcal O(-3) \ar[d]^-{\gamma} \ar[r] & 6\mathcal O(-2) \ar[r] \ar[d]^-{\beta} & 3\mathcal O(-1) \ar[r] \ar[d]^-{\alpha} & {\mathcal E} \ar[r] \ar[d] & 0 \\ 0 \ar[r] & \mathcal O(-4) \ar[r] & 2\mathcal O(-2) \ar[r] & \mathcal O \ar[r] & \mathcal O_E \ar[r] & 0 } \] It is easy to see that $\alpha(1)$ is injective on global sections, hence ${\mathcal Coker}(\alpha)$ is isomorphic to the structure sheaf of a point and ${\mathcal Coker}(\beta) \simeq \mathcal O(-2)$. We get a contradiction from the exact sequence \[ \mathcal O(-4) \simeq {\mathcal Coker}(\gamma) \longrightarrow {\mathcal Coker}(\beta) \longrightarrow {\mathcal Coker}(\alpha). \qedhere \] \end{proof} To finish the discussion about sheaves at Theorem \ref{homological_conditions} (i), we need to examine the case when $q_4 = u v_1$ and $q_5 = u v_2$ with linearly independent $v_1, v_2 \in V^*$. Let $H$ be the plane given by the equation $u = 0$ and $L$ the line given by the equations $v_1 = 0$, $v_2 = 0$. We apply the snake lemma to the diagram \[ \xymatrix { & & 0 \ar[d] & & 0 \ar[d] \\ 0 \ar[r] & \mathcal O(-3) \ar[r] & 2\mathcal O(-2) \ar[rr]^-{[v_1 \ v_2]} \ar[d] & & \mathcal O(-1) \ar[r] \ar[d]^-{\tiny \left[ \!\! \begin{array}{c} 0 \\ u \end{array} \!\! \right]} & \mathcal O_L(-1) \ar[r] & 0 \\ 0 \ar[r] & 3\mathcal O(-3) \ar[r] & 5\mathcal O(-2) \ar[rr]^-{\varphi} \ar[d] & & \mathcal O(-1) \oplus \mathcal O \ar[r] \ar[d] & {\mathcal F} \ar[r] & 0 \\ 0 \ar[r] & \mathcal{K} \ar[r] & 3\mathcal O(-2) \ar[rr]^-{\tiny \left[ \!\! \begin{array}{ccc} l_1 & l_2 & l_3 \\ \star & \star & \star \end{array} \!\! 
\right]} \ar[d] & & \mathcal O(-1) \oplus \mathcal O_H \ar[r] \ar[d] & {\mathcal G} \ar[r] & 0 \\ & & 0 & & 0 } \] The kernel of the canonical map ${\mathcal G} \to \mathbb C_P$ is an $\mathcal O_H$-module. This shows that ${\mathcal F}$ is not isomorphic to ${\mathcal G}$, otherwise, in view of Remark \ref{planar_sheaves}, ${\mathcal F}$ would be planar. Thus $\mathcal O_L(-1) \to {\mathcal F}$ is non-zero, hence it is injective. We get a non-split extension \begin{equation} \label{sheaves_in_D} 0 \longrightarrow \mathcal O_L(-1) \longrightarrow {\mathcal F} \longrightarrow {\mathcal G} \longrightarrow 0 \end{equation} and it becomes clear that $P \in H$ and that ${\mathcal G}$ gives a point in $\operatorname{M}_{\mathbb P^3} (3m+1)$. From Remark \ref{planar_sheaves} we see that ${\mathcal G}$ gives a point in $\operatorname{M}_H (3m+1)$. Thus, ${\mathcal G}$ is the unique non-split extension of $\mathbb C_P$ by $\mathcal O_C$ for a cubic curve $C \subset H$ containing $P$. We write ${\mathcal G} = \mathcal O_C(P)$. Let $\mathbf D \subset \operatorname{M}_{\mathbb P^3}(4m+1)$ be the set of non-split extension sheaves as in (\ref{sheaves_in_D}) that are non-planar (we allow the possibility that $L \subset H$, in which case the support of ${\mathcal F}$ is contained in the double plane $2H$). We examine first the case when $L \nsubseteq H$, that is, $L$ meets $C$ with multiplicity $1$, at a point $P'$. 
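Note that, by additivity of the Hilbert polynomial along the extension (\ref{sheaves_in_D}), we recover, as a routine consistency check, \[ P_{{\mathcal F}}(m) = P_{\mathcal O_L(-1)}(m) + P_{{\mathcal G}}(m) = m + (3m+1) = 4m+1, \] in agreement with the fact that $[{\mathcal F}]$ lies in $\operatorname{M}_{\mathbb P^3}(4m+1)$.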
According to \cite[Theorem 1.1]{freiermuth_trautmann} there is a resolution \begin{equation} \label{planar_3m+1} 0 \longrightarrow 2\mathcal O(-3) \stackrel{\delta}{\longrightarrow} 3\mathcal O(-2) \oplus \mathcal O(-1) \stackrel{\gamma}{\longrightarrow} \mathcal O(-1) \oplus \mathcal O \longrightarrow {\mathcal G} \longrightarrow 0 \end{equation} \[ \delta = \left[ \begin{array}{ll} \phantom{-} u & \phantom{-} 0 \\ \phantom{-} 0 & \phantom{-} u \\ -u_1 & -u_2 \\ -g_1 & -g_2 \end{array} \right], \qquad \gamma = \left[ \begin{array}{cccc} u_1 & u_2 & u & 0 \\ g_1 & g_2 & 0 & u \end{array} \right] \] where $\operatorname{span} \{ u_1, u_2, u \} = \operatorname{span} \{ l_1, l_2, l_3 \}$ and $C$ has equation $u_1 g_2 - u_2 g_1 = 0$ in $H$. Note that ${\mathcal G}_{| L} \simeq \mathbb C_{P'}$ unless $\gamma(P') = 0$, in which case ${\mathcal G}_{| L} \simeq \mathbb C_{P'} \oplus \mathbb C_{P'}$. But $\gamma(P') = 0$ if and only if $P' = P \in \operatorname{sing}(C)$. From (\ref{ext_sequence_1}) we have the exact sequence \[ 0 \to \operatorname{Ext}^1_{\mathcal O_L} ({\mathcal G}_{| L}, \mathcal O_L(-1)) \to \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}} ({\mathcal G}, \mathcal O_L(-1)) \to \operatorname{Hom}_{\mathcal O_L} ({\mathcal Tor}_1^{\mathcal O_{\mathbb P^3}}({\mathcal G}, \mathcal O_L), \mathcal O_L(-1)). \] The group on the right vanishes because $\mathcal O_L(-1)$ has no zero-dimensional torsion. It follows that \[ \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}} ({\mathcal G}, \mathcal O_L(-1)) \simeq \begin{cases} \mathbb C & \text{if $P \neq P'$ or if $P = P' \in \operatorname{reg}(C)$}, \\ \mathbb C^2 & \text{if $P = P' \in \operatorname{sing}(C)$}. \end{cases} \] Let $\mathbf D_0 \subset \mathbf D$ be the open subset given by the conditions that $L \nsubseteq H$ and either $P \neq P'$ or $P = P' \in \operatorname{reg}(C)$. 
The map \[ \mathbf D_0 \longrightarrow \operatorname{Hilb}_{\mathbb P^3}(m+1) \times \operatorname{M}_{\mathbb P^3}(3m+1), \qquad [{\mathcal F}] \longmapsto (L, [{\mathcal G}]) \] is injective and has irreducible image. We deduce that $\mathbf D_0$ is irreducible and has dimension $16$. Let $\mathbf D' \subset \operatorname{M}_{\mathbb P^3}(4m+1)$ be the subset of non-split extensions (\ref{sheaves_in_D'}). Denote $P = L \cap C$. From (\ref{ext_sequence_1}) we have the exact sequence \[ 0 \to \mathbb C \simeq \operatorname{Ext}^1_{\mathcal O_H}(\mathbb C_{P}, \mathcal O_C) \to \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}}(\mathcal O_L, \mathcal O_C) \to \operatorname{Hom}_{\mathcal O_H}({\mathcal Tor}_1^{\mathcal O_{\mathbb P^3}}(\mathcal O_L, \mathcal O_H), \mathcal O_C) = 0. \] We deduce that, given $L$ and $C$, there is a unique non-split extension of $\mathcal O_L$ by $\mathcal O_C$. The map \[ \mathbf D' \longrightarrow \operatorname{Hilb}_{\mathbb P^3}(m+1) \times \operatorname{Hilb}_{\mathbb P^3}(3m) \] sending ${\mathcal F}$ to $(L, C)$ is injective and has irreducible image. We deduce that $\mathbf D'$ is irreducible and has dimension $15$. Tensoring (\ref{sheaves_in_D'}) with $\mathcal O_H$ we get the exact sequence \[ 0 = {\mathcal Tor}_1^{\mathcal O_{\mathbb P^3}} (\mathcal O_L, \mathcal O_H) \longrightarrow \mathcal O_C \longrightarrow {\mathcal F}_{| H} \longrightarrow \mathbb C_{P} \longrightarrow 0 \] from which we see that ${\mathcal F}_{| H} \simeq \mathcal O_C(P)$. We obtain the extension \[ 0 \longrightarrow \mathcal O_L(-1) \longrightarrow {\mathcal F} \longrightarrow \mathcal O_C(P) \longrightarrow 0. \] We deduce that $[{\mathcal F}] \in \mathbf D$. Thus, $\mathbf D' \subset \mathbf D$. Moreover, $\mathbf D' \cap \mathbf D_0$ is open and non-empty in $\mathbf D'$ because it consists precisely of extensions as above for which $P \in \operatorname{reg}(C)$. Thus, $\mathbf D' \subset \overline{\mathbf D}_0$. 
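The dimension $15$ of $\mathbf D'$ can also be obtained by a direct parameter count: the planar cubic curves $C$ depend on $3 + 9 = 12$ parameters (the choice of a plane $H \subset \mathbb P^3$ and of a cubic curve in $H$), while the lines $L$ meeting a fixed curve $C$ form a $3$-dimensional subvariety of the $4$-dimensional Grassmannian of lines in $\mathbb P^3$, giving \[ \dim \mathbf D' = 12 + 3 = 15. \]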
\begin{remark} Note that $\mathbf D_0 \setminus \mathbf D'$ is the open subset of $\mathbf D$ given by the conditions $L \nsubseteq H$ and $P \neq P'$. We claim that $\mathbf D_0 \setminus \mathbf D'$ is the set of sheaves of the form $\mathcal O_D(P)$, where $D = L \cup C$ is the union of a line and a planar cubic curve having intersection of multiplicity $1$ and $P \in C \setminus L$. First we show that the notation $\mathcal O_D(P)$ is justified. From (\ref{ext_sequence_1}) we have the exact sequence \begin{multline*} 0 \longrightarrow \mathbb C \simeq \operatorname{Ext}^1_{\mathcal O_L}(\mathbb C_{P'}, \mathcal O_L(-1)) \longrightarrow \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}}(\mathcal O_C, \mathcal O_L(-1)) \\ \longrightarrow \operatorname{Hom}({\mathcal Tor}_1^{\mathcal O_{\mathbb P^3}}(\mathcal O_C, \mathcal O_L), \mathcal O_L(-1)) = 0 \end{multline*} which shows that $\mathcal O_D$ is the unique non-split extension of $\mathcal O_C$ by $\mathcal O_L(-1)$. The long exact sequence of groups \begin{multline*} 0 = \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}}(\mathbb C_P, \mathcal O_L(-1)) \longrightarrow \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}}(\mathbb C_P, \mathcal O_D) \longrightarrow \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}}(\mathbb C_P, \mathcal O_C) \simeq \mathbb C \\ \longrightarrow \operatorname{Ext}^2_{\mathcal O_{\mathbb P^3}} (\mathbb C_P, \mathcal O_L(-1)) = 0 \end{multline*} shows that there is a unique non-split extension of $\mathbb C_P$ by $\mathcal O_D$, which we denote by $\mathcal O_D(P)$. Given ${\mathcal F} \in \mathbf D_0 \setminus \mathbf D'$, the pull-back of $\mathcal O_C$ in ${\mathcal F}$, denoted ${\mathcal F}'$, is a non-split extension of $\mathcal O_C$ by $\mathcal O_L(-1)$. 
Indeed, if ${\mathcal F}'$ were a split extension, then $\mathcal O_C \subset {\mathcal F}$ and ${\mathcal F}/\mathcal O_C \simeq \mathcal O_L(-1) \oplus \mathbb C_P$, so $\mathcal O_L(-1)$ would be a destabilizing quotient sheaf of ${\mathcal F}$. Thus ${\mathcal F}' \simeq \mathcal O_D$ and ${\mathcal F} \simeq \mathcal O_D(P)$. Conversely, $\mathcal O_D(P) / \mathcal O_L(-1)$ is an extension of $\mathbb C_P$ by $\mathcal O_C$, hence $\mathcal O_D(P) / \mathcal O_L(-1) \simeq \mathcal O_C(P)$. \end{remark} \begin{remark} \label{no_extensions} If $L \cap C = \{ P \}$, where $P$ is a regular point of $C$, and $D = L \cup C$, then there are no semi-stable extensions of the form \[ 0 \longrightarrow \mathcal O_D \longrightarrow {\mathcal F} \longrightarrow \mathbb C_P \longrightarrow 0. \] Indeed, if ${\mathcal F}$ were such a semi-stable extension, then we would also have an extension \[ 0 \longrightarrow \mathcal O_L(-1) \longrightarrow {\mathcal F} \longrightarrow {\mathcal G} \longrightarrow 0 \] where ${\mathcal G}$ is an extension of $\mathbb C_P$ by $\mathcal O_C$. Note that ${\mathcal G}$ is a non-split extension, otherwise $\mathcal O_C$ would be a destabilizing quotient sheaf of ${\mathcal F}$. Thus ${\mathcal F}$ is the unique non-split extension of $\mathcal O_C(P)$ by $\mathcal O_L(-1)$, so it is also the unique non-split extension of $\mathcal O_L$ by $\mathcal O_C$. Thus $\operatorname{H}^0({\mathcal F})$ generates $\mathcal O_C$, hence $\mathcal O_D$ is a subsheaf of $\mathcal O_C$, which is absurd. \end{remark} \begin{remark} \label{S_irreducible} The set $\mathbf{S} \subset \operatorname{M}_{\mathbb P^2}(3m) \times \operatorname{M}_{\mathbb P^2}(3m+1)$ of pairs $([{\mathcal E}], [{\mathcal G}])$ such that $\operatorname{H}^0({\mathcal E}) = 0$ and ${\mathcal E}$ is a subsheaf of ${\mathcal G}$ is irreducible.
By duality, this is equivalent to saying that the set $\mathbf{S}^{\scriptscriptstyle \operatorname{D}} \subset \operatorname{M}_{\mathbb P^2}(3m-1) \times \operatorname{M}_{\mathbb P^2}(3m)$ of pairs $([{\mathcal G}], [{\mathcal E}])$ such that $\operatorname{H}^0({\mathcal E}) = 0$ and ${\mathcal G}$ is a subsheaf of ${\mathcal E}$ is irreducible. Given an exact sequence \[ 0 \longrightarrow {\mathcal G} \longrightarrow {\mathcal E} \longrightarrow \mathbb C_{P'} \longrightarrow 0 \] we may combine the resolutions of sheaves on $\mathbb P^2$ \[ 0 \longrightarrow \mathcal O(-3) \oplus \mathcal O(-2) \xrightarrow{\tiny \left[ \!\! \begin{array}{cc} q_1 & \!\!\! u_1 \\ q_2 & \!\!\! u_2 \end{array} \!\! \right]} 2\mathcal O(-1) \longrightarrow {\mathcal G} \longrightarrow 0 \] and \[ 0 \longrightarrow \mathcal O(-3) \longrightarrow 2\mathcal O(-2) \xrightarrow{\tiny \left[ \!\! \begin{array}{cc} v_1 & \!\!\! v_2 \end{array} \!\! \right]} \mathcal O(-1) \longrightarrow \mathbb C_{P'} \longrightarrow 0 \] to form the resolution \[ 0 \longrightarrow \mathcal O(-3) \stackrel{\psi}{\longrightarrow} \mathcal O(-3) \oplus 3\mathcal O(-2) \stackrel{\varphi}{\longrightarrow} 3\mathcal O(-1) \longrightarrow {\mathcal E} \longrightarrow 0, \] \[ \varphi = \left[ \begin{array}{cccc} q_1 & u_1 & l_{11} & l_{12} \\ q_2 & u_2 & l_{21} & l_{22} \\ 0 & 0 & v_1 & v_2 \end{array} \right]. \] We indicate by the index $i$ the maximal minor of a matrix obtained by deleting column $i$. The condition $\operatorname{H}^0({\mathcal E}) = 0$ is equivalent to the condition $\psi_{11} \neq 0$, which is equivalent to the following conditions: $\varphi_1 \neq 0$ and $\varphi_1$ divides $\varphi_2$, $\varphi_3$, $\varphi_4$. As $\varphi_1$ divides both $(q_1 u_2 - u_1 q_2) v_1$ and $(q_1 u_2 - u_1 q_2) v_2$, we see that $\varphi_1$ is a multiple of $q_1 u_2 - u_1 q_2$. 
It follows that $\varphi$ is equivalent to the matrix \[ \upsilon = \left[ \begin{array}{cccc} l_{11} v_2 - l_{12} v_1 & u_1 & l_{11} & l_{12} \\ l_{21} v_2 - l_{22} v_1 & u_2 & l_{21} & l_{22} \\ 0 & 0 & v_1 & v_2 \end{array} \right]. \] Let $U \subset \operatorname{Hom}(\mathcal O(-3) \oplus 3\mathcal O(-2), 3\mathcal O(-1))$ be the set of morphisms represented by matrices $\upsilon$ as above satisfying the following conditions: $\upsilon_1 \neq 0$, $u_1$ and $u_2$ are linearly independent, $v_1$ and $v_2$ are linearly independent. Clearly, $U$ is irreducible. Let $\upsilon' \in \operatorname{Hom}(\mathcal O(-3) \oplus \mathcal O(-2), 2\mathcal O(-1))$ be the morphism represented by the matrix \[ \left[ \begin{array}{cc} l_{11} v_2 - l_{12} v_1 & u_1 \\ l_{21} v_2 - l_{22} v_1 & u_2 \end{array} \right]. \] The above discussion shows that the map $\pi \colon U \to \mathbf{S}^{\scriptscriptstyle \operatorname{D}}$, $\upsilon \mapsto ([{\mathcal Coker}(\upsilon')], [{\mathcal Coker}(\upsilon)])$ is surjective. Thus, $\mathbf{S}^{\scriptscriptstyle \operatorname{D}}$ is irreducible. The open subset $\mathbf{S}_{\scriptstyle \operatorname{irr}} \subset \mathbf{S}$, given by the condition that the schematic support of ${\mathcal G}$ be irreducible, is irreducible. \end{remark} Let $\mathbf D_1 \subset \mathbf D$ be the locally closed subset given by the conditions $L \nsubseteq H$ and $P = P' \in \operatorname{sing}(C)$. Since $\dim \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}}({\mathcal G}, \mathcal O_L(-1)) = 2$, we see that $\dim \mathbf D_1 = 14$. The set of cubic curves in $\mathbb P^2$ that are singular at a fixed point is irreducible. It follows that $\mathbf D_1$ is irreducible, as well. \begin{proposition} \label{D_1_in_D_0} The set $\mathbf D_1$ is contained in the closure of $\mathbf D_0$. \end{proposition} \begin{proof} Consider $[{\mathcal F}] \in \mathbf D_0 \cup \mathbf D_1$. 
Consider extension (\ref{sheaves_in_D}) in which ${\mathcal G} = \mathcal O_C(P)$ and $L \cap H = \{ P'\}$. Dualizing we get the extension \[ 0 \longrightarrow \mathcal O_C(-P) \longrightarrow {\mathcal F}^{\scriptscriptstyle \operatorname{D}} \longrightarrow \mathcal O_L(-1) \longrightarrow 0. \] Tensoring with $\mathcal O_H$ we get the exact sequence \[ 0 = {\mathcal Tor}_1^{\mathcal O_{\mathbb P^3}} (\mathcal O_L(-1), \mathcal O_H) \longrightarrow \mathcal O_C(-P) \longrightarrow ({\mathcal F}^{\scriptscriptstyle \operatorname{D}})_{| H} \longrightarrow \mathbb C_{P'} \longrightarrow 0. \] This short exact sequence does not split. Indeed, by \cite{maican_duality}, ${\mathcal F}^{\scriptscriptstyle \operatorname{D}}$ is stable and has slope $-1/4$, hence $\mathcal O_C(-P)$, which has slope $-1/3$, cannot be a quotient sheaf of ${\mathcal F}^{\scriptscriptstyle \operatorname{D}}$. Since $\mathcal O_C(-P)$ is stable, it is easy to see that $({\mathcal F}^{\scriptscriptstyle \operatorname{D}})_{| H}$ gives a sheaf in $\operatorname{M}_H(3m)$ supported on $C$. The kernel of the map ${\mathcal F}^{\scriptscriptstyle \operatorname{D}} \to ({\mathcal F}^{\scriptscriptstyle \operatorname{D}})_{| H}$ is supported on $L$ and has no zero-dimensional torsion, hence it is isomorphic to $\mathcal O_L(-2)$. Denote ${\mathcal E} = (({\mathcal F}^{\scriptscriptstyle \operatorname{D}})_{| H})^{\scriptscriptstyle \operatorname{D}}$. Dualizing the exact sequence \[ 0 \longrightarrow \mathcal O_L(-2) \longrightarrow {\mathcal F}^{\scriptscriptstyle \operatorname{D}} \longrightarrow ({\mathcal F}^{\scriptscriptstyle \operatorname{D}})_{| H} \longrightarrow 0 \] we obtain the extension \begin{equation} \label{E-F-O_L} 0 \longrightarrow {\mathcal E} \longrightarrow {\mathcal F} \longrightarrow \mathcal O_L \longrightarrow 0. 
\end{equation} Tensoring with $\mathcal O_H$, and taking into account the fact that ${\mathcal Tor}_1^{\mathcal O_{\mathbb P^3}}(\mathcal O_L, \mathcal O_H) = 0$, we get the exact sequence \begin{equation} \label{E-O_C(P)} 0 \longrightarrow {\mathcal E} \longrightarrow \mathcal O_C(P) \longrightarrow \mathbb C_{P'} \longrightarrow 0. \end{equation} From (\ref{ext_sequence_1}) we have the exact sequence \[ 0 \longrightarrow \operatorname{Ext}^1_{\mathcal O_H} (\mathbb C_{P'}, {\mathcal E}) \stackrel{\epsilon}{\longrightarrow} \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}}(\mathcal O_L, {\mathcal E}) \longrightarrow \operatorname{Hom} ({\mathcal Tor}_1^{\mathcal O_{\mathbb P^3}}(\mathcal O_L, \mathcal O_H), {\mathcal E}) = 0. \] It is clear now that the isomorphism class of ${\mathcal F}$ corresponds to the isomorphism class of $\mathcal O_C(P)$ under the bijective map $\epsilon$. Let $\mathbf D'' \subset (\mathbf D_0 \cup \mathbf D_1) \setminus \mathbf D'$ be the subset given by the condition that $C$ be irreducible. Note that $\mathbf D''$ is an open subset of $\mathbf D$ and contains an open subset of $\mathbf D_1$. We will prove below that $\mathbf D''$ is irreducible. Since $\mathbf D_1$ is irreducible, we arrive at the conclusion of the proposition: \[ \mathbf D_1 \subset \overline{\mathbf D'' \cap \mathbf D}_1 \subset \overline{\mathbf D}{}'' = \overline{\mathbf D'' \cap \mathbf D}_0 \subset \overline{\mathbf D}_0. \] Consider the subset \[ \mathbf{S}'' \subset \operatorname{Hilb}_{\mathbb P^3}(m+1) \times \operatorname{M}_{\mathbb P^3}(3m) \times \operatorname{M}_{\mathbb P^3}(3m+1) \] of triples $(L, [{\mathcal E}], [{\mathcal G}])$ satisfying the following conditions: ${\mathcal E}$ and ${\mathcal G}$ are supported on a planar irreducible cubic curve $C$, $\operatorname{H}^0({\mathcal E}) = 0$, ${\mathcal E}$ is a subsheaf of ${\mathcal G}$, and $L \cap C = \{ P' \}$, where $\mathbb C_{P'} \simeq {\mathcal G}/{\mathcal E}$. 
Note that the projection $\mathbf{S}'' \to \operatorname{M}_{\mathbb P^3}(3m) \times \operatorname{M}_{\mathbb P^3}(3m+1)$ has fibers that are affine planes, and its image is the irreducible variety $\mathbf{S}_{\scriptstyle \operatorname{irr}}$ from Remark \ref{S_irreducible}. It follows that $\mathbf{S}''$ is irreducible. To prove that $\mathbf D''$ is irreducible, we will show that the morphism \[ \eta \colon \mathbf D'' \longrightarrow \mathbf{S}'', \qquad \eta ([{\mathcal F}]) = (L, [(({\mathcal F}^{\scriptscriptstyle \operatorname{D}})_{| H})^{\scriptscriptstyle \operatorname{D}}], [{\mathcal F}_{| H}]) \] is bijective. We first verify surjectivity. Given an extension \[ 0 \longrightarrow {\mathcal E} \longrightarrow {\mathcal G} \longrightarrow \mathbb C_{P'} \longrightarrow 0 \] we let ${\mathcal F} \in \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}} (\mathcal O_L, {\mathcal E})$ be the image of ${\mathcal G}$ under $\epsilon$. Since ${\mathcal G}$ does not split, neither does ${\mathcal F}$. By hypothesis ${\mathcal E}$ has irreducible support, hence ${\mathcal E}$ is stable, and, a fortiori, ${\mathcal F}$ is stable. Applying the snake lemma to the diagram \[ \xymatrix { 0 \ar[r] & {\mathcal E} \ar@{=}[d] \ar[r] & {\mathcal F} \ar[r] \ar[d] & \mathcal O_L \ar[d] \ar[r] & 0 \\ 0 \ar[r] & {\mathcal E} \ar[r] & {\mathcal G} \ar[r] & \mathbb C_{P'} \ar[r] & 0 } \] we get the extension \[ 0 \longrightarrow \mathcal O_L(-1) \longrightarrow {\mathcal F} \longrightarrow {\mathcal G} \longrightarrow 0. \] Thus, $[{\mathcal F}] \in \mathbf D_0 \cup \mathbf D_1$ and ${\mathcal F}_{| H} \simeq {\mathcal G}$, where $H$ is the plane containing $C$. Dualizing the first row of the above diagram we see that $({\mathcal F}^{\scriptscriptstyle \operatorname{D}})_{| H} \simeq {\mathcal E}^{\scriptscriptstyle \operatorname{D}}$. By hypothesis ${\mathcal E}$ is not isomorphic to $\mathcal O_C$, hence $[{\mathcal F}] \notin \mathbf D'$.
Thus $[{\mathcal F}] \in \mathbf D''$ and $\eta ([{\mathcal F}]) = (L, [{\mathcal E}], [{\mathcal G}])$. This proves that $\eta$ is surjective. Since $[{\mathcal F}] = \epsilon ([{\mathcal G}])$ we see that $\eta$ is also injective. \end{proof} \noindent We will next examine the sheaves in $\mathbf D$ for which $L \subset H$. From (\ref{ext_sequence_2}) we have the exact sequence \begin{multline*} 0 \longrightarrow \operatorname{Ext}^1_{\mathcal O_H} (\mathcal O_C(P), \mathcal O_L(-1)) \longrightarrow \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}} (\mathcal O_C(P), \mathcal O_L(-1)) \\ \longrightarrow \operatorname{Hom}(\mathcal O_C(P)(-1), \mathcal O_L(-1)) \\ \longrightarrow \operatorname{Ext}^2_{\mathcal O_H} (\mathcal O_C(P), \mathcal O_L(-1)) \simeq \operatorname{Hom}_{\mathcal O_H}^{}(\mathcal O_L(-1), \mathcal O_C(P)(-3))^* = 0. \end{multline*} Thus, we have non-planar sheaves precisely if $\operatorname{Hom} (\mathcal O_C(P), \mathcal O_L) \neq 0$. Any non-zero morphism $\alpha \colon \mathcal O_C(P) \to \mathcal O_L$ fits into a commutative diagram \[ \xymatrix { 0 \ar[r] & 2\mathcal O_H(-2) \ar[r]^-{\upsilon} \ar[d]^-{\gamma} & \mathcal O_H(-1) \oplus \mathcal O_H \ar[d]^-{\beta} \ar[r] & \mathcal O_C(P) \ar[r] \ar[d]^-{\alpha} & 0 \\ 0 \ar[r] & \mathcal O_H(-1) \ar[r]^-{l} & \mathcal O_H \ar[r] & \mathcal O_L \ar[r] & 0 } \] \[ \beta = \left[ \begin{array}{cc} v & c \end{array} \right], \qquad \gamma = \left[ \begin{array}{cc} v_1 & v_2 \end{array} \right], \qquad \upsilon = \left[ \begin{array}{cc} u_1 & u_2 \\ g_1 & g_2 \end{array} \right] \] with $\beta \neq 0$. Note that $c \neq 0$, otherwise ${\mathcal Coker}(\beta)$ would be the structure sheaf of a line and we would have the relation $(vu_1, vu_2) = (lv_1, lv_2)$. Thus $v_1$ and $v_2$ would be linearly independent, hence ${\mathcal Coker}(\gamma)$ would be zero-dimensional, and hence ${\mathcal Coker}(\beta)$ would be zero-dimensional, which is absurd. 
Replacing, possibly, $\upsilon$ with an equivalent matrix, we may assume that $g_1$ and $g_2$ are divisible by $l$. Conversely, if $\mathcal O_C(P)$ is the cokernel of the morphism \[ \upsilon = \left[ \begin{array}{cc} u_1 & u_2 \\ l v_1 & l v_2 \end{array} \right], \qquad \text{then, denoting} \qquad \upsilon' = \left[ \begin{array}{cc} u_1 & u_2 \\ v_1 & v_2 \end{array} \right], \] we can apply the snake lemma to the commutative diagram \begin{equation} \label{upsilon_diagram} \xymatrix { & 2\mathcal O_H(-2) \ar@{=}[r] \ar[d]^-{\upsilon'} & 2\mathcal O_H(-2) \ar[d]^-{\upsilon} \\ 0 \ar[r] & 2\mathcal O_H(-1) \ar[r]^-{1 \oplus l} & \mathcal O_H(-1) \oplus \mathcal O_H \ar[r] & \mathcal O_L \ar[r] & 0 } \end{equation} to get a surjective map $\mathcal O_C(P) \to \mathcal O_L$. This discussion shows that $\operatorname{Hom}(\mathcal O_C(P), \mathcal O_L)$ does not vanish precisely if $C = L \cup C'$ for a conic curve $C' \subset H$ and for $P \in C'$. In this case we have a commutative diagram \[ \xymatrix { \operatorname{Hom}(\mathcal O_C, \mathcal O_L(-1)) = 0 \ar[d] \\ \operatorname{Ext}^1_{\mathcal O_H} (\mathbb C_P, \mathcal O_L(-1)) \ar[d] & & \operatorname{Hom}(\mathbb C_P, \mathcal O_L) = 0 \ar[d] \\ \operatorname{Ext}^1_{\mathcal O_H} (\mathcal O_C(P), \mathcal O_L(-1)) \ar@{^(->}[r] \ar[d] & \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}} (\mathcal O_C(P), \mathcal O_L(-1)) \ar@{->>}[r] \ar[d]^-{\delta} & \operatorname{Hom}(\mathcal O_C(P), \mathcal O_L) \ar[d]^-{\simeq} \\ \operatorname{Ext}^1_{\mathcal O_H} (\mathcal O_C, \mathcal O_L(-1)) \ar@{^(->}[r] \ar[d] & \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}} (\mathcal O_C, \mathcal O_L(-1)) \ar@{->>}[r] & \operatorname{Hom}(\mathcal O_C, \mathcal O_L) \simeq \mathbb C \\ \operatorname{Ext}^2_{\mathcal O_H} (\mathbb C_P, \mathcal O_L(-1)) \ar[r]^-{\simeq} \ar[d] & \operatorname{Hom}_{\mathcal O_H}^{} (\mathcal O_L, \mathbb C_P)^* \\ \operatorname{Ext}^2_{\mathcal O_H} (\mathcal O_C(P), \mathcal O_L(-1)) 
= 0 } \] Here $\delta({\mathcal F})$ is the pull-back of $\mathcal O_C$ in ${\mathcal F}$. If $P \notin L$, then $\delta$ is an isomorphism. If $P \in L$, then we have an exact sequence \[ 0 \longrightarrow \mathbb C \longrightarrow \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}} (\mathcal O_C(P), \mathcal O_L(-1)) \stackrel{\delta}{\longrightarrow} \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}} (\mathcal O_C, \mathcal O_L(-1)) \longrightarrow \mathbb C \longrightarrow 0. \] If ${\mathcal F}$ is non-planar, then $\delta({\mathcal F})$ is generated by a global section. Indeed, in view of Proposition \ref{non-planar_M_1}, ${\mathcal F}$ cannot have resolution (\ref{sheaves_in_M_1}), so it has resolution (\ref{sheaves_in_R}) or (\ref{sheaves_in_E}). Also, ${\mathcal F}$ is not generated by a global section because $\mathcal O_C(P)$ is not generated by a global section. It follows that $P_{{\mathcal F}'}(m) = 4m$, where ${\mathcal F}' \subset {\mathcal F}$ is the subsheaf generated by $\operatorname{H}^0({\mathcal F})$. But ${\mathcal F}'$ maps to $\mathcal O_C$, hence $\delta({\mathcal F}) \subset {\mathcal F}'$. These two sheaves have the same Hilbert polynomial, so they coincide. We conclude that $\delta({\mathcal F})$ is the structure sheaf $\mathcal O_D$ of a quartic curve $D$. If $P \notin L$, then ${\mathcal F} \simeq \mathcal O_D(P)$. Assume now that $P \in L$. The preimage of $[\mathcal O_D]$ under the induced map \[ \mathbb P \big( \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}}(\mathcal O_C(P), \mathcal O_L(-1)) \big) \setminus \mathbb P(\mathbb C) \longrightarrow \mathbb P \big( \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}}(\mathcal O_C, \mathcal O_L(-1)) \big) \] is an affine line that maps to a curve in $\operatorname{M}_{\mathbb P^3}(4m+1)$. 
The exact sequence \begin{multline*} 0 = \operatorname{Hom}(\mathbb C_P, \mathcal O_C) \longrightarrow \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}} (\mathbb C_P, \mathcal O_L(-1)) \simeq \mathbb C \longrightarrow \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}} (\mathbb C_P, \mathcal O_D) \\ \longrightarrow \operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}} (\mathbb C_P, \mathcal O_C) \simeq \mathbb C \end{multline*} shows that $\operatorname{Ext}^1_{\mathcal O_{\mathbb P^3}} (\mathbb C_P, \mathcal O_D)$ has dimension $2$. Indeed, if this vector space had dimension $1$, then its image in $\operatorname{M}_{\mathbb P^3} (4m+1)$ would be a point. This, we saw above, is not the case. Let $\mathbf D_2 \subset \mathbf D$ be the closed subset given by the condition $L \subset H$. Equivalently, $\mathbf D_2$ is given by the condition $C = L \cup C'$ and $P \in C'$ for a conic curve $C'$. According to \cite[Proposition 4.10]{choi_chung_maican}, the set $\mathbf D_2$ is irreducible of dimension $14$. Indeed, let \begin{equation} \label{set_S} \mathbf{S} \subset \operatorname{Hilb}_{\mathbb P^2}(m+1) \times \operatorname{M}_{\mathbb P^2}(3m+1) \end{equation} be the locally closed subset of pairs $(L, [\mathcal O_C(P)])$ for which $C = L \cup C'$ and $P \in C'$, for a conic curve $C' \subset \mathbb P^2$. According to \cite[Lemma 4.9]{choi_chung_maican}, $\mathbf{S}$ is irreducible. The canonical map $\mathbf D_2 \to \mathbf{S}$ is surjective and its fibers are irreducible of dimension $3$. \section{The irreducible components} \label{components} Let \[ \mathbf W_0 \subset \operatorname{Hom}(3\mathcal O(-3), 5\mathcal O(-2)) \times \operatorname{Hom}(5\mathcal O(-2), \mathcal O(-1) \oplus \mathcal O) \] be the subset of pairs of morphisms equivalent to pairs $(\psi, \varphi)$ occurring in resolutions (\ref{sheaves_in_R}) and (\ref{sheaves_in_E}). We claim that $\mathbf W_0$ is locally closed. 
To see this, consider first the locally closed subset $\mathbb{W}$ given by the following conditions: $\psi$ is injective, $\varphi$ is generically surjective, $\varphi \circ \psi = 0$. We have the universal sequence \[ 3\mathcal O_{\mathbb{W} \times \mathbb P^3} (-3) \stackrel{\Psi}{\longrightarrow} 5\mathcal O_{\mathbb{W} \times \mathbb P^3} (-2) \stackrel{\Phi}{\longrightarrow} \mathcal O_{\mathbb{W} \times \mathbb P^3}(-1) \oplus \mathcal O_{\mathbb{W} \times \mathbb P^3}. \] Denote $\widetilde{{\mathcal F}} = {\mathcal Coker}(\Phi)$. Corresponding to the polynomial $P(m) = 4m+1$ we have the locally closed subset \[ \mathbb{W}_P = \{ x \in \mathbb{W},\ P_{\widetilde{{\mathcal F}}_x} = P \} \subset \mathbb{W} \] constructed when we flatten $\widetilde{{\mathcal F}}$, see \cite[Theorem 2.1.5]{huybrechts_lehn}. Now $\mathbf W_0 \subset \mathbb{W}_P$ is the subset given by the condition that $\widetilde{{\mathcal F}}_x$ be semi-stable, which is an open condition, because $\widetilde{{\mathcal F}}_{| \mathbb{W}_P \times \mathbb P^3}$ is flat over $\mathbb{W}_P$. We endow $\mathbf W_0$ with the induced reduced structure. Consider the map \[ \rho_0 \colon \mathbf W_0 \longrightarrow \mathbf M_0, \qquad (\psi, \varphi) \longmapsto [{\mathcal Coker}(\varphi)]. \] On $\mathbf W_0$ we have the canonical action of the linear algebraic group \[ \mathbf G_0 = \big( \operatorname{Aut}(3\mathcal O(-3)) \times \operatorname{Aut}(5\mathcal O(-2)) \times \operatorname{Aut}(\mathcal O(-1) \oplus \mathcal O) \big) / \mathbb C^* \] where $\mathbb C^*$ is identified with the subgroup $ \{ (t \cdot \operatorname{id}, t \cdot \operatorname{id}, t \cdot \operatorname{id}), \ t \in \mathbb C^* \} $. It is easy to check that the fibers of $\rho_0$ are precisely the $\mathbf G_0$-orbits. 
Let \[ \mathbf W_1 \subset \operatorname{Hom} (3\mathcal O(-3), 5\mathcal O(-2) \oplus \mathcal O(-1)) \times \operatorname{Hom} (5\mathcal O(-2) \oplus \mathcal O(-1), 2\mathcal O(-1) \oplus \mathcal O) \] be the locally closed subset of pairs of morphisms equivalent to pairs $(\psi, \varphi)$ occurring in resolution (\ref{sheaves_in_M_1}) and let \[ \mathbf W_2 \subset \operatorname{Hom}(\mathcal O(-4) \oplus \mathcal O(-2), \mathcal O(-3) \oplus 3\mathcal O(-1)) \times \operatorname{Hom}(\mathcal O(-3) \oplus 3\mathcal O(-1), 2\mathcal O) \] be the set of pairs given at \cite[Theorem 6.1(iii)]{choi_chung_maican}. The groups $\mathbf G_1$, $\mathbf G_2$ are defined by analogy with the definition of $\mathbf G_0$. As before, for $i = 1, 2$, the fibers of the canonical quotient map $\rho_i \colon \mathbf W_i \to \mathbf M_i$ are precisely the $\mathbf G_i$-orbits. \begin{proposition} \label{quotients} For $i = 0, 1$, $\mathbf M_i$ is the categorical quotient of $\mathbf W_i$ modulo $\mathbf G_i$. The subvariety $\mathbf M_2$ is the geometric quotient of $\mathbf W_2$ modulo $\mathbf G_2$. \end{proposition} \begin{proof} The argument at \cite[Theorem 3.1.6]{drezet_maican} shows that $\rho_0$, $\rho_1$, $\rho_2$ are categorical quotient maps. Since $\mathbf M_2$ is normal (being smooth), we can apply \cite[Theorem 4.2]{popov_vinberg} to conclude that $\rho_2$ is a geometric quotient map. \end{proof} Consider the closed subset $\mathbf W_{\scriptstyle \operatorname{ell}} = \rho_0^{-1} (\mathbf E) \subset \mathbf W_0$. Consider the restriction to the second direct summand of the map \[ \mathcal O_{\mathbf W_{\scriptscriptstyle \operatorname{ell}} \times \mathbb P^3}(-1) \oplus \mathcal O_{\mathbf W_{\scriptscriptstyle \operatorname{ell}} \times \mathbb P^3} \longrightarrow \widetilde{{\mathcal F}}_{| \mathbf W_{\scriptscriptstyle \operatorname{ell}} \times \mathbb P^3} \] and denote its image by $\widetilde{{\mathcal F}}'$. 
The quotient $[\mathcal O_{\mathbf W_{\scriptscriptstyle \operatorname{ell}} \times \mathbb P^3} \twoheadrightarrow \widetilde{{\mathcal F}}']$ induces a morphism \[ \sigma \colon \mathbf W_{\scriptstyle \operatorname{ell}} \longrightarrow \operatorname{Hilb}_{\mathbb P^3}(4m). \] According to \cite[Examples 2.8 and 4.8]{chen_nollet}, $\operatorname{Hilb}_{\mathbb P^3}(4m)$ has two irreducible components, denoted $H_1$, $H_2$. The generic member of $H_1$ is a smooth elliptic quartic curve. The generic member of $H_2$ is the disjoint union of a planar quartic curve and two isolated points. Note that $H_2$ lies in the closed subset \[ H = \{ [ \mathcal O \twoheadrightarrow \mathcal{S}] \mid \ \operatorname{h}^0(\mathcal{S}) \ge 3 \} \subset \operatorname{Hilb}_{\mathbb P^3}(4m). \] Since $\sigma$ factors through the complement of $H$, we deduce that $\sigma$ factors through $H_1$. By an abuse of notation we denote the corestriction by $\sigma \colon \mathbf W_{\scriptstyle \operatorname{ell}} \to H_1$. \begin{proposition} \label{E_closure} The sets $\mathbf D_0$, $\mathbf D_1$, $\mathbf D_2$, $\mathbf D$ and $\mathbf E$ are contained in the closure of $\mathbf E_0$. The set $\mathbf D$ is irreducible and $\mathbf D_0$ is dense in $\mathbf D$. Moreover, \[ \overline{\mathbf E} \setminus \mathbf{P} = \mathbf E \cup \mathbf D = \mathbf E \cup \mathbf D', \qquad \overline{\mathbf R} \setminus (\overline{\mathbf E} \cup \mathbf{P}) = \mathbf R. \] \end{proposition} \begin{proof} Let $\mathbf E_{\operatorname{reg}} \subset \mathbf E_0$ be the open subset of sheaves with smooth support. Let $H_{10} \subset H_1$ be the open subset consisting of smooth elliptic quartic curves. For any $x \in H_1 \setminus H_{10}$ there is an irreducible quasi-projective curve $\Gamma \subset H_1$ such that $x \in \Gamma$ and $\Gamma \setminus \{ x \} \subset H_{10}$. To produce $\Gamma$ proceed as follows. Embed $H_1$ into a projective space. 
Intersect with a suitable linear subspace passing through $x$ to obtain a subscheme of dimension $1$ all of whose irreducible components meet $H_{10}$. Retain one of these irreducible components and remove the points, other than $x$, that lie outside $H_{10}$. Notice that if $y = [\mathcal O \twoheadrightarrow \mathcal O_E]$ is a point in $H_{10}$, then $\sigma^{-1}\{ y \}$ is irreducible of dimension $1 + \dim \mathbf G_0$. Indeed, \[ \sigma^{-1} \{ y \} = \rho_0^{-1} \{ [\mathcal O_E(P)], \ P \in E \}. \] Assume now that $x = [\mathcal O \twoheadrightarrow \mathcal O_E]$ where $E$ is the schematic support of a sheaf in $\mathbf E \setminus \mathbf D$. We denote its irreducible components by $Z_0, \ldots, Z_m$. Denote by $(\mathbf E \setminus \mathbf D)^0$ the open subset of sheaves of the form $\mathcal O_{E'}(P')$ with $P'$ lying outside $Z_1 \cup \ldots \cup Z_m$ and let $\mathbf W^0$ be its preimage under $\rho_0$. Denote by $\sigma_0$ the restriction of $\sigma$ to $\mathbf W^0$. Clearly, $\sigma_0^{-1}\{ y \}$ is irreducible of dimension $1 + \dim \mathbf G_0$ and the same is true for $\sigma_0^{-1}\{ x \}$. Thus, the fibers of the map $\sigma_0^{-1}(\Gamma) \to \Gamma$ are all irreducible of the same dimension. By \cite[Theorem 8, page 77]{shafarevich} we deduce that $\sigma_0^{-1}(\Gamma)$ is irreducible. Thus, $\rho_0(\sigma^{-1}(\Gamma))$ is irreducible, hence any sheaf of the form $\mathcal O_E(P)$, $P \in Z_0 \setminus (Z_1 \cup \ldots \cup Z_m)$, is the limit of sheaves in $\mathbf E_{\operatorname{reg}}$. The same argument applies to $\mathcal O_E(P)$ for $P$ belonging to exactly one of the components of $E$. A fortiori, $\mathcal O_E(P)$ lies in the Zariski closure of $\mathbf E_{\operatorname{reg}}$ for all $P \in E$. We conclude that $\mathbf E \setminus \mathbf D \subset \overline{\mathbf E}_0$. Let $D$ be the union of a line $L$ and a planar irreducible cubic curve $C$, where $L$ and $C$ meet precisely at a regular point of $C$. 
Take $x = [\mathcal O \twoheadrightarrow \mathcal O_D]$. Then \[ \sigma^{-1} \{ x \} = \rho_0^{-1} \{ [\mathcal O_D(P)],\ P \in C \setminus L \} \] is irreducible of dimension $1 + \dim \mathbf G_0$. We deduce as above that any sheaf of the form $\mathcal O_D(P)$, $P \in C \setminus L$, is the limit of sheaves in $\mathbf E_{\operatorname{reg}}$. The set of sheaves of the form $\mathcal O_D(P)$ is dense in $\mathbf D_0$. We conclude that $\mathbf D_0 \subset \overline{\mathbf E}_0$. Let $\mathbf D^o \subset \mathbf D \cap \mathbf E = \mathbf D \setminus \mathbf D'$ be the open subset given by the condition that $P \notin L$. Let $\sigma^o \colon \mathbf D^o \to H_1$ denote the restriction of $\sigma$. According to \cite[Theorem 5.2 (4)]{vainsencher}, there is an irreducible closed subset $\hat{\mathbf B} \subset H_1$ whose generic member is the union of a planar cubic curve and an incident line. Let $D$ be the schematic support of a sheaf in $\mathbf D_2$. According to \cite[Theorem 5.2 (5)]{vainsencher}, the point $x = [\mathcal O \twoheadrightarrow \mathcal O_D]$ belongs to $\hat{\mathbf B}$. By the same argument as above, there is an irreducible quasi-projective curve $\Gamma \subset \hat{\mathbf B}$ containing $x$ such that the points $y \in \Gamma \setminus \{ x \}$ are of the form $[\mathcal O \twoheadrightarrow \mathcal O_{L \cup C}]$, where $C$ is a planar irreducible cubic curve and $L$ is an incident line. Notice that \[ (\sigma^o)^{-1} \{ y \} = \rho_0^{-1} \{ [\mathcal O_{L \cup C}(P)], \ P \in C \setminus L \} \] is irreducible of dimension $1 + \dim \mathbf G_0$. Assume, in addition, that $D$ is the union of an irreducible plane conic curve $C'$ and a double line supported on $L'$. Then \[ (\sigma^o)^{-1} \{ x \} = \rho_0^{-1} \{ [\mathcal O_D(P)], \ P \in C' \setminus L' \} \] is irreducible of dimension $1 + \dim \mathbf G_0$. 
We deduce, as above, that $(\sigma^o)^{-1} (\Gamma)$ is irreducible, hence $\rho_0((\sigma^o)^{-1} (\Gamma))$ is irreducible, and hence any sheaf of the form $\mathcal O_D(P)$, $P \in C' \setminus L'$, is the limit of sheaves in $\mathbf D_0$. But $\mathbf D_2$ is irreducible, hence the set of sheaves $\mathcal O_D(P)$ as above is dense in $\mathbf D_2$. We deduce that $\mathbf D_2 \subset \overline{\mathbf D}_0$. Thus $\mathbf D_2 \subset \overline{\mathbf E}_0$. Recall from Proposition \ref{D_1_in_D_0} that $\mathbf D_1 \subset \overline{\mathbf D}_0$. Since $\mathbf D = \mathbf D_0 \cup \mathbf D_1 \cup \mathbf D_2$, we see that $\mathbf D \subset \overline{\mathbf D}_0 \subset \overline{\mathbf E}_0$. The inclusion $\overline{\mathbf E} \setminus \mathbf{P} \subset \mathbf E \cup \mathbf D'$ follows from Theorem \ref{homological_conditions} and Proposition \ref{non-planar_M_1}. Indeed, $\mathbf E$ is closed in $\mathbf M_0$. The reverse inclusion was proved above. Finally, \[ \overline{\mathbf R} \setminus (\overline{\mathbf E} \cup \mathbf{P}) = \overline{\mathbf R} \setminus (\mathbf E \cup \mathbf D' \cup \mathbf{P}) \subset \mathbf M \setminus (\mathbf E \cup \mathbf D' \cup \mathbf{P}) = \mathbf M_0 \setminus \mathbf E = \mathbf R. \] The reverse inclusion is obvious because by definition $\mathbf R$ is disjoint from $\mathbf E$, $\mathbf D'$, $\mathbf{P}$. \end{proof} \noindent From Proposition \ref{E_closure} we obtain the decomposition of $\operatorname{M}_{\mathbb P^3}(4m+1)$ into irreducible components. \begin{theorem} \label{main_theorem} The moduli space $\operatorname{M}_{\mathbb P^3}(4m+1)$ consists of three irreducible components $\overline{\mathbf R}$, $\overline{\mathbf E}$ and $\mathbf{P}$. \end{theorem} \noindent The intersections $\overline{\mathbf R} \cap \mathbf{P}$, $\overline{\mathbf E} \cap \mathbf{P}$, $\overline{\mathbf R} \cap \overline{\mathbf E}$ were described generically in \cite{choi_chung_maican}. 
They are irreducible, of dimensions $14$, $16$, and $15$, respectively. The generic member of $\overline{\mathbf R} \cap \mathbf{P}$ has the form $[\mathcal O_C(P_1 + P_2 + P_3)]$, where $C$ is a planar quartic curve and $P_1$, $P_2$, $P_3$ are three distinct nodes. The generic point in $\overline{\mathbf E} \cap \mathbf{P}$ has the form $[\mathcal O_C(P_1 + P_2 + P)]$, where $C$ is a planar quartic curve, $P_1$ and $P_2$ are distinct nodes and $P$ is a third point on $C$. The generic sheaves in $\overline{\mathbf R} \cap \overline{\mathbf E}$ have the form $\mathcal O_E(P)$, where $E$ is a singular $(2, 2)$-curve on a smooth quadric surface and $P \in \operatorname{sing}(E)$.
\section{Introduction} \begin{figure}[t] \centering \includegraphics[angle=0,width=\textwidth]{pic/flowchart.png} \caption{High-level description of the proposed \mbox{MF-KD}{} method.} \label{fig:flowchart} \end{figure} Deep learning is the state of the art in the majority of ML problems: computer vision, speech recognition, machine translation, etc. The progress in this area mostly comes from discovering new architectures of neural networks, which is usually performed by human experts. This motivates a new direction of research -- \textit{neural architecture search (NAS)} -- developing algorithms for finding new well-performing architectures of neural networks. Existing approaches can be broadly divided into two groups. \textbf{Black-box optimization}. Given a discrete search space $\mathcal{A}$ of all the architectures and a performance function $f(\cdot)$ of an architecture, like testing accuracy, these approaches aim to solve $\mathop{\mathrm{argmax}}_{a \in \mathcal{A}} f(a) $ via black-box optimization. One of the first proposed approaches of this kind \cite{zoph2016neural,zoph2018learning} treated architecture design (layer by layer) as a sequential decision process, with the performance $f(a)$ serving as the reward. The optimization was done by reinforcement learning. Classical methods like Gaussian process-based Bayesian optimization with a particular kernel \cite{kandasamy2018neural} and evolutionary optimization \cite{real2019regularized} can also be applied. Some methods \cite{white2019bananas,shi2019multi} use performance predictors together with Bayesian uncertainty estimation. The black-box optimization methods share the same drawback -- they require a large number of architecture evaluations and significant computational resources. \textbf{One-shot NAS}. Another line of research goes beyond black-box optimization and utilizes the structure and the learning algorithm of a neural network. 
The architecture search is done simultaneously with the training of the networks themselves, and the search time is not significantly larger than the training time of a single network. The key idea of one-shot NAS is the \textit{weight-sharing} trick -- that is, all the architectures from the search space share weights of architecture blocks. Some black-box methods like evolutionary search \cite{elsken2018efficient} and RL-based NAS \cite{pham2018efficient} can be modified with weight sharing and enjoy a considerable speedup. The DARTS method \cite{liu2018darts} considers a supernetwork containing all the networks from a search space as its subnetworks. The choice between subnetworks is governed by architectural parameters which are updated by gradient descent, similarly to differentiable hyperparameter optimization methods \cite{pedregosa2016hyperparameter}. Subsequent modifications improve DARTS in terms of search speed and performance of the resulting architectures \cite{xu2019pc,chen2019progressive,liang2019darts+,dong2019searching,cai2018proxylessnas}. Alternative approaches update subnetworks randomly during the training phase \cite{guo2019single,li2019random,bender2019understanding}. Then, the best subnetwork is selected by its validation accuracy. Overall, black-box optimization methods are much slower but more robust and general. Given a rich search space of architectures, a black-box method will typically find a good one, though at the cost of a long search. Moreover, black-box optimization does not require the network's performance to be differentiable with respect to architectural parameters, unlike DARTS. Constraints like FLOPS/latency/memory footprint can be applied straightforwardly. Popular one-shot methods like DARTS and ENAS are quite fast. Unfortunately, they perform only slightly better than random search \cite{yang2019evaluation,adam2019understanding,li2019random}. Is it possible to speed up black-box NAS? 
The natural approach is to do \textit{low-fidelity} evaluations of neural architectures, for example, train them for a few epochs. However, the final goal is to find the best architecture in terms of a \textit{high-fidelity} evaluation -- after training until convergence. An interesting research question arises: is it possible to make a correct architecture selection after training for a few epochs? Obviously, this selection is not perfect, but we show how to improve it by training with a \textit{knowledge distillation (KD)} loss function. We found that the proposed technique not only improves the accuracy of a network but also improves the correlation between low- and high-fidelity evaluations. In this paper, we make the following contributions: \begin{itemize} \item we propose \textbf{a new approach to the low-fidelity evaluation of neural architectures} -- training for a few epochs with a knowledge distillation loss (Section \ref{sec:method-kd}); \item we incorporate the proposed low-fidelity evaluations into a \textbf{Bayesian multi-fidelity search method ``\mbox{MF-KD}{}''} based on co-kriging (Figure \ref{fig:flowchart} and Section \ref{sec:method-mf}); \item we carry out experiments with the NAS-Bench-201 benchmark \cite{dong2020bench}, covering CIFAR-10, CIFAR-100, and ImageNet-16-120. We show that the proposed approach leads to \textbf{better architecture selection}, given the same computational budget, than several state-of-the-art baselines (Section \ref{sec:experiments}). \end{itemize} The code is in the repository\\ \url{https://anonymous.4open.science/r/a6c96420-435a-484b-9170-e2de9ab0aee3/}. \\ \section{Related Work} \textbf{Knowledge distillation} (KD) was proposed in \cite{hinton2015distilling}. The seminal paper matched predictions of a student and a teacher with cross-entropy. 
Later extensions suggest matching feature maps instead of class probabilities \cite{romero2014fitnets,zagoruyko2016paying,tung2019similarity,peng2019correlation,ahn2019variational,passalis2018learning,huang2017like,tian2019contrastive}. Methods similar to KD have been developed for other problems: sequence-to-sequence modeling \cite{kim2016sequence}, reinforcement learning \cite{rusu2015policy}, etc. \textbf{Multi-fidelity/low-fidelity}. Low-fidelity evaluations are sometimes used in the context of hyperparameter optimization and NAS. The proposed variants include: training on a part of the dataset \cite{klein2016fast}, shorter training time \cite{zela2018towards}, lower resolution of images \cite{chrabaszcz2017downsampled}, fewer filters per layer \cite{zoph2018learning}. Low-fidelity evaluations are faster, but they are biased. This issue motivates \textit{multi-fidelity} methods which progressively increase fidelity during the search: MF-GP-UCB \cite{kandasamy2019multi}, MF-MI-Greedy \cite{song2018general}, co-kriging \cite{klyuchnikov2019figuring}. \textbf{KD \& NAS}. Some very recent papers study applications of KD to NAS. \cite{li2020blockwisely} propose to independently train blocks in a student's supernetwork by mimicking corresponding blocks of a teacher with an MSE loss. \cite{kang2020towards} proposed an oracle knowledge distillation loss and showed that ENAS \cite{pham2018efficient} with this loss outperforms ENAS with the logistic loss. \cite{liu2020search} studies RL-based NAS with networks trained with a KD loss instead of a logistic loss. They conclude that the found architecture depends on the teacher architecture used for KD, that is, some structural knowledge is transferred from a teacher. The main difference between our work and the aforementioned papers is that we use the KD loss for improving low-fidelity evaluations of architectures -- inside a multi-fidelity search algorithm. 
At the same time, \cite{liu2020search} does only high-fidelity evaluations (training on the full dataset). \cite{kang2020towards,li2020blockwisely} incorporate the KD loss into the training of a supernetwork, while our work is about treating NAS as a search over a discrete domain of architectures. \section{The proposed method} \label{sec:proposed-method} \subsection{Knowledge Distillation (KD)} Knowledge distillation (KD) assumes two models: a \textit{teacher} and a \textit{student}. The teacher is typically a large and accurate network or an ensemble. The student is trained to fit the softmax outputs of the teacher together with ground truth labels. The idea is that the outputs of the teacher capture not only the information provided by ground truth labels but also the probabilities of other classes -- ``dark knowledge''. The knowledge distillation can be summarized as follows. Let $z_i$ be logits (pre-softmax activations) and $q_i$ -- probabilities of classes as predicted by a neural network. Knowledge distillation smooths $z_i$ with the temperature $\tau$ \begin{equation} q_i = \sigma(z_i / \tau) = \frac{\exp(z_i/\tau)}{\sum_j \exp(z_j/\tau)}. \end{equation} Neural networks often make very confident predictions (close to 0 or 1), and smoothing helps provide the student with more information during training \cite{hinton2015distilling}. The KD loss is a linear combination of the logistic loss and the cross-entropy between the predictions of the teacher and the student \begin{equation} \label{kd-loss} (1-\lambda) \sum_{i} H(y_i, \sigma(z^S_i)) + \lambda \tau^2 \sum_{i} H ( \sigma(z^T_i / \tau), \sigma(z^S_i / \tau) ), \end{equation} where $z^T_i$, $z^S_i$ are logits of the teacher and the student, $H(p, q) = - p \log(q)$ is the cross-entropy function. The factor $\tau^2$ scales the gradients of the two parts of the loss to the same order of magnitude. In the rest of the paper, we will refer to this variant of knowledge distillation as ``original KD''. 
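As a concrete illustration, the loss (\ref{kd-loss}) can be sketched in NumPy as follows; this is a minimal sketch, and the function names and toy logits are ours, not part of the original implementation:

```python
import numpy as np

def softmax(z, tau=1.0):
    # Temperature-smoothed softmax: sigma(z / tau).
    z = np.asarray(z, dtype=float) / tau
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(p, q, eps=1e-12):
    # H(p, q) = -sum_i p_i log(q_i)
    return float(-np.sum(np.asarray(p) * np.log(np.asarray(q) + eps)))

def kd_loss(z_student, z_teacher, y_onehot, lam=0.5, tau=4.0):
    # (1 - lambda) * H(y, sigma(z_S)) + lambda * tau^2 * H(sigma(z_T/tau), sigma(z_S/tau))
    hard = cross_entropy(y_onehot, softmax(z_student))
    soft = cross_entropy(softmax(z_teacher, tau), softmax(z_student, tau))
    return (1.0 - lam) * hard + lam * tau**2 * soft
```

With $\lambda = 0$ the loss reduces to the plain logistic (cross-entropy) loss, while with $\lambda = 1$ the student fits only the teacher's smoothed predictions.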
Other variants of KD suggest matching feature maps of the student and the teacher with various discrepancy functions \cite{romero2014fitnets,zagoruyko2016paying,tung2019similarity,peng2019correlation,ahn2019variational,passalis2018learning,huang2017like,tian2019contrastive}. For example, the NST loss \cite{huang2017like} uses Maximum Mean Discrepancy (MMD): \begin{equation} \label{eq:nst-loss} \sum_{i} \left( H(y_i, \sigma(z^S_i)) + \beta \mathcal{L}_{MMD^2}(F_{i, T}, F_{i,S}) \right), \end{equation} where $F_T$, $F_S$ are the feature maps of the teacher and the student, \begin{gather} \mathcal{L}_{MMD^2}(F^T, F^S) = \frac{1}{C_T^2}\sum_{i=1}^{C_T}\sum_{i'=1}^{C_T} k(\frac{f_T^{i\cdot}}{||f_T^{i\cdot}||_2}, \frac{f_T^{i'\cdot}}{||f_T^{i'\cdot}||_2}) + \frac{1}{C_S^2}\sum_{j=1}^{C_S}\sum_{j'=1}^{C_S} k(\frac{f_S^{j\cdot}}{||f_S^{j\cdot}||_2}, \frac{f_S^{j'\cdot}}{||f_S^{j'\cdot}||_2})\notag\\ - \frac{2}{C_T C_S}\sum_{i=1}^{C_T}\sum_{j=1}^{C_S} k(\frac{f_T^{i\cdot}}{||f_T^{i\cdot}||_2}, \frac{f_S^{j\cdot}}{||f_S^{j\cdot}||_2})\label{eq:nst-loss-mmd}. \end{gather} Here $f^{i\cdot}_T$, $f^{j\cdot}_S$ are feature maps from layers $i$, $j$ of the teacher and the student, respectively, and $k(x,y)$ is a kernel. \subsection{Low-fidelity evaluations with knowledge distillation} \label{sec:method-kd} Let $\alpha$ be an architecture from a search space $\mathcal{A}$. We assume that each architecture can be represented by a real-valued vector of features $x\in\mathcal{X}\subseteq\mathbb{R}^d$. We call $y(x)$ an \textit{evaluation} of the architecture $\alpha$, namely its validation accuracy after fitting on the train dataset. 
We use the following notations: \begin{itemize} \item $y^1(x)$ - \textit{low-fidelity} evaluation, that is, validation accuracy of the network $\alpha$ after fitting on the train dataset for $E_1$ epochs with the NST loss (\ref{eq:nst-loss}); \item $y^2(x)$ - \textit{high-fidelity} evaluation, that is, validation accuracy of the network $\alpha$ after fitting on the train dataset for $E_2$ epochs with the logistic loss, $E_2>E_1$. \end{itemize} The better the low-fidelity evaluation $y^1(x)$ is, the higher the correlation between $y^1(x)$ and $y^2(x)$ should be. Even when the correlation is large, low-fidelity evaluations are not enough, since they are typically biased: \begin{equation} \mathop{\mathrm{argmax}}_{x} y^1(x) \neq \mathop{\mathrm{argmax}}_{x} y^2(x). \label{eq:fidelity_opt_bias} \end{equation} This bias motivates \textit{multi-fidelity} search methods that combine low- and high-fidelity evaluations. \subsection{Multi-fidelity NAS} \label{sec:method-mf} We combine low-fidelity $y^1(x)$ and high-fidelity $y^2(x)$ evaluations via the co-kriging fusion model \begin{equation} \label{eq:co-kriging} y^{2}(x) = \rho y^{1}(x) + \delta(x), \end{equation} where $y^{2}(x)$, $y^{1}(x)$, $\delta(x)$ are Gaussian Processes \cite{williams2006gaussian}, $y^{1}(x)$ is independent of $\delta(x)$, and $\delta(x)$ accounts for the bias \eqref{eq:fidelity_opt_bias}; $\rho$ is a scaling factor fitted by maximum likelihood. Figure \ref{fig:flowchart} depicts the high-level structure of the proposed method, while Algorithm \ref{alg:mf-nas} gives the formal description. Initially, Algorithm \ref{alg:mf-nas} samples $n_1+n_2$ architectures randomly and does low-fidelity evaluations of $n_1$ architectures and high-fidelity evaluations of $n_2$ architectures. After this warm-up, the architectures are selected cyclically by the UCB criterion (line \ref{alg_line:ucb}) and evaluated via the high-fidelity evaluation only. 
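The fusion and selection steps can be sketched as follows. This is a minimal NumPy illustration with a hand-rolled zero-mean GP, an RBF kernel, a least-squares estimate of $\rho$, and toy data -- all our own simplifying assumptions rather than the exact implementation, which fits $\rho$ by maximum likelihood:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # Squared-exponential kernel matrix between rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

class GP:
    """Plain zero-mean GP regression with an RBF kernel."""
    def __init__(self, ls=1.0, noise=1e-4):
        self.ls, self.noise = ls, noise
    def fit(self, X, y):
        self.X, self.y = X, y
        K = rbf(X, X, self.ls) + self.noise * np.eye(len(X))
        self.Kinv = np.linalg.inv(K)
        return self
    def predict(self, Xs):
        Ks = rbf(Xs, self.X, self.ls)
        mean = Ks @ self.Kinv @ self.y
        # Posterior variance; prior variance k(x, x) = 1 for this kernel.
        var = 1.0 - np.einsum('ij,jk,ik->i', Ks, self.Kinv, Ks)
        return mean, np.maximum(var, 0.0)

def cokriging_ucb(X1, y1, X2, y2, Xc, beta=1.0, ls=1.0):
    # y2(x) ~ rho * y1(x) + delta(x): GP on low-fidelity data,
    # least-squares rho on the high-fidelity points, GP on residuals.
    gp1 = GP(ls).fit(X1, y1)
    m1_at_2, _ = gp1.predict(X2)
    rho = float(m1_at_2 @ y2 / (m1_at_2 @ m1_at_2))
    gpd = GP(ls).fit(X2, y2 - rho * m1_at_2)
    m1, v1 = gp1.predict(Xc)
    md, vd = gpd.predict(Xc)
    mean, var = rho * m1 + md, rho**2 * v1 + vd
    # UCB selection: argmax of mean + beta * variance over the candidates.
    return int(np.argmax(mean + beta * var)), mean, var
```

The returned index corresponds to the candidate maximizing $\mathbb{E}[y^{(2)}(x)] + \beta \operatorname{Var}[y^{(2)}(x)]$, as in the UCB selection step of the algorithm.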
At each iteration, the co-kriging fusion model (line \ref{alg_line:co-kriging}) is updated. Finally, Algorithm \ref{alg:mf-nas} returns the architecture with the best validation score. The proposed model can be extended to more than two levels of fidelity using the scheme proposed in \cite{kennedy2000predicting}. Under the Markov assumptions on the covariance structures, each level of fidelity depends only on the previous one in the same fashion as high-fidelity depends on low-fidelity in \eqref{eq:co-kriging}. \begin{algorithm}[t!] \caption{\mbox{MF-KD}{}: A Multi-fidelity Neural Architecture Search Method with Knowledge Distillation} \label{alg:mf-nas} \DontPrintSemicolon \SetAlgoVlined \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Return} \Input{$\mathcal{X}$ - search space of architectures (encodings in $\mathbb{R}^d$), $n_1$, $n_2$, $E_1$, $E_2$, $N$, $T$ - total budget for the procedure, $\beta$ - non-negative float (default 1.0, exploration/exploitation trade-off).} \BlankLine $t = 0$ // spent budget\; Randomly sample $n_{1}$ architectures -- $A_1$ \; (I) Train architectures from $A_1$ for $E_1$ epochs with Knowledge Distillation (low-fidelity 1)\; $t += \texttt{budget for (I)}$\; Randomly sample $n_{2}$ architectures -- $A_2$ \; (II) Train architectures from $A_2$ for $E_2$ epochs (low-fidelity 2)\; $t += \texttt{budget for (II)}$\; Fit co-kriging fusion regression $y^{(2)}(x) = \rho y^{(1)}(x) + \delta(x) $, where $y^{(2)}(x)$ -- predictions for low-fidelity 2 data, $y^{(1)}(x)$ -- predictions for low-fidelity 1 data, $\delta(x)$ -- discrepancy. 
$y^{(2)}(x), y^{(1)}(x), \delta(x)$ are Gaussian Processes, $y^{(1)}$ is independent of $\delta$, $\rho$ - scaling factor (parameter).\; \While{$t < T$} { Sample $N$ random architectures -- $A$\; Select one architecture $x_*$ from $A$: $x_* = \mathop{\mathrm{argmax}}_{x \in A} \left(\mathbb{E}\left[y^{(2)}(x)\right] + \beta \operatorname{Var}\left[y^{(2)}(x)\right]\right)$ \label{alg_line:ucb}\; (III) Train architecture $x_*$ for $E_2$ epochs (low-fidelity 2)\; $t += \texttt{budget for (III)}$\; Fit co-kriging fusion regression (with updated low-fidelity 2 data for $x_*$) \label{alg_line:co-kriging} } \BlankLine \Output{Architecture with the best validation score after $E_2$ epochs evaluated during the procedure.} \end{algorithm} \section{Experiments} \label{sec:experiments} \subsection{NAS benchmark} \label{sec:benchmark} \begin{figure}[t] \centering \includegraphics[angle=0,width=\textwidth]{pic/nasbench-201.png} \caption{Top: macro-structure of the network. Bottom-left: example of stacked cells with various operations. Each cell is a directed acyclic graph. Each edge is associated with some operation. Bottom-right: the list of operations. The picture is redrawn from \cite{dong2020bench}.} \label{fig:nas-bench-201} \end{figure} For the experiments, we used the NAS-Bench-201 \cite{dong2020bench} benchmark\footnote{Initially, we planned to carry out experiments with the larger NAS-Bench-101 \cite{ying2019bench} benchmark containing 423k trained architectures. Unfortunately, the implementation is in TensorFlow, while KD methods are implemented in PyTorch; they are not compatible. We also faced technical difficulties with the TensorFlow code.}. It contains 15,625 convolutional architectures trained on CIFAR-10, CIFAR-100 and ImageNet-16-120 (a downsampled 16$\times$16 variant of ImageNet with 120 classes). Figure \ref{fig:nas-bench-201} shows the macro- and micro-structure of the architectures. 
Each cell is stacked $N = 5$ times; the number of output channels gradually increases from 16 to 64 from the first to the last layer. These architectures were trained with the following hyperparameters: 200 epochs, cosine annealing learning rate, momentum 0.9, initial learning rate 0.1, weight decay $5\times10^{-4}$, and augmentation (random crop, random flip). The benchmark provides thorough logs of training. The test accuracy of top architectures from the NAS-Bench-201 is not state-of-the-art since the networks were trained for relatively few epochs with only basic augmentation techniques. Otherwise, training of 15,625 networks would be infeasible. \textbf{Encoding of the architectures}. Since the macro-structure is fixed, each architecture can be unequivocally described by its cell structure. We used concatenated one-hot encodings of operations associated with edges as the encoding of the whole architecture. \subsection{KD methods} \label{sec:experiments-kd} \begin{table}[t] \centering \caption{Kendall-tau correlation between low-fidelity and high-fidelity evaluations. Low-fidelity evaluations are done by training for 1 epoch.} \begin{tabular}{cccc} \toprule \textbf{Loss} & \multicolumn{3}{c}{\textbf{Dataset}} \\ \cmidrule{2-4} & \textbf{CIFAR-10} & \textbf{CIFAR-100} & \textbf{ImageNet16-120} \\ \midrule logistic loss & 0.17 & 0.06 & 0.21\\ NST loss & \textbf{0.47} & \textbf{0.47} & \textbf{0.45}\\ \bottomrule \end{tabular} \label{tbl:lf-hf-corr} \end{table} In preliminary experiments with the CIFAR-100 dataset (see Appendix \ref{app:kd-methods}) we tested various types of KD methods\footnote{We used implementations from \url{https://github.com/HobbitLong/RepDistiller}.}. We selected the NST method for the further experiments with NAS-Bench-201 since it led to the highest correlation between low-fidelity and high-fidelity evaluations. In the NST loss, we used the polynomial kernel $k(x,y)=(x^Ty+c)^b$ with $c=0$, $b=2$ and $\beta=12.5$ in (\ref{eq:nst-loss}). 
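The MMD$^2$ term (\ref{eq:nst-loss-mmd}) with this polynomial kernel can be sketched in NumPy as follows; the function names and flattened toy feature maps are our own illustration:

```python
import numpy as np

def poly_kernel(X, Y, c=0.0, b=2):
    # k(x, y) = (x^T y + c)^b, applied row-wise between X and Y.
    return (X @ Y.T + c) ** b

def mmd2_nst(F_t, F_s, c=0.0, b=2):
    """L_{MMD^2} between teacher and student feature maps.

    F_t: (C_T, HW) teacher maps, one row per channel, flattened spatially.
    F_s: (C_S, HW) student maps. Rows are l2-normalized as in the NST loss.
    """
    Ft = F_t / np.linalg.norm(F_t, axis=1, keepdims=True)
    Fs = F_s / np.linalg.norm(F_s, axis=1, keepdims=True)
    ct, cs = len(Ft), len(Fs)
    return (poly_kernel(Ft, Ft, c, b).sum() / ct**2
            + poly_kernel(Fs, Fs, c, b).sum() / cs**2
            - 2.0 * poly_kernel(Ft, Fs, c, b).sum() / (ct * cs))
```

The term vanishes when the two sets of normalized channel activations induce the same mean embedding, e.g. when teacher and student maps coincide.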
We used ResNet networks trained on the same datasets as teachers. Unfortunately, calculating the gradient of the NST loss (\ref{eq:nst-loss}) adds significant overhead compared to the traditional logistic loss: training becomes $\approx$3 times slower. To mitigate this issue, we calculated the NST loss approximately as \begin{equation} \label{eq:nst-loss-approx} \sum_{i} \left( H(y_i, \sigma(z^S_i)) + \beta \mathcal{\widetilde{L}}_{MMD^2}(F_{i, T}, F_{i,S}) \right), \end{equation} with only a subset of feature maps $\widetilde{C}_T \subset \{1, \ldots, C_T\}, \widetilde{C}_S \subset \{1, \ldots, C_S\}$ \footnote{We used feature maps after 2 cells of N=5 stacked cells and each residual block (see Figure \ref{fig:nas-bench-201}).}: \begin{gather} \mathcal{\widetilde{L}}_{MMD^2}(F^T, F^S) = \frac{1}{|\widetilde{C}_T|^2}\sum_{i\in \widetilde{C}_T}\sum_{i' \in \widetilde{C}_T} k(\frac{f_T^{i\cdot}}{||f_T^{i\cdot}||_2}, \frac{f_T^{i'\cdot}}{||f_T^{i'\cdot}||_2})\\ + \frac{1}{|\widetilde{C}_S|^2}\sum_{j \in \widetilde{C}_S}\sum_{j' \in \widetilde{C}_S} k(\frac{f_S^{j\cdot}}{||f_S^{j\cdot}||_2}, \frac{f_S^{j'\cdot}}{||f_S^{j'\cdot}||_2})\notag - \frac{2}{|\widetilde{C}_T| |\widetilde{C}_S|}\sum_{i \in \widetilde{C}_T}\sum_{j \in \widetilde{C}_S} k(\frac{f_T^{i\cdot}}{||f_T^{i\cdot}||_2}, \frac{f_S^{j\cdot}}{||f_S^{j\cdot}||_2})\label{eq:nst-loss-mmd-approx}. \end{gather} After this optimization, training with the NST loss became only $\approx$1.5 times slower\footnote{Even more speed-up is possible by precomputing the feature maps of the teacher network.}. We did not perform a detailed study of this issue and consider the selection of layers for knowledge distillation an interesting topic for further research. Table \ref{tbl:lf-hf-corr} shows Kendall-tau rank correlations between low- and high-fidelity evaluations. Low-fidelity evaluations are done by training architectures for 1 epoch.
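The subset MMD$^2$ term above can be sketched in plain Python: feature maps are flattened to vectors and $\ell_2$-normalized, and the same polynomial kernel with $c=0$, $b=2$ is used (helper names are ours):

```python
import math

def poly_kernel(x, y, c=0.0, b=2):
    """Polynomial kernel k(x, y) = (x . y + c) ** b."""
    return (sum(xi * yi for xi, yi in zip(x, y)) + c) ** b

def normalize(v):
    """l2-normalize a flattened feature map."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def mmd2(F_t, F_s, kernel=poly_kernel):
    """Squared MMD between two sets of (flattened) feature maps:
    teacher-teacher + student-student - 2 * teacher-student kernel means."""
    F_t = [normalize(f) for f in F_t]
    F_s = [normalize(f) for f in F_s]
    tt = sum(kernel(a, g) for a in F_t for g in F_t) / len(F_t) ** 2
    ss = sum(kernel(a, g) for a in F_s for g in F_s) / len(F_s) ** 2
    ts = sum(kernel(a, g) for a in F_t for g in F_s) / (len(F_t) * len(F_s))
    return tt + ss - 2 * ts

# Identical feature-map subsets give zero discrepancy.
F = [[1.0, 0.0], [0.0, 1.0]]
print(abs(mmd2(F, F)) < 1e-12)  # -> True
```

Restricting the sums to the subsets $\widetilde{C}_T, \widetilde{C}_S$ simply means passing fewer rows to `mmd2`, which is where the reported $\approx$2x speed-up comes from.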
We conclude that training with the NST loss significantly improves the correlation over the conventional logistic loss. \subsection{Multi-fidelity NAS} \label{sec:experiments-mf} Finally, we tested the proposed \mbox{MF-KD}{} method (Algorithm \ref{alg:mf-nas}) with the NAS-Bench-201 benchmark. We used the parameters $E_1=1, E_2=12, n_1=100, n_2=20, N=5000$. NAS methods were compared by the test accuracy \footnote{ The co-kriging regression in Algorithm \ref{alg:mf-nas} is fitted to the \textit{validation} accuracy, while the methods are compared by the \textit{test} accuracy of the best architectures. Model selection based on validation accuracy while estimating performance by test accuracy is a common pattern in AutoML/NAS algorithms, used to avoid overfitting.} of the best architecture found, averaged over 100 runs. All NAS methods were allowed an equal computational budget of $12\times10^3$ seconds, the same as in \cite{dong2020bench}. For all the methods except \mbox{MF-KD}{} we used the data from \cite{dong2020bench}. Table \ref{tbl:mf-results} presents the results. We conclude that the \mbox{MF-KD}{} method is the best performing one. In particular, it found a better architecture than the state-of-the-art multi-fidelity algorithm BOHB. The improvement over the second best method, Regularized Evolution, is significant with p-value $< 0.05$. An alternative way to assess a NAS method's quality is to compare results by the relative accuracy in the search space instead of the absolute accuracy. The MF-KD method found architectures with accuracies in the top 0.3\%, 0.2\%, and 0.5\% of the search space for the datasets CIFAR-10, CIFAR-100, and ImageNet16-120, respectively. \begin{table}[t] \centering \caption{Results of the \mbox{MF-KD}{} method. For each method, the accuracy of the best architecture found is shown.
The search was performed under the same computational budget, averaged over 100 runs.} \begin{tabular}{cccc} \toprule \textbf{Method} & \multicolumn{3}{c}{\textbf{Accuracy, \%}} \\ \cmidrule{2-4} & \textbf{CIFAR-10} & \textbf{CIFAR-100} & \textbf{ImageNet-16-120} \\ \midrule DARTS-V2 & 54.30 & 15.61 & 16.32 \\ GDAS & 93.51 & 70.61 & 41.84 \\ ENAS & 54.30 & 15.61 & 16.32 \\ Reg. Evolution & 93.92 & 71.84 & 45.54 \\ Random Search & 93.70 & 71.04 & 44.57 \\ BOHB & 93.61 & 70.85 & 44.42 \\ \textbf{\mbox{MF-KD}{}} & \textbf{93.93} & \textbf{72.00} & \textbf{45.67} \\ \bottomrule \end{tabular} \label{tbl:mf-results} \end{table} \subsection{Ablation studies} \label{sec:ablation} The proposed \mbox{MF-KD}{} method has two contributions: Bayesian multi-fidelity search and low-fidelity evaluations after training with the NST loss. We carry out ablation studies of these contributions: \begin{itemize} \item Search with a single fidelity: Gaussian Processes Regression (GPR) with high-fidelity evaluations only; \item Multi-fidelity search, where low-fidelity evaluations use training with the conventional logistic loss for a few epochs. \end{itemize} Table \ref{tbl:ablation} shows the results: both contributions improve the algorithm's performance on the more complex CIFAR-100 and ImageNet-16-120 datasets. For the simpler CIFAR-10, this is not the case. Also, for CIFAR-100 and ImageNet-16-120 the proposed method found an architecture better than the teacher, ResNet. \begin{table}[t] \centering \caption{Ablation studies of the \mbox{MF-KD}{} method. For each method, the accuracy of the best architecture found is shown.
The search was performed under the same computational budget, averaged over 100 runs.} \begin{tabular}{cccc} \toprule \textbf{Method} & \multicolumn{3}{c}{\textbf{Accuracy, \%}} \\ \cmidrule{2-4} & \textbf{CIFAR-10} & \textbf{CIFAR-100} & \textbf{ImageNet-16-120} \\ \midrule ResNet & \textbf{93.97} & 70.86 & 44.63 \\ Single fidelity (GPR) & 93.78 & 71.44 & 45.44 \\ Multi-fidelity (no KD) & \textbf{93.97} & 71.83 & 45.41 \\ \textbf{\mbox{MF-KD}{}} & 93.93 & \textbf{72.00} & \textbf{45.67} \\ \bottomrule \end{tabular} \label{tbl:ablation} \end{table} \section{Conclusion} In this work, we have proposed the new \mbox{MF-KD}{} method tailored to neural architecture search. Our experiments demonstrate that the \mbox{MF-KD}{} method is efficient: it leads to a better architecture selection than several state-of-the-art baselines given the same computational budget. It also outperforms the state-of-the-art multi-fidelity method BOHB. We validated our contributions on the NAS-Bench-201 benchmark, including the CIFAR-10, CIFAR-100 and ImageNet-16-120 datasets. Our research gives an interesting insight into knowledge distillation methods themselves. While these methods are typically compared by the performance of compact student networks trained with the KD loss, we apply the KD loss to improve \textit{architecture selection} after training for a few epochs. Our work satisfies the best practices for scientific research on NAS \cite{lindauer2019best}, see Appendix \ref{appendix:best-practices}.
\section{Best practices of NAS research} \label{appendix:best-practices} The best practices of NAS research are the following \cite{lindauer2019best}: \begin{enumerate} \item Release Code for the Training Pipeline(s) you use; \item Release Code for Your NAS Method; \item Don’t Wait Until You’ve Cleaned up the Code; That Time May Never Come; \item Use the Same NAS Benchmarks, not Just the Same Datasets; \item Run Ablation Studies; \item Use the Same Evaluation Protocol for the Methods Being Compared; \item Compare Performance over Time; \item Compare Against Random Sampling and Random Search; \item Validate The Results Several Times; \item Use Tabular or Surrogate Benchmarks If Possible; \item Control Confounding Factors; \item Report the Use of Hyperparameter Optimization; \item Report the Time for the Entire End-to-End NAS Method; \item Report All the Details of Your Experimental Setup. \end{enumerate} We released all the code (1, 2, 3). We carried out experiments on a public tabular benchmark (4, 10). We did ablation studies in Section \ref{sec:ablation} (5). Since we used a tabular benchmark, (6) is satisfied. For multi-fidelity optimization, we made comparisons over time (7). We compared against random search (8). Experimental results are averaged over many runs (9). We did our best to control confounding factors (11). Hyperparameter optimization (12) is described in Appendix \ref{app:kd-methods}. We did our best to report all the details about the experimental setup (14). We also discuss computational performance in Section \ref{sec:experiments-kd}. \section{Comparison of KD methods} \label{app:kd-methods} Initially, we evaluated various KD methods on two small search spaces: 100 modifications of MobileNetV2 \cite{sandler2018mobilenetv2} and 300 modifications of ShuffleNetV2 \cite{ma2018shufflenet} architectures.
These architectures share the same pattern -- particular human-designed blocks are repeated a certain number of times while channel counts increase from input to output. To create the search spaces, we randomly modified repetitions and channel counts while preserving the increasing pattern of channels from input to output. These numbers -- repetitions and channel counts -- were used as the architectures' features. The dimensionality of the MobileNetV2 search space is 16, of the ShuffleNetV2 search space -- 7. To avoid too small and too large architectures, we kept only those with a number of parameters in the range $(\nicefrac{1}{3}P, 3P)$, where $P$ is the number of parameters of the original MobileNetV2 and ShuffleNetV2, respectively. We trained all the architectures on the full CIFAR-100 \cite{krizhevsky2009learning} dataset, and on its \nicefrac{1}{27}, \nicefrac{1}{9}, \nicefrac{1}{3} random but fixed subsets (instead of training for a few epochs). Various loss functions were tested: the logistic loss (no KD) and the knowledge distillation methods original KD \cite{hinton2015distilling}, Hint \cite{romero2014fitnets}, AT \cite{zagoruyko2016paying}, SP \cite{tung2019similarity}, CC \cite{peng2019correlation}, VID \cite{ahn2019variational}, PKT \cite{passalis2018learning}, NST \cite{huang2017like}, and CRD \cite{tian2019contrastive}. The hyperparameters of training were: 100 epochs, momentum 0.9, cosine annealing learning rate, initial learning rate 0.1, weight decay $5\times10^{-4}$, batch size 128 with random cropping and horizontal flipping. The hardware was a GeForce GTX 1080 Ti. Teachers in the search spaces were the original MobileNetV2 and ShuffleNetV2 architectures trained on the same dataset with the same hyperparameters. \begin{table}[t] \centering \caption{Correlation between high-fidelity and low-fidelity evaluations.
} \label{tbl:many-kd-corr} \centering \begin{tabular}{cccccccccccc} \toprule \multirow{2}{*}{\textbf{Part}} & \multicolumn{10}{c}{\textbf{Pearson corr.}} \\ \cmidrule{2-11} & \textbf{no KD} & \textbf{orig. KD} & \textbf{AT} & \textbf{NST} & \textbf{SS} & \textbf{VID} & \textbf{PKT} & \textbf{CRD} & \textbf{Hint} & \textbf{CC}\\ \midrule & \multicolumn{10}{c}{{MobileNetV2 search space}}\\ \cmidrule{2-11} 1/27 & 0.11 & 0.34 & \textbf{0.57} & 0.42 & 0.35 & -0.03 & 0.35 & 0.18 & 0.19 & 0.16\\ 1/9 & 0.46 & \textbf{0.61} & \textbf{0.61} & 0.60 & 0.53 & 0.07 & 0.47 & 0.44 & 0.48 & 0.45\\ 1/3 & 0.86 & \textbf{0.92} & 0.74 & 0.81 & 0.79 & -0.21 & 0.41 & 0.85 & 0.84 & 0.90\\ \midrule & \multicolumn{10}{c}{{ShuffleNetV2 search space}}\\ \cmidrule{2-11} 1/27 & 0.48 & 0.54 & 0.43 & \textbf{0.61} & 0.45 & 0.45 & 0.44 & 0.43 & 0.47 & 0.46\\ 1/9 & 0.64 & \textbf{0.81} & 0.57 & 0.74 & 0.61 & 0.60 & 0.60 & 0.30 & 0.64 & 0.58\\ 1/3 & \textbf{0.92} & 0.91 & 0.72 & \textbf{0.93} & 0.91 & 0.92 & 0.76 & 0.88 & 0.91 & \textbf{0.92}\\ \bottomrule \end{tabular} \end{table} \textbf{Hyperparameters tuning}. We tuned the hyperparameters of the KD methods by doing low-fidelity evaluations of 20 random architectures with training on a \nicefrac{1}{9} part of the CIFAR-100 dataset. We then selected the best combination by the highest correlation with high-fidelity evaluations, see Table \ref{tbl:kd-hyperparams}. The same hyperparameters were used for the NST loss in the main experiments with the NAS-Bench-201 benchmark (Section \ref{sec:experiments}). Table \ref{tbl:many-kd-corr} shows the correlation between high-fidelity and low-fidelity evaluations for the best hyperparameters. We conclude that the AT and NST losses (the AT loss is a particular case of the NST loss) perform the best for the evaluation by training on \nicefrac{1}{27} of the dataset.
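The Pearson correlations reported in Table \ref{tbl:many-kd-corr} compare low- and high-fidelity accuracy scores across the same set of architectures; a minimal sketch of the computation:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between low- and high-fidelity scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative (made-up) accuracies: low-fidelity scores that preserve
# the high-fidelity ordering correlate strongly.
low = [0.31, 0.35, 0.42, 0.47]
high = [0.60, 0.64, 0.71, 0.74]
print(round(pearson(low, high), 3))  # close to 1
```

A high correlation is exactly what makes cheap low-fidelity evaluations useful as a ranking proxy for the expensive high-fidelity ones.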
\begin{table}[t] \centering \caption{Optimal hyperparameters of KD methods} \label{tbl:kd-hyperparams} \begin{tabular}{lcc} \toprule KD method & MobileNetV2 & ShuffleNetV2 \\ & search space & search space \\ \midrule Distilling the knowledge in a neural network \cite{hinton2015distilling} (KD) & \multicolumn{2}{c}{$\tau=32, \lambda=1$} \\ Fitnets: Hints for thin deep nets \cite{romero2014fitnets} (Hint) & \multicolumn{2}{c}{$\beta=100$} \\ Attention Transfer (AT) \cite{zagoruyko2016paying} & $\beta=10^3$ & $\beta=4\times10^3$ \\ Similarity-Preserving Knowledge Distillation (SP) \cite{tung2019similarity} & $\beta=750$ & $\beta=90$ \\ Correlation Congruence (CC) \cite{peng2019correlation} & \multicolumn{2}{c}{$\beta=0.5\times10^{-2}$} \\ Variational information distillation & $\beta = 0.01$ & $\beta=0.25$ \\ \quad for knowledge transfer (VID) \cite{ahn2019variational} & & \\ Learning deep representations & \multicolumn{2}{c}{$\beta=48\times10^4$} \\ \quad with probabilistic knowledge transfer (PKT) \cite{passalis2018learning} & & \\ Like what you like: & $\beta=12.5$ & $\beta=200$ \\ \quad Knowledge distill via neuron select. transfer (NST) \cite{huang2017like} & & \\ Contrastive Representation Distillation (CRD) \cite{tian2019contrastive} & $\tau=0.2, \beta=0.5$ & $\tau=0.05, \beta=1$\\ \bottomrule \end{tabular} \end{table} \section{Additional experiments with ImageNet} \label{app:imagenet} We did low-fidelity evaluations of all the architectures from the MobileNetV2 search space by training on a \nicefrac{1}{27} part of the ImageNet dataset. For low-fidelity evaluations, we trained for 100 epochs, and the other hyperparameters were the same as for the low-fidelity evaluations on CIFAR-100. For high-fidelity evaluations, we used the following hyperparameters: 150 epochs with momentum 0.9, cosine annealing learning rate, initial learning rate 0.05, weight decay $4\times10^{-5}$, batch size 128; the augmentation included random cropping and horizontal flipping.
Additionally, we did high-fidelity and low-fidelity evaluations of 10 random architectures and calculated the Kendall-tau correlation between high- and low-fidelity evaluations. For the conventional logistic loss, it was 0.42, while for the original KD loss it was 0.73. The increase in correlation confirms our conclusions.
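The Kendall-tau rank correlation used here and in the main experiments can be computed directly from its definition (an $O(n^2)$ sketch without tie handling, sufficient for illustration):

```python
def kendall_tau(xs, ys):
    """Kendall rank correlation: (concordant - discordant) pairs,
    normalized by the total number of pairs (no tie handling)."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]))  # -> 1.0
print(kendall_tau([1, 2, 3, 4], [1, 2, 4, 3]))  # approximately 0.67
```

Unlike Pearson correlation, Kendall tau depends only on the ordering of the scores, which matches how NAS uses low-fidelity evaluations: to rank architectures, not to predict their exact accuracy.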
\section{Introduction} Pretraining objectives of large language models can be roughly divided into two categories. First, vanilla next token prediction \cite{brown2020language} aims to learn the distribution of the next token in a sequence given the context to the left. Second, the masked language modeling (MLM) objective \cite{devlin2018bert, raffel2020exploring}, which masks out a portion of the tokens in a sequence and asks the model to predict them, aims to learn the distribution of one or more tokens given bidirectional context. While the major breakthrough, GPT-3 \cite{brown2020language}, was demonstrated using vanilla next token prediction, recent work \cite{tay2022transcending, zeng2022glm, bavarian2022efficient} has hinted that incorporating the masked language modeling objective may be highly beneficial. In addition, \cite{tay2022transcending} has demonstrated that such bidirectional conditionals provide strong infilling capabilities. One may notice that, unlike the unidirectional conditional distributions that vanilla next token prediction learns, the bidirectional conditionals that MLMs learn are overly abundant in terms of representing a coherent joint distribution. Therefore, they are not guaranteed to be self-consistent (see Chapter \ref{sec:why}). A very simple example of such inconsistencies is shown in Figure \ref{fig:t5_example}. In this example, we obtain the bidirectional conditional distributions that the T5 model learned using two input masked sequences. The two similar sequences are designed with a small difference in order to examine whether the resulting conditionals satisfy a basic law of probability (i.e., are consistent). The results clearly show otherwise. We design experiments to quantify such inconsistencies in Chapter \ref{Chapter-T5}.
One interesting line of research in the literature has focused on whether and how the bidirectional conditionals that BERT-style MLMs provide can be used to construct the joint probability of a sequence in a principled manner \cite{goyal2021exposing, ghazvininejad2019mask, wang2019bert}, just like vanilla next token prediction models. However, the numerous papers on this topic have overlooked the concern of inconsistencies. \cite{yamakoshi2022probing} stated that ``any deviations (supposedly) tend to be negligible with large datasets''. The experiments shown in Chapter \ref{Chapter-BERT} demonstrate that this is not the case at all. We thus posit that addressing the consistency issue should be treated as the first step in modeling the joint distribution with BERT-style MLMs. \begin{figure*}[h] \centering \includegraphics[width=1.0\textwidth]{figures/drawing_t5_example.pdf} \caption{A simple bigram example that exposes the inconsistencies in the T5 model. The conditional probabilities that the model learned (quoted from t5-11b fed with the shown masked sequences) contradict each other greatly. Not only are the ratios unbalanced, but the model also reverses its own preference between the two bigrams.}\label{fig:t5_example} \end{figure*} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figures/drawing_roberta_example.pdf} \caption{An example of inconsistencies in the BERT-style MLM. Each ``inferred'' value refers to the probability given by the MLM (RoBERTa-large in this figure). Each ``solved'' value is obtained by passing the other 7 ``inferred'' values to the equation in the red square. We see that the difference between each inferred and solved value is significant, and the solved value may even be larger than 1.}\label{fig:roberta_example} \end{figure*} \section{Why inconsistencies can occur in MLMs}\label{sec:why} For a set of conditional distributions to be self-consistent, they must be derivable from a single coherent joint distribution.
One essential reason for the inconsistencies to occur among the conditionals provided by a trained MLM is that the number of conditionals it can calculate far \textit{exceeds} the number of degrees of freedom of a joint distribution. Consider a sequence of length $L$ with vocabulary $V$; the joint distribution of the tokens in such a sequence is defined by $|V|^L$ probabilities that sum to 1. Therefore, the number of degrees of freedom ($D$) of such a joint distribution is given by: \begin{eqnarray} D_{joint}=|V|^L - 1. \label{eq1} \end{eqnarray} Vanilla next token prediction models or MLMs essentially learn conditionals that predict some tokens in the sequence given others. Such conditional probabilities and probabilities from the joint distribution can be linearly derived from each other. Therefore, each free conditional that the language model is capable of specifying provides an additional constraint on the joint distribution. One can easily verify that a vanilla next token prediction based language model provides exactly $|V|^L - 1$ free conditionals\footnote{A single softmax operation over $V$ essentially gives $|V| - 1$ free conditionals. Here we call conditionals free when they can be assigned any values decided by an underlying neural network.}, just enough to determine the joint distribution. Therefore, a vanilla next token prediction model (no matter how it is trained, or even untrained) would never suffer from self-inconsistencies. MLMs, which can provide distributions of masked tokens given bidirectional context, can specify far more free conditionals. Even for the simplest case, where the MLM predicts the distribution of only 1 (masked) token given $L-1$ other (unmasked) tokens in the sequence, the total number of free conditionals ($N$) is \begin{eqnarray} N_{mlm}(1)= L \times (|V|^L - |V|^{L-1}). \label{eq2} \end{eqnarray} Already $N_{mlm}(1)$ is far larger than $D_{joint}$. We leave the discussion of $N_{mlm}(k)$ for later work.
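The counting argument in Equations (\ref{eq1})-(\ref{eq2}) is easy to verify numerically: already for tiny vocabularies and sequence lengths, an MLM predicting a single masked token can specify more free conditionals than the joint distribution has degrees of freedom.

```python
def dof_joint(V, L):
    """Degrees of freedom of the joint distribution over length-L sequences:
    |V|^L probabilities minus the sum-to-one constraint."""
    return V ** L - 1

def n_mlm_1(V, L):
    """Free conditionals of an MLM predicting 1 masked token given L-1 others:
    L positions x |V|^(L-1) contexts x (|V| - 1) free probabilities each,
    i.e. L * (|V|^L - |V|^(L-1))."""
    return L * (V ** L - V ** (L - 1))

for V, L in [(2, 2), (5, 3), (10, 4)]:
    print(V, L, dof_joint(V, L), n_mlm_1(V, L), n_mlm_1(V, L) > dof_joint(V, L))
```

For example, with $|V|=5$ and $L=3$, the joint has 124 degrees of freedom while the MLM can specify 300 free conditionals, so the conditionals are over-determined and consistency is not guaranteed.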
This gap leaves room for inconsistencies among the conditionals an MLM provides. We explain our strategies and quantification methods for diagnosing T5-style and BERT-style MLMs in the next two sections. \section{Diagnosing T5-style MLMs} \label{Chapter-T5} T5-style MLMs are capable of modeling the distribution of segments of variable length in a given bidirectional context. Here we use the simple bigram scenario to expose the inconsistencies that exist among such distributions. Consider two bigrams $x_1x_{21}$ and $x_1x_{22}$ that share the same token $x_1$ in the first position; the conditional distributions concerning such two bigrams should satisfy \begin{eqnarray} \dfrac{p(x_{21}|x_1)}{p(x_{22}|x_1)} = \dfrac{p(x_1x_{21})}{p(x_1x_{22})} \label{eq3} \end{eqnarray} The left hand side can be obtained by masking only the second token, leaving $x_1$ in the context, while the right hand side can be obtained by masking the whole bigram. For the example in Figure \ref{fig:t5_example}, ``chicken'' corresponds to $x_1$; ``salad'' and ``breast'' correspond to $x_{21}$ and $x_{22}$. We automatically build such a dataset of bigram pairs in a given context by running BART \cite{lewis2019bart} on a portion of the C4 dataset \cite{raffel2020exploring} to generate another plausible bigram alternative to an existing one. We then use the two sequences to test T5's inconsistencies regarding Equation \ref{eq3} \footnote{We focus on plausible bigrams in this draft because they are most relevant in practice, but Equation \ref{eq3} should hold for all bigrams in all sentences in all corpora in a self-consistent MLM.}. We can use the relative difference ($d_r$) between the left and right hand sides of Equation \ref{eq3} to quantify the inconsistency. \begin{eqnarray} d_r = \dfrac{|\texttt{lhs}_{(\ref{eq3})}-\texttt{rhs}_{(\ref{eq3})}|}{\texttt{lhs}_{(\ref{eq3})}} \label{eq4} \end{eqnarray} $d_r$ is expected to be 0 for a self-consistent MLM.
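The consistency check of Equations (\ref{eq3})-(\ref{eq4}) reduces to comparing two ratios produced by the same model; a minimal sketch (the probability values below are illustrative, not model outputs):

```python
def bigram_consistency(p21_given_1, p22_given_1, p_1_21, p_1_22):
    """Compare the ratio of single-token conditionals (lhs of Eq. 3)
    with the ratio of whole-bigram probabilities (rhs of Eq. 3)."""
    lhs = p21_given_1 / p22_given_1
    rhs = p_1_21 / p_1_22
    d_r = abs(lhs - rhs) / lhs          # relative difference, Eq. (4)
    disagree = (lhs > 1) != (rhs > 1)   # model flips its bigram preference
    return d_r, disagree

# Self-consistent conditionals: d_r = 0 and no disagreement.
print(bigram_consistency(0.6, 0.3, 0.02, 0.01))  # -> (0.0, False)
# Inconsistent: the ratios differ and the preferred bigram flips.
print(bigram_consistency(0.6, 0.3, 0.01, 0.02))  # d_r = 0.75, preference flips
```

The `disagree` flag corresponds to the "disagreement on comparison" statistic reported for the T5 family.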
Table~\ref{tab:T5_statistics} shows that $d_r$ is typically very large for the T5 family, although scaling up the model has a marked effect on reducing it. Another way to quantify the inconsistency regarding the two bigrams is to count how often a severe case happens where the MLM disagrees with itself on which bigram it prefers. That is, sometimes $\texttt{lhs}_{(\ref{eq3})} > 1$ and $\texttt{rhs}_{(\ref{eq3})} < 1$, or $\texttt{lhs}_{(\ref{eq3})} < 1$ and $\texttt{rhs}_{(\ref{eq3})} > 1$. Figure \ref{fig:t5_example} shows such a case, where t5-11b prefers ``chicken salad'' over ``chicken breast'' when considering the conditionals provided in $\texttt{rhs}_{(\ref{eq3})}$, yet its preference flips when considering $\texttt{lhs}_{(\ref{eq3})}$. Table \ref{tab:T5_statistics} shows that disagreement on comparison happens with considerable frequency, but scaling up models helps reduce it. \begin{table*}[h] \centering \begin{tabular}{|c|c|c|c|c|} \hline Metric & T5-base & T5-large & T5-3b & T5-11b \\ \hline Relative difference ($d_r$, median, \%) & 47.5 & 45.8 & 44.7 & 42.0\\ \hline Disagreement on comparison (\%) & 9.64 & 8.85 & 7.53 & 6.54\\ \hline \end{tabular} \caption{Inconsistencies in the T5 model tested on 19399 pairs of bigrams. We show the median value for the relative difference as it is resilient to outliers.} \label{tab:T5_statistics} \end{table*} \section{Diagnosing BERT-style MLMs} \label{Chapter-BERT} Ever since the success of BERT \cite{devlin2018bert}, there has been research effort \cite{goyal2021exposing, wang2019bert, yamakoshi2022probing} on sampling sequences from it by modeling its implicitly specified joint distribution one way or another. For example, \cite{goyal2021exposing} views it as an energy-based model defined using the bidirectional conditionals of the masked tokens. Such research effort is based on the intuition that bidirectional conditionals could be more robust than auto-regressive (unidirectional) conditionals \cite{goyal2021thesis}.
This line of research operates based on the assumption that the overly abundant bidirectional conditionals that BERT-style MLMs provide are self-consistent. \cite{yamakoshi2022probing}, based on \cite{heckerman2000dependency, neville2007relational}, stated that ``any deviations (supposedly) tend to be negligible''. We demonstrate in this section that this is not the case at all: considerable inconsistencies exist among the bidirectional conditionals that a trained BERT-style model provides. Figure \ref{fig:roberta_example} demonstrates such an example. Again we use bigrams as the simplest example to expose the inconsistencies. Because BERT-style MLMs cannot directly model the distribution of multiple tokens together (local joint distribution), we consider 4 bigrams this time: $x_{11}x_{21}$, $x_{11}x_{22}$, $x_{12}x_{21}$ and $x_{12}x_{22}$. $x_{11}$ and $x_{12}$ are two possible tokens that the first position can take; $x_{21}$ and $x_{22}$, the second. One can easily verify \footnote{Hint: convert each term to local joint distributions.} that the 8 conditional distributions concerning such four bigrams should theoretically satisfy \begin{equation}\label{eq5} \begin{aligned} \dfrac{p(x_{21}|x_{11})}{p(x_{22}|x_{11})} \times \dfrac{p(x_{11}|x_{22})}{p(x_{12}|x_{22})} = \\ \dfrac{p(x_{11}|x_{21})}{p(x_{12}|x_{21})} \times \dfrac{p(x_{21}|x_{12})}{p(x_{22}|x_{12})} \end{aligned} \end{equation} \begin{table*}[h] \centering \begin{tabular}{|c|c|c|} \hline Metric & Roberta-base & Roberta-large \\ \hline log-probability difference & 0.836 & 0.792 \\ \hline \end{tabular} \caption{Difference of log-probabilities between inferred and solved conditionals. The difference would be 0 for self-consistent MLMs.
Roughly a 0.8 difference means that one is 120\% larger than the other.} \label{tab:Roberta_statistics} \end{table*} One way to test the inconsistencies among the 8 conditionals is to try to solve one using the other 7 and compare the solved conditional with the original (model-inferred) one. We show the solved conditionals in the example in Figure \ref{fig:roberta_example}. It clearly demonstrates that the probabilities given by a BERT-style MLM can be seriously inconsistent with each other. We build a testing dataset with 4 such plausible bigrams for each context and quantify inconsistencies using the difference of log-probabilities: \begin{equation}\label{eq6} \begin{aligned} |\log p_{solved} - \log p_{inferred}| \end{aligned} \end{equation} Table \ref{tab:Roberta_statistics} shows the results. \section{Summary} This draft demonstrates and naively quantifies the inconsistencies that exist in large MLMs in the simple scenario of bigrams. Such inconsistencies originate from the fact that the number of bidirectional conditionals MLMs can learn far exceeds what is needed for constructing the joint distribution. Given the recent evidence that MLM-based pretraining might be a powerful paradigm, we think that resolving its consistency issue could be a necessary step for future work. \section*{Acknowledgements} We would like to thank Fuzhao Xue for the useful discussions.
\section{Introduction} Leakage in water supply systems causes wastage of water and energy resources, and poses a public health risk due to water pollution. Leaks may occur, for example, due to aging pipelines, corrosion, and excessive steady and/or unsteady pressures in the system \cite{ghazali2012comparative}. Thus, an effective leakage detection method is essential. Most related research in this area has focused on the problem of leak estimation, for which the objective is usually to estimate the location of the leak, assuming that a leak actually exists in the pipeline. For this purpose, various transient-based leak location estimation methods have been developed (e.g., \cite{ghazali2012comparative,vitkovsky2000leak,wang2018pipeline,wang2018identification}). In this work we address the related (but different) problem of leak detection, by developing suitable statistical hypothesis testing procedures. Although hypothesis testing is a natural detection approach, to our knowledge such tests have yet to be developed for leak detection in water pipeline systems. Generally speaking, we develop data-driven approaches to decide between the presence or absence of a leak in the pipeline, and for the former case, return estimates of the leak parameters. The measured data corresponds to primary and secondary measurements of head differences at different frequencies, taken from multiple sensors deployed at different locations along the water pipeline. Our tests are developed based on a linearized transient wave model in the frequency domain, as proposed in \cite{wang2018pipeline,wang2018identification}, which has been supported by experimental data \cite{wang2018experiment}.
By applying hypothesis testing theory to this model, we find that, from a technical point of view, the problem boils down to a binary classification problem that discriminates between a ``null hypothesis'', corresponding to zero-mean complex Gaussian noise with non-trivial correlation, and an ``alternative hypothesis'', corresponding to a structured (deterministic) signal embedded within the Gaussian noise. For the latter hypothesis, the deterministic signal is a function of the leak parameters, including size and location. Since the signal and noise model parameters (i.e., noise covariance, leak location and size) are all unknown, we develop test procedures based on the generalized likelihood ratio test (GLRT) \cite{kelly1986adaptive}, which constructs a likelihood ratio based on the two hypotheses, and replaces the unknown parameters in the likelihood functions by appropriate estimates. We first consider a traditional strategy which replaces the unknown parameters by their maximum likelihood estimates (MLEs), and develop a suitable test statistic. This statistic exploits the known structure of the leak signals (under the alternative hypothesis), and is proven to have the desirable property of being a constant false alarm rate (CFAR) statistic; meaning that a detection threshold can be specified which achieves a fixed false alarm probability, regardless of the model parameters. Through simulations, we demonstrate the good performance of the proposed method in detecting leaks, and show improvement over methods that have been developed for related models in the context of radar detection. This approach is particularly suited to ``data rich'' scenarios, where the MLEs provide accurate parameter estimates. One limitation of the proposed approach is that in high-dimensional settings, when the number of frequency domain measurements and/or the number of sensors is large, the number of parameters to estimate is also large.
This is particularly the case for the noise covariance matrix, and it is well known that under high-dimensional settings the MLE -- corresponding to the conventional sample covariance matrix (SCM) estimate -- is particularly inaccurate. This, in turn, can degrade the performance of the proposed leak detection algorithm. To deal with this potential problem, we propose a second detection algorithm that seeks to design a robust covariance estimation solution which is suitably optimized for the task of leak detection, under high-dimensional settings. The approach is to replace the SCM with a regularized version (termed RSCM) in the GLRT statistic, and to optimize the regularization parameter to maximize the leak detection accuracy subject to a prescribed false alarm criterion. The RSCM is a simple but effective covariance matrix estimator to deal with problems of sample deficiency and high dimensionality by pulling the spread-out sample eigenvalues toward their grand mean \cite{ledoit2004well}. It is used in many fields, including mathematical finance and adaptive array processing \cite{rubio2012performance,abramovich1981controlled,abramovich2007modified,mestre2006finite,carlson1988covariance}. Extensions have also been proposed which replace the SCM with a robust covariance matrix estimator (such as Tyler's estimator) to provide resilience against outliers \cite{couillet2014large,yang2015robust,auguin2018large}. The main challenge is generally to develop data-driven methods to optimize the regularization parameter, which is typically application-dependent. In a similar spirit to previous work (e.g., \cite{ledoit2004well,mestre2006finite,rubio2012performance,ma2003efficient,kammoun2018optimal,couillet2016second}), our solution draws from recent results in the area of large dimensional random matrix theory.
More specifically, it leverages technical results from \cite{couillet2016second,kammoun2018optimal}, which considered a related detection problem, but for a different model than the one in this paper. The basic idea of the approach is to first characterize the asymptotic behavior of the false alarm and detection probabilities under certain double-limit asymptotics, which we define, and subsequently to provide consistent estimators of these probabilities which are completely data-driven. Based on this, we can then optimize the regularization parameter in an online fashion, which maximizes the (estimated) detection probability while maintaining a prescribed (estimated) false alarm probability. The performance of this second proposed leak detection algorithm is demonstrated through simulations, and shown to outperform the first proposed algorithm, particularly under high-dimensional model settings, at the expense of increased complexity. \section{System model}\label{sec:model} As shown in Fig.\,\ref{fig:PipelineSystemConfigH1}, we consider a reservoir-pipe-valve system where the pipe of length $l$ meters is bounded by $p_{\rm U}=0$ and $p_{\rm D}=l$. A total of $M$ pressure sensors deployed near the downstream node are used to collect pressure head oscillations\footnote{The pressure head (in meters) relates the pressure of a fluid to the height of a column of that fluid having an equivalent static pressure at its base. The head is defined as $h=p/(\rho g)$ where $p$ is the pressure (in Pascals), $g$ denotes gravitational acceleration, and $\rho$ is the density of the fluid. For example, 50 m of head in a pipe implies that if that pipe bursts, the height of the resulting water jet would be 50 m.} for leak identification. The locations of the $M$ sensors are $p_{\rm U}<x_1<x_2<\ldots<x_M<p_{\rm D}$. We denote the leak size and the leak location by $s$ and $\phi$, respectively.
\begin{figure}[!htb] \begin{center} \includegraphics[width=1\linewidth]{Model.pdf} \caption{Pipeline configuration. Under hypothesis $H_0$, there is no leak in the water pipe, while under hypothesis $H_1$, a leak of size $s$ is present at location $\phi$. } \label{fig:PipelineSystemConfigH1} \end{center} \vspace{-0.3cm} \end{figure} By rapidly closing and/or opening the valve at the downstream end of the pipe, the sensors measure the pressure head oscillations at different frequencies, which are affected by a leak in the pipe. Let $h_m(w_j)$ denote the head oscillation at frequency $w_j$ and location $x_m$, and $h^o_m(w_j)$ the computed head oscillation with no leak, where $j=1,\ldots,J$ and $m=1,\ldots,M$. We define the head difference at frequency $w_j$ observed by the sensor at $x_m$ as $z_m(w_j)=h_m(w_j)-h^o_m(w_j)$. If the pipe is intact (with no leak), $z_m(w_j)=h_m(w_j)-h^o_m(w_j)=n_m(w_j)$, where $n_m(w_j)$ is the measurement noise, which can be measurement error or environment noise induced by turbulence, traffic, construction, etc. Otherwise, $z_m(w_j)=sg_m(\phi, w_j)+n_m(w_j)$, in which $sg_m(\phi, w_j)$ is the leak component, which depends on the leak size $s$ and the leak location $\phi$. The detailed formulas of $h^o_m(w_j)$ and $g_m(\phi, w_j)$ are provided in Appendix \ref{appx:model}. Assembling $z_m(w_j)$ into a vector ${\bf z}_0\in\mathbb{C}^N$ of length $N=J\times M$, we have \begin{align}\nonumber {\bf z}_0={\rm vec}[z_m(w_j), j=1,\ldots,J, m=1,\ldots,M]. \end{align} We denote the hypotheses that a leak is present or absent by $H_1$ and $H_0$, respectively. Then the problem of detecting a leak in a noise-contaminated water pipe can be posed in terms of the following binary hypothesis test: \begin{align}\label{eq:H1H0} \left\{ \begin{array}{l} H_0: {\bf z}_0={\bf n}_0, \\ H_1: {\bf z}_0=s{\bf g}(\phi)+{\bf n}_0 \end{array} \right.
\end{align} where the noise vector ${\bf n}_0={\rm vec}[n_m(w_j)]$ is assumed to be Gaussian distributed\footnote{The Gaussian noise assumption in water pipes with flow is justified by experimental investigations in laboratory pipe systems \cite{dubey2019measurement}.} with zero mean and covariance matrix ${\bf C}_N$, and \begin{align}\nonumber {\bf g}(\phi)={\rm vec}[g_m(\phi,w_j), j=1,\ldots,J, m=1,\ldots,M]. \end{align} We assume that $K$ independent samples of noise-only data are available, which are referred to as secondary data: \begin{align}\nonumber {\bf z}_k={\bf n}_k, ~{\bf n}_k\sim CN(\textbf{0}, {\bf C}_N), ~k=1,\ldots,K. \end{align} These may be obtained, for example, from steady-state pressure measurements when the pipe is newly built. Thus, the leak detection problem can be recast as the following hypothesis test: \begin{align}\nonumber \left\{ \begin{array}{l} H_0: {\bf z}_0={\bf n}_0, \quad\quad\quad\quad\,\,\;{\bf z}_k={\bf n}_k, ~~k=1,\ldots,K\\ H_1: {\bf z}_0=s{\bf g}(\phi)+{\bf n}_0, \quad{\bf z}_k={\bf n}_k, ~~k=1,\ldots,K. \end{array} \right. \end{align} The joint probability density function (PDF) of the input data under $H_0$ is \begin{align}\nonumber &f_0({\bf z}_0,\ldots,{\bf z}_K|H_0)=\\ \label{eq:pdf_f0} &\frac{1}{(\pi^N\det({\bf C}_N))^{K+1}}{\rm exp}\left[-\sum_{k=1}^K{\bf z}_k^H{\bf C}_N^{-1}{\bf z}_k\right]{\rm exp}\left[-{\bf z}_0^H{\bf C}_N^{-1}{\bf z}_0\right] \end{align} where $\det({\bf C}_N)$ is the matrix determinant of ${\bf C}_N$. Similarly, the joint PDF of the input data under $H_1$ is \begin{align} \nonumber &f_1({\bf z}_0,\ldots,{\bf z}_K|H_1)=\\ \nonumber &\frac{1}{(\pi^N\det({\bf C}_N))^{K+1}}{\rm exp}\left[-\sum_{k=1}^K{\bf z}_k^H{\bf C}_N^{-1}{\bf z}_k\right]\\ \label{eq:pdf_f1} &\times{\rm exp}\!\left[-({\bf z}_0-s{\bf g}(\phi))^H{\bf C}_N^{-1}({\bf z}_0-s{\bf g}(\phi))\right].
\end{align} The most natural approach to detect the presence of a leak is the likelihood ratio (LR) test, which computes the LR or its logarithm and compares it with a certain threshold $\alpha$ \cite{lehmann2006testing}. Specifically, the LR test is \begin{align}\nonumber L=\frac{f_1({\bf z}_0,\ldots,{\bf z}_K|H_1)}{f_0({\bf z}_0,\ldots,{\bf z}_K|H_0)}\mathop{\gtrless}^{H_1}_{H_0}\alpha. \end{align} Namely, if $L>\alpha$, we decide $H_1$, and if $L\leq\alpha$, we decide $H_0$. The LR test is known to maximize the detection probability $P_{\rm D}$ at a certain false alarm probability $P_{\rm FA}$. The $P_{\rm D}$ is defined as the probability that the detector correctly decides hypothesis $H_1$: \begin{align}\nonumber P_{\rm D}=\mathbb{P}[L>\alpha|H_1], \end{align} and the $P_{\rm FA}$ is defined as the probability that the detector decides hypothesis $H_1$ when the true hypothesis is $H_0$: \begin{align} \label{eq:Pfa} P_{\rm FA}=\mathbb{P}[L>\alpha|H_0]. \end{align} For leak detection in a water pipeline system, we usually do not know the parameters $s$, $\phi$ and ${\bf C}_N$ in the PDFs $f_0({\bf z}_0,\ldots,{\bf z}_K|H_0)$ and $f_1({\bf z}_0,\ldots,{\bf z}_K|H_1)$. In this context, the LR test cannot be employed. The GLRT, which employs the MLEs of the unknown parameters, is a suitable solution. \section{Generalized likelihood ratio test (GLRT)} In this section, we derive a GLRT-based leak detection approach and demonstrate its desirable CFAR property. The performance of our proposed approach is also assessed by numerical simulations. \subsection{Derivation of GLRT} We denote the leak component in the data model as ${\bf p}=s{\bf g}(\phi)$ and assume that $K\geq N$. By estimating $s$ and $\phi$, we obtain an estimate of ${\bf p}$. The considered GLRT is \begin{align}\label{eq:test} L=\frac{\max_{s, \phi}\max_{{\bf C}_N}f_1({\bf z}_0,\ldots,{\bf z}_K|H_1)}{\max_{{\bf C}_N}f_0({\bf z}_0,\ldots,{\bf z}_K|H_0)}\mathop{\gtrless}^{H_1}_{H_0}\alpha.
\end{align} The MLEs of ${\bf C}_N$ under $H_0$ and $H_1$ are the corresponding SCMs, which are well known \cite{goodman1963statistical}. Namely, the MLE of ${\bf C}_N$ under $H_0$ is $\frac{1}{K+1}\sum_{k=0}^K{\bf z}_k{\bf z}_k^H$ and the MLE of ${\bf C}_N$ under $H_1$ is \\ $\frac{1}{K+1}\left[({\bf z}_0-s{\bf g}(\phi))({\bf z}_0-s{\bf g}(\phi))^H+\sum_{k=1}^K{\bf z}_k{\bf z}_k^H\right]$. Denote ${\bf S}_N=\sum_{k=1}^K{\bf z}_k{\bf z}_k^H$. Following derivation steps similar to those in \cite{kelly1986adaptive}, we obtain the MLE of $s$: \begin{align} \label{eq:sLR} \hat{s}=\frac{{\rm Re}\{{\bf g}^H(\phi){\bf S}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\phi){\bf S}_N^{-1}{\bf g}(\phi)}, \end{align} and the MLE of $\phi$ is \begin{align}\label{eq:gR} \hat{\phi}=\argmax_{\phi\in[p_{\rm U}, p_{\rm D}]}\frac{{\rm Re}^2\{{\bf g}^H(\phi){\bf S}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\phi){\bf S}_N^{-1}{\bf g}(\phi)}. \end{align} The statistic in (\ref{eq:gR}) can be seen as a generalization of the leak location estimator presented in \cite{wang2018pipeline,wang2018identification}, which considered the problem of leak estimation under white Gaussian noise. Because of the complicated structure of ${\bf g}(\phi)$, it is not easy to obtain an explicit formula for $\hat{\phi}$ from (\ref{eq:gR}), unlike for the other parameters. Thus, we obtain the MLE of $\phi$ through a grid search over $[p_{\rm U}, p_{\rm D}]$ that maximizes the objective in (\ref{eq:gR}). By plugging in the MLEs of $s$, $\phi$ and ${\bf C}_N$, the test (\ref{eq:test}) becomes \begin{align}\label{eq:test_complex} \frac{1+{\bf z}_0^H{\bf S}_N^{-1}{\bf z}_0}{1+{\bf z}_0^H{\bf S}_N^{-1}{\bf z}_0-\frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}){\bf S}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\hat{\phi}){\bf S}_N^{-1}{\bf g}(\hat{\phi})}}\mathop{\gtrless}^{H_1}_{H_0}\alpha. \end{align} Denote $\alpha_1=1-\frac{1}{\alpha}$.
The hypothesis test (\ref{eq:test_complex}) can be further simplified as \begin{align}\label{eq:propose} \Delta=\frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}){\bf S}_N^{-1}{\bf z}_0\}}{(1+{\bf z}_0^H{\bf S}_N^{-1}{\bf z}_0){\bf g}^H(\hat{\phi}){\bf S}_N^{-1}{\bf g}(\hat{\phi})}\mathop{\gtrless}^{H_1}_{H_0}\alpha_1. \end{align} Similar to the GLRT in \cite{kelly1986adaptive}, the distribution of the test statistic $\Delta$ under $H_0$ is independent of ${\bf C}_N$ and ${\bf g}(\hat{\phi})$. Hence, the cumulative distribution function (CDF) of $\Delta$ under $H_0$, denoted as $F_{\Delta}$, remains the same for any covariance matrix ${\bf C}_N$ and nonzero vector ${\bf g}(\hat{\phi})$. Consequently, although a closed-form expression of $F_{\Delta}$ is difficult to derive, it is sufficient to apply Monte-Carlo simulations to obtain the empirical CDF $F_{\Delta}$ based on simulated data by setting ${\bf g}(\hat{\phi})=[1,0,\ldots,0]^T$, ${\bf C}_N={\bf I}_N$ and generating ${\bf z}_i$, $i=0,\ldots, K$ as standard complex Gaussian random vectors. The threshold $\alpha_1$ for a desired $P_{\rm FA}$ can then be determined by computing $\alpha_1=F_\Delta^{-1}(1-P_{\rm FA})$. Although the GLRT in (\ref{eq:test}) is similar to that in \cite{kelly1986adaptive}, we should point out the main differences between the two GLRTs. First, in (\ref{eq:test}) the leak size $s$ is constrained to be real, whereas in \cite{kelly1986adaptive} $s$ is complex, which leads to the different MLE expression for $s$ in (\ref{eq:sLR}). Additionally, while in \cite{kelly1986adaptive} the signal vector ${\bf g}$ is known, in our case ${\bf g}$ is parameterized by the unknown leak location $\phi$, which is estimated in (\ref{eq:gR}). The detection procedure is summarized in {\bf Algorithm\;\ref{al:alg1}}. As the detection test (\ref{eq:propose}) uses the SCM as the estimate of ${\bf C}_N$, we refer to this leak detection (LD) scheme as LD-SCM.
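As a concrete illustration of the Monte-Carlo threshold calibration and of the LD-SCM test, the following Python sketch implements the procedure for a hypothetical grid of candidate signature vectors ${\bf g}(\phi)$ supplied as columns of a matrix \texttt{G}; the actual signature model of Appendix \ref{appx:model} is not reproduced here.

```python
import numpy as np

def cgauss(rng, size, n):
    """size i.i.d. samples of the standard circular complex Gaussian CN(0, I_n)."""
    return (rng.standard_normal((size, n)) + 1j * rng.standard_normal((size, n))) / np.sqrt(2)

def delta_stat(z0, Sinv, g):
    """Test statistic Delta of (eq:propose) for a fixed signature g."""
    num = np.real(g.conj() @ Sinv @ z0) ** 2
    den = (1.0 + np.real(z0.conj() @ Sinv @ z0)) * np.real(g.conj() @ Sinv @ g)
    return num / den

def calibrate_threshold(N, K, Pfa, runs=2000, seed=0):
    """CFAR calibration: simulate Delta under H0 with C_N = I and g = e_1,
    and return the empirical (1 - Pfa) quantile as the threshold alpha_1."""
    rng = np.random.default_rng(seed)
    e1 = np.zeros(N, dtype=complex); e1[0] = 1.0
    stats = np.empty(runs)
    for r in range(runs):
        Z = cgauss(rng, K, N)                 # secondary data, one row per z_k
        S = Z.T @ Z.conj()                    # S_N = sum_k z_k z_k^H
        z0 = cgauss(rng, 1, N)[0]
        stats[r] = delta_stat(z0, np.linalg.inv(S), e1)
    return np.quantile(stats, 1.0 - Pfa)

def ld_scm(z0, Z, G, alpha1):
    """LD-SCM: grid search over candidate signatures (columns of G), then test.
    The factor (1 + z0^H S^-1 z0) does not depend on phi, so maximizing Delta
    over the grid matches the argmax in the phi-estimation step."""
    Sinv = np.linalg.inv(Z.T @ Z.conj())
    deltas = np.array([delta_stat(z0, Sinv, G[:, i]) for i in range(G.shape[1])])
    i_hat = int(np.argmax(deltas))
    g = G[:, i_hat]
    s_hat = np.real(g.conj() @ Sinv @ z0) / np.real(g.conj() @ Sinv @ g)
    return bool(deltas[i_hat] > alpha1), i_hat, s_hat
```

Because the distribution of $\Delta$ under $H_0$ does not depend on ${\bf C}_N$, the threshold calibrated once from white data applies for any noise covariance.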
\begin{algorithm}[h] {\small{ \caption{LD-SCM} \label{al:alg1} \begin{enumerate} \smallskip \item Determine the threshold $\alpha_1$ corresponding to the prescribed $P_{\rm FA}$ and the empirical CDF $F_{\Delta}$: \begin{align}\nonumber \alpha_1=F_\Delta^{-1}(1-P_{\rm FA}). \end{align} \item Find the optimal estimate of $\phi$ and thus ${\bf g}(\hat{\phi})$ by numerically solving: \begin{align}\label{eq:phi_est} \hat{\phi}=\argmax_{\phi\in[p_{\rm U}, p_{\rm D}]}\frac{{\rm Re}^2\{{\bf g}^H(\phi){\bf S}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\phi){\bf S}_N^{-1}{\bf g}(\phi)}. \end{align} \item Compute the test statistic: \begin{align}\nonumber \Delta=\frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}){\bf S}_N^{-1}{\bf z}_0\}}{(1+{\bf z}_0^H{\bf S}_N^{-1}{\bf z}_0){\bf g}^H(\hat{\phi}){\bf S}_N^{-1}{\bf g}(\hat{\phi})}. \end{align} \item Accept $H_0$ (``no leak'') if $\Delta\leq\alpha_1$; otherwise accept $H_1$ (``leak present''). \item If $H_1$ is accepted, set the estimate of $\phi$ from (\ref{eq:phi_est}) and the estimate of $s$: \begin{align}\nonumber \hat{s}=\frac{{\rm Re}\{{\bf g}^H(\hat{\phi}){\bf S}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\hat{\phi}){\bf S}_N^{-1}{\bf g}(\hat{\phi})}. \end{align} \end{enumerate} }} \end{algorithm} \subsection{Performance evaluation and comparison}\label{sec:sim_3methods} Here we demonstrate the performance of our proposed LD-SCM scheme, and compare it against alternative detection methods. The system configuration is shown in Fig.\;\ref{fig:PipelineSystemConfigH1}. A water pipe in a horizontal plane with length $l=2000$ m and diameter $D=0.5$ m is considered. The locations of the upstream and downstream reservoirs are assumed to be $p_{\rm U}=0$ m and $p_{\rm D}=2000$ m, respectively. Two pressure sensors are situated at $x_1=1800$ m and $x_2=2000$ m. The wave speed is $a=1000$ m/s. The utilized frequencies are $w=jw_{th}$, $j=1,2,\ldots,32$, where $w_{th}=a\pi/(2l)$ is the fundamental frequency (first resonant frequency). Thus $N=64$.
Under the hypothesis $H_1$, the leak location is $\phi=600$ m and the leak size is $s=1.4\times10^{-4}$ ${\rm m}^2$. Other necessary parameters required in the system model (see Appendix \ref{appx:model}) are: $f=0.02$, $e^L=0$, $Q_0=0.0153$ ${\rm m}^3/{\rm s}$, $g=9.8$ ${\rm m}/{\rm s}^2$ and $H_{0}^L=23.5$ m. In the following, we carry out Monte Carlo simulations using $10^5$ runs. We compare the performance of our proposed LD-SCM scheme against alternative detection methods. First, we consider the ``oracle'' detector with perfect knowledge of the parameters $s$, $\phi$ and ${\bf C}_N$. Although the oracle detector is unachievable in practice, it provides an upper bound on the performance of leak detection. We also compare with a classical method used in radar detection \cite{raghavan1995cfar}, which also uses the SCM as the estimate of ${\bf C}_N$ and is referred to as RD-SCM. Unlike the LD-SCM scheme, this method estimates the leak component ${\bf p}=s{\bf g}(\phi)$ as a whole; it ignores the structure of ${\bf p}$ and does not estimate $s$ and $\phi$ separately. Detailed descriptions of the oracle detector and the RD-SCM are provided in Appendix \ref{appx:benchmark}. In the simulations, we set $K=600$, $[{\bf C}_N]_{i,j}=\nu^2\,0.9^{|i-j|}$ and define the signal to noise ratio (SNR) as ${\rm SNR}=\frac{\|{\bf p}\|^2}{\nu^2}$. Fig.\;\ref{fig:SNR_Pd_3methods} shows the detection probability $P_{\rm D}$ against different SNRs under $P_{\rm FA}=10^{-3}$. Our proposed LD-SCM achieves higher $P_{\rm D}$ than the RD-SCM over the range of SNRs considered, and performs fairly close to the oracle.
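The simulation covariance and the SNR convention above translate directly into code; a small sketch (the function names are ours, the parameter values as in the text):

```python
import numpy as np
from scipy.linalg import toeplitz

def sim_covariance(N, nu2, r=0.9):
    """Exponentially correlated noise covariance: [C_N]_{ij} = nu^2 * r^{|i-j|}."""
    return nu2 * toeplitz(r ** np.arange(N))

def noise_power_for_snr(p, snr_db):
    """Invert SNR = ||p||^2 / nu^2 to obtain nu^2 for a target SNR in dB."""
    return np.linalg.norm(p) ** 2 / 10.0 ** (snr_db / 10.0)
```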
\begin{figure}[!htb] \begin{center} \subfigure[$P_{\rm D}$ against SNR with prescribed $P_{\rm FA}=10^{-3}$.]{ \includegraphics[width=0.9\linewidth]{Pd_SNR_105_600_part1_v2.pdf} \label{fig:SNR_Pd_3methods}} \subfigure[ROCs with fixed SNR = -3 dB.]{ \includegraphics[width=0.9\linewidth]{ROC_105_K600_part1_v3.pdf} \label{fig:ROC_3methods}} \caption{Performance comparison of the oracle detector, LD-SCM and RD-SCM when $N=64$, $K=600$.} \label{fig:ROC_Pd_3methods} \end{center} \vspace{-0.3cm} \end{figure} To further demonstrate the performance of the LD-SCM, we plot receiver operating characteristic (ROC) curves for the different approaches. Fig.\;\ref{fig:ROC_3methods} shows that while the oracle detector naturally performs the best, the LD-SCM uniformly outperforms the RD-SCM over the entire span of $P_{\rm FA}$. \begin{figure}[!htb] \begin{center} \includegraphics[width=0.9\linewidth]{Pd_SNR_SZ_105_v2.pdf} \vspace{-0.2cm} \caption{$P_{\rm D}$ of LD-SCM against SNR with prescribed $P_{\rm FA}=10^{-3}$, for $N=64$ and different $K$.} \label{fig:ROC_Pd_105} \end{center} \vspace{-0.3cm} \end{figure} To show the effect of the sample size $K$ of the secondary data, we further compare the leak detection performance of the LD-SCM for fixed $N=64$ and different $K$. As we see from Fig.\;\ref{fig:ROC_Pd_105}, the detection probability $P_{\rm D}$ decreases when $K$ becomes smaller. This is because the sample size $K$ is closely related to the estimation accuracy of the SCM. It is well known that the estimation error of the SCM becomes large when the sample size $K$ is small compared to the data dimension $N$ \cite{reed1974rapid,boroson1980sample,mestre2008improved}. 
This has been demonstrated rigorously using random matrix theory, which considers the setting when $K$ and $N$ are both large, and which has shown that the eigenvalues and eigenvectors of the SCM behave very differently from those of ${\bf C}_N$ \cite{marvcenko1967distribution,mestre2008asymptotic,silverstein1995strong,silverstein1995empirical}. Thus, the performance degradation of the LD-SCM is caused in part by the estimation error of the SCM. To deal with this, a more robust covariance matrix estimate may help to enhance the leak detection performance when $K$ is not substantially larger than $N$. This is the main focus of the subsequent section. \section{Leak detection with regularized sample covariance matrix} As shown in the last section, the performance of the LD-SCM degrades when the sample size $K$ does not greatly exceed the matrix dimension $N$. Since the measurements are collected through $M$ sensors at $J$ frequencies, it is possible that the data dimension $N=J\times M$ is large, compared to the sample size $K$. Thus it is desirable to design a leak detection method that yields good performance when the data dimension is high or the sample size of the secondary data is small. As the performance degradation is, to some extent, caused by the increased estimation error of the SCM, we may apply a more robust high dimensional covariance matrix estimator. A popular approach is the regularized SCM (RSCM) \cite{ledoit2004well,mestre2006finite,rubio2012performance,kammoun2018optimal}. We consider in this paper the design of an RSCM estimator, with the regularization parameter specifically optimized for the leak detection problem. We denote this second proposed leak detection scheme as LD-RSCM. It is inspired by recent works \cite{kammoun2018optimal,couillet2016second} on radar detection. 
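The effect just described is easy to check numerically: for ${\bf C}_N={\bf I}_N$ all true eigenvalues equal one, yet the SCM eigenvalues spread over approximately $[(1-\sqrt{N/K})^2,(1+\sqrt{N/K})^2]$, the Marchenko--Pastur support. A sketch:

```python
import numpy as np

def scm_eig_spread(N, K, seed):
    """Extreme eigenvalues of the SCM built from K samples of CN(0, I_N)."""
    rng = np.random.default_rng(seed)
    Z = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
    R = Z.T @ Z.conj() / K                    # SCM; the true covariance is I_N
    w = np.linalg.eigvalsh(R)
    return w.min(), w.max()

lo_big, hi_big = scm_eig_spread(64, 6400, seed=1)     # K >> N: eigenvalues near 1
lo_small, hi_small = scm_eig_spread(64, 128, seed=2)  # K = 2N: strongly spread
```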
\subsection{Derivation of LD-RSCM with known $\phi$ (under $H_1$)} \label{sec:knownPhi} We first introduce the design of the LD-RSCM assuming that the leak location $\phi\in\mathcal{R}_l$, where $\mathcal{R}_l\triangleq[p_{\rm U}, p_{\rm D}]$, is known under the hypothesis $H_1$. The problem with unknown leak location (which is the case in practice) is addressed in Section \ref{sec:estphi}, in which the estimation of $\phi$ under $H_1$ is considered. With ${\bf g}(\phi)$ known, our data model becomes similar to that in radar detection \cite{kelly1986adaptive}. From results in \cite{kelly1986adaptive,gini1997sub}, the MLE of $s$ under $H_1$ is a function of $\phi$: \begin{align}\label{eq:MLE_s} \hat{s}(\phi)=\argmax_s f_1({\bf z}_0,\ldots,{\bf z}_K|H_1)=\frac{{\rm Re}\{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi)}. \end{align} By substituting $\hat{s}(\phi)$ for $s$ in $f_1$, the logarithm of the LR test statistic becomes \begin{align}\label{L:knownC_N} L(\phi)=\ln\frac{\max_{s}f_1({\bf z}_0,\ldots,{\bf z}_K|H_1)}{f_0({\bf z}_0,\ldots,{\bf z}_K|H_0)}=\frac{{\rm Re}^2\{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi)}. \end{align} Since ${\bf C}_N$ is unknown in (\ref{L:knownC_N}), and in order to cope with a possible deficiency in samples and improve the covariance matrix estimation accuracy, we use the RSCM as the estimate of ${\bf C}_N$, which is defined as follows: \begin{align}\nonumber \hat{\bf C}_N(\rho)=(1-\rho)\frac{N}{{\rm tr}({\bf R}_N)}{\bf R}_N+\rho{\bf I}_N, \end{align} where $\rho\in(0,1]$ is the regularization parameter and ${\bf R}_N=\frac{1}{K}\sum_{k=1}^K{\bf z}_k{\bf z}_k^H$ is the SCM computed with the secondary data. The trace of ${\bf R}_N$ is normalized to match that of ${\bf I}_N$, so that the two terms are on the same scale and $\hat{\bf C}_N(\rho)$ remains sensitive to $\rho$.
By plugging the RSCM into the test statistic $L(\phi)$ in (\ref{L:knownC_N}), we obtain $L(\rho,\phi)$ as a function of the regularization parameter $\rho$ and the leak location $\phi$, and the hypothesis test becomes \begin{align}\label{eq:L_rho} L(\rho,\phi)=\frac{{\rm Re}^2\{{\bf g}^H(\phi)\hat{\bf C}_N^{-1}(\rho){\bf z}_0\}}{{\bf g}^H(\phi)\hat{\bf C}_N^{-1}(\rho){\bf g}(\phi)}\mathop{\gtrless}^{H_1}_{H_0}\alpha. \end{align} Our aim is to find the optimal $\rho$, for any $\phi$ (which would be estimated), that asymptotically maximizes the detection probability $P_{\rm D}=\mathbb{P}\left[L(\rho,\phi)>\alpha|H_1\right]$ under a pre-determined false alarm probability $P_{\rm FA}=\mathbb{P}\left[L(\rho,\phi)>\alpha|H_0\right]$. For fixed $N$ and $K$, this is not an easy task. Additionally, it is easy to see that the distribution of $L(\rho,\phi)$ in (\ref{eq:L_rho}) depends on ${\bf C}_N$; unlike the LD-SCM method, the LD-RSCM scheme does not enjoy the CFAR property. This adds to the difficulty of determining the threshold $\alpha$. Inspired by \cite{couillet2016second,kammoun2018optimal}, we resort to asymptotic tools from random matrix theory to address this problem. The approach is to first characterize the asymptotic false alarm and detection probabilities for all $\rho$ within a specified range, under the assumption that $N,K\rightarrow\infty$ with $c_N=N/K\rightarrow c$. We subsequently provide consistent estimators of the asymptotic false alarm and detection probabilities that are defined only in terms of the observed primary and secondary data. Based on this, we fix the estimated false alarm probability and optimize online over $\rho$ to maximize the estimated detection probability. Following this approach, we assume that $\limsup_N\|{\bf C}_N\|<\infty$, where $\|{\bf C}_N\|$ is the spectral norm of ${\bf C}_N$.
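The RSCM and the statistic $L(\rho,\phi)$ above transcribe directly into code; a sketch (here \texttt{g} stands for the signature vector ${\bf g}(\phi)$ at a candidate location):

```python
import numpy as np

def rscm(R, rho):
    """Regularized SCM: (1 - rho) * (N / tr R_N) * R_N + rho * I_N, rho in (0, 1]."""
    N = R.shape[0]
    return (1.0 - rho) * (N / np.trace(R).real) * R + rho * np.eye(N)

def L_stat(z0, R, g, rho):
    """Test statistic L(rho, phi) with the RSCM plugged in for C_N."""
    Cinv = np.linalg.inv(rscm(R, rho))
    return np.real(g.conj() @ Cinv @ z0) ** 2 / np.real(g.conj() @ Cinv @ g)
```

Note that the trace normalization makes ${\rm tr}\,\hat{\bf C}_N(\rho)=N$ for every $\rho$, which is exactly why the two terms stay on a common scale.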
Additionally, we make an extra assumption on the order of magnitude of $s$ with respect to $N$ to avoid obtaining trivial limiting results as $N\rightarrow\infty$. To see this, consider hypothesis $H_1$, and recall (\ref{eq:H1H0}), noting that $\|{\bf g}(\phi)\|_2 = O(\sqrt{N})$ (since ${\bf g}(\phi)$ is an $N$-dimensional vector whose elements do not depend on $N$). Then, if $s$ remains fixed as $N \to \infty$, (\ref{eq:L_rho}) implies that $L(\rho,\phi)\rightarrow\infty$, and consequently, $P_{\rm D}\rightarrow1$ for any fixed threshold $\alpha$. In order to avoid this, we assume that $s = O(\frac{1}{\sqrt{N}})$. In practice, this corresponds to considering a small leak size, which makes the detection problem even more difficult. We first observe that the structure of $L(\rho,\phi)$ in (\ref{eq:L_rho}) is similar to that of the test statistic $\hat{T}_N^{\rm RSCM}(\rho)$ described in \cite{kammoun2018optimal}, which is \begin{align}\nonumber \hat{T}_N^{\rm RSCM}(\rho)=\frac{\left|{\bf g}^H\hat{\bf C}_N^{-1}(\rho){\bf z}_0\right|}{\sqrt{{\bf z}_0^H\hat{\bf C}_N^{-1}(\rho){\bf z}_0}\sqrt{{\bf g}^H\hat{\bf C}_N^{-1}(\rho){\bf g}}}. \end{align} The forms of $L(\rho,\phi)$ and $\hat{T}_N^{\rm RSCM}(\rho)$ are similar, but not exactly the same. In particular, in $\hat{T}_N^{\rm RSCM}(\rho)$, ${\bf g}$ is known and not parameterized by the unknown $\phi$. Nonetheless, the subsequent analysis will draw significantly from the technical derivations in \cite{kammoun2018optimal} (also \cite{couillet2016second}). To present our results, we first introduce some frequently used quantities. Denote for $z\in\mathbb{C}\backslash\mathbb{R}_{+}$ by $m_N(z)$ the unique complex solution to \small \begin{align}\nonumber m_N(z)\!=\!\left(\!-z\!+\!c_N(1-\rho)\frac{1}{N}{\rm tr}{\bf C}_N({\bf I}_N\!+\!(1\!-\!\rho)m_N(z){\bf C}_N)^{-1}\!\right)^{-1}\!\!\!. \end{align} \normalsize Define for $\kappa>0$, $\mathcal{R}_{\kappa}$ as $\mathcal{R}_{\kappa}\triangleq[\kappa,1]$.
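The canonical equation for $m_N(z)$ has no closed form in general, but for $z<0$ it can be solved by fixed-point iteration; a sketch (a hypothetical small ${\bf C}_N$ is used for illustration):

```python
import numpy as np

def m_of_z(z, rho, C, c):
    """Fixed-point iteration for the canonical equation
       m = ( -z + c (1 - rho) (1/N) tr[ C (I + (1 - rho) m C)^{-1} ] )^{-1},  z < 0."""
    N = C.shape[0]
    m = -1.0 / z                                   # positive initial guess for z < 0
    for _ in range(1000):
        Q = np.linalg.inv(np.eye(N) + (1.0 - rho) * m * C)
        m_new = 1.0 / (-z + c * (1.0 - rho) * np.trace(C @ Q).real / N)
        if abs(m_new - m) < 1e-13:
            return m_new
        m = m_new
    return m
```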
Also denote $\underline{\rho}\triangleq\frac{\rho}{\rho+\frac{(1-\rho)N}{{\rm tr}({\bf C}_N)}}$. With these notations at hand, we are now ready to analyze the asymptotic behaviors of $P_{\rm FA}$ and $P_{\rm D}$. \begin{theorem}[False alarm probability]\label{th:Pfa} Under the assumption that $\phi$ is independent of ${\bf z}_0, {\bf z}_1,\ldots,{\bf z}_K$, we have as $N, K\rightarrow\infty$, with $c_N=N/K\rightarrow c\in(0,1)$, \begin{align}\nonumber \sup_{\rho\in\mathcal{R}_{\kappa}, \phi\in\mathcal{R}_l}\left|\mathbb{P}\left[L(\rho,\phi)>\alpha|H_0\right]-Q_1\left(\frac{\alpha}{\sigma^2(\rho,\phi)}\right)\right|\rightarrow0, \end{align} where \begin{align}\nonumber &\sigma^2(\rho,\phi)=\frac{1}{2\rho}\frac{1}{{\bf g}^H(\phi){\bf Q}_N(\underline{\rho}){\bf g}(\phi)}\\ \label{eq:sigma} &\hspace{1.2cm}\times\frac{{\bf g}^H(\phi){\bf C}_N{\bf Q}_N^2(\underline{\rho}){\bf g}(\phi)}{1-cm_N^2(-\underline{\rho})(1-\underline{\rho})^2\frac{1}{N}{\rm tr}{\bf C}_N^2{\bf Q}_N^2(\underline{\rho})}, \\ \nonumber &{\bf Q}_N(\underline{\rho})=({\bf I}_N+(1-\underline{\rho})m_N(-\underline{\rho}){\bf C}_N)^{-1} \end{align} and $Q_1\left(\frac{\alpha}{\sigma^2(\rho,\phi)}\right)$ is the regularized gamma function\footnotemark[1] \begin{align} \label{eq:Q1Defn} Q_1\left(\frac{\alpha}{\sigma^2(\rho,\phi)}\right)=Q\left(\frac{1}{2}, \frac{\alpha}{2\sigma^2(\rho,\phi)}\right). \end{align} \end{theorem} \footnotetext[1]{$Q(r,x)$ is defined as $Q(r,x)=\frac{\Gamma(r,x)}{\Gamma(r)}$ where the upper incomplete gamma function $\Gamma(r,x)$ is $\Gamma(r,x)=\int_{x}^\infty t^{r-1}e^{-t}dt$ and the gamma function $\Gamma(r)$ is $\Gamma(r)=\Gamma(r,0)=\int_0^\infty t^{r-1}e^{-t}dt$. } {\bf Proof:} See Appendix \ref{appx:th_Pfa}. {This is a uniform convergence result over both $\rho$ and $\phi$, which is essential in the sequel. 
The uniform convergence over $\rho$ permits choosing $\rho$ to maximize $P_{\rm D}$ at a given $P_{\rm FA}$, while the uniform convergence over $\phi$ ensures that Theorem\;\ref{th:Pfa} and the following results still hold when the unknown $\phi$ is replaced by its corresponding estimate. The proof of Theorem \ref{th:Pfa} follows a methodology similar to that used in \cite{couillet2016second}. First, we prove the pointwise convergence for each $\rho\in\mathcal{R}_{\kappa}$ and $\phi\in\mathcal{R}_l$. Then we generalize the convergence result to uniform convergence across $\rho\in\mathcal{R}_{\kappa}$ and $\phi\in\mathcal{R}_l$. In contrast to \cite{couillet2016second}, the key challenge lies in the additional study of the uniform convergence across $\phi\in\mathcal{R}_l$. Due to space limitations, the detailed proof is included in the Supplementary Material S1. Theorem\;\ref{th:Pfa} provides an asymptotic expression for $P_{\rm FA}$. The following theorem provides an asymptotic expression for the detection probability $P_{\rm D}$.} \begin{theorem}[Detection probability]\label{th:Pd} Under the assumption that $\phi$ is independent of ${\bf z}_0, {\bf z}_1,\ldots,{\bf z}_K$, we have as $N, K\rightarrow\infty$, with $c_N\rightarrow c\in(0,1)$, \begin{align}\nonumber \sup_{\rho\in\mathcal{R}_{\kappa},\phi\in\mathcal{R}_l}\left|\mathbb{P}\left[L(\rho,\phi)>\alpha|H_1\right]-Q_2\left(\beta^2(\rho,\phi), \frac{\alpha}{\sigma^2(\rho,\phi)}\right)\right|\rightarrow0, \end{align} where $Q_2$ is \begin{align}\nonumber Q_2(\lambda,x)=e^{-\lambda/2}\sum_{j=0}^\infty\frac{(\lambda/2)^j}{j!}\frac{\Gamma(\frac{1+2j}{2},x/2)}{\Gamma(\frac{1+2j}{2})} \end{align} with $\Gamma(r,x)$ the upper incomplete gamma function defined in footnote 1; that is, $Q_2(\lambda,x)$ is the complementary CDF of a noncentral chi-squared random variable with one degree of freedom and noncentrality parameter $\lambda$. Further, \begin{align}\nonumber \beta(\rho,\phi)=&\sqrt{2}s\frac{{\bf g}^H(\phi){\bf Q}_N(\underline{\rho}){\bf g}(\phi)}{\sqrt{{\bf g}^H(\phi){\bf C}_N{\bf Q}_N^2(\underline{\rho}){\bf g}(\phi)}}\\\nonumber
&\times\sqrt{1-cm_N^2(-\underline{\rho})(1-\underline{\rho})^2\frac{1}{N}{\rm tr}{\bf C}_N^2{\bf Q}_N^2(\underline{\rho})}. \end{align} \end{theorem} {\bf Proof:} See Appendix \ref{appx:th_Pd}. According to Theorem \ref{th:Pfa} and Theorem \ref{th:Pd}, $L(\rho,\phi)$ behaves quite differently depending on whether there is a leak in the water pipe or not. In particular, under $H_0$, $L(\rho,\phi)$ asymptotically behaves like a scaled chi-squared random variable with one degree of freedom and scale $\sigma^2(\rho,\phi)$; under $H_1$, it is well approximated by a scaled noncentral chi-squared random variable with one degree of freedom, parameterized by $\sigma^2(\rho,\phi)$ and $\beta^2(\rho,\phi)$. We will now discuss the choice of the regularization parameter $\rho$ and the threshold $\alpha$. We aim at setting $\rho$ and $\alpha$, for any given $\phi\in\mathcal{R}_l$, in such a way as to maximize the asymptotic $P_{\rm D}$, with the asymptotic $P_{\rm FA}$ set to a fixed (tolerable) value $\eta$. From Theorem \ref{th:Pfa}, one can easily see that the values of $\alpha$ and $\rho$ that provide an asymptotic $P_{\rm FA}$ equal to $\eta$ should satisfy \begin{align}\nonumber \frac{\alpha}{\sigma^2(\rho,\phi)}=Q_1^{-1}(\eta). \end{align} Among these choices, we then look for those values that maximize the asymptotic detection probability which is given, according to Theorem \ref{th:Pd}, by \begin{align}\nonumber Q_2\left(\beta^2(\rho,\phi), \frac{\alpha}{\sigma^2(\rho,\phi)}\right). \end{align} The second argument of $Q_2$ should be kept fixed in order to ensure the required asymptotic $P_{\rm FA}$. Noting also that $Q_2$ increases with respect to its first argument, which depends on $\rho$ but not $\alpha$, the optimization of $P_{\rm D}$ boils down to considering any $\rho^*$ satisfying: \begin{align}\label{eq:opt_rho} \rho^*\in\argmax_{\rho\in\mathcal{R}_{\kappa}}\{\theta(\rho,\phi)\} \end{align} where $\theta(\rho,\phi)\triangleq\frac{1}{2s^2}\beta^2(\rho,\phi)$.
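Both limiting distributions are standard, so the threshold rule and the asymptotic $P_{\rm D}$ can be evaluated with library routines. A sketch, taking $Q_1$ as the survival function of a chi-squared variable with one degree of freedom and $Q_2(\lambda,\cdot)$ as that of its noncentral counterpart (the quantities approximating $\mathbb{P}[L>\alpha|H_0]$ and $\mathbb{P}[L>\alpha|H_1]$ above):

```python
import numpy as np
from scipy.special import gammaincc, gammainccinv
from scipy.stats import ncx2

def Q1(x):
    """Q1(x) = Q(1/2, x/2): survival function of a chi-squared rv with 1 dof."""
    return gammaincc(0.5, x / 2.0)

def Q1_inv(eta):
    """Inverse of Q1, used to set the detection threshold."""
    return 2.0 * gammainccinv(0.5, eta)

def Q2(lam, x):
    """Survival function of a noncentral chi-squared rv with 1 dof, noncentrality lam."""
    return ncx2.sf(x, df=1, nc=lam)

def asymptotic_design(sigma2, theta, s, eta):
    """Threshold alpha* = sigma^2 * Q1^{-1}(eta) and the resulting asymptotic P_D."""
    alpha = sigma2 * Q1_inv(eta)
    return alpha, Q2(2.0 * s ** 2 * theta, alpha / sigma2)
```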
Note the presence of ``$\in$'' in (\ref{eq:opt_rho}), since the optimization on the right-hand side can admit multiple solutions. The corresponding threshold should then be \begin{align}\label{eq:alpha_opt} \alpha^*=\sigma^2(\rho^*,\phi)Q_1^{-1}(\eta). \end{align} The maximal asymptotic $P_{\rm D}$ that can be obtained while satisfying an asymptotic $P_{\rm FA}$ equal to $\eta$ is thus given by \begin{align}\nonumber P_{\rm D}=Q_2\left(2s^2\theta(\rho^*,\phi), \frac{\alpha^*}{\sigma^2(\rho^*,\phi)}\right). \end{align} These solutions for $\rho^*$ and $\alpha^*$ should be seen as ``oracle'' solutions, since they are not directly realizable from measured data. Specifically, they require knowledge of $\sigma^2(\rho,\phi)$ and $\theta(\rho,\phi)$, which involve the unknown covariance matrix ${\bf C}_N$ (and also the unknown $\phi$, to be addressed subsequently). Hence, to provide a practically useful solution, it is necessary to obtain consistent estimates of $\sigma^2(\rho,\phi)$ and $\theta(\rho,\phi)$ based on the available sample data. Such estimates, which do not require specific knowledge of ${\bf C}_N$, are provided in the following propositions. \begin{prop}\label{th:sigma} For $\rho\in(0,1)$ and $\phi\in\mathcal{R}_l$, define \begin{align}\label{eq:sigma_est} \hat{\sigma}^2(\rho,\phi)=\frac{{\rm tr}({\bf R}_N)}{2(1-\rho)N}\frac{1-\frac{\rho{\bf g}^H(\phi)\hat{\bf C}_N^{-2}(\rho){\bf g}(\phi)}{{\bf g}^H(\phi)\hat{\bf C}_N^{-1}(\rho){\bf g}(\phi)}}{\left(1-c_N+c_N\rho\frac{1}{N}{\rm tr}\hat{\bf C}_N^{-1}(\rho)\right)^2} \end{align} and let $\hat{\sigma}^2(1,\phi)=\lim_{\rho\uparrow1}\hat{\sigma}^2(\rho,\phi)=\frac{{\bf g}^H(\phi){\bf R}_N{\bf g}(\phi)}{2{\bf g}^H(\phi){\bf g}(\phi)}$.
Under the assumption that $\phi$ is independent of ${\bf z}_0, {\bf z}_1,\ldots,{\bf z}_K$, we have, as $N, K\rightarrow\infty$, with $c_N\rightarrow c\in(0,1)$, \begin{align}\nonumber \sup_{\rho\in\mathcal{R}_{\kappa},\phi\in\mathcal{R}_l}\left|\hat{\sigma}^2(\rho,\phi)-\sigma^2(\rho,\phi)\right|\stackrel{\rm a.s.}\longrightarrow0. \end{align} Moreover, \begin{align}\nonumber \sup_{\rho\in\mathcal{R}_{\kappa},\phi\in\mathcal{R}_l}\left|\mathbb{P}\left[L(\rho,\phi)>\alpha|H_0\right]-Q_1\left(\frac{\alpha}{\hat{\sigma}^2(\rho,\phi)}\right)\right|\rightarrow0. \end{align} \end{prop} {\bf Proof:} See Appendix \ref{appx:th_sigma}. \begin{prop}\label{th:theta_est} For $\rho\in(0,1)$ and $\phi\in\mathcal{R}_l$, define $\hat{\theta}(\rho,\phi)$ as \begin{align} \nonumber \hat{\theta}(\rho,\phi)=&\frac{(1-\rho)N}{{\rm tr}({\bf R}_N)}\left(1-c_N+c_N\rho\frac{1}{N}{\rm tr}\hat{\bf C}_N^{-1}(\rho)\right)^2\\\label{eq:hatPhiDef} &\times\frac{({\bf g}^H(\phi)\hat{\bf C}_N^{-1}(\rho){\bf g}(\phi))^2}{{\bf g}^H(\phi)\hat{\bf C}_N^{-1}(\rho){\bf g}(\phi)-\rho{\bf g}^H(\phi)\hat{\bf C}_N^{-2}(\rho){\bf g}(\phi)} \end{align} and let $\hat{\theta}(1,\phi)\triangleq\lim_{\rho\uparrow1}\hat{\theta}(\rho,\phi)=\frac{({\bf g}^H(\phi){\bf g}(\phi))^2}{{\bf g}^H(\phi){\bf S}_N{\bf g}(\phi)}$. Under the assumption that $\phi$ is independent of ${\bf z}_0, {\bf z}_1,\ldots,{\bf z}_K$, we have as $N, K\rightarrow\infty$, with $c_N\rightarrow c\in(0,1)$, \begin{align}\nonumber \sup_{\rho\in\mathcal{R}_{\kappa},\phi\in\mathcal{R}_l}\left|\hat{\theta}(\rho,\phi)-\theta(\rho,\phi)\right|\stackrel{\rm a.s.}\longrightarrow0. \end{align} Moreover \small \begin{align}\nonumber \sup_{\rho\in\mathcal{R}_{\kappa},\phi\in\mathcal{R}_l}\left|\mathbb{P}\left[L(\rho,\phi)>\alpha|H_1\right]-Q_2\left(2{s}^2\hat{\theta}(\rho,\phi), \frac{\alpha}{\hat{\sigma}^2(\rho,\phi)}\right)\right|\rightarrow0. 
\end{align} \normalsize \end{prop} {\bf Proof:} Since the structure of $\theta(\rho,\phi)$ is similar to that of $\sigma^2(\rho,\phi)$, Proposition \ref{th:theta_est} can be proved similarly to Proposition \ref{th:sigma}. Next, since the convergence results in Theorem \ref{th:Pd} and Proposition \ref{th:theta_est} are uniform in $\rho$, we can establish the following: \begin{cor}\label{th:rho_est} For $\phi\in\mathcal{R}_l$, define $\hat{\rho}^*$ as any value satisfying \begin{align}\nonumber \hat{\rho}^*\in\argmax_{\rho\in\mathcal{R}_{\kappa}}\hat{\theta}(\rho,\phi). \end{align} Under the assumption that $\phi$ is independent of ${\bf z}_0, {\bf z}_1,\ldots,{\bf z}_K$, for every $\alpha>0$ and $\phi\in\mathcal{R}_l$, as $N,K\rightarrow\infty$ with $c_N\rightarrow c\in(0,1)$, \begin{align}\nonumber \left|\mathbb{P}\left[L(\hat{\rho}^*,\phi)>\alpha|H_1\right]-\max_{\rho\in\mathcal{R}_{\kappa}}\{\mathbb{P}\left[L(\rho,\phi)>\alpha|H_1\right]\}\right|\rightarrow0. \end{align} \end{cor} {\bf Proof:} This can be proved following the same steps as in the proof of \cite[Corollary 1]{couillet2016second}, and is therefore omitted. Hence, $\hat{\rho}^*$ provides an asymptotically optimal estimate of $\rho^*$. Moreover, from (\ref{eq:alpha_opt}) and Proposition \ref{th:sigma}, we construct a consistent estimate of ${\alpha}^*$ (for achieving an asymptotic $P_{\rm FA}$ of a prescribed value $\eta$) as follows: \begin{align}\nonumber \hat{\alpha}=\hat{\sigma}^2(\hat{\rho}^*,\phi)Q_1^{-1}(\eta). \end{align} The final remaining issue in establishing a fully data-driven leak detection algorithm is the unknown $\phi$. This is addressed next. \subsection{Estimation of unknown leak location $\phi$}\label{sec:estphi} Here we develop an estimator $\hat{\phi}$ and correspondingly ${\bf g}(\hat{\phi})$ that can be substituted for the unknown ${\bf g}(\phi)$ in the test statistic $L(\rho,\phi)$ in (\ref{eq:L_rho}).
From (\ref{eq:pdf_f1}) and (\ref{eq:MLE_s}), the MLE of $\phi$ with measurement ${\bf z}_0$ is \begin{align}\nonumber \hat{\phi}&=\argmax_{\phi\in[p_{\rm U}, p_{\rm D}]}f_1({\bf z}_0, {\bf z}_1, \ldots,{\bf z}_K|{\bf C}_N, \hat{s}, H_1)\\\label{eq:Est_phi_CN} &=\argmax_{\phi\in[p_{\rm U}, p_{\rm D}]}\frac{{\rm Re}^2\{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi)}. \end{align} However, the MLE $\hat{\phi}$ in (\ref{eq:Est_phi_CN}) is based on the unobservable ${\bf C}_N$. In the following theorem, we show that the estimate $\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}$, given by (\ref{eq:Est_phi_CN}) but with ${\bf C}_N$ replaced by the SCM ${\bf R}_N$, is asymptotically equivalent to the estimate $\hat{\phi}$ in (\ref{eq:Est_phi_CN}). \begin{theorem}\label{th:phi_est} Define $\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}$ as any value satisfying \begin{align}\label{eq:phi_scm} \hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}\in\argmax_{\phi\in\mathcal{R}_l}\frac{{\rm Re}^2\{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf g}(\phi)}. \end{align} As $N, K\rightarrow\infty$, with $c_N=N/K\rightarrow c\in(0,1)$, \begin{align}\nonumber \!\!\left|\frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}){\bf C}_N^{-1}{\bf g}(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}})}\!-\!\frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\hat{\phi}){\bf C}_N^{-1}{\bf g}(\hat{\phi})}\right|\stackrel{\rm a.s.}\longrightarrow0. \end{align} \end{theorem} {\bf Proof}: See Appendix \ref{appx:th_phi}.
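In practice, the maximization in (\ref{eq:phi_scm}) is a simple one-dimensional search over the pipe section. The following sketch (Python/NumPy) illustrates such a grid search; the Fourier-type steering function in the usage example is purely illustrative and is not the pressure-wave model ${\bf g}(\phi)$ of this paper:

```python
import numpy as np

def estimate_phi(Sigma, z, g_vec, phi_grid):
    """Grid search for the leak location: maximize
    Re^2{g(phi)^H Sigma^{-1} z} / (g(phi)^H Sigma^{-1} g(phi)),
    cf. eq. (phi_scm), with Sigma a covariance estimate (e.g. the SCM R_N)
    and z the probe snapshot."""
    Si = np.linalg.inv(Sigma)
    best_phi, best_val = phi_grid[0], -np.inf
    for phi in phi_grid:
        g = g_vec(phi)
        val = np.real(g.conj() @ Si @ z) ** 2 / np.real(g.conj() @ Si @ g)
        if val > best_val:
            best_phi, best_val = phi, val
    return best_phi

# Illustrative usage with a toy Fourier-type steering vector (NOT the paper's g):
N = 16
g_vec = lambda phi: np.exp(1j * phi * np.arange(N))
grid = np.linspace(0.0, 1.0, 101)
z0 = 3.0 * g_vec(grid[70])            # noiseless snapshot from location grid[70]
phi_hat = estimate_phi(np.eye(N), z0, g_vec, grid)
```

The same routine covers any covariance/probe pairing, e.g.\ $({\bf R}_N,{\bf z}_0)$ as in (\ref{eq:phi_scm}), or variants built from an independent data set.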
With $\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}$, and correspondingly ${\bf g}(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}})$, the test statistic $L(\rho,\phi)$ in (\ref{eq:L_rho}) becomes, by substituting ${\bf g}(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}})$ for ${\bf g}(\phi)$, \begin{align}\nonumber {L}(\rho,\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}})=\frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}})\hat{\bf C}_N^{-1}(\rho){\bf z}_0\}}{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}})\hat{\bf C}_N^{-1}(\rho){\bf g}(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}})}. \end{align} However, it is difficult to study the asymptotic $P_{\rm FA}$ and $P_{\rm D}$ of the statistic ${L}(\rho,\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}})$, unlike the analysis of ${L}(\rho,\phi)$ given in Theorem \ref{th:Pfa} and Theorem \ref{th:Pd}, in which $\phi$ is independent of ${\bf z}_0, {\bf z}_1, \ldots, {\bf z}_K$. As we can see from (\ref{eq:phi_scm}), $\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}$ depends on the primary data ${\bf z}_0$ and on ${\bf R}_N$, constructed from the secondary data ${\bf z}_1,\ldots,{\bf z}_K$. This dependency makes the asymptotic analysis of ${L}(\rho,\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}})$ even more complicated. If we were to have access to a parallel independent set of data for estimating $\phi$ (i.e., ${\bf y}_0$ in place of ${\bf z}_0$, and ${\bf y}_1, \ldots, {\bf y}_K$ in place of ${\bf z}_1, \ldots, {\bf z}_K$), such that \begin{align} \label{eq:phiHatIdeal} \hat{\phi}_{\{{\bf W}_N,{\bf y}_0\}}\in\argmax_{\phi\in\mathcal{R}_l}\frac{{\rm Re}^2\{{\bf g}^H(\phi){\bf W}_N^{-1}{\bf y}_0\}}{{\bf g}^H(\phi){\bf W}_N^{-1}{\bf g}(\phi)} \end{align} where ${\bf W}_N=\frac{1}{K}\sum_{k=1}^K{\bf y}_k{\bf y}_k^H$, then $\hat{\phi}_{\{{\bf W}_N,{\bf y}_0\}}$ is independent of ${\bf z}_0, {\bf z}_1, \ldots, {\bf z}_K$ and all the results presented in Section\;\ref{sec:knownPhi} hold upon substituting ${\bf g}(\hat{\phi}_{\{{\bf W}_N,{\bf y}_0\}})$ for ${\bf g}(\phi)$.
In the absence of such a parallel data set, however, the proposed statistic can still be applied, but it will generally be suboptimal. Nonetheless, through simulations (not shown due to space limitations), we find that in practice a complete parallel data set is not needed to achieve good performance; it is sufficient to have access to ${\bf y}_0$ alone. This is because the correlations induced by using ${\bf z}_1, \ldots, {\bf z}_K$ in estimating $\phi$ are rather weak and thus minimally affect performance, whereas the dependencies induced by ${\bf z}_0$ are strong and lead to substantial performance degradation. Thus, we propose to employ the estimator $\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}}$ as any value satisfying \begin{align} \label{eq:phiHatFinal} \hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}}\in\argmax_{\phi\in[p_{\rm U}, p_{\rm D}]}\frac{{\rm Re}^2\{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf y}_0\}}{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf g}(\phi)}. \end{align} Based on the results in Section \ref{sec:knownPhi} with the estimated leak location $\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}}$ substituted for $\phi$, we obtain the optimized regularization parameter $\hat{\rho}^*$ and test statistic ${L}(\hat{\rho}^*,\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}})$. {Both $\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}}$ and $\hat{\rho}^*$ can be computed through simple numerical searches over the ranges $\mathcal{R}_l$ and $\mathcal{R}_{\kappa}$, respectively.} Our proposed leak detection scheme, LD-RSCM, is summarized in {\bf Algorithm \ref{al:alg2}}. \begin{algorithm}[h] {\small{ \caption{LD-RSCM}\label{al:alg2} \begin{enumerate} \smallskip \item Compute the estimated leak location $\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}}$ based on (\ref{eq:phiHatFinal}).
\item Set the regularization parameter $\hat\rho^*$ as \begin{align}\nonumber \hat{\rho}^*\in\argmax_{\rho\in\mathcal{R}_{\kappa}}\hat{\theta}(\rho,\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}}) \end{align} with $\hat{\theta}(\cdot)$ given by (\ref{eq:hatPhiDef}), but with $\phi$ replaced by $\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}}$. \item For a user-prescribed false alarm probability $\eta$, set the threshold $\hat{\alpha}$ as \begin{align}\nonumber \hat{\alpha}=\hat{\sigma}^2(\hat{\rho}^*,\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}})Q_1^{-1}(\eta) \end{align} with $Q_1(\cdot)$ defined in (\ref{eq:Q1Defn}) and $\hat{\sigma}^2(\cdot)$ defined as in (\ref{eq:sigma_est}), but with $\phi$ replaced by $\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}}$. \item Construct the test statistic \begin{align}\nonumber {L}(\hat{\rho}^*,\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}})=\frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}})\hat{\bf C}_N^{-1}(\hat{\rho}^*){\bf z}_0\}}{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}})\hat{\bf C}_N^{-1}(\hat{\rho}^*){\bf g}(\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}})}. \end{align} \item Accept $H_0$ (``no leak''), if ${L}(\hat{\rho}^*,\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}})\leq\hat{\alpha}$; otherwise accept $H_1$ (``leak present''). \item If $H_1$ is accepted, set the estimates of $\phi$ and $s$: \begin{align}\nonumber \hat{\phi}=\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}},~~ \hat{s}=\frac{{\rm Re}\{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}})\hat{\bf C}_N^{-1}(\hat{\rho}^*){\bf z}_0\}}{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}})\hat{\bf C}_N^{-1}(\hat{\rho}^*){\bf g}(\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}})}. \end{align} \end{enumerate} }} \end{algorithm} \subsection{Simulation Results}\label{sec:simulation} Here we present simulation results to test the performance of the proposed leak detection algorithm, LD-RSCM. We consider a scenario with $K$ comparable to $N$, setting $K=128$, $N=64$.
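Steps 2 and 3 of Algorithm \ref{al:alg2} reduce to a one-dimensional grid search for $\hat{\rho}^*$ and a chi-square quantile evaluation for $\hat{\alpha}$, since $Q_1(x)=Q(1/2,x/2)$ is the tail probability of a chi-square variable with one degree of freedom. A minimal numerical sketch follows (Python/NumPy), assuming the common shrinkage form $\hat{\bf C}_N(\rho)=(1-\rho){\bf R}_N+\rho{\bf I}_N$ for the regularized SCM; this form is an assumption here, the exact definition being given earlier in the paper:

```python
import numpy as np
from scipy.stats import chi2

def sigma2_theta_hat(R, g, rho, c):
    """Plug-in estimates of sigma^2(rho,phi) and theta(rho,phi) from
    Propositions 1 and 2, given the SCM R and the steering vector g(phi).
    Assumes the shrinkage form C_hat(rho) = (1-rho)*R + rho*I."""
    N = R.shape[0]
    Ci = np.linalg.inv((1.0 - rho) * R + rho * np.eye(N))
    gCig = np.real(g.conj() @ Ci @ g)           # g^H C_hat^{-1} g
    gCi2g = np.real(g.conj() @ Ci @ Ci @ g)     # g^H C_hat^{-2} g
    corr = (1.0 - c + c * rho * np.trace(Ci).real / N) ** 2
    sigma2 = (np.trace(R).real / (2.0 * (1.0 - rho) * N)
              * (1.0 - rho * gCi2g / gCig) / corr)          # eq. (sigma_est)
    theta = ((1.0 - rho) * N / np.trace(R).real * corr
             * gCig ** 2 / (gCig - rho * gCi2g))            # eq. (hatPhiDef)
    return sigma2, theta

def calibrate(R, g, c, eta, rho_grid):
    """Steps 2-3 of LD-RSCM: grid search for rho* maximizing theta_hat, then
    threshold alpha_hat = sigma2_hat * Q1^{-1}(eta), where Q1^{-1} is the
    chi-square(1) upper-quantile function."""
    thetas = [sigma2_theta_hat(R, g, r, c)[1] for r in rho_grid]
    rho_star = rho_grid[int(np.argmax(thetas))]
    sigma2, _ = sigma2_theta_hat(R, g, rho_star, c)
    return rho_star, sigma2 * chi2.isf(eta, df=1)
```

Both searches cost a single pass over the grid, with one $N\times N$ inverse per grid point.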
Other than the choice of $K$ and $N$, the same simulation settings are used as described in Section \ref{sec:sim_3methods}. Results are averaged over $10^5$ Monte Carlo runs. {\subsubsection{Accuracy of theoretical approximations for false alarm and detection probabilities} We start by checking the accuracy of the asymptotic theoretical results for the false alarm probability. Specifically, for ${L}(\rho,\phi)$ in (\ref{eq:L_rho}), in Fig.\;\ref{fig:Theorem1_H0} we plot the exact value of $P_{\rm FA} = \mathbb{P}\left[L(\rho,\phi)>\alpha|H_0\right]$ (computed empirically), and compare with the deterministic asymptotic approximation $Q_1\left( \alpha / \sigma^2(\rho,\phi)\right)$ from Theorem \ref{th:Pfa}, and the corresponding approximation with estimated $\hat{\sigma}^2(\rho,\phi)$, $Q_1\left( \alpha / \hat{\sigma}^2(\rho,\phi) \right)$ from Proposition \ref{th:sigma}. All curves are in good agreement. We further check the accuracy of the asymptotic theoretical results for the detection probability in Fig.\;\ref{fig:Theorem2_H1_v2}, plotting the exact value of $P_{\rm D} = \mathbb{P}\left[L(\rho,\phi)>\alpha|H_1\right]$ (computed empirically), along with the deterministic asymptotic approximation $Q_2\left(\beta^2(\rho,\phi), \frac{\alpha}{\sigma^2(\rho,\phi)}\right)$ from Theorem \ref{th:Pd}, and the corresponding approximation with estimated values of $\beta^2(\rho,\phi)$ and $\sigma^2(\rho,\phi)$, $Q_2\left(2{s}^2\hat{\theta}(\rho,\phi), \frac{\alpha}{\hat{\sigma}^2(\rho,\phi)}\right)$, from Proposition \ref{th:theta_est}. Again, we see close alignment between the theoretical and empirical results. \begin{figure}[!htb] \begin{center} \subfigure[$P_{\rm FA}$, empirical and theoretical.]{ \includegraphics[width=0.8\linewidth]{Theorem1_H0_v2.pdf} \label{fig:Theorem1_H0}} \subfigure[$P_{\rm D}$, empirical and theoretical.
Results for $P_{\rm FA}=10^{-3}$.]{ \includegraphics[width=0.8\linewidth]{Theorem2_H1_v6.pdf} \label{fig:Theorem2_H1_v2}} \caption{Empirical and theoretical results for the false alarm and detection probabilities achieved with ${L}(\rho,\phi)$. Results for $[{\bf C}_N]_{i,j}=\nu^2\,0.9^{|i-j|}$, ${\rm SNR} = -3$ dB, $\phi=600$ m and $\rho=0.5$. } \label{fig:Theorem_knownphi} \end{center} \end{figure} \subsubsection{Performance of the proposed test statistic with different $\phi$ estimators} Next we check the performance, in terms of both false alarm probability and detection probability, of the proposed test statistic ${L}(\rho,\hat{\phi})$ when constructed from different estimates of $\phi$. Specifically, in Fig.\;\ref{fig:Theorem_diffphi}, we compare $P_{\rm FA}$ and $P_{\rm D}$ (computed empirically) for ${L}(\rho,\hat{\phi})$ constructed using $\hat{\phi}_{\{{\bf W}_N,{\bf y}_0\}}$, $\hat{\phi}_{\{{\bf R}_N,{\bf y}_0\}}$ and $\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}$, with ${\bf W}_N$ and ${\bf y}_0$ defined as in (\ref{eq:phiHatIdeal}). We first observe that if $\phi$ is estimated using ${\bf R}_N$ and ${\bf z}_0$ (equivalently, from ${\bf z}_0, {\bf z}_1, \ldots, {\bf z}_K$), the performance deteriorates substantially, at least in terms of false alarm probability. On the other hand, the performance is similar whether $\phi$ is estimated based on ${\bf R}_N$ and ${\bf y}_0$ or from ${\bf W}_N$ and ${\bf y}_0$, confirming the claims made above and motivating the proposed estimate in (\ref{eq:phiHatFinal}). Moreover, as shown in the figure, even though not theoretically established, our asymptotic approximations for the false alarm and detection probabilities remain accurate for $\phi$ estimates constructed from ${\bf R}_N$ and ${\bf y}_0$, but they completely break down when such estimates are constructed from ${\bf R}_N$ and ${\bf z}_0$. This reinforces the need for the additional independent sample ${\bf y}_0$ for the proposed algorithm to perform well.
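The empirical probabilities in these comparisons are obtained by Monte Carlo simulation. A minimal sketch of such a harness for $P_{\rm FA}$ is given below (Python/NumPy), assuming complex Gaussian noise with the AR(1)-type covariance $[{\bf C}_N]_{i,j}=\nu^2\,0.9^{|i-j|}$ used in the figures (with $\nu=1$) and, as an assumption, the shrinkage form $\hat{\bf C}_N(\rho)=(1-\rho){\bf R}_N+\rho{\bf I}_N$:

```python
import numpy as np

def empirical_pfa(alpha, g, C, K, rho, runs=2000, seed=0):
    """Monte Carlo estimate of P_FA = P[L(rho,phi) > alpha | H_0] for the
    statistic L of eq. (L_rho), assuming complex Gaussian noise with
    covariance C and the shrinkage form C_hat(rho) = (1-rho)*R_N + rho*I."""
    rng = np.random.default_rng(seed)
    N = C.shape[0]
    Ch = np.linalg.cholesky(C)
    hits = 0
    for _ in range(runs):
        # primary snapshot z_0 and secondary snapshots z_1..z_K, noise only
        Z = Ch @ (rng.standard_normal((N, K + 1))
                  + 1j * rng.standard_normal((N, K + 1))) / np.sqrt(2)
        z0, R = Z[:, 0], Z[:, 1:] @ Z[:, 1:].conj().T / K
        Ci = np.linalg.inv((1.0 - rho) * R + rho * np.eye(N))
        L = np.real(g.conj() @ Ci @ z0) ** 2 / np.real(g.conj() @ Ci @ g)
        hits += L > alpha
    return hits / runs

# AR(1)-type covariance [C]_{ij} = nu^2 * 0.9^{|i-j|} as in the figures (nu = 1)
idx = np.arange(16)
C = 0.9 ** np.abs(idx[:, None] - idx[None, :])
```

Replacing ${\bf z}_0$ by $s\,{\bf g}(\phi)$ plus noise in the same loop gives the corresponding estimate of $P_{\rm D}$.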
\begin{figure}[!htb] \begin{center} \subfigure[$P_{\rm FA}$, empirical and theoretical]{ \includegraphics[width=0.85\linewidth]{Theorem1_H0_diffphi_2.pdf} \label{fig:Theorem1_H0_diffphi}} \subfigure[$P_{\rm D}$, empirical and theoretical. Results for $P_{\rm FA}=10^{-3}$]{ \includegraphics[width=0.85\linewidth]{Theorem2_H1_diffphi_2.pdf} \label{fig:Theorem2_H1_diffphi}} \caption{Empirical and theoretical results for the false alarm and detection probabilities achieved with $L(\rho,\hat{\phi})$, with different estimators $\hat{\phi}$. Results for $[{\bf C}_N]_{i,j}=\nu^2\,0.9^{|i-j|}$, ${\rm SNR}=-3$ dB, and $\rho=0.5$. } \label{fig:Theorem_diffphi} \end{center} \vspace{-0.3cm} \end{figure} } \subsubsection{Performance comparison of LD-RSCM and LD-SCM} We now compare the performance of the proposed LD-RSCM leak detector against the LD-SCM detector proposed earlier. For the implementation of LD-RSCM, we assume access to an extra primary data sample ${\bf y}_0$, which is not needed in LD-SCM. In Fig.\;\ref{fig:SNR_Pd_Pfa3} we plot the detection probability $P_{\rm D}$ against SNR, for $P_{\rm FA}=10^{-3}$, $[{\bf C}_N]_{i,j}=\nu^2\,0.9^{|i-j|}$. Evidently, LD-RSCM achieves higher detection probability than LD-SCM across the entire range of SNRs. Performance gains are also reflected in Fig.\;\ref{fig:ROC_semilogx}, which presents ROC curves for ${\rm SNR}=-3$ dB. These results clearly demonstrate the advantage of employing a robust covariance matrix estimate to achieve superior leak detection accuracy in high-dimensional settings.
\begin{figure}[!htb] \begin{center} \subfigure[$P_{\rm D}$ against SNR with prescribed $P_{\rm FA}=10^{-3}$.]{ \includegraphics[width=0.9\linewidth]{Pd_SNR_105_part2_v2.pdf} \label{fig:SNR_Pd_Pfa3}} \subfigure[ROCs with fixed SNR = -3 dB.]{ \includegraphics[width=0.9\linewidth]{ROC_105_part2_v3.pdf} \label{fig:ROC_semilogx}} \caption{Performance comparison of LD-RSCM and LD-SCM when $N=64$, $K=128$.} \label{fig:ROC_Pd_128} \end{center} \end{figure} \section{Discussion} This paper has presented methods for automatically detecting leaks in a water pipeline. This is an important problem for practical water supply systems, which are plagued by inefficiencies caused by pipeline leakages. Such leakages not only waste valuable natural resources, but can also compromise water quality and potentially affect public health. As we have shown, the leak detection problem can naturally be formulated as a binary hypothesis test which, technically, amounts to detecting structured signals (originating from leakages) in the presence of correlated noise. By adopting the GLRT testing principle, we proposed a simple test procedure which we demonstrated to perform well, particularly when the number of measured samples is not small. The proposed method also has the practically-desirable CFAR property. To improve performance in data-limited (or high-dimensional) scenarios, we further leveraged results from random matrix theory to present a more robust solution. This method delivered better performance, at the expense of higher implementation complexity. Overall, our work provides a first attempt at designing hypothesis tests which are specifically tailored for the problem of detecting leaks in pipelines. Further experimental work will be needed to confirm the performance of the methods in the field.
Moreover, an important extension will be to generalize the framework, possibly using multiple hypothesis testing theory, to detect multiple leaks in a pipeline, and to handle more complex pipeline configurations. \begin{appendices} {\section{Water pipeline signal model description}\label{appx:model} Here we provide a brief introduction to the physical model in Section \ref{sec:model}, considering a water pipeline with a single leak. In particular, we discuss the derivations of $h_m^0(w_j)$ and $g_m(\phi, w_j)$ appearing in the model. Further discussion of the model can be found in \cite{wang2018pipeline,wylie1993fluid,chaudhry1979applied}. The discharge and head oscillations due to a fluid transient are represented by $q$ and $h$. These are described by the linearized unsteady-oscillatory continuity and momentum equations in the time domain \cite{chaudhry1979applied} \begin{equation}\label{continuity_time} \frac{\partial q}{\partial x}+\frac{gA}{a^2}\frac{\partial h}{\partial t}-\frac{Q_0^L}{2(H_0^L-e^L)}h(\phi)\delta(x-\phi)=0, \end{equation} \begin{equation}\label{momentum_time} \frac{1}{gA}\frac{\partial q}{\partial t}+\frac{\partial h}{\partial x}+Rq=0, \end{equation} for $x\in[p_{\rm U},p_{\rm D}]$, in which $a$ is the wave speed, $g$ is the gravitational acceleration, $A$ is the area of the pipeline, $\phi$ is the leak location, $Q_0^L$ and $H_0^L$ are the steady-state discharge and head at the leak, $e^L$ is the elevation of the pipe at the leak, $R$ is the steady-state resistance term, given by $R=(fQ_0)/(gDA^2)$ for turbulent flows, $f$ is the Darcy-Weisbach friction factor, $Q_0$ is the steady-state discharge in the pipe and $D$ is the pipe diameter. Physically, \eqref{continuity_time} represents the mass conservation principle. The first term on the left-hand side of \eqref{continuity_time} is the divergence of mass at a point $x$ along the pipe. The second term represents the rate of accumulation of mass at $x$.
Therefore, a net mass flux towards $x\neq\phi$ (i.e., $\frac{a^2}{gA}\frac{\partial q}{\partial x}<0$) is accommodated by mass accumulation towards $x$ (i.e., $\frac{\partial h}{\partial t}>0$). This accumulation is fundamentally due to the compressibility of the fluid and the elasticity of the pipe. The last term on the left-hand side of \eqref{continuity_time} accounts for mass conservation at the leak. Let $\phi^{-}$ and $\phi^{+}$ denote the locations just upstream and just downstream of the leak, respectively. With the assumption \begin{equation}\label{eq:h at leak} h(\phi^{-})=h(\phi^{+})=h(\phi), \end{equation} Eq.~\eqref{continuity_time} leads to \begin{equation}\label{eq:q_leak} q(\phi^{-})=q(\phi^{+})+q(\phi)=q(\phi^{+})-\frac{Q_0^L}{2(H_0^L-e^L)}h(\phi). \end{equation} Eq.~\eqref{momentum_time} is Newton's second law along the pipe. The first term ($\frac{1}{gA}\frac{\partial q}{\partial t}$) arises from the axial acceleration of the fluid. The second term ($\frac{\partial h}{\partial x}$) represents the net pressure force. The third term ($Rq$) is the resistance force due to the friction between the fluid and pipe wall. Readers with an electrical engineering background should note the one-to-one correspondence between \eqref{continuity_time}--\eqref{momentum_time} and the telegrapher's equations \cite{pozar2009microwave}. The head $h$ is analogous to the voltage; the flow rate of fluid $q$ is analogous to the current; the friction coefficient $R$ is analogous to the resistance; $\frac{gA}{a^2}$ is analogous to the capacitance; $\frac{1}{gA}$ to the inductance; and $\frac{Q_0^L}{2(H_0^L-e^L)}$ to the shunt conductance. The model in this paper considers momentum along the pipe, but neglects momentum in the radial and azimuthal directions. This implies that the current model is for low-frequency waves whose wavelength is much larger than the pipe diameter. In addition, the model is linearized (i.e., nonlinear terms are neglected).
This assumption is valid if (i) the wave amplitude is much lower than the steady-state pressure and (ii) the Mach number is $\ll 1$. Typically, the steady-state pressure head is in the range 40 m to 70 m. Therefore, assumption (i) is not limiting in practice. In addition, in practice the flow velocity is of the order of 1 m/s and the wave speed ranges from 350 m/s to 1500 m/s. Therefore, the Mach number is of order 1/350 or less. Thus, assumption (ii) is also of no concern in practice. Taking the Fourier transform of~\eqref{continuity_time} and \eqref{momentum_time} with respect to $t$ gives $q$ and $h$ in the frequency domain for $x\in[p_{\rm U},\phi)\cup(\phi,p_{\rm D}]$: \begin{equation}\label{continuity_frequency} \frac{a^2}{gA}\frac{\partial q}{\partial x}+{\rm i}wh=0 , \end{equation} \begin{equation}\label{momentum_frequency} \frac{\partial h}{\partial x}+\left(\frac{{\rm i}w}{gA}+R\right)q=0, \end{equation} where $w$ is the angular frequency. Solving~\eqref{continuity_frequency} and \eqref{momentum_frequency} with the head and mass conservation conditions across the leak, i.e., \eqref{eq:h at leak} and \eqref{eq:q_leak}, the quantities at $x_m$ can be computed in the following way \cite{chaudhry1979applied}: \begin{small} \begin{equation}\label{eq:transfer} \left(\!\!\begin{array}{c}q(x_m)\\h(x_m)\end{array}\!\!\right)\!=\!M_0(x_m-\phi)\!\left(\!\!\begin{array}{cc}1 & -\frac{Q_0^L}{2(H_0^L-e^L)}\\0 & 1\end{array}\!\!\right)\!M_0(\phi)\!\left(\!\!\begin{array}{c}q(p_{\rm U})\\h(p_{\rm U})\end{array}\!\!\right)\!. \end{equation} \end{small} In this equation, \begin{equation}\label{matrix_noleak} M_0(x)=\left(\begin{array}{cc} \cosh\left(\mu x\right) & -\frac{1}{Z}\sinh\left(\mu x\right)\\ -Z\sinh\left(\mu x\right) & \cosh\left(\mu x\right) \end{array}\right) \end{equation} is the field matrix, where $Z=\mu a^2/({\rm i}w g A)$ is the characteristic impedance and $\mu=a^{-1}\sqrt{-w^2+{\rm i}gAw R}$ is the propagation function.
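A useful sanity check on (\ref{eq:transfer}) is that the field matrix $M_0$ in (\ref{matrix_noleak}) composes like a transmission-line ABCD matrix, $M_0(x_1)M_0(x_2)=M_0(x_1+x_2)$, so the cascade in (\ref{eq:transfer}) collapses to $M_0(x_m)$ when the leak term vanishes. A short numerical check follows; all parameter values are illustrative and not taken from the paper:

```python
import numpy as np

def field_matrix(x, mu, Z):
    """Field matrix M_0(x) of eq. (matrix_noleak), with propagation function mu
    and characteristic impedance Z (both complex in general)."""
    c, s = np.cosh(mu * x), np.sinh(mu * x)
    return np.array([[c, -s / Z], [-Z * s, c]])

# Illustrative (made-up) parameters: wave speed a, pipe area A, resistance R, 40 Hz wave
a, g_acc, A = 1000.0, 9.81, 0.05
R = 0.02 * 0.01 / (g_acc * 0.25 * A**2)            # (f Q_0)/(g D A^2)
w = 2 * np.pi * 40.0
mu = np.sqrt(-w**2 + 1j * g_acc * A * w * R) / a   # propagation function
Z = mu * a**2 / (1j * w * g_acc * A)               # characteristic impedance

# Composition property: cascading two sections equals one longer section,
# so the product in eq. (transfer) reduces to M_0(x_m) without the leak term.
M = field_matrix(600.0, mu, Z) @ field_matrix(400.0, mu, Z)
assert np.allclose(M, field_matrix(1000.0, mu, Z))
```

The same property (together with $\det M_0(x)=\cosh^2(\mu x)-\sinh^2(\mu x)=1$) underlies the simplification of the transfer matrix carried out next.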
If the pipe is frictionless ($f=0$), $\mu={\rm i}k$, where $k=w/a$ is the wavenumber. The transfer matrix on the right hand side of~\eqref{eq:transfer} can be simplified as \cite{wang2018pipeline}: \begin{equation}\label{eq:transfer2} M_0(x_m-\phi)\left(\hspace{-0.2cm}\begin{array}{cc}1 & -\frac{Q_0^L}{2(H_0^L-e^L)}\\0 & 1\end{array}\hspace{-0.2cm}\right)M_0(\phi)=M_0(x_m)+sM_1(\phi), \end{equation} in which \small \begin{align}\nonumber &M_1(\phi)=\sqrt{\frac{g}{2(H_0^L-e^L)}}\\\label{matrix^Leak} &\!\!\times\!\!\left(\hspace{-0.3cm}\begin{array}{cc} Z\sinh\left(\mu \phi\right)\cosh\left(\mu(x_m\!-\!\phi)\right) &\hspace{-0.3cm}-\cosh\left(\mu \phi\right)\cosh\left(\mu(x_m\!-\!\phi)\right)\\ -Z^2\sinh\left(\mu \phi\right)\sinh\left(\mu(x_m\!-\!\phi)\right) &\hspace{-0.3cm}Z\cosh\left(\mu \phi\right)\sinh\left(\mu(x_m\!-\!\phi)\right) \end{array}\!\!\!\!\right) \end{align} \normalsize is a matrix related to the location $\phi$ of the leak but independent of the leak size $s$. By combining~\eqref{eq:transfer}--\eqref{matrix^Leak}, the head at $x_m$ for a given angular frequency $w_j$ is \begin{equation}\nonumber h_m(w_j)=h^0_m(w_j)+sg_m(\phi,w_j), \end{equation} wherein \begin{align}\nonumber h^0_m(w_j)=&-Z(w_j)\sinh\left(\mu(w_j) x_m\right)q(p_{\rm U},w_j) + \\\nonumber &\cosh\left(\mu(w_j) x_m\right)h(p_{\rm U},w_j) \end{align} and \begin{align}\nonumber &g_m(\phi,w_j)=-\frac{\sqrt{g}Z(w_j)\sinh(\mu(w_j)(x_m-\phi))}{\sqrt{2(H_0^L-e^L)}}\\\nonumber &\!\!\times\!\left(Z(w_j)\sinh(\mu(w_j) \phi)q(p_{\rm U}, w_j)\!-\!\cosh(\mu(w_j)\phi)h(p_{\rm U},w_j)\right). 
\end{align} Applying the boundary condition $h(p_{\rm U},w_j)=0$ (as the upstream end $p_{\rm U}$ is connected to a reservoir), we obtain \begin{equation}\nonumber h^0_m(w_j)=-Z(w_j)\sinh\left(\mu(w_j) x_m\right)q(p_{\rm U},w_j) \end{equation} and \begin{align}\nonumber g_m(\phi,w_j)=&-\frac{\sqrt{g}Z(w_j)\sinh(\mu(w_j)(x_m-\phi))}{\sqrt{2(H_0^L-e^L)}}\\\nonumber &\times Z(w_j)\sinh(\mu(w_j) \phi)q(p_{\rm U}, w_j), \end{align} where $q(p_{\rm U},w_j)$ can be estimated by \cite{wang2018identification} \begin{equation}\nonumber q(p_{\rm U},w_j)=-\frac{h(p_{\rm U}+\epsilon, w_j)}{Z(w_j)\sinh(\mu(w_j)\epsilon)}, \end{equation} where $h(p_{\rm U}+\epsilon, w_j)$ is a pressure head measured at a location very close to $p_{\rm U}$ (denoted by $p_{\rm U}+\epsilon$ where $0<\epsilon\ll l$). Since the measured head $h_m(w_j)$ is contaminated by noise $n_m(w_j)$, it can be represented as \begin{equation}\nonumber h_m(w_j)=h^{0}_m(w_j)+sg_m(\phi, w_j)+n_m(w_j). \end{equation} } \section{Benchmark methods}\label{appx:benchmark} \subsection{Oracle detector}\label{appx:opt} If, for benchmarking purposes, the leak size $s$, leak location $\phi$ and noise covariance matrix ${\bf C}_N$ are assumed known under hypothesis $H_1$, then the likelihood ratio test can be applied (instead of the GLRT), which maximizes the detection probability $P_{\rm D}$ for a given false alarm probability $P_{\rm FA}$ \cite{lehmann2006testing}. For this oracle detector, from (\ref{eq:pdf_f0}) and (\ref{eq:pdf_f1}), the logarithm of the likelihood ratio statistic is \small \begin{align}\nonumber L&=\ln\frac{f_1({\bf z}_0,\ldots,{\bf z}_K)}{f_0({\bf z}_0,\ldots,{\bf z}_K)}\\\nonumber &=2{\rm Re}\{s{\bf g}^H(\phi){\bf C}_N^{-1}{\bf z}_0\}-s^2{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi).
\end{align} \normalsize Comparing with a threshold $\alpha$ results in the following optimal decision rule: \begin{align}\nonumber 2{\rm Re}\{s{\bf g}^H(\phi){\bf C}_N^{-1}{\bf z}_0\}-s^2{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi)\mathop{\gtrless}^{H_1}_{H_0}\alpha, \end{align} which, after straightforward simplification, can be rewritten as \begin{align}\nonumber \Delta_{\rm oracle}={\rm Re}\{s{\bf g}^H(\phi){\bf C}_N^{-1}{\bf z}_0\}\mathop{\gtrless}^{H_1}_{H_0}\alpha_2 \end{align} where $\alpha_2=\frac{1}{2}(\alpha+s^2{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi))$. With this statistic, the false alarm probability is given by \mbox{$P_{\rm FA}=P[\Delta_{\rm oracle}>\alpha_2|H_0]$}, and the detection probability is given by $P_{\rm D}=P[\Delta_{\rm oracle}>\alpha_2|H_1]$. It is important to note that the assumption of $s$, $\phi$, and ${\bf C}_N$ being known is not practically realistic; nonetheless, this oracle detector provides an upper bound on the performance that can be achieved by GLRT-based methods, which estimate these unknown quantities. \subsection{RD-SCM}\label{appx:RD_SCM} In our data model described in Section \ref{sec:model}, the leak component ${\bf p}$ is parameterized by the unknown leak size $s$ and the leak location $\phi$. If we were to ignore the structure of ${\bf p}$ and estimate this vector as a whole, the solution of the resulting leak detection problem would be the same as that considered previously in radar detection \cite{raghavan1995cfar}. We refer to this method as RD-SCM, as indicated in Section \ref{sec:sim_3methods}. In this case, the GLRT becomes: \begin{align}\nonumber L_2=\frac{\max_{{\bf C}_N}\max_{\bf p}f_1({\bf z}_0,\ldots,{\bf z}_K)}{\max_{{\bf C}_N}f_0({\bf z}_0,\ldots,{\bf z}_K)}\mathop{\gtrless}^{H_1}_{H_0}\alpha.
\end{align} Under $H_0$, the MLE of ${\bf C}_N$ is $\frac{1}{K+1}\sum_{k=0}^K{\bf z}_k{\bf z}_k^H$, whereas under $H_1$, the MLEs of ${\bf p}$ and ${\bf C}_N$ are ${\bf z}_0$ and $\frac{1}{K+1}\sum_{k=1}^K{\bf z}_k{\bf z}_k^H$, respectively \cite{raghavan1995cfar}. Thus \small \begin{align}\nonumber L_2=\left(\frac{\det\left({\bf z}_0{\bf z}_0^H+\sum_{k=1}^K{\bf z}_k{\bf z}_k^H\right)}{\det\left(\sum_{k=1}^K{\bf z}_k{\bf z}_k^H\right)}\right)^{K+1}. \end{align} \normalsize Denoting ${\bf S}_N=\sum_{k=1}^K{\bf z}_k{\bf z}_k^H$, and since \small \begin{align} \det\left({\bf z}_0{\bf z}_0^H+\sum_{k=1}^K{\bf z}_k{\bf z}_k^H\right)=\det\left({\bf S}_N\right)\left(1+{\bf z}_0^H{\bf S}_N^{-1}{\bf z}_0\right), \end{align} \normalsize the GLRT becomes \begin{align}\label{eq:GLRT_1} L_2=\left(1+{\bf z}_0^H{\bf S}_N^{-1}{\bf z}_0\right)^{K+1}\mathop{\gtrless}^{H_1}_{H_0}\alpha. \end{align} The GLRT in (\ref{eq:GLRT_1}) is equivalent to the following test: \begin{align}\nonumber \Delta_2={\bf z}_0^H{\bf S}_N^{-1}{\bf z}_0\mathop{\gtrless}^{H_1}_{H_0}\alpha_3 \end{align} where $\alpha_3=\sqrt[K+1]{\alpha}-1$. One advantage of this approach is that the probability densities of $\Delta_2$ under $H_0$ and $H_1$ can be obtained analytically, as given in \cite{raghavan1995cfar,shah1998performance}. Thus, $P_{\rm FA}$ and $P_{\rm D}$ for this RD-SCM scheme can be written in closed-form \cite{raghavan1995cfar,shah1998performance}. We can also observe that the probability distribution of $\Delta_2$ is independent of ${\bf C}_N$ under $H_0$, and thus the RD-SCM also has the CFAR property, as discussed in detail in \cite{raghavan1995cfar}. \section{Technical proofs} \subsection{Proof of Theorem \ref{th:Pfa}}\label{appx:th_Pfa} The proof follows by applying the methodology used in \cite{couillet2016second}. First, we prove the convergence for each $\rho\in\mathcal{R}_{\kappa}$ and $\phi\in\mathcal{R}_l$.
We characterize the asymptotic behavior of the denominator and numerator of $L(\rho,\phi)$ separately. As shown in \cite{kammoun2018optimal}, as $N,K\rightarrow\infty$, with $c_N=N/K\rightarrow c\in(0,1)$, the following results hold: \begin{align}\label{eq:conv_gphi} \left|\frac{1}{N}{\bf g}^H(\phi)\hat{\bf C}_N^{-1}(\rho){\bf g}(\phi)-\frac{1}{N\rho}{\bf g}^H(\phi){\bf Q}_N(\underline{\rho}){\bf g}(\phi)\right|\stackrel{\rm a.s.}\longrightarrow 0 \end{align} and, for some $x\sim N(0,1)$, \small \begin{align}\nonumber &\frac{1}{\sqrt{N}}{\rm Re}({\bf g}^H(\phi)\hat{\bf C}_N^{-1}(\rho){\bf z}_0)-\\\nonumber &\sqrt{\frac{1}{2\rho^2N}\frac{{\bf g}^H(\phi){\bf C}_N{\bf Q}_N^2(\underline{\rho}){\bf g}(\phi)}{1-cm_N(-\underline{\rho})^2(1-\underline{\rho})^2\frac{1}{N}{\rm tr}{\bf C}_N^2{\bf Q}_N^2(\underline{\rho})}}x=o_{p}(1). \end{align} \normalsize This shows in particular that \mbox{$\frac{1}{N}{\rm Re}^2\{{\bf g}^H(\phi)\hat{\bf C}_N^{-1}(\rho){\bf z}_0\}$} behaves asymptotically as a chi-squared random variable with \mbox{scale $\sqrt{\frac{1}{2\rho^2N}\frac{{\bf g}^H(\phi){\bf C}_N{\bf Q}_N^2(\underline{\rho}){\bf g}(\phi)}{1-cm_N(-\underline{\rho})^2(1-\underline{\rho})^2\frac{1}{N}{\rm tr}{\bf C}_N^2{\bf Q}_N^2(\underline{\rho})}}$} and one degree of freedom. Using this result along with Slutsky's lemma \cite{gut2013probability}, we conclude that, under $H_0$, $L(\rho,\phi)$ is also asymptotically equivalent to a chi-squared random variable, but with scale $\sigma(\rho,\phi)$. We therefore get, for fixed $\rho\in\mathcal{R}_{\kappa}$ and $\phi\in\mathcal{R}_l$, \begin{align}\label{eq:result1} \left|\mathbb{P}\left[L(\rho,\phi)>\alpha|H_0\right]-Q_1\left(\frac{\alpha}{\sigma^2(\rho,\phi)}\right)\right|\rightarrow0.
\end{align} $Q_1\left(\frac{\alpha}{\sigma^2(\rho,\phi)}\right)$ is expressed via the regularized upper incomplete gamma function $Q(\cdot,\cdot)$ as\footnotemark[1] \begin{align} \label{eq:Q1Defn} Q_1\left(\frac{\alpha}{\sigma^2(\rho,\phi)}\right)=Q\left(\frac{1}{2}, \frac{\alpha}{2\sigma^2(\rho,\phi)}\right). \end{align} The generalization to uniform convergence across $\rho\in\mathcal{R}_{\kappa}$ then follows via the same arguments as in \cite{couillet2016second}. Next we prove the uniform convergence across $\phi\in\mathcal{R}_l$. To lighten the notation, in what follows we drop the parameter $\rho$ from the statistic $L(\rho,\phi)$ and the covariance estimator $\hat{\bf C}_N(\rho)$. We shall exploit a $\phi$-Lipschitz property of $L(\phi)$ to reduce the uniform convergence over $\mathcal{R}_l$ to a uniform convergence over finitely many values of $\phi$. The $\phi$-Lipschitz property we shall need is as follows: for each $\varepsilon>0$, \begin{align}\label{eq:lipschitz} \lim_{\delta\rightarrow0}\lim_{N\rightarrow\infty}P\left(\sup_{\phi,\phi'\in\mathcal{R}_l\atop|\phi-\phi'|<\delta}|L(\phi)-L(\phi')|>\varepsilon\right)=0. \end{align} Let us prove this result. Let $\eta>0$ be small and $\mathcal{A}_N^\eta\triangleq\{\exists\phi\in\mathcal{R}_l,\frac{1}{N}{\bf g}^H(\phi)\hat{\bf C}_N^{-1}{\bf g}(\phi)<\eta\}$.
Developing the difference $L(\phi)-L(\phi')$ and isolating the denominator according to whether it belongs to $\mathcal{A}_N^\eta$ or not, we may write \begin{align}\nonumber &P\left(\sup_{\phi,\phi'\in\mathcal{R}_l\atop|\phi-\phi'|<\delta}|L(\phi)-L(\phi')|>\varepsilon\right) \\ \nonumber &\leq P(\mathcal{A}_N^\eta)+P\left(\sup_{\phi,\phi'\in\mathcal{R}_l\atop|\phi-\phi'|<\delta}V_N(\phi,\phi')>\varepsilon\eta\right) \end{align} where \begin{align}\nonumber V_N(\phi,\phi')\triangleq&\frac{1}{N^2}{\rm Re}^2\{{\bf g}^H(\phi)\hat{\bf C}_N^{-1}{\bf z}_0\}{\bf g}^H(\phi')\hat{\bf C}_N^{-1}{\bf g}(\phi') \\ \nonumber &-\frac{1}{N^2}{\rm Re}^2\{{\bf g}^H(\phi')\hat{\bf C}_N^{-1}{\bf z}_0\}{\bf g}^H(\phi)\hat{\bf C}_N^{-1}{\bf g}(\phi). \end{align} It is obvious that $P(\mathcal{A}_N^\eta)\rightarrow0$ for a sufficiently small choice of $\eta$. To prove that \begin{align}\nonumber \lim_{\delta\rightarrow0}\limsup_{N}P\left(\sup_{|\phi-\phi'|<\delta}V_N(\phi,\phi')>\varepsilon\eta\right)=0, \end{align} it is then sufficient to show that \small \begin{align}\nonumber &\lim_{\delta\rightarrow0}\limsup_NP\!\!\left(\!\!\sup_{\phi,\phi'\in\mathcal{R}_l\atop|\phi-\phi'|<\delta}\!\!\!\frac{1}{\sqrt{N}}\left|{\bf g}^H(\phi)\hat{\bf C}_N^{-1}{\bf z}_0\!-\!{\bf g}^H(\phi')\hat{\bf C}_N^{-1}{\bf z}_0\right|>\varepsilon'\!\!\right)\\ \label{eq:conv_v} &=0 \end{align} \normalsize for any $\varepsilon'>0$, and similarly for \small${\bf g}^H(\phi')\hat{\bf C}_N^{-1}{\bf g}(\phi')-{\bf g}^H(\phi)\hat{\bf C}_N^{-1}{\bf g}(\phi)$\normalsize. Let us prove (\ref{eq:conv_v}), the other result following essentially the same line of arguments. For this, by Kallenberg \cite[Corollary 16.9]{kallenberg1997foundations}, it is sufficient to prove, say, \begin{align}\label{eq:expect} \sup_{\phi,\phi'\in\mathcal{R}_l\atop\phi\neq\phi'}\sup_N\frac{E\left[\frac{1}{N}|{\bf g}^H(\phi)\hat{\bf C}_N^{-1}{\bf z}_0-{\bf g}^H(\phi')\hat{\bf C}_N^{-1}{\bf z}_0|^2\right]}{|\phi-\phi'|^2}<\infty.
\end{align} Since \begin{align}\nonumber &\frac{1}{N}E\left[\left|{\bf g}^H(\phi)\hat{\bf C}_N^{-1}{\bf z}_0-{\bf g}^H(\phi')\hat{\bf C}_N^{-1}{\bf z}_0\right|^2\right]\\ \nonumber &\leq\frac{1}{N}\|{\bf g}(\phi)-{\bf g}(\phi')\|^2E\left[\left\|\hat{\bf C}_N^{-2}\right\|\left\|{\bf z}_0{\bf z}_0^H\right\|\right], \end{align} and $E\left[\left\|\hat{\bf C}_N^{-2}\right\|\left\|{\bf z}_0{\bf z}_0^H\right\|\right]<\infty$, to prove (\ref{eq:expect}), we only need to prove \begin{align}\label{eq:phi} \frac{\frac{1}{N}\|{\bf g}(\phi)-{\bf g}(\phi')\|^2}{|\phi-\phi'|^2}<\infty. \end{align} Since \begin{align}\nonumber \frac{1}{N}\|{\bf g}(\phi)-{\bf g}(\phi')\|^2=\frac{1}{N}\sum_{m=1}^N|g_m(\phi)-g_m(\phi')|^2, \end{align} we first focus on analyzing $|g_m(\phi)-g_m(\phi')|$ for $m=1,\ldots,N$. Denote $\phi'=\phi+\tau$; then \begin{align}\nonumber &|g_m(\phi)-g_m(\phi')|=\left|-\frac{1}{2}\cosh(2\mu_m\phi-\mu_mx_m)\right.\\ \nonumber &\left.\qquad\qquad\qquad\qquad\quad+\frac{1}{2}\cosh(2\mu_m\phi+2\mu_m\tau-\mu_mx_m)\right|\\ \nonumber &\!\!=\left|\frac{1}{2}\cosh(2\mu_m\phi-\mu_mx_m)(\cosh2\mu_m\tau-1)\right.\\\nonumber &\left.\qquad+\frac{1}{2}\sinh(2\mu_m\phi-\mu_mx_m)\sinh2\mu_m\tau\right| \\ \label{eq:neq} &\!\!\leq\!\frac{1}{2}\left|\cosh\mu_mL(\cosh2\mu_m\tau-1)\!+\!\sinh\mu_mL\sinh2\mu_m\tau\right|, \end{align} where equality in (\ref{eq:neq}) is attained when $\phi=L$ and $x_m=L$. Therefore we establish the following inequality: \begin{align}\nonumber \frac{\frac{1}{N}\sum_{m=1}^N|g_m(\phi)-g_m(\phi')|^2}{|\phi-\phi'|^2}\leq\Delta \end{align} where \small \begin{align}\nonumber \Delta = \frac{\frac{1}{N}\sum_{m=1}^N\frac{1}{4}|\cosh\mu_mL(\cosh2\mu_m\tau-1)+\sinh\mu_mL\sinh2\mu_m\tau|^2}{\tau^2}.
\end{align} \normalsize The Taylor expansions of $\cosh 2\mu_m\tau$ and $\sinh 2\mu_m\tau$ are \begin{align}\nonumber \cosh 2\mu_m\tau = & 1+\frac{(2\mu_m\tau)^2}{2!}+\frac{(2\mu_m\tau)^4}{4!}+\frac{(2\mu_m\tau)^6}{6!} \\ \nonumber &+\frac{(2\mu_m\tau)^8}{8!}+\cdots, \\ \nonumber \sinh 2\mu_m\tau = & 2\mu_m\tau+\frac{(2\mu_m\tau)^3}{3!}+\frac{(2\mu_m\tau)^5}{5!} \\ \nonumber &+\frac{(2\mu_m\tau)^7}{7!}+\frac{(2\mu_m\tau)^9}{9!}+\cdots. \end{align} Plugging these Taylor expansions into $\Delta$, we obtain \small \begin{align}\nonumber \Delta=\frac{1}{N}\sum_{m=1}^N\frac{1}{4}\left|\cosh\mu_mL\left(\frac{(2\mu_m)^2\tau}{2!}+\frac{(2\mu_m)^4\tau^3}{4!}+\frac{(2\mu_m)^6\tau^5}{6!}\right.\right.\\\nonumber \left.\left.+\frac{(2\mu_m)^8\tau^7}{8!}+\cdots\right)+\sinh\mu_mL\left(2\mu_m+\frac{(2\mu_m)^3\tau^2}{3!}\right.\right.\\\nonumber \left.\left.+\frac{(2\mu_m)^5\tau^4}{5!}+\frac{(2\mu_m)^7\tau^6}{7!}+\frac{(2\mu_m)^9\tau^8}{9!}+\cdots\right)\right|^2. \end{align} \normalsize It can be observed that $\Delta$ is an increasing function of $\tau$. Since $\tau\leq L$, we have \small \begin{align}\nonumber \{\Delta(\tau)\}_{\rm max}=\Delta(L)=\frac{1}{L^2}\frac{1}{N}\sum_{m=1}^N\frac{1}{4}\left|\cosh\mu_mL(\cosh2\mu_mL-1)\right.\\\nonumber \left.+\sinh\mu_mL\sinh2\mu_mL\right|^2<\infty. \end{align} \normalsize Therefore we have proven (\ref{eq:phi}) and (\ref{eq:expect}), and have also completed the proof of (\ref{eq:lipschitz}). Getting back to our original problem, let us now take $\varepsilon>0$ arbitrary, let $\phi_1<\ldots<\phi_J$ be a regular sampling of $\mathcal{R}_l$, and set $\delta=\frac{L}{J}$. Then by (\ref{eq:result1}), $J$ being fixed, for all $n>n_0(\varepsilon)$, \begin{align}\label{eq:max} \max_{1\leq j\leq J}\left|P(L(\phi_j)>\alpha)-Q_1\left(\frac{\alpha}{\sigma^2(\phi_j)}\right)\right|<\varepsilon.
\end{align} Also, from (\ref{eq:lipschitz}), for small enough $\delta$, \begin{align}\nonumber \max_{1\leq j\leq J}P\left(\sup_{\phi\in\mathcal{R}_l\atop|\phi-\phi_j|<\delta}\left|L(\phi)-L(\phi_j)\right|>\alpha\xi\right)\\\nonumber \leq P\left(\sup_{\phi,\phi'\in\mathcal{R}_l\atop|\phi-\phi'|<\delta}\left|L(\phi)-L(\phi')\right|>\alpha\xi\right) <\varepsilon \end{align} for all large $n>n_0'(\varepsilon,\xi)>n_0(\varepsilon)$, where $\xi>0$ is also taken arbitrarily small. Thus we have, for each $\phi\in\mathcal{R}_l$ and for $n>n_0'(\varepsilon,\xi)$, \small \begin{align}\nonumber P(L(\phi)>\alpha)&\leq P\left(L(\phi_i)>\alpha(1-\xi)\right)\!+\!P\left(\left|L(\phi)-L(\phi_i)\right|>\alpha\xi\right) \\\nonumber &\leq P(L(\phi_i)>\alpha(1-\xi))+\varepsilon \end{align} \normalsize for $i\leq J$ the unique index such that $|\phi-\phi_i|<\delta$, and where the inequality holds uniformly on $\phi\in\mathcal{R}_l$. Similarly, reversing the roles of $\phi$ and $\phi_i$, \begin{align}\nonumber P(L(\phi)>\alpha)\geq P(L(\phi_i)>\alpha(1+\xi))-\varepsilon. \end{align} As a consequence, by (\ref{eq:max}), for $n>n_0'(\varepsilon,\xi)$, uniformly on $\phi\in\mathcal{R}_l$, \begin{align}\nonumber P(L(\phi)>\alpha)&\leq Q_1\left(\frac{\alpha(1-\xi)}{\sigma^2(\phi_i)}\right)+2\varepsilon \\\nonumber P(L(\phi)>\alpha)&\geq Q_1\left(\frac{\alpha(1+\xi)}{\sigma^2(\phi_i)}\right)-2\varepsilon \end{align} which, by continuity of $Q_1$ and $\phi\mapsto\sigma^2(\phi)$, letting $\xi$ and $\delta$ be small enough (up to growing $n_0'(\varepsilon,\xi)$), leads to \begin{align}\nonumber \sup_{\phi\in\mathcal{R}_l}\left|P(L(\phi)>\alpha)-Q_1\left(\frac{\alpha}{\sigma^2(\phi)}\right)\right|\leq3\varepsilon \end{align} for all $n>n_0'(\varepsilon,\xi)$, which completes the uniform convergence across $\phi\in\mathcal{R}_l$.
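The tail formula used throughout this proof can be sanity-checked numerically. The sketch below (with arbitrary illustrative values of $\alpha$ and $\sigma^2$, not quantities from the model) uses the identity $Q(1/2,x)={\rm erfc}(\sqrt{x})$, so that $Q_1(\alpha/\sigma^2)=Q(1/2,\alpha/(2\sigma^2))$ is exactly the tail probability of a scale-$\sigma^2$ chi-squared variable with one degree of freedom:

```python
import math
import random

# Sanity check of Q_1(alpha/sigma^2) = Q(1/2, alpha/(2 sigma^2)), as in (eq:Q1Defn).
# alpha and sigma2 below are arbitrary test values.

def tail_Q1(alpha, sigma2):
    # Regularized upper incomplete gamma: Q(1/2, x) = erfc(sqrt(x))
    return math.erfc(math.sqrt(alpha / (2.0 * sigma2)))

# If x ~ N(0,1), then sigma2 * x**2 is a scale-sigma2 chi-squared variable with
# one degree of freedom, so P(sigma2 * x**2 > alpha) should match tail_Q1.
random.seed(0)
alpha, sigma2 = 2.5, 1.3
n = 200_000
emp = sum(sigma2 * random.gauss(0.0, 1.0) ** 2 > alpha for _ in range(n)) / n
print(emp, tail_Q1(alpha, sigma2))  # the two values agree to within Monte Carlo error
```

The Monte Carlo estimate and the closed-form tail agree up to the sampling error of order $1/\sqrt{n}$.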
\subsection{Proof of Theorem \ref{th:Pd}}\label{appx:th_Pd} We first study the asymptotic behavior of the detection probability for fixed $\rho\in\mathcal{R}_{\kappa}$ and $\phi\in\mathcal{R}_l$. As shown in \cite{kammoun2018optimal}, under $H_1$, $\frac{1}{\sqrt{N}}{\rm Re}({\bf g}^H(\phi)\hat{\bf C}_N^{-1}(\rho){\bf z}_0)$ behaves asymptotically as a Gaussian variable with mean $\mu=\frac{s}{\sqrt{N}\rho}{\bf g}^H(\phi){\bf Q}_N(\underline{\rho}){\bf g}(\phi)$ and variance $\nu^2\!=\!\frac{1}{2\rho^2N}\frac{{\bf g}^H(\phi){\bf C}_N{\bf Q}_N^2(\underline{\rho}){\bf g}(\phi)}{1-cm_N(-\underline{\rho})^2(1-\underline{\rho})^2\frac{1}{N}{\rm tr}{\bf C}_N^2{\bf Q}_N^2(\underline{\rho})}$ as $N,K\!\!\rightarrow\infty$, with $c_N\!=\!N/K\!\rightarrow\!c\in(0,1)$. Thus, $\frac{1}{N}{\rm Re}^2\{{\bf g}^H(\phi)\hat{\bf C}_N^{-1}(\rho){\bf z}_0\}$ behaves asymptotically as a noncentral chi-squared random variable with degree of freedom $1$, parameterized by the location $\mu^2$ and scale $\nu$. Combining this result and (\ref{eq:conv_gphi}) along with Slutsky's lemma, we conclude that, under $H_1$, $L(\rho,\phi)$ in (\ref{eq:L_rho}) is also asymptotically equivalent to a noncentral chi-squared random variable with degree of freedom $1$, but with location $\frac{\mu^2}{{\bf g}^H(\phi){\bf Q}_N(\underline{\rho}){\bf g}(\phi)}$ and scale $\sigma$. Defining $\beta(\rho,\phi)=\frac{\mu}{\sigma\sqrt{{\bf g}^H(\phi){\bf Q}_N(\underline{\rho}){\bf g}(\phi)}}$, we therefore conclude, for fixed $\rho\in\mathcal{R}_{\kappa}$ and $\phi\in\mathcal{R}_l$, that \begin{align}\nonumber \left|\mathbb{P}\left[L(\rho,\phi)>\alpha|H_1\right]-Q_2\left(\beta^2(\rho,\phi), \frac{\alpha}{\sigma^2(\rho,\phi)}\right)\right|\rightarrow0.
\end{align} As before, the generalization to include uniform convergence across $\rho\in\mathcal{R}_{\kappa}$ and $\phi\in\mathcal{R}_l$ can be derived by following the same procedure as in \cite{couillet2016second} and the proof of Theorem \ref{th:Pfa}, and is therefore again not reproduced. \subsection{Proof of Proposition \ref{th:sigma}}\label{appx:th_sigma} The proof consists of two steps. First, we prove that for a fixed $\phi\in\mathcal{R}_l$, the following convergence result holds: \begin{align}\label{eq:convrho} \sup_{\rho\in\mathcal{R}_{\kappa}}\left|\hat{\sigma}^2(\rho,\phi)-\sigma^2(\rho,\phi)\right|\stackrel{a.s.}\longrightarrow0. \end{align} Then the uniform convergence over $\phi\in\mathcal{R}_l$ is deduced, which completes the proof. In the first step, we start by showing that $\hat{\sigma}^2(1,\phi)$ is well defined. It is easy to observe that $\hat{\sigma}^2(\rho,\phi)$ in (\ref{eq:sigma_est}) is undefined (zero over zero) when $\rho=1$. We use l'Hopital's rule to obtain the value of $\hat{\sigma}^2(\rho,\phi)$ as $\rho$ approaches $1$. Define $\hat{\sigma}^2(\rho,\phi)=\frac{h(\rho,\phi)}{w(\rho)}$ with $h(\rho,\phi)$ and $w(\rho)$ given by \begin{align}\nonumber h(\rho,\phi)=1-\frac{\rho{\bf g}^H(\phi)\hat{\bf C}_N^{-2}(\rho){\bf g}(\phi)}{{\bf g}^H(\phi)\hat{\bf C}_N^{-1}(\rho){\bf g}(\phi)} \end{align} and \begin{align}\nonumber w(\rho)=\frac{2(1-\rho)N}{{\rm tr}({\bf R}_N)}\left(1-c_N+c_N\rho\frac{1}{N}{\rm tr}\hat{\bf C}_N^{-1}(\rho)\right)^2. \end{align} By a uniform variation of l'Hopital's rule \cite[Lemma 13]{kammoun2018optimal}, we have \begin{align}\nonumber \lim_{\rho\uparrow1}\limsup_N\left|\hat{\sigma}^2(\rho,\phi)-\frac{h'(1,\phi)}{w'(1)}\right|\stackrel{\rm a.s.}\longrightarrow0.
\end{align} Using the differentiation rules $\frac{d}{d\rho}\hat{\bf C}_N^{-1}(\rho)\!=\!-\hat{\bf C}_N^{-2}(\rho)(-{\bf R}_N+{\bf I}_N)$ and $\frac{d}{d\rho}\hat{\bf C}_N^{-2}(\rho)=-2\hat{\bf C}_N^{-3}(\rho)(-{\bf R}_N+{\bf I}_N)$ \cite{kammoun2018optimal}, we then prove \begin{align}\nonumber \lim_{\rho\uparrow1}\limsup_N\left|\hat{\sigma}^2(\rho,\phi)-\frac{{\bf g}^H(\phi){\bf R}_N{\bf g}(\phi)}{2{\bf g}^H(\phi){\bf g}(\phi)}\right|\stackrel{\rm a.s.}\longrightarrow0. \end{align} Now, using the fact that as $N, K\rightarrow\infty$, with $c_N\rightarrow c\in(0,1)$, $\frac{1}{N}{\bf g}^H(\phi){\bf R}_N{\bf g}(\phi)-\frac{1}{N}{\bf g}^H(\phi){\bf C}_N{\bf g}(\phi)\stackrel{\rm a.s.}\longrightarrow0$ \cite{kammoun2018optimal}, we obtain \begin{align}\nonumber \lim_{\rho\uparrow1}\limsup_N\left|\hat{\sigma}^2(\rho,\phi)-\frac{{\bf g}^H(\phi){\bf C}_N{\bf g}(\phi)}{2{\bf g}^H(\phi){\bf g}(\phi)}\right|\stackrel{\rm a.s.}\longrightarrow0 \; . \end{align} Since $\sigma^2(1,\phi)=\frac{{\bf g}^H(\phi){\bf C}_N{\bf g}(\phi)}{2{\bf g}^H(\phi){\bf g}(\phi)}$, we have thus proved that, as $N, K\rightarrow\infty$, with $c_N\rightarrow c\in(0,1)$, $\left|\hat{\sigma}^2(1,\phi)-\sigma^2(1,\phi)\right|\stackrel{\rm a.s.}\longrightarrow0$, where $\hat{\sigma}^2(1,\phi)=\lim_{\rho\uparrow1}\hat{\sigma}^2(\rho,\phi)$. It then suffices to prove (\ref{eq:convrho}) when $\rho$ belongs to the set $\tilde{\mathcal{R}}_{\kappa}\triangleq[\kappa,1-\kappa]$. By (\ref{eq:conv_gphi}), we can obtain a consistent estimator of the first part of $\sigma^2(\rho,\phi)$, that is, $\frac{1}{2\rho}\frac{1}{{\bf g}^H(\phi){\bf Q}_N(\underline{\rho}){\bf g}(\phi)}$.
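The l'Hopital limit above can be illustrated on a toy example. The sketch below uses a hypothetical $2\times2$ matrix ${\bf R}$ with ${\rm tr}({\bf R})=N$ (the normalization under which the stated limit takes this form), together with arbitrary illustrative choices of ${\bf g}$ and $c_N$; evaluating $\hat{\sigma}^2(\rho,\phi)=h(\rho,\phi)/w(\rho)$ at $\rho$ close to $1$ recovers ${\bf g}^H{\bf R}{\bf g}/(2{\bf g}^H{\bf g})$:

```python
# Toy 2x2 check of the l'Hopital limit of sigma_hat^2 as rho -> 1.
# R (playing the role of R_N, with tr(R) = N = 2), g, and cN are
# arbitrary illustrative values, not quantities from the paper.
N, cN = 2, 0.5
R = [[1.3, 0.2], [0.2, 0.7]]          # tr(R) = 2.0 = N
g = [1.0, 2.0]

def mat_inv2(M):
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def quad(M, v):                        # v^T M v
    return sum(v[i] * M[i][j] * v[j] for i in range(2) for j in range(2))

def sigma_hat2(rho):
    # C_hat_N(rho) = (1 - rho) R + rho I
    C = [[(1 - rho) * R[i][j] + rho * (i == j) for j in range(2)]
         for i in range(2)]
    Ci = mat_inv2(C)
    Ci2 = mat_mul2(Ci, Ci)
    h = 1 - rho * quad(Ci2, g) / quad(Ci, g)
    trR = R[0][0] + R[1][1]
    B = 1 - cN + cN * rho * (Ci[0][0] + Ci[1][1]) / N
    w = 2 * (1 - rho) * N / trR * B ** 2
    return h / w

limit = quad(R, g) / (2 * sum(x * x for x in g))   # g^T R g / (2 g^T g)
print(sigma_hat2(1 - 1e-5), limit)     # both ~ 0.49
```

The ratio is numerically stable just below $\rho=1$ and matches the l'Hopital value to first order in $1-\rho$.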
For the remaining part of $\sigma^2(\rho,\phi)$, $\frac{{\bf g}^H(\phi){\bf C}_N{\bf Q}_N^2(\underline{\rho}){\bf g}(\phi)}{1-cm_N^2(-\underline{\rho})(1-\underline{\rho})^2\frac{1}{N}{\rm tr}{\bf C}_N^2{\bf Q}_N^2(\underline{\rho})}$, the following convergence results in \cite{couillet2016second} are exploited: \small \begin{align}\nonumber \sup_{\rho\in\tilde{\mathcal{R}}_{\kappa}}&\!\left|\!\frac{1}{N}\frac{{\bf g}^H(\phi)\hat{\bf C}_N^{-1}(\rho){\bf g}(\phi)\!-\!\rho{\bf g}^H(\phi)\hat{\bf C}_N^{-2}(\rho){\bf g}(\phi)}{(1-\underline{\rho})m_N^2(-\underline{\rho})}\!\left(\!\rho\!+\!\frac{(1-\rho)N}{{\rm tr}({\bf R}_N)}\!\right)\right.\\ \label{eq:3} &~\left.-\frac{1}{N}\frac{{\bf g}^H(\phi){\bf C}_N{\bf Q}_N^2(\underline{\rho}){\bf g}(\phi)}{1-cm_N^2(-\underline{\rho})(1-\underline{\rho})^2\frac{1}{N}{\rm tr}{\bf C}_N^2{\bf Q}_N^2(\underline{\rho})}\right|\stackrel{\rm a.s.}\longrightarrow0 \end{align} \normalsize and \small \begin{align}\nonumber \sup_{\rho\in\tilde{\mathcal{R}}_{\kappa}}&\left|\left(\frac{1-c_N}{\underline{\rho}}+c_N\frac{1}{N}{\rm tr}\hat{\bf C}_N^{-1}(\rho)\left(\rho+\frac{(1-\rho)N}{{\rm tr}({\bf R}_N)}\right)\right) \right.\\\label{eq:m_rho} &~\left.-m_N(-\underline{\rho})\right|\stackrel{\rm a.s.}\longrightarrow0. \end{align} \normalsize By combining (\ref{eq:3}) and (\ref{eq:m_rho}), we have \small \begin{align}\nonumber &\sup_{\rho\in\tilde{\mathcal{R}}_{\kappa}}\left|\frac{1}{N}\frac{{\bf g}^H(\phi){\bf C}_N{\bf Q}_N^2(\underline{\rho}){\bf g}(\phi)}{1-cm_N^2(-\underline{\rho})(1-\underline{\rho})^2\frac{1}{N}{\rm tr}{\bf C}_N^2{\bf Q}_N^2(\underline{\rho})}\right.\\ \nonumber &\left.-\frac{1}{N}\frac{{\rm tr}({\bf R}_N)}{(1-\rho)N}\frac{{\bf g}^H(\phi)\hat{\bf C}_N^{-1}(\rho){\bf g}(\phi)-\rho{\bf g}^H(\phi)\hat{\bf C}_N^{-2}(\rho){\bf g}(\phi)}{\left(1-c_N+c_N\rho\frac{1}{N}{\rm tr}\hat{\bf C}_N^{-1}(\rho)\right)^2}\right|\stackrel{\rm a.s.}\longrightarrow0. 
\end{align} \normalsize Together with (\ref{eq:conv_gphi}), this proves the uniform convergence (\ref{eq:convrho}) over $\rho\in\mathcal{R}_{\kappa}$. In the second step, we prove the uniform convergence over $\phi\in\mathcal{R}_l$. To simplify notations, we again drop the parameter $\rho$; that is, we aim to prove, as $N, K\rightarrow\infty$, with $c_N\rightarrow c\in(0,1)$, \begin{align}\label{eq:uniformsigma} \sup_{\phi\in\mathcal{R}_l}\left|\hat{\sigma}^2(\phi)-\sigma^2(\phi)\right|\stackrel{a.s.}\longrightarrow0. \end{align} From the definition of uniform convergence, this amounts to showing that for some $C>0$ and any given $\varepsilon>0$, \begin{align}\label{eq:C} \sup_{\phi\in\mathcal{R}_l}\left|\hat{\sigma}^2(\phi)-\sigma^2(\phi)\right|<C\varepsilon \end{align} for all large $K$ almost surely. Taking $\phi_1<\ldots<\phi_J$ to be a regular sampling of $\mathcal{R}_l$, with $\delta=\frac{L}{J}$, for each $\phi\in\mathcal{R}_l$ there exists $\phi_i$ satisfying $|\phi-\phi_i|<\delta$. With this, we can write: \begin{align}\nonumber &\sup_{\phi\in\mathcal{R}_l}|\hat{\sigma}^2(\phi)-\sigma^2(\phi)| \leq\sup_{\phi\in\mathcal{R}_l}\left\{|\hat{\sigma}^2(\phi)-\hat{\sigma}^2(\phi_i)| \right.\\\nonumber &\left.\qquad\qquad\qquad+|\sigma^2(\phi_i)-\sigma^2(\phi)|+|\hat{\sigma}^2(\phi_i)-\sigma^2(\phi_i)|\right\} \\ \nonumber &\leq \sup_{\phi\in\mathcal{R}_l}|\sigma^2(\phi_i)-\sigma^2(\phi)|+\sup_{\phi\in\mathcal{R}_l}|\hat{\sigma}^2(\phi)-\hat{\sigma}^2(\phi_i)|\\\label{eq:3terms} &\qquad+\max_i|\hat{\sigma}^2(\phi_i)-\sigma^2(\phi_i)|. \end{align} Hence, the relation (\ref{eq:C}) would be established upon proving that, for certain $C_1>0$, $C_2>0$ and $C_3>0$, we have $\sup_{\phi\in\mathcal{R}_l}|\sigma^2(\phi_i)-\sigma^2(\phi)|<C_1\varepsilon$, $\sup_{\phi\in\mathcal{R}_l}|\hat{\sigma}^2(\phi)-\hat{\sigma}^2(\phi_i)|<C_2\varepsilon$, and $\max_i|\hat{\sigma}^2(\phi_i)-\sigma^2(\phi_i)|<C_3\varepsilon$ for all large $K$ almost surely.
To establish the first bound, we start by using (\ref{eq:sigma}) to write \small \begin{align}\nonumber &|\sigma^2(\phi_i)-\sigma^2(\phi)|=q\left|\frac{{\bf g}^H(\phi_i){\bf C}_N{\bf Q}_N^2{\bf g}(\phi_i)}{{\bf g}^H(\phi_i){\bf Q}_N{\bf g}(\phi_i)}-\frac{{\bf g}^H(\phi){\bf C}_N{\bf Q}_N^2{\bf g}(\phi)}{{\bf g}^H(\phi){\bf Q}_N{\bf g}(\phi)}\right| \\ \nonumber &\!\!\!=\!\!\frac{q}{{\bf g}^H(\phi_i){\bf Q}_N{\bf g}(\phi_i){\bf g}^H(\phi){\bf Q}_N{\bf g}(\phi)}\!\!\left|{\bf g}^H(\phi_i){\bf C}_N{\bf Q}_N^2{\bf g}(\phi_i){\bf g}^H(\phi){\bf Q}_N{\bf g}(\phi)\right.\\ \nonumber &\left.\quad-{\bf g}^H(\phi){\bf C}_N{\bf Q}_N^2{\bf g}(\phi){\bf g}^H(\phi_i){\bf Q}_N{\bf g}(\phi_i)\right|, \end{align} \normalsize where $q = \frac{1}{2\rho}\frac{1}{1-cm_N^2(-\underline{\rho})(1-\underline{\rho})^2\frac{1}{N}{\rm tr}{\bf C}_N^2{\bf Q}_N^2(\underline{\rho})}$. Rewrite $|\sigma^2(\phi_i)-\sigma^2(\phi)|=q\frac{A}{B}$, where \small \begin{align}\nonumber A&\!\triangleq\!\!\frac{1}{N^2}{\bf g}^H\!(\phi){\bf Q}_N{\bf g}(\phi)[{\bf g}^H\!(\phi_i){\bf C}_N{\bf Q}_N^2{\bf g}(\phi_i)\!-\!{\bf g}^H\!(\phi){\bf C}_N{\bf Q}_N^2{\bf g}(\phi)]\\\nonumber &~+\frac{1}{N^2}[{\bf g}^H\!(\phi){\bf Q}_N{\bf g}(\phi)\!-\!{\bf g}^H\!(\phi_i){\bf Q}_N{\bf g}(\phi_i)]{\bf g}^H\!(\phi){\bf C}_N{\bf Q}_N^2{\bf g}(\phi), \\\nonumber B&\triangleq\frac{1}{N^2}{\bf g}^H(\phi_i){\bf Q}_N{\bf g}(\phi_i){\bf g}^H(\phi){\bf Q}_N{\bf g}(\phi). \end{align} \normalsize We first deal with $A$.
Since \begin{align}\nonumber &\frac{1}{N}[{\bf g}^H(\phi_i){\bf C}_N{\bf Q}_N^2{\bf g}(\phi_i)-{\bf g}^H(\phi){\bf C}_N{\bf Q}_N^2{\bf g}(\phi)]\\ \nonumber &=\frac{1}{N}({\bf g}(\phi_i)-{\bf g}(\phi))^H{\bf C}_N{\bf Q}_N^2({\bf g}(\phi)+{\bf g}(\phi_i)) \\\nonumber &\leq\frac{1}{\sqrt{N}}\|{\bf g}(\phi)-{\bf g}(\phi_i)\|\|{\bf C}_N{\bf Q}_N^2\|\frac{1}{\sqrt{N}}\|{\bf g}(\phi)+{\bf g}(\phi_i)\| \end{align} and \begin{align}\nonumber &\frac{1}{N}[{\bf g}^H(\phi){\bf Q}_N{\bf g}(\phi)-{\bf g}^H(\phi_i){\bf Q}_N{\bf g}(\phi_i)] \\\nonumber &=\frac{1}{N}({\bf g}(\phi)-{\bf g}(\phi_i))^H{\bf Q}_N({\bf g}(\phi)+{\bf g}(\phi_i)) \\\nonumber &\leq\frac{1}{\sqrt{N}}\|{\bf g}(\phi)-{\bf g}(\phi_i)\|\|{\bf Q}_N\|\frac{1}{\sqrt{N}}\|{\bf g}(\phi)+{\bf g}(\phi_i)\|, \end{align} we have \begin{align}\nonumber A\leq&\frac{1}{\sqrt{N}}\|{\bf g}(\phi)-{\bf g}(\phi_i)\| \\\nonumber &\times\left(\frac{1}{N}{\bf g}^H(\phi){\bf Q}_N{\bf g}(\phi)\|{\bf C}_N{\bf Q}_N^2\|\frac{1}{\sqrt{N}}\|{\bf g}(\phi)+{\bf g}(\phi_i)\| \right.\\\nonumber &\left.+\|{\bf Q}_N\|\frac{1}{\sqrt{N}}\|{\bf g}(\phi)+{\bf g}(\phi_i)\|\frac{1}{N}{\bf g}^H(\phi){\bf C}_N{\bf Q}_N^2{\bf g}(\phi)\right). \end{align} Denote $\phi_i=\phi+\tau$ with $|\tau|<\delta<L$; we obtain \begin{align}\nonumber &g_m(\phi_i)-g_m(\phi)=\cosh(2\mu_m\phi-\mu_mx_m)\sinh^2\mu_m\tau \\\nonumber &\qquad\qquad+\sinh(2\mu_m\phi-\mu_mx_m)\sinh\mu_m\tau\cosh\mu_m\tau \\\nonumber &=\sinh\mu_m\tau[\cosh(2\mu_m\phi-\mu_mx_m)\sinh\mu_m\tau \\\nonumber &\qquad\qquad\qquad+\sinh(2\mu_m\phi-\mu_mx_m)\cosh\mu_m\tau]\\\nonumber &<\sinh\mu_m\tau[\cosh(2\mu_m\phi-\mu_mx_m)\sinh\mu_mL\\\nonumber &\qquad\qquad\qquad+\sinh(2\mu_m\phi-\mu_mx_m)\cosh\mu_mL]. \end{align} By taking $\delta$ such that $\max_m\sinh(\mu_m\delta)<\varepsilon$, we obtain, for each $m=1,\ldots, N$, \begin{align}\nonumber g_m(\phi_i)-g_m(\phi)<h_m\varepsilon, \end{align} where $h_m = \cosh(2\mu_m\phi-\mu_mx_m)\sinh\mu_mL+\sinh(2\mu_m\phi-\mu_mx_m)\cosh\mu_mL$.
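The choice of $\delta$ in this step can be checked numerically. The sketch below uses illustrative values of $\mu_m$, $L$ and $x_m$ (chosen so that $2\mu_m\phi-\mu_mx_m\geq0$ on the sampled grid), and takes $g_m(\phi)=\frac{1}{2}\cosh(2\mu_m\phi-\mu_mx_m)$ up to sign, matching the expressions in the proof; once $\sinh(\mu_m\delta)<\varepsilon$, the coordinate increment indeed stays below $h_m\varepsilon$:

```python
import math

# Numerical check of the delta-choice step. mu, L, x are illustrative values
# (not from the paper), and g_m(phi) = 0.5*cosh(2*mu*phi - mu*x) up to sign.
mu, L, x = 0.6, 2.0, 0.3

def g(phi):
    return 0.5 * math.cosh(2 * mu * phi - mu * x)

eps = 1e-2
delta = 0.99 * math.asinh(eps) / mu          # ensures sinh(mu*delta) < eps
assert math.sinh(mu * delta) < eps
for k in range(1, 50):
    phi = 0.5 + (L - delta - 0.5) * k / 50   # grid of phi values in (0.5, L)
    a = 2 * mu * phi - mu * x                # a >= 0 on this grid
    h_m = math.cosh(a) * math.sinh(mu * L) + math.sinh(a) * math.cosh(mu * L)
    tau = 0.9 * delta                        # any |tau| < delta
    assert abs(g(phi + tau) - g(phi)) < h_m * eps
print("increment bound |g_m(phi+tau) - g_m(phi)| < h_m * eps holds on the grid")
```

Here the increment equals $\sinh(\mu\tau)\sinh(a+\mu\tau)$, which is dominated by $\varepsilon\cdot\sinh(a+\mu L)=h_m\varepsilon$ whenever $\sinh(\mu\delta)<\varepsilon$ and $\tau<L$.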
Since \begin{align}\nonumber \frac{1}{\sqrt{N}}\|{\bf g}(\phi)-{\bf g}(\phi_i)\|&=\frac{1}{\sqrt{N}}\sqrt{\sum_{m=1}^N|g_m(\phi)-g_m(\phi_i)|^2} \\ \nonumber &<\frac{1}{\sqrt{N}}\sqrt{\sum_{m=1}^Nh_m^2}\varepsilon, \end{align} we obtain \small \begin{align}\nonumber &A<\left(\frac{1}{N}{\bf g}^H(\phi){\bf Q}_N{\bf g}(\phi)\|{\bf C}_N{\bf Q}_N^2\|\frac{1}{\sqrt{N}}\|{\bf g}(\phi)+{\bf g}(\phi_i)\| \right.\\\nonumber &\left.\!\!\!+\|{\bf Q}_N\|\frac{1}{\sqrt{N}}\|{\bf g}(\phi)+{\bf g}(\phi_i)\|\frac{1}{N}{\bf g}^H(\phi){\bf C}_N{\bf Q}_N^2{\bf g}(\phi)\!\!\right)\!\!\frac{1}{\sqrt{N}}\sqrt{\sum_{m=1}^Nh_m^2}\varepsilon \\\nonumber &<\frac{2}{N}\|{\bf g}(\phi)\|^2\frac{1}{\sqrt{N}}\|{\bf g}(\phi)+{\bf g}(\phi_i)\|\|{\bf Q}_N\|\|{\bf C}_N{\bf Q}_N^2\|\frac{1}{\sqrt{N}}\sqrt{\sum_{m=1}^Nh_m^2}\varepsilon. \end{align} \normalsize As $\frac{1}{\sqrt{N}}\|{\bf g}(\phi)\|$, $\|{\bf C}_N\|$ and $\|{\bf Q}_N\|$ are bounded, we have $A< p_1\varepsilon$ for some constant $p_1$. Similarly, since \begin{align}\nonumber B\geq\frac{1}{N^2}\|{\bf g}(\phi_i)\|^2\|{\bf g}(\phi)\|^2\|{\bf Q}_N\|^2 \end{align} and $\frac{1}{\sqrt{N}}\|{\bf g}(\phi)\|$ and $\|{\bf Q}_N\|$ are bounded, we have $B>p_2$, for some constant $p_2$. Therefore, we have established the desired property \begin{align}\label{eq:term1} |\sigma^2(\phi_i)-\sigma^2(\phi)|=q\frac{A}{B}<\frac{qp_1}{p_2}\varepsilon. \end{align} We now turn to deriving the analogous result for the second term in (\ref{eq:3terms}).
To this end, similarly to before, we start with \small \begin{align}\nonumber &|\hat{\sigma}^2(\phi)-\hat{\sigma}^2(\phi_i)|=r\times\left|\frac{{\bf g}^H(\phi)\hat{\bf C}_N^{-2}{\bf g}(\phi)}{{\bf g}^H(\phi)\hat{\bf C}_N^{-1}{\bf g}(\phi)}-\frac{{\bf g}^H(\phi_i)\hat{\bf C}_N^{-2}{\bf g}(\phi_i)}{{\bf g}^H(\phi_i)\hat{\bf C}_N^{-1}{\bf g}(\phi_i)}\right| \\\nonumber &=\frac{r}{\frac{1}{N^2}{\bf g}^H(\phi)\hat{\bf C}_N^{-1}{\bf g}(\phi){\bf g}^H(\phi_i)\hat{\bf C}_N^{-1}{\bf g}(\phi_i)}\\\nonumber &\qquad\times\left|\frac{1}{N^2}[{\bf g}^H(\phi)\hat{\bf C}_N^{-2}{\bf g}(\phi){\bf g}^H(\phi_i)\hat{\bf C}_N^{-1}{\bf g}(\phi_i)- \right.\\ \nonumber &\qquad\qquad\left.{\bf g}^H(\phi_i)\hat{\bf C}_N^{-2}{\bf g}(\phi_i){\bf g}^H(\phi)\hat{\bf C}_N^{-1}{\bf g}(\phi)]\right|, \end{align} \normalsize where $r=\frac{{\rm tr}({\bf R}_N)}{2(1-\rho)N}\frac{1}{{\left(1-c_N+c_N\rho\frac{1}{N}{\rm tr}\hat{\bf C}_N^{-1}(\rho)\right)^2}}$. Rewrite $|\hat{\sigma}^2(\phi)-\hat{\sigma}^2(\phi_i)|=r\frac{D}{E}$, where \begin{align}\nonumber D&\triangleq\frac{1}{N^2}{\bf g}^H(\phi)\hat{\bf C}_N^{-2}{\bf g}(\phi)[{\bf g}^H(\phi_i)\hat{\bf C}_N^{-1}{\bf g}(\phi_i)-{\bf g}^H(\phi)\hat{\bf C}_N^{-1}{\bf g}(\phi)]\\\nonumber &+\frac{1}{N^2}[{\bf g}^H(\phi)\hat{\bf C}_N^{-2}{\bf g}(\phi)-{\bf g}^H(\phi_i)\hat{\bf C}_N^{-2}{\bf g}(\phi_i)]{\bf g}^H(\phi)\hat{\bf C}_N^{-1}{\bf g}(\phi), \\\nonumber E&\triangleq\frac{1}{N^2}{\bf g}^H(\phi)\hat{\bf C}_N^{-1}{\bf g}(\phi){\bf g}^H(\phi_i)\hat{\bf C}_N^{-1}{\bf g}(\phi_i). \end{align} We first deal with $D$.
Since \begin{align}\nonumber &\frac{1}{N}[{\bf g}^H(\phi_i)\hat{\bf C}_N^{-1}{\bf g}(\phi_i)-{\bf g}^H(\phi)\hat{\bf C}_N^{-1}{\bf g}(\phi)]\\\nonumber &=\frac{1}{N}({\bf g}(\phi_i)-{\bf g}(\phi))^H\hat{\bf C}_N^{-1}({\bf g}(\phi)+{\bf g}(\phi_i)) \\\nonumber &\leq\frac{1}{\sqrt{N}}\|{\bf g}(\phi)-{\bf g}(\phi_i)\|\|\hat{\bf C}_N^{-1}\|\frac{1}{\sqrt{N}}\|{\bf g}(\phi)+{\bf g}(\phi_i)\| \end{align} and \begin{align}\nonumber &\frac{1}{N}[{\bf g}^H(\phi_i)\hat{\bf C}_N^{-2}{\bf g}(\phi_i)-{\bf g}^H(\phi)\hat{\bf C}_N^{-2}{\bf g}(\phi)] \\\nonumber &=\frac{1}{N}({\bf g}(\phi_i)-{\bf g}(\phi))^H\hat{\bf C}_N^{-2}({\bf g}(\phi)+{\bf g}(\phi_i)) \\\nonumber &\leq\frac{1}{\sqrt{N}}\|{\bf g}(\phi)-{\bf g}(\phi_i)\|\|\hat{\bf C}_N^{-2}\|\frac{1}{\sqrt{N}}\|{\bf g}(\phi)+{\bf g}(\phi_i)\|, \end{align} we have \small \begin{align}\nonumber D\leq&\frac{1}{\sqrt{N}}\|{\bf g}(\phi)-{\bf g}(\phi_i)\|\\ \nonumber &\times\left(\frac{1}{N}{\bf g}^H(\phi)\hat{\bf C}_N^{-2}{\bf g}(\phi)\|\hat{\bf C}_N^{-1}\|\frac{1}{\sqrt{N}}\|{\bf g}(\phi)+{\bf g}(\phi_i)\| \right.\\\nonumber &\left.+\|\hat{\bf C}_N^{-2}\|\frac{1}{\sqrt{N}}\|{\bf g}(\phi)+{\bf g}(\phi_i)\|\frac{1}{N}{\bf g}^H(\phi)\hat{\bf C}_N^{-1}{\bf g}(\phi)\right)\\\nonumber \leq&\frac{1}{\sqrt{N}}\|{\bf g}(\phi)-{\bf g}(\phi_i)\|\frac{2}{N}\|{\bf g}(\phi)\|^2\|\hat{\bf C}_N^{-2}\|\|\hat{\bf C}_N^{-1}\|\\\nonumber &\times\frac{1}{\sqrt{N}}\|{\bf g}(\phi)+{\bf g}(\phi_i)\|. \end{align} \normalsize As we have proved that \begin{align}\nonumber \frac{1}{\sqrt{N}}\|{\bf g}(\phi)-{\bf g}(\phi_i)\|<\frac{1}{\sqrt{N}}\sqrt{\sum_{m=1}^Nh_m^2}\varepsilon, \end{align} and $\frac{1}{\sqrt{N}}\|{\bf g}(\phi)\|$, $\|\hat{\bf C}_N^{-1}\|$ are bounded, we have $D< p_3\varepsilon$ for some constant $p_3$. 
Similarly, since \begin{align}\nonumber E\geq\frac{1}{N^2}\|{\bf g}(\phi_i)\|^2\|{\bf g}(\phi)\|^2\|\hat{\bf C}_N^{-1}\|^2 \end{align} and $\frac{1}{\sqrt{N}}\|{\bf g}(\phi)\|$ and $\|\hat{\bf C}_N^{-1}\|$ are bounded, we have $E>p_4$, for some constant $p_4$. Therefore, we have established the desired property \begin{align}\label{eq:term2} |\hat{\sigma}^2(\phi)-\hat{\sigma}^2(\phi_i)|=r\frac{D}{E} <\frac{rp_3}{p_4}\varepsilon. \end{align} Finally, we turn to deriving the analogous result (for all large $K$ almost surely) for the third term in (\ref{eq:3terms}). Since, as already established, for each $\phi_i$, as $N, K\rightarrow\infty$, with $c_N\rightarrow c\in(0,1)$, $\left|\hat{\sigma}^2(\phi_i)-\sigma^2(\phi_i)\right|\stackrel{a.s.}\longrightarrow0$, we have that for each $\phi_i$, $\left|\hat{\sigma}^2(\phi_i)-\sigma^2(\phi_i)\right|<\varepsilon$ for all large $K$ almost surely. Thus, \begin{align}\nonumber \max_i|\hat{\sigma}^2(\phi_i)-\sigma^2(\phi_i)|\leq\sum_{i=1}^J|\hat{\sigma}^2(\phi_i)-\sigma^2(\phi_i)|<J\varepsilon. \end{align} This, combined with (\ref{eq:term1}) and (\ref{eq:term2}), completes the proof that (\ref{eq:C}) holds, hence establishing the desired uniform convergence (\ref{eq:uniformsigma}).
\subsection{Proof of Theorem \ref{th:phi_est}} \label{appx:th_phi} The proof relies on the following convergence results, which will be derived subsequently: as $N, K\rightarrow\infty$, with $c_N=N/K\rightarrow c\in(0,1)$, \begin{align}\label{eq:conv_dmn} \max_{\phi\in\mathcal{R}_l}\!\left|\frac{1}{N}{\bf g}^H(\phi){\bf R}_N^{-1}{\bf g}(\phi)\!-\!\frac{1}{1-c}\frac{1}{N}{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi)\right|\!\!\stackrel{\rm a.s.}\longrightarrow\!0 \end{align} and \begin{align}\nonumber \max_{\phi\in\mathcal{R}_l}&\left|\frac{1}{N}{\rm Re}^2\{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf z}_0\} \right.\\ \label{eq:conv_nmt} &~\left.-\frac{1}{(1-c)^2}\frac{1}{N}{\rm Re}^2\{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf z}_0\}\right|\stackrel{\rm a.s.}\longrightarrow0. \end{align} We then have \small \begin{align}\nonumber \max_{\phi\in\mathcal{R}_l}\left|\frac{{\rm Re}^2\{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf g}(\phi)}\!-\!\frac{1}{1-c}\frac{{\rm Re}^2\{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi)}\right|\!\stackrel{\rm a.s.}\longrightarrow\!0. \end{align} \normalsize Denote $\hat{\phi}\in\argmax_{\phi\in[p_{\rm U}, p_{\rm D}]}\frac{{\rm Re}^2\{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi)}$.
Together with $\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}\in\argmax_{\phi\in[p_{\rm U}, p_{\rm D}]}\frac{{\rm Re}^2\{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf g}(\phi)}$, the following inequalities hold true: \small \begin{align}\label{ieq:1} \frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}){\bf R}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}){\bf R}_N^{-1}{\bf g}(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}})}\geq\frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}){\bf R}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\hat{\phi}){\bf R}_N^{-1}{\bf g}(\hat{\phi})} \end{align} \normalsize and \small \begin{align}\label{ieq:2} \frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\hat{\phi}){\bf C}_N^{-1}{\bf g}(\hat{\phi})}\geq\frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}){\bf C}_N^{-1}{\bf g}(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}})} \; . \end{align} \normalsize We also have \small \begin{align}\nonumber &\left|\frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}){\bf R}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}){\bf R}_N^{-1}{\bf g}(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}})}\right.\\\nonumber &~\left.-\frac{1}{1-c}\frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}){\bf C}_N^{-1}{\bf g}(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}})}\right|\\ \nonumber &\leq \max_{\phi\in\mathcal{R}_l}\left|\frac{{\rm Re}^2\{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf g}(\phi)}\!-\!\frac{1}{1-c}\frac{{\rm Re}^2\{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi)}\right|\\ \label{ieq:3} &\stackrel{\rm a.s.}\longrightarrow0, \\\nonumber &\left|\frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}){\bf R}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\hat{\phi}){\bf R}_N^{-1}{\bf g}(\hat{\phi})}-\frac{1}{1-c}\frac{{\rm
Re}^2\{{\bf g}^H(\hat{\phi}){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\hat{\phi}){\bf C}_N^{-1}{\bf g}(\hat{\phi})}\right| \\\nonumber &\leq \max_{\phi\in\mathcal{R}_l}\left|\frac{{\rm Re}^2\{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf g}(\phi)}-\frac{1}{1-c}\frac{{\rm Re}^2\{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi)}\right|\\ \label{ieq:4} &\stackrel{\rm a.s.}\longrightarrow0. \end{align} \normalsize Using (\ref{ieq:3}) and (\ref{ieq:4}) in (\ref{ieq:1}), it follows that for all large $N$, almost surely, \begin{align}\label{ieq:5} \frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\hat{\phi}){\bf C}_N^{-1}{\bf g}(\hat{\phi})}\leq\frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}){\bf C}_N^{-1}{\bf g}(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}})} \; . \end{align} Thus (\ref{ieq:2}) and (\ref{ieq:5}) together ensure that \small \begin{align}\nonumber \left|\frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}}){\bf C}_N^{-1}{\bf g}(\hat{\phi}_{\{{\bf R}_N,{\bf z}_0\}})}-\frac{{\rm Re}^2\{{\bf g}^H(\hat{\phi}){\bf C}_N^{-1}{\bf z}_0\}}{{\bf g}^H(\hat{\phi}){\bf C}_N^{-1}{\bf g}(\hat{\phi})}\right|\stackrel{\rm a.s.}\longrightarrow0 \; . \end{align} \normalsize To complete the proof, we now present the derivations of (\ref{eq:conv_dmn}) and (\ref{eq:conv_nmt}).
Since \begin{align}\nonumber {\bf R}_N=\frac{1}{K} \sum_{k=1}^K {\bf z}_k{\bf z}_k^H={\bf C}_N^{1/2}\left(\frac{1}{K}\sum_{k=1}^K{\bf q}_k{\bf q}_k^H\right){\bf C}_N^{1/2}, \end{align} we rewrite $\frac{1}{N}{\bf g}^H(\phi){\bf R}_N^{-1}{\bf g}(\phi)$ as \begin{align} \nonumber &\frac{1}{N}{\bf g}^H(\phi){\bf R}_N^{-1}{\bf g}(\phi)\\ \label{eq:rewrite} &=\frac{1}{N}{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi)\tilde{\bf g}^H(\phi)\left(\frac{1}{K}\sum_{k=1}^K{\bf q}_k{\bf q}_k^H\right)^{-1}\tilde{\bf g}(\phi) \end{align} where $\tilde{\bf g}(\phi)=\frac{{\bf C}_N^{-1/2}{\bf g}(\phi)}{\sqrt{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi)}}$. \\ Next, we note that $\tilde{\bf g}^H(\phi)\left(\frac{1}{K}\sum_{k=1}^K{\bf q}_k{\bf q}_k^H\right)^{-1}\tilde{\bf g}(\phi)$ is a rotation-invariant scalar; hence we have \begin{align}\nonumber \tilde{\bf g}^H(\phi)\left(\frac{1}{K}\sum_{k=1}^K{\bf q}_k{\bf q}_k^H\right)^{-1}\tilde{\bf g}(\phi)=\frac{1}{N}{\bf 1}_N^H {\bf \Lambda}^{-1}{\bf 1}_N \end{align} where ${\bf \Lambda}={\rm diag}(\lambda_1,\ldots,\lambda_N)$ is a diagonal matrix with diagonal entries $\lambda_1,\ldots, \lambda_N$ equal to the eigenvalues of $\frac{1}{K}\sum_{k=1}^K{\bf q}_k{\bf q}_k^H$ \cite{pafka2003noisy}. Denote the empirical eigenvalue distribution of $\frac{1}{K}\sum_{k=1}^K{\bf q}_k{\bf q}_k^H$ as $f(\lambda)=\frac{1}{N}\sum_{i=1}^N\delta(\lambda-\lambda_i)$, where $\delta(\lambda)$ is the Dirac delta function. According to the Mar\v{c}enko-Pastur law \cite{marvcenko1967distribution}, as $N, K\rightarrow\infty$, with $c_N=N/K\rightarrow c\in(0,1)$, $f(\lambda)$ converges almost surely to the non-random limiting eigenvalue distribution \begin{align}\label{eq:conv_lambda} \rho(\lambda)=\frac{1}{2\pi c}\frac{\sqrt{(\lambda_{+}-\lambda)(\lambda-\lambda_{-})}}{\lambda}, \quad \quad \lambda \in [ \lambda_{-}, \lambda_{+} ] \end{align} where $\lambda_{\pm}=(1\pm \sqrt{c})^2$.
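Before using the Mar\v{c}enko-Pastur density, one can check numerically (for an arbitrary illustrative value of $c$) that it has unit mass and that $\int\rho(\lambda)/\lambda\,{\rm d}\lambda=1/(1-c)$, the two facts this derivation relies on:

```python
import math

# Numerical sanity check of the Marcenko-Pastur density:
# unit mass and int rho(lambda)/lambda d(lambda) = 1/(1-c).
# c = 0.4 is an arbitrary test value with c in (0, 1).
c = 0.4
lam_m, lam_p = (1 - math.sqrt(c)) ** 2, (1 + math.sqrt(c)) ** 2

def rho(lam):
    return math.sqrt((lam_p - lam) * (lam - lam_m)) / (2 * math.pi * c * lam)

# Midpoint rule; the density vanishes like a square root at both edges,
# so the composite midpoint rule converges well here.
n = 200_000
h = (lam_p - lam_m) / n
mass = inv_moment = 0.0
for k in range(n):
    lam = lam_m + (k + 0.5) * h
    mass += rho(lam) * h
    inv_moment += rho(lam) / lam * h
print(mass, inv_moment, 1 / (1 - c))  # mass ~ 1, inv_moment ~ 1/(1-c)
```

The computed inverse moment matches $1/(1-c)$, which is the constant appearing in the convergence statements that follow.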
As a consequence \cite{pafka2003noisy} \begin{align}\nonumber \left| \frac{1}{N}{\bf 1}_N^H {\bf \Lambda}^{-1}{\bf 1}_N -\int\rho(\lambda)/\lambda\, {\rm d}\lambda\right|\stackrel{\rm a.s.}\longrightarrow0 , \nonumber \end{align} and, equivalently, \small \begin{align}\label{eq:conv_dmn_2} \max_{\phi\in\mathcal{R}_l}\left|\tilde{\bf g}^H(\phi)\left(\frac{1}{K}\sum_{k=1}^K{\bf q}_k{\bf q}_k^H\right)^{-1}\tilde{\bf g}(\phi)\!-\!\int\rho(\lambda)/\lambda\, {\rm d}\lambda\right|\stackrel{\rm a.s.}\longrightarrow0. \end{align} \normalsize Since $\int\rho(\lambda)/\lambda\, {\rm d}\lambda=\frac{1}{1-c}$, combining (\ref{eq:rewrite}) and (\ref{eq:conv_dmn_2}), the convergence (\ref{eq:conv_dmn}) follows. As for (\ref{eq:conv_nmt}), it should hold under both hypotheses. Under $H_0$, with ${\bf z}_0={\bf n}_0$, define $\tilde{\bf n}_0={\bf C}_N^{-1/2}{\bf n}_0$. Then we have \begin{align}\nonumber &\frac{1}{N}{\rm Re}^2\{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf z}_0\} \\\nonumber &=\frac{1}{N}{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi){\rm Re}^2\left\{\tilde{\bf g}^H(\phi)\left(\frac{1}{K}\sum_{k=1}^K{\bf q}_k{\bf q}_k^H\right)^{-1}\tilde{\bf n}_0\right\}. \end{align} Again, since $\frac{1}{K}\sum_{k=1}^K{\bf q}_k{\bf q}_k^H$ is rotation invariant, we have \begin{align}\nonumber \max_{\phi\in\mathcal{R}_l}&\left|\frac{1}{N}{\rm Re}^2\{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf z}_0\}\right.\\ \nonumber &~\left.-\frac{1}{2N}{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi)\left(\int\rho(\lambda)/\lambda\, {\rm d}\lambda\right)^2\right|\stackrel{\rm a.s.}\longrightarrow0. \end{align} Thus, \begin{align} \nonumber \max_{\phi\in\mathcal{R}_l}&\left|\frac{1}{N}{\rm Re}^2\{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf z}_0\}\right.\\\label{eq:conv_nmt_1} &~\left.-\frac{1}{(1-c)^2}\frac{1}{2N}{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi)\right|\stackrel{\rm a.s.}\longrightarrow0.
\end{align} Since also \begin{align}\label{eq:conv_nmt_2} \max_{\phi\in\mathcal{R}_l}\!\left|\frac{1}{N}{\rm Re}^2\{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf z}_0\}\!-\!\frac{1}{2N}{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi)\right|\!\!\stackrel{\rm a.s.}\longrightarrow\!0, \end{align} combining (\ref{eq:conv_nmt_1}) and (\ref{eq:conv_nmt_2}), we obtain the convergence (\ref{eq:conv_nmt}) when ${\bf z}_0={\bf n}_0$. Under $H_1$, with ${\bf z}_0=s{\bf g}(\phi)+{\bf n}_0$, we have \begin{align}\nonumber {\bf g}^H(\phi){\bf R}_N^{-1}{\bf z}_0=s{\bf g}^H(\phi){\bf R}_N^{-1}{\bf g}(\phi)+{\bf g}^H(\phi){\bf R}_N^{-1}{\bf n}_0. \end{align} Relating to (\ref{eq:conv_dmn}) and (\ref{eq:conv_nmt_1}), we obtain \begin{align}\nonumber \max_{\phi\in\mathcal{R}_l}&\left|\frac{1}{N}{\rm Re}^2\{{\bf g}^H(\phi){\bf R}_N^{-1}{\bf z}_0\}\right.\\\label{eq:conv_nmt1_H1} &~\left.-\frac{2s^2+1}{2N(1-c)^2}{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi)\right|\stackrel{\rm a.s.}\longrightarrow0. \end{align} Similarly, \small \begin{align}\nonumber \max_{\phi\in\mathcal{R}_l}&\left|\frac{1}{N}{\rm Re}^2\{{\bf g}^H(\phi){\bf C}_N^{-1}{\bf z}_0\}\!-\!\frac{2s^2+1}{2N}{\bf g}^H(\phi){\bf C}_N^{-1}{\bf g}(\phi)\right|\!\stackrel{\rm a.s.}\longrightarrow0, \end{align} \normalsize which together with (\ref{eq:conv_nmt1_H1}) further yields (\ref{eq:conv_nmt}). \end{appendices} \section*{Acknowledgement} The authors thank Mohamed S. Ghidaoui of HKUST's Department of Civil and Environmental Engineering for numerous helpful discussions throughout the course of this work, particularly relating to details of the physical water pipeline model and associated practicalities. \bibliographystyle{IEEEtran}
\section{Introduction} Investigation of the coordinated dynamics of many interacting oscillatory elements is relevant for the understanding of various phenomena from different branches of science. Probably the most important and also most studied effect is the emergence of a collective mode, observed in populations of flashing fireflies~\cite{kaempfer1906history}, groups of pedestrians on footbridges~\cite{strogatz2005theoretical} or metronomes placed on a common support~\cite{martens2013chimera}, electronic circuits~\cite{watanabe1994constants}, populations of cells~\cite{richard1996acetaldehyde}, synthetic genetic oscillators~\cite{prindle2011sensing}, etc. Besides collective synchrony, oscillatory networks exhibit many other interesting dynamical states such as clusters and heteroclinic switching~\cite{hansel1993clustering}, chimeras~\cite{kuramoto2002coexistence}, collective chaos~\cite{hakim1992dynamics}, traveling waves~\cite{hooper1988travelling}, quasiperiodic partial synchrony~\cite{vanvreeswijk1996partial, rosenblum2007self, pikovsky2009self, clusella2016minimal}, solitary states~\cite{maistrenko2014solitary}, and so on. Analysis of such states and of the transitions between them is the focus of current research. Some of the mentioned effects can be studied within the framework of the famous Kuramoto model~\cite{kuramoto1984chemical} and of its immediate extension, the Kuramoto-Sakaguchi model~\cite{sakaguchi1986soluble}, which treat phase oscillators with sine coupling. Though this is a rather simplistic description of real-world oscillators, these models became extremely popular due to the possibility of analytical treatment~\cite{acebron2005kuramoto, pikovsky2015dynamics}. For example, they allow for a theoretical description of synchronization transitions (which, depending on the distribution of oscillator frequencies, can resemble second- or first-order \cite{pazo2005thermodynamic} phase transitions).
Due to their specific mathematical properties, sine-coupled phase oscillators also often admit a low-dimensional description via the Watanabe-Strogatz (WS)~\cite{watanabe1993integrability, watanabe1994constants} and Ott-Antonsen (OA)~\cite{ott2008low, ott2009long} theories. All this explains why the Kuramoto-Sakaguchi model became a paradigmatic one, with applications ranging from the explanation of social effects~\cite{kaempfer1906history, strogatz2005theoretical} to neuroscience~\cite{breakspear2010generative}. In most variants of the Kuramoto-Sakaguchi model researchers treat networks with attractive interactions, and the existing literature extensively covers this case~\cite{montbrio2004synchronization, abrams2008solvable, barreto2008synchronization}. Networks of repulsive elements have attracted much less attention, although they show interesting effects~\cite{vanvreeswijk1994inhibition, tsimring2005repulsive, pimenova2016interplay}. Little attention has also been paid to mixed networks~\cite{hong2011kuramoto, hong2011conformists, anderson2012multiscale, iatsenko2013stationary, vlasov2014synchronization, qiu2016synchronization}, consisting of both attractive and repulsive elements, though systems of this type are common in neuroscience, because real neurons interact via excitatory and inhibitory connections~\cite{wilson1972excitatory, vreeswijk1996chaos, peyrache2012spatiotemporal, dehghani2016dynamic}. In this paper we concentrate on the emergence of solitary states and quasiperiodic partial synchrony in networks with attractive and repulsive connections. The solitary state, in which a single repulsive unit leaves the synchronous cluster, was first found and analyzed in Ref.~\cite{maistrenko2014solitary} and later in Refs.~\cite{brezetskyi2015rare, jaros2015chimera, chouzouris2018chimera, chen2019dynamics, majhi2019solitary}.
A generalized solitary state, where several oscillators exhibit dynamics different from that of the synchronous cluster, received attention in~Refs.~\cite{kapitaniak2014imperfect, hizanidis2016chimera, rybalova2017transition, semenova2017coherenceincoherence, jaros2018solitary, semenova2018mechanism, shepelev2018chimera, rybalova2018mechanism, mikhaylenko2019weak, sathiyadevi2019long}. This state appears at the border between synchrony and asynchrony, as soon as repulsion starts to prevail over attraction. Our setup is an extension of the finite-size two-group Kuramoto model treated in Ref.~\cite{maistrenko2014solitary}, where all oscillators were identical. We demonstrate that for small frequency mismatches between the groups and a weak repulsion, there appears a small region where the attractive units build a synchronous cluster, while the repulsive oscillators exhibit quasiperiodic partially synchronous dynamics. Slightly stronger repulsion leads to the solitary state, which is replaced by quasiperiodic dynamics again for still stronger repulsion. For large frequency mismatches the solitary state is not observed, but only quasiperiodic dynamics. \section{The Model} A popular version of the standard Kuramoto-Sakaguchi model is a system of $M$ interacting groups of identical units, described by the following equations: \begin{equation} \dot{\theta}^{\sigma}_j = \omega_{\sigma} + \sum_{\sigma' = 1}^M \frac{K_{\sigma\sigma'}}{N} \sum_{k=1}^{N_{\sigma'}} \sin(\theta^{\sigma'}_k - \theta^{\sigma}_j + \alpha_{\sigma\sigma'}) \; , \end{equation} where $\theta^{\sigma}_j$ is the phase of the $j$th oscillator in the group $\sigma$ and $\sigma = 1,\ldots,M$.
Here $\omega_{\sigma}$ and $N_{\sigma}$ are the natural frequency and the number of oscillators in the group $\sigma$, $N=\sum_\sigma N_\sigma$, and $K_{\sigma\sigma'}$ and $\alpha_{\sigma\sigma'}$ are respectively the strength of the coupling and the phase shift characterizing interaction between groups $\sigma$ and $\sigma'$. In the following we analyze a two-group Kuramoto-Sakaguchi model wherein the coupling coefficients and the phase shift parameters depend on the acting group only, i.e. $K_{\sigma\sigma'} = K_{\sigma'}$ and $\alpha_{\sigma\sigma'} = \alpha_{\sigma'}$. We concentrate on a particular case, motivated by neuroscience applications, when the coupling within the first group is attractive while in the second group it is repulsive. We denote phases of the units in these groups by $\varphi$ and $\psi$, respectively. By re-scaling the time and performing a transformation to a reference frame co-rotating with the frequency of the attractive group, we write the model as \begin{equation} \begin{aligned} \dot\varphi_j &= \frac{1}{N}\sum_{k=1}^{N_a}\sin(\varphi_k-\varphi_j+\alpha_a) -\frac{1+\varepsilon}{N}\sum_{k=1}^{N_r}\sin(\psi_k-\varphi_j+\alpha_r) \; , \\ \dot\psi_j &=\omega+\frac{1}{N}\sum_{k=1}^{N_a}\sin(\varphi_k-\psi_j+\alpha_a) -\frac{1+\varepsilon}{N}\sum_{k=1}^{N_r}\sin(\psi_k-\psi_j+\alpha_r) \; , \end{aligned}\label{eq:orig_system} \end{equation} where subscripts $a$ and $r$ stand for ``attractive'' and ``repulsive'', respectively. Quantification of coupling has been reduced to a single parameter $K_r/K_a = -(1 + \varepsilon)$, with $\varepsilon$ being the excess of repulsive coupling. An $\varepsilon < -1$ indicates that interaction within both groups is attractive and, trivially, the whole system synchronizes. For $\varepsilon =-1$ the second group is uncoupled and in the range $-1 < \varepsilon < 0$ the repulsive coupling is weaker than the attractive coupling. 
For $\varepsilon = 0$ their magnitudes are identical and for $\varepsilon > 0$ the repulsive coupling dominates. Introducing the Kuramoto mean fields for both groups, $Z_a = \rho_a e^{i\Theta_a} = 1/N_a \sum e^{i\varphi_j}$, $Z_r= \rho_r e^{i\Theta_r} = 1/N_r \sum e^{i\psi_j}$, and the common forcing \begin{equation} H = he^{i\Phi} = \frac{N_a}{N}e^{i\alpha_a}Z_a -\frac{N_r}{N}(1+\varepsilon)e^{i\alpha_r}Z_r \; , \label{eq:common_force} \end{equation} we re-write the model in a compact form as \begin{align} \dot\varphi_j & = \mbox{Im}\left [ He^{-i\varphi_j}\right ] = h\sin(\Phi-\varphi_j) \; ,\label{eq:model_compact1} \\ \dot\psi_j & = \omega+ \mbox{Im}\left [He^{-i\psi_j}\right ] = \omega+h\sin(\Phi-\psi_j) \; . \label{eq:model_compact2} \end{align} For the further analysis we restrict ourselves to the case of equally sized groups $N_r = N_a = N/2$ and $\alpha_a = \alpha_r = 0$. Equation (\ref{eq:common_force}) then reduces to \begin{equation} H = he^{i\Phi} = \frac{1}{2}\left [Z_a -(1+\varepsilon)Z_r\right ] \; . \label{eq:common_force_r} \end{equation} We notice that according to the Watanabe-Strogatz (WS) theory \cite{watanabe1993integrability, watanabe1994constants} the dynamical description of $n>3$ identical oscillators subject to a common force can be reduced to equations for three global variables and $n-3$ constants of motion. Thus, for $N_{a,r}>3$ and $\omega \neq 0$ the model (\ref{eq:model_compact1},\ref{eq:model_compact2}) is in fact 6-dimensional and can be described by two coupled systems of WS equations, see Ref.~\cite{pikovsky2008partially}. For $\omega=0$ all oscillators become identical and the whole ensemble can be described by three WS equations. \section{Synchronous state} \begin{figure} \includegraphics{figure01.pdf} \caption{Full synchrony in system (\ref{eq:model_compact1},\ref{eq:model_compact2}) is a two-cluster state. 
The region of full synchrony, as obtained numerically, is shaded with gray, while all other states are shown with white. The dashed red and the solid green lines show the analytical results for the boundaries of existence and of stability of the two-cluster state, respectively, see Eqs.~(\ref{eq:analytical_full_synchrony},\ref{eq:stability_full_synchrony}). }\label{fig:full_synchrony} \end{figure} First we analyze the conditions for existence and stability of a synchronous state, where $\varphi_j = \varphi$ and $\psi_j = \psi$ for all $j$ and the observed frequencies are $\dot{\varphi} = \dot{\psi} = \nu$. Notice that generally $\varphi \neq \psi$, i.e.\ synchrony in this setup shall be understood as the existence of a two-cluster state. Notice also that for $\varepsilon < -1$ both groups are attractive and synchronize regardless of $\omega$, therefore we are interested in the interval $\varepsilon > -1$. Let $\varphi = \nu t$, $\psi = \nu t + \psi_0$, and $\Phi = \nu t + \Phi_0$. Then real and imaginary parts of Eq.~(\ref{eq:common_force_r}) provide \begin{equation} \begin{aligned} h \cos\Phi_0 & = \frac{1}{2} - \frac{1+\varepsilon}{2} \cos\psi_0 \; , \\ h \sin\Phi_0 & = - \frac{1+\varepsilon}{2} \sin\psi_0 \; . \end{aligned} \label{eq:fullsyn2} \end{equation} \paragraph{Condition of existence.} Subtracting Eq.~(\ref{eq:model_compact1}) from Eq.~(\ref{eq:model_compact2}) and using $\dot\psi_0=0$ we find that \begin{equation} \omega= h[\sin\Phi_0 - \sin(\Phi_0 - \psi_0)]\;. \label{eq:omega} \end{equation} Writing the second term as $\sin\Phi_0\cos\psi_0-\cos\Phi_0\sin\psi_0$ and excluding $\sin\Phi_0$ and $\cos\Phi_0$ using Eqs.~(\ref{eq:fullsyn2}) we obtain \begin{equation} \sin\psi_0 = -\frac{2\omega}{\varepsilon} \;. \label{eq:analytical_full_synchrony} \end{equation} It follows that synchrony does not exist for $\varepsilon=0$, when attraction and repulsion are balanced.
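The existence condition can be cross-checked numerically: for any point with $\varepsilon<0$ and $|\sin\psi_0|\leq1$, one can reconstruct $\psi_0$ from Eq.~(\ref{eq:analytical_full_synchrony}), then $h$ and $\Phi_0$ from Eqs.~(\ref{eq:fullsyn2}), and confirm that the locking condition (\ref{eq:omega}) is recovered. A minimal Python sketch (the parameter values are arbitrary illustrative choices):

```python
import numpy as np

eps, omega = -0.5, 0.1                      # arbitrary point with |omega| <= -eps/2

# Existence condition, Eq. (analytical_full_synchrony)
psi0 = np.arcsin(-2.0 * omega / eps)

# Forcing amplitude and phase from Eqs. (fullsyn2)
hc = 0.5 - 0.5 * (1 + eps) * np.cos(psi0)   # h cos(Phi0)
hs = -0.5 * (1 + eps) * np.sin(psi0)        # h sin(Phi0)
h, Phi0 = np.hypot(hc, hs), np.arctan2(hs, hc)

# The locking condition, Eq. (omega), must be recovered exactly
omega_check = h * (np.sin(Phi0) - np.sin(Phi0 - psi0))
print(omega_check)
```

The check is exact up to floating-point rounding, since Eq.~(\ref{eq:analytical_full_synchrony}) follows algebraically from Eqs.~(\ref{eq:fullsyn2}) and (\ref{eq:omega}).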
For $\varepsilon>0$ the repulsion becomes stronger than attraction and therefore the synchronous two-cluster state cannot be expected either. This consideration yields the border of the synchronous domain for $\varepsilon<0$: \begin{equation} \vert \omega \vert \leq - \varepsilon /2 \;. \label{eq:full_synchrony_w} \end{equation} In order to find the observed frequency $\nu$ we expand (\ref{eq:model_compact2}) and insert (\ref{eq:fullsyn2}). Together with (\ref{eq:analytical_full_synchrony}) this yields \begin{equation} \nu = \frac{1+\varepsilon}{\varepsilon} \omega \; . \label{eq:synchronous_nu} \end{equation} Notice that the ratio $(1+\varepsilon)/\varepsilon$ is negative in the region of existence, so that the two synchronous clusters rotate in the direction opposite to the one determined by $\omega$. (We remind that we consider the motion in a frame co-rotating with the natural frequency of the attractive group.) \paragraph{Condition of stability.} The next step is to determine stability of the two-cluster configuration. For this purpose we first consider the linear stability of the repulsive cluster with respect to a symmetric perturbation~\cite{yeldesbay2014chimeralike}. This means that the phases of two perturbed oscillators become $\psi_{\pm} = \nu t+\psi_0 \pm \alpha$, where $\alpha\ll 1$. This ensures that the mean field $Z_r$ remains unchanged in the first-order approximation in $\alpha$. The perturbed oscillators then evolve according to \begin{equation} \dot{\psi}_{\pm} = \omega + h\sin(\Phi_0 - \psi_0 \mp \alpha) \; .\label{eq:linear_stability} \end{equation} In the first order in $\alpha$ we find \begin{equation} \dot{\alpha} = - \alpha h \cos(\Phi_0 - \psi_0) \; .\label{eq:stability_alpha} \end{equation} Thus, the cluster is stable for $h\cos(\Phi_0 - \psi_0) > 0$. With the help of Eqs.~(\ref{eq:fullsyn2}) this condition can be re-written as \begin{equation} \cos\psi_0 - (1+\varepsilon) >0\;.
\end{equation} Hence, the border of stability is determined by the condition $\cos\psi_0=1+\varepsilon$. Now, using Eq.~(\ref{eq:analytical_full_synchrony}), we exclude $\psi_0$ and obtain the stability boundary as \begin{equation} \omega = \pm \sqrt{-\frac{\varepsilon^3}{2} - \frac{\varepsilon^4}{4}} \; . \label{eq:stability_full_synchrony} \end{equation} Using the same approach for the attractive group we find the condition for stability to be \begin{equation} \cos\psi_0 < \frac{1}{1+\varepsilon} \; .\label{eq:condition_attractive} \end{equation} In the domain where the synchronous state exists we have $\varepsilon<0$ and the latter condition is fulfilled. Next, we have to consider the stability of the two-cluster configuration with respect to a shift of one of the clusters. For this purpose we re-write Eqs.~(\ref{eq:model_compact1},\ref{eq:model_compact2}) for the special case of $\varphi_j = \varphi$ and $\psi_j = \psi$. Using Eq.~(\ref{eq:common_force}) we obtain \begin{align} \dot{\varphi} & = -\frac{1+\varepsilon}{2}\sin(\psi - \varphi) \label{eq:compact_full_1} \; , \\ \dot{\psi} & = \omega - \frac{1}{2}\sin(\psi - \varphi) \label{eq:compact_full_2} \; , \end{align} which yields the Adler equation~\cite{adler1946study} for the distance between the clusters $\delta = \psi - \varphi$: \begin{equation} \dot{\delta} = \omega + \frac{\varepsilon}{2} \sin \delta \; . \label{eq:full_} \end{equation} This equation has a stable fixed point for $\vert \omega \vert < -\frac{\varepsilon}{2}$, i.e.\ in the whole domain of existence of the two-cluster solution. The final conclusion is that the stability of the synchronous two-cluster state is given by Eq.~(\ref{eq:stability_full_synchrony}). This result agrees very well with the numerical results shown in Fig.~\ref{fig:full_synchrony}. As one can see, the stable domain is smaller than the region where full synchrony exists.
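These predictions are easy to test by direct integration of Eqs.~(\ref{eq:model_compact1},\ref{eq:model_compact2}). The following Python sketch (simple Euler integration; the parameter values, step size, and perturbation amplitude are illustrative choices) starts from a slightly perturbed two-cluster state inside the stable region and recovers the observed frequency~(\ref{eq:synchronous_nu}):

```python
import numpy as np

rng = np.random.default_rng(1)
Na = Nr = 5
eps, omega = -0.5, 0.1                 # inside the stable synchronous region
nu_theory = (1 + eps) / eps * omega    # Eq. (synchronous_nu): nu = -0.1 here

psi0 = np.arcsin(-2 * omega / eps)     # stable phase shift between the clusters
phi = 0.01 * rng.standard_normal(Na)          # attractive group, slightly perturbed
psi = psi0 + 0.01 * rng.standard_normal(Nr)   # repulsive group, slightly perturbed

dt, steps = 0.01, 40000
phi_ref = 0.0
for step in range(steps):
    # H = (1/2) [Z_a - (1 + eps) Z_r], Eq. (common_force_r)
    H = 0.5 * (np.mean(np.exp(1j * phi)) - (1 + eps) * np.mean(np.exp(1j * psi)))
    phi = phi + dt * np.imag(H * np.exp(-1j * phi))
    psi = psi + dt * (omega + np.imag(H * np.exp(-1j * psi)))
    if step == steps // 2 - 1:
        phi_ref = phi[0]               # phase after the transient has died out

nu_measured = (phi[0] - phi_ref) / (dt * steps / 2)
rho_a = abs(np.mean(np.exp(1j * phi)))
print(nu_measured, rho_a)
```

After the transient, both clusters rotate rigidly at the negative frequency $\nu=(1+\varepsilon)\omega/\varepsilon$ and the order parameters return to unity, in agreement with the analysis above.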
\section{Nontrivial States Beyond the Two-Cluster Synchrony} \subsection{Solitary State} The next solution we observe is the three-cluster state. As has been shown in Ref.~\cite{maistrenko2014solitary}, the system (\ref{eq:model_compact1},\ref{eq:model_compact2}) with $\omega=0$ exhibits, beyond the fully synchronous one-cluster solution, a peculiar solitary state, where a cluster of $N_a$ attractive and $N_r - 1$ repulsive oscillators coexists with a phase-shifted solitary oscillator. The basin of attraction of this state is not of full measure, so that not every initial condition leads to it. The range of coupling values where this solution exists shrinks as $1/N$ for $N \to \infty$. This makes the solitary state reliably observable only for small system sizes. The picture we observe for $\omega \ne 0$ is slightly different. Though the loss of synchrony here also occurs via the appearance of a solitary unit, now one finds a three-cluster state: a cluster of $N_a$ attractive oscillators, a cluster of $N_r-1$ repulsive oscillators, and a solitary repulsive unit. The phase shifts between the clusters are constant, so that the whole configuration rotates with the same constant observed frequency $\nu$. An illustration of this can be found in Fig.~\ref{fig:average_frequency_solitary}. \begin{figure} \includegraphics{figure02.pdf} \caption{a) Schematic illustration of the solitary state. Here the big and small green triangles denote the cluster of $N_r-1$ repulsive units and the solitary repulsive oscillator, respectively. The cluster of attractive units is shown by the blue cross. Panel b) shows phase differences $\delta_{1,2}$ in the solitary state for a particular case $N_r = N_a = 5$ and $\varepsilon=0.212$ (this value corresponds to the largest range of $\omega$ for which the solitary state exists, cf. Fig.~\ref{fig:solitary}).
Here black circles and blue crosses show the results of direct numerical simulation for $\delta_2$ and $\delta_1$, respectively, while the solid green and the dashed red lines are the theoretical results obtained with the help of Eqs.~(\ref{eq:analytical_solitary},\ref{eq:solitary_phi}). The boundary of the solitary state is at $\omega \approx 0.009$. }\label{fig:average_frequency_solitary} \end{figure} \begin{figure} \centering \includegraphics{figure03.pdf} \caption{The solitary state is a state with three clusters of size $N_a$, $N_r-1$, and 1, respectively. Parameters where such a state was observed numerically are shaded gray, while all others are shaded white. The solid green line gives the analytically derived boundary, see Eq.~(\ref{eq:analytical_solitary}). The dashed black line in the left panel marks $\varepsilon=0.212$; this value approximately corresponds to the largest interval of $\omega$ where the solitary state exists. The panels from left to right show the results for $N_a=N_r=5,6,7$ and $8$.}\label{fig:solitary} \end{figure} \paragraph{Condition of existence.} For a description of this state we write $\varphi = \nu t$, $\psi_{1,\ldots ,N_r - 1} = \nu t + \delta_1$, $\psi_{N_r} = \nu t + \delta_1 + \delta_2$, and $\Phi = \nu t + \Phi_0$. This yields the equations \begin{align} h e^{i\Phi_0} & = \frac{1}{2} - \frac{1+\varepsilon}{2N_r} \left[(N_r-1) e^{i \delta_1} + e^{i(\delta_1 + \delta_2)} \right] \; , \label{eq:solitary_forcing} \\ \nu & = h \mbox{Im}[e^{i\Phi_0}] \; , \label{eq:solitary_frequency}\\ \nu & = \omega + h \mbox{Im}[e^{i(\Phi_0 - \delta_1)}] \; , \label{eq:solitary_alpha}\\ \nu & = \omega + h \mbox{Im}[e^{i(\Phi_0 - \delta_1 - \delta_2)}] \; . \end{align} From the last two equations it follows that $\mbox{Im}[e^{i(\Phi_0 - \delta_1)}] = \mbox{Im}[e^{i(\Phi_0 - \delta_1 - \delta_2)}]$ and $\mbox{Re}[e^{i(\Phi_0 - \delta_1)}] = -\mbox{Re}[e^{i(\Phi_0 - \delta_1 - \delta_2)}]$. This yields $2 \delta_1 + \delta_2 = 2\Phi_0 - \pi$. 
Multiplying (\ref{eq:solitary_forcing}) by $e^{-i\Phi_0}$ and taking the imaginary part, we find, after replacing $h$ via Eq.~(\ref{eq:solitary_frequency}), \begin{equation} \mbox{Im}[e^{i(\Phi_0-\delta_1)}] = \frac{1}{1+\varepsilon}\mbox{Im}(e^{i\Phi_0}) \; . \label{eq:solitary_phases} \end{equation} By applying this relation to Eqs.~(\ref{eq:solitary_frequency},\ref{eq:solitary_alpha}) we find that the observed frequency $\nu$ is described by the same Eq.~(\ref{eq:synchronous_nu}) as in the synchronous state. However, while in the case of full synchrony $(1+\varepsilon)/\varepsilon$ was negative, here it is positive. Next, multiplying Eq.~(\ref{eq:solitary_forcing}) by $e^{-i\Phi_0}$ and taking this time the real part we obtain, after replacing $\mbox{Re}(e^{i\Phi_0}) = \sqrt{1 - {\mbox{Im}(e^{i\Phi_0})}^2}$: \begin{equation} h = \frac{1}{2}\sqrt{1-{\mbox{Im}(e^{i\Phi_0})}^2} - \frac{N_r-2}{2N_r}\sqrt{{(1+\varepsilon)}^2 - {\mbox{Im}(e^{i\Phi_0})}^2} \; . \end{equation} Finally, replacing $h$ with the help of Eqs.~(\ref{eq:solitary_frequency},\ref{eq:synchronous_nu}) and introducing $x = \mbox{Im}(e^{i \Phi_0})$, we obtain \begin{equation} 0 = x \sqrt{1 - x^2} - \frac{N_r-2}{N_r} x \sqrt{{(1+\varepsilon)}^2 - x^2} - 2 \frac{1+\varepsilon}{\varepsilon} \omega \;.\label{eq:analytical_solitary} \end{equation} To find the parameter domain of existence of the solitary state we need to find the range of $\omega$ such that Eq.~(\ref{eq:analytical_solitary}) can be fulfilled for a given $\varepsilon$. First of all, notice that Eq.~(\ref{eq:analytical_solitary}) is invariant with respect to the transformation $x \to -x$ and $\omega \to -\omega$. The branch for $\omega > 0$ is given by the solution for $x \in (0,1]$ and the other one can be inferred by using the transformation $\omega \to -\omega$.
Consider the function $f$ consisting of the first two terms on the right hand side of Eq.~(\ref{eq:analytical_solitary}): \begin{equation} f(x, N_r, \varepsilon) = x \sqrt{1 - x^2} - \frac{N_r-2}{N_r} x \sqrt{{(1+\varepsilon)}^2 - x^2} \; . \label{eq:solitary_root} \end{equation} The border of the solitary state for $\omega > 0$ can then be calculated as $\omega = \frac{\varepsilon}{2(1+\varepsilon)} f_{\max}(N_r, \varepsilon)$, where $f_{\max}$ denotes the maximum of $f$ over $x$. To find the maximum of $f$ we write $\partial{f}/\partial{x}=0$, which yields \begin{equation} (1-2x^2)N_r\sqrt{{(1+\varepsilon)}^2 - x^2} = [{(1+\varepsilon)}^2 - 2x^2](N_r-2)\sqrt{1-x^2} \; . \label{eq:solitary_derivative} \end{equation} Squaring Eq.~(\ref{eq:solitary_derivative}) and ordering it by powers of $x$ we get a cubic equation for $x^2$. The expression for the roots is too long to be shown here, but the calculated maximal $\omega$ for the solitary state fits the numerical results nicely, as shown in Fig.~\ref{fig:solitary}. \paragraph{Phase shifts in the solitary state.} To determine the phase shifts $\delta_1$ and $\delta_2$, we first rewrite Eq.~(\ref{eq:orig_system}) in terms of $\delta_1$ and $\delta_2$: \begin{equation} \dot{\delta}_2 = \frac{1}{2} [\sin\delta_2 ((1+\varepsilon) - \cos\delta_1) - \cos\delta_2\sin\delta_1 + \sin\delta_1] \; . \end{equation} Next, similarly to the case of $\omega = 0$ studied in Ref.~\cite{maistrenko2014solitary}, we write it as \begin{equation} \dot{\delta}_2 = A [\sin(\delta_2 - \delta_2^*) + \sin\delta_2^*] \; ,\label{eq:solitary_delta_2} \end{equation} where $\tan\delta_2^* = \sin\delta_1/((1+\varepsilon) - \cos\delta_1)$ and $A = \sin\delta_1/(2\sin\delta_2^*)$. A stable state corresponds to either $\delta_2 = 0$ or $\delta_2 = 2 \delta_2^* + \pi$. The first solution corresponds to the two-cluster state and the second solution to the solitary state. As shown earlier, the phase shifts in the solitary state are related via $2 \Phi_0 - \pi = 2\delta_1 + \delta_2$.
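Instead of solving the cubic explicitly, the border can also be obtained by maximizing $f$ numerically. A short Python sketch (plain grid search with an arbitrary resolution) reproduces the value $\omega\approx0.009$ quoted in the caption of Fig.~\ref{fig:average_frequency_solitary} for $N_r=5$ and $\varepsilon=0.212$:

```python
import numpy as np

def omega_border(eps, Nr):
    # omega = eps / (2 (1 + eps)) * max_x f(x, Nr, eps), with f from
    # Eq. (solitary_root); a dense grid search replaces the explicit cubic roots
    x = np.linspace(1e-6, 1.0, 200001)
    f = x * np.sqrt(1 - x**2) - (Nr - 2) / Nr * x * np.sqrt((1 + eps)**2 - x**2)
    return eps / (2 * (1 + eps)) * np.max(f)

# For Nr = 5 the omega-interval is widest near eps = 0.212,
# where the border lies at omega ~ 0.009
print(omega_border(0.212, 5))
```

The same routine evaluated on a grid of $\varepsilon$ values traces the solid green lines of Fig.~\ref{fig:solitary}.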
This can also be expressed as $\Phi_0 = \delta_1 + \delta_2^*$. Equation (\ref{eq:solitary_phases}) then allows one to write the relation between $\delta_2^*$ and $\Phi_0$ as \begin{equation} \sin\delta_2^* = \frac{1}{1+\varepsilon}\sin\Phi_0 \; ,\label{eq:solitary_phi} \end{equation} and consequently allows for the calculation of $\delta_2$ and $\delta_1$ from $\Phi_0$. $\Phi_0$ can be calculated numerically from Eq.~(\ref{eq:analytical_solitary}) and the resulting phase shifts coincide with the numerical results in Fig.~\ref{fig:average_frequency_solitary}. \paragraph{Stability.} An analytical linear stability analysis shows that the value of $\delta_2$ is stable in the region of existence. Finding the stability for $\delta_1$ is not as simple and can only be done numerically. Still, we find it to be stable in the whole region of existence for $N_a = N_r = 5$. The stability analysis can be found in Appendix~\ref{sec:appendix_stability_solitary}. \paragraph{Case $\omega=0$ vs.\ case $\omega\neq 0$.} Our numerical results indicate that for $\omega \neq 0$, in the parameter range where the solitary state exists, it is the only attractor. This is an essential difference from the previously studied case $\omega = 0$, see Ref.~\cite{maistrenko2014solitary}, where the solitary state does not have full measure. Indeed, for $\omega=0$ the system (\ref{eq:model_compact1},\ref{eq:model_compact2},\ref{eq:common_force_r}) admits splay state solutions $h=0$ with $\Theta_a=\Theta_r=\Phi$ and \begin{equation} \rho_r = \rho_a/(1+\varepsilon) \; . \label{eq:w0_max} \end{equation} For $\omega\ne0$ the state $h=0$ is not a solution and numerical studies indicate that the completely asynchronous case $\rho_a=\rho_r=h=0$ is unstable. Thus, the solitary state remains the only attractor.
\paragraph{Absence of other clustered states.} According to the WS theory~\cite{watanabe1993integrability, watanabe1994constants, pikovsky2011dynamics}, the repulsive group can be fully described by two global angle variables $\Psi$ and $\Gamma$, a global variable $0 \leq\kappa \leq 1$, and $N_r$ constants $\chi_k$, $k=1,\ldots,N_r$. The latter depend on initial conditions and obey three additional constraints. The original phase variables can be obtained from the global ones with the help of the M\"obius transformation~\cite{marvel2009identical, pikovsky2015dynamics} as $e^{i\psi_k}=e^{i\Gamma}(\kappa+e^{i(\chi_k-\Psi)})/(\kappa e^{i(\chi_k-\Psi)}+1)$. For $\kappa<1$, general initial conditions, i.e.\ different $\chi_k$, yield different $\psi_k$ (for an example of such dynamics see the partially synchronous state described in the next Section). For $\kappa=1$ typically all $\psi_k=\psi$, i.e.\ one observes a one-cluster state. However, it is possible that $e^{i(\chi_k-\Psi)}=-1$ for some $k=n$, and then one phase $\psi_n$ differs from the other clustered phases, i.e.\ the solitary state is observed~\cite{pikovskyprivate, maistrenko2014solitary}. Cluster states other than full synchrony and the $(N_r-1,1)$ configuration are therefore not possible, see Ref.~\cite{engelbrecht2014classification} for a rigorous proof. Certainly, a similar consideration can be applied to the attractive group, but there the solitary state is unstable and only the trivial one-cluster state is observed. \subsection{Self-Consistent Partial Synchronization}\label{sec:scps} \subsubsection{Numerical analysis} Outside of the domains of full synchrony and solitary states we find a partially synchronized repulsive group, characterized by the order parameter $0<\rho_r<1$. As for the attractive group, we find that it remains synchronous even for values of $\varepsilon$ as large as 10.
Though the condition of its full synchrony (\ref{eq:condition_attractive}) can be easily extended for the general case of $\rho_r \leq 1$ to $\rho_r \cos(\Theta_r - \Theta_a) < (1+\varepsilon)^{-1}$, we were not able to prove the synchrony analytically and only checked it numerically~\footnote{The attractive group remained fully synchronized even when the units were made non-identical by sampling the frequencies from a normal distribution with zero mean and standard deviation of $10^{-3}$. Hence, stability of the attractive group is not a numerical artifact.}. A diagram of the states, including the domains of existence of full synchrony and of the solitary state, combined with the presentation of the time-averaged order parameter $\bar\rho_r$~\footnote{In the following the time-averaged quantities are denoted by overlined letters.}, can be found in Fig.~\ref{fig:overview}. \begin{figure} \includegraphics{figure04.pdf} \caption{Overview of the states in the parameter space for $N_a=N_r=5$. The black region corresponds to the domain of full synchrony as determined by Eq.~(\ref{eq:stability_full_synchrony}). The blue color shows the domain of solitary states, see Eq.~(\ref{eq:analytical_solitary}). The background outside of these two regions shows the time-averaged order parameter $\bar\rho_r $ of the repulsive group for one initial condition; here it is $\bar\rho_r<1$, so that this is the domain of partial synchrony. White lines show parameter values where the dynamics is analyzed in detail, see Figs.~\ref{fig:frequency_oa},\ref{fig:analytical_order}. }\label{fig:overview} \end{figure} The observed partial synchrony can be seen as a self-organized quasiperiodic state, SOQ (or self-consistent partial synchrony, SCPS)~\cite{rosenblum2007self, pikovsky2009self, clusella2016minimal}. The latter is characterized by the difference between the average frequency of the oscillators and their mean field.
Indeed, in our setup the average frequency (observed frequency) $\bar\nu_r$ of the repulsive units is larger than the average frequency $\bar\Omega_r$ of their mean field. (In fact, the instantaneous frequencies also differ nearly all the time.) Furthermore, the mismatch $\bar\nu_r-\bar\Omega_r$ increases with $\omega$. Nevertheless, both sub-populations remain synchronous on the macroscopic level, i.e.\ the average mean-field frequencies coincide, $\bar\Omega_r =\bar\Omega_a$, see Fig.~\ref{fig:average_frequency}. We notice that close to the border of the solitary state these frequencies are not always well-defined, as indicated by small values of the minimal instantaneous order parameter. In this border domain we observe very long transients; precise identification of the dynamical states here requires a separate investigation. \begin{figure} \includegraphics{figure05.pdf} \caption{a) Observed frequencies of oscillators from the repulsive group, $\bar\nu_r$ (green triangles), and of the mean fields $\bar\Omega_r$ (blue crosses) and $\bar\Omega_a$ (red pluses), for $N_a=N_r = 5$ and $\varepsilon=0.212$. b) The average order parameter of the repulsive group $\bar\rho_r$ (black squares) and the minimal value of this order parameter over a long time interval, $\rho_{r,min}=\text{min}_t[\rho_r(t)]$ (cyan circles). The analytical border of the solitary state is denoted by a dashed gray line; to the left of this line all frequencies coincide. Close to the border of the solitary state, the phase is not well defined, probably due to long transients, as indicated by low values of $\rho_{r,min}$. This leads to discrepancies between $\bar\Omega_a$ and $\bar\Omega_r$. The right border of this domain is (quite arbitrarily) marked by a dotted line. To the right of this border $\bar\Omega_r=\bar\Omega_a$ (blue crosses and red pluses overlap). The results have been obtained taking a perturbed cluster as initial condition.
}\label{fig:average_frequency} \end{figure} Results of similar computations for a large range of $\omega$ are presented in Fig.~\ref{fig:frequency_oa}. However, here the simulations were started from many different initial conditions. As one can see, partially synchronous states are characterized by a large degree of multistability: In fact, the whole range of SCPS is multistable, as can be seen in Fig.~\ref{fig:frequency_oa} as well as in Fig.~\ref{fig:analytical_order} below. Different initial conditions result in different values of $\bar\Omega_r$ and $\bar \nu_r$~\footnote{To obtain these quantities we have averaged the frequencies over a time interval of 500 units, after a transient of 1000 units.}. Interestingly, the variation of these quantities reduces with increasing $\omega$. For all these parameters the mean fields of both populations remain synchronized; we have also checked that their phases remain well-defined~\footnote{Even for values as large as $\omega=1$ and $\varepsilon=1$ the smallest observed order parameter over 100 different initial conditions was 0.08, with the average being 0.2.}. \begin{figure} \centering \includegraphics{figure06.pdf} \caption{The observed average frequency for $N_{a,r}=5$ and $\varepsilon=0.5$, for 100 random initial conditions per $\omega$. The blue squares denote the frequency $\bar\Omega_r$ of the mean field and the green dots denote the frequency $\bar \nu_r$ of the repulsive oscillators. The dashed red line is the solution of Eq.~(\ref{eq:OA_freq}) and the solid black line is the solution for the mean field frequency, see Eq.~(\ref{eq:ws_3dim_tr}).}\label{fig:frequency_oa} \end{figure} Notice that the transition from the solitary state to partial synchrony is accompanied by a change of the direction of rotation with respect to the considered coordinate frame~\footnote{We remind that we use the frame co-rotating with the natural frequency of oscillators in the attractive group.}.
Indeed, before the transition all frequencies are positive, while immediately after it they are negative, see Fig.~\ref{fig:average_frequency}. With a further increase of the parameter $\omega$, the frequency of the repulsive units $\bar \nu_r$ becomes positive and then tends to $\omega$. In fact, for large $\omega$ or for strongly repulsive systems, the repulsive units tend to have a uniform distribution of phases. However, they remain perturbed by the field of the synchronous attractive cluster, so that the uniform distribution can be reached only asymptotically. We illustrate the partially synchronous dynamics of the repulsive group by several snapshots in Fig.~\ref{fig:transient_solitary}, for an intermediate value $\omega=0.1$. We see that the repulsive oscillators form a group (a loose cluster); then the first oscillator in the group accelerates, stays for a short time in anti-phase with respect to the others, so that we can speak of a transient solitary state, and then joins the group again, now becoming the last one in the group. Then the group dissolves again, and now the oscillator that was initially the third in the group stays for some time in anti-phase to the rest of the group, then the group recombines, and so on. Notice that only every second oscillator undergoes the transient solitary state. This dynamics seems to be independent of the initial conditions and was observed both for even and odd $N_{a,r}$. This bears some resemblance to a phenomenon observed in an ensemble of attractive and repulsive active rotators, see Ref.~\cite{zaks2016onset}. \begin{figure} \includegraphics{figure07.pdf} \caption{A specific type of partial synchrony found in the system for intermediate values of $\omega$. The snapshots show the repulsive oscillators over time, where every oscillator is marked by a different color and symbol. At times $t=5.9$ and $t=21.7$ a single oscillator leaves the fuzzy cluster.
The observed system is rather small with $N_r=N_a=5$, $\varepsilon=0.2$, and $\omega=0.1$.}\label{fig:transient_solitary} \end{figure} To conclude the discussion of the multistability of the partially synchronous state, we analyze a large system. In Fig.~\ref{fig:multistability} we show two distributions of phases $\psi$ for $N_{a,r} =1024$. These distributions have been obtained from simulations started from different initial conditions: in one case, illustrated in a), we use a perturbed cluster state, while the case in b) corresponds to random initial conditions. The distributions differ in their form, as well as in their dynamics. In the first case the distribution is bounded and bimodal; it moves with time and ``breathes'', changing its width. Generally the phase differences between the mean fields and the common force vary in time. In the case of random initial conditions the phases spread around the unit circle and their distribution is unimodal and nearly stationary (small fluctuations in time are probably due to finite-size effects). The differences between the distributions also lead to slight differences in the average frequencies. For the perturbed cluster we find $\bar \nu_r = 0.262$ and $\bar \Omega_r = -0.196$, and for the random initial conditions we obtain $\bar \nu_r = 0.264$ and $\bar \Omega_r = -0.181$. \begin{figure} \includegraphics{figure08.pdf} \caption{Phases of the repulsive group and their histograms, for two different initial conditions and $N_{a,r} = 1024$, $\omega = 0.75$, and $\varepsilon = 0.5$. Initial conditions are a perturbed cluster (a) and random (b). The solid red (dashed green) line is the phase of the repulsive (attractive) mean field $\Theta_r$ ($\Theta_a$), while the dotted black line denotes the phase of the forcing $\Phi$.
The distribution in a) changes its width with time, whereas the distribution in b) is practically stationary.}\label{fig:multistability} \end{figure} \subsubsection{Theoretical analysis} Here we provide some analytical estimates for the state of partial synchrony. As already mentioned, for the case $\omega=0$ and partial synchrony of the repulsive units, the relation between the order parameters of the two groups is given by Eq.~(\ref{eq:w0_max}). Since the attractive group is always synchronized, $\rho_a=1$, we obtain $\rho_r=1/(1+\varepsilon)$. We expect that this expression can be used as an estimate also for small $\omega$. We also expect that this expression yields the upper limit for $\rho_r$, since an increase in $\omega$ can only lead to a decrease in the level of synchrony. \begin{figure} \includegraphics{figure09.pdf} \caption{The average repulsive order parameter $\bar \rho_r$ (black dots) for 100 different initial conditions per $\varepsilon$ and $N_r = 5$. In a) $\omega = 0.02$ and in b) $\omega=0.6$. The dashed red line marks $1/(1+\varepsilon)$; as expected, this curve yields a reasonable upper bound estimate for small $\omega$. The solid green line is the solution of Eq.~(\ref{eq:WS_delta}); this estimate works better for large $\omega$.}\label{fig:analytical_order} \end{figure} Next, we recall that according to the WS theory the description of the system (\ref{eq:model_compact1},\ref{eq:model_compact2},\ref{eq:common_force_r}) can be reduced to six equations for collective variables. (Below we use the WS equations in the form suggested in Ref.~\cite{pikovsky2008partially}.) Furthermore, we restrict the consideration to the Ott-Antonsen (OA) manifold~\cite{ott2008low, ott2009long} that corresponds to a uniform distribution of the constants of motion in the WS theory \cite{pikovsky2008partially}. In this case the system is further simplified, with four equations for $\rho_{a,r}$ and $\Theta_{a,r}$.
Moreover, since $\rho_a = 1$, we obtain a three-dimensional system. The final equations follow from the WS equations~\cite{pikovsky2008partially} and read \begin{align} \dot{\rho_r} & = \frac{1 - \rho_r^2}{4} \left[ \cos(\Theta_r - \Theta_a) - (1+\varepsilon) \rho_r \right] \; , \\ \dot{\Theta_r} & = \omega + \frac{1 + \rho_r^2}{4\rho_r} \sin(\Theta_a - \Theta_r) \; , \label{eq:ws_3dim_tr} \\ \dot{\Theta_a} & = \frac{1+\varepsilon}{2} \rho_r \sin(\Theta_a - \Theta_r) \; . \end{align} Introduction of the phase shift between the mean fields $\delta=\Theta_r - \Theta_a$ leads to the two-dimensional system \begin{equation} \begin{aligned} \dot{\rho_r} & = \frac{1 - \rho_r^2}{4} \left[ \cos\delta - (1+\varepsilon) \rho_r \right] \; , \\ \dot{\delta} & = \omega - \frac{1 + [1-2(1+\varepsilon)]\rho_r^2}{4\rho_r}\sin\delta \; . \end{aligned} \label{eq:ws_2dim} \end{equation} Notice that since we are very far from the thermodynamic limit, the OA Ansatz can be considered only as a rather crude approximation and, hence, Eqs.~(\ref{eq:ws_2dim}) provide only some estimates. We are interested in states where the mean fields are locked, and therefore $\delta$ is bounded. We consider the weaker condition $\dot{\delta} = 0$ and also neglect the time variability of the order parameter, taking $\dot{\rho_r} = 0$. Applying this approximation to Eqs.~(\ref{eq:ws_2dim}) we obtain an estimate for the average order parameter $\bar\rho_r$: \begin{equation} \begin{aligned} \cos\delta & = (1+\varepsilon)\bar\rho_r \; , \\ \sin\delta & = \frac{4\omega \bar\rho_r}{1 + (1-2(1+\varepsilon))\bar\rho_r^2}\; . \end{aligned}\label{eq:WS_delta} \end{equation} Eliminating $\delta$ by squaring the equations and adding them, we obtain a cubic equation for $\bar\rho_r^2$. The expressions for the roots are too lengthy and therefore not shown; the results for the average order parameter $\bar\rho_r$ can be seen in Fig.~\ref{fig:analytical_order}.
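As an illustrative numerical cross-check (not part of the original analysis), the stationarity conditions (\ref{eq:WS_delta}) can be solved for $\bar\rho_r$ by imposing $\cos^2\delta+\sin^2\delta=1$, after which the mean-field frequency follows from Eq.~(\ref{eq:ws_3dim_tr}) with the convention $\delta=\Theta_r-\Theta_a$, and the oscillator frequency from Eq.~(\ref{eq:OA_freq}). The sketch below uses $\varepsilon=0.5$, $\omega=0.6$, values in the range of Figs.~\ref{fig:frequency_oa},\ref{fig:analytical_order}; the root bracket was chosen by inspection and is an assumption of this sketch.

```python
import numpy as np
from scipy.optimize import brentq

eps, omega = 0.5, 0.6  # coupling excess and frequency mismatch (illustrative values)

def denom(rho):
    # denominator of sin(delta) in Eq. (WS_delta)
    return 1.0 + (1.0 - 2.0 * (1.0 + eps)) * rho**2

def f(rho):
    # self-consistency: cos^2(delta) + sin^2(delta) - 1 must vanish
    return ((1.0 + eps) * rho)**2 + (4.0 * omega * rho / denom(rho))**2 - 1.0

rho = brentq(f, 1e-6, 0.5)  # bracket chosen by inspection for these parameters
sin_d = 4.0 * omega * rho / denom(rho)
cos_d = (1.0 + eps) * rho
delta = np.arctan2(sin_d, cos_d)

# Mean-field frequencies; with delta = Theta_r - Theta_a, Eq. (ws_3dim_tr) gives
# Omega_r = omega - (1 + rho^2) sin(delta) / (4 rho); the locked attractive
# cluster rotates with Omega_a = -(1 + eps) rho sin(delta) / 2 (same convention).
Omega_r = omega - (1.0 + rho**2) * sin_d / (4.0 * rho)
Omega_a = -(1.0 + eps) * rho * sin_d / 2.0

# Average oscillator frequency of the repulsive units, Eq. (OA_freq)
nu_r = ((1.0 - rho**2) * omega + 2.0 * rho**2 * Omega_r) / (1.0 + rho**2)

print(rho, delta, Omega_r, Omega_a, nu_r)
```

At locking the two mean-field frequencies must coincide, $\bar\Omega_r=\bar\Omega_a$, which the numbers above reproduce; for these parameters $\bar\nu_r$ comes out positive while $\bar\Omega_r$ is negative, consistent with the behavior described around Fig.~\ref{fig:frequency_oa}.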
We see that for large $\omega$ the estimate of $\bar\rho_r$ is quite good. Given $\bar\rho_r$ we find $\delta$ from Eqs.~(\ref{eq:WS_delta}). In turn, this yields an estimate of the average frequency of the repulsive mean field $\bar\Omega_r$ from Eq.~(\ref{eq:ws_3dim_tr}) as $\bar\Omega_r=\omega-(1+\bar\rho_r^2)\sin\delta/(4\bar\rho_r)$. For a known $\bar\Omega_r$ the average frequency of an oscillator can be calculated with the help of the WS theory~\cite{baibolatov2009periodically}. Using this we find the average frequency $\bar \nu_r$ of the repulsive oscillators (for the derivation see Appendix~\ref{sec:freq_OA}) to be \begin{equation} \bar \nu_r = \frac{1 - \bar\rho_r^2}{1 + \bar\rho_r^2} \omega + \frac{2 \bar\rho_r^2}{1 + \bar\rho_r^2} \bar\Omega_r \; .\label{eq:OA_freq} \end{equation} The estimated $\bar \nu_r$ fits the numerical results in Fig.~\ref{fig:frequency_oa} for large $\omega$ quite well; the estimate of $\bar\Omega_r$ is not as good, but it also corresponds to the numerics for large $\omega$. \section{Conclusion} We have analyzed the interplay of attraction and repulsion in a two-group Kuramoto model. In the considered network each group consists of identical elements, but the groups differ in their frequencies. We have found that if attraction is stronger than repulsion then there exists an interval of the frequency mismatch $\omega$ where the system synchronizes, in the sense that each group forms a cluster. The stronger the repulsion, the smaller is this interval of two-cluster synchrony. The shift between the synchronous clusters is determined by $\omega$. A further increase of repulsion or of $|\omega|$ destroys the two-cluster synchrony. However, the attractive group remains synchronized while the repulsive one undergoes a transition to quasiperiodic partial synchrony.
In this state the order parameter of the repulsive group is between zero and one, the mean field frequency remains locked to the frequency of the attractive group, but individual units have a different, generally incommensurate, frequency. For small $|\omega|$ the transition from two-cluster synchrony to partial synchrony occurs via formation of a solitary state. In this regime there exist two clusters (one with attractive units and one with all repulsive units but one) and one solitary repulsive oscillator. The borders of synchronous and solitary regimes have been obtained analytically. We notice that the domain of the solitary state solutions rapidly shrinks with the increase of ensemble size, whereas the partial synchrony persists for large ensembles as well. For large $|\omega|$ the frequencies of the individual units and of the mean field have been estimated with the help of the WS theory. We believe that our results can be useful for analysis of neuronal ensembles with excitatory and inhibitory connections. \begin{acknowledgements} This paper was developed within the scope of the IRTG 1740 / TRP 2015/50122-0, funded by the DFG/ FAPESP. M. R. was supported by the Russian Science Foundation (Grant No. 17-12-01534). The authors thank A. Pikovsky and Y. Maistrenko for helpful discussions. \end{acknowledgements}
\section{Introduction} \label{sec1} A basic assumption of traditional machine learning is that data in the training and test sets are independently sampled in one domain with the identical underlying distribution. However, with the growing amount of heterogeneity in modern data, the assumption of having only one domain may not be reasonable. Transfer learning (TL) is a learning strategy that enables us to learn from a source domain with plenty of labeled data as well as a target domain with no or very few labeled data, in order to design a classifier in the target domain with better generalization performance than one trained on target-only data. This can reduce the effort of collecting labeled data for the target domain, which might be very costly, if not impossible. Due to its importance, there has been ongoing research on the topic of transfer learning, and many surveys in recent years cover transfer learning and domain adaptation methods from different perspectives \cite{survey2010,survey2017deep, survey2015, survey2016, survey2017}. If we train a model in one domain and directly apply it in another, the trained model may not generalize well, but if the domains are related, appropriate transfer learning and domain adaptation methods can borrow information from all the data across the domains to develop better generalizable models in the target domain. Transfer learning in medical genomics is desirable, since the number of labeled data samples is often very limited due to the difficulty of obtaining disease samples and the prohibitive costs of human clinical trials. However, it is relatively easier to obtain gene-expression data for cell lines or other model species like mice or dogs.
If these different life systems share the same underlying disease cellular mechanisms, we may utilize data in cell lines or model species as our source domain to develop transfer learning methods for more accurate human disease prognosis in the target domain \cite{zou2015transfer,ganchev2011transfer}. \subsection{Related Works} Domain adaptation (DA) is a specific case of transfer learning where the source and target domains have the same classes or categories \cite{survey2017deep, survey2015, survey2017}. DA methods either adapt the model learned in the source domain to be applied in the target domain or adapt the source data so that their distribution can be close to that of the target data. Depending on the availability of labeled target data, DA methods are categorized as unsupervised and semi-supervised algorithms. Unsupervised DA applies to cases where there are no labeled target data and the algorithm uses only unlabeled data in the target domain along with source labeled data \cite{gong2012geodesic}. Semi-supervised DA methods use both the unlabeled and a few labeled target data to learn a classifier in the target domain with the help of source labeled data \cite{HFA2012, hoffman2013, hoffman2014, CDLS2016}. Depending on whether the source and target domains have the same feature space with the same feature dimension, there are homogeneous and heterogeneous DA methods. The first direction in homogeneous DA is instance re-weighting, for which the most popular measure to re-weight the data is the Maximum Mean Discrepancy (MMD) \cite{MMD} between the two domains. Transfer Adaptive Boosting (TrAdaBoost) \cite{dai2007boosting} is another method that adaptively sets the weights for the source and target samples during each iteration based on the relevance of the source and target data to help train the target classifier. Another direction is model or parameter adaptation.
There are several efforts to adapt the SVM classifier designed in the source domain for the target domain, for example, based on the residual error \cite{duan2009domain,bruzzone2010domain}. Feature augmentation methods, such as Geodesic Flow Sampling (GFS) and Geodesic Flow Kernel (GFK) \cite{gong2012geodesic}, derive intermediate subspaces using geodesic flows, which interpolate between the source and target domains. Finding an invariant latent domain in which the distance between the empirical distributions of the source and target data is minimized is another direction to tackle the problem of domain adaptation, such as the Invariant Latent Space (ILS) in \cite{ILS2017}. The authors of \cite{ILS2017} proposed to learn an invariant latent Hilbert space to address both the unsupervised and semi-supervised DA problems, where a notion of domain variance is minimized while simultaneously maximizing a measure of discriminatory power using Riemannian optimization techniques. Max-Margin Domain Transform (MMDT) \cite{hoffman2013} is a semi-supervised feature-transformation DA method which uses a cost function based on the misclassification loss and jointly optimizes both the transformation and classifier parameters. Another domain-invariant representation method \cite{OT} matches the distributions in the source and target domains via a regularized optimal transportation model. Heterogeneous Feature Augmentation (HFA) \cite{HFA2012} is a heterogeneous DA method which typically embeds the source and target data into a common latent space prior to data augmentation. Domain adaptation has been recently studied in deep learning frameworks like the deep adaptation network (DAN) \cite{long2015learning}, residual transfer networks (RTN) \cite{long2016unsupervised}, and models based on generative adversarial networks (GAN) such as the domain adversarial neural network (DaNN) \cite{ganin2016domain} and the coupled GAN (CoGAN) \cite{liu2016coupled}.
Although deep DA methods have shown promising results, they require a fairly large amount of labeled data. \subsection{Main Contributions} This paper treats homogeneous transfer learning and domain adaptation from a Bayesian perspective, a key aim being a better theoretical understanding of when data in the source domain are ``transferable'' to help learning in the target domain. When learning complex systems with limited data, Bayesian learning can integrate prior knowledge to compensate for the generalization performance loss due to the lack of data. Rooted in Optimal Bayesian Classifiers (OBC) \cite{Lori1,Lori2}, which give the classifiers attaining the Bayesian minimum mean squared error (MMSE) over uncertainty classes of feature-label distributions, we propose a Bayesian transfer learning framework and the corresponding Optimal Bayesian Transfer Learning (OBTL) classifier to formulate the OBC in the target domain by taking advantage of both the available data and the joint prior knowledge in the source and target domains. In this Bayesian learning framework, transfer learning from the source to the target domain is through a joint prior probability density function for the model parameters of the feature-label distributions of the two domains. By explicitly modeling the dependency of the model parameters of the feature-label distributions, the posterior of the target model parameters can be updated via the joint prior probability distribution function in conjunction with the source and target data. Based on that, we derive the \textit{effective} class-conditional densities of the target domain, by which the OBTL classifier is constructed. Our problem definition is the same as in the aforementioned domain adaptation methods, where there are plenty of labeled source data and few labeled target data. The source and target data follow different multivariate Gaussian distributions with arbitrary mean vectors and precision (inverse of covariance) matrices.
For the OBTL, we define a joint Gaussian-Wishart prior distribution, where the two precision matrices of the two domains are jointly connected. This joint prior distribution for the two precision matrices acts like a bridge through which the useful knowledge of the source domain can be transferred to the target domain, making the posterior of the target parameters tighter with less uncertainty. With such a Bayesian transfer learning framework and several theorems from multivariate statistics, we define an appropriate joint prior for the precision matrices using hypergeometric functions of matrix argument, whose marginal distributions are Wishart as well. The corresponding closed-form posterior distributions for the target model parameters are derived by integrating out all the source model parameters. Having closed-form posteriors facilitates closed-form effective class-conditional densities. Hence, the OBTL classifier can be derived based on the corresponding hypergeometric functions and does not need iterative and costly techniques like MCMC sampling. Although the OBTL classifier has a closed form, computing these hypergeometric functions involves the computation of series of zonal polynomials, which is time-consuming and not scalable to high dimensions. To resolve this issue, we use the Laplace approximations of these functions, which preserve the good prediction performance of the OBTL while making it efficient and scalable. The performance of the OBTL is tested on both synthetic data and real-world benchmark image datasets to show its superior performance over state-of-the-art domain adaptation methods. The paper is organized as follows. Section \ref{sec2} introduces the Bayesian transfer learning framework. Section \ref{sec3} derives the closed-form posteriors of the target parameters, via which Section \ref{sec4} obtains the effective class-conditional densities in the target domain.
Section \ref{sec5} derives the OBTL classifier, and Section \ref{sec6} presents the OBC in the target domain and shows that the OBTL classifier reduces to the target-only OBC when there is no interaction between the domains. Section \ref{sec7} presents experimental results using both synthetic and real-world benchmark data. Section \ref{sec8} concludes the paper. Appendix \ref{appendix:hypergeometric} states some useful theorems for the generalized hypergeometric functions of matrix argument. Appendices \ref{appendix:posterior} and \ref{appendix:effective} provide the proofs of our main theorems. Finally, Appendix \ref{appendix:Laplace} presents the Laplace approximation of Gauss hypergeometric functions of matrix argument. \section{Bayesian Transfer Learning Framework} \label{sec2} We consider a supervised transfer learning problem in which there are $L$ common classes (labels) in each domain. Let $\mathcal{D}_{s}$ and $\mathcal{D}_{t}$ denote the labeled datasets of the source and target domains with sizes $N_{s}$ and $N_{t}$, respectively, where $N_{t}\ll N_{s}$. Let $\mathcal{D}_{s}^{l}=\left\{ \mathbf{x}_{s,1}^{l},\mathbf{x}_{s,2}^{l},\cdots ,\mathbf{x}_{s,n_{s}^{l}}^{l}\right\}$, $l\in \{1,\cdots ,L\}$, where $n_{s}^{l}$ denotes the size of the data in the source domain for label $l$. Similarly, let $\mathcal{D}_{t}^{l}=\left\{ \mathbf{x}_{t,1}^{l},\mathbf{x}_{t,2}^{l},\cdots ,\mathbf{x}_{t,n_{t}^{l}}^{l}\right\}$, $l\in \{1,\cdots ,L\}$, where $n_{t}^{l}$ denotes the size of the data in the target domain for label $l$. There is no intersection between $\mathcal{D}_t^i$ and $\mathcal{D}_t^j$, or between $\mathcal{D}_s^i$ and $\mathcal{D}_s^j$, for any $i\neq j$, $i,j\in \{1,\cdots,L\}$. Obviously, we have $\mathcal{D}_{s}=\cup _{l=1}^{L}\mathcal{D}_{s}^{l}$, $\mathcal{D}_{t}=\cup _{l=1}^{L}\mathcal{D}_{t}^{l}$, $N_{s}=\sum_{l=1}^{L}n_{s}^{l}$, and $N_{t}=\sum_{l=1}^{L}n_{t}^{l}$.
Since we consider the homogeneous transfer learning scenario, where the feature spaces are the same in both the source and target domains, $\mathbf{x}_{s}^{l}$ and $\mathbf{x}_{t}^{l}$ are $d\times 1$ vectors for the $d$ features of the source and target domains, respectively. Letting $\mathbf{x}^{l}=\left[ {\mathbf{x}_{t}^{l}}^{\prime },{\mathbf{x}_{s}^{l}}^{\prime }\right] ^{\prime }$ be a $2d\times 1$ augmented feature vector, with $\mathbf{A}^{\prime }$ denoting the transpose of matrix $\mathbf{A}$, a general joint sampling model would take the Gaussian form \begin{equation} \mathbf{x}^{l}\sim \mathcal{N}\left( \mathbf{\mu }^{l},\left( \mathbf{\Lambda }^{l}\right) ^{-1}\right) ,~~~l\in \{1,\cdots ,L\}, \label{x} \end{equation} with \begin{equation} \mathbf{\mu }^{l}=\begin{bmatrix} \mathbf{\mu }_{t}^{l} \\ \mathbf{\mu }_{s}^{l}\end{bmatrix},~~~~\mathbf{\Lambda }^{l}=\begin{bmatrix} \mathbf{\Lambda }_{t}^{l} & \mathbf{\Lambda }_{ts}^{l} \\ {\mathbf{\Lambda }_{ts}^{l}}^{\prime } & \mathbf{\Lambda }_{s}^{l}\end{bmatrix}, \label{mu_lambda} \end{equation} where $\mathbf{\mu }^{l}$ is the $2d\times 1$ mean vector, and $\mathbf{\Lambda }^{l}$ is the $2d\times 2d$ precision matrix. In this model, $\mathbf{\Lambda }_{t}^{l}$ and $\mathbf{\Lambda }_{s}^{l}$ account for the interactions of the features within the target and source domains, respectively, and $\mathbf{\Lambda }_{ts}^{l}$ accounts for the interactions of the features across the source and target domains, for any class $l\in \{1,\cdots ,L\}$. In this Gaussian setting, it is common to use a Wishart distribution as a prior for the precision matrix $\mathbf{\Lambda }^{l}$, since it is a conjugate prior. In transfer learning, it is not realistic to assume joint sampling of the source and target domains. Therefore, we cannot use the general joint sampling model. Instead, we assume that there are two datasets separately sampled from the source and target domains.
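For intuition, the joint model (\ref{x})--(\ref{mu_lambda}) can be simulated directly. The sketch below is an illustration only; the dimension, degrees of freedom, and scale matrix are assumed placeholder values, not taken from the paper. It draws one $2d\times 2d$ precision matrix from a Wishart prior, partitions it into the blocks of (\ref{mu_lambda}), and jointly samples augmented vectors, i.e.\ exactly the joint sampling that is argued above to be unrealistic for transfer learning:

```python
import numpy as np
from scipy.stats import wishart, multivariate_normal

rng = np.random.default_rng(0)
d = 3                    # number of features per domain (placeholder)
nu = 2 * d + 2           # Wishart degrees of freedom, must satisfy nu >= 2d
M = np.eye(2 * d) + 0.3 * np.ones((2 * d, 2 * d))  # positive-definite scale (assumed)

# One draw of the (2d x 2d) joint precision matrix Lambda^l ~ W_{2d}(M, nu)
Lam = wishart(df=nu, scale=M).rvs(random_state=rng)
Lam_t, Lam_ts, Lam_s = Lam[:d, :d], Lam[:d, d:], Lam[d:, d:]  # blocks of Eq. (mu_lambda)

# Jointly sample augmented vectors x^l = [x_t; x_s] ~ N(mu^l, (Lambda^l)^{-1})
mu = np.zeros(2 * d)     # placeholder mean vector
X = multivariate_normal(mean=mu, cov=np.linalg.inv(Lam)).rvs(size=1000, random_state=rng)
x_t, x_s = X[:, :d], X[:, d:]
```

Because the whole matrix $\mathbf{\Lambda}^l$ is Wishart, its diagonal blocks \texttt{Lam\_t} and \texttt{Lam\_s} are themselves Wishart-distributed, which is the property the framework exploits when the cross term $\mathbf{\Lambda}_{ts}^{l}$ is marginalized out.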
Thus, we define a joint prior distribution for $\mathbf{\Lambda }_{s}^{l}$ and $\mathbf{\Lambda }_{t}^{l}$ by marginalizing out the term $\mathbf{\Lambda }_{ts}^{l}$. This joint prior distribution of the parameters of the source and target domains accounts for the dependency (or ``relatedness'') between the domains. Given this adjustment to account for transfer learning, we utilize a Gaussian model for the feature-label distribution in each domain: \begin{equation} \mathbf{x}_{z}^{l}\sim \mathcal{N}\left( \mathbf{\mu }_{z}^{l},{\left( \mathbf{\Lambda }_{z}^{l}\right) }^{-1}\right) ,~~~l\in \{1,\cdots ,L\}, \label{x_s_x_t} \end{equation} where the subscript $z\in \{s,t\}$ denotes the source $s$ or target $t$ domain, $\mathbf{\mu }_{s}^{l}$ and $\mathbf{\mu }_{t}^{l}$ are the $d\times 1$ mean vectors in the source and target domains for label $l$, respectively, $\mathbf{\Lambda }_{s}^{l}$ and $\mathbf{\Lambda }_{t}^{l}$ are the $d\times d$ precision matrices in the source and target domains for label $l$, respectively, and a joint Gaussian-Wishart distribution is employed as a prior for the mean vectors and precision matrices of the Gaussian models. Under these assumptions, the joint prior distribution for $\mathbf{\mu }_{s}^{l}$, $\mathbf{\mu }_{t}^{l}$, $\mathbf{\Lambda }_{s}^{l}$, and $\mathbf{\Lambda }_{t}^{l}$ takes the form \begin{equation} \label{general_joint_prior} p\left( \mathbf{\mu }_{s}^{l},\mathbf{\mu }_{t}^{l},\mathbf{\Lambda }_{s}^{l},\mathbf{\Lambda }_{t}^{l}\right) =p\left( \mathbf{\mu }_{s}^{l},\mathbf{\mu }_{t}^{l}|\mathbf{\Lambda }_{s}^{l},\mathbf{\Lambda }_{t}^{l}\right) p\left( \mathbf{\Lambda }_{s}^{l},\mathbf{\Lambda }_{t}^{l}\right) .
\end{equation} To facilitate conjugate priors, we assume that, for any class $l\in \{1,\cdots ,L\}$, $\mathbf{\mu }_{s}^{l}$ and $\mathbf{\mu }_{t}^{l}$ are conditionally independent given $\mathbf{\Lambda }_{s}^{l}$ and $\mathbf{\Lambda }_{t}^{l}$, so that \begin{equation} p\left( \mathbf{\mu }_{s}^{l},\mathbf{\mu }_{t}^{l},\mathbf{\Lambda }_{s}^{l},\mathbf{\Lambda }_{t}^{l}\right) =p\left( \mathbf{\mu }_{s}^{l}|\mathbf{\Lambda }_{s}^{l}\right) p\left( \mathbf{\mu }_{t}^{l}|\mathbf{\Lambda }_{t}^{l}\right) p\left( \mathbf{\Lambda }_{s}^{l},\mathbf{\Lambda }_{t}^{l}\right) , \label{p_mu} \end{equation} and that both $p\left( \mathbf{\mu }_{s}^{l}|\mathbf{\Lambda }_{s}^{l}\right)$ and $p\left( \mathbf{\mu }_{t}^{l}|\mathbf{\Lambda }_{t}^{l}\right)$ are Gaussian, \begin{equation} \mathbf{\mu }_{z}^{l}|\mathbf{\Lambda }_{z}^{l}\sim \mathcal{N}\left( \mathbf{m}_{z}^{l},\left( \kappa _{z}^{l}\mathbf{\Lambda }_{z}^{l}\right) ^{-1}\right) , \label{mu_s_mu_t} \end{equation} where $\mathbf{m}_{z}^{l}$ is the $d\times 1$ mean vector of $\mathbf{\mu }_{z}^{l}$, and $\kappa _{z}^{l}$ is a positive scalar hyperparameter. We need to define a joint distribution for $\mathbf{\Lambda }_{s}^{l}$ and $\mathbf{\Lambda }_{t}^{l}$. In the case of a prior for either $\mathbf{\Lambda }_{s}^{l}$ or $\mathbf{\Lambda }_{t}^{l}$ alone, we use a Wishart distribution as the conjugate prior. Here we desire a joint distribution for $\mathbf{\Lambda }_{s}^{l}$ and $\mathbf{\Lambda }_{t}^{l}$ whose marginal distributions for both $\mathbf{\Lambda }_{s}^{l}$ and $\mathbf{\Lambda }_{t}^{l}$ are Wishart. We present some definitions and theorems that will be used in deriving the OBTL classifier.
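Before the formal statements, the hierarchical sampling implied by Eqs.~(\ref{x_s_x_t})--(\ref{mu_s_mu_t}) can be sketched for one class. All hyperparameter values below are assumed placeholders, and, for simplicity, the sketch draws $\mathbf{\Lambda}_{s}^{l}$ and $\mathbf{\Lambda}_{t}^{l}$ independently, i.e.\ it omits precisely the cross-domain coupling $p(\mathbf{\Lambda}_{s}^{l},\mathbf{\Lambda}_{t}^{l})$ that the OBTL framework constructs:

```python
import numpy as np
from scipy.stats import wishart, multivariate_normal

rng = np.random.default_rng(1)
d, nu, kappa = 2, 6, 2.0      # feature dimension and hyperparameters (placeholders)
m = np.zeros(d)               # prior mean m_z^l (placeholder)
M = np.eye(d)                 # Wishart scale matrix (placeholder)

def sample_domain(n):
    """Gaussian-Wishart generative model for one domain and one class:
    Lambda ~ W_d(M, nu), mu | Lambda ~ N(m, (kappa Lambda)^{-1}),
    then n observations x ~ N(mu, Lambda^{-1})."""
    Lam = wishart(df=nu, scale=M).rvs(random_state=rng)
    mu = multivariate_normal(mean=m, cov=np.linalg.inv(kappa * Lam)).rvs(random_state=rng)
    X = multivariate_normal(mean=mu, cov=np.linalg.inv(Lam)).rvs(size=n, random_state=rng)
    return mu, Lam, X

mu_src, Lam_src, X_src = sample_domain(200)  # plentiful labeled source data
mu_tgt, Lam_tgt, X_tgt = sample_domain(5)    # scarce labeled target data
```

Under the actual OBTL prior the two Wishart draws are not independent; their joint density, obtained from a partitioned Wishart matrix, is what allows the abundant source sample to tighten the posterior of the target parameters.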
\begin{definition} \label{definition1} A random $d\times d$ symmetric positive-definite matrix $\mathbf{\Lambda }$ has a nonsingular Wishart distribution with $\nu$ degrees of freedom, $W_{d}(\mathbf{M},\nu )$, if $\nu \geq d$ and $\mathbf{M}$ is a $d\times d$ positive-definite matrix ($\mathbf{M}>0$), and the density is \begin{equation} p(\mathbf{\Lambda })=\left[ 2^{\frac{\nu d}{2}}\Gamma _{d}\left( \frac{\nu }{2}\right) |\mathbf{M}|^{\frac{\nu }{2}}\right] ^{-1}|\mathbf{\Lambda }|^{\frac{\nu -d-1}{2}}\mathrm{etr}\left( -\frac{1}{2}\mathbf{M}^{-1}\mathbf{\Lambda }\right) , \label{wishart} \end{equation} where $|\mathbf{A}|$ is the determinant of $\mathbf{A}$, $\mathrm{etr}(\mathbf{A})=\exp \left( \mathrm{tr}(\mathbf{A})\right)$, and $\Gamma _{d}(\alpha )$ is the multivariate gamma function given by \begin{equation} \Gamma _{d}(\alpha )=\pi ^{\frac{d(d-1)}{4}}\prod_{i=1}^{d}\Gamma \left( \alpha -\frac{i-1}{2}\right) . \label{Gamma_multi} \end{equation} \end{definition} \begin{prop} \label{proposition1} \cite{muirhead}: If $\mathbf{\Lambda} \sim W_d(\mathbf{M},\nu)$, and $\mathbf{A}$ is an $r\times d$ matrix of rank $r$, where $r \le d$, then $\mathbf{A} \mathbf{\Lambda} \mathbf{A}^{\prime } \sim W_r(\mathbf{A}\mathbf{M}\mathbf{A}^{\prime },\nu)$.
\end{prop} \begin{corollary} \label{corollary1} If $\mathbf{\Lambda} \sim W_d(\mathbf{M},\nu)$ and $\mathbf{\Lambda} = \begin{psmallmatrix} \mathbf{\Lambda}_{11} & \mathbf{\Lambda}_{12} \\ \mathbf{\Lambda}_{12}^{'} & \mathbf{\Lambda}_{22} \end{psmallmatrix}$, where $\mathbf{\Lambda}_{11}$ and $\mathbf{\Lambda}_{22}$ are $d_1\times d_1$ and $d_2 \times d_2$ submatrices, respectively, and if $\mathbf{M} = \begin{psmallmatrix} \mathbf{M}_{11} & \mathbf{M}_{12} \\ \mathbf{M}_{12}^{'} & \mathbf{M}_{22} \end{psmallmatrix}$ is the corresponding partition of $\mathbf{M}$, with $\mathbf{M}_{11}$ and $\mathbf{M}_{22}$ being $d_1 \times d_1$ and $d_2 \times d_2$ submatrices, respectively, then $\mathbf{\Lambda}_{11} \sim W_{d_1}(\mathbf{M}_{11},\nu)$ and $\mathbf{\Lambda}_{22} \sim W_{d_2}(\mathbf{M}_{22},\nu)$. \end{corollary} By Corollary \ref{corollary1}, placing a Wishart distribution on the precision matrix $\mathbf{\Lambda}^l$ (\ref{mu_lambda}) of the joint model in (\ref{x}) guarantees Wishart marginal distributions for $\mathbf{\Lambda}_s^l$ and $\mathbf{\Lambda}_t^l$ in the source and target domains separately, which is the desired property. We now introduce a theorem, proposed in \cite{joint_wishart}, which gives the joint distribution of the two diagonal submatrices of a partitioned Wishart matrix. \begin{theorem} \label{theorem1} \cite{joint_wishart}: Let $\mathbf{\Lambda} = \begin{psmallmatrix} \mathbf{\Lambda}_{11} & \mathbf{\Lambda}_{12} \\ \mathbf{\Lambda}_{12}^{'} & \mathbf{\Lambda}_{22} \end{psmallmatrix}$ be a $(d_1+d_2) \times (d_1+d_2)$ partitioned Wishart random matrix, where the diagonal partitions are of sizes $d_1 \times d_1$ and $d_2 \times d_2$, respectively.
The Wishart distribution of $\mathbf{\Lambda}$ has $\nu \ge d_1 +d_2$ degrees of freedom and positive-definite scale matrix $\mathbf{M}= \begin{psmallmatrix} \mathbf{M}_{11} & \mathbf{M}_{12} \\ \mathbf{M}_{12}^{'} & \mathbf{M}_{22}\end{psmallmatrix}$ partitioned in the same way as $\mathbf{\Lambda}$. The joint distribution of the two diagonal partitions $\mathbf{\Lambda}_{11}$ and $\mathbf{\Lambda}_{22}$ has the density function given by \begin{equation} \begin{aligned} & p(\mathbf{\Lambda}_{11},\mathbf{\Lambda}_{22}) = \\ & K ~ \mathrm{etr}\left(-\frac{1}{2} \left(\mathbf{M}_{11}^{-1} + \mathbf{F}^{'}\mathbf{C}_2 \mathbf{F}\right)\mathbf{\Lambda}_{11}\right) \mathrm{etr}\left(-\frac{1}{2} \mathbf{C}_2^{-1} \mathbf{\Lambda}_{22}\right) \\ & \times |\mathbf{\Lambda}_{11}|^{\frac{\nu-d_2-1}{2}} ~ |\mathbf{\Lambda}_{22}|^{\frac{\nu-d_1-1}{2}} ~ ~_0F_1\left(\frac{\nu}{2}; \frac{1}{4}\mathbf{G} \right), \end{aligned} \end{equation} where $\mathbf{C}_2=\mathbf{M}_{22} - \mathbf{M}_{12}^{^{\prime }}\mathbf{M}_{11}^{-1}\mathbf{M}_{12}$, $\mathbf{F}=\mathbf{C}_2^{-1} \mathbf{M}_{12}^{^{\prime }} \mathbf{M}_{11}^{-1}$, $\mathbf{G}=\mathbf{\Lambda}_{22}^{\frac{1}{2}} \mathbf{F} \mathbf{\Lambda}_{11} \mathbf{F}^{^{\prime }}\mathbf{\Lambda}_{22}^{\frac{1}{2}}$, $K^{-1} = 2^{\frac{(d_1+d_2)\nu}{2}} \Gamma_{d_1}\left(\frac{\nu}{2}\right) \Gamma_{d_2}\left(\frac{\nu}{2}\right) |\mathbf{M}|^{\frac{\nu}{2}}$, and $~_0F_1$ is the generalized matrix-variate hypergeometric function.
\end{theorem} \begin{definition} \label{definition2} \cite{nagar2017properties}: The generalized hypergeometric function of one matrix argument is defined by \begin{eqnarray} \label{hypergeo} ~_pF_q(a_1,\cdots,a_p;b_1,\cdots,b_q;\mathbf{X}) \hspace{2cm} \notag \\ = \sum_{k=0}^{\infty} \sum_{\kappa \vdash k} \frac{(a_1)_\kappa \cdots (a_p)_{\kappa}}{(b_1)_{\kappa} \cdots (b_q)_\kappa} \frac{C_\kappa(\mathbf{X})}{k!}, \end{eqnarray} where $a_i$, $i=1,\cdots,p$, and $b_j$, $j=1,\cdots,q$, are arbitrary complex (real in our case) numbers, $C_\kappa(\mathbf{X})$ is the zonal polynomial of the $d\times d$ symmetric matrix $\mathbf{X}$ corresponding to the ordered partition $\kappa=(k_1,\cdots,k_d)$, $k_1 \ge \cdots \ge k_d \ge 0$, $k_1+\cdots +k_d=k$, and $\sum_{\kappa\vdash k}$ denotes summation over all partitions $\kappa$ of $k$. The generalized hypergeometric coefficient $(a)_\kappa$ is defined by \begin{equation} (a)_\kappa = \prod_{i=1}^d \left(a - \frac{i-1}{2}\right)_{k_i}, \end{equation} where $(a)_r=a(a+1)\cdots (a+r-1)$, $r=1,2,\cdots$, with $(a)_0=1$. \end{definition} Conditions for convergence of the series in (\ref{hypergeo}) are available in the literature \cite{constantine1963}.
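The generalized hypergeometric coefficient $(a)_\kappa$ is a product of ordinary rising factorials and is straightforward to evaluate. A minimal sketch using SciPy's Pochhammer function; the values in the comments follow directly from the definition:

```python
import numpy as np
from scipy.special import poch  # rising factorial (a)_r = a(a+1)...(a+r-1)

def gh_coeff(a, kappa):
    """Generalized hypergeometric coefficient (a)_kappa for an ordered
    partition kappa = (k_1, ..., k_d) with k_1 >= ... >= k_d >= 0."""
    return float(np.prod([poch(a - (i - 1) / 2.0, k)
                          for i, k in enumerate(kappa, start=1)]))

# For d = 1 it reduces to the ordinary rising factorial:
print(gh_coeff(2.5, (3,)))    # (2.5)_3 = 2.5 * 3.5 * 4.5 = 39.375
print(gh_coeff(2.5, (2, 1)))  # (2.5)_2 * (2.0)_1 = 8.75 * 2.0 = 17.5
```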
From (\ref{hypergeo}) it follows that \vspace{-.2cm} \begin{equation} \begin{aligned} &~_0F_0(\mathbf{X}) = \sum_{k=0}^{\infty} \sum_{\kappa \vdash k} \frac{C_\kappa(\mathbf{X})}{k!} = \sum_{k=0}^{\infty} \frac{(\mathrm{tr}(\mathbf{X}))^k}{k!} = \mathrm{etr}(\mathbf{X}), \\ &~_1F_0(a;\mathbf{X}) = \sum_{k=0}^{\infty} \sum_{\kappa \vdash k} \frac{(a)_\kappa C_\kappa(\mathbf{X})}{k!} = |\mathbf{I}_d - \mathbf{X}|^{-a}, ~~ ||\mathbf{X}|| <1, \\ &~_0F_1(b;\mathbf{X}) = \sum_{k=0}^{\infty} \sum_{\kappa \vdash k} \frac{ C_\kappa(\mathbf{X})}{(b)_\kappa k!}, \\ &~_1F_1(a;b;\mathbf{X}) = \sum_{k=0}^{\infty} \sum_{\kappa \vdash k} \frac{(a)_\kappa}{(b)_\kappa}\frac{ C_\kappa(\mathbf{X})}{k!}, \\ &~_2F_1(a,b;c;\mathbf{X}) = \sum_{k=0}^{\infty} \sum_{\kappa \vdash k} \frac{(a)_\kappa (b)_\kappa}{(c)_\kappa}\frac{ C_\kappa(\mathbf{X})}{k!}, ~~~||\mathbf{X}|| <1, \end{aligned} \label{Gauss} \end{equation} where $||\mathbf{X}|| <1$ means that the maximum of the absolute values of the eigenvalues of $\mathbf{X}$ is less than $1$. $_1F_1(a;b;\mathbf{X})$ and $_2F_1(a,b;c;\mathbf{X})$ are called the confluent and Gauss hypergeometric functions of matrix argument, respectively. See Appendix \ref{appendix:hypergeometric} for some useful theorems on zonal polynomials and generalized hypergeometric functions of matrix arguments. We use those theorems to derive the posterior densities and posterior predictive densities of the target parameters in closed form, in terms of confluent and Gauss hypergeometric functions of matrix argument, in Sections \ref{sec3} and \ref{sec4}, respectively.
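For a $1\times 1$ matrix argument the zonal polynomial is $C_\kappa(x)=x^k$ and the series in (\ref{Gauss}) collapse to the classical hypergeometric series, so the identities can be spot-checked with ordinary scalar routines. This is only a sanity check, not a substitute for the matrix-argument functions needed later:

```python
import math
from scipy.special import hyp1f1, hyp2f1

x, a, b, c = 0.3, 1.5, 2.0, 3.5

# 0F0(x) = etr(x) = e^x for a scalar argument.
f00 = sum(x**k / math.factorial(k) for k in range(40))
assert abs(f00 - math.exp(x)) < 1e-10

# 1F0(a; x) = (1 - x)^{-a}; obtained via 2F1(a, b; b; x), valid for |x| < 1.
f10 = hyp2f1(a, b, b, x)
assert abs(f10 - (1 - x) ** (-a)) < 1e-10

# The confluent 1F1 and Gauss 2F1 evaluate directly in the scalar case:
print(hyp1f1(a, c, x), hyp2f1(a, b, c, x))
```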
Now, using Theorem \ref{theorem1}, we define the joint prior distribution, $p(\mathbf{\Lambda }_{s}^{l},\mathbf{\Lambda }_{t}^{l})$ in (\ref{p_mu}), of the precision matrices of the source and target domains for class $l\in \{1,\cdots ,L\}$ as follows: \vspace{-.2cm} \begin{equation} \begin{aligned} &p(\mathbf{\Lambda}_{t}^l,\mathbf{\Lambda}_{s}^l) = K^l \mathrm{etr}\left(-\frac{1}{2} \left({\left(\mathbf{M}_{t}^l\right)}^{-1} + {\mathbf{F}^l}^{'}\mathbf{C}^l \mathbf{F}^l\right)\mathbf{\Lambda}_{t}^l\right) \\ & \times \mathrm{etr}\left(-\frac{1}{2} {\left(\mathbf{C}^l\right)}^{-1} \mathbf{\Lambda}_{s}^l\right) \\ & \times \left|\mathbf{\Lambda}_{t}^l\right|^{\frac{\nu^l-d-1}{2}} \left|\mathbf{\Lambda}_{s}^l\right|^{\frac{\nu^l-d-1}{2}} ~_0F_1\left(\frac{\nu^l}{2}; \frac{1}{4}\mathbf{G}^l \right), \end{aligned} \label{joint} \end{equation} where $\mathbf{M}^{l}= \begin{psmallmatrix} \mathbf{M}_{t}^{l} & \mathbf{M}_{ts}^{l} \\ {\mathbf{M}_{ts}^{l}}^{'} & \mathbf{M}_{s}^{l} \end{psmallmatrix}$ is a $2d\times 2d$ positive-definite scale matrix, $\nu ^{l}\geq 2d$ denotes the degrees of freedom, and \vspace{-.2cm} \begin{equation} \begin{aligned} \mathbf{C}^{l} &= \mathbf{M}_{s}^{l}-{\mathbf{M}_{ts}^{l}}^{^{\prime }}{\left( \mathbf{M}_{t}^{l}\right) }^{-1}\mathbf{M}_{ts}^{l}, \\ \mathbf{F}^{l} &= {\left( \mathbf{C}^{l}\right) }^{-1}{\mathbf{M}_{ts}^{l}}^{^{\prime }}{\left( \mathbf{M}_{t}^{l}\right) }^{-1}, \\ \mathbf{G}^{l} &= {\mathbf{\Lambda }_{s}^{l}}^{\frac{1}{2}}\mathbf{F}^{l}\mathbf{\Lambda }_{t}^{l}{\mathbf{F}^{l}}^{^{\prime }}{\mathbf{\Lambda }_{s}^{l}}^{\frac{1}{2}}, \\ {(K^{l})}^{-1} &= 2^{d\nu ^{l}}\Gamma _{d}^{2}\left( \frac{\nu ^{l}}{2}\right) |\mathbf{M}^{l}|^{\frac{\nu ^{l}}{2}}.
\end{aligned} \end{equation} Using Corollary \ref{corollary1}, $\mathbf{\Lambda }_{t}^{l}$ and $\mathbf{% \Lambda }_{s}^{l}$ have the following Wishart marginal distributions: \begin{equation} \mathbf{\Lambda }_{z}^{l}\sim W_{d}(\mathbf{M}_{z}^{l},\nu ^{l}),~~~l\in \{1,\cdots ,L\},~~~z\in \{s,t\}. \label{marg_t} \end{equation} \vspace{-.3cm} \section{Posteriors of Target Parameters} \label{sec3} \begin{figure*}[h!] \centering \begin{tikzpicture}[->,>=stealth',shorten >=.5pt,auto,node distance=2cm, thick,main node/.style={circle,draw}] \node[main node, cloud, draw,cloud puffs=10,cloud puff arc=120, aspect=1.5, inner ysep=.3em] (1) {$\mathcal{D}_t^l$}; \node[main node] (2) [right of=1] {$\mathbf{\mu}_t^l$}; \node[main node] (3) [right of=2] {$\mathbf{\Lambda}_t^l$}; \node[main node] (4) [right of=3] {$\mathbf{\Lambda}_s^l$}; \node[main node] (5) [right of=4] {$\mathbf{\mu}_s^l$}; \node[main node, cloud, draw,cloud puffs=10,cloud puff arc=120, aspect=1.5, inner ysep=.3em] (6) [right of=5] {$\mathcal{D}_s^l$}; \node[main node,rectangle,minimum height=1.8cm,minimum width=5.7cm,rounded corners=.3cm,dashed] (7) [right of=1, label=below:Target Domain] {}; \node[main node,rectangle,minimum height=1.8cm,minimum width=5.7cm,rounded corners=.3cm,dashed] (8) [right of=4, label=below:Source Domain] {}; \path (3) edge node {} (2) edge [bend right] node {} (1) (2) edge node {} (1) (4) edge node {} (5) edge [bend left] node {} (6) (5) edge node {} (6); \path[-] (3) edge node {} (4); \end{tikzpicture} \vspace{-.1cm} \caption{{\protect\footnotesize Dependency of the source and target domains through their precision matrices for any class $l\in \{1,\cdots,L\}$.}} \label{fig1} \vspace{-.2cm} \end{figure*} Having defined the prior distributions in the previous section, we aim to derive the posterior distribution of the parameters of the target domain upon observing the training source $\mathcal{D}_s$ and target $\mathcal{D}_t$ datasets. 
The datasets $\mathcal{D}_t$ and $\mathcal{D}_s$ are conditionally independent given the parameters of the target and source domains; the dependence between the two domains arises through the joint prior distribution of the precision matrices, as shown in Fig.~\ref{fig1}. Within each domain, source or target, the likelihoods of the different classes are also conditionally independent given the parameters of the classes. As such, the joint likelihood of the datasets $\mathcal{D}_t$ and $\mathcal{D}_s$ can be written as \vspace{-.2cm} \begin{equation} \label{likelihood} \begin{aligned} p(\mathcal{D}_t,&\mathcal{D}_s |\mathbf{\mu}_t,\mathbf{\mu}_s,\mathbf{\Lambda}_t,\mathbf{\Lambda}_s) = p(\mathcal{D}_t|\mathbf{\mu}_t,\mathbf{\Lambda}_t)p(\mathcal{D}_s|\mathbf{\mu}_s,\mathbf{\Lambda}_s) \\ &=p(\mathcal{D}_t^1,\cdots,\mathcal{D}_t^L|\mathbf{\mu}_t^1,\cdots,\mathbf{\mu}_t^L,\mathbf{\Lambda}_t^1,\cdots,\mathbf{\Lambda}_t^L) \\ &~~~~\times p(\mathcal{D}_s^1,\cdots,\mathcal{D}_s^L|\mathbf{\mu}_s^1,\cdots,\mathbf{\mu}_s^L,\mathbf{\Lambda}_s^1,\cdots,\mathbf{\Lambda}_s^L) \\ &=\prod_{l=1}^L p(\mathcal{D}_t^l|\mathbf{\mu}_t^l,\mathbf{\Lambda}_t^l) \prod_{l=1}^L p(\mathcal{D}_s^l|\mathbf{\mu}_s^l,\mathbf{\Lambda}_s^l).
\end{aligned} \end{equation} The posterior of the parameters given $\mathcal{D}_t$ and $\mathcal{D}_s$ satisfies \vspace{-.2cm} \begin{equation} \begin{aligned} &p(\mathbf{\mu}_t,\mathbf{\mu}_s,\mathbf{\Lambda}_t,\mathbf{\Lambda}_s|% \mathcal{D}_t,\mathcal{D}_s) \\ &\propto p(\mathcal{D}_t,\mathcal{D}_s|\mathbf{\mu}_t,\mathbf{\mu}_s,\mathbf{% \Lambda}_t,\mathbf{\Lambda}_s) p(\mathbf{\mu}_t,\mathbf{\mu}_s,\mathbf{\Lambda}_t,\mathbf{\Lambda}_s) \\ &\propto \prod_{l=1}^L p(\mathcal{D}_t^l|\mathbf{\mu}_t^l,\mathbf{\Lambda}_t^l) \prod_{l=1}^L p(\mathcal{D}_s^l|\mathbf{\mu}_s^l,\mathbf{\Lambda}_s^l) \prod_{l=1}^L p(\mathbf{\mu}_t^l,\mathbf{\mu}_s^l,\mathbf{\Lambda}_t^l,\mathbf{% \Lambda}_s^l), \end{aligned} \label{posterior_1} \end{equation} where we assume that the priors of the parameters in different classes are independent, $p(\mathbf{\mu}_t,\mathbf{\mu}_s,\mathbf{\Lambda}_t,\mathbf{% \Lambda}_s) = \prod_{l=1}^L p(\mathbf{\mu}_t^l,\mathbf{\mu}_s^l,\mathbf{% \Lambda}_t^l,\mathbf{\Lambda}_s^l)$. From (\ref{p_mu}) and (\ref{posterior_1}% ), \vspace{-.2cm} \begin{equation} \begin{aligned} p(\mu_t,\mu_s,\mathbf{\Lambda}_t,\mathbf{\Lambda}_s|\mathcal{D}_t,% \mathcal{D}_s) \propto \prod_{l=1}^L p(\mathcal{D}_t^l|\mathbf{\mu}_t^l,\mathbf{\Lambda}_t^l) p(\mathcal{D}_s^l |\mathbf{\mu}_s^l,\mathbf{\Lambda}_s^l) \\ \hspace{1cm}\times p\left(\mathbf{\mu}_s^l | \mathbf{\Lambda}_s^l\right) p\left(\mathbf{\mu}_t^l | \mathbf{\Lambda}_t^l\right) p\left(\mathbf{\Lambda}_s^l, \mathbf{\Lambda}_t^l\right). 
\end{aligned} \end{equation} We can see that the posterior of the parameters is equal to the product of the posteriors of the parameters of each class: \vspace{-.2cm} \begin{eqnarray} \label{posterior2} p(\mu_t,\mu_s,\mathbf{\Lambda}_t,\mathbf{\Lambda}_s|\mathcal{D}_t,\mathcal{D}% _s) = \prod_{l=1}^L p(\mathbf{\mu}_t^l,\mathbf{\mu}_s^l,\mathbf{\Lambda}_t^l,% \mathbf{\Lambda}_s^l|\mathcal{D}_t^l,\mathcal{D}_s^l) , \end{eqnarray} where \vspace{-.2cm} \begin{eqnarray} \label{posterior3} p(\mathbf{\mu}_t^l,\mathbf{\mu}_s^l,\mathbf{\Lambda}_t^l,\mathbf{\Lambda}% _s^l|\mathcal{D}_t^l,\mathcal{D}_s^l) \propto p(\mathcal{D}_t^l|\mathbf{\mu}% _t^l,\mathbf{\Lambda}_t^l) p(\mathcal{D}_s^l|\mathbf{\mu}_s^l,\mathbf{\Lambda% }_s^l) \notag \\ \times p\left(\mathbf{\mu}_s^l | \mathbf{\Lambda}_s^l\right) p\left(\mathbf{% \mu}_t^l | \mathbf{\Lambda}_t^l\right) p\left(\mathbf{\Lambda}_s^l, \mathbf{% \Lambda}_t^l\right). \end{eqnarray} Since we are interested in the posterior of the parameters of the target domain, we integrate out the parameters of the source domain in (\ref% {posterior2}): \vspace{-.2cm} \begin{equation} \begin{aligned} \label{posterior4} p(\mathbf{\mu}_t,\mathbf{\Lambda}_t &|\mathcal{D}_t,\mathcal{D}_s) = \int_{\mathbf{\mu}_s,\mathbf{\Lambda}_s} p(\mathbf{\mu}_t,\mathbf{\mu}_s,\mathbf{\Lambda}_t,\mathbf{\Lambda}_s|% \mathcal{D}_t,\mathcal{D}_s)d\mathbf{\mu}_s d\mathbf{\Lambda}_s \\ &= \prod_{l=1}^L \int_{\mathbf{\mu}_s^l,\mathbf{\Lambda}_s^l} p(\mathbf{\mu}_t^l,\mathbf{\mu}_s^l,\mathbf{\Lambda}_t^l,\mathbf{% \Lambda}_s^l |\mathcal{D}_t^l,\mathcal{D}_s^l) d\mathbf{\mu}_s^l d\mathbf{\Lambda}_s^l \\ &=\prod_{l=1}^L p(\mathbf{\mu}_t^l,\mathbf{\Lambda}_t^l |\mathcal{D}_t^l,\mathcal{D}_s^l), \nonumber \end{aligned} \end{equation} where \begin{equation} \begin{aligned} &p(\mathbf{\mu}_t^l,\mathbf{\Lambda}_t^l |\mathcal{D}_t^l,\mathcal{D}_s^l) \\ &= \int_{\mathbf{\mu}_s^l,\mathbf{\Lambda}_s^l} p(\mathbf{\mu}_t^l,\mathbf{\mu}_s^l,\mathbf{\Lambda}_t^l,\mathbf{% 
\Lambda}_s^l|\mathcal{D}_t^l,\mathcal{D}_s^l) d\mathbf{\mu}_s^l d\mathbf{\Lambda}_s^l \\ &\propto p(\mathcal{D}_t^l|\mathbf{\mu}_t^l,\mathbf{\Lambda}_t^l) p\left(\mathbf{\mu}_t^l | \mathbf{\Lambda}_t^l\right) \\ &\times \int_{\mathbf{\mu}_s^l,\mathbf{\Lambda}_s^l} p(\mathcal{D}_s^l|\mathbf{\mu}_s^l,\mathbf{\Lambda}_s^l) p\left(\mathbf{\mu}_s^l | \mathbf{\Lambda}_s^l\right) p\left(\mathbf{\Lambda}_s^l, \mathbf{\Lambda}_t^l\right) d\mathbf{\mu}_s^l d\mathbf{\Lambda}_s^l. \end{aligned} \label{posterior5} \end{equation} \begin{theorem} \label{thm:posterior} Given the target $\mathcal{D}_t$ and source $\mathcal{D}_s$ data, the posterior distribution of target mean $\mu_t^l$ and target precision matrix $\mathbf{\Lambda}_t^l$ for the class $l\in \{1,\cdots,L\}$ has Gaussian-hypergeometric-function distribution \begin{equation} \label{prop4} \begin{aligned} &p(\mathbf{\mu}_t^l,\mathbf{\Lambda}_t^l |\mathcal{D}_t^l,\mathcal{D}_s^l) = \\ & A^l \left|\mathbf{\Lambda}_t^l\right|^{\frac{1}{2}} \exp \left(-\frac{\kappa_{t,n}^l}{2}\left(\mathbf{\mu}_t^l - \mathbf{m}_{t,n}^l\right)^{'}\mathbf{\Lambda}_t^l \left(\mathbf{\mu}_t^l - \mathbf{m}_{t,n}^l\right) \right) \\ &\times \left|\mathbf{\Lambda}_{t}^l\right|^{\frac{\nu^l + n_t^l -d-1}{2}} \mathrm{etr}\left(-\frac{1}{2} {\left(\mathbf{T}_t^l\right)}^{-1}\mathbf{\Lambda}_{t}^l\right) \\ & \times ~_1F_1\left(\frac{\nu^l + n_s^l}{2}; \frac{\nu^l}{2}; \frac{1}{2} \mathbf{F}^l \mathbf{\Lambda}_{t}^l {\mathbf{F}^l}^{'} \mathbf{T}_s^l \right), \end{aligned} \end{equation} where $A^l$ is the constant of proportionality \begin{equation} \label{A4} \begin{aligned} &{\left(A^l\right)}^{-1} =\left(\frac{2\pi}{\kappa_{t,n}^l}\right)^{\frac{d}{2}} 2^{\frac{d\left(\nu^l+n_t^l \right)}{2}} \Gamma_d \left(\frac{\nu^l+n_t^l}{2} \right) \left|\mathbf{T}_t^l\right|^{\frac{\nu^l + n_t^l}{2}} \\ & ~~~~~~ \times ~_2F_1\left(\frac{\nu^l + n_s^l}{2}, \frac{\nu^l + n_t^l}{2}; \frac{\nu^l}{2}; \mathbf{T}_s^l\mathbf{F}^l \mathbf{T}_t^l {\mathbf{F}^l}^{'} 
\right), \end{aligned} \end{equation} and \begin{equation} \label{const1} \begin{aligned} &\kappa_{t,n}^l = \kappa_t^l + n_t^l, \\ &\mathbf{m}_{t,n}^l = \frac{\kappa_t^l \mathbf{m}_t^l + n_t^l \bar{\mathbf{x}}_t^l}{\kappa_t^l+n_t^l}, \\ &{\left(\mathbf{T}_t^l\right)}^{-1} = {\left(\mathbf{M}_{t}^l\right)}^{-1} + {\mathbf{F}^l}^{'}\mathbf{C}^l \mathbf{F}^l + \mathbf{S}_t^l \\ & \hspace{2cm} + \frac{\kappa_t^l n_t^l}{\kappa_t^l + n_t^l} (\mathbf{m}_t^l -\bar{\mathbf{x}}_t^l)(\mathbf{m}_t^l -\bar{\mathbf{x}}_t^l)^{'}, \\ &{\left(\mathbf{T}_s^l\right)}^{-1} = {\left(\mathbf{C}^l\right)}^{-1} + \mathbf{S}_s^l + \frac{\kappa_s^l n_s^l}{\kappa_s^l + n_s^l} (\mathbf{m}_s^l -\bar{\mathbf{x}}_s^l)(\mathbf{m}_s^l -\bar{\mathbf{x}}_s^l)^{'}, \end{aligned} \end{equation} with sample means and covariances for $z\in\{s,t\}$ as \begin{equation} \label{const2} \bar{\mathbf{x}}_z^l = \frac{1}{n_z^l} \sum_{i=1}^{n_z^l} \mathbf{x}_{z,i}^l, ~~~ \mathbf{S}_z^l = \sum_{i=1}^{n_z^l} \left(\mathbf{x}_{z,i}^l - \bar{\mathbf{x}}_z^l \right)\left(\mathbf{x}_{z,i}^l - \bar{\mathbf{x}}_z^l \right)^{^{\prime }}. \notag \end{equation} \end{theorem} \begin{proof} \label{proof_posterior} See Appendix \ref{appendix:posterior}. \end{proof} \section{Effective Class-Conditional Densities} \label{sec4} In classification, the feature-label distribution is written in terms of class-conditional densities and prior class probabilities; by Bayes' rule, the posterior probability of each class upon observation of data is proportional to the product of its class-conditional density and its prior probability. This also holds in the Bayesian setting, except that we use effective class-conditional densities, as shown in \cite{Lori1,Lori2}.
For the optimal Bayesian classifier \cite{Lori1,Lori2}, the posterior predictive densities of the classes, called ``effective class-conditional densities,'' yield the classifier that minimizes the Bayesian error estimate. Similarly, we can derive effective class-conditional densities to define the OBTL classifier in the target domain, albeit with the posterior of the target parameters derived from both the target and source datasets. Suppose $\mathbf{x}$ denotes a new $d\times 1$ data point observed in the target domain that we aim to optimally classify into one of the classes $l\in \{1,\cdots ,L\}$. In the context of the optimal Bayesian classifier, we need the effective class-conditional densities for the $L$ classes, defined as \begin{equation} p(\mathbf{x}|l)=\int_{\mathbf{\mu }_{t}^{l},\mathbf{\Lambda }_{t}^{l}}p(\mathbf{x}|\mathbf{\mu }_{t}^{l},\mathbf{\Lambda }_{t}^{l})\pi ^{\star }(\mathbf{\mu }_{t}^{l},\mathbf{\Lambda }_{t}^{l})d\mathbf{\mu }_{t}^{l}d\mathbf{\Lambda }_{t}^{l}, \label{eff} \end{equation} for $l\in \{1,\cdots ,L\}$, where $\pi ^{\star }(\mathbf{\mu }_{t}^{l},\mathbf{\Lambda }_{t}^{l})=p(\mathbf{\mu }_{t}^{l},\mathbf{\Lambda }_{t}^{l}|\mathcal{D}_{t}^{l},\mathcal{D}_{s}^{l})$ is the posterior of $(\mathbf{\mu }_{t}^{l},\mathbf{\Lambda }_{t}^{l})$ upon observation of $\mathcal{D}_{t}^{l}$ and $\mathcal{D}_{s}^{l}$.
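Note that (\ref{eff}) is simply an expectation of the Gaussian class-conditional density over the posterior, so in principle it can be approximated by Monte Carlo given draws from $\pi^{\star}$. A minimal sketch (the posterior sampler is left abstract; the toy check uses a degenerate posterior concentrated at a single parameter value):

```python
import numpy as np
from scipy.stats import multivariate_normal

def effective_density_mc(x, posterior_samples):
    """Monte Carlo estimate of the effective class-conditional density:
    average the Gaussian density N(x; mu, Lambda^{-1}) over posterior
    draws (mu, Lambda), supplied as a list of pairs."""
    vals = [multivariate_normal.pdf(x, mean=mu, cov=np.linalg.inv(Lam))
            for mu, Lam in posterior_samples]
    return float(np.mean(vals))

# Toy check: a posterior concentrated at (mu0, Lambda0) must return the
# plain Gaussian density, here 1 / (2*pi) at the mean for d = 2.
d = 2
mu0, Lam0 = np.zeros(d), np.eye(d)
est = effective_density_mc(np.zeros(d), [(mu0, Lam0)] * 10)
print(abs(est - 1.0 / (2.0 * np.pi)) < 1e-12)  # True
```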
\begin{theorem} \label{thm-effective} The effective class-conditional density, denoted by $p(\mathbf{x}|l)=O_{\mathrm{OBTL}}(\mathbf{x}| l)$, in the target domain is given by \begin{equation} \begin{aligned} &O_{\mathrm{OBTL}}(\mathbf{x}| l) = \pi^{-\frac{d}{2}} \left(\frac{\kappa_{t,n}^l}{\kappa_\mathbf{x}^l} \right)^{\frac{d}{2}} \Gamma_d \left(\frac{\nu^l+n_t^l + 1}{2} \right) \\ & \times \Gamma_d^{-1} \left(\frac{\nu^l+n_t^l}{2} \right) \left|\mathbf{T}_\mathbf{x}^l\right|^{\frac{\nu^l + n_t^l + 1}{2}} \left|\mathbf{T}_t^l\right|^{-\frac{\nu^l + n_t^l}{2}} \\ & \times ~_2F_1\left(\frac{\nu^l + n_s^l}{2}, \frac{\nu^l + n_t^l + 1}{2}; \frac{\nu^l}{2}; \mathbf{T}_s^l\mathbf{F}^l \mathbf{T}_\mathbf{x}^l {\mathbf{F}^l}^{'} \right) \\ & \times ~_2F_1^{-1}\left(\frac{\nu^l + n_s^l}{2}, \frac{\nu^l + n_t^l}{2}; \frac{\nu^l}{2}; \mathbf{T}_s^l\mathbf{F}^l \mathbf{T}_t^l {\mathbf{F}^l}^{'} \right), \end{aligned} \label{eff4} \end{equation} where \begin{equation} \begin{aligned} & \kappa_\mathbf{x}^l = \kappa_{t,n}^l + 1 = \kappa_t^l + n_t^l + 1, \\ & {\left(\mathbf{T}_\mathbf{x}^l\right)}^{-1} = {\left(\mathbf{T}_t^l\right)}^{-1} + \frac{\kappa_{t,n}^l}{\kappa_{t,n}^l + 1} \left(\mathbf{m}_{t,n}^l-\mathbf{x} \right) \left(\mathbf{m}_{t,n}^l-\mathbf{x} \right)^{'}. \end{aligned} \label{update_1} \end{equation} \end{theorem} \begin{proof} See Appendix \ref{appendix:effective}. \end{proof} \section{Optimal Bayesian Transfer Learning Classifier} \label{sec5} Let $c_{t}^{l}$ be the prior probability that the target sample $\mathbf{x}$ belongs to the class $l\in \{1,\cdots ,L\}$. Since $0<c_{t}^{l}<1$ and $% \sum_{l=1}^{L}c_{t}^{l}=1$, a Dirichlet prior is assumed: \begin{equation} (c_{t}^{1},\cdots ,c_{t}^{L})\sim \mathrm{Dir}(L,\mathbf{\xi }_{t}), \end{equation}% where $\mathbf{\xi }_{t}=(\xi _{t}^{1},\cdots ,\xi _{t}^{L})$ are the concentration parameters, and $\xi _{t}^{l}>0$ for $l\in \{1,\cdots ,L\}$. 
As the Dirichlet distribution is a conjugate prior for the categorical distribution, upon observing the class counts $\mathbf{n}=(n_{t}^{1},\cdots ,n_{t}^{L})$ in the target domain, where $n_{t}^{l}$ is the number of target samples in class $l$, the posterior $\pi^{\star}$ is also Dirichlet: \begin{equation} \begin{aligned} (c_t^1,\cdots,c_t^L)\,|\,\mathbf{n} &\sim \mathrm{Dir}(L,\mathbf{\xi}_t+\mathbf{n}) \\ & = \mathrm{Dir}(L,\xi_t^1+n_t^1, \cdots,\xi_t^L+n_t^L), \end{aligned} \end{equation} with the posterior mean of $c_{t}^{l}$ given by \begin{equation} \mathrm{E}_{\pi ^{\star }}(c_{t}^{l})=\frac{\xi _{t}^{l}+n_{t}^{l}}{N_{t}+\xi _{t}^{0}}, \end{equation} where $N_{t}=\sum_{l=1}^{L}n_{t}^{l}$ and $\xi _{t}^{0}=\sum_{l=1}^{L}\xi _{t}^{l}$. As such, the optimal Bayesian transfer learning (OBTL) classifier for any new unlabeled sample $\mathbf{x}$ in the target domain is defined as \begin{equation} \Psi _{\mathrm{OBTL}}(\mathbf{x})=\arg \!\max_{l\in \{1,\cdots ,L\}}\mathrm{E}_{\pi ^{\star }}(c_{t}^{l})O_{\mathrm{OBTL}}(\mathbf{x}|l), \label{T-OBC} \end{equation} which minimizes the expected error of the classifier in the target domain, that is, $\mathrm{E}_{\pi^{\star}}[\varepsilon(\Theta_t,\Psi _{\mathrm{OBTL}})] \leq \mathrm{E}_{\pi^{\star}}[\varepsilon(\Theta_t,\Psi)]$, where $\varepsilon(\Theta_t,\Psi)$ is the error of any arbitrary classifier $\Psi$ assuming the parameters $\Theta_t=\{c_t^l,\mu_t^l,\mathbf{\Lambda}_t^l\}_{l=1}^L$ of the feature-label distribution in the target domain, and the expectation is over the posterior $\pi^{\star}$ of $\Theta_t$ upon observation of data. If we do not have any prior knowledge for the selection of classes, we use the same concentration parameter for all the classes: $\mathbf{\xi }_{t}=(\xi ,\cdots ,\xi )$.
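The Dirichlet posterior mean and the decision rule (\ref{T-OBC}) are simple to implement once the effective densities are available. A sketch, with the per-class effective densities passed in as callables (hypothetical placeholders here):

```python
import numpy as np

def posterior_class_probs(xi, n):
    """Posterior mean of the class probabilities under the
    Dirichlet-categorical model: (xi_l + n_l) / (N_t + xi_0)."""
    xi, n = np.asarray(xi, dtype=float), np.asarray(n, dtype=float)
    return (xi + n) / (n.sum() + xi.sum())

def obtl_classify(x, effective_densities, xi, n):
    """OBTL rule: argmax over classes l of E[c_t^l] * O_OBTL(x | l)."""
    probs = posterior_class_probs(xi, n)
    scores = [p * f(x) for p, f in zip(probs, effective_densities)]
    return int(np.argmax(scores))

# Sanity check of the posterior mean: xi = (1, 1), counts n = (10, 30).
print(posterior_class_probs([1, 1], [10, 30]))  # [11/42, 31/42] ~ [0.262, 0.738]
```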
Hence, if the number of samples in each class is the same, $n_{t}^{1}=\cdots =n_{t}^{L}$, the first term $\mathrm{E}_{\pi ^{\star }}(c_{t}^{l})$ is the same for all the classes and (\ref{T-OBC}) reduces to: \begin{equation} \Psi _{\mathrm{OBTL}}(\mathbf{x})=\arg \!\max_{l\in \{1,\cdots ,L\}}O_{\mathrm{OBTL}}(\mathbf{x}|l). \label{OBTL_2} \end{equation} We have derived the effective class-conditional densities in closed form (\ref{eff4}). However, deriving the OBTL classifier (\ref{T-OBC}) requires computing the Gauss hypergeometric function of matrix argument. Computing the exact values of hypergeometric functions of matrix argument using the series of zonal polynomials, as in (\ref{Gauss}), is time-consuming and does not scale to high dimensions. To facilitate computation, we propose to use the Laplace approximation of this function, as in \cite{Laplace_approx}, which is computationally efficient and scalable. See Appendix \ref{appendix:Laplace} for the detailed description of the Laplace approximation of Gauss hypergeometric functions of matrix argument. \vspace{-.2cm} \section{OBC in Target Domain} \label{sec6} To see how the source data can help improve the performance, we compare the OBTL classifier with the OBC trained only on data from the target domain. Using exactly the same modeling and parameters as the previous sections, the priors for $\mathbf{\mu}_t^l$ and $\mathbf{\Lambda}_t^l$, from (\ref{mu_s_mu_t}) and (\ref{marg_t}), are given by \vspace{-.2cm} \begin{equation} \vspace{-.2cm} \begin{aligned} \mathbf{\mu}_t^l | \mathbf{\Lambda}_t^l &\sim \mathcal{N}\left(\mathbf{m}_t^l,\left(\kappa_t^l \mathbf{\Lambda}_t^l\right)^{-1}\right), \\ \mathbf{\Lambda}_t^l &\sim W_d(\mathbf{M}_t^l,\nu^l).
\end{aligned} \end{equation} Using Lemma \ref{Lemma1} in Appendix \ref{appendix:posterior}, upon observing the dataset $\mathcal{D}_t^l$, the posteriors of $\mathbf{\mu}_t^l$ and $\mathbf{\Lambda}_t^l$ will be \vspace{% -.2cm} \begin{equation} \begin{aligned} \mathbf{\mu}_t^l|\mathbf{\Lambda}_t^l , \mathcal{D}_t^l &\sim \mathcal{N}\left(\mathbf{m}_{t,n}^l, \left(\kappa_{t,n}^l\mathbf{\Lambda}_t^l\right)^{-1}\right), \\ \mathbf{\Lambda}_t^l | \mathcal{D}_t^l &\sim W_d(\mathbf{M}_{t,n}^l,\nu_{t,n}^l), \end{aligned} \end{equation} where \vspace{-.2cm} \begin{equation} \label{const11} \begin{aligned} & \kappa_{t,n}^l = \kappa_t^l + n_t^l, ~~~ \nu_{t,n}^l = \nu^l + n_t^l, ~~~ \mathbf{m}_{t,n}^l = \frac{\kappa_t^l \mathbf{m}_t^l + n_t^l \bar{\mathbf{x}}_t^l}{\kappa_t^l+n_t^l}, \\ & {\left(\mathbf{M}_{t,n}^l\right)}^{-1} = {\left(\mathbf{M}_{t}^l\right)}^{-1} + \mathbf{S}_t^l + \frac{\kappa_t^l n_t^l}{\kappa_t^l + n_t^l} (\mathbf{m}_t^l -\bar{\mathbf{x}}_t^l)(\mathbf{m}_t^l -\bar{\mathbf{x}}_t^l)^{'}, \end{aligned} \end{equation} with the corresponding sample mean and covariance: \begin{equation} \vspace{-.2cm} \bar{\mathbf{x}}_t^l = \frac{1}{n_t^l} \sum_{i=1}^{n_t^l} \mathbf{x}_{t,i}^l, ~~~~ \mathbf{S}_t^l = \sum_{i=1}^{n_t^l} \left(\mathbf{x}% _{t,i}^l - \bar{\mathbf{x}}_t^l \right)\left(\mathbf{x}_{t,i}^l - \bar{% \mathbf{x}}_t^l \right)^{^{\prime }}. 
\label{const22} \end{equation} By (\ref{eff}) and similar integral steps, the effective class-conditional densities $p(\mathbf{x} | l) = O_{\mathrm{OBC}}(\mathbf{x}% | l)$ for the OBC are derived as \cite{Lori1} \begin{equation} \label{eff_target} \begin{aligned} O_{\mathrm{OBC}}(\mathbf{x}| l) = \pi^{-\frac{d}{2}} \left(\frac{\kappa_{t,n}^l}{\kappa_{t,n}^l + 1} \right)^{\frac{d}{2}} \Gamma_d \left(\frac{\nu^l+n_t^l + 1}{2} \right) \\ \times ~ \Gamma_d^{-1} \left(\frac{\nu^l+n_t^l}{2} \right) \left|\mathbf{M}_\mathbf{x}^l\right|^{\frac{\nu^l + n_t^l + 1}{2}} \left|\mathbf{M}_{t,n}^l\right|^{-\frac{\nu^l + n_t^l}{2}}, \end{aligned} \end{equation} where \begin{equation} \label{update_2} {\left(\mathbf{M}_{\mathbf{x}}^l\right)}^{-1} = {\left(\mathbf{M}% _{t,n}^l\right)}^{-1} + \frac{\kappa_{t,n}^l}{\kappa_{t,n}^l + 1} (\mathbf{m}% _{t,n}^l -\mathbf{x})(\mathbf{m}_{t,n}^l - \mathbf{x})^{^{\prime }}. \end{equation} The multi-class OBC \cite{dalton2015optimal}, under a zero-one loss function, can be defined as \begin{equation} \Psi_{\mathrm{OBC}}(\mathbf{x}) = \arg\!\max_{l\in\{1,\cdots,L\}} \mathrm{E}% _{\pi^{\star}}(c_t^l) O_{\mathrm{OBC}}(\mathbf{x}| l). \label{OBC} \end{equation} Similar to the OBTL, in the case of equal prior probabilities for the classes, \begin{equation} \Psi_{\mathrm{OBC}}(\mathbf{x}) = \arg\!\max_{l\in\{1,\cdots,L\}} O_{\mathrm{% OBC}}(\mathbf{x}| l). \label{OBC_2} \end{equation} For binary classification, the definition of the OBC in (\ref{OBC}) is equivalent to the definition in \cite{Lori1}, where it is defined to be the binary classifier possessing the minimum Bayesian mean square error estimate \cite{Lori-MMSE} relative to the posterior distribution. 
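Unlike the OBTL density (\ref{eff4}), the OBC effective density (\ref{eff_target}) involves only multivariate gamma functions and determinants, so it can be evaluated directly. A sketch in log space, assuming the posterior hyperparameters of (\ref{const11}) have already been computed (an illustration, not the authors' code):

```python
import numpy as np
from scipy.special import multigammaln  # log multivariate gamma, ln Gamma_d

def log_obc_density(x, m_tn, kappa_tn, M_tn, nu_n):
    """Log of the OBC effective class-conditional density, with
    nu_n = nu^l + n_t^l and (m_tn, kappa_tn, M_tn) the updated
    hyperparameters for class l."""
    d = x.shape[0]
    diff = (m_tn - x).reshape(-1, 1)
    # Rank-one update giving (M_x)^{-1}.
    Mx_inv = np.linalg.inv(M_tn) + kappa_tn / (kappa_tn + 1.0) * (diff @ diff.T)
    _, logdet_Mx_inv = np.linalg.slogdet(Mx_inv)
    _, logdet_Mtn = np.linalg.slogdet(M_tn)
    return (-0.5 * d * np.log(np.pi)
            + 0.5 * d * np.log(kappa_tn / (kappa_tn + 1.0))
            + multigammaln(0.5 * (nu_n + 1), d)
            - multigammaln(0.5 * nu_n, d)
            - 0.5 * (nu_n + 1) * logdet_Mx_inv   # log |M_x|^{(nu_n+1)/2}
            - 0.5 * nu_n * logdet_Mtn)           # log |M_{t,n}|^{-nu_n/2}

# The density peaks at the posterior mean and decays away from it:
d = 2
at_mean = log_obc_density(np.zeros(d), np.zeros(d), 5.0, np.eye(d), 6.0)
far_off = log_obc_density(np.full(d, 5.0), np.zeros(d), 5.0, np.eye(d), 6.0)
print(at_mean > far_off)  # True
```

Per-class log-densities computed this way can be plugged directly into the argmax rule (\ref{OBC}).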
\begin{theorem} \label{theorem4} If $\mathbf{M}_{ts}^l=\mathbf{0}$ for all $l\in \{1,\cdots,L\}$, then \begin{equation} \Psi_{\mathrm{OBTL}}(\mathbf{x}) = \Psi_{\mathrm{OBC}}(\mathbf{x}), \end{equation} meaning that if there is no interaction between the source and target domains in any of the classes a priori, then the OBTL classifier reduces to the OBC in the target domain. \end{theorem} \begin{proof} If $\mathbf{M}_{ts}^l=\mathbf{0}$ for all $l\in \{1,\cdots,L\}$, then $\mathbf{F}^l = \mathbf{0}$. Since $_2F_1(a,b;c;\mathbf{0})=1$ for any values of $a$, $b$, and $c$, the Gauss hypergeometric functions drop out of (\ref{eff4}). From (\ref{const1}) and (\ref{const11}), $\mathbf{T}_t^l=\mathbf{M}_{t,n}^l$. From (\ref{update_1}) and (\ref{update_2}), $\mathbf{T}_{\mathbf{x}}^l=\mathbf{M}_{\mathbf{x}}^l$. As a result, $O_{\mathrm{OBTL}}(\mathbf{x}| l)=O_{\mathrm{OBC}}(\mathbf{x}| l)$, and consequently, $\Psi_{\mathrm{OBTL}}(\mathbf{x})=\Psi_{\mathrm{OBC}}(\mathbf{x})$. \end{proof} \vspace{-.5cm} \section{Experiments} \label{sec7} \subsection{Synthetic datasets} We consider a simulation setup and evaluate the OBTL classifier by its average classification error under different joint prior densities modeling the relatedness of the source and target domains. The setup is as follows. Unless otherwise mentioned, the feature dimension is $d=10$, the number of classes in each domain is $L=2$, the number of source training data per class is $n_{s}=n_{s}^{l}=200$, the number of target training data per class is $n_{t}=n_{t}^{l}=10$, $\nu =\nu ^{l}=25$, $\kappa _{t}=\kappa _{t}^{l}=100$, $\kappa _{s}=\kappa _{s}^{l}=100$, for both the classes $l=1,2$, $\mathbf{m}_{t}^{1}=\mathbf{0}_{d}$, $\mathbf{m}_{t}^{2}=0.05\times \mathbf{1}_{d}$, $\mathbf{m}_{s}^{1}=\mathbf{m}_{t}^{1}+\mathbf{1}_{d}$, and $\mathbf{m}_{s}^{2}=\mathbf{m}_{t}^{2}+\mathbf{1}_{d}$, where $\mathbf{0}_{d}$ and $\mathbf{1}_{d}$ are $d\times 1$ all-zero and all-one vectors, respectively.
For the scale matrices, we choose $\mathbf{M}_{t}^{l}=k_{t}\mathbf{I}_{d}$, $\mathbf{M}_{s}^{l}=k_{s}\mathbf{I}_{d}$, and $\mathbf{M}_{ts}^{l}=k_{ts}\mathbf{I}_{d}$ for both classes $l=1,2$, where $\mathbf{I}_{d}$ is the $d\times d$ identity matrix. Note that choosing an identity matrix for $\mathbf{M}_{ts}^{l}$ makes sense when the order of the features in the two domains is the same. We have the constraint that the scale matrix $\mathbf{M}^{l}= \begin{psmallmatrix} \mathbf{M}_{t}^l & \mathbf{M}_{ts}^l \\ {\mathbf{M}_{ts}^l}^{'} & \mathbf{M}_{s}^l\end{psmallmatrix}$ should be positive definite for any class $l$. It is easy to check the corresponding constraints on $k_{t}$, $k_{s}$, and $k_{ts}$: $k_{t}>0$, $k_{s}>0$, and $|k_{ts}|<\sqrt{k_{t}k_{s}}$. We define $k_{ts}=\alpha \sqrt{k_{t}k_{s}}$, where $|\alpha |<1$. In this particular example, the value of $|\alpha |$ quantifies the relatedness of the source and target domains: if $\alpha =0$, the two domains are unrelated, and as $|\alpha |$ approaches one, the domains become more strongly related. We set $k_{t}=k_{s}=1$ and plot the average classification error curves for different values of $|\alpha |$. All the simulations assume equal prior probabilities for the classes, so we use (\ref{OBTL_2}) and (\ref{OBC_2}) for the OBTL classifier and OBC, respectively. Following the common evaluation procedure of Bayesian learning, we evaluate the prediction performance by average classification errors.
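The positive-definiteness constraint on $\mathbf{M}^{l}$ is easy to verify numerically: for the block structure above, a Cholesky factorization succeeds exactly when $|k_{ts}|<\sqrt{k_{t}k_{s}}$, i.e., $|\alpha|<1$. A sketch with $k_t=k_s=1$:

```python
import numpy as np

def is_pos_def(A):
    """Positive-definiteness test via Cholesky factorization."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

d, k_t, k_s = 4, 1.0, 1.0
for alpha in (0.5, 0.99, 1.01):
    k_ts = alpha * np.sqrt(k_t * k_s)
    M = np.block([[k_t * np.eye(d), k_ts * np.eye(d)],
                  [k_ts * np.eye(d), k_s * np.eye(d)]])
    print(alpha, is_pos_def(M))  # True for |alpha| < 1, False for 1.01
```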
To sample from the prior (\ref{p_mu}) we first sample from a Wishart distribution $% W_{2d}(\mathbf{M}^{l},\nu ^{l})$ to get a sample for $\mathbf{\Lambda }^{l}=% \begin{psmallmatrix} \mathbf{\Lambda}_{t}^l & \mathbf{\Lambda}_{ts}^l \\ {\mathbf{\Lambda}_{ts}^l}^{'} & \mathbf{\Lambda}_{s}^l\end{psmallmatrix}$, for each class $% l=1,2$, and then pick $(\mathbf{\Lambda }_{t}^{l},\mathbf{\Lambda }_{s}^{l})$% , which is a joint sample from $p(\mathbf{\Lambda }_{t}^{l},\mathbf{\Lambda }% _{s}^{l})$ in (\ref{joint}). Then given $\mathbf{\Lambda }_{t}^{l}$ and $% \mathbf{\Lambda }_{s}^{l}$, we sample from (\ref{mu_s_mu_t}) to get samples of $\mathbf{\mu }_{t}^{l}$ and $\mathbf{\mu }_{s}^{l}$ for $l=1,2$. Once we have $\mathbf{\mu }_{t}^{l}$, $\mathbf{\mu }_{s}^{l}$, $\mathbf{\Lambda }% _{t}^{l}$, and $\mathbf{\Lambda }_{s}^{l}$, we generate $100$ different training and test sets from (\ref{x_s_x_t}). Training sets contain samples from both the target and source domains, but the test set contains only samples from the target domain. As the numbers of source and target training data per class are $n_{s}$ and $n_{t}$, there are $Ln_{s}$ and $Ln_{t}$ source and target training data in total, respectively. We assume the size of the test set per class is $1000$ in the simulations, so $2000$ in total. For each training and test set, we use the OBTL classifier and its target-only version, OBC, and calculate the error. Then we average all the errors for $100$ different training and test sets. We further repeat this whole process $1000$ times for different realizations of $\mathbf{\Lambda }_{t}^{l}$ and $\mathbf{% \Lambda }_{s}^{l}$, $\mathbf{\mu }_{t}^{l}$, and $\mathbf{\mu }_{s}^{l}$ for $l=1,2$, and finally average all the errors and return the average classification error. 
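The sampling procedure for one class can be sketched as follows with SciPy. The dimension is reduced for brevity, while $\alpha$ and the sample sizes follow the setup above; this is an illustrative sketch of one class draw, not the full repeated experiment:

```python
import numpy as np
from scipy.stats import multivariate_normal, wishart

rng = np.random.default_rng(42)
d = 3
nu = 2 * d + 5                      # degrees of freedom, nu >= 2d
kappa_t, kappa_s = 100.0, 100.0
m_t, m_s = np.zeros(d), np.ones(d)
alpha = 0.9
M = np.block([[np.eye(d), alpha * np.eye(d)],
              [alpha * np.eye(d), np.eye(d)]])  # 2d x 2d scale matrix

# Step 1: Lambda ~ W_{2d}(M, nu); its diagonal blocks form a joint draw
# of (Lambda_t, Lambda_s).
Lam = wishart.rvs(df=nu, scale=M, random_state=rng)
Lam_t, Lam_s = Lam[:d, :d], Lam[d:, d:]

# Step 2: means conditionally on the precisions.
mu_t = multivariate_normal.rvs(m_t, np.linalg.inv(kappa_t * Lam_t), random_state=rng)
mu_s = multivariate_normal.rvs(m_s, np.linalg.inv(kappa_s * Lam_s), random_state=rng)

# Step 3: class-conditional Gaussian training data for each domain.
n_t, n_s = 10, 200
X_t = multivariate_normal.rvs(mu_t, np.linalg.inv(Lam_t), size=n_t, random_state=rng)
X_s = multivariate_normal.rvs(mu_s, np.linalg.inv(Lam_s), size=n_s, random_state=rng)
print(X_t.shape, X_s.shape)  # (10, 3) (200, 3)
```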
Note that in all figures, the hyperparameters used in the OBTL classifier are the same as those used for simulating the data, except in the figures showing the sensitivity of the performance to different hyperparameters, where we assume that the true values of the hyperparameters used for simulating the data are unknown. \begin{figure}[t!] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Error_vs_n_t_d_10.eps} \caption{} \end{subfigure} \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Error_vs_n_s_d_10.eps} \caption{} \end{subfigure} \caption{{\protect\footnotesize (a) Average classification error versus the number of target training data per class, $n_t$. (b) Average classification error versus the number of source training data per class, $n_s$.}} \label{fig2} \vspace{-.5cm} \end{figure} To examine how the source data improve the classifier in the target domain, we compare the performance of the OBTL classifier with that of the OBC designed in the target domain alone. The average classification error versus $n_{t}$ is depicted in Fig. \ref{fig2}a for the OBC and the OBTL with different values of $\alpha $. When $\alpha $ is close to one, the OBTL classifier performs much better than the OBC; this is due to the greater relatedness between the two domains and the appropriate use of the source data. The performance improvement is especially noticeable when $n_{t}$ is small, which reflects the typical real-world scenario. In Fig. \ref{fig2}a, we also observe that the errors of the OBTL classifier and the OBC converge to a similar value as $n_{t}$ grows very large, meaning that the source data are redundant when there is a large amount of target data. When $\alpha $ is larger, the error curves converge faster to the optimal error, which is the average Bayes error of the target classifier.
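The Bayes error that these curves approach can be estimated by Monte Carlo for any fixed pair of class-conditional Gaussians. A minimal sketch with hypothetical parameters (not the paper's), for which the exact value is $\Phi(-\Vert\mathbf{\mu}_1-\mathbf{\mu}_0\Vert/2)$ under equal priors and equal spherical covariances:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-class Gaussian model with equal priors and equal covariance.
d = 2
mu0, mu1 = np.zeros(d), np.ones(d)
cov = np.eye(d)
cov_inv = np.linalg.inv(cov)

def log_density(x, mu):
    # Log Gaussian density up to a shared constant (enough for comparison).
    diff = x - mu
    return -0.5 * np.sum(diff @ cov_inv * diff, axis=1)

# Monte Carlo estimate of the Bayes error: sample from each class and
# classify with the optimal (true-model) discriminant.
n = 200_000
x0 = rng.multivariate_normal(mu0, cov, size=n)
x1 = rng.multivariate_normal(mu1, cov, size=n)
err0 = np.mean(log_density(x0, mu1) > log_density(x0, mu0))
err1 = np.mean(log_density(x1, mu0) > log_density(x1, mu1))
bayes_err = 0.5 * (err0 + err1)   # exact value here: Phi(-sqrt(2)/2) ~ 0.2398
```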
The corresponding Bayes error averaged over $1000$ randomly generated distributions equals $0.122$ in this simulation setup. Recall that when $\alpha =0$, the OBTL classifier reduces to the OBC. In this particular example, the sign of $\alpha $ does not affect the performance of the OBTL, which can be verified from (\ref{eff4}); hence, we can use $|\alpha |$ in all cases. Figure \ref{fig2}b depicts the average classification error versus $n_{s}$ for the OBC and the OBTL with different values of $\alpha $. The error of the OBC is constant for all $n_{s}$, as it does not employ the source data. The error of the OBTL classifier equals that of the OBC when $n_{s}=0$ and decreases as $n_{s}$ increases. In Fig. \ref{fig2}b, larger $\alpha $ yields greater improvement since the two domains are more related. Another important point in Fig. \ref{fig2}b is that a very large amount of source data, when the two domains are highly related, can compensate for the lack of target data and lead to a target classification error as small as the Bayes error in the target domain. Figure \ref{fig:box_plot} illustrates box plots of the simulated classification errors corresponding to the $1000$ distributions randomly drawn from the prior distributions for both the OBC and the OBTL with $\alpha=0.9$, showing the variability for different numbers $n_t$ of target data per class. \begin{figure}[t!] \centering \includegraphics[width=.45\textwidth]{Error_vs_n_t_box_plot_OBC_OBTL_alpha_0_9.eps} \caption{{\protect\footnotesize Box plots of $1000$ simulated classification errors for different $n_t$. Blue denotes the OBC and red denotes the OBTL with $\alpha=0.9$.}} \label{fig:box_plot} \vspace{-.5cm} \end{figure} We next investigate the sensitivity of the OBTL with respect to the hyperparameters. Fig.
\ref{fig3} shows the average classification error of the OBTL as a function of $|\alpha|$, where we assume that the true relatedness $\alpha_{true}$ between the source and target domains is unknown. In Figs. \ref{fig3}a-\ref{fig3}d we plot the error curves for $\alpha_{true}=0.3,0.5,0.7,0.9$, respectively. We observe several important trends. First, the performance gain of the OBTL over the OBC depends heavily on the relatedness (value of $\alpha_{true}$) of the source and target domains and on the value of $\alpha$ used in the classifier. Generally speaking, there exists an $\alpha_{max}$ in $(0,1)$ such that for $|\alpha| < \alpha_{max}$, the OBTL has a performance gain over the OBC, with the maximum gain achieved at $|\alpha| = \alpha_{true}$ (it might not be exactly at $\alpha_{true}$ due to the Laplace approximation of the Gauss hypergeometric function). Second, the performance gain is higher when the two domains are highly related (Fig. \ref{fig3}d). Third, when the two domains are very related, for example, $\alpha_{true}=0.9$ in Fig. \ref{fig3}d, $\alpha_{max}=1$, meaning that the OBTL outperforms the target-only OBC for any $|\alpha|$. However, when the source and target domains are not closely related, as in Figs. \ref{fig3}a and \ref{fig3}b, $\alpha_{max}<1$, and choosing $|\alpha|$ greater than $\alpha_{max}$ leads to a performance loss compared to the OBC. This means that overstating the relatedness between two domains that are not actually related can hurt the transfer learning classifier, a phenomenon known as negative transfer. \begin{figure}[t!]
\centering \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{Error_vs_alpha_true_0_3.eps} \caption{} \end{subfigure} ~ \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{Error_vs_alpha_true_0_5.eps} \caption{} \end{subfigure} \par \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{Error_vs_alpha_true_0_7.eps} \caption{} \end{subfigure} ~ \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{Error_vs_alpha_true_0_9.eps} \caption{} \end{subfigure} \caption{{\protect\footnotesize Average classification error vs $|\protect\alpha|$}} \label{fig3} \vspace{-.7cm} \end{figure} Figure \ref{fig4} shows the errors versus $\nu $, assuming the true value $\nu _{true}$ is unknown, for different values of $\alpha $ ($0.5$ and $0.9$) and $\nu _{true}$ ($25$ and $50$). The salient point is that the performance of the OBTL classifier is not very sensitive to $\nu $ as long as it is chosen in its allowable range, that is, $\nu \geq 2d$. In Fig. \ref{fig4}, the error of the OBTL does not change much for $\nu \geq 2d=20$. As a result, we can choose any $\nu \geq 2d$ for real datasets without risking critical performance deterioration. \begin{figure}[t!]
\centering \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{Error_vs_nu_true_25_alpha_0_5.eps} \caption{} \end{subfigure} ~ \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{Error_vs_nu_true_50_alpha_0_5.eps} \caption{} \end{subfigure} \par \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{Error_vs_nu_true_25_alpha_0_9.eps} \caption{} \end{subfigure} ~ \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{Error_vs_nu_true_50_alpha_0_9.eps} \caption{} \end{subfigure} \caption{{\protect\footnotesize Average classification error vs $\protect\nu$}} \label{fig4} \vspace{-.5cm} \end{figure} Figure \ref{fig5} depicts the average classification error versus $\kappa _{t}$ for two values of $\alpha $ ($0.5$ and $0.9$), where the true value of $\kappa _{t}$ is $\kappa _{true}=50$. Similar to $\nu $, once $\kappa _{t}$ exceeds a threshold ($20$ in Fig. \ref{fig5}), the performance changes little. According to (\ref{const1}), it is better to choose $\kappa _{t}^{l}$ and $\kappa _{s}^{l}$ proportional to $n_{t}$ and $n_{s}$, respectively, since the updated means $\mathbf{m}_{t,n}^{l}$ and $\mathbf{m}_{s,n}^{l}$ are weighted averages of our prior knowledge about the means, $\mathbf{m}_{t}^{l}$ and $\mathbf{m}_{s}^{l}$, and the sample means $\bar{\mathbf{x}}_{t}^{l}$ and $\bar{\mathbf{x}}_{s}^{l}$. Assuming that $\kappa _{t}=\beta _{t}n_{t}$ and $\kappa _{s}=\beta _{s}n_{s}$ for some $\beta _{t},\beta _{s}>0$, if we have higher confidence in our priors on the means, we pick higher $\beta _{t}$ and $\beta _{s}$ (as in Fig. \ref{fig5}); for less trustworthy priors, we choose lower values of $\beta _{t}$ and $\beta _{s}$. Sensitivity results in Figs.
\ref{fig3}, \ref{fig4}, and \ref{fig5} reveal that, in our simulation setup, the performance improvement of the OBTL depends on the value of $\alpha$ and the true relatedness ($\alpha_{true}$ in this example) between the two domains, and is largely insensitive to the choices of the other hyperparameters $\nu$, $\kappa_t$, and $\kappa_s$. A reasonable range of $\alpha$ yields improved performance, but correct estimates of \textit{relatedness} or \textit{transferability} are critical, which is an important future research direction (see Conclusions in Section \ref{sec8}). \begin{figure}[t!] \centering \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{Error_vs_kappa_true_50_alpha_0_5.eps} \caption{} \end{subfigure} ~ \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{Error_vs_kappa_true_50_alpha_0_9.eps} \caption{} \end{subfigure} \caption{{\protect\footnotesize Average classification error vs $\protect\kappa_t$}} \label{fig5} \vspace{-.5cm} \end{figure} \subsection{Real-world benchmark datasets} We test the OBTL classifier on the \textit{Office} \cite{office} and \textit{Caltech256} \cite{caltech} image datasets, which are widely adopted benchmarks for transfer learning algorithms in the literature. We use exactly the same evaluation setup and data splits as MMDT (Max-Margin Domain Transform) \cite{hoffman2013}. \begin{table*}[h!] \caption{{\protect\footnotesize Semi-supervised accuracy for different source and target domains in the \textit{Office+Caltech256} dataset using SURF features. Domain names are denoted as a: \textit{amazon}, w: \textit{webcam}, d: \textit{dslr}, c: \textit{Caltech256}. The numbers in red show the best accuracy and the numbers in blue show the second best accuracy in each column. The results of the first six methods have been adopted from \protect\cite{ILS2017}.
Similar to \protect\cite{ILS2017}, we have also used the evaluation setup of \protect\cite{hoffman2013} for the OBTL.}} \label{table-1}\centering {\small \addtolength{\tabcolsep}{-3pt} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c||c|} \hline $~$ & a $\rightarrow$ w & a $\rightarrow$ d & a $\rightarrow$ c & w $\rightarrow$ a & w $\rightarrow$ d & w $\rightarrow$ c & d $\rightarrow$ a & d $\rightarrow$ w & d $\rightarrow$ c & c $\rightarrow$ a & c $\rightarrow$ w & c $\rightarrow$ d & Mean \\ \hline\hline 1-NN-t & 34.5 & 33.6 & 19.7 & 29.5 & 35.9 & 18.9 & 27.1 & 33.4 & 18.6 & 29.2 & 33.5 & 34.1 & 29.0 \\ \hline SVM-t & 63.7 & 57.2 & 32.2 & 46.0 & 56.5 & 29.7 & 45.3 & 62.1 & 32.0 & 45.1 & 60.2 & 56.3 & 48.9 \\ \hline HFA \cite{HFA2012} & 57.4 & 55.1 & 31.0 & \textbf{\textcolor{red}{56.5}} & 56.5 & 29.0 & 42.9 & 60.5 & 30.9 & 43.8 & 58.1 & 55.6 & 48.1 \\ \hline MMDT \cite{hoffman2013} & 64.6 & 56.7 & 36.4 & 47.7 & 67.0 & 32.2 & 46.9 & 74.1 & 34.1 & 49.4 & 63.8 & 56.5 & 52.5 \\ \hline CDLS \cite{CDLS2016} & \textbf{\textcolor{blue}{68.7}} & \textbf{\textcolor{red}{60.4}} & 35.3 & 51.8 & 60.7 & 33.5 & 50.7 & 68.5 & 34.9 & 50.9 & \textbf{\textcolor{blue}{66.3}} & \textbf{\textcolor{blue}{59.8}} & 53.5 \\ \hline ILS (1-NN) \cite{ILS2017} & 59.7 & 49.8 & \textbf{\textcolor{red}{43.6}} & 54.3 & \textbf{\textcolor{blue}{70.8}} & \textbf{\textcolor{red}{38.6}} & \textbf{\textcolor{red}{55.0}} & \textbf{\textcolor{blue}{80.1}} & \textbf{\textcolor{red}{41.0}} & \textbf{\textcolor{red}{55.1}} & 62.9 & 56.2 & \textbf{\textcolor{blue}{55.6}} \\ \hline \textbf{OBTL} & \textbf{\textcolor{red}{72.4}} & \textbf{\textcolor{blue}{60.2}} & \textbf{\textcolor{blue}{41.5}} & \textbf{\textcolor{blue}{55.0}} & \textbf{\textcolor{red}{75.0}} & \textbf{\textcolor{blue}{37.4}} & \textbf{\textcolor{blue}{54.4}} & \textbf{\textcolor{red}{83.2}} & \textbf{\textcolor{blue}{40.3}} & \textbf{\textcolor{blue}{54.8}} & \textbf{\textcolor{red}{71.1}} & \textbf{\textcolor{red}{61.5}} &
\textbf{\textcolor{red}{58.9}} \\ \hline \end{tabular} } \end{table*} \begin{table*}[!t] \caption{{\protect\footnotesize The values of the hyperparameter $\alpha$ of the OBTL used in each experiment. $n_t$ and $n_s$ are based on the data splits provided by \protect\cite{hoffman2013}.}} \label{table-2}\centering {\small \addtolength{\tabcolsep}{-3pt} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $~$ & a $\rightarrow$ w & a $\rightarrow$ d & a $\rightarrow$ c & w $\rightarrow$ a & w $\rightarrow$ d & w $\rightarrow$ c & d $\rightarrow$ a & d $\rightarrow$ w & d $\rightarrow$ c & c $\rightarrow$ a & c $\rightarrow$ w & c $\rightarrow$ d \\ \hline\hline $n_t$ & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 \\ \hline $n_s$ & 20 & 20 & 20 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 & 8 \\ \hline $\alpha$ & 0.6 & 0.75 & 0.99 & 0.9 & 0.99 & 0.99 & 0.9 & 0.99 & 0.99 & 0.85 & 0.5 & 0.75 \\ \hline \end{tabular} } \vspace{-.2cm} \end{table*} \noindent $\bullet$ \textbf{Office dataset:} This dataset has images in three domains: \textit{amazon}, \textit{webcam}, and \textit{dslr}. It contains 31 classes of common office objects, such as backpacks, chairs, and keyboards. The three domains \textit{amazon}, \textit{webcam}, and \textit{dslr} contain images from Amazon's website, a webcam, and a digital single-lens reflex (dslr) camera, respectively, with different lighting and backgrounds. SURF \cite{surf} image features of dimension 800 are used in all the domains. \noindent $\bullet$ \textbf{Office + Caltech256 dataset:} This dataset has the $L=10$ classes common to both the \textit{Office} and \textit{Caltech256} datasets, with the same feature dimension $d=800$. According to the data splits of \cite{hoffman2013}, the numbers of training data per class in the source domain are $n_s=20$ for \textit{amazon} and $n_s=8$ for the other three domains, and in the target domain $n_t=3$ for all four domains.
For this four-domain dataset, 20 random train-test splits were created by \cite{hoffman2013}. We run the OBTL classifier on those 20 provided train-test splits and report the average accuracy. Note that the test data come solely from the target domains. The authors of MMDT \cite{hoffman2013} reduce the dimension to $d=20$ using PCA; we follow the same procedure for the OBTL classifier. Following the comparison framework of \cite{ILS2017}, which uses the same evaluation setup as \cite{hoffman2013}, we compare the OBTL's performance in terms of (10-class) accuracy in Table \ref{table-1} with two target-only classifiers and four state-of-the-art semi-supervised transfer learning algorithms (including \cite{ILS2017} itself). The evaluation setup is exactly the same for the OBTL and all the other six methods. As a result, we use the results of \cite{ILS2017} for the first six methods in Table \ref{table-1} and compare them with the OBTL classifier. The six methods are as follows. \noindent $\bullet$ \textbf{1-NN-t and SVM-t:} The Nearest Neighbor (1-NN) and linear SVM classifiers designed using only the target data. \noindent $\bullet$ \textbf{HFA \cite{HFA2012}:} This Heterogeneous Feature Augmentation (HFA) method learns a common latent space between the source and target domains using a max-margin approach and designs a classifier in that common space. \noindent $\bullet$ \textbf{MMDT \cite{hoffman2013}:} This Max-Margin Domain Transform (MMDT) method learns a transformation between the source and target domains and employs a weighted SVM for classification. \noindent $\bullet$ \textbf{CDLS \cite{CDLS2016}:} Cross-Domain Landmark Selection (CDLS) is a semi-supervised heterogeneous domain adaptation method, which derives a domain-invariant feature space for improved classification performance.
\noindent $\bullet$ \textbf{ILS (1-NN) \cite{ILS2017}:} This recent method learns an Invariant Latent Space (ILS) to reduce the discrepancy between the source and target domains, and uses Riemannian optimization techniques to match the statistical properties of samples projected into the latent space from the different domains. In Table \ref{table-1}, we report the accuracy of the OBTL classifier in 12 distinct experiments, with a different source-target pair (source $\rightarrow$ target) in each experiment. We mark the best accuracy in each column in red and the second best in blue. The OBTL classifier achieves either the best or the second best accuracy in all 12 experiments. The last column reports the mean accuracy of each method, averaged over the 12 experiments; the OBTL classifier has the best mean accuracy, with ILS \cite{ILS2017} second best among all the methods. We assume equal prior probabilities for all the classes and use (\ref{OBTL_2}) for the OBTL classifier. \noindent $\bullet$ \textbf{Hyperparameters of the OBTL:} We assume the same hyperparameter values for all 10 classes in each domain, so we can drop the superscript $l$ denoting the class label. We set $\nu=10d=200$ for all the experiments. We choose $\alpha$ separately in each experiment since the relatedness between distinct pairs of domains is different. For $\mathbf{m}_t$ and $\mathbf{m}_s$, we pool all the target and source data, respectively, across the 10 classes and use the sample means of these pooled datasets. We fix $\beta_t=\beta_s=1$ (i.e., $\kappa_t=n_t$ and $\kappa_s=n_s$) and $k_t=k_s=1/\nu=1/200=0.005$. The mean of the Wishart precision matrix $\mathbf{\Lambda}_z$, for $z \in \{s,t\}$, with scale matrix $\mathbf{M}_z$ and $\nu$ degrees of freedom is $\nu\mathbf{M}_z$.
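The Wishart mean identity invoked here, $E(\mathbf{\Lambda}_z)=\nu\mathbf{M}_z$, can be confirmed empirically; a quick sketch (assuming SciPy, with a small dimension and degrees of freedom for speed):

```python
import numpy as np
from scipy.stats import wishart

# With M = (1/nu) * I, the Wishart mean nu * M is the identity matrix,
# mirroring the choice k_t = k_s = 1/nu in the text.
d, nu = 3, 12
M = np.eye(d) / nu
samples = wishart.rvs(df=nu, scale=M, size=20_000, random_state=0)
emp_mean = samples.mean(axis=0)   # should be close to nu * M = I
```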
Consequently, $E(\mathbf{\Lambda}_t)=E(\mathbf{\Lambda}_s)=\mathbf{I}_d$, which is a reasonable choice, since the datasets provided by \cite{hoffman2013} have been standardized. Therefore, the only remaining, and most important, hyperparameter is $\alpha \in (0,1)$, which captures the relatedness between the two domains. Figs. \ref{fig-real}a and \ref{fig-real}b demonstrate that the accuracy is robust for $k_t\in (0.005, 0.02)$ and $k_s\in (0.005, 0.02)$, respectively; they correspond to the two experiments $a \rightarrow w, \alpha = 0.6$ and $w \rightarrow d, \alpha = 0.99$. Figs. \ref{fig-real}c and \ref{fig-real}d show interesting results, echoing the behavior we observed with the synthetic data. In the case of $a \rightarrow w$, accuracy grows smoothly with increasing $\alpha$, reaches its maximum at $\alpha=0.6$, and decreases afterwards. This indicates that the source domain $a$ can help the target domain $w$ only to a limited extent. On the contrary, in Fig. \ref{fig-real}d, for $w \rightarrow d$, accuracy increases monotonically, and the difference in accuracy between $\alpha=0.01$ and $\alpha=0.99$ is substantial. This confirms that the source domain $w$ is closely related to the target domain $d$ and helps it considerably. Interestingly, this coincides with findings in the literature that the domains $w$ and $d$ are highly related. For each experiment, we choose the value of $\alpha$ that gives the best accuracy; these values are shown in Table \ref{table-2}. The values of $\alpha$ in Table \ref{table-2} also reveal the amount of relatedness between any pair of source and target domains. For example, both $w \rightarrow d$ and $d \rightarrow w$ have high relatedness with $\alpha=0.99$, which has been verified by other papers as well \cite{gong2012geodesic}. \begin{figure}[t!]
\centering \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{Accuracy_vs_k_t.eps} \caption{} \end{subfigure} ~ \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{Accuracy_vs_k_s.eps} \caption{} \end{subfigure} \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{Accuracy_vs_alpha_1.eps} \caption{} \end{subfigure} ~ \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{Accuracy_vs_alpha_2.eps} \caption{} \end{subfigure} \caption{{\protect\footnotesize Accuracy in the \textit{Office+Caltech256} dataset versus: (a) $k_t$ when $k_s=1/200$ and for two experiments $a \rightarrow w, \alpha = 0.6$ and $w \rightarrow d, \alpha = 0.99$, (b) $k_s$ when $k_t=1/200$ and for two experiments $a \rightarrow w, \alpha = 0.6$ and $w \rightarrow d, \alpha = 0.99$, (c) $\alpha$ when $k_t=k_s=1/200$ and for the experiment $a \rightarrow w$, (d) $\alpha$ when $k_t=k_s=1/200$ and for the experiment $w \rightarrow d$. }} \label{fig-real} \vspace{-.5cm} \end{figure} \section{Conclusions And Future Work} \label{sec8} We have constructed a Bayesian transfer learning framework to tackle the supervised transfer learning problem. The proposed Optimal Bayesian Transfer Learning (OBTL) classifier can deal with the lack of labeled data in the target domain and is optimal in this new Bayesian framework since it minimizes the expected classification error. We have obtained the closed-form posterior distribution of the target parameters and accordingly the closed-form effective class-conditional densities in the target domain to define the OBTL classifier. As the OBTL's objective function consists of hypergeometric functions of matrix argument, we use the Laplace approximations of those functions to derive a computationally efficient and scalable OBTL classifier, while preserving its superior performance. 
We have compared the performance of the OBTL with its target-only version, the OBC, to see how transferring from the source to the target domain can help. We have tested the OBTL classifier on real-world benchmark image datasets and demonstrated its excellent performance compared to other state-of-the-art domain adaptation methods. This paper considers a Gaussian model, for which we can derive closed-form solutions, as is the case with the OBC. Since many practical problems cannot be approximated by a Gaussian model, an important aspect of OBC development has been the utilization of MCMC methods \cite{knight2014mcmc, knight2015detecting}. In a forthcoming paper, we extend the OBTL setting to count data with a Negative Binomial model, in which parameter inference is performed via MCMC. We will also apply the OBTL in dynamical systems and time series scenarios \cite{Alireza_TCBB,Alireza_TCBB2,Alireza_ICASSP,Alireza_BMC}. We have considered only two domains in this paper, assuming there is a single source domain. Given the good performance of the OBTL classifier with two domains, in future work we will apply it to multi-source transfer learning problems, where we can benefit from the knowledge of several related sources to further improve the target classifier. As in the case of the OBC, a basic engineering aspect of the OBTL is prior construction. This has been studied under different conditions in the context of the OBC: using the data from unused features to infer a prior distribution \cite{dalton2011application}, deriving the prior distribution from models of the data-generating technology \cite{knight2014mcmc}, and applying constraints based on prior knowledge to map the prior knowledge into a prior distribution via optimization \cite{Esfahani_TCBB_1,Esfahani_TCBB_2,Shahin_BMC}.
The methods in \cite{Esfahani_TCBB_1,Esfahani_TCBB_2} are very general and have been placed into a formal mathematical structure in \cite{Shahin_BMC}, where the prior results from an optimization involving the Kullback-Leibler (KL) divergence constrained by conditional probability statements characterizing physical knowledge, such as genetic pathways in genomic medicine. A key focus of our future work will be to extend this general framework to the OBTL, which will require a formulation that incorporates knowledge relating the source and target domains. It should be emphasized that with optimal Bayesian classification, as well as with optimal Bayesian filtering \cite{Lori-IBRF,Qian-OBF,Roozbeh_IBR_Kalman}, the prior distribution is not on the operator to be designed (classifier or filter) but on the underlying scientific model (feature-label distribution, covariance matrix, or observation model) for which the operator is optimized. It is for this reason that uncertainty in the scientific model can be mapped into a prior distribution based on physical laws. \appendices \section{Theorems for Zonal Polynomials and Generalized Hypergeometric Functions of Matrix Argument} \label{appendix:hypergeometric} \begin{theorem} \label{theorem5} \cite{muirhead}: Let $\mathbf{Z}$ be a complex symmetric matrix whose real part is positive-definite, and let $\mathbf{X}$ be an arbitrary complex symmetric matrix. Then \begin{equation} \begin{aligned} \int_{\mathbf{R}>0} \mathrm{etr}(-\mathbf{Z}\mathbf{R}) |\mathbf{R}|^{\alpha-\frac{d+1}{2}} C_\kappa(\mathbf{R}\mathbf{X}) d\mathbf{R} \\ = \Gamma_d(\alpha)(\alpha)_\kappa |\mathbf{Z}|^{-\alpha} C_\kappa(\mathbf{X}\mathbf{Z}^{-1}), \end{aligned} \end{equation} the integration being over the space of positive-definite $d\times d$ matrices, and valid for all complex numbers $\alpha$ satisfying $\mathrm{Re}(\alpha)>\frac{d-1}{2}$. $\Gamma_d(\alpha)$ is the multivariate gamma function defined in (\ref{Gamma_multi}).
\end{theorem} \begin{theorem} \label{theorem6} \cite{Appell}: The zonal polynomials are invariant under orthogonal transformation. That is, for a $d \times d$ symmetric matrix $\mathbf{X}$, \begin{equation} C_\kappa(\mathbf{X}) = C_\kappa(\mathbf{H}\mathbf{X}\mathbf{H}^{^{\prime }}), \end{equation} where $\mathbf{H}$ is an orthogonal matrix of order $d$. If $\mathbf{R}$ is a symmetric positive-definite matrix of order $d$, then \begin{equation} C_\kappa(\mathbf{RX})=C_\kappa(\mathbf{R}^{1/2}\mathbf{X} \mathbf{R}^{1/2}). \end{equation} \end{theorem} As a result, if $\mathbf{R}$ is a symmetric positive-definite matrix, the hypergeometric function has the following property: \begin{equation} \begin{aligned} ~_pF_q(a_1,\cdots,a_p;b_1,\cdots,b_q;\mathbf{RX}) \hspace{3cm} \\ =~_pF_q(a_1,\cdots,a_p;b_1,\cdots,b_q;\mathbf{R}^{1/2}\mathbf{X} \mathbf{R}^{1/2}). \end{aligned} \end{equation} \begin{theorem} \label{theorem7} \cite{gupta2016properties}: If $\mathbf{Z}>0$ and $\mathrm{Re}(\alpha)>\frac{d-1}{2}$, and $\mathbf{X}$ is a $d \times d$ symmetric matrix, we have \begin{equation} \begin{aligned} &\int_{\mathbf{R}>0} \mathrm{etr}(-\mathbf{ZR}) |\mathbf{R}|^{\alpha-\frac{d+1}{2}} \\ &\hspace{2cm} \times ~_pF_q(a_1,\cdots,a_p;b_1,\cdots,b_q;\mathbf{RX}) d \mathbf{R} \\ &= \int_{\mathbf{R}>0} \mathrm{etr}(-\mathbf{ZR}) |\mathbf{R}|^{\alpha-\frac{d+1}{2}} \\ &\hspace{1.5cm} \times ~_pF_q(a_1,\cdots,a_p;b_1,\cdots,b_q;\mathbf{R}^{1/2}\mathbf{X} \mathbf{R}^{1/2}) d \mathbf{R} \\ &= \Gamma_d(\alpha) |\mathbf{Z}|^{-\alpha} ~_{p+1}F_q(a_1,\cdots,a_p,\alpha;b_1,\cdots,b_q;\mathbf{X}\mathbf{Z}^{-1}). \end{aligned} \notag \end{equation} \end{theorem} \section{Proof of Theorem \ref{thm:posterior}} \label{appendix:posterior} We require the following lemma.
\begin{lemma} \label{Lemma1} \cite{muirhead} If $\mathcal{D}=\{\mathbf{x}_1,\cdots,\mathbf{x}_n\}$, where $\mathbf{x}_i$ is a $d \times 1$ vector and $\mathbf{x}_i \sim \mathcal{N}(\mathbf{\mu},(\mathbf{\Lambda})^{-1})$, for $i=1,\cdots,n$, and $(\mathbf{\mu},\mathbf{\Lambda})$ has a Gaussian-Wishart prior, such that $\mathbf{\mu} |\mathbf{\Lambda} \sim \mathcal{N}(\mathbf{m},(\kappa \mathbf{\Lambda})^{-1})$ and $\mathbf{\Lambda} \sim W_d(\mathbf{M},\nu)$, then the posterior of $(\mathbf{\mu},\mathbf{\Lambda})$ upon observing $\mathcal{D}$ is also a Gaussian-Wishart distribution: \begin{equation} \begin{aligned} \label{lemma1} \mathbf{\mu} |\mathbf{\Lambda} , \mathcal{D} &\sim \mathcal{N}(\mathbf{m}_n, (\kappa_n\mathbf{\Lambda})^{-1}), \\ \mathbf{\Lambda} | \mathcal{D} &\sim W_d(\mathbf{M}_n,\nu_n), \end{aligned} \end{equation} where \begin{equation} \begin{aligned} & \kappa_n = \kappa + n, \\ & \nu_n = \nu + n, \\ & \mathbf{m}_n = \frac{\kappa \mathbf{m} + n \bar{\mathbf{x}}}{\kappa + n}, \\ & \mathbf{M}_n^{-1} = \mathbf{M}^{-1} + \mathbf{S} + \frac{\kappa n}{\kappa + n} (\mathbf{m} -\bar{\mathbf{x}})(\mathbf{m} -\bar{\mathbf{x}})^{'}, \end{aligned} \end{equation} depending on the sample mean and covariance matrix \begin{equation} \begin{aligned} & \bar{\mathbf{x}} = \frac{1}{n} \sum_{i=1}^n \mathbf{x}_i, \\ & \mathbf{S} = \sum_{i=1}^n (\mathbf{x}_i - \bar{\mathbf{x}}) (\mathbf{x}_i - \bar{\mathbf{x}}) ^{^{\prime }}. \end{aligned} \end{equation} \end{lemma} We now provide the proof. From (\ref{x_s_x_t}), for each domain $z\in\{s,t\}$, \begin{equation} p(\mathcal{D}_z^l|\mathbf{\mu}_z^l,\mathbf{\Lambda}_z^l) = (2\pi)^{-\frac{d n_z^l}{2}} ~ \left|\mathbf{\Lambda}_z^l\right|^{\frac{n_z^l}{2}} \exp \left(-\frac{1}{2} \mathbf{Q}_z^l \right), \label{D} \end{equation} where $\mathbf{Q}_z^l = \sum_{i=1}^{n_z^l}\left(\mathbf{x}_{z,i}^l-\mathbf{\mu}_z^l \right)^{^{\prime }} \mathbf{\Lambda}_z^l \left(\mathbf{x}_{z,i}^l-\mathbf{\mu}_z^l \right)$.
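As a numerical companion to Lemma \ref{Lemma1}, the conjugate Gaussian-Wishart update can be coded directly from the formulas above (a sketch assuming NumPy; the function name is ours, introduced for illustration):

```python
import numpy as np

def gaussian_wishart_update(X, m, kappa, M_inv, nu):
    """Conjugate update of Lemma 1: returns (m_n, kappa_n, M_n_inv, nu_n)."""
    n, d = X.shape
    xbar = X.mean(axis=0)
    S = (X - xbar).T @ (X - xbar)              # scatter matrix
    kappa_n = kappa + n
    nu_n = nu + n
    m_n = (kappa * m + n * xbar) / kappa_n     # weighted average of prior and sample means
    diff = (m - xbar).reshape(-1, 1)
    M_n_inv = M_inv + S + (kappa * n / kappa_n) * (diff @ diff.T)
    return m_n, kappa_n, M_n_inv, nu_n

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
m_n, kappa_n, M_n_inv, nu_n = gaussian_wishart_update(
    X, m=np.zeros(3), kappa=1.0, M_inv=np.eye(3), nu=6)
```

Since $\mathbf{M}^{-1}$ is positive definite and the remaining terms are positive semi-definite, the updated $\mathbf{M}_n^{-1}$ remains positive definite.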
Moreover, from (\ref{mu_s_mu_t}), for each domain $z\in\{s,t\}$, \begin{eqnarray} p\left(\mathbf{\mu}_z^l | \mathbf{\Lambda}_z^l\right) = (2\pi)^{-\frac{d}{2}} \left(\kappa_z^l\right)^{\frac{d}{2}} \left|\mathbf{\Lambda}_z^l\right|^{\frac{1}{2}} \hspace{2cm} \notag \\ \times\exp \left(-\frac{\kappa_z^l}{2}\left(\mathbf{\mu}_z^l - \mathbf{m}_z^l\right)^{^{\prime }}\mathbf{\Lambda}_z^l \left(\mathbf{\mu}_z^l - \mathbf{m}_z^l\right) \right). \label{mu} \end{eqnarray} From (\ref{joint}), (\ref{posterior5}), (\ref{D}), and (\ref{mu}), \begin{equation} \label{post_t} \begin{aligned} &p(\mathbf{\mu}_t^l,\mathbf{\Lambda}_t^l |\mathcal{D}_t^l,\mathcal{D}_s^l) \propto \left|\mathbf{\Lambda}_t^l\right|^{\frac{n_t^l}{2}} \exp \left(-\frac{1}{2} \mathbf{Q}_t^l \right) \left|\mathbf{\Lambda}_t^l\right|^{\frac{1}{2}} \\ &\times \exp \left(-\frac{\kappa_t^l}{2}\left(\mathbf{\mu}_t^l - \mathbf{m}_t^l\right)^{'}\mathbf{\Lambda}_t^l \left(\mathbf{\mu}_t^l - \mathbf{m}_t^l\right) \right) \\ &\times \left|\mathbf{\Lambda}_{t}^l\right|^{\frac{\nu^l-d-1}{2}} \mathrm{etr}\left(-\frac{1}{2} \left({\left(\mathbf{M}_{t}^l\right)}^{-1} + {\mathbf{F}^l}^{'}\mathbf{C}^l \mathbf{F}^l\right)\mathbf{\Lambda}_{t}^l\right) \\ &\times \int_{\mathbf{\mu}_s^l,\mathbf{\Lambda}_s^l} \left\lbrace \left|\mathbf{\Lambda}_s^l\right|^{\frac{n_s^l}{2}} \exp \left(-\frac{1}{2} \mathbf{Q}_s^l \right) \left|\mathbf{\Lambda}_s^l\right|^{\frac{1}{2}} \right.\\ &\times \exp \left(-\frac{\kappa_s^l}{2}\left(\mathbf{\mu}_s^l - \mathbf{m}_s^l\right)^{'}\mathbf{\Lambda}_s^l \left(\mathbf{\mu}_s^l - \mathbf{m}_s^l\right) \right) \\ &\times \left|\mathbf{\Lambda}_{s}^l\right|^{\frac{\nu^l-d-1}{2}} \mathrm{etr}\left(-\frac{1}{2} {\left(\mathbf{C}^l\right)}^{-1} \mathbf{\Lambda}_{s}^l\right) \\ &\left.
\times ~_0F_1\left(\frac{\nu^l}{2}; \frac{1}{4} {\mathbf{\Lambda}_{s}^l}^{\frac{1}{2}} \mathbf{F}^l \mathbf{\Lambda}_{t}^l {\mathbf{F}^l}^{'}{\mathbf{\Lambda}_{s}^l}^{\frac{1}{2}} \right) \right\rbrace d\mathbf{\mu}_s^l d\mathbf{\Lambda}_s^l. \end{aligned} \end{equation} Using Lemma \ref{Lemma1} we can simplify (\ref{post_t}) as \begin{equation} \label{prop2} \begin{aligned} &p(\mathbf{\mu}_t^l,\mathbf{\Lambda}_t^l |\mathcal{D}_t^l,\mathcal{D}_s^l) \\ &\propto \left|\mathbf{\Lambda}_t^l\right|^{\frac{1}{2}} \exp \left(-\frac{\kappa_{t,n}^l}{2}\left(\mathbf{\mu}_t^l - \mathbf{m}_{t,n}^l\right)^{'}\mathbf{\Lambda}_t^l \left(\mathbf{\mu}_t^l - \mathbf{m}_{t,n}^l\right) \right) \\ &\times \left|\mathbf{\Lambda}_{t}^l\right|^{\frac{\nu^l + n_t^l -d-1}{2}} \mathrm{etr}\left(-\frac{1}{2} {\left(\mathbf{T}_t^l\right)}^{-1}\mathbf{\Lambda}_{t}^l\right) \\ & \int_{\mathbf{\mu}_s^l,\mathbf{\Lambda}_s^l} \left\lbrace \left|\mathbf{\Lambda}_s^l\right|^{\frac{1}{2}} \exp \left(-\frac{\kappa_{s,n}^l}{2}\left(\mathbf{\mu}_s^l - \mathbf{m}_{s,n}^l\right)^{'}\mathbf{\Lambda}_s^l \left(\mathbf{\mu}_s^l - \mathbf{m}_{s,n}^l\right) \right) \right. \\ & \times \left|\mathbf{\Lambda}_{s}^l\right|^{\frac{\nu^l + n_s^l -d-1}{2}} \mathrm{etr}\left(-\frac{1}{2} {\left(\mathbf{T}_s^l\right)}^{-1}\mathbf{\Lambda}_{s}^l\right) \\ &\left. 
\times ~_0F_1\left(\frac{\nu^l}{2}; \frac{1}{4} {\mathbf{\Lambda}_{s}^l}^{\frac{1}{2}} \mathbf{F}^l \mathbf{\Lambda}_{t}^l {\mathbf{F}^l}^{'}{\mathbf{\Lambda}_{s}^l}^{\frac{1}{2}} \right) \right\rbrace d\mathbf{\mu}_s^l d\mathbf{\Lambda}_s^l, \end{aligned} \end{equation} where \begin{equation} \label{app:const1} \begin{aligned} &\kappa_{t,n}^l = \kappa_t^l + n_t^l, \hspace{1.5cm} \kappa_{s,n}^l = \kappa_s^l + n_s^l, \\ &\mathbf{m}_{t,n}^l = \frac{\kappa_t^l \mathbf{m}_t^l + n_t^l \bar{\mathbf{x}}_t^l}{\kappa_t^l+n_t^l}, ~~~~~ \mathbf{m}_{s,n}^l = \frac{\kappa_s^l \mathbf{m}_s^l + n_s^l \bar{\mathbf{x}}_s^l}{\kappa_s^l+n_s^l}, \\ &{\left(\mathbf{T}_t^l\right)}^{-1} = {\left(\mathbf{M}_{t}^l\right)}^{-1} + {\mathbf{F}^l}^{'}\mathbf{C}^l \mathbf{F}^l + \mathbf{S}_t^l \\ & \hspace{2cm} + \frac{\kappa_t^l n_t^l}{\kappa_t^l + n_t^l} (\mathbf{m}_t^l -\bar{\mathbf{x}}_t^l)(\mathbf{m}_t^l -\bar{\mathbf{x}}_t^l)^{'}, \\ &{\left(\mathbf{T}_s^l\right)}^{-1} = {\left(\mathbf{C}^l\right)}^{-1} + \mathbf{S}_s^l + \frac{\kappa_s^l n_s^l}{\kappa_s^l + n_s^l} (\mathbf{m}_s^l -\bar{\mathbf{x}}_s^l)(\mathbf{m}_s^l -\bar{\mathbf{x}}_s^l)^{'}, \end{aligned} \end{equation} with sample means and covariances for $z\in\{s,t\}$ as \begin{equation} \label{const2} \bar{\mathbf{x}}_z^l = \frac{1}{n_z^l} \sum_{i=1}^{n_z^l} \mathbf{x}% _{z,i}^l, ~~~ \mathbf{S}_z^l = \sum_{i=1}^{n_z^l} \left(\mathbf{x}_{z,i}^l - \bar{\mathbf{x}}_z^l \right)\left(\mathbf{x}_{z,i}^l - \bar{\mathbf{x}}_z^l \right)^{^{\prime }}. 
\notag \end{equation} Using the equation \begin{equation} \label{int_mu} \int_{\mathbf{x}} \exp \left(-\frac{1}{2}(\mathbf{x}-\mathbf{\mu})^{^{\prime }}\mathbf{\Lambda} (\mathbf{x}-\mathbf{\mu}) \right) d\mathbf{x} = (2\pi)^{\frac{d}{2}} |\mathbf{\Lambda}|^{-\frac{1}{2}}, \end{equation} and integrating out $\mathbf{\mu}_s^l$ in (\ref{prop2}) yields \begin{equation} \label{prop3} \begin{aligned} &p(\mathbf{\mu}_t^l,\mathbf{\Lambda}_t^l |\mathcal{D}_t^l,\mathcal{D}_s^l) \\ &\propto \left|\mathbf{\Lambda}_t^l\right|^{\frac{1}{2}} \exp \left(-\frac{\kappa_{t,n}^l}{2}\left(\mathbf{\mu}_t^l - \mathbf{m}_{t,n}^l\right)^{'}\mathbf{\Lambda}_t^l \left(\mathbf{\mu}_t^l - \mathbf{m}_{t,n}^l\right) \right) \\ &\times \left|\mathbf{\Lambda}_{t}^l\right|^{\frac{\nu^l + n_t^l -d-1}{2}} \mathrm{etr}\left(-\frac{1}{2} {\left(\mathbf{T}_t^l\right)}^{-1}\mathbf{\Lambda}_{t}^l\right) \\ &\times \int_{\mathbf{\Lambda}_s^l} \left\lbrace \left|\mathbf{\Lambda}_{s}^l\right|^{\frac{\nu^l + n_s^l -d-1}{2}} \mathrm{etr}\left(-\frac{1}{2} {\left(\mathbf{T}_s^l\right)}^{-1}\mathbf{\Lambda}_{s}^l\right) \right. \\ & \hspace{1.5cm} \times \left. _0F_1\left(\frac{\nu^l}{2}; \frac{1}{4} {\mathbf{\Lambda}_{s}^l}^{\frac{1}{2}} \mathbf{F}^l \mathbf{\Lambda}_{t}^l {\mathbf{F}^l}^{'}{\mathbf{\Lambda}_{s}^l}^{\frac{1}{2}} \right) \right\rbrace d\mathbf{\Lambda}_s^l. \end{aligned} \end{equation} The integral, $I$, in (\ref{prop3}) can be evaluated using Theorem \ref{theorem7} as \begin{equation} \begin{aligned} & I = \Gamma_d\left(\frac{\nu^l + n_s^l}{2}\right) \\ &\times \left| 2\mathbf{T}_s^l\right|^{\frac{\nu^l + n_s^l}{2}} ~_1F_1\left(\frac{\nu^l + n_s^l}{2}; \frac{\nu^l}{2}; \frac{1}{2} \mathbf{F}^l \mathbf{\Lambda}_{t}^l {\mathbf{F}^l}^{'} \mathbf{T}_s^l \right), \end{aligned} \end{equation} where $_1F_1(a;b;\mathbf{X})$ is the confluent hypergeometric function with the matrix argument $\mathbf{X}$.
As a result, (\ref{prop3}) becomes \begin{equation} \label{app:prop4} \begin{aligned} &p(\mathbf{\mu}_t^l,\mathbf{\Lambda}_t^l |\mathcal{D}_t^l,\mathcal{D}_s^l) = \\ & A^l \left|\mathbf{\Lambda}_t^l\right|^{\frac{1}{2}} \exp \left(-\frac{\kappa_{t,n}^l}{2}\left(\mathbf{\mu}_t^l - \mathbf{m}_{t,n}^l\right)^{'}\mathbf{\Lambda}_t^l \left(\mathbf{\mu}_t^l - \mathbf{m}_{t,n}^l\right) \right) \\ &\times \left|\mathbf{\Lambda}_{t}^l\right|^{\frac{\nu^l + n_t^l -d-1}{2}} \mathrm{etr}\left(-\frac{1}{2} {\left(\mathbf{T}_t^l\right)}^{-1}\mathbf{\Lambda}_{t}^l\right) \\ & \times ~_1F_1\left(\frac{\nu^l + n_s^l}{2}; \frac{\nu^l}{2}; \frac{1}{2} \mathbf{F}^l \mathbf{\Lambda}_{t}^l {\mathbf{F}^l}^{'} \mathbf{T}_s^l \right), \end{aligned} \end{equation} where the constant of proportionality, $A^l$, makes the posterior $p(\mathbf{\mu}_t^l,\mathbf{\Lambda}_t^l |\mathcal{D}_t^l,\mathcal{D}_s^l)$ integrate to one with respect to $\mathbf{\mu}_t^l$ and $\mathbf{\Lambda}_t^l$. Hence, \begin{equation} \label{A1} \begin{aligned} &{\left(A^l\right)}^{-1} = \int_{\mathbf{\Lambda}_t^l} \left|\mathbf{\Lambda}_{t}^l\right|^{\frac{\nu^l + n_t^l -d-1}{2}} \mathrm{etr}\left(-\frac{1}{2} {\left(\mathbf{T}_t^l\right)}^{-1}\mathbf{\Lambda}_{t}^l\right) \left|\mathbf{\Lambda}_t^l\right|^{\frac{1}{2}} \\ & \times \int_{\mathbf{\mu}_t^l} \exp \left(-\frac{\kappa_{t,n}^l}{2}\left(\mathbf{\mu}_t^l - \mathbf{m}_{t,n}^l\right)^{'}\mathbf{\Lambda}_t^l \left(\mathbf{\mu}_t^l - \mathbf{m}_{t,n}^l\right) \right) d\mathbf{\mu}_t^l \\ &\times _1F_1\left(\frac{\nu^l + n_s^l}{2}; \frac{\nu^l}{2}; \frac{1}{2} \mathbf{F}^l \mathbf{\Lambda}_{t}^l {\mathbf{F}^l}^{'} \mathbf{T}_s^l \right)d\mathbf{\Lambda}_t^l. \end{aligned} \end{equation} Using (\ref{int_mu}), the inner integral equals $(2\pi)^{\frac{d}{2}} |\kappa_{t,n}^l \mathbf{\Lambda}_t^l|^{-\frac{1}{2}}=\left(\frac{2\pi}{\kappa_{t,n}^l}\right)^{\frac{d}{2}} |\mathbf{\Lambda}_t^l|^{-\frac{1}{2}}$.
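As a quick numerical aside, in the scalar case $d=1$ the identity (\ref{int_mu}) reduces to $\int \exp(-\frac{\lambda}{2}(x-\mu)^2)\,dx = \sqrt{2\pi/\lambda}$, which is easy to verify by quadrature (a sketch, not part of the derivation; the function name is ours):

```python
import math

def gauss_integral_1d(lam, mu, lo=-50.0, hi=50.0, n=20000):
    """Trapezoidal quadrature of exp(-lam/2 * (x - mu)^2) over [lo, hi],
    wide enough that the truncated tails are negligible."""
    h = (hi - lo) / n
    total = 0.5 * (math.exp(-0.5 * lam * (lo - mu) ** 2) +
                   math.exp(-0.5 * lam * (hi - mu) ** 2))
    for i in range(1, n):
        x = lo + i * h
        total += math.exp(-0.5 * lam * (x - mu) ** 2)
    return total * h

lam, mu = 2.5, 1.0
numeric = gauss_integral_1d(lam, mu)
closed_form = math.sqrt(2.0 * math.pi / lam)  # (2*pi)^{d/2} |Lambda|^{-1/2} at d=1
print(numeric, closed_form)
```

The trapezoidal rule is essentially exact here because the integrand decays to zero well inside the truncated interval.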
Hence, \begin{equation} \label{A2} \begin{aligned} {\left(A^l\right)}^{-1} = \left(\frac{2\pi}{\kappa_{t,n}^l}\right)^{\frac{d}{2}} \int_{\mathbf{\Lambda}_t^l} \left|\mathbf{\Lambda}_{t}^l\right|^{\frac{\nu^l + n_t^l -d-1}{2}} \mathrm{etr}\left(-\frac{1}{2} {\left(\mathbf{T}_t^l\right)}^{-1}\mathbf{\Lambda}_{t}^l\right) \\ \times ~_1F_1\left(\frac{\nu^l + n_s^l}{2}; \frac{\nu^l}{2}; \frac{1}{2} \mathbf{F}^l \mathbf{\Lambda}_{t}^l {\mathbf{F}^l}^{'} \mathbf{T}_s^l \right)d\mathbf{\Lambda}_t^l. \hspace{1cm} \end{aligned} \end{equation} With the variable change $\Omega = \mathbf{F}^l \mathbf{\Lambda}_{t}^l {% \mathbf{F}^l}^{^{\prime }}$, we have $d\Omega=|\mathbf{F}^l|^{d+1} d\mathbf{% \Lambda}_t^l$ and $\mathbf{\Lambda}_t^l = {\left(\mathbf{F}^l\right)}^{-1} \Omega \left({\mathbf{F}^{l}}^{^{\prime }}\right)^{-1}$. Since $\mathrm{tr}(% \mathbf{ABCD})=\mathrm{tr}(\mathbf{BCDA})=\mathrm{tr}(\mathbf{CDAB})=\mathrm{% tr}(\mathbf{DABC})$ and $|\mathbf{ABC}|=|\mathbf{A}||\mathbf{B}||\mathbf{C}|$% , $A^l$ can be derived as \begin{equation} \label{app:A4} \begin{aligned} &{\left(A^l\right)}^{-1} = \left(\frac{2\pi}{\kappa_{t,n}^l}\right)^{\frac{d}{2}} |\mathbf{F}^l|^{-\left(\nu^l + n_t^l \right)} \int_{\Omega} \left\lbrace |\Omega |^{\frac{\nu^l + n_t^l -d-1}{2}} \right. 
\\ & \hspace{.5cm} \times\mathrm{etr}\left(-\frac{1}{2} {\left({\mathbf{F}^{l}}^{'}\right)}^{-1} {\left(\mathbf{T}_t^l\right)}^{-1} {\mathbf{F}^l}^{-1} \Omega \right) \\ &\left.\hspace{.5cm} \times ~_1F_1\left(\frac{\nu^l + n_s^l}{2}; \frac{\nu^l}{2}; \frac{1}{2} \Omega \mathbf{T}_s^l \right) \right\rbrace d\Omega \\ &= \left(\frac{2\pi}{\kappa_{t,n}^l}\right)^{\frac{d}{2}} 2^{\frac{d\left(\nu^l+n_t^l \right)}{2}} \Gamma_d \left(\frac{\nu^l+n_t^l}{2} \right) \left|\mathbf{T}_t^l\right|^{\frac{\nu^l + n_t^l}{2}} \\ & ~~~~~~ \times ~_2F_1\left(\frac{\nu^l + n_s^l}{2}, \frac{\nu^l + n_t^l}{2}; \frac{\nu^l}{2}; \mathbf{T}_s^l\mathbf{F}^l \mathbf{T}_t^l {\mathbf{F}^l}^{'} \right), \end{aligned} \end{equation} where the second equality follows from Theorem \ref{theorem7}, and $% _2F_1(a,b;c;\mathbf{X})$ is the Gauss hypergeometric function with the matrix argument $\mathbf{X}$. As such, we have derived the closed-form posterior distribution of the target parameters $(\mathbf{\mu}_t^l,\mathbf{% \Lambda}_t^l)$ in (\ref{prop4}), where ${A^l}$ is given by (\ref{A4}). \section{Proof of Theorem \ref{thm-effective}} \label{appendix:effective} The likelihood $p(\mathbf{x}|\mathbf{% \mu }_{t}^{l},\mathbf{\Lambda }_{t}^{l})$ and posterior $p(\mathbf{\mu }% _{t}^{l},\mathbf{\Lambda }_{t}^{l}|\mathcal{D}_{t}^{l},\mathcal{D}_{s}^{l})$ are given in (\ref{x_s_x_t}) and (\ref{prop4}), respectively. Hence, \begin{equation} \begin{aligned} & p(\mathbf{x} | l) = (2\pi)^{-\frac{d}{2}} A^l \int_{\mathbf{\mu}_t^l,\mathbf{\Lambda}_t^l} \left\lbrace |\mathbf{\Lambda}_t^l|^{\frac{1}{2}} \right. 
\\ & \times \exp\left(-\frac{1}{2} \left(\mathbf{x}-\mathbf{\mu}_t^l \right)^{'}\mathbf{\Lambda}_t^l \left(\mathbf{x}-\mathbf{\mu}_t^l \right) \right) \\ & \times \left|\mathbf{\Lambda}_t^l\right|^{\frac{1}{2}} \exp \left(-\frac{\kappa_{t,n}^l}{2}\left(\mathbf{\mu}_t^l - \mathbf{m}_{t,n}^l\right)^{'}\mathbf{\Lambda}_t^l \left(\mathbf{\mu}_t^l - \mathbf{m}_{t,n}^l\right) \right) \\ & \times \left|\mathbf{\Lambda}_{t}^l\right|^{\frac{\nu^l + n_t^l -d-1}{2}} \mathrm{etr}\left(-\frac{1}{2} {\left(\mathbf{T}_t^l\right)}^{-1}\mathbf{\Lambda}_{t}^l\right) \\ & \left. \times ~_1F_1\left(\frac{\nu^l + n_s^l}{2}; \frac{\nu^l}{2}; \frac{1}{2} \mathbf{F}^l \mathbf{\Lambda}_{t}^l {\mathbf{F}^l}^{'} \mathbf{T}_s^l \right) \right\rbrace d\mathbf{\mu}_t^l d\mathbf{\Lambda}_t^l. \end{aligned} \label{eff1} \end{equation}% Similarly, we can simplify (\ref{eff1}) as \begin{equation} \begin{aligned} & p(\mathbf{x}| l) = (2\pi)^{-\frac{d}{2}} A^l \int_{\mathbf{\mu}_t^l,\mathbf{\Lambda}_t^l} \left\lbrace |\mathbf{\Lambda}_t^l|^{\frac{1}{2}} \right. \\ & \times \exp\left(-\frac{\kappa_\mathbf{x}^l}{2} \left(\mathbf{\mu}_t^l-\mathbf{m}_\mathbf{x}^l \right)^{'}\mathbf{\Lambda}_t^l \left(\mathbf{\mu}_t^l-\mathbf{m}_\mathbf{x}^l \right) \right) \\ & \times \left|\mathbf{\Lambda}_{t}^l\right|^{\frac{\nu^l + n_t^l +1 -d-1}{2}} \mathrm{etr}\left(-\frac{1}{2} {\left(\mathbf{T}_\mathbf{x}^l\right)}^{-1}\mathbf{\Lambda}_{t}^l\right) \\ & \left. 
\times ~_{1}F_{1}\left( \frac{\nu ^{l}+n_{s}^{l}}{2};\frac{\nu ^{l}}{2};\frac{1}{2}\mathbf{F}^{l}\mathbf{\Lambda }_{t}^{l}{\mathbf{F}^{l}}^{^{\prime }}\mathbf{T}_{s}^{l}\right) \right\} d\mathbf{\mu }_{t}^{l}d\mathbf{\Lambda }_{t}^{l}, \end{aligned} \label{eff2} \end{equation} where \begin{equation} \begin{aligned} & \kappa_\mathbf{x}^l = \kappa_{t,n}^l + 1 = \kappa_t^l + n_t^l + 1, ~~~~~ \mathbf{m}_\mathbf{x}^l = \frac{\kappa_{t,n}^l \mathbf{m}_{t,n}^l + \mathbf{x}}{\kappa_{t,n}^l+1}, \\ & {\left(\mathbf{T}_\mathbf{x}^l\right)}^{-1} = {\left(\mathbf{T}_t^l\right)}^{-1} + \frac{\kappa_{t,n}^l}{\kappa_{t,n}^l + 1} \left(\mathbf{m}_{t,n}^l-\mathbf{x} \right) \left(\mathbf{m}_{t,n}^l-\mathbf{x} \right)^{'}. \end{aligned} \label{app:update_1} \end{equation} The integration in (\ref{eff2}) is similar to the one in (\ref{A1}). As a result, using (\ref{A4}), \begin{equation} \begin{aligned} & p(\mathbf{x}| l) = (2\pi)^{-\frac{d}{2}} A^l \left(\frac{2\pi}{\kappa_\mathbf{x}^l}\right)^{\frac{d}{2}} 2^{\frac{d\left(\nu^l+n_t^l + 1\right)}{2}} \Gamma_d \left(\frac{\nu^l+n_t^l + 1}{2} \right) \\ & \left|\mathbf{T}_\mathbf{x}^l\right|^{\frac{\nu^l + n_t^l + 1}{2}} ~_2F_1\left(\frac{\nu^l + n_s^l}{2}, \frac{\nu^l + n_t^l + 1}{2}; \frac{\nu^l}{2}; \mathbf{T}_s^l\mathbf{F}^l \mathbf{T}_\mathbf{x}^l {\mathbf{F}^l}^{'} \right). \end{aligned} \label{eff3} \end{equation} Substituting the value of $A^{l}$ yields the effective class-conditional density. We denote $O_{\mathrm{OBTL}}(\mathbf{x}|l)=p(\mathbf{x}|l)$, since it is the objective function for the OBTL classifier.
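In implementation terms, the update (\ref{app:update_1}) is a rank-one refresh of the target hyperparameters for each test point $\mathbf{x}$. A minimal pure-Python sketch ($d=2$ for concreteness; all names and numbers are illustrative):

```python
def posterior_predictive_update(kappa_tn, m_tn, T_t_inv, x):
    """Absorb a single test point x into the hyperparameters, following
    kappa_x = kappa_{t,n} + 1,
    m_x     = (kappa_{t,n} m_{t,n} + x) / (kappa_{t,n} + 1),
    T_x^{-1} = T_t^{-1} + kappa_{t,n}/(kappa_{t,n}+1) (m_{t,n}-x)(m_{t,n}-x)'."""
    d = len(m_tn)
    kappa_x = kappa_tn + 1.0
    m_x = [(kappa_tn * m_tn[i] + x[i]) / kappa_x for i in range(d)]
    diff = [m_tn[i] - x[i] for i in range(d)]
    w = kappa_tn / kappa_x
    # rank-one outer-product correction to the inverse scale matrix
    T_x_inv = [[T_t_inv[i][j] + w * diff[i] * diff[j] for j in range(d)]
               for i in range(d)]
    return kappa_x, m_x, T_x_inv

# toy numbers: kappa_{t,n} = 3, m_{t,n} = (1, 2), T_t^{-1} = I, x = (2, 0)
kx, mx, Txinv = posterior_predictive_update(3.0, [1.0, 2.0],
                                            [[1.0, 0.0], [0.0, 1.0]],
                                            [2.0, 0.0])
print(kx, mx, Txinv)
```

Since the correction is rank-one, a full implementation could maintain Cholesky factors of $\mathbf{T}_\mathbf{x}^l$ instead of the explicit matrix.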
As such, \begin{equation} \begin{aligned} &O_{\mathrm{OBTL}}(\mathbf{x}| l) = \pi^{-\frac{d}{2}} \left(\frac{\kappa_{t,n}^l}{\kappa_\mathbf{x}^l} \right)^{\frac{d}{2}} \Gamma_d \left(\frac{\nu^l+n_t^l + 1}{2} \right) \\ & \times \Gamma_d^{-1} \left(\frac{\nu^l+n_t^l}{2} \right) \left|\mathbf{T}_\mathbf{x}^l\right|^{\frac{\nu^l + n_t^l + 1}{2}} \left|\mathbf{T}_t^l\right|^{-\frac{\nu^l + n_t^l}{2}} \\ & \times ~_2F_1\left(\frac{\nu^l + n_s^l}{2}, \frac{\nu^l + n_t^l + 1}{2}; \frac{\nu^l}{2}; \mathbf{T}_s^l\mathbf{F}^l \mathbf{T}_\mathbf{x}^l {\mathbf{F}^l}^{'} \right) \\ & \times ~_2F_1^{-1}\left(\frac{\nu^l + n_s^l}{2}, \frac{\nu^l + n_t^l}{2}; \frac{\nu^l}{2}; \mathbf{T}_s^l\mathbf{F}^l \mathbf{T}_t^l {\mathbf{F}^l}^{'} \right). \end{aligned} \label{app:eff4} \end{equation} \section{Laplace Approximation of the Gauss Hypergeometric Function of Matrix Argument} \label{appendix:Laplace} The Gauss hypergeometric function has the following integral representation: \begin{equation} \begin{aligned} &~_2F_1(a,b;c;\mathbf{X})= B_d^{-1}(a,c-a) \\ & \times \int_{0_d<\mathbf{Y}<\mathbf{I}_d} |\mathbf{Y}|^{a-\frac{d+1}{2}} |\mathbf{I}_d -\mathbf{Y}|^{c-a-\frac{d+1}{2}} |\mathbf{I}_d - \mathbf{X}\mathbf{Y}|^{-b} d\mathbf{Y}, \end{aligned} \label{int_rep} \end{equation} which is valid under the following conditions: $\mathbf{X}\in \mathbf{C}^{d\times d}$ is symmetric and satisfies $\mathrm{Re}(\mathbf{X})<\mathbf{I}_{d}$, $\mathrm{Re}(a)>\frac{d-1}{2}$, and $\mathrm{Re}(c-a)>\frac{d-1}{2}$. $B_{d}(\alpha ,\beta )$ is the multivariate beta function \begin{equation} B_{d}(\alpha ,\beta )=\frac{\Gamma _{d}(\alpha )\Gamma _{d}(\beta )}{\Gamma _{d}(\alpha +\beta )}, \end{equation} where $\Gamma _{d}(\alpha )$ is the multivariate gamma function defined in (\ref{Gamma_multi}).
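For $d=1$ the representation (\ref{int_rep}) reduces to Euler's classical integral, which offers an easy numerical cross-check against the defining power series of $_2F_1$. A sketch (assuming the standard series definition; function names are ours):

```python
import math

def gamma_d(alpha, d):
    """Multivariate gamma: Gamma_d(a) = pi^{d(d-1)/4} prod_j Gamma(a + (1-j)/2)."""
    out = math.pi ** (d * (d - 1) / 4.0)
    for j in range(1, d + 1):
        out *= math.gamma(alpha + (1 - j) / 2.0)
    return out

def beta_d(alpha, beta, d):
    """Multivariate beta via Gamma_d."""
    return gamma_d(alpha, d) * gamma_d(beta, d) / gamma_d(alpha + beta, d)

def hyp2f1_series(a, b, c, x, terms=200):
    """Classical 2F1 by its power series (converges for |x| < 1)."""
    s, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / (c + n) * x / (n + 1)
        s += term
    return s

def hyp2f1_euler(a, b, c, x, n=50000):
    """(int_rep) at d = 1, i.e. Euler's integral
    B(a, c-a)^{-1} int_0^1 y^{a-1} (1-y)^{c-a-1} (1 - x y)^{-b} dy
    by the midpoint rule."""
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        y = (i + 0.5) * h
        s += y ** (a - 1) * (1 - y) ** (c - a - 1) * (1 - x * y) ** (-b)
    return s * h / beta_d(a, c - a, 1)

print(hyp2f1_series(3, 4, 6, 0.1), hyp2f1_euler(3, 4, 6, 0.1))
```

The two values agree to several digits, and $\Gamma_1$, $B_1$ coincide with the ordinary gamma and beta functions.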
The Laplace approximation is a common method for approximating integrals of the form \begin{equation} I=\int_{y\in D}h(y)\exp (-\lambda g(y))dy, \label{laplace} \end{equation} where $D\subseteq \mathbf{R}^{d}$ is an open set and $\lambda $ is a real parameter. If $g(y)$ has a unique minimum over $D$ at point $\hat{y}\in D$, then the Laplace approximation to $I$ is given by \begin{equation} \tilde{I}=(2\pi )^{\frac{d}{2}}\lambda ^{-\frac{d}{2}}|g^{^{\prime \prime }}(\hat{y})|^{-\frac{1}{2}}h(\hat{y})\exp (-\lambda g(\hat{y})), \label{laplace2} \end{equation} where $g^{^{\prime \prime }}(y)=\frac{\partial ^{2}g(y)}{\partial y\partial y^{T}}$ is the Hessian of $g(y)$. The hypergeometric function $~_{2}F_{1}(a,b;c;\mathbf{X})$ depends only on the eigenvalues of the symmetric matrix $\mathbf{X}$. Hence, without loss of generality, it is assumed that $\mathbf{X}=\mathrm{diag}\{x_{1},\cdots ,x_{d}\}$. The following $g$ and $h$ functions are used for (\ref{int_rep}): \begin{equation} \begin{aligned} &g(\mathbf{Y}) = -a\log |\mathbf{Y}| - (c-a) \log |\mathbf{I}_d - \mathbf{Y}| + b\log |\mathbf{I}_d-\mathbf{X}\mathbf{Y}|, \\ &h(\mathbf{Y}) = B_d^{-1}(a,c-a) |\mathbf{Y}|^{-\frac{d+1}{2}} |\mathbf{I}_d-\mathbf{Y}|^{-\frac{d+1}{2}}.
\end{aligned} \label{gh} \end{equation}% Using (\ref{laplace2}) and (\ref{gh}), the Laplace approximation to $% ~_{2}F_{1}(a,b;c;\mathbf{X})$ is given by \cite{Laplace_approx} \begin{equation} \begin{aligned} &~_2\tilde{F}_1(a,b;c;\mathbf{X}) = \frac{2^{\frac{d}{2}} \pi^{\frac{d(d+1)}{4}}}{B_d(a,c-a)} J_{2,1}^{-\frac{1}{2}} \\ & \hspace{1cm}\times\prod_{i=1}^d\{\hat{y}_i^a (1-\hat{y}_i)^{c-a}(1-x_i\hat{y}_i)^{-b}\}, \end{aligned} \label{laplace_approx} \end{equation}% where $\hat{y}_{i}$ is defined as \begin{equation} \hat{y}_{i}=\frac{2a}{\sqrt{\tau ^{2}-4ax_{i}(c-b)}-\tau }, \end{equation}% with $\tau =x_{i}(b-a)-c$, and \begin{equation} J_{2,1}=\prod_{i=1}^{d}\prod_{j=i}^{d}\{a(1-\hat{y}_{i})(1-\hat{y}_{j})+(c-a)% \hat{y}_{i}\hat{y}_{j}-bL_{i}L_{j}\}, \end{equation}% with \begin{equation} L_{i}=\frac{x_{i}\hat{y}_{i}(1-\hat{y}_{i})}{1-x_{i}\hat{y}_{i}}. \end{equation}% The value of $_{2}F_{1}(a,b;c;\mathbf{X})$ at $\mathbf{X}=\mathbf{0}$ is 1, that is, $~_{2}F_{1}(a,b;c;\mathbf{0})=1$. As a result, the Laplace approximation in (\ref{laplace_approx}) is calibrated at $\mathbf{X}=\mathbf{% 0}$ to give the calibrated Laplace approximation \cite{Laplace_approx}: \begin{equation} \begin{aligned} & ~_2\hat{F}_1(a,b;c;\mathbf{X}) = \frac{~_2\tilde{F}_1(a,b;c;\mathbf{X}) }{~_2\tilde{F}_1(a,b;c;\mathbf{0}) } = c^{cd-\frac{d(d+1)}{4}} R_{2,1}^{-\frac{1}{2}} \\ &\hspace{1cm} \times \prod_{i=1}^d \left\lbrace\left(\frac{\hat{y}_i}{a}\right)^a \left(\frac{1-\hat{y}_i}{c-a}\right)^{c-a} (1-x_i\hat{y}_i)^{-b}\right\rbrace, \end{aligned} \label{lablace_calib} \end{equation}% where \begin{equation} \begin{aligned} &R_{2,1} = \prod_{i=1}^d \prod_{j=i}^d \left\lbrace \frac{\hat{y}_i \hat{y}_j}{a} + \frac{(1-\hat{y}_i)(1-\hat{y}_j)}{c-a} \right. \\ & \hspace{2cm} \left. - \frac{b x_ix_j \hat{y}_i \hat{y}_j (1-\hat{y}_i)(1-\hat{y}_j)}{(1-x_i\hat{y}_i)(1-x_j\hat{y}_j)a(c-a)}\right% \rbrace. 
\end{aligned} \end{equation} According to \cite{Laplace_approx}, the relative error of the approximation remains uniformly bounded: \begin{equation} \sup |\log ~_2\hat{F}_1(a,b;c;\mathbf{X}) - \log ~_2F_1(a,b;c;\mathbf{X})| < \infty, \end{equation} where the supremum is over $c\geq c_0 > \frac{d-1}{2}$, $a,b\in \mathbf{R}$, and $0_d\leq \mathbf{X} <(1-\epsilon)I_d$ for any $\epsilon \in (0,1)$. The authors of \cite{Laplace_approx} provide numerical examples showing how well this approximation works. Following the same approach, we show two plots in Fig. \ref{fig_laplace}, which demonstrate very good numerical accuracy for several different setups. As mentioned, the hypergeometric function $~_2F_1(a,b;c;\mathbf{X})$ of matrix argument depends only on the eigenvalues of $\mathbf{X}$. Hence, we fix $\mathbf{X}=\tau I_d$ and plot the exact and approximate values of $~_2F_1(a,b;c;\tau I_d)$ versus $\tau$ (note that $0<\tau<1$ is required for convergence, as mentioned in the definition of $~_2F_1(a,b;c;\mathbf{X})$ in (\ref{Gauss})) in Fig. \ref{fig_laplace}a for $d=5$, $a=3$, $b=4$, and $c=6$. Fig. \ref{fig_laplace}b shows the exact and approximate values of $~_2F_1(a,b;c;\tau I_d)$ versus $c$ for $d=10$, $a=30$, $b=50$, and $\tau=0.01$. The authors of \cite{Laplace_approx} state that even when the integral representation is not valid, that is, when $c-a < \frac{d-1}{2}$, the Laplace approximation still gives good accuracy. Indeed, the approximation in Fig. \ref{fig_laplace}b is accurate over the entire range of $c$, even though the integral representation is not valid for $c<a+\frac{d-1}{2} = 34.5$. We also note that the approximation is more accurate for smaller function values. \begin{figure}[t!]
\centering \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{Laplace_approx_1.eps} \caption{} \end{subfigure} ~ \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=\textwidth]{Laplace_approx_2.eps} \caption{} \end{subfigure} \caption{{\protect\footnotesize Exact values of function $~_2F_1(a,b;c;\tau I_d)$ and its corresponding Laplace approximation $~_2\hat{F}_1(a,b;c;\tau I_d)$ versus: (a) $\tau$, for $d=5$, $a=3$, $b=4$, and $c=6$, (b) $c$, for $d=10$, $a=30$, $b=50$, and $\tau=0.01$.}} \label{fig_laplace} \end{figure} \section*{Acknowledgment} This work was funded in part by Award CCF-1553281 from the National Science Foundation. \vspace{-.1cm} \bibliographystyle{IEEEtran}
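As a practical footnote to Appendix \ref{appendix:Laplace}, the calibrated approximation (\ref{lablace_calib}) takes only a few lines of code for a diagonal argument $\mathbf{X}=\mathrm{diag}\{x_1,\dots,x_d\}$. A hedged pure-Python sketch (the function name is ours, not from \cite{Laplace_approx}):

```python
import math

def hyp2f1_laplace(a, b, c, xs):
    """Calibrated Laplace approximation to the matrix-argument 2F1 at
    X = diag(xs), following (lablace_calib): saddle points y_i, the
    correction factor R_{2,1}, and the calibrating constant c^{cd-d(d+1)/4}."""
    d = len(xs)
    ys = []
    for x in xs:
        tau = x * (b - a) - c
        ys.append(2.0 * a / (math.sqrt(tau * tau - 4.0 * a * x * (c - b)) - tau))
    R = 1.0
    for i in range(d):
        for j in range(i, d):
            yi, yj, xi, xj = ys[i], ys[j], xs[i], xs[j]
            R *= (yi * yj / a
                  + (1 - yi) * (1 - yj) / (c - a)
                  - b * xi * xj * yi * yj * (1 - yi) * (1 - yj)
                    / ((1 - xi * yi) * (1 - xj * yj) * a * (c - a)))
    out = c ** (c * d - d * (d + 1) / 4.0) / math.sqrt(R)
    for x, y in zip(xs, ys):
        out *= (y / a) ** a * ((1 - y) / (c - a)) ** (c - a) * (1 - x * y) ** (-b)
    return out

print(hyp2f1_laplace(3, 4, 6, [0.0]))   # calibrated: exactly 1 at X = 0
print(hyp2f1_laplace(3, 4, 6, [0.1]))   # close to the exact 2F1(3,4;6;0.1)
```

For $d=1$, $a=3$, $b=4$, $c=6$, $x=0.1$ the sketch returns roughly $1.2327$, while the exact series value is about $1.2326$, consistent with the accuracy seen in Fig. \ref{fig_laplace}.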
\section{Introduction} In the Standard Model, the electroweak interaction has both vector ($v$) and axial-vector ($a$) couplings. Measurements of two independent parameters, the ratio of widths, $R_f$, and the parity-violation parameter, $A_f$, at the $Z^0$ resonance probe combinations of these two couplings of the $Z^0$ to fermions, \begin{eqnarray} R_f &=& \frac{\Gamma(Z^0\rightarrow f\bar{f})} {\Gamma(Z^0\rightarrow Hadrons)} = \frac{{v_f}^2+{a_f}^2}{\sum_{i}^{udscb} ({v_i}^2+{a_i}^2) } \nonumber\\ A_f &=& \frac{2{v_f}{a_f}}{{v_f}^2+{a_f}^2}\ . \nonumber \end{eqnarray} The parameter $R_f$ measures the $Zf\bar{f}$-coupling strength compared to other quark flavors, while $A_f$ expresses the extent of parity violation at the $Zf\bar{f}$ vertex. These measurements provide sensitive tests of the Standard Model. The measurements described here are based on a 550k $Z^0$-decay data sample taken in 1993-98 at the Stanford Linear Collider (SLC), with the SLC Large Detector (SLD). A general description of the SLD can be found elsewhere\cite{sld}. Polarized electron beams, a small and stable SLC interaction region, and the excellent CCD vertex detector\cite{vxd3} provide precision electroweak measurements, especially in the heavy-quark sector. \section{Flavor Tagging} The topologically reconstructed mass of the secondary vertex\cite{masstag} is used by many SLD analyses for heavy-quark tagging. To reconstruct secondary vertices, we search 3-dimensional space for points where the track density functions overlap. Only the vertices that are significantly displaced from the primary vertex (PV) are considered to be possible B- or D-hadron decay vertices. The mass of the secondary vertex is calculated using the tracks that are associated with the vertex. Since heavy-hadron decays are frequently accompanied by neutral particles, the reconstructed mass is corrected to account for this fact.
Using kinematic information from the vertex flight path and the momentum sum of the tracks associated with the secondary vertex, we calculate the $P_T$-corrected mass $M_{P_T}$ by adding a minimal amount of missing momentum to the invariant mass. This is done by assuming the true momentum of the heavy hadron is in the direction which minimizes the amount of transverse momentum added to the momentum sum of the tracks associated with the secondary vertex, giving $$M_{P_T} = \sqrt{{M_{VTX}}^2 + {P_T}^2} + |P_T|,$$ where $M_{VTX}$ is the invariant mass of the tracks associated with the reconstructed secondary vertex. In this correction, the vertexing resolution as well as the PV resolution are crucial. Due to the small and stable interaction point at the SLC and the excellent vertexing resolution from the SLD CCD vertex detector, this technique has so far only been successfully applied at the SLD. FIG.~1-a) shows the $P_T$-corrected mass distributions for the data and Monte Carlo predictions. To select the $Z^0\rightarrow b\bar{b}$ events, we apply the cut $M_{P_T}>2\ GeV/c^2$, which provides 98\% purity. \begin{figure}[t] \centerline{\epsfysize 2.8 truein \epsfbox{iwasaki0103fig1.eps} } \vskip 10pt \caption{ a) Distributions of the $P_T$ corrected vertex mass for data (points) and Monte Carlo prediction of b, c and uds. b) Scatter plots of vertex momentum and mass for c (left) and b (right) events. } \end{figure} The charm tag relies on the intermediate-mass region ($0.55\ GeV/c^2<M_{P_T}<2\ GeV/c^2$). Additional separation is provided by a 2-dimensional cut in the momentum-mass plane for the secondary vertex, as shown in FIG.~1-b). The cuts $P_{vtx}>5\ GeV/c$ and $15M_{vtx}-P_{vtx}<10$ provide 70\% purity and 16\% efficiency for $Z^0\rightarrow c\bar{c}$ events. \section{Measurements of $R_b$ and $R_c$} The SLD $R_b$ measurement is based on the double-tag technique\cite{rbprl}.
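Before turning to the double-tag formalism, note that the $P_T$-corrected mass used by both tags above is a one-line computation once the vertex kinematics are known; a minimal sketch (variable names and numbers are illustrative, not SLD values):

```python
import math

def pt_corrected_mass(m_vtx, pt):
    """M_PT = sqrt(M_vtx^2 + P_T^2) + |P_T|, with M_vtx the invariant mass of
    the tracks in the secondary vertex and P_T the minimal missing transverse
    momentum relative to the vertex flight direction."""
    return math.sqrt(m_vtx ** 2 + pt ** 2) + abs(pt)

# a vertex with raw mass 1.8 GeV/c^2 and 0.6 GeV/c of missing p_T
# passes the b-tag cut M_PT > 2 GeV/c^2
print(pt_corrected_mass(1.8, 0.6))
```

With no missing momentum the correction vanishes and $M_{P_T}$ reduces to the raw vertex mass.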
Events are divided into two hemispheres by the plane perpendicular to the thrust axis of the event, and a $b$-tag algorithm is applied to each hemisphere in turn. The fraction of hemispheres tagged as originating from $b$-quarks (single-tag) is given by $$F_s = R_b\epsilon_b + R_c\epsilon_c + (1 - R_c - R_b)\epsilon_{uds}\ ,$$ and the fraction of events with both hemispheres tagged as originating from $b$-quarks (double-tag) is given by $$F_d = R_b({\epsilon_b}^2 + \lambda_b(\epsilon_b-\epsilon_b^2)) + R_c({\epsilon_c}^2 + \lambda_c(\epsilon_c-\epsilon_c^2)) + (1 - R_c - R_b){\epsilon_{uds}}^2.$$ The above two equations are solved for both $R_b$ and the $b$-tag efficiency $\epsilon_b$. The background tagging efficiencies for $uds$- and $c$-hemispheres, $\epsilon_{uds}$ and $\epsilon_c$, as well as the $b$-tag hemisphere correlation $\lambda_b=(\epsilon^{double}_b-\epsilon^2_b)/(\epsilon_b-\epsilon^2_b)$ are estimated from the Monte Carlo. $R_c$ is fixed to its Standard Model value. For the $R_c$ measurement, the double-tag technique is extended to include both charm and bottom tags\cite{SLDRc}. Using equations similar to those above, we add the fraction of hemispheres tagged as originating from $c$-quarks $$G_s = R_b\eta_b + R_c\eta_c + (1 - R_c - R_b)\eta_{uds}\ ,$$ and the fraction of events with both hemispheres tagged as originating from $c$-quarks $$G_d = R_b({\eta_b}^2 + \lambda^{\prime}_b(\eta_b-\eta_b^2)) + R_c({\eta_c}^2 + \lambda^{\prime}_c(\eta_c-\eta_c^2)) + (1 - R_c - R_b){\eta_{uds}}^2.$$ In the $R_c$ measurement, we have one additional fraction of events, in which one hemisphere is tagged as $b$ and the other is tagged as $c$ (mixed-tag): $$M = 2 \left[ R_b\epsilon_b\eta_b + R_c\epsilon_c\eta_c + (1 - R_c - R_b)\epsilon_{uds}\eta_{uds}\right].$$ The last three equations are solved for $R_c$, the $c$-tag efficiency $\eta_c$, and the $b$-tag efficiency $\eta_b$, where the $uds$ efficiency $\eta_{uds}$ and the correlations are taken from the Monte Carlo.
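To make the double-tag idea concrete, the first two equations can be solved numerically for $R_b$ and $\epsilon_b$. A simplified sketch neglecting the hemisphere correlations ($\lambda_b=\lambda_c=0$), with synthetic inputs (the efficiencies below are illustrative, not SLD values):

```python
def solve_double_tag(Fs, Fd, Rc, eff_c, eff_uds, lo=0.05, hi=0.95):
    """Solve Fs = Rb*eb + Rc*ec + (1-Rb-Rc)*euds and
    Fd = Rb*eb^2 + Rc*ec^2 + (1-Rb-Rc)*euds^2 for (Rb, eb),
    neglecting hemisphere correlations; bisection on eb."""
    def rb_of(eb):
        # Rb eliminated from the single-tag equation
        return (Fs - Rc * eff_c - (1.0 - Rc) * eff_uds) / (eb - eff_uds)

    def resid(eb):
        rb = rb_of(eb)
        return Fd - (rb * eb ** 2 + Rc * eff_c ** 2
                     + (1.0 - rb - Rc) * eff_uds ** 2)

    a, b = lo, hi
    for _ in range(200):
        mid = 0.5 * (a + b)
        if resid(a) * resid(mid) <= 0.0:
            b = mid
        else:
            a = mid
    eb = 0.5 * (a + b)
    return rb_of(eb), eb

# synthetic "data" generated from Rb = 0.2159, eb = 0.60
Rb_true, eb_true, Rc, ec, euds = 0.2159, 0.60, 0.1722, 0.010, 0.001
Fs = Rb_true * eb_true + Rc * ec + (1 - Rb_true - Rc) * euds
Fd = Rb_true * eb_true**2 + Rc * ec**2 + (1 - Rb_true - Rc) * euds**2
Rb_hat, eb_hat = solve_double_tag(Fs, Fd, Rc, ec, euds)
print(Rb_hat, eb_hat)
```

The solver recovers the input values, illustrating why the double-tag method needs no Monte Carlo input for $\epsilon_b$ itself.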
$R_b$ and $\epsilon_b$ are known from the first two equations. In general, a high-purity tag is needed for a double-tag measurement. However, the residual background in the $c$-tag sample is mainly $b$'s, and the mixed-tag equation allows us to solve for $\eta_b$ from the data, using the high-purity $b$-tag in the opposite hemisphere. \begin{figure}[ht] \parbox{250pt}{ \vskip 15pt \parbox{225pt}{ \epsfxsize 225pt \epsfysize 3.3in \epsfbox{iwasaki0103fig2_1.eps} } } \parbox{245pt}{ \vskip 15pt \parbox{220pt}{ \epsfxsize 220pt \epsfysize 3.3in \epsfbox{iwasaki0103fig2_2.eps} } } \vskip 18pt \caption{ Comparison of world $R_b$ (left) and $R_c$ (right) measurements. The inner and outer error bars represent statistical and total errors, respectively. } \end{figure} The SLD preliminary results of \begin{eqnarray} R_b &=& 0.2159\pm 0.0014 (\mbox{stat.})\pm 0.0014 (\mbox{syst.}) \nonumber\\ R_c &=& 0.1685 \pm 0.0047 (\mbox{stat.})\pm 0.0043 (\mbox{syst.}), \nonumber \end{eqnarray} are obtained from the 1993-98 winter SLD run (400k $Z^0$) and the 1993-98 whole run (550k $Z^0$), respectively. Both results are in good agreement with the Standard-Model predictions. The largest uncertainties in the $R_b$ and $R_c$ measurements are the detector systematics and the Monte Carlo statistics of the $uds$ background, respectively. FIG.~2 shows the comparison of the preliminary results of the $R_b$ and $R_c$ measurements from the SLD and LEP experiments. \section{$A_c$ measurements} $A_f$ can be extracted by forming the forward-backward asymmetry $$ A_{FB}^f(z) = \frac{\sigma^f(z)-\sigma^f(-z)}{\sigma^f(z)+\sigma^f(-z)} = A_e A_f \frac{2z}{1+z^2},$$ where $z = \cos\theta$, with $\theta$ the angle of the outgoing fermion relative to the incident electron. $A_{FB}$ for quarks depends on both the initial-state $A_e$ and the final-state $A_f$.
At the SLC, the ability to manipulate the longitudinal polarization of the electron beam allows the isolation of the parameter $A_f$ independently of $A_e$, through formation of the left-right forward-backward double asymmetry: $$ \tilde{A}_{FB}^f(z) = \frac{[\sigma^f_L(z)-\sigma^f_L(-z)]-[\sigma^f_R(z)-\sigma^f_R(-z)]} {[\sigma^f_L(z)+\sigma^f_L(-z)]+[\sigma^f_R(z)+\sigma^f_R(-z)]} = |P_e| A_f \frac{2z}{1+z^2},$$ where $P_e$ is the longitudinal polarization of the electron beam. The high polarization of $\sim$77\% at the SLC also provides a large statistical advantage in the sensitivity to $A_f$, a factor of $(P_e/A_e)^2\sim25$ relative to $A_{FB}^f$. In the actual analyses, instead of using the double asymmetry, we extract $A_c$ with an unbinned maximum-likelihood fit based on the Born-level cross section for fermion production in $Z^0$-boson decay. The likelihood function used in the analyses is \begin{eqnarray} \ln{\cal L}= \sum^{n}_{i=1} & \ln & \{ f_c \cdot [(1-P_eA_e)(1+z_i^2)+2(A_e-P_e)z_i \cdot A_c \cdot (1-\Delta_{QCD}^c(z_i))] \nonumber \\ & + & f_b \cdot [(1-P_eA_e)(1+z_i^2)+2(A_e-P_e)z_i\cdot A_b \cdot (1-2\bar{\chi}) \cdot (1-\Delta_{QCD}^b(z_i)) ] \nonumber \\ & + & f_{BG} \cdot [(1+z_i^2)+2A_{BG}z_i] \} \nonumber \end{eqnarray} where $n$ is the total number of candidates; $f_c$, $f_b$, and $f_{BG}$ denote the probabilities that a candidate originates from $c\bar{c}$, $b\bar{b}$, or background, respectively; $\bar{\chi}$ is the $B^0\bar{B^0}$ mixing parameter; and $\Delta_{QCD}^f(z)$ is the $O(\alpha_s)$ QCD correction to the asymmetry. At the SLD, four different techniques are used to measure $A_c$: an inclusive charm-asymmetry measurement with kaon charge and vertex charge, leptons, exclusively reconstructed D* and D mesons, and a new method using inclusive soft pions from D* decays. An inclusive charm tag using the intermediate vertex mass is used to select charm events in a manner similar to the SLD $R_c$ analysis\cite{SLDAcinc}.
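The essence of this fit can be seen in a toy study: generate $z$ values from the single-flavor polarized Born shape and recover $A_c$ by maximizing the log-likelihood. A hedged sketch (pure $c\bar{c}$, no background or QCD corrections, a single beam helicity; all numbers are illustrative):

```python
import math
import random

def generate_z(n, Ac, Ae=0.15, Pe=-0.73, rng=None):
    """Rejection-sample z = cos(theta) from
    f(z) = (1 - Pe*Ae)(1 + z^2) + 2 (Ae - Pe) z * Ac on [-1, 1]."""
    rng = rng or random.Random(12345)
    c0, c1 = 1.0 - Pe * Ae, 2.0 * (Ae - Pe) * Ac
    fmax = 2.0 * c0 + abs(c1)            # the convex shape peaks at z = +-1
    out = []
    while len(out) < n:
        z = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, fmax) < c0 * (1.0 + z * z) + c1 * z:
            out.append(z)
    return out

def fit_Ac(zs, Ae=0.15, Pe=-0.73):
    """Grid-search maximum of the unbinned log-likelihood in Ac."""
    c0 = 1.0 - Pe * Ae
    best, best_ll = 0.0, -float("inf")
    for k in range(101):
        Ac = k * 0.01
        ll = sum(math.log(c0 * (1.0 + z * z) + 2.0 * (Ae - Pe) * z * Ac)
                 for z in zs)
        if ll > best_ll:
            best, best_ll = Ac, ll
    return best

zs = generate_z(5000, Ac=0.67)
ac_hat = fit_Ac(zs)
print(ac_hat)   # should land near the input value 0.67
```

The large $|A_e - P_e|$ coefficient with a polarized beam is exactly what gives the fit its statistical leverage on $A_c$.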
A $b$ veto is also applied to reject any event with a high vertex mass in either hemisphere. For hemispheres with a secondary vertex, a secondary track identified as a $K^\pm$ by the CRID, or a non-zero vertex charge, is used to sign the charm-quark direction. The background is mostly $b$ events, and its fraction is constrained by the double-tag calibration, as in the $R_c$ measurement. The preliminary result from the 1993-98 data sample is $A_c=0.603 \pm0.028(\mbox{stat.}) \pm0.023 (\mbox{syst.})$. This analysis has high statistical power, and the systematic errors are well under control. We also measure the charm asymmetry with the traditional technique using electrons and muons, which not only tag the $c$ events but also determine the $c$-quark direction from the lepton\cite{SLDAclepton}. We obtain the preliminary result $A_c=0.567 \pm 0.051 (\mbox{stat.}) \pm 0.064 (\mbox{syst.})$ from the 1993-98 (muon) and 1993-97 (electron) SLD data. The exclusive reconstruction of charmed mesons provides the cleanest technique for the charm-asymmetry measurements\cite{SLDAcexcl}. We use four decay modes to identify $D^{\ast+}$: the decay $D^{\ast+} \rightarrow \pi_s^+ D^0 $ followed by $D^0 \rightarrow K^- \pi^+$, $D^0 \rightarrow K^- \pi^+ \pi^0$ (satellite resonance), $D^0 \rightarrow K^- \pi^+ \pi^- \pi^+$, or $D^0 \rightarrow K^- l^+ \nu_l$ ($l=$e or $\mu$). We also identify $D^+$ and $D^0$ mesons via the decays $D^+ \rightarrow K^- \pi^+ \pi^+$ and $D^0 \rightarrow K^- \pi^+$ (not from $D^{\ast+}$). In this analysis, we reject $Z^0 \rightarrow b\bar{b}$ events using the $P_T$-corrected mass of the reconstructed vertices: we require that reconstructed vertices have a mass of less than 2.0 GeV/c$^2$. This cut rejects 57\% of $b\bar{b}$ events, with 99\% of the remaining events being $c\bar{c}$. The random-combinatoric background can be estimated from the mass sidebands.
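The sideband estimate amounts to scaling the observed sideband yield by the ratio of the signal-window width to the sideband width, assuming the combinatoric background is locally flat in mass; a schematic sketch with synthetic counts (not SLD data):

```python
def sideband_subtract(n_signal_window, n_sideband, w_signal, w_sideband):
    """Estimate the signal yield by subtracting combinatoric background,
    assumed flat in mass, scaled from the sidebands by window width."""
    scale = w_signal / w_sideband
    background = scale * n_sideband
    return n_signal_window - background, background

# toy: flat background of 10 events per mass bin, 50 true signal events,
# a 10-bin signal window and 20 bins of sideband
sig_est, bkg_est = sideband_subtract(150, 200, 10, 20)
print(sig_est, bkg_est)   # -> 50.0 100.0
```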
The SLD preliminary result from this analysis, using 550k of data from the 1993-98 runs, is $A_c=0.690 \pm 0.042 (\mbox{stat.}) \pm 0.022 (\mbox{syst.})$. A new analysis using inclusive soft pions from D* decays was introduced by SLD in Winter-99. Since the decay $D^{\ast+}\rightarrow D^0 \pi_s$ has a small Q value, $m_{D^\ast} - m_{D^0} - m_{\pi}$ = 6 MeV$/c^2$, the maximum transverse momentum of the $\pi_s$ with respect to the $D^{\ast}$ flight direction is only 40 MeV$/c$. To determine the $D^{\ast}$ direction, charged tracks and neutral clusters are clustered into jets, using an invariant-mass algorithm. We also reject the $b\bar{b}$ background using the $P_T$-corrected-mass information of the reconstructed vertices. The background shape is parameterized by the function $F_{BG}(P_T^2) = a / (1 + bP_T^2 + c(P_T^2)^2)$. FIG.~3 shows the $P_T^2$ distribution for the soft-pion tracks. The region $P_T^2 < 0.01$ (GeV/c)$^2$ is regarded as the signal region, where a signal-to-background ratio of 1:2 is observed. From the 1993-98 SLD data, we obtain the preliminary result $A_c=0.683 \pm 0.052 (\mbox{stat.}) \pm 0.050 (\mbox{syst.})$. The largest systematic uncertainty comes from the choice of the background shape. \parbox{245pt}{ \vskip 15pt \vskip 0.3in \parbox{180pt}{ \epsfxsize 180pt \epsfysize 2.6in \epsfbox{iwasaki0103fig3.eps} } \vskip 0.4in \parbox{220pt}{ {\footnotesize \hspace{10pt} FIG.~3. $P_T^2$ distribution for the soft-pion tracks. The background shape is obtained from the function described in the text. } } } \parbox{245pt}{ \vskip 15pt \parbox{220pt}{ \epsfxsize 220pt \epsfysize 3.3in \epsfbox{iwasaki0103fig4.eps} } \parbox{220pt}{ {\footnotesize \hspace{10pt} FIG.~4. Comparison of world $A_c$ measurements. The inner and outer error bars represent statistical and total errors, respectively.} } } \vskip 20pt FIG.~4.
shows the preliminary results from the SLD and LEP measurements, where the LEP values are derived via $A_c = 4A_{FB}^{0,c}/\left( 3A_e \right)$ using $A_e = 0.1491 \pm 0.0018$ (the combined SLD $A_{LR}$ and LEP $A_{lepton}$ result). The combined preliminary SLD result for $A_c$ is $$A_c = 0.634 \pm 0.027 .$$ \section{CONCLUSION} SLD produces world-class electroweak-parameter measurements in the heavy-quark sector. The SLD measurements of $R_c$ and $A_c$ are now the most precise single measurements in the world. The measured $R_b$, $R_c$ and $A_c$ results are consistent with the Standard Model, and some analyses will improve further when the full 1993-98 SLD data set is included.
\section{Introduction} Empirically, the pairwise constraint is an economical kind of side-information that can be collected easily from user feedback. For a user or annotator, it is more convenient to judge whether two images should be in the same category than to classify or tag them. During the past decades, a series of important advances in the utilization of pairwise constraints has been made. The pairwise constraint is widely used as side-information in metric learning \cite{xing2002distance,kulis2012metric}, where it yields remarkable performance improvements. These methods only pay close attention to the data instances with constraints; the instances without constraints are not adjusted by the side-information. In contrast, an approach of affinity propagation via Gaussian processes is proposed in \cite{lu2008constrained}, and \citeauthor{eaton2010multi} proposed an approach of constraint propagation based on constrained k-Means \shortcite{eaton2010multi}. Borrowing the idea of label diffusion \cite{zhou2004learning}, several diffusion-based constraint propagation algorithms have been proposed, such as Exhaustive and Efficient Constraint Propagation (E$^2$CP) \cite{lu2010constrained} and Symmetric Graph Regularized Constraint Propagation (SRCP) \cite{fu2011symmetric}. Subsequently, \citeauthor{fu2011multi} proposed Multi-Modal Constraint Propagation (MMCP) to extend E$^2$CP to the multi-view setting \shortcite{fu2011multi}. In unsupervised or semi-supervised multi-view problems, a key difficulty is that there is not enough training data to learn the weights of the different views. Since the weight reflects the importance of each view, some previous works proposed that the weights should be small if we have prior knowledge that some views are noisy \cite{kumar2011co,liu2013multi}. In MMCP, the prior probability of each graph is manually set, which means we need to decide the importance of each view by hand.
The manual setting becomes even more difficult when there are many views. Some other previous works, like \cite{wang2009unified,xu2016discriminatively}, regard the weights as variables in their objective function; they combine different views at the view level and can only be solved by iterative updates. Intuitively, it is more reasonable to learn the weights from the robustness of the views. Progress has been made on creating a robust affinity matrix for spectral clustering \cite{pavan2007dominant,premachandran2013consensus,zhu2014constructing}. These methods can be regarded as criteria for judging whether a view is noisy, which makes it possible to obtain some prior knowledge of the views. Therefore, in this paper, we propose a novel method called Consensus Prior Constraint Propagation (CPCP), which learns the unified affinity with constraint propagation from consensus information at the data-instance level. Different from the proposals in earlier works, our work is built on the robustness of the neighborhood of data instances, rather than the robustness of each view. We focus our attention on the probabilities involved in multi-view constraint propagation. We adopt Consensus k-NN \cite{premachandran2013consensus} to produce the conditional probability of each view given a data instance. We then derive the unified transition probability and affinity matrix from this probability. Moreover, we also introduce a method to balance the positive and negative constraints based on the objective function of MMCP, and provide a straightforward way to make use of the result of constraint propagation. The main contributions of our work are: 1) We propose a novel method to derive the importance of each view at the instance level from consensus information. 2) We introduce a straightforward way to mitigate the imbalance between the must-link and the cannot-link in the constraint propagation.
3) Our approach generates a unified affinity matrix as its result, rather than adjusting the affinity matrix of each view with the propagation result. \section{Consensus Prior Constraint Propagation} In this section, we first briefly review the framework of MMCP and the Consensus k-NN algorithm, followed by the details of our Consensus Prior Constraint Propagation. \subsection{Multi-Modal Constraint Propagation} Given a data set of instances $\mathcal{U} = \{u_1,\dots,u_n \}$, each data instance is denoted by $u_i$. Assume that there are $S$ different views in our multi-view data set; hence we have $S$ graphs $G_s = (\mathcal{U},W_s)$, $s=1,\dots,S$, where $ W_s$ is the affinity matrix of view $s$, built from the k-NN neighborhood with a Gaussian kernel \cite{belkin2001laplacian}. The set $\mathcal{M} = \{(u_i,u_j)\}$ denotes the positive pairwise constraints, i.e., $u_i$ and $u_j$ should be in the same class (must-link). $\mathcal{C} = \{(u_i,u_j)\}$ denotes the set of negative pairwise constraints (cannot-link). Taking the pairwise constraints into consideration, we build a side-information matrix $Y$, \begin{equation} Y_{i,j} = \begin{cases} 1, \qquad&(u_i,u_j)\in \mathcal{M};\\ -1, &(u_i,u_j)\in \mathcal{C};\\ 0, &\text{otherwise}. \end{cases} \label{eqY} \end{equation} With respect to each graph $G_s$, we build the diagonal matrix $D_s$ from $W_s$, with $D_{i,i,s} = \sum_j W_{i,j,s}$, as in \cite{chung1997spectral}. Then the transition probability on $G_s$ is \begin{equation} P(u_i\rightarrow u_j|G_s) = P(u_j|u_i,G_s) = \frac{W_{i,j,s}}{D_{i,i,s}} \label{eqmmcp_trans} \end{equation} and the probability of $u_i$ on $G_s$ is \begin{equation} P(u_i|G_s) = \frac{D_{i,i,s}}{\sum_iD_{i,i,s}}.
\label{eqPulg} \end{equation} With $P(G_s)$, the manually decided prior probability of graph $G_s$, we can compute the unified transition probability $P(u_i\rightarrow u_j)$ and the probability of $u_i$, namely $P(u_i)$. Define a matrix $\hat{L}$: \begin{equation} \hat{L} = \Pi - \frac{\Pi P+P^T\Pi}{2} \label{} \end{equation} where $\Pi$ is a diagonal probability matrix whose $i$-th diagonal element is $P(u_i)$ and $P$ is the unified transition matrix with elements $P(u_j|u_i)$. Let $F_v$ denote the result of the vertical pairwise constraint propagation. The optimization problem of vertical propagation is \begin{equation} \mathop{\mathrm{min}}_{F_v}\;\frac{1}{2}\eta \mathrm{tr}((F_v-Y)^T\Pi (F_v-Y))+\frac{1}{2}\mathrm{tr}(F_v^T\hat{L}F_v). \label{eqmmcp} \end{equation} Differentiating and setting the derivative to zero yields the closed-form result of vertical propagation, \begin{equation} F_v = \eta(\eta \Pi+\hat{L})^{-1}\Pi Y. \label{} \end{equation} The result of the horizontal propagation is similar. By combining the results of vertical and horizontal propagation, we attain the final result of the constraint propagation \begin{equation} F = \eta^2(\eta\Pi+\hat{L})^{-1}\Pi Y\Pi(\eta\Pi+\hat{L})^{-1}. \label{} \end{equation} \subsection{Consensus k-NN} The Consensus k-NN algorithm proposed in \cite{premachandran2013consensus} collects the consensus information of multiple rounds of k-NN neighborhoods to provide a criterion for neighborhood selection. If a pair of nodes $u_i$ and $u_j$ keeps appearing among the k-NN neighborhoods of many other nodes, the chance of these two nodes being similar is much higher. In contrast, if the distance between $u_i$ and $u_j$ is quite short but they never appear together among the k-NN neighborhoods of other nodes, the edge is more likely to be noise. Algorithm \ref{al_consknn} details the construction of the consensus matrix $C$ in Consensus k-NN, which also uses a threshold $\tau$.
If $C_{i,j}>\tau$, then $u_i$ and $u_j$ are included in each other's neighborhood sets. \begin{algorithm}[t] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \caption{Consensus matrix of Consensus k-NN} \label{al_consknn} \begin{algorithmic}[1] \STATE $C = 0$; \FOR {$i = 1:N$} \FORALL {$u_j, u_k \text{ such that } u_j, u_k \in \text{ k-NN}(u_i)$} \STATE $C_{j,k} = C_{j,k}+1$; \STATE $C_{k,j} = C_{k,j}+1$; \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Consensus Prior Knowledge} The Consensus k-NN used in our approach differs slightly from the algorithm proposed in \cite{premachandran2013consensus}. We use Consensus k-NN to prune noisy edges from the existing affinity matrix, rather than to build new neighborhood sets. We set \begin{equation} W^{cons}_{i,j} = \begin{cases} 0,\quad\qquad\qquad&\text{if }C_{i,j}<\tau;\\ W^{dense}_{i,j},&\text{otherwise,} \end{cases} \label{eqcons} \end{equation} where $W^{dense}$ is a dense affinity matrix built from the k-NN neighborhood and $W^{cons}$ is the Consensus k-NN affinity matrix. Since the affinity matrix we build from the k-NN neighborhood is relatively dense, as we will elaborate in the following paragraphs, the algorithm we use can generate an approximate result more efficiently. In CPCP, the conditional probability of each graph, $P(G_s|u_i)$, is generated from the consensus information, and most of the other probabilities are derived from it. Since the prior knowledge of the views is derived from the consensus, we call it consensus prior knowledge. As the diagram in Fig. \ref{fig:diag1} shows, the unified transition probability over multiple views can be attained once we obtain the pivotal probability $P(G_s|u_i)$. \begin{figure}[t] \centering \includegraphics[width = 0.8\columnwidth]{diag1.pdf} \caption{A diagram of transition probability of multiple views.
The horizontal arrows indicate the conditional probability of each graph and the vertical arrows indicate the internal transition probability of each view. In this example we have $P(u_4\rightarrow u_3) = P(u_3|u_4,G_1)P(G_1|u_4)+P(u_3|u_4,G_2)P(G_2|u_4)$.} \label{fig:diag1} \end{figure} With respect to each graph $G_s$, we construct the Consensus k-NN affinity matrix $W^{cons}_s$ according to Eq. \ref{eqcons}. Subsequently we define the consistency of each data instance in different views by the KL-divergence: \begin{equation} c_s(i) = \sum_j W^{dense}_{i,j,s}\;\text{ln}\frac{W^{dense}_{i,j,s}}{W^{cons}_{i,j,s}} \label{eqconsistency} \end{equation} where $W^{dense}_{i,j,s}$ is the k-NN neighborhood affinity between data instances $u_i$ and $u_j$ in graph $G_s$ and $W^{cons}_{i,j,s}$ is the corresponding Consensus k-NN affinity. Both $W_s^{dense}$ and $W_s^{cons}$ are normalized so that each row sums to one, making each row a probability distribution. Here the consistency $c_s(i)$ is a metric of the robustness of the relationship between data instance $u_i$ and its neighborhood in graph $G_s$: the smaller the consistency, the more stable the relationship. Because of the sparsity of the affinity matrices, two essential tricks are used in our experiments. One is that the k-NN neighborhood affinities $W^{dense}_s$ are relatively dense matrices with $k = \#sample/\#cluster$; this setting is also necessary for the probabilities defined below. The other is that we add a minute quantity (such as $10^{-8}$) to each element of $W_s^{dense}$ and $W_s^{cons}$ to avoid zeros in the discrete probability distributions. Once we have the consistency, we can obtain the conditional probability of graph $G_s$ given $u_i$ as follows: \begin{equation} P^\dagger (G_s|u_i) = \frac{(c_s(i)+1)^{-1}}{\sum_s (c_s(i)+1)^{-1}} \label{eqPglu} \end{equation} From Eq.
\ref{eqPglu} we can see that, for a given view $s$, a smaller consistency implies a greater conditional probability of graph $G_s$. The conditional probability $P^\dagger(u_i|G_s)$ is calculated in the same way as Eq. \ref{eqPulg}. Note that the superscript $\dagger$ of $P^\dagger (G_s|u_i)$ and $P^\dagger (u_i|G_s)$ indicates that these conditional probabilities are not proper; they are pseudo-conditional probability distributions. This is because there may not exist $P(u_i)$, $P(G_s)$ and $P(u_i,G_s)$ that generate these two conditional distributions. In general, $P^\dagger(G|u)$ and $P^\dagger(u|G)$ do not satisfy the consistency constraint that a pair of conditional distributions must obey, which makes the two distributions illegal. Under the condition that there is no zero in these discrete conditional probability distributions, we have the following proposition. \begin{prop} Define a quotient matrix $Q$, where $Q_{i,s} = \frac{P^\dagger(u_i|G_s)}{P^\dagger(G_s|u_i)}$. $P^\dagger(G|u)$ and $P^\dagger(u|G)$ are legal conditional probability distributions if and only if the quotient matrix $Q$ has rank 1 and $\sum_s\frac{1}{\sum_i Q_{i,s}} = 1$. \label{prop1} \end{prop} \begin{proof} $(\Rightarrow)$:\quad Since $P^\dagger(G|u)$ and $P^\dagger(u|G)$ are legal conditional probability distributions, we can always find corresponding probability distributions $P(u)$, $P(G)$ and $P(u, G)$. Therefore we have \begin{equation} Q_{i,s} = \frac{P^\dagger(u_i|G_s)}{P^\dagger(G_s|u_i)} = \frac{P(u_i)}{P(G_s)}. \label{eqq} \end{equation} Given row $i$ of the matrix $Q$, row $j$ can be determined by multiplying row $i$ by $\frac{P(u_j)}{P(u_i)}$, so $Q$ has rank 1. Under the condition Eq.
\ref{eqq}, we also have \begin{equation} \sum_s\frac{1}{\sum_i Q_{i,s}} = \sum_s\frac{1}{\sum_i \frac{P(u_i)}{P(G_s)}} = \sum_s P(G_s) = 1 \label{} \end{equation} $(\Leftarrow)$:\quad Assume $Q$ is a rank 1 matrix; since its entries are positive, $Q$ can be factorized as the outer product of two positive vectors, $Q = mn^T$. The factorization gives unique $m$ and $n$ if we set $\sum_i m_i = 1$. Substituting $Q_{i,s} = m_in_s$ into the equation $\sum_s\frac{1}{\sum_i Q_{i,s}} = 1$ leads to $\sum_s\frac{1}{n_s} = 1$. Considering the properties of $m$ and $n$, we can define two marginal probability distributions: \begin{equation} P(u_i) = m_i, \quad P(G_s) = \frac{1}{n_s} \label{} \end{equation} Now we need to prove there exists a joint probability distribution $P(u, G)$ that satisfies $P(u, G) = P^\dagger(G|u)P(u) = P^\dagger(u|G)P(G)$. $P^\dagger(G|u)$ and $P(u)$ are two probability distributions, thus their product is a probability distribution; similarly $P^\dagger(u|G)P(G)$ is a probability distribution. Hence our goal is to prove that $P^\dagger(G_s|u_i)P(u_i) = P^\dagger(u_i|G_s)P(G_s)$ holds for all $i$ and $s$. Making use of $Q_{i,s} = m_in_s$, we obtain \begin{equation} \begin{split} &\frac{P^\dagger(u_i|G_s)}{P^\dagger(G_s|u_i)} = Q_{i,s} = m_in_s\\ \Rightarrow &\frac{P^\dagger(u_i|G_s)}{n_s} = P^\dagger(G_s|u_i)m_i\\ \Rightarrow &P^\dagger(u_i|G_s)P(G_s) = P^\dagger(G_s|u_i)P(u_i) \end{split} \label{} \end{equation} which shows that $P^\dagger(G|u)$ and $P^\dagger(u|G)$ are a pair of legal conditional probability distributions. \end{proof} With Proposition \ref{prop1}, if we want $P^\dagger(G|u)$ and $P^\dagger(u|G)$ to be legal conditional distributions and to derive $P(u_i)$ and $P(G_s)$ from them, we need the quotient matrix $Q$ to have rank 1. Here we ignore the condition $\sum_s\frac{1}{\sum_i Q_{i,s}} = 1$, which can be satisfied by scaling. Generally, $Q$ will not be a rank 1 matrix.
However, if $P^\dagger(G|u)$ and $P^\dagger(u|G)$ are close to the true values, $Q$ will be close to a rank 1 matrix. In particular, we apply singular value decomposition to the matrix $Q$ to obtain a rank-1 approximation $\hat{Q}$. From the new quotient matrix $\hat{Q}$, only $P(u_i)$ and $P(G_s)$ can be derived directly; we rebuild $P(G|u)$, $P(u|G)$ and the corresponding $P(u, G)$ under the constraint given by $\hat{Q}$. With the following optimization problem, we solve for proper $P(G|u)$ and $P(u|G)$ that are as similar to $P^\dagger(G|u)$ and $P^\dagger(u|G)$ as possible. \begin{equation} \begin{split} \mathop{\mathrm{min}}_{P(u_i|G_j), P(G_j|u_i)} & \quad \frac{1}{2} \alpha \sum_{i,j}(P(u_i|G_j)-P^\dagger(u_i|G_j))^2 \\ &+ \frac{1}{2} \beta \sum_{i,j}(P(G_j|u_i)-P^\dagger(G_j|u_i))^2\\ \mathrm{s.t.}\quad \quad \quad&\quad \frac{P(u_i|G_j)}{P(G_j|u_i)} = \hat{Q}_{i,j}\\ \end{split} \label{eqOpt} \end{equation} We remove the normalization constraints $\sum_{i}P(u_i|G_j) = 1$ and $\sum_{j}P(G_j|u_i) = 1$ in Eq.~\ref{eqOpt}. With such a relaxation, the optimization problem can be solved efficiently element by element, and the normalization is performed after the optimization. In order to balance the two parts of the objective function, we set $\alpha = \frac{1}{\sum_{i,j}P(u_i|G_j)^2}$ and $\beta = \frac{1}{\sum_{i,j}P(G_j|u_i)^2}$. Solving this optimization problem yields a closed-form solution. Finally we obtain the unified affinity matrix via Eq. \ref{eqmmcp_trans} by \begin{equation} W_{i,j} = P(u_i,u_j) = P(u_i)\sum_s P(u_j|u_i, G_s)P(G_s|u_i) \end{equation} and the matrix is then sparsified to a k-NN neighborhood. \subsection{Balance Between Must-link and Cannot-link} Here, we separate the constraint matrix $Y$ into two parts, $Y = Y_++Y_-$, with $Y_+$ holding the positive elements and $Y_-$ the negative ones.
Then we have \begin{equation} \begin{split} &\mathrm{tr}((F_v - Y )^T\Pi(F_v - Y))\\ =\; & \mathrm{tr}(F_v^T\Pi F_v+Y_+^T\Pi Y_++Y_-^T\Pi Y_- -2F_v^T\Pi Y_+ \\ &- 2F_v^T\Pi Y_- + 2Y_+^T\Pi Y_- )\\ \end{split} \label{fpif} \end{equation} Since $Y_+^T\Pi Y_- = \bf{0}$ in Eq. \ref{fpif}, Eq. \ref{fpif} equals \begin{equation} \begin{split} \;&\mathrm{tr}((F_v-Y_+)^T\Pi(F_v-Y_+))\\ &+\mathrm{tr}((F_v-Y_-)^T\Pi(F_v-Y_-))-\mathrm{tr}(F_v^T\Pi F_v) \end{split} \label{fpif2} \end{equation} Having separated the constraints into two parts, we can weight the positive part with a parameter based on the ratio of cannot-links to must-links, $\alpha = \sqrt{\#negative/\#positive}$. Our objective function is obtained by substituting the weighted constraints into Eq. \ref{eqmmcp}, giving \begin{equation} \begin{split} \mathop{\mathrm{min}}_{F_v}\quad&\frac{1}{2}\alpha\eta \mathrm{tr}((F_v-Y_+)^T\Pi(F_v-Y_+))\\ &+\frac{1}{2}\eta \mathrm{tr}((F_v-Y_-)^T\Pi(F_v-Y_-))\\ &+\frac{1}{2}\mathrm{tr}(F_v^T(L-\eta \Pi)F_v) \end{split} \label{obj2} \end{equation} where $L$ is different from $\hat{L}$ in Eq. \ref{eqmmcp}: here $L = \Pi-W$. By differentiating the function in Eq.~\ref{obj2} with respect to $F_v$ and setting it to zero, as \cite{fu2011multi} did, we obtain the matrix $F_v$ after vertical propagation on multiple graphs \begin{equation} \begin{split} & F_v = \eta(L+\alpha\eta\Pi)^{-1}\Pi(\alpha Y_++Y_-) \end{split} \label{} \end{equation} By combining the results of vertical and horizontal propagation, the final result of the constraint propagation is \begin{equation} \begin{split} & F = \eta^2(L+\alpha\eta\Pi)^{-1}\Pi(\alpha Y_++Y_-)\Pi(L+\alpha\eta\Pi)^{-1} \end{split} \label{} \end{equation} \subsection{Affinity with Constraint Propagation} Most of the previous works employ the constraint propagation result $F$ to refine the k-NN neighborhood affinity matrix.
The adjustment of the affinity matrix leads to a sparse matrix as the result of these approaches. Different from them, there is no adjustment in our approach: we directly generate the final affinity matrix from $F$ itself. The affinity with constraint propagation, $W^*$, is built with the sigmoid activation of $F$, \begin{equation} W^*_{i,j} = \begin{cases} \frac{1}{1+\text{exp}(-F_{i,j}/\sigma)} \qquad &\text{if }F_{i,j}>0;\\ 0 &\text{otherwise} \end{cases} \label{} \end{equation} where $\sigma$ is the average magnitude of the elements in $F$. Unlike in the other methods, $W^*$ is not a sparse matrix; the number of zeros in $W^*$ is related to the ratio of the positive constraints to the negative ones. Finally, we employ spectral clustering \cite{von2007tutorial} on the affinity $W^*$ to form the clusters. \begin{table}[t] \caption{Description of data sets} \label{tab_data} \centering \begin{tabular}{c l l} \hline View & Corel 5k & PASCAL VOC'07\\ \hline 1& Lab (4096) & Lab (4096) \\ 2& DenseSift (1000) & DenseSift (1000) \\ 3& annot (260)& tags (804) \\ 4& Hsv (4096) & Hsv (4096) \\ 5& Gist (512) & Gist (512) \\ 6& RgbV3H1 (5184) & RgbV3H1 (5184) \\ 7& HarrisHueV3H1 (300) & HarrisHueV3H1 (300) \\ 8& HsvV3H1 (5184) & HsvV3H1 (5184) \\ \hline Images &4999 & 9963 \\ Classes &50 & 20 \\ \hline \end{tabular} \end{table} \section{Experimental Results} In this section, we conduct several experiments to demonstrate the performance of the proposed approach, CPCP, on two benchmark data sets. We compare our CPCP algorithm with some state-of-the-art methods. The clustering results of Multi-Modal Constraint Propagation (MMCP) \cite{fu2011multi} and of the single-source method Exhaustive and Efficient Constraint Propagation (E$^2$CP) \cite{lu2010constrained} are also reported in this section. Moreover, we use Normalized Cuts \cite{shi2000normalized} with no pairwise constraints as the baseline method in the evaluation of clustering performance.
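The sigmoid mapping from the propagation result $F$ to the affinity $W^*$ described above can be sketched as follows (a minimal illustration; $\sigma$ is taken as the mean absolute value of the entries of $F$, as in the text):

```python
import numpy as np

def affinity_from_propagation(F):
    """Map the propagated constraint matrix F to the affinity W*:
    a sigmoid of F/sigma for positive entries, zero elsewhere, where
    sigma is the average magnitude of the elements of F."""
    sigma = np.mean(np.abs(F))  # assumes F is not identically zero
    return np.where(F > 0, 1.0 / (1.0 + np.exp(-F / sigma)), 0.0)
```

Positive entries map into $(0.5, 1)$ and all others to zero, so the fraction of zeros in $W^*$ tracks the sign ratio of the propagated constraints, as noted in the text.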
Some more recent multi-view constraint propagation methods, Unified Constraint Propagation (UCP) \cite{lu2013unified} and Multi-Source Constraint Propagation (MSCP) \cite{lu2013exhaustive}, also achieve satisfactory performance, but these methods are designed specifically for the case of two views and are difficult to extend to more views; hence they are not included in our experiments. \subsection{Data Sets} We consider two benchmark image data sets with textual descriptions, which have been used in previous work. {\bf Corel 5k}. This data set is an important benchmark which has been widely used in many tasks. It contains 50 classes with approximately 5000 images. Each image is annotated with several keywords from a dictionary of 260 words. {\bf PASCAL VOC'07}. This data set \cite{pascal-voc-2007} contains 20 different object categories and around 10000 images. All of the images of PASCAL VOC'07 are annotated with one or more categories. To ease reproduction of the experimental results and direct comparison, we employ the publicly available INRIA features \cite{guillaumin2009tagprop} for these two data sets instead of extracting features ourselves. Tab. \ref{tab_data} summarizes the subset of features that we used for the clustering performance evaluation in our experiments.
\begin{figure}[t] \centering \subfigure[Corel 5k]{ \includegraphics[width = 0.47\columnwidth]{corel5k_nmi.pdf} \label{fig:3viewnmi1} } \subfigure[PASCAL VOC'07]{ \includegraphics[width = 0.47\columnwidth]{voc07_nmi.pdf} \label{fig:3viewnmi2} } \caption{Clustering result on three views of Corel 5k and PASCAL VOC'07 with different number of constraints.} \label{fig:3viewnmi} \end{figure} \begin{figure}[t] \centering \subfigure[Corel 5k]{ \includegraphics[width = 0.47\columnwidth]{corel5k_8_nmi.pdf} \label{fig:8viewnmi1} } \subfigure[PASCAL VOC'07]{ \includegraphics[width = 0.47\columnwidth]{voc07_8_nmi.pdf} \label{fig:8viewnmi2} } \caption{Clustering result on eight views of Corel 5k and PASCAL VOC'07 with different number of constraints.} \label{fig:8viewnmi} \end{figure} \subsection{Experiment Setup} In the clustering evaluation experiments, we use one textual feature and seven other image features from the INRIA features. All the image features are normalized to zero mean and scaled to $[-1,1]$. The affinity matrices of the image features are computed with a Gaussian kernel, and the affinity of the textual feature is produced from the cosine distance. It is worth noting that about one-third of the images in PASCAL VOC'07 do not have any tags; thus, there would be around 3000 zero rows in the affinity matrix of this view. To deal with such blank features, we add some random noise, which is minute and can be ignored, to every dimension of these data instances.
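The affinity construction just described can be sketched as follows (the bandwidth and noise scale are illustrative choices, not the exact values used in the experiments):

```python
import numpy as np

def gaussian_affinity(X, sigma=1.0):
    """Gaussian-kernel affinity from pairwise Euclidean distances."""
    sq = np.sum(X * X, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def cosine_affinity(T, noise=1e-8, seed=0):
    """Cosine-similarity affinity for the textual view; all-zero rows
    (images without tags) receive minute random noise so that their
    norms are nonzero."""
    T = np.asarray(T, dtype=float).copy()
    blank = np.linalg.norm(T, axis=1) == 0
    rng = np.random.default_rng(seed)
    T[blank] = noise * rng.standard_normal((int(blank.sum()), T.shape[1]))
    U = T / np.linalg.norm(T, axis=1, keepdims=True)
    return U @ U.T
```

In the actual pipeline these matrices would additionally be sparsified to the k-NN neighborhood before propagation.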
\begin{table*}[t] \addtolength{\tabcolsep}{3pt} \caption{Clustering performance (NMI) of E$^2$CP and CPCP on Corel 5k and PASCAL VOC'07} \label{tab_e2cp} \centering \begin{tabular}{|l| c| c| c| c| c| c| } \hline \multirow{2}{100pt}{\centering Method} &\multicolumn{3}{ |c }{Corel 5k} &\multicolumn{3}{ |c |}{PASCAL VOC'07} \\ \cline{2-7} & Avg & Max & Min & Avg & Max & Min \\ \hline E$^2$CP-Lab & 0.3510 & 0.3573 & 0.3460 & 0.0929 & 0.0966 & 0.0860 \\ E$^2$CP-DenseSift & 0.2701 & 0.2742 & 0.2661 & 0.2237 & 0.2267 & 0.2212 \\ E$^2$CP-annot/tags & 0.6153 & 0.6259 & 0.6051 &{\it 0.4533}&{\it 0.4650}&{\it 0.4364} \\ E$^2$CP-Hsv & 0.3409 & 0.3459 & 0.3321 & 0.0790 & 0.0811 & 0.0750 \\ E$^2$CP-Gist & 0.2150 & 0.2198 & 0.2115 & 0.1622 & 0.1726 & 0.1582 \\ E$^2$CP-RgbV3H1 & 0.3111 & 0.3162 & 0.3054 & 0.1037 & 0.1060 & 0.1009 \\ E$^2$CP-HarrisHueV3H1 & 0.2608 & 0.2652 & 0.2513 & 0.0992 & 0.1008 & 0.0966 \\ E$^2$CP-HsvV3H1 & 0.3415 & 0.3491 & 0.3351 & 0.0910 & 0.0947 & 0.0860 \\ \hline CPCP-3Views &{\bf 0.6892}&{\bf 0.6954}&{\bf 0.6845}&{\bf 0.4804}&{\bf 0.4823}&{\bf 0.4725}\\ CPCP-8Views &{\it 0.6358}&{\it 0.6400}&{\it 0.6301}& 0.3973 & 0.4009 & 0.3925 \\ \hline \end{tabular} \end{table*} In order to evaluate the clustering performance of these methods, we adopt the Normalized Mutual Information (NMI) as the measure to compare the clustering result with the given ground-truth. As mentioned above, PASCAL VOC'07 is a multi-label data set, in which many images have more than one category in the ground-truth. The multi-label ground-truth makes it difficult to find a matching between the clustering result and the ground-truth. To deal with this difficulty, we copy the multi-label data instances and separate their labels. Concretely, assume a data instance $u_i$ with three labels $A, B \text{ and } C$, which can be expressed as a pair $(u_i, \{A, B, C\})$.
We separate this data-label pair into three distinct pairs $(u_i, A), (u_i, B) \text{ and }(u_i, C)$ to generate a new single-label ground-truth. Similarly, we make two extra copies of the corresponding clustering result. For instance, if the clustering result is $(u_i, A)$, we regard it as three identical pairs $(u_i, A), (u_i, A) \text{ and } (u_i, A)$ in the new clustering result. In this case, one of the three copies matches the ground-truth, so the accuracy contribution of this instance is 1/3. Note that with this trick the accuracy or NMI will be less than 1 even if every data instance is clustered correctly; we therefore divide the evaluation result by the ideal score to normalize it to $[0,1]$. We call this method multi-label augmentation. In our experiments, we impose a fixed parameter selection criterion, because there is no validation set in clustering tasks. We set the size of the k-NN neighborhood to $k = \text{Round}(\text{log}_2(n/c))$, where $n$ is the number of data instances and $c$ is the number of classes. The propagation parameter is $\eta = 0.25$, and the embedding dimension in the spectral clustering is $c+1$. \subsection{Clustering Performance Evaluation} In this subsection we report the performance of CPCP and the comparison methods in clustering experiments. The clustering results on 3 views are shown in Fig. \ref{fig:3viewnmi}. In this experiment, we use the first three views listed in Tab. \ref{tab_data} for both data sets, including two image features and one textual feature. We implement two versions of Multi-Modal Constraint Propagation in this 3-views experiment. One is MMCP, in which we set the prior graph probabilities of the 3 views to 0.2, 0.05 and 0.75, as proposed by the authors of MMCP \cite{fu2011multi}. Since in practice it is difficult to decide the importance of a view manually, the other version is MMCP with same weights (MMCP-SW), in which we assign every view the same importance, i.e., each prior probability is 1/3.
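The multi-label augmentation described above, replicating each instance once per ground-truth label and copying its predicted cluster accordingly, can be sketched as follows (a hypothetical helper, not the paper's actual code):

```python
def augment_multilabel(pred, truth):
    """Expand multi-label ground truth into single-label pairs and
    replicate the predicted cluster label for each copy, so that a
    standard single-label score (accuracy, NMI) can be computed."""
    y_pred, y_true = [], []
    for cluster, labels in zip(pred, truth):
        for label in labels:  # one copy of the instance per label
            y_pred.append(cluster)
            y_true.append(label)
    return y_pred, y_true
```

An instance predicted as $A$ with ground-truth labels $\{A, B, C\}$ contributes one correct and two incorrect copies, giving the accuracy contribution of 1/3 mentioned above; the final score is then divided by the ideal score to renormalize to $[0,1]$.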
Since our CPCP generates a unified affinity graph after the propagation, the experiments in this section only consider the unified affinity graph. In MMCP, there is a unified graph into which we can incorporate the propagation result $F$. As for E$^2$CP, we apply the constraint propagation to each view and fuse the affinity graphs after the propagation by linear combination. The number of pairwise constraints used for constrained clustering ranges from 0.01\% to 0.08\% of the total number of pairwise constraints in both data sets. From Fig. \ref{fig:3viewnmi} we can see that our approach CPCP significantly outperforms the other methods on both data sets in this 3-views experiment. The clustering results on 8 views are shown in Fig. \ref{fig:8viewnmi}. In this experiment we use all the views listed in Table \ref{tab_data}. The details of the 8-views experiment are highly similar to the 3-views case. One difference is that, with 8 views, it is almost impossible to decide the importance of each view by hand; therefore in this experiment there is only one version of MMCP, in which every view has the same prior graph probability 1/8. From the figure we can see that the clustering task becomes harder as the number of views increases; meanwhile, the advantage of our approach is more obvious than in the 3-views case. We also compare our approach CPCP with constraint propagation on a single view. As Tab. \ref{tab_e2cp} shows, we run E$^2$CP on each of the eight views with 20000 pairwise constraints. CPCP-3Views gives the clustering results of CPCP on the first three views, which are also shown in Fig. \ref{fig:3viewnmi}; similarly, CPCP-8Views gives the clustering results of the 8-views case. The {\it annot} and {\it tags} features are two different names of the textual feature in Corel 5k and PASCAL VOC'07, respectively, so we write them in the same row. We can see that in both data sets the textual feature has the best performance.
In contrast, the results of the PASCAL VOC'07 features other than the textual one are not satisfactory. In practice, we do not know in advance which view provides a satisfactory representation. This makes the results of multi-view clustering worse as we keep increasing the number of views, since views without discriminative representations act as noise. That is why our CPCP has the best performance with 3 views, but with 8 views only the second best performance on Corel 5k and the third best on PASCAL VOC'07. \begin{figure}[t] \centering \includegraphics[width = 0.8\columnwidth]{corel5k_1_10_nmi.pdf} \caption{Clustering results on Corel 5k data set with different number of views.} \label{fig:corel5k_1_10} \end{figure} \subsection{View Selection} Because of its ability to weight view importance in the multi-view problem, the consensus information of CPCP can serve as guidance for view selection. With the consensus information, the marginal probability $P(G_s)$ of each graph can be generated immediately. After sorting the marginal probabilities, we can eliminate the view with the smallest probability. Fig. \ref{fig:corel5k_1_10} shows the clustering results when CPCP is applied to different numbers of views. In this experiment, we select 10 views as the initialization ({\it annot, Lab, DenseSift, Hsv, Gist, RgbV3H1, HarrisSiftV3H1, HsvV3H1, HarrisSift, DenseSiftV3H1}). Following the strategy above, we eliminate one view at a time, keeping the views which contribute more to the clustering. As the figure demonstrates, the NMI increases rapidly when the first view is removed, and the clustering result improves continuously as we remove views one by one. We obtain the best performance with four views; if we keep decreasing the number of views, the NMI goes down.
Although there is no criterion that gives the proper number of views, it is possible to eliminate the worst views, which would otherwise corrupt the clustering performance, by considering consensus information. \section{Conclusions} In this paper, we present a novel multi-view constraint propagation approach, called Consensus Prior Constraint Propagation. In our method, the unified affinity matrix after constraint propagation is produced from the consensus information of each data instance, and the imbalance between positive and negative constraints is resolved. Extensive experiments demonstrate the superiority of the proposed method CPCP. \bibliographystyle{aaai}
\ifCLASSOPTIONcaptionsoff \newpage \fi \def\bibfont{\footnotesize} \bibliographystyle{IEEEtran} \section{Appendix} \label{sec:Appendix} \subsection{Calculation of the steady state probabilities in \eqref{eq:steady_state_prob_chain_nr_msgs_underway}} \label{sec:appendix_proof_steady_state_prob_fwd} We consider the queueing model from Sect.~\ref{sec:underlying_model} with the corresponding Markov chain as sketched in Fig.~\ref{fig:MC_fwd}. In the following we prove the formulation of the steady state probabilities $p_n$ of the given Markov chain by induction. We first show that the formulation holds for $p_0$ and $p_1$. Then we show that, given that the formulation holds for all $p_k$ with $k<n$, it also holds for $p_n$. From the balance equations we can write $p_0\lambda = \sum_{i=1}^{I_{\max}}p_i\mu$. From the normalization condition $\sum_{i=0}^{I_{\max}}p_i=1$ we obtain $p_0 = \frac{\mu}{\lambda+\mu}$ as $\sum_{i=1}^{I_{\max}}p_i =1-p_0$. For $p_1$ we can write $p_1(\lambda+\mu) = p_0\lambda + \sum_{i=2}^{I_{\max}}p_i\mu$. Using $p_0$ and the normalization condition this reduces to $p_1\lambda = p_0 \lambda + \mu(1-p_0-p_1)-p_1\mu$, which leads to $p_1 = \frac{2\lambda\mu}{(\lambda+\mu)(\lambda+2\mu)}$. Now, considering \eqref{eq:steady_state_prob_chain_nr_msgs_underway} we directly see that it holds for $n=0$ and $n=1$.
For a state $k$ of the given Markov chain we can write using the balance equations \begin{align} p_k(\lambda+k\mu) &= p_{k-1}\lambda+\mu\sum_{i=k+1}^{I_{\max}}p_i \nonumber\\ &= p_{k-1}\lambda+\mu(1- \sum_{i=0}^{k}p_i) \,, \label{eq:steady_state_prob_fwd_general_recursion} \end{align} which we can rewrite as \begin{align} &p_k(\lambda+(k+1)\mu) \nonumber\\ &= p_{k-1}(\lambda-\mu)+\mu - \mu \sum_{i=0}^{k-2}p_i \nonumber\\ &= \frac{k\lambda^{k-1}\mu(\lambda-\mu)}{\prod_{j=1}^{k}(\lambda+j\mu)} + \mu - \mu \sum_{i=0}^{k-2}\frac{(i+1)\lambda^i\mu}{\prod_{j=1}^{i+1}(\lambda+j\mu)}\nonumber\\ &=\frac{k\lambda^{k-1}\mu(\lambda-\mu)\Gamma\left(\frac{\lambda+\mu}{\mu}\right)}{\mu^k\Gamma\left(\frac{\lambda+(k+1)\mu}{\mu}\right)} + \mu\nonumber\\ & - \mu\Gamma\left(\frac{\lambda+\mu}{\mu}\right)\left(\frac{\lambda+\mu}{\mu\Gamma\left(\frac{\lambda+2\mu}{\mu}\right)} - \frac{\lambda^{k-1}(k\mu+\lambda)}{\mu^k\Gamma\left(\frac{\lambda+(k+1)\mu}{\mu}\right)}\right) \,, \label{eq:steady_state_prob_fwd_general_recursion2} \end{align} where we used the identity $\prod_{j=1}^{k}(\lambda+j\mu) = \frac{\mu^k\Gamma\left(\frac{\lambda+(k+1)\mu}{\mu}\right)}{\Gamma\left(\frac{\lambda+\mu}{\mu}\right)}$. Now, we can further simplify \eqref{eq:steady_prob_fwd_general_recursion2} using an instance of this identity, namely $\Gamma\left(\frac{\lambda+2\mu}{\mu}\right) = \frac{\lambda+\mu}{\mu} \Gamma\left(\frac{\lambda+\mu}{\mu}\right)$. Finally, through rearranging terms we obtain \begin{align} p_k = \frac{(k+1) \lambda^k\mu \Gamma\left(\frac{\lambda+\mu}{\mu}\right)}{\mu^{k+1}\Gamma\left(\frac{\lambda+(k+2)\mu}{\mu}\right)} \,, \label{eq:steady_state_prob_fwd_final_result} \end{align} which equals the product form in \eqref{eq:steady_state_prob_chain_nr_msgs_underway} and completes the proof. Calculating $p_k$ for $k = I_{\max}$ follows analogously using the balance equation as shown above.
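Both the Gamma-function identity used above and the claim that the product form in \eqref{eq:steady_state_prob_chain_nr_msgs_underway} solves the balance equations can be checked numerically. The following is a small verification sketch; the parameter values are arbitrary:

```python
import math

lam, mu, I_max = 1.7, 0.6, 12

# Identity: prod_{j=1}^{k} (lam + j*mu)
#           = mu^k * Gamma((lam+(k+1)mu)/mu) / Gamma((lam+mu)/mu)
for k in range(1, I_max + 1):
    prod = math.prod(lam + j * mu for j in range(1, k + 1))
    gamma_form = (mu**k * math.gamma((lam + (k + 1) * mu) / mu)
                  / math.gamma((lam + mu) / mu))
    assert abs(prod - gamma_form) < 1e-9 * prod

# Closed-form stationary distribution p_n of the chain.
p = [(n + 1) * lam**n * mu / math.prod(lam + j * mu for j in range(1, n + 2))
     for n in range(I_max)]
p.append(lam**I_max / math.prod(lam + j * mu for j in range(1, I_max + 1)))

assert abs(sum(p) - 1.0) < 1e-12                       # normalization
assert abs(p[0] * lam - mu * sum(p[1:])) < 1e-12       # balance at state 0
for n in range(1, I_max):                              # interior balance
    lhs = p[n] * (lam + n * mu)
    rhs = p[n - 1] * lam + mu * sum(p[n + 1:])
    assert abs(lhs - rhs) < 1e-12
assert abs(p[I_max] * I_max * mu - p[I_max - 1] * lam) < 1e-12  # state I_max
```

All assertions pass for any positive rates, mirroring the induction argument above on concrete numbers.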
\subsection{Calculation of the transition rates of the reversed process \eqref{eq:lambda_prime}, \eqref{eq:mu_prime}} The transition rates for the reversed Markov process are obtained directly from \cite[Theorem 1.12]{Kelly:Reversibility-2011} as $Q'(i,j) = \frac{p_j Q(j,i)}{p_i}, i,j \in E$, with the same steady state distribution $p_n$, $n\in E$. Now, given the forward Markov process with transition rates in \eqref{eq:transition_rates_fwd_process} (as sketched in Fig.~\ref{fig:MC_fwd}), we obtain the following transition rates for the reversed process: $Q'_{i,i-1} :=\lambda_{i}'$ for $i=1...I_{\max}$, $Q'_{i,j} :=\mu_{ij}'$ for $i=0...I_{\max}-1,i<j\leq I_{\max}$ and $Q'_{i,j} :=0$ otherwise. Hence, we obtain from \eqref{eq:transition_rates_fwd_process} and \eqref{eq:steady_state_prob_chain_nr_msgs_underway} for $i<I_{\max}$ \begin{align}\label{eq:derivation_lambda_prime} \lambda_i' &= \lambda\frac{p_{i-1}}{p_{i}} = \lambda \frac{i\lambda^{i-1}\mu}{\prod_{j=1}^{i}\left(\lambda+j\mu\right)} \frac{\prod_{j=1}^{i+1}\left(\lambda+j\mu\right)}{\left(i+1\right)\lambda^i\mu}\nonumber\\ &= \frac{i}{i+1}\left(\lambda+(i+1)\mu\right) \,. \end{align} For $i=I_{\max}$ the derivation proceeds analogously and yields $\lambda_{I_{\max}}' = I_{\max}\mu$. Similarly, for $i<j<I_{\max}$ we obtain \begin{align}\label{eq:derivation_mu_prime} \mu_{ij}' &= \mu\frac{p_{j}}{p_{i}} = \mu \frac{(j+1)\lambda^{j}\mu}{\prod_{k=1}^{j+1}\left(\lambda+k\mu\right)} \frac{\prod_{k=1}^{i+1}\left(\lambda+k\mu\right)}{\left(i+1\right)\lambda^i\mu}\nonumber\\ &= \frac{(j+1) \mu \lambda^{j-i}}{(i+1)\prod\limits_{k=i+2}^{j+1}\left(\lambda + k\mu\right)} \,. \end{align} Again, for $j=I_{\max}$ the derivation proceeds similarly. \subsection{On the numerical calculation of the LSTs in \eqref{eq:joint_LST_x_t}} \label{app-inv} For completeness, we show in the following an alternative method to the direct calculation of the conditional LST in \eqref{eq:joint_LST_x_t} that we used in this paper.
We underline that the following numerical computation may be beneficial in speeding up computations, especially for evaluation purposes. The direct computation of \eqref{eq:joint_LST_x_t} by computing the conditional LST vectors \eqref{eq:LST_of_kth_departure_given_state_n_rest_of_recursion} through recursion and matrix inversions, as well as the LST vector in \eqref{eq:LST_of_next_arrival_given_state_nprime_matrix}, and the subsequent insertion of the vector components $f_{n,k}$ and $\tilde{f}_{n}$ into \eqref{eq:joint_LST_x_t} becomes computationally intensive when $I_{\max}$ is large. The reason for this is the computation of the matrix inverse $\mt{\Phi^{-1}}$ in \eqref{eq:LST_of_1st_departure_given_state_n_initial_condition} as well as its exponentiation in form of $\mt{\Psi^{k-1}}$ in \eqref{eq:LST_of_kth_departure_given_state_n_rest_of_recursion}. Next we discuss an alternative numerical method to compute the quantities in \eqref{eq:LST_of_1st_departure_given_state_n_initial_condition} - \eqref{eq:LST_of_kth_departure_given_state_n_rest_of_recursion}. First, we recognize that $\mt{\Phi^{-1} = (\theta I + D - M)^{-1}}$ used in \eqref{eq:LST_of_1st_departure_given_state_n_initial_condition} can be written as a fraction using the adjugate matrix formula, i.e., \begin{equation}\label{eq:invphi_fraction} \mt{\Phi^{-1}} = \frac{\mt{R}(\theta)}{\rho(\theta)} \,, \end{equation} where $\mt{R}(\theta)$ is a polynomial with matrix coefficients given by \begin{equation}\label{eq:R_polynomial} \mt{R}(\theta) = \sum_{k=0}^{I_{\max}}\theta^k\mt{P}_k \,, \end{equation} and the denominator is the characteristic polynomial of $\mt{M-D}$, i.e. \begin{equation}\label{eq:denom_rho} \rho(\theta):=\sum_{k=0}^{I_{\max}+1}r_k\theta^k = \det\mt{(\Phi)} \,. \end{equation} Also observe that $r_0 = \det\mt{(D-M)}$ and that $r_{I_{\max}+1} = 1$. Note that the matrices $\mt{P}_k$ are of dimension $(I_{\max}+1) \times (I_{\max}+1)$. Next, we obtain $\mt{P}_k$ iteratively using LeVerrier's method.
In a nutshell, we plug \eqref{eq:invphi_fraction} into the identity $\mt{\Phi^{-1} \Phi = I}$ and rearrange the terms to obtain \begin{equation}\label{eq:detthetaM} \sum_{k=1}^{I_{\max}+1} \theta^k \mt{P}_{k-1} + \sum_{k=0}^{I_{\max}}\theta^k \mt{P}_k \mt{(D-M)} = \left(\sum_{k=0}^{I_{\max}+1}r_k\theta^k\right)\mt{ I } \,. \end{equation} Now, we can compare the coefficients of $\theta^k$ on both sides of \eqref{eq:detthetaM} and obtain the recursive form for the matrices $\mt{P}_k$ as \begin{equation}\label{eq:Pkrecursive} \mt{P}_k= r_{k+1}\mt{I} - \mt{P}_{k+1}\mt{(D-M)} \,, \end{equation} for $k\in\{I_{\max}-1,\ldots,1\}$. From the comparison of the coefficients in \eqref{eq:detthetaM} we know that $\mt{P}_0= \left(\det(\mt{D-M})\right)(\mt{D-M})^{-1}$ and $\mt{P}_{I_{\max}}=\mt{I}$, such that we can iteratively find the matrices $\mt{P}_k$ using \eqref{eq:Pkrecursive} and hence calculate the coefficients of $\mt{R(\theta)}$. Now, given that we calculate $\mt{\Phi^{-1}}$ using the method above, we can use this result to simplify the matrix multiplication in $\mt{\Psi^{k}}$ as $\mt{\Psi= \Phi^{-1}\Lambda}$. Hence, we can write \begin{equation}\label{eq:psipowerk} \mt{\Psi}^k = \frac{\tilde{\mt{R}}(\theta)^k}{\rho(\theta)^k} \,, \end{equation} where we used the polynomial $\tilde{\mt{R}}(\theta)$ that is defined as \begin{equation}\label{eq:Rtildepolynomial} \tilde{\mt{R}}(\theta) = \sum_{k=0}^{I_{\max}}\theta^k\mt{\tilde{P}_k} \,, \end{equation} with $\mt{\tilde{P}_k = P_k \Lambda}$. Now calculating the denominator of \eqref{eq:psipowerk} is simple as $\mt{\Phi}$ is triangular and its determinant is obtained in closed form as \begin{equation}\label{eq:det_phi_closed_form} \det\mt{(\Phi)} = \prod_{i=0}^{I_{\max}} \left(\theta+\sum_{j}Q'_{i,j}\right) \,.
\end{equation} As $\mt{\tilde{R}}(\theta)^k$ is a product of polynomials with matrix coefficients, we calculate the numerator in \eqref{eq:psipowerk} using an iterative convolution operation on the coefficients $\mt{\tilde{P}_k}$. Now, given the calculation method above, we can numerically obtain $\mt{\Psi^{k}}$ for insertion in \eqref{eq:LST_of_kth_departure_given_state_n_rest_of_recursion}. Note that the same procedure can be used to obtain the elements $\tilde{f}_{n}(\nu)$ in \eqref{eq:joint_LST_x_t} by numerically calculating the inversion in \eqref{eq:LST_of_next_arrival_given_state_nprime_matrix}. Finally, obtaining the Palm joint density $f^\circ(t_1,x_0)$ entails taking the inverse Laplace-Stieltjes transform of the right hand side (RHS) of \eqref{eq:joint_LST_x_t}. Given the factorization of $\rho(\theta)$ and the matrix coefficient form of the polynomial $\mt{R(\theta) = \sum_{k=0}^{I_{\max}}\theta^k\mt{P}_k}$, we observe that $f_{n',n+1}(\theta)$ on the RHS of \eqref{eq:joint_LST_x_t} can be rewritten as $\sum_i\frac{\alpha_i}{(\theta+d'_i)^{\kappa_i}}$ with constants $\alpha_i$ and $\kappa_i\leq n$ due to the partial fraction decomposition of \eqref{eq:psipowerk}. Hence, the inverse LST of $f_{n',n+1}(\theta)$ has the form $\sum_i c_i x_0^{\kappa_i} e^{-d'_i x_0}$ with constants $c_i$. The same observation holds for $\tilde{f}_{n}(\nu)$ in \eqref{eq:joint_LST_x_t}, i.e., by calculating the inversion of \eqref{eq:LST_of_next_arrival_given_state_nprime_matrix} using the method above we finally obtain a partial fraction decomposition and a subsequent inverse LST that has the form $\sum_j h_j t_1^{\varsigma_j} e^{-\tilde{d}_j t_1}$.
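The coefficient-comparison recursion \eqref{eq:Pkrecursive} can be implemented and checked against a direct matrix inversion. The following sketch uses NumPy and substitutes a generic well-conditioned random matrix for $\mt{D-M}$; the function name is illustrative:

```python
import numpy as np

def resolvent_polynomials(A):
    """Return matrices P_k and scalars r_k with
    (theta*I + A)^{-1} = sum_k theta^k P_k / sum_k theta^k r_k,
    via the coefficient-comparison recursion P_k = r_{k+1} I - P_{k+1} A."""
    n = A.shape[0]
    # Coefficients of det(theta*I + A), i.e. the characteristic polynomial
    # of -A, reordered so that r[k] multiplies theta^k.
    r = np.poly(-A)[::-1]
    P = [np.zeros((n, n))] * n
    P[n - 1] = np.eye(n)                 # highest matrix coefficient is I
    for k in range(n - 2, -1, -1):
        P[k] = r[k + 1] * np.eye(n) - P[k + 1] @ A
    return P, r

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)) + 6.0 * np.eye(6)   # stands in for D - M

P, r = resolvent_polynomials(A)
theta = 0.5
numerator = sum(theta**k * Pk for k, Pk in enumerate(P))
rho = sum(theta**k * rk for k, rk in enumerate(r))
assert np.allclose(numerator / rho, np.linalg.inv(theta * np.eye(6) + A))
```

The diagonal shift merely keeps $\theta\mt{I}+\mt{A}$ well conditioned for the check; the recursion itself is the one derived from \eqref{eq:detthetaM}.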
\section{Conclusion} \label{sec:conclusion} In this paper, we considered the problem of computing the distribution of the Age of Information (AoI) at any point in time for non-preemptive, non-FIFO systems. Our key observation is that this networked system can be modeled as a batch queueing system where the served batch size is random and the sojourn time of the freshest message in the batch corresponds to the age of an arriving informative message. The batch (except for its freshest message) essentially models the set of messages that are generated before the freshest message and hence are rendered obsolete by its arrival. This captures message reordering due to the non-FIFO system property. Equipped with this queueing model, we use Palm calculus together with time inversion to decompose the elements that form the joint distribution of the age and the time between the arrivals of informative messages at the receiver. Then, Palm inversion allows us to compute the distribution of the age at any point in time given this joint distribution. We find recursions for the corresponding Laplace-Stieltjes transforms of the conditional age and informative message inter-arrival time distributions owing to the Markovian nature of the underlying model. As these transforms turn out to be rational, we obtain a computable expression for the AoI distribution composed of matrix-exponential terms. This main result further allows us to deduce formulations for the expected age, as well as the age distribution at the arrival time points of informative messages. We validate the exact model using discrete-event simulations and show the skewness of the PDFs of the age. Further, we show the impact of the arrival and service rates on the age CDF and its quantiles. We leave the extension of this model to multi-stage queueing networks to future work.
\section{Evaluation} \label{sec:evaluation} In this section, we compare the obtained expressions for the age distributions to results from empirical discrete event simulations. We consider the system as described in Sect.~\ref{sec:palm_model} with messages arriving as a Poisson process with rate $\lambda$, where each message observes a service time sampled from an exponential distribution with parameter $\mu$. The system simulation results are obtained from simulation runs over $10^5$ messages and we set the number of non-obsolete messages under way to $I_{\max}=20$. Figure~\ref{fig:ccdf_age_at_inform_2} shows the age distribution at the arrival time points of informative messages. The dashed curve is obtained from the model \eqref{eq:expectation_test_fct_age} (with test function $\varphi$ being the identity function). The figure also shows the empirical age distribution obtained from simulations. We observe that the two distributions match very well, and we see the impact of the average service rate $\mu$ of one message on the tail of the age distribution. Fig.~\ref{fig:pdf_age_at_anytime} shows the probability density of the age at any point in time that is obtained from \eqref{eq:f_age_fct_of_pdf_at_arrival} using the Laplace inverse of \eqref{eq:joint_LST_x_t}. Observe the skewness of the density function. This shows that approximations based on the first few (two) moments, e.g. obtained based on works that calculate the moments of the age distribution~\cite{Yates:MDS:TINT20}, will be inaccurate. Figure~\ref{fig:mean_age_anytime_I_20} shows a comparison of the expected age at any point in time as a function of the message arrival rate $\lambda$. The expected age that is obtained from the model is computed using the density in \eqref{eq:f_age_fct_of_pdf_at_arrival} in closed form. To empirically obtain the average age from the event based simulation we utilize the Palm inversion formula \eqref{eq:palm_inversion_formula} with $\varphi$ set as the identity function.
Hence we can write \begin{align} &\E\left[X_t\right] = \mu \bar{N} \E^\circ\left[\int_{T_0}^{T_1}X_s ds \right] = \mu \bar{N} \E^\circ\left[\int_{0}^{T_1}(A_0+s) ~ds \right] \nonumber \\ &= \mu \bar{N} \E^\circ\left[A_0(T_1-T_0) + \frac{1}{2} (T_1-T_0)^2 \right] \label{eq:plam_expectation} \end{align} where $A_0$ is the age of the informative message received at time $T_0$. The estimate of $\E\left[X_t\right]$ obtained from one simulation run is \begin{equation} \frac{\mu \hat{N}}{n_{\mathrm{tot}}-1} \sum_{n=1}^{n_{\mathrm{tot}}-1} \left( A_n(T_{n+1}-T_n)+\frac12 (T_{n+1}-T_n)^2 \right) \end{equation} where $A_n$ [resp. $T_n$] is the age upon delivery at the receiver [resp. delivery time] of the $n$th informative message, $n_{\mathrm{tot}}$ is the total number of informative messages delivered in the simulation run, and $\hat{N}$ is the time-average number of messages in the channel; the factor $\frac{1}{n_{\mathrm{tot}}-1}$ estimates the Palm expectation in \eqref{eq:plam_expectation} as a sample average over the delivery intervals. Here too, the comparison with the empirically obtained average age from \eqref{eq:plam_expectation} shows a close match. Note that the empirically obtained average age still requires invoking the Palm inversion formula \eqref{eq:palm_inversion_formula} as given in \eqref{eq:plam_expectation}. We observe in Fig.~\ref{fig:mean_age_anytime_I_20} that the expected age decreases monotonically with the message arrival rate $\lambda$. This stands in line with similar age models with finite message capacity that assume, however, FIFO message delivery, such as \cite{Kam16}. In Fig.~\ref{fig:quantile_age_anytime_I_20} we show the quantiles of the age at any point in time based on the age probability density in \eqref{eq:f_age_fct_of_pdf_at_arrival}. These quantiles can be utilized for system dimensioning by providing operating points, in terms of setting the service rate $\mu$ or throttling the message generation rate $\lambda$, to retain a corresponding age quantile $x_{\varepsilon}$ that is only violated with probability $P[X>x_{\varepsilon}]=\varepsilon$.
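The empirical estimator of the average age can be sketched as a small helper; the function name is illustrative. It averages over the delivery intervals to estimate the Palm expectation and uses $\mu\hat{N}$ as the estimate of the delivery intensity $\hat{\lambda}$:

```python
def mean_age_estimate(ages, times, mu, N_hat):
    """Estimate E[X_t] from one simulated trace via Palm inversion.
    ages[n]  : age A_n upon delivery of the n-th informative message
    times[n] : delivery time T_n of the n-th informative message
    N_hat    : time-average number of messages in the channel"""
    K = len(times) - 1                   # number of delivery intervals
    acc = sum(ages[n] * (times[n + 1] - times[n])
              + 0.5 * (times[n + 1] - times[n]) ** 2
              for n in range(K))
    lam_hat = mu * N_hat                 # estimated delivery intensity
    return lam_hat * acc / K             # sample-average Palm inversion

# Sanity check: periodic deliveries every d=1 with constant age a=2 at each
# delivery imply a time-average age of a + d/2 = 2.5 when lam_hat = 1/d = 1.
est = mean_age_estimate([2.0] * 6, [0, 1, 2, 3, 4, 5], mu=1.0, N_hat=1.0)
```

The deterministic sanity check recovers the textbook value $a + d/2$, which is what the jump-and-drift sample path yields between two deliveries.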
\section{Introduction} Cyber-physical systems (CPS) constitute a type of hybrid system that combines physical processes and computation~\cite{Lee08:CPS}. Often, the considered physical processes, such as those arising in chemical plants or platoons of automated vehicles, are controlled via feedback loops. Due to this sensor-computation-actuator feedback loop, CPS are characterized by the mutual interaction of the physical process, the software computation, and essentially, the network. While CPS encompass diverse key interactions worth modeling accurately, such as control correctness and concurrency, we are concerned in the following with characterizing the timeliness of sensor data when received at the controller. This is the first step to ensure that the actions taken by the controller, and hence executed by the actuator, are based on fresh information. A key metric to express this timeliness of sensor data at the controller is the Age of Information (AoI)~\cite{Kaul12}, which has recently been a vibrant object of study~\cite{Yates:MDS:TINT20,Bedewy17}. AoI is a semi-continuous function that denotes the age of the sensor (sender) status at the controller (receiver). The status age is hence best described by a \emph{jump-and-drift} process that grows linearly with time and jumps downwards at the time points when informative messages arrive at the receiver. An informative message is defined as a message containing an update that was generated after the generation time point of the last received update at the controller. Now, the time points at which messages arrive at the receiver, as well as the timestamps contained in these messages, are random and essentially dependent on the generation and transmission of messages at the sender and at every network node on the path from the sender to the receiver.
Figure~\ref{fig:age_sample_path} shows a sketch of this scenario where a newer message\footnote{in terms of the generation time point} ($m_2$) overtakes an older message ($m_1$). Note that the age at the receiver does not jump downwards at the reception of the outdated message $m_1$. One research direction to optimize the timeliness of sensor information in CPS is through advancing state-of-the-art physical-layer techniques, such as deterministically reserved transmission time slots over all available frequencies as in low-latency 5G network slices known as URLLC~\cite{Popovski:19}. While this eliminates contention on the wireless link in 5G, data packet interactions and sporadic network congestion still occur on the end-to-end path between the sensors and the controller. Research on the topic of AoI has been characterized by the analysis of mathematical models that capture the stochastic process of the age at the receiver given a combination of ingredients, i.e., (i) the process of data generation and transmission at the source, and (ii) a model of the network interactions such as traffic scheduling and the variability of the link transmission rate. The prevalent approach in many works on AoI is to capture these ingredients in the form of a queueing system (or a series thereof) that naturally captures the former and models the latter through the service process. The arrival process is often considered a Poisson process for tractability~\cite{Kam13,Kam16:ToIT} or a periodic process to capture simple sensor device implementations~\cite{Fidler21:AoI}. The variety of queueing models ranges from simple M/M/1 queues with FIFO service to preemptive Last-Generated, First-Served (LGFS) systems. A notable difficulty of some AoI models is due to the lack of FIFO service. Allowing messages to overtake each other leads to considerable complexity, as shown in the basic example in Figure~\ref{fig:age_sample_path}.
A direct approach to model AoI systems with non-FIFO service is presented in \cite{Yates:multiple_source:TINT19} using the Stochastic-Hybrid-System (SHS) technique. This technique essentially depends on the Fokker-Planck partial differential equation (PDE) satisfied by the time-dependent probability density of the AoI, as shown in \cite{Yates:MDS:TINT20}, and quickly becomes intractable. For a comprehensive overview we point the reader to \cite{YatesSBKMU21a}. The \textbf{key differences} of this work to the works in \cite{Kam16:ToIT,Yates18,YatesSBKMU21a} are: (i) Instead of considering the time variant PDE of the density of the age, we reduce the problem to a simple model where we are interested in the stationary distribution of the age in a system where message overtaking is allowed. (ii) We obtain the distribution of the age using Palm calculus and time inversion, where we essentially require knowing the joint distribution of the age when a message arrives and the time until the next message arrives. Note that the obtained inversion formula applies to all types of arrivals in this model, but we apply it to the informative messages to obtain the age density at the receiver. This model naturally captures the distribution of functions of the age. (iii) Due to the mathematical tools used, our results only require \emph{stationarity} of the underlying queueing model, which is a Markov process on a discrete state space and can thus be analyzed with elementary techniques. (iv) The model considered in this paper is different from \cite{Kam16:ToIT,Yates18} as we consider a window flow controlled sender that injects at most a fixed number of non-obsolete messages into the network channel. We denote this model as $M/M/I_{\max}/I_{\max}^*$. Our contributions in this paper are summarized as follows: \begin{itemize} \item We use Palm calculus and time inversion to derive the probability distribution of the age of information in a stationary $M/M/I_{\max}/I_{\max}^*$ system.
\item We calculate the Laplace-Stieltjes transform of the distribution of the age at the arrival time points of informative messages as well as at any point in time. \end{itemize} \section{A Palm Calculus approach to the AoI} \label{sec:palm_model} \subsection{The Underlying Queuing Model} \label{sec:underlying_model} First we consider a continuous time Markov jump process $\{Z_t\}_{t\ge0}$ that models the $M/M/I_{\max}/I_{\max}^*$ queue described in the previous section. Let $Z_t$ represent the number of messages underway from the sender to the receiver, for $t\in \mathbb{R}^+$, with $Z_t\in E=\{0,1, ..., I_{\max}\}$. Recall that, by our modelling assumption, this counts only informative messages. At any time $t$ such that $Z_t=n>0$, and for $i \in\{1, ..., n\}$, we call the $i$th message the message with the $i$th smallest timestamp among all messages present in the channel. When the sender generates a new message at time $t$ (which occurs at constant rate $\lambda$), if $Z_{t^-}<I_{\max}$ then the message is accepted in the channel and $Z_t$ is incremented by $1$, i.e. $Z_{t^+}=Z_{t^-}+1$; else, i.e. if $Z_{t^-}=I_{\max}$, the message is discarded and $Z_t$ is unchanged. Consider now message departures from the channel. Whenever $Z_t=n>0$, each of the $n$ messages in the channel can leave the channel with the same rate $\mu$; thus the rate of message departure is $n\mu$ and all messages are equally likely to leave the channel. Assume that a departure occurs at time $t$ and $Z_{t^-}=n>0$. For $i\in \{1...n\}$, the probability that the departing message is the $i$th message is $\frac1n$. In this case, $Z_t$ is decremented by $i$, i.e. $Z_{t^+}=Z_{t^-}-i$; in other words, the transition $n\to n-i$ occurs at rate $\mu$ for every $i\in \{1...n\}$.
Thus $Z_t$ is a continuous-time Markov chain with finite state space $E$ and with transition rates (Fig.~\ref{fig:MC_fwd}): \vspace{-10pt} \begin{eqnarray} Q_{i,i+1}&=\lambda, & i=0...I_{\max}-1\nonumber\\ Q_{i,j}&=\mu, & i=1...I_{\max}, 0\leq j \leq i-1\nonumber\\ Q_{i,j}&=0,& \mbox{otherwise.} \label{eq:transition_rates_fwd_process} \end{eqnarray} \vspace{-10pt} Observe that $Z_t$ can be regarded as the number of messages in a FIFO queue with Poisson arrivals of rate $\lambda$ and drained using a batch service process. It is ergodic as the state space is finite and the chain is irreducible. Using the balance equations, the steady state probabilities $p_n$ can be computed and are given by: \begin{equation} p_n = \begin{cases} \frac{(n+1) \lambda^n\mu}{\prod\limits_{j=1}^{n+1}\left(\lambda + j\mu\right)} &\text{for $0\leq n < I_{\max}$}\\ \\ \frac{\lambda^n}{\prod\limits_{j=1}^{n}\left(\lambda + j\mu\right)}&\text{for $n = I_{\max}$} \end{cases} \label{eq:steady_state_prob_chain_nr_msgs_underway} \end{equation} The derivation of \eqref{eq:steady_state_prob_chain_nr_msgs_underway} is given in Appendix~\ref{sec:appendix_proof_steady_state_prob_fwd}. \begin{figure} \centering \includegraphics[width=\linewidth]{./gfx/MC_forward} \caption{State transition diagram of the Markov chain $Z_t$ representing the number of non-obsolete messages underway.} \label{fig:MC_fwd} \end{figure} \subsection{A Palm Calculus Approach} In the following we use Palm calculus to compute the stationary distribution of the age. To this end, we assume that the continuous time Markov chain $Z_t$ is in its unique stationary regime, which, since it is ergodic, occurs in practice if the system has been operating for a long time. With Palm calculus, we are able to relate the stationary distribution of the age to quantities that are computed for the Markov chain $Z_t$.
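As a numerical cross-check of the transition rates \eqref{eq:transition_rates_fwd_process} and the closed form \eqref{eq:steady_state_prob_chain_nr_msgs_underway}, one can construct the generator of $Z_t$ and solve for the stationary distribution directly; a small sketch with arbitrary rates:

```python
import math
import numpy as np

def generator(lam, mu, I_max):
    """Generator of Z_t: i -> i+1 at rate lam, i -> j at rate mu
    for every 0 <= j < i; the diagonal is minus the row sums."""
    n = I_max + 1
    Q = np.zeros((n, n))
    for i in range(I_max):
        Q[i, i + 1] = lam
    for i in range(1, n):
        Q[i, :i] = mu
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

lam, mu, I_max = 2.0, 0.7, 10
Q = generator(lam, mu, I_max)

# Solve pi Q = 0 with sum(pi) = 1 via least squares.
A = np.vstack([Q.T, np.ones(I_max + 1)])
b = np.zeros(I_max + 2); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Closed form p_n from the text.
p = [(n + 1) * lam**n * mu / math.prod(lam + j * mu for j in range(1, n + 2))
     for n in range(I_max)]
p.append(lam**I_max / math.prod(lam + j * mu for j in range(1, I_max + 1)))
assert np.allclose(pi, p)

# Intensity of informative deliveries: lambda_hat = mu * N_bar.
lam_hat = mu * sum(i * pi[i] for i in range(I_max + 1))
```

The last line computes the departure intensity $\hat{\lambda}=\mu\bar{N}$, which reappears in the Palm inversion formula used below.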
Palm calculus \cite{baccelli2012palm,serfozo2009basics,le2010performance} applies to a stationary point process $T_n$ ($n \in \mathbb{Z}$) and an observable (random) process $X_t$ ($t\in\mathbb{R}$) that are jointly stationary. Here we take for $T_n$ the sequence of times at which a departure occurs from the $M/M/I_{\max}/I_{\max}^*$ queue $Z_t$ (i.e. when $Z_t$ is decremented, which also corresponds to arrivals of informative messages at the receiver). Since we assume $Z_t$ is in its stationary regime, this point process is also stationary. In the context of Palm calculus, it is customary to assume that the numbering convention is such that $T_0\leq 0 < T_1$. The observable $X_t$ is the age of information at the receiver, as defined earlier. Note that $X_t$ can be computed in a deterministic way from the trajectory $Z_{(-\infty,t]}$ and is invariant with respect to change of time origins, therefore it is jointly stationary with $Z_t$, hence with $T_n$ \cite[Section 7.2.1]{le2010performance}. Also note that Palm calculus does not require the point process to be Poisson (the arrival process is Poisson by definition, but it can be seen that the departure process is not). \begin{figure} \centering \includegraphics[width=\linewidth]{./gfx/system_model2} \caption{Our system model assumes that messages are transmitted upon generation and take random iid one way delay to reach the receiver. Hence the system model assumes for every message an independent channel each with exponentially distributed service time with identical parameter $\mu$. We assume a network channel (as sketched in Fig.~\ref{fig:cps_scenario}) that is constrained by a finite number of informative messages under way denoted by $I_{\max}$. 
This assumption corresponds to a window flow constrained sender with a maximum number of outstanding informative messages $I_{\max}$ given a perfect reverse channel.} \label{fig:system_model} \end{figure} We can now apply Palm's inversion formula \cite[Theorem 7.1]{le2010performance}, which states that, for any bounded, measurable test function $\varphi$ we have \vspace{-5pt} \begin{equation} \E\left[\varphi(X_t)\right] = \hat{\lambda} \E^\circ\left[\int_{T_0=0}^{T_1}\varphi(X_s) ds\right] \label{eq:palm_inversion_formula} \end{equation} In the above, $\E^\circ$ stands for the Palm expectation, which is the conditional expectation given that the point process has a point at time $0$ (i.e. given that there is a departure from the $M/M/I_{\max}/I_{\max}^*$ queue at time $0$)\footnote{Note that the definition of the conditional expectation can be given a meaning even though the probability of the point process having a point at time $0$ exactly is $0$ \cite[Section 7.2.2]{le2010performance}.}. Also, under this conditional expectation, $T_0=0$ and $T_1$ is the following departure instant. Last, $\hat{\lambda}$ is the intensity of the point process of departures, which can be calculated from the Markov chain as $\hat{\lambda} = \sum_i i p_i \mu = \mu \bar{N}$ with the stationary expectation of $Z_t$ denoted as $\bar{N}:=\sum_{i=1}^{I_{\max}}i p_{i}$. Observe that obtaining $\E\left[\varphi(X_t)\right]$ for arbitrary $\varphi$ is equivalent to finding the stationary distribution of the age of information at an arbitrary point in time. 
Applying these ideas to the AoI gives the following theorem: \begin{theorem} The stationary PDF of the age of information at an arbitrary point in time, $f(x)$, is given by \begin{equation} f(x) = \hat{\lambda} \int_{0}^{x} \int_{x-x_0}^{\infty} f^\circ(x_0,t_1) dt_1 dx_0 \label{eq:f_age_fct_of_pdf_at_arrival} \end{equation} where $f^\circ(x_0, t_1)$ denotes the joint PDF of the age $x_0$ just after an informative message arrival \emph{and} of the time $t_1$ that will elapse until the next informative message arrives. \end{theorem} Note that $f^\circ(x_0, t_1)$ is a Palm PDF, i.e. it corresponds to observations made upon the arrival of an informative message. Following the conventions in \cite{baccelli2012palm}, we use a $^\circ$ superscript to denote a Palm PDF. \begin{proof} We apply Palm's inversion formula \eqref{eq:palm_inversion_formula}. Next, note that for $0\leq s \leq T_1$ we have $X_s=s+X_{0^+}$, therefore \begin{align} \E^\circ\left[\int_{0}^{T_1}\varphi(X_s) ds\right] = \E^\circ\left[\int_{0}^{T_1}\varphi(s+X_{0^+}) ds\right] \label{eq:plam_inversion_derivation1} \end{align} By definition of $f^\circ(x_0, t_1)$, it follows that \begin{align} &\E^\circ\left[\int_{0}^{T_1}\varphi(X_s) ds\right] \nonumber \\ &= \int_{0}^{\infty} \int_{0}^{\infty} \int_{0}^{t_1} \varphi(x_0 + s) ds f^\circ(x_0,t_1) dx_0 dt_1\nonumber \\ &= \int_{0}^{\infty} \varphi(u) \int_{0}^{u} \int_{u-x_0}^{\infty} f^\circ(x_0,t_1) dt_1 dx_0 du \quad, \label{eq:plam_inversion_derivation} \end{align} where, in the last line, we substituted $u=x_0+s$ with $x_0\leq u \leq x_0+t_1$.
Now we find the PDF of the age at any arbitrary point in time $f(x)$ from comparing \eqref{eq:plam_inversion_derivation} with \begin{align} \E\left[\varphi(X_t)\right] = \int_{0}^{\infty} \varphi(x) f(x) dx \label{eq:expectation_test_fct_age} \end{align} where we find \begin{equation} f(x) = \hat{\lambda} \int_{0}^{x} \int_{x-x_0}^{\infty} f^\circ(x_0,t_1) dt_1 dx_0 \end{equation} \end{proof} In the following, we calculate the Palm distribution $f^\circ(x_0,t_1)$. This is tractable because it involves $Z_t$ only, which is a Markov process on a finite state space. \section{Related Work} The problem of status updating to combat data staleness in distributed systems that use a shared and unreliable network was first discussed in the context of real-time database systems in \cite{songliu}. Essentially, a recent reincarnation of this problem in the context of IoT, known as Age of Information (AoI), considers transmission scheduling strategies to update the status at some receiver in a way that optimizes the freshness of that information~\cite{Kaul12,sun18,ioannidis09}. This problem has been of particular interest in the context of vehicular networks~\cite{kaul11,kaul11b} and sensory information transmission over wireless networks~\cite{He:18,Hribar:17}, as the freshness of information, such as the captured environment model that is exchanged between vehicles, is safety critical. For a comprehensive review see~\cite{YatesSBKMU21a}. Given a single sender and a system modeled as an M/M/1 queue, the work in \cite{Kaul12} derives the \emph{sample path average age} at the receiver as $\lim_{T\rightarrow\infty} \frac{1}{T}\int_{0}^{T}\Delta(t) dt = \lambda\left(E[XS]+E[X^2]/2\right)$ with $\Delta(t)$ denoting the age of information at the receiver at time $t$, $\lambda$ being the message arrival rate and the random variables $X,S$ denoting the message inter-arrival time and message response time, respectively.
The same seminal work provides expressions for the sample path average age in a D/M/1 system using a transcendental function. Given the forms derived above, the work \cite{Kaul12} also provides the parametrization that minimizes the average age. Similarly, works such as \cite{Kam13,Kam16:ToIT} consider the age at the receiver given $D/G/1$ and $M/M/\infty$ systems where messages may arrive out of order. The work in \cite{Kam13} considers the distribution of the age process for a deterministic transmission schedule and a single server with general service time distribution under the FIFO assumption. The authors derive in \cite{Kam16:ToIT} an expression for the \emph{average age}, using a similar reasoning as \cite{Kaul12}, as a function of the average message arrival rate and the service time distribution. Note that the expression provided there is not directly computable as it contains multiple infinite sums and infinite products. Applications of the methods above to the special case of G/G/1/1 queueing systems under message blocking and message preemption are found in \cite{soysal21}. In contrast to this work, however, our approach here provides the distribution of the age using a computable closed form that is constructed as a combination of Laplace-Stieltjes transforms of elementary functions. Going beyond elementary queues, the work in \cite{Talak2017MinimizingAI} considers the AoI for a path consisting of a concatenation of multiple links where the random delay at each of the links is only due to random access. The authors show that, given this delay model and a graph model of the network, the problem of finding a transmission strategy for minimizing the AoI at $N$ sender-receiver pairs can be decomposed into a simpler equivalent optimization problem. We believe that the main reason for this lies in the random-access delay model, which does not incorporate queueing and scheduling effects.
The work in \cite{Bedewy17} also considers multihop networks, modeled, however, as multihop queueing networks, and shows that a preemptive Last Generated First Served (LGFS) policy at all nodes minimizes any non-decreasing functional of the age in a stochastic dominance sense. This result is obtained under the assumption that all message transmission times are iid exponentially distributed at all nodes. The works in \cite{Yates:multiple_source:TINT19,Yates:MDS:TINT20} provide a method to calculate the moment generating function (MGF) and the moments of the AoI at a network monitor for networks of \emph{preemptive finite buffer servers} based on the so-called stochastic hybrid system (SHS) framework~\cite{Hu:SHS:2000} by leveraging a notion of a hybrid state $[\{q(t)\}_{t\geq 0},\mathbf{x}(t)]$, where $\{q(t)\}_{t\geq 0}$ describes a continuous-time Markov chain over a finite state space and $\mathbf{x}(t) \in \mathbb{R}_{\geq 0}^{1\times n}$ describes a vector of non-negative real age values. By attaching deterministic matrices $\mathbf{A_l}\in\{0,1\}^{n\times n}$ to the transitions $l\in L$ of the Markov chain $\{q(t)\}_{t\geq 0}$, where $L$ denotes the set of transitions, one is able to track the jumps of the age vector as $\mathbf{x'}=\mathbf{x}\mathbf{A_l}$. The key to finding a formulation for the expected age and for the MGF of the age is a set of first order differential equations that assume $\{q(t)\}_{t\geq 0}$ is ergodic and utilize its steady state probability distribution~\cite{Yates:multiple_source:TINT19,Yates:MDS:TINT20}. Note that there exists a direct relation between the presented SHS framework and utilizing the Master equation $\frac{d}{d t} E\left[\varphi(q_t,X_t)\right] = E\left[G \varphi(q_t,X_t)\right]$, with $\varphi$ being a test function, $G$ a generator, and $q_t$ and $X_t$ describing the queue state and the age at time $t$, respectively.
This relation is explored in~\cite{Yates:multiple_source:TINT19,Yates:MDS:TINT20} to simplify the SHS formulation. We note that, in general, applying the master equation to the infinitesimal generator that describes the jump and drift evolution of the age results in the Fokker-Planck equation describing the time evolution of the age density. Analytical closed-form results that solve this formulation, even for the stationary age distribution, are yet to be shown. Computable solutions to the presented SHS system of equations are provided for the examples of a single M/M/1/1 queue and a line network of M/M/1/1 queues in~\cite{Yates:MDS:TINT20}. Note that the SHS framework was applied in~\cite{Yates18} to obtain a closed form for the expected age for a system of parallel servers where a new message arrival preempts the oldest message under way. Concerning message reordering, a seminal work on packet reordering is \cite{BaccelliGP84}, which provides a recursive expression for the total delay distribution of a system in which messages that arrive in order are delivered through a disordering system, hence the out-of-order arrivals require resequencing. The authors provide an analytical solution for the case of an $M/G/\infty$ disordering system given in terms of a Laplace-Stieltjes transform. A min-plus approach to the Age of Information is given in~\cite{Fidler21:AoI}, showing that the virtual delay at a FIFO system, i.e., the horizontal deviation of cumulative arrivals and departures at a min-plus system, is an upper bound on the age. Equipped with lower bounds on the cumulative arrival traffic, the work in~\cite{Fidler21:AoI} shows deterministic and statistical upper bounds on the age of information for different combinations of arrivals and systems with deterministic or probabilistic descriptions. In contrast to~\cite{Fidler21:AoI}, we consider here non-FIFO systems with possible message reordering.
\section{Computing the Palm Distributions} \label{sec:computing_palm_probabilities} \subsection{Decomposition into Forward and Backward Components} In order to compute the Palm PDF $f^\circ(x_0, t_1)$ we observe that the part on $x_0$ (the age) involves the past of $Z_t$, whereas the part on $t_1$ (time until a new arrival) involves the future of $Z_t$. This is captured by the following theorem. \begin{theorem} The joint PDF of both the age $x_0$ just after the informative message arrival \emph{and} the length of the cycle $t_1$ until the arrival of the next informative message is given by \begin{align} \label{eq:forward-backward2} f^\circ(x_0,t_1)= \sum_{(n',n) \mst 1\leq n+1\leq n'\leq I_{\max}} p^\circ_{n',n} g^\circ(x_0|n',n)h(t_1|n) \end{align} where \begin{itemize} \item $p^\circ_{n',n}$ is the probability that an arbitrary informative message arrival happens at a transition $(n'\to n)$ of the Markov chain $Z_t$ and is given by \begin{equation} p^\circ_{n',n} = \frac{p_{n'}}{\bar{N}} \; \ind{1\leq n+1\leq n'\leq I_{\max}} \end{equation} In the above, $\bar{N}:=\sum_{i=1}^{I_{\max}}i p_{i}$ is the stationary expectation of $Z_t$ and $p_i$ is given in \eqref{eq:steady_state_prob_chain_nr_msgs_underway}; \item $g^\circ(x_0|n',n)$ is the PDF of the Palm distribution of the age $x_0$ just after the informative message arrival given that the state of the Markov chain is $n'$ just before the arrival of the informative message and $n$ just after the arrival, where $n'\geq n+ 1$; \item $h(t_1|n)$ is the stationary PDF of the time that will elapse from time $t$ until the next informative message arrives, given that $Z_t=n$. \end{itemize} \end{theorem} The proof exploits the Markov property of $Z_t$, which expresses that the future depends on the past only through the present state.
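For a small instance, the normalization of $p^\circ_{n',n}$ can be checked numerically. The following Python sketch (all parameters hypothetical) builds the generator of $Z_t$ with upward rate $\lambda$ and rate $\mu$ from $n'$ to each state $n<n'$, cf.\ \eqref{eq:transition_rates_fwd_process}, obtains $p_i$ by power iteration on the uniformized chain instead of using \eqref{eq:steady_state_prob_chain_nr_msgs_underway}, and verifies that the weights $p_{n'}/\bar{N}$ sum to one over all transitions $(n'\to n)$:

```python
lam, mu, I_max = 1.0, 0.5, 5   # hypothetical parameters
S = I_max + 1

# generator of Z_t: up by one at rate lam, down to each lower state at rate mu
Q = [[0.0] * S for _ in range(S)]
for n in range(S):
    if n < I_max:
        Q[n][n + 1] = lam
    for j in range(n):
        Q[n][j] = mu
    Q[n][n] = -sum(Q[n])

# stationary distribution via power iteration on the uniformised chain P = I + Q/u
u = 1.1 * max(-Q[n][n] for n in range(S))
p = [1.0 / S] * S
for _ in range(5000):
    p = [p[n] + sum(p[m] * Q[m][n] for m in range(S)) / u for n in range(S)]

N_bar = sum(i * p[i] for i in range(S))            # stationary expectation of Z_t
p_palm = {(n_p, n): p[n_p] / N_bar                 # Palm prob. of transition n' -> n
          for n_p in range(1, S) for n in range(n_p)}
assert abs(sum(p_palm.values()) - 1.0) < 1e-9      # a proper distribution
```

Each state $n'$ contributes $n'$ transitions of weight $p_{n'}/\bar{N}$, so the total is $\sum_{n'} n' p_{n'}/\bar{N} = 1$, as required.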
\begin{proof} Define $f^\circ(x_0, t_1|n', n)$ as the joint PDF of the Palm distribution of both the age $x_0$ just after the informative message arrival \emph{and} the length of the cycle $t_1$ until the arrival of the next informative message, given that the state of the Markov chain is $n'$ just before the arrival of the informative message and $n$ just after the arrival, where $n'\geq n+ 1$. It follows that the required PDF $f^\circ(x_0, t_1)$ is given by \begin{align} \label{eq:forward-backward} f^\circ(x_0,t_1)= \sum_{(n',n) \mst 1\leq n+1\leq n'\leq I_{\max}} p^\circ_{n',n} f^\circ(x_0, t_1|n',n) \end{align} where $p^\circ_{n',n}$ is the probability that an arbitrary informative message arrival happens at a transition $(n'\to n)$ of the Markov chain $Z_t$. By \cite[Thm 7.1.2]{boudec2011performance}, such a probability is given by \begin{equation} p^\circ_{n',n} = \eta p_{n'} Q_{n',n} \; \ind{1\leq n+1\leq n'\leq I_{\max}} \end{equation} where $\mathbf{1}_{\{\cdot\}}$ is the indicator function, equal to $1$ when the condition is true and $0$ otherwise, $p_{n'}$ is the stationary probability given in \eqref{eq:steady_state_prob_chain_nr_msgs_underway}, $Q_{n',n}$ is the transition rate in \eqref{eq:transition_rates_fwd_process} and $\eta$ is a normalizing constant. Observe that $Q_{n',n}=\mu$, which gives $\eta^{-1}=\mu\sum_{i=1}^{I_{\max}}i p_{i}=\mu\bar{N}$, where $\bar{N}$ is the stationary expectation of $Z_t$. We finally obtain \begin{equation} p^\circ_{n',n} = \frac{p_{n'}}{\bar{N}} \; \ind{1\leq n+1\leq n'\leq I_{\max}} \end{equation} Let $g^\circ(x_0|n',n)$ denote the PDF of the Palm distribution of the age $x_0$ just after the informative message arrival given that the state of the Markov chain is $n'$ just before the arrival of the informative message and $n$ just after the arrival, where $n'\geq n+ 1$.
Recall that $h(t_1|n)$ denotes the stationary PDF of the time that will elapse from time $t$ until the next informative message arrives, given that $Z_t=n$. By the Markov property, this is also the PDF of the Palm distribution of the time until the next informative message arrives given that the state of the Markov chain is $n'$ just before the arrival of the informative message and $n$ just after the arrival. Again by the Markov property, $f^\circ(x_0, t_1|n',n)= g^\circ(x_0|n',n)h(t_1|n)$, which proves \eqref{eq:forward-backward2}. \end{proof} We next compute $h(t_1|n)$, which we call the forward component of \eqref{eq:forward-backward2}. The computation of the backward component $g^\circ(x_0|n',n)$ will involve a similar method plus a time-reversal argument. \subsection{Computation of the Forward Component} First, we introduce the following lemma to calculate the Laplace-Stieltjes Transform (LST) of the time until the occurrence of the \emph{next transition of interest} in a continuous-time Markov chain $(Z_t)_{t\in\mathbb{R}_+}$, conditioned on $Z_t=n$. The transitions of interest are defined by some subset $\tilde{\mathcal{F}}$ of $E\times E$, where $E\subseteq \mathbb{N}$ is the state space of the Markov chain. \begin{lemma}\label{lemma:LST_of_1st_event_forward_component_given_state_n} Consider a time-homogeneous, continuous-time Markov chain $(Z_t)_{t\in\mathbb{R}_+}$ with state space $E\subseteq \Nats$ and with transition rates $Q_{n,n'}$; let $\tilde{d}_{n}=\sum_{n'\in E}Q_{n,n'}$ denote the sum of all outgoing rates from state $n$ and assume that $\tilde{d}_{n}>0$ for all $n\in E$. Let $\tilde{\mathcal{F}}\subseteq E\times E$ such that $Q_{n,n'}> 0$ for all $(n,n')\in \tilde{\mathcal{F}}$. Call $\tilde{Y}_t$ the time that will elapse from $t$ until the next jump in $\tilde{\mathcal{F}}$ of the Markov chain, i.e. $\tilde{Y}_t =\inf\{s> 0, (Z_{(t+s)^-},Z_{(t+s)^+})\in \tilde{\mathcal{F}}\}$.
The conditional LST of $\tilde{Y}_t$ given that $Z_t=n$, denoted as $\tilde{f}_{n}(\nu)$, satisfies \begin{align} &\tilde{f}_{n}(\nu) \coloneqq \E\left[e^{-\nu \tilde{Y}_t} | Z_t=n\right] \nonumber\\ &= \frac{1}{\tilde{d}_{n}+ \nu}\left(\sum_{\substack{n',\\(n,n')\notin\tilde{\mathcal{F}}}} \tilde{f}_{n'}(\nu) Q_{n,n'}+ \sum_{\substack{n',\\(n,n')\in\tilde{\mathcal{F}}}} Q_{n,n'}\right). \label{eq:LST_of_next_transition_of_interest} \end{align} \end{lemma} \begin{proof} Fix some arbitrary time $t$ and define $\tilde{S}_t$ as the time until the next transition (of interest or not) out of state $Z_t$ and let $ N'_t\coloneqq Z_{t+\tilde{S}_t}$ denote the next state. It is known \cite{gillespie1976general} that, conditionally on $Z_t=n$, $N'_t$ and $\tilde{S}_t$ are independent, the distribution of $\tilde{S}_t$ is exponential with rate $\tilde{d}_{n}$ and the distribution of $N'_t$ is given by $\P(N'_t=n'| Z_t=n)=\frac{Q_{n,n'}}{\tilde{d}_n}$. It follows that \begin{equation} \P\left[N'_t=n'| Z_t=n,\tilde{S}_t=s\right]=\frac{Q_{n,n'}}{\tilde{d}_n} \label{eq:gillespie} \end{equation} and \begin{equation} \E\left[e^{-\nu \tilde{S}_t}\right] = \frac{\tilde{d}_{n}}{\tilde{d}_{n}+ \nu} \label{eq:gillespie2} \end{equation} Also let $\tilde{R}_t$ denote the residual time from the next transition until the next transition of interest, i.e. $\tilde{R}_t=0$ whenever $(Z_t,N'_t)\in \tilde{\mathcal{F}}$ and otherwise $\tilde{R}_t=\tilde{Y}_{t+\tilde{S}_t}$.
Hence \begin{equation} \tilde{Y}_t = \tilde{S}_t + \tilde{R}_t \label{eq:Y_first_departure} \end{equation} By conditioning on $\tilde{S}_t=s$ we can write \begin{align} &\E\left[e^{-\nu \tilde{Y}_t} | Z_t=n,\tilde{S}_t=s\right] \nonumber\\ &= e^{-\nu s}\E\left[e^{-\nu \tilde{R}_t} | Z_t=n,\tilde{S}_t=s\right] \label{eq:Y_lst_1} \end{align} By conditioning with respect to $N'_t$ in the latter term and applying \eqref{eq:gillespie} we obtain \begin{align} &\E\left[e^{-\nu \tilde{R}_t} | Z_t=n,\tilde{S}_t=s\right] \nonumber\\ &= \sum_{n'\in E}\left(\E\left[e^{-\nu \tilde{R}_t} | Z_t=n,\tilde{S}_t=s, N'_t=n'\right]\times \right.\nonumber\\ &\left.\P\left[N'_t=n'| Z_t=n,\tilde{S}_t=s\right]\right)\nonumber\\ &=\sum_{n'\in E}\left(\E\left[e^{-\nu \tilde{R}_t} | Z_t=n,\tilde{S}_t=s, N'_t=n'\right]\frac{Q_{n,n'}}{\tilde{d}_n}\right) \label{eq:Y_lst_1b} \end{align} Now if $(n,n')\in \tilde{\mathcal{F}}$ then $\tilde{R}_t=0$ hence \begin{equation} \E\left[e^{-\nu \tilde{R}_t} |Z_t=n,\tilde{S}_t=s, N'_t=n'\right]=1 \mbox{ if } (n,n')\in \tilde{\mathcal{F}} \label{eq:Y_lst_1c} \end{equation} Else, i.e. if $(n,n')$ is not in $\tilde{\mathcal{F}}$, $\tilde{R}_t=\tilde{Y}_{t+\tilde{S}_t}$ is the time that remains to elapse until the next transition of interest; by the Markov property, the future of the Markov chain depends on the history only via the current state, i.e. \begin{align} &\E\left[e^{-\nu \tilde{Y}_{t+\tilde{S}_t}} | Z_t=n,\tilde{S}_t=s, N'_t=n'\right] \nonumber\\ & =\E\left[e^{-\nu \tilde{Y}_{t+s}} | N'_t=n'\right]= \E\left[e^{-\nu \tilde{Y}_{t+s}} | Z_{t+s}=n'\right] \nonumber\\ &=\tilde{f}_{n'}(\nu) \label{eq:Y_lst_1d} \end{align} where the last equality is because the Markov chain is time-homogeneous. 
Combining \eqref{eq:Y_lst_1} with \eqref{eq:Y_lst_1b}-\eqref{eq:Y_lst_1d} gives \begin{align} &\E\left[e^{-\nu \tilde{Y}_t} | Z_t=n,\tilde{S}_t=s\right]\nonumber\\ &= e^{-\nu s}\left(\sum_{\substack{n',\\(n,n')\notin\tilde{\mathcal{F}}}} \tilde{f}_{n'}(\nu) \frac{Q_{n,n'}}{\tilde{d}_{n}}+ \sum_{\substack{n',\\(n,n')\in\tilde{\mathcal{F}}}} \frac{Q_{n,n'}}{\tilde{d}_{n}}\right) \label{eq:Y_lst_1f} \end{align} By the law of total expectation we can now write \begin{align} &\tilde{f}_{n}(\nu) = \E\left[e^{-\nu \tilde{Y}_t} | Z_t=n\right] \nonumber\\ &=\E\left[\E\left[e^{-\nu \tilde{Y}_t} | Z_t=n,\tilde{S}_t\right]\right] \nonumber\\ &= \E\left[e^{-\nu \tilde{S}_t}\right]\left(\sum_{\substack{n',\\(n,n')\notin\tilde{\mathcal{F}}}} \tilde{f}_{n'}(\nu) \frac{Q_{n,n'}}{\tilde{d}_{n}}+ \sum_{\substack{n',\\(n,n')\in\tilde{\mathcal{F}}}} \frac{Q_{n,n'}}{\tilde{d}_{n}}\right) \label{eq:LST_of_next_transition_of_interest2} \end{align} Using \eqref{eq:gillespie2} completes the proof. \end{proof} \vspace{5pt} Now we can use Lem.~\ref{lemma:LST_of_1st_event_forward_component_given_state_n} to calculate the stationary PDF $h(t_1|n)$ of the time that will elapse from a fixed time $t$ until the next informative message arrives, conditioned on $Z_t=n$. The set of transitions of interest is $\tilde{\mathcal{F}}\coloneqq \{(i,j): i>j\}$, i.e. the transitions associated with the arrival of informative messages. The transition rates $Q_{i,j}$ are given in \eqref{eq:transition_rates_fwd_process} and $\tilde{d}_{n}= \lambda \mathbf{1}_{\{n<I_{\max}\}} + \mu n$. The Laplace-Stieltjes transform of $h(t_1|n)$ is again denoted by $\tilde{f}_{n}(\nu)$; the application of Lem.~\ref{lemma:LST_of_1st_event_forward_component_given_state_n} gives: \begin{align} \tilde{f}_{n}(\nu) = \frac{1}{\tilde{d}_{n}+ \nu} \left[ n \mu + \lambda \mathbf{1}_{\{n<I_{\max}\}}\tilde{f}_{n+1}(\nu)\right].
\label{eq:LST_of_next_arrival_given_state_nprime} \end{align} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{./gfx/sample_path_jumps3} \caption{A sample path of the number of non-obsolete messages under way given the system model from Sect.~\ref{sec:system_model}. Looking at the forward process: the downward jumps mark the arrival of informative messages at the receiver, which make previous messages obsolete. As messages depart in batches of random sizes in this model, the sojourn time of the freshest message of the batch corresponds to the age value set upon the arrival of that message at the receiver.} \label{fig:sample_path_jumps} \end{figure} This recursive relation can be rewritten using matrix notation as \begin{equation} \mt{(\nu I + \tilde{D}) \tilde{f} = \pmb{\bar{\mu}} + \bar{\Lambda} \tilde{f}} \label{eq:LST_of_next_arrival_given_state_nprime_matrix_recursive} \end{equation} with the identity matrix $\mt{I}$, the vectors $\mt{\tilde{f}}= [\tilde{f}_{0}(\nu),\dots,\tilde{f}_{I_{\max}}(\nu)]\T$ and $\pmb{\bar{\mu}} = [0,\mu,2\mu,\dots,I_{\max}\mu]\T$, and the matrices $\mt{\tilde{D}} \coloneqq \diag (\tilde{d}_0,\tilde{d}_1,\dots,\tilde{d}_{I_{\max}})$ and \begin{align} \pmb{\bar{\Lambda}} \coloneqq \begin{bmatrix} 0 & \lambda& & \\ & \ddots& \ddots& \\ & & \ddots &\lambda\\ & & &0 \end{bmatrix}.
\label{eq:definition_of_Lambda_bar} \end{align} Now we can directly solve for the conditional LSTs as \begin{equation} \mt{\tilde{f} = (\nu I + \tilde{D} - \bar{\Lambda})^{-1} \pmb{\bar{\mu}} } \label{eq:LST_of_next_arrival_given_state_nprime_matrix} \end{equation} \subsection{Computation of the Backward Component} Recall that $g^\circ(x_0|n',n)$ denotes the PDF of the Palm distribution of the age $x_0$ just after the informative message arrival given that the state of the Markov chain is $n'$ just before the arrival of the informative message and $n$ just after the arrival, where $n'\geq n+ 1$; this age equals the time elapsed since the freshest message of the served batch was sent. For the computation of $g^\circ(x_0|n',n)$ we resort to time reversal, as this allows us to use a similar method as for the forward component. The time-reversed process $Z_t^r$ is defined by $Z_t^r=Z_{-t}$. In a nutshell, time reversal allows us to change the underlying queueing model from Sect.~\ref{sec:underlying_model} into a FIFO queue where arrivals occur in message batches of random size while the server removes exactly \emph{one} message on each visit. To illustrate this, consider the sample path shown in Fig.~\ref{fig:sample_path_jumps} in the reverse time direction. It is shown in \cite[Section~1.7]{Kelly:Reversibility2011} that if $Z_t$ is endowed with its stationary probability, then the time-reversed process is also a time-homogeneous continuous-time Markov chain with the same state space and the same stationary probability, but with different transition rates. Specifically, by \cite[Theorem~1.12]{Kelly:Reversibility2011} the transition rates $Q'_{i,j}$ for $Z_t^r$ depend on the transition rates of the original Markov process \eqref{eq:transition_rates_fwd_process} and its stationary distribution \eqref{eq:steady_state_prob_chain_nr_msgs_underway}.
We obtain $Q'_{i,i-1} =\lambda_{i}'$ for $i=1,\dots,I_{\max}$, $Q'_{i,j} =\mu_{ij}'$ for $i=0,\dots,I_{\max}-1$ and $i<j\leq I_{\max}$, and $Q'_{i,j} =0$ otherwise, with \begin{equation} \lambda_i' = \begin{cases} \frac{i}{i+1}\left(\lambda + (i+1) \mu\right) &\text{for $1\leq i < I_{\max}$}\\ \\ I_{\max} \mu &\text{for $i = I_{\max}$} \end{cases} \label{eq:lambda_prime} \end{equation} and \begin{equation} \mu_{ij}' = \begin{cases} \frac{(j+1) \mu \lambda^{j-i}}{(i+1)\prod\limits_{k=i+2}^{j+1}\left(\lambda + k\mu\right)}&\text{for $j \neq I_{\max}$}\\ \\ \frac{\lambda^{I_{\max}-i}}{(i+1)\prod\limits_{k=i+2}^{I_{\max}}\left(\lambda + k\mu\right)}&\text{for $j = I_{\max}$} \end{cases} \label{eq:mu_prime} \end{equation} for $0\leq i< I_{\max}$. The derivation of \eqref{eq:lambda_prime}, \eqref{eq:mu_prime} is given in the appendix. In $Z_t$, upon serving a batch of messages, the sojourn time of the freshest message of that batch is the age of that particular informative message. In $Z^r_t$, this is given by the sojourn time of the $(n-k+1)$st message of an arriving batch of size $k\leq n$. In $Z^r_t$, the arrival of a batch corresponds to a transition $n\to n'$ with $n'\geq n+1$ and the size of the arriving batch is $k=n'-n$. It follows that, for $n'\geq n+ 1$, $g^\circ(x_0|n',n)$ can be re-interpreted as the PDF of the time from now until the $(n+1)$st departure of $Z^r_t$, given that $Z^r_t$ is doing a transition $n\to n'$ now. Since $Z^r_t$ is also Markov, we can apply the Markov property and obtain that this is simply the PDF of the time from now until the $(n+1)$st departure of $Z^r_t$ given that $Z^r_t=n'$.
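The closed forms \eqref{eq:lambda_prime} and \eqref{eq:mu_prime} can be cross-checked numerically against the reversal formula $Q'_{i,j} = p_j Q_{j,i}/p_i$ of \cite[Theorem~1.12]{Kelly:Reversibility2011}. A Python sketch (parameters hypothetical; the stationary vector is obtained by a generic linear solve rather than from \eqref{eq:steady_state_prob_chain_nr_msgs_underway}):

```python
import math

lam, mu, I_max = 1.0, 0.5, 4      # hypothetical parameters
S = I_max + 1

# forward generator of Z_t: up by one at rate lam, down to each lower state at rate mu
Q = [[0.0] * S for _ in range(S)]
for n in range(S):
    if n < I_max:
        Q[n][n + 1] = lam
    for j in range(n):
        Q[n][j] = mu
    Q[n][n] = -sum(Q[n])

def stationary(Q):
    """Solve p Q = 0 with sum(p) = 1 by Gaussian elimination."""
    m = len(Q)
    A = [[Q[r][c] for r in range(m)] for c in range(m)]   # Q transposed
    A[m - 1] = [1.0] * m                                  # normalisation row
    b = [0.0] * (m - 1) + [1.0]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            b[r] -= f * b[col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))) / A[r][r]
    return x

p = stationary(Q)

def prod(lo, hi):
    """prod_{k=lo}^{hi} (lam + k mu); the empty product equals one."""
    r = 1.0
    for k in range(lo, hi + 1):
        r *= lam + k * mu
    return r

# lambda'_i from the closed form vs. Kelly's reversal p_{i-1} Q_{i-1,i} / p_i
for i in range(1, S):
    closed = i / (i + 1) * (lam + (i + 1) * mu) if i < I_max else I_max * mu
    assert math.isclose(closed, p[i - 1] * Q[i - 1][i] / p[i], rel_tol=1e-9)

# mu'_{ij} from the closed form vs. p_j Q_{j,i} / p_i
for i in range(I_max):
    for j in range(i + 1, S):
        if j < I_max:
            closed = (j + 1) * mu * lam ** (j - i) / ((i + 1) * prod(i + 2, j + 1))
        else:
            closed = lam ** (I_max - i) / ((i + 1) * prod(i + 2, I_max))
        assert math.isclose(closed, p[j] * Q[j][i] / p[i], rel_tol=1e-9)
```

All assertions pass for this instance, confirming \eqref{eq:lambda_prime} and \eqref{eq:mu_prime} against the generic reversal formula.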
For $n+1=1$ this is the conditional PDF of the time until a next departure, which is exactly the problem that is solved in Lemma~\ref{lemma:LST_of_1st_event_forward_component_given_state_n}, and which we now extend as follows (the proof is similar and is not given). \begin{lemma}\label{lemma:LST_of_all_events_forward_component_given_state_n} Consider a time-homogeneous, continuous-time Markov chain $(\tilde{Z}_t)_{t\in\mathbb{R}_+}$ with state space $E\subseteq \Nats$ and with transition rates $\tilde{Q}_{n,n'}$; let $\tilde{d}_{n}=\sum_{n'\in E}\tilde{Q}_{n,n'}$ denote the sum of all outgoing rates from state $n$ and assume that $\tilde{d}_{n}>0$ for all $n\in E$. Let $\tilde{\mathcal{F}}\subseteq E\times E$ such that $\tilde{Q}_{n,n'}> 0$ for all $(n,n')\in \tilde{\mathcal{F}}$. For $k\geq 1$, call $\tilde{Y}^k_t$ the time that will elapse from $t$ until the $k$th jump in $\tilde{\mathcal{F}}$ of the Markov chain, i.e. $\tilde{Y}^1_t =\inf\{s> 0, (\tilde{Z}_{(t+s)^-},\tilde{Z}_{(t+s)^+})\in \tilde{\mathcal{F}}\}$ and, for $k\geq 2$, $\tilde{Y}^k_t =\inf\{s> \tilde{Y}^{k-1}_t, (\tilde{Z}_{(t+s)^-},\tilde{Z}_{(t+s)^+})\in \tilde{\mathcal{F}}\}$. The conditional LST of $\tilde{Y}^k_t$ given that $\tilde{Z}_t=n$, denoted as $\tilde{f}_{n,k}(\theta)$, satisfies, for $k\geq 1$: \begin{align} &\tilde{f}_{n,k}(\theta) = \frac{1}{\tilde{d}_{n}+ \theta} \times \nonumber\\ &\left(\sum_{\substack{n',\\(n,n')\notin\tilde{\mathcal{F}}}} \tilde{f}_{n',k}(\theta) \tilde{Q}_{n,n'}+ \sum_{\substack{n',\\(n,n')\in\tilde{\mathcal{F}}}} \tilde{f}_{n',k-1}(\theta)\tilde{Q}_{n,n'}\right). \label{eq:LST_of_kth_transition_of_interest} \end{align} where $\tilde{f}_{n,0}(\theta)=1$ by convention. \end{lemma} Let $f_{n',k}(\theta)$ be the LST of the time from now until the $k$th departure of $Z^r_t$ given that $Z^r_t=n'$, so that the LST of $g^\circ(x_0|n',n)$ is $f_{n', n+1}(\theta)$.
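The recursion of Lemma~\ref{lemma:LST_of_all_events_forward_component_given_state_n} can be implemented generically. The Python sketch below solves it by fixed-point iteration (the map is contracting for $\theta>0$) and checks it on a hypothetical two-state chain, where the time until the $k$th jump of interest is a sum of independent exponentials with a known LST:

```python
def kth_jump_lst(Q, F, theta, K, iters=200):
    """LSTs f[k][n] = E[exp(-theta * Y_k) | Z = n] of the time until the
    k-th jump in F, by fixed-point iteration of the lemma's recursion."""
    S = len(Q)
    d = [sum(Q[n][m] for m in range(S) if m != n) for n in range(S)]
    f = [[1.0] * S]                                  # f_{n,0} = 1 by convention
    for k in range(1, K + 1):
        fk = [0.0] * S
        for _ in range(iters):
            fk = [sum((f[k - 1][m] if (n, m) in F else fk[m]) * Q[n][m]
                      for m in range(S) if m != n) / (d[n] + theta)
                  for n in range(S)]
        f.append(fk)
    return f

# hypothetical two-state chain: 0 -> 1 at rate a, 1 -> 0 at rate b; F = {(1, 0)}
a, b, theta = 2.0, 3.0, 0.7
f = kth_jump_lst([[0.0, a], [b, 0.0]], {(1, 0)}, theta, K=2)
# from state 1, the first jump of interest takes Exp(b) time ...
assert abs(f[1][1] - b / (b + theta)) < 1e-9
# ... and from state 0 it takes Exp(a) + Exp(b)
assert abs(f[1][0] - a * b / ((a + theta) * (b + theta))) < 1e-9
```

For this alternating chain the $k$th jump of interest from state 0 takes $k$ full cycles, so $\tilde{f}_{0,k}(\theta)=\tilde{f}_{0,1}(\theta)^k$, which the iteration reproduces.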
We now apply Lemma~\ref{lemma:LST_of_all_events_forward_component_given_state_n} to the Markov chain $Z^r_t$ to compute $f_{n',k}(\theta)$ and obtain: \begin{eqnarray} f_{n',1}(\theta) \left[d_{n'} +\theta\right]&=& \lambda_{n'}' + \sum\limits_{j>n'} \mu_{n',j}' f_{j,1}(\theta)\label{eq:LST_of_next_departure_given_state_n_v2a}\\ f_{n',k}(\theta) \left[d_{n'} +\theta\right]&=& \lambda_{n'}' f_{n'-1,k-1}(\theta)+ \sum\limits_{j>n'} \mu_{n',j}' f_{j,k}(\theta) \label{eq:LST_of_next_departure_given_state_n_v2b} \end{eqnarray} for $0\leq n'\leq I_{\max}$ and $k\geq 2$, where $d_{n'} = \lambda_{n'}' + \sum_{j>n'} \mu_{n',j}'$ denotes the total outgoing rate of $Z^r_t$ in state $n'$ and $\lambda_0'=0$. We use the following matrix notation: $\mt{D} \coloneqq \diag (d_0,d_1,\dots,d_{I_{\max}})$, $\mt{f}_{\cdot,k} = [f_{0,k}(\theta),\dots,f_{I_{\max},k}(\theta)]\T$, $\pmb{\lbar'} = [0,\lambda_1',\dots,\lambda_{I_{\max}}']\T$. $\mt{M}$ is the upper triangular matrix with entries \begin{equation*} (\mt{M})_{ij} = \begin{cases*} \mu_{ij}'&\text{for $i < j$}\\ 0&\text{for $i \geq j$} \end{cases*} \end{equation*} and $\pmb{\Lambda}$ is the matrix with $\lambda_n'$ on the subdiagonal defined by \begin{align} \pmb{\Lambda} \coloneqq \begin{bmatrix} \mathbf{0} & \hdots& \hdots& \mathbf{0} \\ \lambda_1' & \ddots & & \vdots \\ \vdots& \ddots& \ddots &\vdots\\ \mathbf{0} & \hdots & \lambda_{I_{\max}}'&\mathbf{0} \end{bmatrix}.
\label{eq:definition_of_Lambda} \end{align} for $i,j \in \{0,1,\dots,I_{\max}\}$. We can rewrite the recursive relation of the conditional LST in \eqref{eq:LST_of_next_departure_given_state_n_v2a} as \begin{equation} \mt{(\theta I + D) f_{\cdot,1} = \pmb{\lbar'} + M f_{\cdot,1}} \label{eq:LST_of_next_departure_given_state_n_matrix_recursive} \end{equation} The previous equation can be solved and we obtain: \begin{equation} \mt{f_{\cdot,1} = (\theta I + D - M)^{-1} \pmb{\lbar'} } \label{eq:LST_of_next_departure_given_state_n_matrix} \end{equation} Similarly, we can rewrite \eqref{eq:LST_of_next_departure_given_state_n_v2b} as \begin{equation} \mt{(\theta I + D) f_{\cdot,k} = \pmb{\Lambda}f_{\cdot,k-1} + M f_{\cdot,k}} \label{eq:LST_of_only_kth_departure_given_state_n_matrix_recursive} \end{equation} for $k\geq 2$. Now we can construct a block matrix form that combines \eqref{eq:LST_of_next_departure_given_state_n_matrix_recursive} and \eqref{eq:LST_of_only_kth_departure_given_state_n_matrix_recursive} as \begin{align} \scalemath{0.83}{ \begin{bmatrix} \mt{\theta I + D} & & \mathbf{0} \\ & \ddots & \\ \mathbf{0} & & \mt{\theta I + D} \end{bmatrix} \begin{bmatrix} \mt{f_{\cdot,1}} \\ \vdots \\ \mt{f_{\cdot,I_{\max}}} \end{bmatrix} = \begin{bmatrix} \pmb{\lbar'} \\ \vdots \\ \mathbf{0} \end{bmatrix} + \begin{bmatrix} \mt{M} & & \mathbf{0} \\ \pmb{\Lambda} & \ddots & \\ \mathbf{0} &\pmb{\Lambda} & \mt{M} \end{bmatrix} \begin{bmatrix} \mt{f_{\cdot,1}} \\ \vdots \\ \mt{f_{\cdot,I_{\max}}} \end{bmatrix} } \label{eq:LST_of_kth_departure_given_state_n_matrix_recursive} \end{align} We can directly find the vector of conditional LSTs as \begin{align} \begin{bmatrix} \mt{f_{\cdot,1}} \\ \vdots \\ \mt{f_{\cdot,I_{\max}}} \end{bmatrix} = \begin{bmatrix} \mt{\theta I + D-M} & & \mathbf{0} \\ \pmb{-\Lambda} & \ddots & \\ \mathbf{0} &\pmb{-\Lambda} & \mt{\theta I + D-M} \end{bmatrix} ^{-1} \begin{bmatrix} \pmb{\lbar'} \\ \vdots \\ \mathbf{0} \end{bmatrix}
\label{eq:LST_of_kth_departure_given_state_n_matrix} \end{align} Since the computation of \eqref{eq:LST_of_kth_departure_given_state_n_matrix} requires the inversion of a matrix of the order of $I_{\max}^2\times I_{\max}^2$, we show in the following how to calculate the conditional LSTs recursively from \eqref{eq:LST_of_kth_departure_given_state_n_matrix_recursive}. We observe that $\mt{M- D = Q' - \Lambda}$, where $\mt{Q'}$ denotes the transition rate matrix of the continuous-time Markov chain associated with the reversed process $Z_t^r$. We define $\mt{\Phi \coloneqq \theta I + D - M}$ and obtain the following recursion in block matrix form \begin{align*} \begin{bmatrix} \mt{\Phi} & & & \\ \pmb{-\Lambda} & \ddots & \mbox{\Large $\mathbf{0}$}&\\ \mathbf{0} & \ddots& \ddots & \\ \mathbf{0} & \mathbf{0} & \pmb{-\Lambda} & \mt{\Phi} \end{bmatrix} \begin{bmatrix} \mt{f_{\cdot,1}} \\ \vdots \\ \vdots \\ \mt{f_{\cdot,I_{\max}}} \end{bmatrix} = \begin{bmatrix} \pmb{\lbar'} \\ \vdots \\ \vdots \\ \mathbf{0} \end{bmatrix} \end{align*} Now we obtain the conditional LSTs $\mt{f_{\cdot,k}}$ recursively with the initial condition \begin{align} \mt{f_{\cdot,1} = \Phi^{-1} \pmb{\lbar'}} \label{eq:LST_of_1st_departure_given_state_n_initial_condition} \end{align} and for $k\geq 2$ \begin{align} \mt{f_{\cdot,k} = \Psi f_{\cdot,k-1} = \Psi^{k-1}\Phi^{-1} \pmb{\lbar'}} \label{eq:LST_of_kth_departure_given_state_n_rest_of_recursion} \end{align} where we used the shorthand notation $\mt{\Psi \coloneqq \Phi^{-1}\Lambda}$.
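Since $\mt{\Phi}$ is upper triangular, \eqref{eq:LST_of_1st_departure_given_state_n_initial_condition} and \eqref{eq:LST_of_kth_departure_given_state_n_rest_of_recursion} reduce to repeated back-substitutions. A Python sketch (hypothetical parameters; the reversed rates are taken from \eqref{eq:lambda_prime} and \eqref{eq:mu_prime}); as a sanity check, every conditional LST equals one at $\theta=0$, since the $k$th departure of $Z^r_t$ occurs in finite time almost surely:

```python
lam, mu, I_max = 1.0, 1.0, 4      # hypothetical parameters
S = I_max + 1

def prod(lo, hi):
    """prod_{k=lo}^{hi} (lam + k mu); the empty product equals one."""
    r = 1.0
    for k in range(lo, hi + 1):
        r *= lam + k * mu
    return r

# reversed-chain rates lambda'_i and mu'_{ij}
lam_p = [0.0] + [i / (i + 1) * (lam + (i + 1) * mu) if i < I_max else I_max * mu
                 for i in range(1, S)]
mu_p = [[0.0] * S for _ in range(S)]
for i in range(I_max):
    for j in range(i + 1, S):
        if j < I_max:
            mu_p[i][j] = (j + 1) * mu * lam ** (j - i) / ((i + 1) * prod(i + 2, j + 1))
        else:
            mu_p[i][j] = lam ** (I_max - i) / ((i + 1) * prod(i + 2, I_max))

d = [lam_p[n] + sum(mu_p[n][n + 1:]) for n in range(S)]

def solve_phi(theta, b):
    """Solve (theta I + D - M) x = b by back-substitution (upper triangular)."""
    x = [0.0] * S
    for n in range(S - 1, -1, -1):
        x[n] = (b[n] + sum(mu_p[n][j] * x[j] for j in range(n + 1, S))) / (d[n] + theta)
    return x

def backward_lsts(theta, K):
    """f_{.,1} = Phi^{-1} lambda'; then f_{.,k} = Phi^{-1} (Lambda f_{.,k-1})."""
    fs = [solve_phi(theta, lam_p)]
    for _ in range(2, K + 1):
        prev = fs[-1]
        fs.append(solve_phi(theta, [0.0] + [lam_p[n] * prev[n - 1] for n in range(1, S)]))
    return fs

# at theta = 0 every conditional LST equals one
for fk in backward_lsts(0.0, S):
    assert all(abs(v - 1.0) < 1e-9 for v in fk)
```

For $\theta>0$ all entries lie strictly between zero and one, as required for the LST of a positive random variable.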
\begin{figure*} \centering \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{./simulations/matlab/fig/ccdf_age_at_inform_arrival_lambda1_comparison_mu_05_1_2_max_msgs20_log_trace_len5.eps} \caption{} \label{fig:ccdf_age_at_inform_2} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{./simulations/matlab/fig/pdf_age_at_any_point_lambda1_mu_comparison_0.5_1_2_max_msgs20_log_trace_len5.eps} \caption{} \label{fig:pdf_age_at_anytime} \end{subfigure} \caption{(a) CCDF of the age at the arrival times of informative messages for arrival rate $\lambda=1$ and varying OWD parameter $\mu$. $I_{\max} = 20$. (b) Probability density of the age $f(x)$ at any point in time obtained from \eqref{eq:f_age_fct_of_pdf_at_arrival} for $\lambda=1$ and varying OWD parameter $\mu$. $I_{\max} = 20$.} \end{figure*} \subsection{Computing $f^\circ(x_0, t_1)$} We can now put together the forward and backward elements. Let $\hat{f}(\nu,\theta)$ denote the LST of $f^\circ(x_0,t_1)$, specifically \begin{equation} \hat{f}(\nu,\theta):=\int_{0}^{\infty}\int_{0}^{\infty} f^\circ(x_0,t_1) e^{-\nu t_1} e^{-\theta x_0} dt_1 dx_0 \end{equation} From \eqref{eq:forward-backward2} it follows that \begin{align} \label{eq:joint_LST_x_t} \hat{f}(\nu,\theta) = \frac{1}{\eta}\sum\limits_{\substack{k,n \\ k\leq n}} p_{n-k}\mt{Q'(n-k,n)} f_{n,n-k+1}(\theta) \tilde{f}_{n-k}(\nu) \end{align} where $\tilde{f}_{n-k}(\nu)$ denotes the conditional LST of the \emph{forward component}, i.e., of the time until the next informative message arrival given state $n-k$. The formulation \eqref{eq:joint_LST_x_t} reflects the terms in \eqref{eq:forward-backward2}, i.e., that the age and the time until the next arrival of an informative message are conditionally independent given state $n-k$.
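Putting everything together, $\hat f(\nu,\theta)$ from \eqref{eq:joint_LST_x_t} can be evaluated numerically. The Python sketch below (hypothetical parameters) computes the stationary vector by a generic linear solve, derives the reversed rates via $Q'_{i,j}=p_jQ_{j,i}/p_i$, evaluates the forward and backward LSTs by back-substitution, and checks the normalizations $\eta=\mu\bar N$ and $\hat f(0,0)=1$:

```python
import math

lam, mu, I_max = 1.0, 0.5, 4       # hypothetical parameters
S = I_max + 1

# forward generator of Z_t
Q = [[0.0] * S for _ in range(S)]
for n in range(S):
    if n < I_max:
        Q[n][n + 1] = lam
    for j in range(n):
        Q[n][j] = mu
    Q[n][n] = -sum(Q[n])

def stationary(Q):
    """Solve p Q = 0 with sum(p) = 1 by Gaussian elimination."""
    m = len(Q)
    A = [[Q[r][c] for r in range(m)] for c in range(m)]   # Q transposed
    A[m - 1] = [1.0] * m                                  # normalisation row
    b = [0.0] * (m - 1) + [1.0]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            b[r] -= f * b[col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))) / A[r][r]
    return x

p = stationary(Q)

# reversed-chain rates via Q'_{i,j} = p_j Q_{j,i} / p_i
lam_p = [0.0] * S
mu_p = [[0.0] * S for _ in range(S)]
for i in range(S):
    for j in range(S):
        if i != j and Q[j][i] > 0.0:
            r = p[j] * Q[j][i] / p[i]
            if j == i - 1:
                lam_p[i] = r
            elif j > i:
                mu_p[i][j] = r
d_rev = [lam_p[n] + sum(mu_p[n][n + 1:]) for n in range(S)]

def forward_lst(nu):
    """Back-substitution of the forward recursion."""
    f = [0.0] * S
    f[I_max] = I_max * mu / (I_max * mu + nu)
    for n in range(I_max - 1, -1, -1):
        f[n] = (n * mu + lam * f[n + 1]) / (lam + n * mu + nu)
    return f

def solve_phi(theta, b):
    """Solve (theta I + D - M) x = b; the matrix is upper triangular."""
    x = [0.0] * S
    for n in range(S - 1, -1, -1):
        x[n] = (b[n] + sum(mu_p[n][j] * x[j] for j in range(n + 1, S))) / (d_rev[n] + theta)
    return x

def backward_lsts(theta, K):
    fs = [solve_phi(theta, lam_p)]                  # f_{.,1}
    for _ in range(2, K + 1):                       # f_{.,k} = Phi^{-1} Lambda f_{.,k-1}
        prev = fs[-1]
        fs.append(solve_phi(theta, [0.0] + [lam_p[n] * prev[n - 1] for n in range(1, S)]))
    return fs

eta = sum(p[i] * mu_p[i][j] for i in range(S) for j in range(i + 1, S))
assert math.isclose(eta, mu * sum(i * p[i] for i in range(S)))   # eta = mu * N_bar

def f_hat(nu, theta):
    ft, fb = forward_lst(nu), backward_lsts(theta, I_max)
    return sum(p[n - k] * mu_p[n - k][n] * fb[n - k][n] * ft[n - k]
               for n in range(1, S) for k in range(1, n + 1)) / eta

assert abs(f_hat(0.0, 0.0) - 1.0) < 1e-9   # the joint Palm PDF integrates to one
```

Evaluating `f_hat` on a grid of $(\nu,\theta)$ pairs then gives a numerical handle on \eqref{eq:joint_LST_x_t} before any symbolic Laplace inversion is attempted.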
\section{Computing Age Performance Metrics} \subsection{Age distribution at arbitrary points in time} To obtain the age distribution at any point in time we construct the PDF $f^\circ(x_0, t_1)$ from \eqref{eq:forward-backward2}. From the previous section we obtain the LSTs of the decomposed forward and backward components, $\tilde{f}_{n-k}(\nu)$ and $f_{n,n-k+1}(\theta)$, respectively. Next, we compute $p^\circ_n$, i.e., the probability that the state of the Markov chain is $n$ just after the arrival of an informative message. To compute this we consider the backward component and use the set of transitions of interest $\mathcal{F'}\coloneqq \{(i,j)\}_{i<j}$, i.e., the transitions at the rates $\mu_{ij}'$. Note that these transitions correspond to the arrival of informative messages. Now, we can calculate the stationary probability $p^\circ_n$ as \begin{align} p_n^\circ &= \frac{\sum\limits_{\substack{n'<n,\\(n',n)\in{\mathcal{F'}}}}\mkern-18mu p_{n'}Q'_{n',n}}{\sum\limits_{\substack{(n',m)\in{\mathcal{F'}}}}\mkern-18mu p_{n'}Q'_{n',m}} \\ &= \frac{1}{\eta}\sum_{k=1}^{n}p_{n-k} \mt{Q'(n-k,n)} \label{eq:p_circ_n2} \end{align} where $p_{n-k}$ is the steady state probability from \eqref{eq:steady_state_prob_chain_nr_msgs_underway}, $\mt{Q'(n-k,n)}$ denotes the $(n-k,n)$ entry (transition rate $Q'_{n-k,n}$) of the transition rate matrix $\mt{Q'}$, and $\eta$ is the normalizing constant given by the aggregate transition rate of informative message arrivals, i.e., \begin{align} \eta = \sum_{n=0}^{I_{\max}-1} p_n \sum_{j>n} \mu_{nj}' . \label{eq:normalization constant} \end{align} We can now obtain the distribution of the age at any point in time through the Palm inversion as given in \eqref{eq:f_age_fct_of_pdf_at_arrival}.
The LST of $f^\circ(x_0,t_1)$ is given in \eqref{eq:joint_LST_x_t}, where the joint stationary PDF of the age and the time until the next arrival of an informative message decomposes into the backward and forward components, which are conditionally independent given the state of the Markov chain. For given $\lambda, \mu$ we obtain the Palm joint density $f^\circ(x_0,t_1)$ by calculating the inverse Laplace transform of \eqref{eq:joint_LST_x_t} and insert it into \eqref{eq:f_age_fct_of_pdf_at_arrival} to obtain the distribution of the age at any point in time. \paragraph*{Remark} As all the Laplace-Stieltjes transforms encountered here are rational, the distributions associated with them are matrix-exponential. This permits closed-form results for the age distribution given $\lambda, \mu$. \subsection{The expected age} To obtain the expected age we use the formulation of the age density given in \eqref{eq:f_age_fct_of_pdf_at_arrival}. As illustrated above, we obtain the density $f^\circ(x_0,t_1)$ by calculating the inverse Laplace transform of \eqref{eq:joint_LST_x_t} for given $\lambda, \mu$.
Calculating the expected age from the closed form density obtained from \eqref{eq:f_age_fct_of_pdf_at_arrival} is hence straightforward. To empirically obtain the average age from the event based simulation we utilize the Palm inversion formula \eqref{eq:palm_inversion_formula} with $\varphi$ set to the identity function and $\lambda$ in \eqref{eq:palm_inversion_formula} replaced by $\lambda^\circ = \lambda (1 - p_{I_{\max}})$. Hence, we can write \begin{align} &E\left[X\right] = \lambda^\circ E^\circ\left[\int_{0}^{T_1}X_s ds \right] = \lambda^\circ E^\circ\left[\int_{0}^{T_1}A+s ~ds \right] \nonumber \\ &= \lambda^\circ E^\circ\left[AT_1 + \frac{1}{2} T_1^2 \right] \nonumber \\ &= \lambda^\circ E\left[A_n(T_{n+1}-T_n)\right] + \frac{\lambda^\circ}{2} E\left[(T_{n+1}-T_n)^2 \right]. \label{eq:plam_expectation} \end{align} Here, $A_n$ is the age at the receiver just after the arrival of the $n$th informative message at the time point $T_n$. \subsection{Age distribution at the arrival of informative messages} Given the calculations above, we can directly calculate the distribution of the age at the \emph{arrival instants of informative messages}. It is obtained by computing the LST of the time until the departure of an informative message with density function $f^\circ(v)$ as \begin{align} \int_{0}^{\infty} f^\circ(v) e^{-\theta v} dv = \frac{1}{\eta}\sum\limits_{\substack{k,n \\ k\leq n}} p_{n-k}\mt{Q'(n-k,n)} f_{n,n-k+1}(\theta) . \label{eq:LST_time_until_informative_msg} \end{align} Note that the Laplace-Stieltjes transforms $f_{n,n-k+1}(\theta)$ utilized above are rational, hence, the distributions associated with them are matrix-exponential, which renders $f^\circ(v)$ matrix-exponential. For given $\lambda$ and $\mu$ we obtain $f^\circ(v)$ by calculating the inverse Laplace transform of the formulation above.
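The empirical procedure based on \eqref{eq:plam_expectation} can be sketched with a short event-driven simulation. The following is a minimal sketch (Python), assuming the system model of Sect.~\ref{sec:system_model}: messages are generated at rate $\lambda$ and blocked while $I_{\max}$ non-obsolete messages are under way, each message in flight completes at rate $\mu$, and a message arriving at the receiver obsoletes all older messages in flight. The time average of the age and the Palm estimate are computed from the same sample path; function names are illustrative:

```python
import random

def simulate_age(lam, mu, imax, horizon, seed=1):
    """Event-driven simulation; returns the time-average age and the
    Palm estimate lambda° * E°[A_n dT + dT^2 / 2] from the same path."""
    rng = random.Random(seed)
    t, flight, last_gen = 0.0, [], 0.0   # generation times of messages in flight
    area = 0.0                           # integral of the age X_s over time
    arrivals = []                        # (A_n, T_n) at informative arrivals
    while t < horizon:
        gen_rate = lam if len(flight) < imax else 0.0
        rate = gen_rate + mu * len(flight)
        dt = rng.expovariate(rate)
        area += (t - last_gen) * dt + 0.5 * dt * dt   # age grows linearly
        t += dt
        if rng.random() < gen_rate / rate:
            flight.append(t)             # new message generated at time t
        else:
            g = flight[rng.randrange(len(flight))]    # completing message
            flight = [x for x in flight if x > g]     # older ones obsolete
            last_gen = g
            arrivals.append((t - g, t))  # age just after informative arrival
    time_avg = area / t
    dts = [arrivals[i + 1][1] - arrivals[i][1] for i in range(len(arrivals) - 1)]
    palm = sum(a * d + 0.5 * d * d for (a, _), d in zip(arrivals, dts)) / t
    return time_avg, palm
```

For long horizons the two returned estimates agree up to the boundary effects of the first and last renewal cycles.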
\begin{figure*} \centering \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{./simulations/matlab/fig/mean_age_at_anytime_mu1_lambdastart0.1_lambdaend2_nr_lambda29_max_msgs20_log_trace_len5_no_relwork.eps} \caption{} \label{fig:mean_age_anytime_I_20} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{./simulations/matlab/fig/quantile_age_at_anytime_mu1_lambdastart0.1_lambdaend2_nr_lambda20_max_msgs20_log_trace_len5.eps} \caption{} \label{fig:quantile_age_anytime_I_20} \end{subfigure} \caption{(a) Expected age $E[X]$ obtained from simulations compared to the model in \eqref{eq:expectation_test_fct_age} with $\varphi$ being the identity function for $I_{\max}=20$ and $\mu=1$. (b) Quantiles of the age $P[X>x_{\varepsilon}]=\varepsilon$ obtained from integrating the age density in \eqref{eq:f_age_fct_of_pdf_at_arrival} for $I_{\max}=20$ and $\mu=1$.} \end{figure*} \section{Computing the Palm Distributions} \label{sec:computing_palm_probabilities} \subsection{Decomposition into Forward and Backward Components} In order to compute the Palm PDF $f^\circ(x_0, t_1)$ we observe that the part on $x_0$ (the age) involves the past of $Z_t$ whereas the part on $t_1$ (time until a new arrival) involves the future of $Z_t$. This is captured by the following theorem. 
\begin{theorem} The joint PDF of both the age $x_0$ just after the informative message arrival \emph{and} the length of the cycle $t_1$ until the arrival of the next informative message is given by \begin{align} \label{eq:forward-backward2} f^\circ(x_0,t_1)= \sum_{(n',n) \mst 1\leq n+1\leq n'\leq I_{\max}} p^\circ_{n',n} g^\circ(x_0|n',n)h(t_1|n) \end{align} where \begin{itemize} \item $p^\circ_{n',n}$ is the probability that an arbitrary message arrival happens at a transition $(n'\to n)$ of the Markov chain $Z_t$ and is given by \begin{equation} p^\circ_{n',n} = \frac{p_{n'}}{\bar{N}} \; \ind{1\leq n+1\leq n'\leq I_{\max}} \label{eq-p0} \end{equation} in the above, $\bar{N}$ is the stationary expectation of $Z_t$ and $p_i$ is given in \eqref{eq:steady_state_prob_chain_nr_msgs_underway}; \item $g^\circ(x_0|n',n)$ is the PDF of the Palm distribution of the age $x_0$ just after the informative message arrival given that the state of the Markov chain is $n'$ just before the arrival of the informative message and $n$ just after the arrival, where $n'\geq n+ 1$; \item $h(t_1|n)$ is the stationary PDF of the time that will elapse from time $t$ until the next informative message arrives, given that $Z_t=n$. \end{itemize} \end{theorem} The proof exploits the Markov property of $Z_t$, which expresses that the future depends on the past only through the present state. \begin{proof} Define $f^\circ(x_0, t_1|n', n)$ as the joint PDF of the Palm distribution of both the age $x_0$ just after the informative message arrival \emph{and} the length of the cycle $t_1$ until the arrival of the next informative message, given that the state of the Markov chain is $n'$ just before the arrival of the informative message and $n$ just after the arrival, where $n'\geq n+ 1$. 
It follows that the required PDF $f^\circ(x_0, t_1)$ is given by \begin{align} \label{eq:forward-backward} f^\circ(x_0,t_1)= \sum_{(n',n) \mst 1\leq n+1\leq n'\leq I_{\max}} p^\circ_{n',n} f^\circ(x_0, t_1|n',n) \end{align} where $p^\circ_{n',n}$ is the probability that an arbitrary message arrival happens at a transition $(n'\to n)$ of the Markov chain $Z_t$. By \cite[Thm 7.1.2]{boudec2011performance}, such a probability is given by \begin{equation} p^\circ_{n',n} = \eta p_{n'} Q_{n',n} \; \ind{1\leq n+1\leq n'\leq I_{\max}} \end{equation} where $\mathbf{1}_{\{\cdot\}}$ is the indicator function, equal to $1$ when the condition is true and $0$ otherwise, $p_{n'}$ is the stationary probability given in \eqref{eq:steady_state_prob_chain_nr_msgs_underway}, $Q_{n',n}$ is the transition rate in \eqref{eq:transition_rates_fwd_process} and $\eta$ is a normalizing constant. Observe that $Q_{n',n}=\mu$, which gives $\eta^{-1}=\mu\sum_{i=1}^{I_{\max}}i p_{i}=\mu\bar{N}$ where $\bar{N}$ is the stationary expectation of $Z_t$. We finally obtain \begin{equation} p^\circ_{n',n} = \frac{p_{n'}}{\bar{N}} \; \ind{1\leq n+1\leq n'\leq I_{\max}} \end{equation} Let $g^\circ(x_0|n',n)$ denote the PDF of the Palm distribution of the age $x_0$ just after the informative message arrival given that the state of the Markov chain is $n'$ just before the arrival of the informative message and $n$ just after the arrival, where $n'\geq n+ 1$. Recall that $h(t_1|n)$ denotes the stationary PDF of the time that will elapse from time $t$ until the next informative message arrives, given that $Z_t=n$. By the Markov property, this is also the PDF of the Palm distribution of the time until the next informative message arrives given that the state of the Markov chain is $n'$ just before the arrival of the informative message and $n$ just after the arrival. Again by the Markov property, $f^\circ(x_0, t_1|n',n)= g^\circ(x_0|n',n)h(t_1|n)$, which proves \eqref{eq:forward-backward2}.
\end{proof} We next compute $h(t_1|n)$, which we call the forward component of \eqref{eq:forward-backward2}. The computation of the backward component $g^\circ(x_0|n',n)$ will involve a similar method plus a time-reversal argument. \subsection{Computation of the Forward Component} First, we introduce the following lemma to calculate the Laplace-Stieltjes Transform (LST) of the time until the occurrence of the \emph{next transition of interest} in a continuous-time Markov chain $(\tilde{Z}_t)_{t\in\mathbb{R_+}}$ conditioned on $\tilde{Z}_t=n$. The transitions of interest are defined by some subset $\tilde{\mathcal{F}}$ of $E\times E$, where $E\subseteq \mathbb{N}$ is the state-space of the Markov chain. \begin{lemma}\label{lemma:LST_of_1st_event_forward_component_given_state_n} Consider a time-homogeneous, continuous-time Markov chain $(\tilde{Z}_t)_{t\in\mathbb{R_+}}$ with state space $E\subseteq \Nats$ and with transition rates $\tilde{Q}_{n,n'}$; let $\tilde{d}_{n}=\sum_{n'\in E}\tilde{Q}_{n,n'}$ denote the sum of all outgoing rates from state $n$ and assume that $\tilde{d}_{n}>0$ for all $n\in E$. Let $\tilde{\mathcal{F}}\subseteq E\times E$ be such that $\tilde{Q}_{n,n'}> 0$ for all $(n,n')\in \tilde{\mathcal{F}}$. Call $\tilde{Y}_t$ the time that will elapse from $t$ until the next jump in $\tilde{\mathcal{F}}$ of the Markov chain, i.e. $\tilde{Y}_t =\inf\{s> 0, (\tilde{Z}_{(t+s)^-},\tilde{Z}_{(t+s)^+})\in \tilde{\mathcal{F}}\}$. The conditional LST of $\tilde{Y}_t$ given that $\tilde{Z}_t=n$, denoted as $\tilde{f}_{n}(\nu)$, satisfies \begin{align} &\tilde{f}_{n}(\nu) \coloneqq \E\left[e^{-\nu \tilde{Y}_t} | \tilde{Z}_t=n\right] \nonumber\\ &= \frac{1}{\tilde{d}_{n}+ \nu}\left(\sum_{\substack{n',\\(n,n')\notin\tilde{\mathcal{F}}}} \tilde{f}_{n'}(\nu) \tilde{Q}_{n,n'}+ \sum_{\substack{n',\\(n,n')\in\tilde{\mathcal{F}}}} \tilde{Q}_{n,n'}\right).
\label{eq:LST_of_next_transition_of_interest} \end{align} \end{lemma} \begin{proof} Fix some arbitrary time $t$ and define $\tilde{S}_t$ as the time until the next transition (of interest or not) out of state $\tilde{Z}_t$ and let $ N'_t\coloneqq Z_{t+\tilde{S}_t}$ denote the next state. It is known \cite{gillespie1976general} that, conditional to $\tilde{Z}_t=n$, $N'_t$ and $\tilde{S}_t$ are independent, the distribution of $\tilde{S}_t$ is exponential with rate $\tilde{d}_{n}$ and the distribution of $N'_t$ is given by $\P(N'_t=n'| \tilde{Z}_t=n)=\frac{\tilde{Q}_{n,n'}}{\tilde{d}_n}$. It follows that \begin{equation} \P\left[N'_t=n'| \tilde{Z}_t=n,\tilde{S}_t=s\right]=\frac{\tilde{Q}_{n,n'}}{\tilde{d}_n} \label{eq:gillespie} \end{equation} and \begin{equation} \E\left[e^{-\nu \tilde{S}_t}\right] = \frac{\tilde{d}_{n}}{\tilde{d}_{n}+ \nu} \label{eq:gillespie2} \end{equation} Also let $\tilde{R}_t$ denote the residual time from the next transition until the next transition of interest, i.e. $\tilde{R}_t=0$ whenever $(\tilde{Z}_t,N'_t)\in \tilde{\mathcal{F}}$ and otherwise $\tilde{R}_t=\tilde{Y}_{t+\tilde{S}_t}$. 
Hence \begin{equation} \tilde{Y}_t = \tilde{S}_t + \tilde{R}_t \label{eq:Y_first_departure} \end{equation} By conditioning on $\tilde{S}_t=s$ we can write \begin{align} &\E\left[e^{-\nu \tilde{Y}_t} | \tilde{Z}_t=n,\tilde{S}_t=s\right] \nonumber\\ &= e^{-\nu s}\E\left[e^{-\nu \tilde{R}_t} | \tilde{Z}_t=n,\tilde{S}_t=s\right] \label{eq:Y_lst_1} \end{align} By conditioning with respect to $N'_t$ in the latter term and applying \eqref{eq:gillespie} we obtain \begin{align} &\E\left[e^{-\nu \tilde{R}_t} | \tilde{Z}_t=n,\tilde{S}_t=s\right] \nonumber\\ &= \sum_{n'\in E}\left(\E\left[e^{-\nu \tilde{R}_t} | \tilde{Z}_t=n,\tilde{S}_t=s, N'_t=n'\right]\times \right.\nonumber\\ &\left.\P\left[N'_t=n'| \tilde{Z}_t=n,\tilde{S}_t=s\right]\right)\nonumber\\ &=\sum_{n'\in E}\left(\E\left[e^{-\nu \tilde{R}_t} | \tilde{Z}_t=n,\tilde{S}_t=s, N'_t=n'\right]\frac{\tilde{Q}_{n,n'}}{\tilde{d}_n}\right) \label{eq:Y_lst_1b} \end{align} Now if $(n,n')\in \tilde{\mathcal{F}}$ then $\tilde{R}_t=0$ hence \begin{equation} \E\left[e^{-\nu \tilde{R}_t} |\tilde{Z}_t=n,\tilde{S}_t=s, N'_t=n'\right]=1 \mbox{ if } (n,n')\in \tilde{\mathcal{F}} \label{eq:Y_lst_1c} \end{equation} Else, i.e. if $(n,n')$ is not in $\tilde{\mathcal{F}}$, $\tilde{R}_t=\tilde{Y}_{t+\tilde{S}_t}$ is the time that remains to elapse until the next transition of interest; by the Markov property, the future of the Markov chain depends on the history only via the current state, i.e. \begin{align} &\E\left[e^{-\nu \tilde{Y}_{t+\tilde{S}_t}} | \tilde{Z}_t=n,\tilde{S}_t=s, N'_t=n'\right] \nonumber\\ & =\E\left[e^{-\nu \tilde{Y}_{t+s}} | N'_t=n'\right]= \E\left[e^{-\nu \tilde{Y}_{t+s}} | Z_{t+s}=n'\right] \nonumber\\ &=\tilde{f}_{n'}(\nu) \label{eq:Y_lst_1d} \end{align} where the last equality is because the Markov chain is time-homogeneous. 
Combining \eqref{eq:Y_lst_1} with \eqref{eq:Y_lst_1b}-\eqref{eq:Y_lst_1d} gives \begin{align} &\E\left[e^{-\nu \tilde{Y}_t} | \tilde{Z}_t=n,\tilde{S}_t=s\right]\nonumber\\ &= e^{-\nu s}\left(\sum_{\substack{n',\\(n,n')\notin\tilde{\mathcal{F}}}} \tilde{f}_{n'}(\nu) \frac{\tilde{Q}_{n,n'}}{\tilde{d}_{n}}+ \sum_{\substack{n',\\(n,n')\in\tilde{\mathcal{F}}}} \frac{\tilde{Q}_{n,n'}}{\tilde{d}_{n}}\right) \label{eq:Y_lst_1f} \end{align} By the law of total expectation we can now write \begin{align} &\tilde{f}_{n}(\nu) = \E\left[e^{-\nu \tilde{Y}_t} | \tilde{Z}_t=n\right] \nonumber\\ &=\E\left[\E\left[e^{-\nu \tilde{Y}_t} | \tilde{Z}_t=n,\tilde{S}_t\right]\right] \nonumber\\ &= \E\left[e^{-\nu \tilde{S}_t}\right]\left(\sum_{\substack{n',\\(n,n')\notin\tilde{\mathcal{F}}}} \tilde{f}_{n'}(\nu) \frac{\tilde{Q}_{n,n'}}{\tilde{d}_{n}}+ \sum_{\substack{n',\\(n,n')\in\tilde{\mathcal{F}}}} \frac{\tilde{Q}_{n,n'}}{\tilde{d}_{n}}\right) \label{eq:LST_of_next_transition_of_interest2} \end{align} Using \eqref{eq:gillespie2} completes the proof. \end{proof} \vspace{5pt} Now we can use Lem.~\ref{lemma:LST_of_1st_event_forward_component_given_state_n} to calculate the stationary PDF $h(t_1|n)$ of the time that will elapse from a fixed time $t$ until the next informative message arrives conditioned on $Z_t=n$. The set of transitions of interest is $\tilde{\mathcal{F}}\coloneqq \{(i,j)\}_{i>j}$, i.e. the transitions associated with the arrival of informative messages. The transition rates $Q_{i,j}$ are given in \eqref{eq:transition_rates_fwd_process} and \begin{equation} \label{eq-dtilde}\tilde{d}_{n}= \lambda \mathbf{1}_{\{n<I_{\max}\}} + \mu n\end{equation} where $\mathbf{1}_{\{\cdot\}}$ is the indicator function, equal to $1$ when the condition is true and $0$ otherwise. 
The Laplace-Stieltjes Transform of $h(t_1|n)$ continues to be denoted by $\tilde{f}_{n}(\nu)$; the application of Lem.~\ref{lemma:LST_of_1st_event_forward_component_given_state_n} gives: \begin{align} \tilde{f}_{n}(\nu) = \frac{1}{\tilde{d}_{n}+ \nu} \left[ n \mu + \lambda\tilde{f}_{n+1}(\nu)\right]. \label{eq:LST_of_next_arrival_given_state_nprime} \end{align} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{./gfx/sample_path_jumps3} \caption{A sample path of the number of non-obsolete messages under way given the system model from Sect.~\ref{sec:system_model}. Looking at the forward process: The downward jumps mark the arrival of informative messages at the receiver which make previous messages obsolete. As messages depart in batches of random sizes in this model the waiting time of the freshest message of the batch corresponds to the age value set upon the arrival of that message at the receiver.} \label{fig:sample_path_jumps} \end{figure} This recursive relation can be rewritten using matrix notation as \begin{equation} \mt{(\nu I + \tilde{D}) \tilde{f} = \pmb{\bar{\mu}} + \bar{\Lambda} \tilde{f}} \label{eq:LST_of_next_arrival_given_state_nprime_matrix_recursive} \end{equation} with the identity matrix $\mt{I}$, the vectors $\mt{\tilde{f}}= [\tilde{f}_{0}(\nu),\dots,\tilde{f}_{I_{\max}}(\nu)]\T$, and $\pmb{\bar{\mu}} = [0,\mu,2\mu\dots,I_{\max}\mu]\T$, and the matrices $\mt{\tilde{D}} \coloneqq \diag (\tilde{d}_0,\tilde{d}_1,\dots,\tilde{d}_{I_{\max}})$, and \begin{align} \pmb{\bar{\Lambda}} \coloneqq \begin{bmatrix} \mathbf{0} & \lambda& \hdots& \mathbf{0} \\ \vdots & \ddots& \lambda& \vdots \\ \vdots& \ddots& \ddots &\lambda\\ \mathbf{0} & \hdots & \mathbf{0}&\mathbf{0} \end{bmatrix}. 
\label{eq:definition_of_Lambda_bar} \vspace{-20pt} \end{align} Now we can directly solve for the conditional LSTs as \begin{equation} \mt{\tilde{f} = (\nu I + \tilde{D} - \bar{\Lambda})^{-1} \pmb{\bar{\mu}} } \label{eq:LST_of_next_arrival_given_state_nprime_matrix} \end{equation} \subsection{Computation of the Backward Component} Recall that $g^\circ(x_0|n',n)$ denotes the PDF of the Palm distribution of the age $x_0$ just after the informative message arrival given that the state of the Markov chain is $n'$ just before the arrival of the informative message and $n$ just after the arrival, where $n'\geq n+ 1$; this age equals the sojourn time of the freshest message in the served batch. For the computation of $g^\circ(x_0|n',n)$ we resort to time reversal as this allows us to use a similar method as for the forward component. The time-reversed process $Z_t^r$ is defined by $Z_t^r=Z_{-t}$. In a nutshell, time reversal allows us to change the underlying queueing model from Sect.~\ref{sec:underlying_model} into a FIFO queue where arrivals occur in message batches of random size while the server removes exactly \emph{one} message on each visit. To illustrate this, consider the sample path shown in Fig.~\ref{fig:sample_path_jumps} in the reverse time direction. It is shown in \cite[Section~1.7]{Kelly:Reversibility-2011} that if $Z_t$ is endowed with its stationary probability, then the time-reversed process is also a time-homogeneous continuous-time Markov chain with the same state space and the same stationary probability, but with different transition rates. Specifically, by \cite[Theorem~1.12]{Kelly:Reversibility-2011} the transition rates $Q'_{i,j}$ for $Z_t^r$ depend on the transition rates of the original Markov process \eqref{eq:transition_rates_fwd_process} and its stationary distribution \eqref{eq:steady_state_prob_chain_nr_msgs_underway}.
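Numerically, the reversed rates follow directly from this relation, $Q'_{i,j} = p_j Q_{j,i}/p_i$. The following is a minimal sketch (Python/NumPy), assuming the forward rates described in Sect.~\ref{sec:system_model} (arrivals $n\to n+1$ at rate $\lambda$ for $n<I_{\max}$ and departures $n\to j$ at rate $\mu$ for each $j<n$); function names are illustrative:

```python
import numpy as np

def forward_generator(lam, mu, imax):
    # Rates of Z_t: n -> n+1 at lam (n < imax); n -> j at mu for each j < n,
    # i.e., an arriving informative message obsoletes all older messages.
    q = np.zeros((imax + 1, imax + 1))
    for n in range(imax + 1):
        if n < imax:
            q[n, n + 1] = lam
        q[n, :n] = mu
        q[n, n] = -q[n].sum()
    return q

def stationary(q):
    # Solve p Q = 0 with p summing to one.
    a = np.vstack([q.T, np.ones(q.shape[0])])
    b = np.zeros(q.shape[0] + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(a, b, rcond=None)[0]

def reversed_generator(q, p):
    # Time reversal: Q'_{i,j} = p_j Q_{j,i} / p_i.
    return q.T * p[None, :] / p[:, None]
```

The reversed chain keeps the stationary distribution of $Z_t$; its subdiagonal and strictly upper-triangular entries correspond to the rates $\lambda_i'$ and $\mu_{ij}'$ given in the following.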
We obtain $Q'_{i,i-1} =\lambda_{i}'$ for $i=1...I_{\max}$, $Q'_{i,j} =\mu_{ij}'$ for $i=0...I_{\max}-1,i<j\leq I_{\max}$ and $Q'_{i,j} =0$ otherwise, with \begin{equation} \lambda_i' = \begin{cases} \frac{i}{i+1}\left(\lambda + (i+1) \mu\right) &\text{for $1\leq i < I_{\max}$}\\ \\ I_{\max} \mu &\text{for $i = I_{\max}$} \end{cases} \label{eq:lambda_prime} \end{equation} and \begin{equation} \mu_{ij}' = \begin{cases} \frac{(j+1) \mu \lambda^{j-i}}{(i+1)\prod\limits_{k=i+2}^{j+1}\left(\lambda + k\mu\right)}&\text{for $j \neq I_{\max}$}\\ \\ \frac{\lambda^{I_{\max}-i}}{(i+1)\prod\limits_{k=i+2}^{I_{\max}}\left(\lambda + k\mu\right)}&\text{for $j = I_{\max}$} \end{cases} \label{eq:mu_prime} \end{equation} for $0\leq i< I_{\max}$. The derivation of \eqref{eq:lambda_prime}, \eqref{eq:mu_prime} is given in the appendix. In $Z_t$, upon serving a batch of messages, the sojourn time of the freshest message of that batch is the age of that particular informative message. In $Z^r_t$, this is given by the sojourn time of the $(n+1)$st message of an arriving batch of size $n'-n$. In $Z^r_t$, the arrival of a batch corresponds to a transition $n\to n'$ with $n'\geq n+1$ and the size of the arriving batch is $n'-n$. It follows that, for $n'\geq n+ 1$, $g^\circ(x_0|n',n)$ can be re-interpreted as the PDF of the time from now until the $(n+1)$st departure of $Z^r_t$, given that $Z^r_t$ is doing a transition $n\to n'$ now. Since $Z^r_t$ is also Markov, we can apply the Markov property and obtain that this is simply the PDF of the time from now until the $(n+1)$st departure of $Z^r_t$ given that $Z^r_t=n'$. For $n+1=1$ this is the conditional PDF of the time until a next departure, which is exactly the problem that is solved in Lemma~\ref{lemma:LST_of_1st_event_forward_component_given_state_n}, and which we now extend as follows (the proof is similar and is not given). 
\begin{lemma}\label{lemma:LST_of_all_events_forward_component_given_state_n} Consider a time-homogeneous, continuous-time Markov chain $(\tilde{Z}_t)_{t\in\mathbb{R_+}}$ with state space $E\subseteq \Nats$ and with transition rates $\tilde{Q}_{n,n'}$; let $\tilde{d}_{n}=\sum_{n'\in E}\tilde{Q}_{n,n'}$ denote the sum of all outgoing rates from state $n$ and assume that $\tilde{d}_{n}>0$ for all $n\in E$. Let $\tilde{\mathcal{F}}\subseteq E\times E$ be such that $\tilde{Q}_{n,n'}> 0$ for all $(n,n')\in \tilde{\mathcal{F}}$. For $k\geq 1$, call $\tilde{Y}^k_t$ the time that will elapse from $t$ until the $k$th jump in $\tilde{\mathcal{F}}$ of the Markov chain, i.e. $\tilde{Y}^1_t =\inf\{s\geq 0, (\tilde{Z}_{(t+s)^-},\tilde{Z}_{(t+s)^+})\in \tilde{\mathcal{F}}\}$ and for $k\geq 2$, $\tilde{Y}^k_t =\inf\{s> \tilde{Y}^{k-1}_t, (\tilde{Z}_{(t+s)^-},\tilde{Z}_{(t+s)^+})\in \tilde{\mathcal{F}}\}$. The conditional LST of $\tilde{Y}^k_t$ given that $\tilde{Z}_t=n$, denoted as $\tilde{f}_{n,k}(\theta)$, satisfies, for $k\geq 1$: \begin{align} &\tilde{f}_{n,k}(\theta) = \frac{1}{\tilde{d}_{n}+ \theta} \times \nonumber\\ &\left(\sum_{\substack{n',\\(n,n')\notin\tilde{\mathcal{F}}}} \tilde{f}_{n',k}(\theta) \tilde{Q}_{n,n'}+ \sum_{\substack{n',\\(n,n')\in\tilde{\mathcal{F}}}} \tilde{f}_{n',k-1}(\theta)\tilde{Q}_{n,n'}\right). \label{eq:LST_of_kth_transition_of_interest} \end{align} where $\tilde{f}_{n,0}(\theta)=1$ by convention. \end{lemma} Let $f_{n',k}(\theta)$ be the LST of the time from now until the $k$th departure of $Z^r_t$ given that $Z^r_t=n'$, so that the LST of $g^\circ(x_0|n',n)$ is $f_{n',n+1}(\theta)$.
To compute $f_{n',k}(\theta)$, we now apply Lemma~\ref{lemma:LST_of_all_events_forward_component_given_state_n} to the Markov chain $Z^r_t$, with transition rate matrix $Q'$, and obtain: \begin{eqnarray} \hspace{-20pt}f_{n',1}(\theta) \left[d'_{n'} +\theta\right]\mkern-18mu&=&\mkern-18mu \lambda'_{n'}\ind{n'>0} + \mkern-12mu\sum\limits_{j>n'} \mu_{n',j}' f_{j,1}(\theta)\label{eq:LST_of_next_departure_given_state_n_v2a}\\ \hspace{-20pt} f_{n',k}(\theta) \left[d'_{n'} +\theta\right]\mkern-18mu&=&\mkern-18mu \lambda'_{n'}\ind{n'>0} f_{n'-1,k-1}(\theta)+ \mkern-11mu\sum\limits_{j>n'} \mu_{n',j}' f_{j,k}(\theta) \label{eq:LST_of_next_departure_given_state_n_v2b} \end{eqnarray} for $0\leq n'\leq I_{\max}$ and $k\geq 2$. In the above, $\lambda'$ and $\mu'$ are given in \eqref{eq:lambda_prime} and \eqref{eq:mu_prime} and \begin{equation} d'_{n'} = \sum_{n=0}^{I_{\max}}Q'_{n',n} \label{eq-dprime} \end{equation} We use the following matrix notation: $\mt{D} \coloneqq \diag (d'_0,d'_1,\dots,d'_{I_{\max}})$, $\mt{f}_{\cdot,k} = [f_{0,k}(\theta),\dots,f_{I_{\max},k}(\theta)]\T$, $\pmb{\lbar'} = [0,\lambda_1',\dots,\lambda_{I_{\max}}']\T$. $\mt{M}$ is the upper triangular matrix with entries \begin{equation*} \mt{M}_{ij} = \begin{cases*} \mu_{ij}'&\text{for $i < j$}\\ 0&\text{for $i \geq j$} \end{cases*} \end{equation*} and $\pmb{\Lambda}$ is the matrix with $\lambda_n'$ on the subdiagonal defined by \begin{align} \pmb{\Lambda} \coloneqq \begin{bmatrix} \mathbf{0} & \hdots& \hdots& \mathbf{0} \\ \lambda_1' & \ddots & & \vdots \\ \vdots& \ddots& \ddots &\vdots\\ \mathbf{0} & \hdots & \lambda_{I_{\max}}'&\mathbf{0} \end{bmatrix}.
\label{eq:definition_of_Lambda} \end{align} where $i,j \in \{0,1,\dots,I_{\max}\}$. We can rewrite the recursive relation of the conditional LSTs in \eqref{eq:LST_of_next_departure_given_state_n_v2a} as \begin{equation} \mt{(\theta I + D) f_{\cdot,1} = \pmb{\lbar'} + M f_{\cdot,1}} \label{eq:LST_of_next_departure_given_state_n_matrix_recursive} \end{equation} The previous equation can be solved and we obtain: \begin{equation} \mt{f_{\cdot,1} = (\theta I + D - M)^{-1} \pmb{\lbar'} } \label{eq:LST_of_next_departure_given_state_n_matrix} \end{equation} Similarly, we can re-write \eqref{eq:LST_of_next_departure_given_state_n_v2b} as \begin{equation} \mt{(\theta I + D) f_{\cdot,k} = \pmb{\Lambda}f_{\cdot,k-1} + M f_{\cdot,k}} \label{eq:LST_of_only_kth_departure_given_state_n_matrix_recursive} \end{equation} for $k\geq 2$. Now we can combine \eqref{eq:LST_of_next_departure_given_state_n_matrix_recursive} and \eqref{eq:LST_of_only_kth_departure_given_state_n_matrix_recursive} into the block matrix form \begin{align} \scalemath{0.83}{ \begin{bmatrix} \mt{\theta I + D} & & \mathbf{0} \\ & \ddots & \\ \mathbf{0} & & \mt{\theta I + D} \end{bmatrix} \begin{bmatrix} \mt{f_{\cdot,1}} \\ \vdots \\ \mt{f_{\cdot,I_{\max}}} \end{bmatrix} = \begin{bmatrix} \pmb{\lbar'} \\ \vdots \\ \mathbf{0} \end{bmatrix} + \begin{bmatrix} \mt{M} & & \mathbf{0} \\ \pmb{\Lambda} & \ddots & \\ \mathbf{0} &\pmb{\Lambda} & \mt{M} \end{bmatrix} \begin{bmatrix} \mt{f_{\cdot,1}} \\ \vdots \\ \mt{f_{\cdot,I_{\max}}} \end{bmatrix} } \label{eq:LST_of_kth_departure_given_state_n_matrix_recursive} \end{align} We can directly find the vector of conditional LSTs as \begin{align} \begin{bmatrix} \mt{f_{\cdot,1}} \\ \vdots \\ \mt{f_{\cdot,I_{\max}}} \end{bmatrix} = \begin{bmatrix} \mt{\theta I + D-M} & & \mathbf{0} \\ \pmb{-\Lambda} & \ddots & \\ \mathbf{0} &\pmb{-\Lambda} & \mt{\theta I + D-M} \end{bmatrix} ^{-1} \begin{bmatrix} \pmb{\lbar'} \\ \vdots \\ \mathbf{0} \end{bmatrix}
\label{eq:LST_of_kth_departure_given_state_n_matrix} \end{align} Since the computation of \eqref{eq:LST_of_kth_departure_given_state_n_matrix} requires the inversion of a matrix of the order of $I_{\max}^2\times I_{\max}^2$ we show in the following how to calculate the conditional LST recursively from \eqref{eq:LST_of_kth_departure_given_state_n_matrix_recursive}. We observe that $\mt{M- D = Q' - \Lambda}$ where $\mt{Q'}$ denotes the transition rate matrix of the continuous Markov chain associated with the reversed process $Z_t^r$. We define $\mt{\Phi \coloneqq \theta I + D - M}$ and obtain the following recursion in block matrix form \begin{align*} \begin{bmatrix} \mt{\Phi} & & & \\ \pmb{-\Lambda} & \ddots & \mbox{\Large $\mathbf{0}$}&\\ \mathbf{0} & \ddots& \ddots & \\ \mathbf{0} & \mathbf{0} & \pmb{-\Lambda} & \mt{\Phi} \end{bmatrix} \begin{bmatrix} \mt{f_{\cdot,1}} \\ \vdots \\ \vdots \\ \mt{f_{\cdot,I_{\max}}} \end{bmatrix} = \begin{bmatrix} \pmb{\lbar'} \\ \vdots \\ \vdots \\ \mathbf{0} \end{bmatrix} \end{align*} Now we obtain the conditional LST $\mt{f_{\cdot,n}}$ recursively with the initial condition \begin{align} \mt{f_{\cdot,1} = \Phi^{-1} \pmb{\lbar'}} \label{eq:LST_of_1st_departure_given_state_n_initial_condition} \end{align} and for $k\geq 2$ \begin{align} \mt{f_{\cdot,k} = \Psi^{k-1}\Phi^{-1} \pmb{\lbar'}} \label{eq:LST_of_kth_departure_given_state_n_rest_of_recursion} \end{align} where we used the shorthand notation $\mt{\Psi \coloneqq \Phi^{-1}\Lambda}$. 
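The recursion \eqref{eq:LST_of_1st_departure_given_state_n_initial_condition}--\eqref{eq:LST_of_kth_departure_given_state_n_rest_of_recursion} is cheap to evaluate numerically. The following is a minimal sketch (Python/NumPy), where $\mt{Q'}$ is rebuilt by time reversal of the forward rates described in Sect.~\ref{sec:system_model}; at $\theta=0$ every LST must equal one, which serves as a sanity check. Function names are illustrative:

```python
import numpy as np

def reversed_generator(lam, mu, imax):
    # Forward rates (n -> n+1 at lam for n < imax; n -> j at mu for j < n),
    # stationary p, then time reversal Q'_{i,j} = p_j Q_{j,i} / p_i.
    q = np.zeros((imax + 1, imax + 1))
    for n in range(imax + 1):
        if n < imax:
            q[n, n + 1] = lam
        q[n, :n] = mu
        q[n, n] = -q[n].sum()
    a = np.vstack([q.T, np.ones(imax + 1)])
    b = np.zeros(imax + 2)
    b[-1] = 1.0
    p = np.linalg.lstsq(a, b, rcond=None)[0]
    return q.T * p[None, :] / p[:, None]

def backward_lsts(qrev, theta, kmax):
    # f_{.,1} = Phi^{-1} lambda'; f_{.,k} = Psi f_{.,k-1} with Psi = Phi^{-1} Lambda,
    # where Phi = theta*I + D - M (D: outgoing rates, M: strict upper part of Q').
    n = qrev.shape[0]
    dprime = -np.diag(qrev)              # d'_{n'}: total outgoing rate
    m = np.triu(qrev, k=1)               # mu'_{ij}, i < j (batch arrivals)
    lamp = np.zeros(n)
    lamp[1:] = np.diag(qrev, k=-1)       # lambda'_i = Q'_{i,i-1} (departures)
    phi = theta * np.eye(n) + np.diag(dprime) - m
    psi = np.linalg.solve(phi, np.diag(lamp[1:], k=-1))
    f = [np.linalg.solve(phi, lamp)]     # f_{.,1}
    for _ in range(kmax - 1):
        f.append(psi @ f[-1])            # f_{.,k}
    return np.array(f)                   # shape (kmax, imax + 1)
```

Each solve involves only the $(I_{\max}+1)\times(I_{\max}+1)$ matrix $\mt{\Phi}$, avoiding the inversion of the large block matrix in \eqref{eq:LST_of_kth_departure_given_state_n_matrix}.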
\begin{figure*} \centering \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{./simulations/matlab/fig/ccdf_age_at_inform_arrival_lambda1_comparison_mu_05_1_2_max_msgs20_log_trace_len5.eps} \caption{} \label{fig:ccdf_age_at_inform_2} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{./simulations/matlab/fig/pdf_age_at_any_point_lambda1_mu_comparison_0.5_1_2_max_msgs20_log_trace_len5_new.eps} \caption{} \label{fig:pdf_age_at_anytime} \end{subfigure} \caption{(a) CCDF of the age at the arrival times of informative messages for arrival rate $\lambda=1$ and varying OWD parameter $\mu$. $I_{\max} = 20$. (b) Probability density of the age $f(x)$ at any point in time obtained from \eqref{eq:f_age_fct_of_pdf_at_arrival} for $\lambda=1$ and varying OWD parameter $\mu$. $I_{\max} = 20$.} \end{figure*} \subsection{Computing $f^\circ(x_0, t_1)$} We can now put together the forward and backward elements. Let $\hat{f}(\nu,\theta)$ denote the LST of $f^\circ(x_0,t_1)$, specifically \begin{equation*} \hat{f}(\nu,\theta):=\int_{0}^{\infty}\int_{0}^{\infty} f^\circ(x_0,t_1) e^{-\nu t_1} e^{-\theta x_0} dt_1 dx_0 \end{equation*} From \eqref{eq:forward-backward2} this becomes \begin{align} \label{eq:joint_LST_x_t} \hat{f}(\nu,\theta)=\sum_{(n',n) \mst 1\leq n+1\leq n'\leq I_{\max}} p^\circ_{n',n} f_{n',n+1}(\theta)\tilde{f}_n(\nu) \end{align} where $p^\circ_{n',n}$ is in \eqref{eq-p0}, $\tilde{f}_n(\nu)$ is the $n$th component of \eqref{eq:LST_of_next_arrival_given_state_nprime_matrix} and $f_{n',n+1}(\theta)$ is obtained by setting $k=n+1$ in \eqref{eq:LST_of_kth_departure_given_state_n_rest_of_recursion}. As all the Laplace-Stieltjes transforms encountered here are rational fractions in $\theta$ [resp. $\nu$], the distributions associated with them are matrix-exponential and can be computed in closed form given $\lambda, \mu$. 
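Putting the pieces together numerically, the following is a minimal sketch (Python/NumPy) that assembles \eqref{eq:joint_LST_x_t} from $p^\circ_{n',n}$ in \eqref{eq-p0}, the forward solution \eqref{eq:LST_of_next_arrival_given_state_nprime_matrix} and the backward recursion \eqref{eq:LST_of_kth_departure_given_state_n_rest_of_recursion}; the forward rates are those of Sect.~\ref{sec:system_model} and all names are illustrative. At $\nu=\theta=0$ the joint LST must equal one:

```python
import numpy as np

def joint_lst(lam, mu, imax, nu, theta):
    # Forward generator and stationary distribution of Z_t.
    q = np.zeros((imax + 1, imax + 1))
    for n in range(imax + 1):
        if n < imax:
            q[n, n + 1] = lam
        q[n, :n] = mu
        q[n, n] = -q[n].sum()
    a = np.vstack([q.T, np.ones(imax + 1)])
    b = np.zeros(imax + 2)
    b[-1] = 1.0
    p = np.linalg.lstsq(a, b, rcond=None)[0]
    # Forward component: (nu*I + D~ - Lambda~) f~ = mu~.
    dtil = np.where(np.arange(imax + 1) < imax, lam, 0.0) + mu * np.arange(imax + 1)
    lam_sup = np.diag(np.full(imax, lam), k=1)
    ftil = np.linalg.solve(nu * np.eye(imax + 1) + np.diag(dtil) - lam_sup,
                           mu * np.arange(imax + 1.0))
    # Backward component on the reversed chain: f_{.,k}(theta), k = 1..imax.
    qrev = q.T * p[None, :] / p[:, None]
    m = np.triu(qrev, k=1)
    lamp = np.zeros(imax + 1)
    lamp[1:] = np.diag(qrev, k=-1)
    phi = theta * np.eye(imax + 1) + np.diag(-np.diag(qrev)) - m
    psi = np.linalg.solve(phi, np.diag(lamp[1:], k=-1))
    f = [np.linalg.solve(phi, lamp)]
    for _ in range(imax - 1):
        f.append(psi @ f[-1])
    # Assemble h^(nu, theta) = sum p0_{n',n} f_{n',n+1}(theta) f~_n(nu).
    nbar = p @ np.arange(imax + 1)
    total = 0.0
    for nprime in range(1, imax + 1):
        for n in range(nprime):              # 1 <= n+1 <= n'
            total += (p[nprime] / nbar) * f[n][nprime] * ftil[n]
    return total
```

The double loop mirrors the summation range $1\leq n+1\leq n'\leq I_{\max}$ of \eqref{eq:joint_LST_x_t}.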
Specifically, we obtain \begin{eqnarray} g^\circ(x_0|n',n)= \sum_{i=0}^{I_{\max}}\pi^{i,n',n}(x_0)e^{-x_0d'_i} \label{eq-g0-fin} \end{eqnarray} where $d'_i$ is given in \eqref{eq-dprime} and $\pi^{i,n',n}$ is a polynomial in $x_0$, the coefficients of which are computed numerically for every $(\lambda, \mu)$. Similarly, we obtain \begin{eqnarray} h(t_1|n)= \sum_{j=0}^{I_{\max}}\tilde{\pi}^{j,n}(t_1)e^{-t_1\tilde{d}_j} \label{eq:ht1_given_n} \end{eqnarray} where $\tilde{d}_j$ is given in \eqref{eq-dtilde} and $\tilde{\pi}^{j,n}$ is a polynomial in $t_1$, the coefficients of which are computed numerically for every $(\lambda, \mu)$. Putting things together we obtain \begin{align} f^\circ(x_0, t_1)&=\sum_{i=0}^{I_{\max}}\sum_{j=0}^{I_{\max}}e^{-x_0d'_i-t_1\tilde{d}_j}\label{eq-f0-fin}\\ &\sum_{(n',n) \mst 1\leq n+1\leq n'\leq I_{\max}} p^\circ_{n',n} \pi^{i,n',n}(x_0) \tilde{\pi}^{j,n}(t_1) \nonumber \end{align} \section{Computing Age Performance Metrics} \subsection{Age Distribution at Arbitrary Points in Time} To obtain the PDF of the age at any point in time we insert the formulation of the PDF \eqref{eq-f0-fin} into \eqref{eq:f_age_fct_of_pdf_at_arrival}. To calculate this expression, we first show the calculation of a generic term that represents the core of this expression. We compute \begin{align} & \int_0^x\int_{x-x_0}^{+\infty} x_0^k t_1^{\ell} e^{-d'x_0 -\tilde{d}t_1} dt_1 dx_0 \nonumber \\ & = \int_0^x x_0^k e^{-d'x_0} \left[-e^{-\tilde{d}t_1} \sum_{i=0}^{\ell}\frac{\ell!}{i!\tilde{d}^{l-i+1}}t_1^i\right]_{x-x_0}^{\infty} dx_0 \nonumber \\ & = \int_0^x x_0^k e^{-d'x_0} \left(e^{-\tilde{d}(x-x_0)} \sum_{i=0}^{\ell}\frac{\ell!}{i!\tilde{d}^{l-i+1}}(x-x_0)^i\right) dx_0 . 
\label{eq:generic_term_eq_f} \end{align} The expression in \eqref{eq:generic_term_eq_f} stems from the fact that a primitive of $e^{-\tilde{d}t_1}P(t_1)$, with polynomial $P(t_1)=t_1^{\ell}$, is $-e^{-\tilde{d}t_1}\sum_{i=0}^{\mathrm{deg}(P)} \frac{P^{(i)}(t_1)}{\tilde{d}^{i+1}}$ where $P^{(i)}$ is the $i$th derivative of $P$ and $P^{(0)}=P$. This sum can be written in a compact form as $\sum_{i=0}^{\ell}\frac{\ell!}{i!\tilde{d}^{\ell-i+1}}t_1^i$. For the evaluation of the integral we used that $\lim_{t_1\rightarrow \infty} t_1^{\ell}e^{-\tilde{d}t_1} = 0$ for any fixed $\ell$ and positive $\tilde{d}$. Using the binomial theorem to expand the term $(x-x_0)^i$ in the expression above we can rewrite \eqref{eq:generic_term_eq_f} as \begin{align} & \frac{\ell! e^{-\tilde{d}x}}{\tilde{d}^{\ell+1}} \int_0^x x_0^k e^{-(d'-\tilde{d})x_0} \sum_{i=0}^{\ell}\frac{ \tilde{d}^{i}}{i!}(x-x_0)^i dx_0 \nonumber\\ & = \frac{\ell! e^{-\tilde{d}x}}{\tilde{d}^{\ell+1}} \int_0^x x_0^k e^{-(d'-\tilde{d})x_0} \sum_{i=0}^{\ell} c_{i,\ell}(x) x_0^i dx_0 . \label{eq:generic_term_eq_f_calc} \end{align} where we expanded $(x-x_0)^i = \sum_{k=0}^{i}(-1)^k \binom{i}{k}x^{i-k}x_0^k$. Then we rearrange the sum terms in the first line to express it as a polynomial in $x_0$ with coefficients \begin{equation} c_{i,\ell}(x) = \sum_{j=i}^{\ell} (-1)^i \binom{j}{i} x^{j-i} \frac{\tilde{d}^{j}}{j!} \label{eq:coefficient_cell} \end{equation} Here, we explicitly express the dependency of the coefficients on $x$ through $c_{i,\ell}(x)$. In a last step to compute \eqref{eq:generic_term_eq_f_calc} we calculate for $d'\neq\tilde{d}$ \begin{align} & \frac{\ell! e^{-\tilde{d}x}}{\tilde{d}^{\ell+1}} \sum_{i=0}^{\ell} c_{i,\ell}(x) \int_0^x x_0^{k+i} e^{-(d'-\tilde{d})x_0} dx_0 \nonumber\\ & = \frac{\ell! e^{-\tilde{d}x}}{\tilde{d}^{\ell+1}} \sum_{i=0}^{\ell} c_{i,\ell}(x) \left[-e^{-(d'-\tilde{d})x_0}\sum_{j=0}^{k+i}\frac{(k+i)!x_0^j}{j!(d'-\tilde{d})^{k+i-j+1}}\right]_{0}^{x} \nonumber\\ & = \frac{\ell!
e^{-\tilde{d}x}}{\tilde{d}^{\ell+1}} \sum_{i=0}^{\ell} c_{i,\ell}(x) \left[\frac{(k+i)!}{(d'-\tilde{d})^{k+i+1}} \right.\nonumber\\ & \left. -e^{-(d'-\tilde{d})x}\sum_{j=0}^{k+i}\frac{(k+i)!x^j}{j!(d'-\tilde{d})^{k+i-j+1}}\right]. \label{eq:generic_term_eq_f_calc_final} \end{align} For the case when $d'=\tilde{d}$ we obtain as a solution of \eqref{eq:generic_term_eq_f_calc} \begin{align}\label{eq:generic_term_eq_f_calc_final_d_equal} \frac{\ell! e^{-\tilde{d}x}}{\tilde{d}^{\ell+1}} \sum_{i=0}^{\ell} c_{i,\ell}(x) \frac{x^{k+i+1}}{k+i+1} \end{align} Now, using the steps from above we insert the formulation of the PDF \eqref{eq-f0-fin} into \eqref{eq:f_age_fct_of_pdf_at_arrival} and obtain the PDF of the age at any point in time in the following theorem. \begin{theorem} In a stationary $M/M/I_{\max}/I_{\max}^*$ system, the PDF of the age of information at an arbitrary point in time, $f(x)$, is given by \begin{align} &f(x) = \hat{\lambda} \sum_{i=0}^{I_{\max}}\sum_{j=0}^{I_{\max}} \sum_{(n',n) \mst 1\leq n+1\leq n'\leq I_{\max}} p^\circ_{n',n} \,e^{-x\tilde{d}_j} \, \times\nonumber\\ &\mkern-38mu\sum_{l=0}^{\tilde{\xi}_{j,n}+\xi_{i,n',n}} \mkern-28mu \tilde{c}_l(x) \mkern-4mu \left(\mkern-4mu\frac{l!}{{(d'_i-\tilde{d}_j)}^{l+1}} - e^{-x (d'_i-\tilde{d}_j)}\mkern-4mu\sum_{v=0}^{l}\frac{l!}{v!{(d'_i-\tilde{d}_j)}^{l-v+1}}x^{v}\mkern-4mu\right) \label{eq:f_age_any_point_in_time_complete} \end{align} where \begin{itemize} \item $\lambda$ is the message generation rate at the sender, $1/\mu$ is the mean message transit time and $I_{\max}$ is the maximum number of messages in transit; \item $p^\circ_{n',n}$ from \eqref{eq-p0}, $d'_i$ from \eqref{eq-dprime}, and $\tilde{d}_j$ from \eqref{eq-dtilde}; \item $\xi_{i,n',n},\tilde{\xi}_{j,n}$ are the degrees of the polynomials ${\pi}^{i,n',n}(x_0)$ and $\tilde{\pi}^{j,n}(t_1)$ from \eqref{eq-g0-fin} and \eqref{eq:ht1_given_n}. 
Specifically, these are given as $$\pi^{i,n',n}(x_0):=\sum_{k=0}^{\xi_{i,n',n}} a_k^{i,n',n} x_0^k$$ and $$\tilde{\pi}^{j,n}(t_1):=\sum_{k=0}^{\tilde{\xi}_{j,n}} \tilde{a}_k^{j,n} t_1^k.$$ \item $\tilde{c}_l(x)$ are the polynomial coefficients obtained through the convolution \begin{equation}\label{eq:cell_final} \tilde{c}_l(x) = \sum_{v=0}^{l} \tilde{z}_v^{j,n}(x) a_{l-v}^{i,n',n} \end{equation} with $\tilde{z}_k^{j,n}(x) = \sum_{i=k}^{\tilde{\xi}_{j,n}} \tilde{a}_i^{j,n} c_{k,i}(x)$, where $c_{k,i}(x)$ is given in \eqref{eq:coefficient_cell}. \end{itemize} \end{theorem} \begin{proof} By inserting the formulation of the PDF \eqref{eq-f0-fin} into \eqref{eq:f_age_fct_of_pdf_at_arrival} we obtain the PDF of the age at any point in time as \begin{align} &f(x) = \hat{\lambda} \int_{0}^{x} \int_{x-x_0}^{\infty} f^\circ(x_0,t_1) dt_1 dx_0 \nonumber\\ &=\hat{\lambda} \sum_{i=0}^{I_{\max}}\sum_{j=0}^{I_{\max}} \sum_{(n',n) \mst 1\leq n+1\leq n'\leq I_{\max}} p^\circ_{n',n} \, \times\nonumber\\ &\int_{0}^{x} e^{-x_0d'_i} \pi^{i,n',n}(x_0) \, \times \nonumber\\ &\int_{x-x_0}^{\infty} \tilde{\pi}^{j,n}(t_1) e^{-t_1\tilde{d}_j} dt_1 dx_0 \nonumber\\ &=\hat{\lambda} \sum_{i=0}^{I_{\max}}\sum_{j=0}^{I_{\max}} \sum_{(n',n) \mst 1\leq n+1\leq n'\leq I_{\max}} p^\circ_{n',n} \, \times\nonumber\\ &\int_{0}^{x} e^{-x_0d'_i} \sum_{k=0}^{\xi_{i,n',n}} a_k^{i,n',n} x_0^k \, \times \nonumber\\ &\int_{x-x_0}^{\infty} \sum_{k=0}^{\tilde{\xi}_{j,n}} \tilde{a}_k^{j,n} t_1^k e^{-t_1\tilde{d}_j} dt_1 dx_0 \label{eq:intermediate_1_pi_substituted} \end{align} Looking closely at \eqref{eq:intermediate_1_pi_substituted}, after rearranging terms and swapping the sum in $\tilde{\pi}^{j,n}(t_1)$ with the integral over $t_1$, we observe that at the core of the problem we need to compute the expression in \eqref{eq:generic_term_eq_f}.
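As a concrete sanity check, consider the simplest instance $k=\ell=0$ with $d'\neq\tilde{d}$, where \eqref{eq:generic_term_eq_f} reduces to an elementary integral: \begin{align*} \int_0^x\!\int_{x-x_0}^{+\infty} e^{-d'x_0-\tilde{d}t_1}\, dt_1\, dx_0 &= \frac{e^{-\tilde{d}x}}{\tilde{d}}\int_0^x e^{-(d'-\tilde{d})x_0}\, dx_0 \\ &= \frac{e^{-\tilde{d}x}}{\tilde{d}} \cdot \frac{1-e^{-(d'-\tilde{d})x}}{d'-\tilde{d}} , \end{align*} in agreement with \eqref{eq:generic_term_eq_f_calc_final}.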
The additional complexity compared to the result in \eqref{eq:generic_term_eq_f_calc_final} arises due to the sums in $\tilde{\pi}^{j,n}(t_1)$ and $\pi^{i,n',n}(x_0)$. In the next step, we evaluate the second integral in \eqref{eq:intermediate_1_pi_substituted} using the same method as for \eqref{eq:generic_term_eq_f} to obtain \begin{align} &\hat{\lambda} \sum_{i=0}^{I_{\max}}\sum_{j=0}^{I_{\max}} \sum_{(n',n) \mst 1\leq n+1\leq n'\leq I_{\max}} p^\circ_{n',n} \, \times\nonumber\\ &\int_{0}^{x} e^{-x_0d'_i} \sum_{k=0}^{\xi_{i,n',n}} a_k^{i,n',n} x_0^k \, e^{-\tilde{d}_j(x-x_0)} \sum_{k=0}^{\tilde{\xi}_{j,n}}\tilde{a}_k^{j,n} \sum_{v=0}^{k}\frac{k!}{v!\tilde{d}_j^{k-v+1}}(x-x_0)^{v} dx_0\nonumber\\ &=\hat{\lambda} \sum_{i=0}^{I_{\max}}\sum_{j=0}^{I_{\max}} \sum_{(n',n) \mst 1\leq n+1\leq n'\leq I_{\max}} p^\circ_{n',n} \,e^{-x\tilde{d}_j}\, \times\nonumber\\ &\int_{0}^{x} e^{-x_0(d'_i-\tilde{d}_j)} \sum_{k=0}^{\xi_{i,n',n}} a_k^{i,n',n} x_0^k \sum_{k=0}^{\tilde{\xi}_{j,n}}\tilde{z}_k^{j,n}(x) x_0^k dx_0 . \label{eq:intermediate_2_pi_substituted} \end{align} Here, in the second line we factored $e^{-\tilde{d}_j(x-x_0)}=e^{-x\tilde{d}_j}e^{x_0\tilde{d}_j}$ and expanded $(x-x_0)^{v}$ in the same way as in \eqref{eq:generic_term_eq_f_calc}. The difference from \eqref{eq:generic_term_eq_f_calc} stems from the additional sum, leading to the intermediate form $\sum_{k=0}^{\tilde{\xi}_{j,n}}\tilde{a}_k^{j,n} \sum_{i=0}^{k} c_{i,k}(x) x_0^i$ after using the expansion in \eqref{eq:generic_term_eq_f_calc}. Now, after collecting the terms we can rewrite this sum as $\sum_{k=0}^{\tilde{\xi}_{j,n}}\tilde{z}_k^{j,n}(x) x_0^k$ with $\tilde{z}_k^{j,n}(x) = \sum_{i=k}^{\tilde{\xi}_{j,n}} \tilde{a}_i^{j,n} c_{k,i}(x)$, where $c_{k,i}(x)$ is given in \eqref{eq:coefficient_cell}.
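To make the collection of terms concrete, consider the smallest nontrivial case $\tilde{\xi}_{j,n}=1$, for which unrolling the definition gives \[ \tilde{z}_0^{j,n}(x)=\tilde{a}_0^{j,n}c_{0,0}(x)+\tilde{a}_1^{j,n}c_{0,1}(x), \qquad \tilde{z}_1^{j,n}(x)=\tilde{a}_1^{j,n}c_{1,1}(x) . \]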
Inspecting \eqref{eq:intermediate_2_pi_substituted}, we see that the product of the two given polynomials in $x_0$ can be rewritten as one polynomial $\sum_{l=0}^{\tilde{\xi}_{j,n}+\xi_{i,n',n}} \tilde{c}_l(x) x_0^l$ with $\tilde{c}_l(x) = \sum_{v=0}^{l}\tilde{z}_v^{j,n}(x)a_{l-v}^{i,n',n}$, i.e., the convolution of the coefficients of the two polynomials. Equipped with the integral evaluation in \eqref{eq:generic_term_eq_f_calc_final} we can now compute \begin{align} &\hat{\lambda} \sum_{i=0}^{I_{\max}}\sum_{j=0}^{I_{\max}} \sum_{(n',n) \mst 1\leq n+1\leq n'\leq I_{\max}} p^\circ_{n',n} \,e^{-x\tilde{d}_j}\, \times\nonumber\\ &\int_{0}^{x} e^{-x_0(d'_i-\tilde{d}_j)} \sum_{l=0}^{\xi_{i,n',n}+\tilde{\xi}_{j,n}} \tilde{c}_l(x) x_0^l dx_0\nonumber\\ &=\hat{\lambda} \sum_{i=0}^{I_{\max}}\sum_{j=0}^{I_{\max}} \sum_{(n',n) \mst 1\leq n+1\leq n'\leq I_{\max}} p^\circ_{n',n} \,e^{-x\tilde{d}_j} \, \times\nonumber\\ &\mkern-38mu\sum_{l=0}^{\tilde{\xi}_{j,n}+\xi_{i,n',n}} \mkern-28mu \tilde{c}_l(x) \mkern-4mu \left(\mkern-4mu\frac{l!}{{(d'_i-\tilde{d}_j)}^{l+1}} - e^{-x (d'_i-\tilde{d}_j)}\mkern-4mu\sum_{v=0}^{l}\frac{l!}{v!{(d'_i-\tilde{d}_j)}^{l-v+1}}x^{v}\mkern-4mu\right)\nonumber \end{align} For index pairs with $d'_i=\tilde{d}_j$, the corresponding inner integral is evaluated using \eqref{eq:generic_term_eq_f_calc_final_d_equal} instead. \end{proof} \subsection{Age Distribution at the Arrival of Informative Messages} In addition to calculating the age distribution at any point in time we can easily calculate the distribution of the age at the \emph{arrival instants of informative messages}.
The corresponding PDF $f_A^\circ(x_0)$ is readily obtained as \begin{equation*} f_A^\circ(x_0)=\sum_{(n',n) \mst 1\leq n+1\leq n'\leq I_{\max}} p^\circ_{n',n} g^\circ(x_0|n',n) . \end{equation*} Using \eqref{eq-g0-fin}, we obtain \begin{align} f_A^\circ(x_0)=\sum_{i=0}^{I_{\max}}e^{-x_0d'_i} \mkern-18mu \sum_{(n',n) \mst 1\leq n+1\leq n'\leq I_{\max}} p^\circ_{n',n} \pi^{i,n',n}(x_0) . \label{eq-ff0-fin} \end{align} \begin{figure*} \centering \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{./simulations/matlab/fig/mean_age_at_anytime_mu1_lambdastart0.1_lambdaend2_nr_lambda20_max_msgs20_log_trace_len5new2.eps} \caption{} \label{fig:mean_age_anytime_I_20} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=\linewidth]{./simulations/matlab/fig/quantile_age_at_anytime_mu1_lambdastart0.1_lambdaend2_nr_lambda29_max_msgs20_log_trace_len5new2.eps} \caption{} \label{fig:quantile_age_anytime_I_20} \end{subfigure} \caption{(a) Expected age $E[X]$ obtained from simulations compared to the model in \eqref{eq:expectation_test_fct_age} with $\varphi$ being the identity function for $I_{\max}=20$ and $\mu=1$. (b) Quantiles of the age $P[X>x_{\varepsilon}]=\varepsilon$ obtained from integrating the age density in \eqref{eq:f_age_fct_of_pdf_at_arrival} for $I_{\max}=20$ and $\mu=1$.} \end{figure*} \section{Problem Statement and System Model} \label{sec:system_model} \label{ch:problem} \begin{figure} \centering \includegraphics[width=\linewidth]{./gfx/CPS_scenario_v1} \caption{The sensory information is generated and immediately transmitted in the form of messages. These can overtake each other on the network.
Informative messages keep the total message order at the receiver and reduce the age of the status information at the receiver to their respective one-way delay.} \label{fig:cps_scenario} \vspace{-20pt} \end{figure} We consider cyber-physical systems as depicted in Fig.~\ref{fig:cps_scenario} where sensors transmit status updates to a central control and data acquisition function. We assume that timestamped messages are transmitted at the sender according to some parameterized stochastic process. When a message is generated, it obsoletes any previous message. However, every message is subject to a one-way delay and \emph{messages can overtake each other}. We say that a message is ``informative'' if its timestamp was generated after the generation time of all messages received so far. When a non-informative message arrives, it is of no use and is discarded. We are interested in the age of information at the receiver, $X_t$, which is formally defined as follows. Timestamped messages are generated at times $\{\tau_i\}$ and received at times $\{\tau'_i\}$, respectively (with $\tau_i\leq \tau'_i$). Then \begin{equation} X_t= t -\max_{i: \tau'_i\leq t} \tau_i \end{equation} The dynamic evolution of $X_t$ is such that the age $X_t$ increases at rate $1$ between arrival events; furthermore, when message $i$ arrives, the value of $X_t$ just after the arrival, namely $X_{{\tau'_i}^+}$, is set to $\min\left(\tau'_i-\tau_i\;, X_{{\tau'_i}^-}\right)$ as seen in Figure~\ref{fig:age_sample_path}. \begin{figure} \centering \includegraphics[width=\linewidth]{./gfx/age_sample_path_v1} \caption{The age process $X_t$. Message $m_i$ is emitted at time $\tau_i$ and received at time $\tau'_i$. Observe that message $m_1$ is overtaken by message $m_2$, hence the age process at the receiver does not change when $m_1$ arrives. } \label{fig:age_sample_path} \vspace{-30pt} \end{figure} We assume that messages are generated according to a Poisson process of rate $\lambda$.
The channel is modelled as a number of independent parallel servers, each serving at most one message at a time with exponentially distributed service times, i.e., the random variables $\tau'_i-\tau_i$ are independent of each other and of the arrival process $\{\tau_i\}$, and they are exponentially distributed with the same parameter $\mu$. Furthermore, in order not to overwhelm the channel, the sender is window-flow-controlled and allows only a fixed number of outstanding informative messages $I_{\max}$ in the channel: arriving messages are dropped if the number of outstanding informative messages is equal to $I_{\max}$. We assume that the sender knows the number of informative messages in the channel (presumably via some instantaneous reverse channel). We use the notation $M/M/I_{\max}/I_{\max}^*$ for this queueing system, where the $*$ means that the departure of a message flushes all older messages out of the system. In this paper we are interested in the stationary distribution of the age $X_t$ given the process parameters $\lambda,\mu$ and $I_{\max}$. The global notation used in the paper is recalled in Table~\ref{tab-nl}.
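The system just described can be cross-checked by simulation. The following is a minimal discrete-event sketch of the $M/M/I_{\max}/I_{\max}^*$ age process (the function and its structure are our own illustration, not part of the model definition); it estimates the time-average age $E[X]$ rather than the full distribution:

```python
import heapq
import random

def simulate_mean_age(lam, mu, i_max, horizon=50000.0, seed=7):
    # Discrete-event sketch of the M/M/I_max/I_max* system:
    # Poisson(lam) generation, i.i.d. Exp(mu) transit times, at most
    # i_max outstanding informative messages, and a delivered message
    # makes all older in-transit messages non-informative ("*" semantics).
    rng = random.Random(seed)
    next_gen = rng.expovariate(lam)
    deliveries = []      # min-heap of (delivery_time, generation_time)
    newest_gen = 0.0     # generation time of the freshest delivered message
    last_t = age = area = 0.0
    while last_t < horizon:
        if deliveries and deliveries[0][0] < next_gen:
            t, g = heapq.heappop(deliveries)      # delivery event
        else:
            t, g = next_gen, None                 # generation event
            next_gen += rng.expovariate(lam)
        dt = t - last_t
        area += dt * (age + dt / 2.0)  # age grows at unit rate between events
        age += dt
        last_t = t
        if g is None:
            # window flow control: count outstanding informative messages
            outstanding = sum(1 for _, gg in deliveries if gg > newest_gen)
            if outstanding < i_max:
                heapq.heappush(deliveries, (t + rng.expovariate(mu), t))
        elif g > newest_gen:
            # informative arrival: the age drops to the one-way delay t - g
            age, newest_gen = t - g, g
    return area / last_t
```

Estimates produced this way can serve as an independent cross-check of the analytical age distribution derived above.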
\begin{table}[tb] \caption{Notation List} \label{tab-nl} \centering \begin{tabular}{|cp{6cm}|} \hline $\tilde{d}_n$, $d'_n$ & $\tilde{d}_n=\sum_{n'}Q_{n,n'}$, $d'_n=\sum_{n'}Q'_{n,n'}$\\ $f(x)$ & PDF of the age of information at the receiver at an arbitrary point in time;\\ $f^\circ(x_0,t_1)$ & Joint PDF of age of information $x_0$ and time to wait until next delivery of informative message, sampled when an informative message arrives at receiver;\\ $f^\circ_A(x_0)$ & PDF of age of information sampled when an informative message arrives at receiver;\\ $f_{n',n}$ & Laplace-Stieltjes Transform of $x_0\mapsto g^\circ(x_0|n',n)$\\ $\tilde{f}_n$ & Laplace-Stieltjes Transform of $t_1\mapsto h(t_1|n)$\\ $g^\circ(x_0|n',n)$ &PDF of the age $x_0$ just after an informative message arrival given that the state of the Markov chain is $n'$ just before the arrival of the informative message and $n$ just after the arrival;\\ $h(t_1|n)$ & PDF of the time that will elapse from time $t$ until the next informative message arrives, given that $Z_t=n$;\\ $I_{\max}$ & Maximum number of messages in transit; messages generated when $Z_t=I_{\max}$ are discarded;\\ $\lambda$ & Rate of generation of messages;\\ $\mu$ & Message transit time is exponential with rate $\mu$;\\ $\bar{N}$ & $\bar{N}=\sum_{i=1}^{I_{\max}}i p_{i}$\\ $p_n$ & Stationary probability of $Z_t$\\ $p^\circ_{n',n}$ & Probability that an arbitrary informative message arrival happens at a transition $(n'\to n)$ of the Markov chain $Z_t$\\ $Q_{i,j}, Q'_{i,j}$ & Rate of transition of $Z_t$ [resp. $Z^r_t$] from state $i$ to state $j$\\ $X_t$ & Age of information at receiver at time $t$;\\ $Z_t$ & Number of messages in transit at time $t$;\\ $Z^r_t$ & Time-reversed process derived from $Z_t$\\ \hline \end{tabular} \vspace{-30pt} \end{table}
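The building block of the preceding analysis is the exponential-polynomial tail integral $\int_a^{\infty} t^{\ell} e^{-\tilde{d}t}\,dt = e^{-\tilde{d}a}\sum_{i=0}^{\ell}\frac{\ell!}{i!\,\tilde{d}^{\ell-i+1}}a^i$ used in \eqref{eq:generic_term_eq_f}. A small standalone numerical cross-check of this closed form (the helper names are ours, added for illustration):

```python
import math

def tail_integral_closed(a, ell, d):
    # Closed form of int_a^inf t^ell * e^(-d t) dt, via the primitive
    # -e^(-d t) * sum_{i=0}^{ell} ell!/(i! d^(ell-i+1)) t^i from the text.
    return math.exp(-d * a) * sum(
        math.factorial(ell) * a ** i / (math.factorial(i) * d ** (ell - i + 1))
        for i in range(ell + 1)
    )

def tail_integral_midpoint(a, ell, d, upper=80.0, n=200000):
    # Brute-force midpoint rule on a truncated range; the integrand
    # decays like e^(-d t), so the truncation error is negligible here.
    h = (upper - a) / n
    return h * sum(
        (a + (k + 0.5) * h) ** ell * math.exp(-d * (a + (k + 0.5) * h))
        for k in range(n)
    )

# Example: the two evaluations should agree to high accuracy.
print(tail_integral_closed(1.3, 3, 0.7) - tail_integral_midpoint(1.3, 3, 0.7))
```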
\section{Background} \label{background} The following notation will hold throughout the paper. Let $\bar X$ be a closed Riemann surface with an integrable meromorphic quadratic differential $q$. We remind the reader that $q$ may have poles of order $1$. We denote the vertical and horizontal foliations of $q$ by $\lambda^+$ and $\lambda^-$ respectively. Let $\PP$ be a finite subset of $\bar X$ that includes the poles of $q$ if any, and let $X = \bar X \ssm \PP$. Let $\mathrm{sing}(q)$ denote the union of $\PP$ with the set of zeros of $q$. We require further that $q$ has no horizontal or vertical saddle connections, that is no leaves of $\lambda^\pm$ that connect two points of $\mathrm{sing}(q)$. This situation holds in particular if $\lambda^\pm$ are the stable/unstable foliations of a pseudo-Anosov map $f:X\to X$, which will often be the case for us. If $\PP=\mathrm{sing}(q)$ (i.e. $\PP$ contains all zeros of $q$) we say $X$ is {\em fully-punctured}. Let $\hat X$ denote the metric completion of the universal cover $\til X$ of $X$, and note that there is an infinite branched covering $\hat X \to \bar X$, infinitely branched over the points of $\PP$. The preimage $\hat\PP$ of $\PP$ is the set of completion points. The space $\hat X$ is a complete CAT$(0)$ space with the metric induced by $q$. \subsection{Veering triangulations} \label{veering defs} In this section let $\PP=\mathrm{sing}(q)$. The veering triangulation, originally defined by Agol in \cite{agol2011ideal} in the case where $q$ corresponds to a pseudo-Anosov $f:X\to X$, is an ideal layered triangulation of $X\times\mathbb{R}$ which projects to a triangulation of the mapping torus $M$ of $f$. The definition we give here is due to Gu\'eritaud \cite{gueritaud}. (Agol's ``veering'' property itself will not actually play a role in this paper, so we will not give its definition).
A {\em singularity-free rectangle} in $\hat X$ is an embedded rectangle whose edges consist of leaf segments of the lifts of $\lambda^\pm$ and whose interior contains no singularities of $\hat X$. If $R$ is a {\em maximal} singularity-free rectangle in $\hat X$ then it must contain a singularity on each edge. Note that there cannot be more than one singularity on an edge since $\lambda^\pm$ have no saddle connections. We associate to $R$ an ideal tetrahedron whose vertices are $\partial R \cap \hat\PP$, as in \Cref{gue-tetra}. This tetrahedron comes equipped with a ``flattening'' map into $\hat X$ as pictured. \realfig{gue-tetra}{A maximal singularity-free rectangle $R$ defines a tetrahedron equipped with a map into $R$.} The tetrahedron comes with a natural orientation, inherited from the orientation of $\hat X$ using the convention that the edge connecting the horizontal boundaries of the rectangle lies {\em above} the edge connecting the vertical boundaries. This orientation is indicated in \Cref{gue-tetra}. The union of all these ideal tetrahedra, with faces identified whenever they map to the same triangle in $\hat X$, is Gu\'eritaud's construction of the veering triangulation of $\til X \times \mathbb{R}$. \begin{theorem}\label{gueritaud construction} {\rm\cite{gueritaud}} Suppose that $X$ is fully-punctured. The complex of tetrahedra associated to maximal rectangles of $q$ is an ideal triangulation $\til\tau$ of $\til X\times \mathbb{R}$, and the maps of tetrahedra to their defining rectangles piece together to a fibration $\pi:\til X \times \mathbb{R} \to \til X$. The action of $\pi_1(X)$ on $(\til X,\til q)$ lifts simplicially to $\til\tau$, and equivariantly with respect to $\pi$. The quotient is a triangulation of $X \times \mathbb{R}$. If $q$ corresponds to a pseudo-Anosov $f:X\to X$ then the action of $f$ on $(X,q)$ lifts simplicially and $\pi$-equivariantly to $\Phi:X\times\mathbb{R}\to X\times\mathbb{R}$. 
The quotient is a triangulation $\tau$ of the mapping torus $M$. The fibers of $\pi$ descend to flow lines for the suspension flow of $f$. \end{theorem} We will frequently abuse notation and use $\tau$ to refer to the triangulation both in $M$ and in its covers. We note that a saddle connection $\sigma$ of $q$ is an edge of $\tau$ if and only if $\sigma$ spans a singularity-free rectangle in $X$. See \Cref{extend-rect}. \realfig{extend-rect}{The singularity-free rectangle spanned by $\sigma$ can be extended horizontally (or vertically) to a maximal one.} If $e$ and $f$ are two crossing $\tau$-edges spanning rectangles $R_e$ and $R_f$, note that $R_e$ crosses $R_f$ from top to bottom, or from left to right -- any other configuration would contradict the singularity-free property of the rectangles (\Cref{edges-cross}). If $\sl(e)$ denotes the absolute value of the slope of $e$ with respect to $q$, we can see that $R_e$ crosses $R_f$ from top to bottom if and only if $e$ crosses $f$ and $\sl(e) > \sl(f)$. We say that $e$ is {\em more vertical} than $f$ and also write $e>f$. We will see that $e>f$ corresponds to $e$ lying higher than $f$ in the upward flow direction. Indeed we can see already that the relation $>$ is transitive, since if $e>f$ and $f>g$ then the rectangle of $g$ is forced to intersect the rectangle of $e$. \realfig{edges-cross}{The rectangle of $e$ crosses $f$ from top to bottom and we write $e>f$.} We conclude with a brief description of the local structure of $\tau$ around an edge $e$: The rectangle spanned by $e$ can be extended horizontally to define a tetrahedron lying below $e$ in the flow direction (\Cref{extend-rect}), and vertically to define a tetrahedron lying above $e$ in the flow direction. Call these $Q_-$ and $Q_+$ as in \Cref{edge-swing}.
Between these, on each side of $e$, is a sequence of tetrahedra $Q_1,\ldots,Q_m$ $(m\ge 1)$ so that two successive tetrahedra in the sequence $Q_-,Q_1,\ldots,Q_m,Q_+$ share a triangular face adjacent to $e$. We find this sequence by starting with one of the two top faces of $Q_-$, extending its spanning rectangle vertically until it hits a singularity, and calling $Q_1$ the tetrahedron whose projection is inscribed in the new rectangle. If the new singularity belongs to $Q_+$ we are done $(m = 1)$, otherwise we repeat from the top face of $Q_1$ containing $e$ to find $Q_2$, and continue in this manner. \Cref{edge-swing} illustrates this structure on one side of an edge $e$. Repeating on the other side, note that the link of the edge $e$ is a circle, as expected. \realfig{edge-swing}{The tetrahedra adjacent to an edge $e$ on one side form a sequence ``swinging'' around $e$} \subsection{Arc and curve complexes} \label{sec: arc_complex} The arc and curve complex $\A(Y)$ for a compact surface $Y$ is usually defined as follows: its vertices are essential homotopy classes of embedded circles and properly embedded arcs $([0,1],\{0,1\}) \to (Y,\partial Y)$, where ``essential'' means not homotopic to a point or into the boundary \cite{MM2}. We must be clear about the meaning of homotopy classes here, for the case of arcs: If $Y$ is not an annulus, homotopies of arcs are assumed to be homotopies of maps of pairs. When $Y$ is an annulus the homotopies are also required to fix the endpoints. Simplices of $\A(Y)$, in all cases, correspond to tuples of vertices which can be simultaneously realized by maps that are disjoint on their interiors. We endow $\A(Y)$ with the simplicial distance on its $1$-skeleton. It will be useful, in the non-annular case, to observe that the following definition is equivalent: Instead of maps of closed intervals consider proper embeddings $\mathbb{R} \to \int(Y)$ into the interior of $Y$, with equivalence arising from proper homotopy. 
This definition is independent of the compactification of $\int(Y)$. The natural isomorphism between these two versions of $\A(Y)$ is induced by a straightening construction in a collar neighborhood of the boundary. If $Y\subset S$ is an essential subsurface (meaning the inclusion of $Y$ is $\pi_1$-injective and is not homotopic to a point or to an end of $S$), we have subsurface projections $\pi_Y(\lambda)$ which are defined for simplices $\lambda\subset \A(S)$ that intersect $Y$ essentially. Namely, after lifting $\lambda$ to the cover $S_Y$ associated to $\pi_1(Y)$ (i.e. the cover to which $Y$ lifts homeomorphically and for which $S_Y \cong \int(Y)$), we obtain a collection of properly embedded disjoint essential arcs and curves, which determine a simplex of $\A(Y)$. We let $\pi_Y(\lambda)$ be the union of these vertices \cite{MM2}. We make a similar definition for a lamination $\lambda$ that intersects $Y$ essentially, except that we include not just the leaves of $\lambda$ but all leaves that one can add in the complement of $\lambda$ which accumulate on $\lambda$. This is natural when we realize $\lambda$ as a measured {\em foliation} (as we do in most of the paper), and need to include {\em generalized leaves}, which are leaves that are allowed to pass through singularities. Note that the diameter of $\pi_Y(\lambda)$ in $\A(Y)$ is at most 2. Note that when $Y$ is an annulus these arcs have natural endpoints coming from the standard compactification of $\til S = \HH^2$ by a circle at infinity. We remark that $\pi_Y$ does not depend on any choice of hyperbolic metric on $S$. When $Y$ is not an annulus and $\lambda$ and $\partial Y$ are in minimal position, we can also identify $\pi_Y(\lambda)$ with the isotopy classes of components of $\lambda\cap Y$. These definitions naturally extend to immersed surfaces arising from covers of $S$. Let $\Gamma$ be a finitely generated subgroup of $\pi_1(S)$. 
Then the corresponding cover $S_\Gamma \to S$ has a compact core $W$ -- a compact subsurface $W \subset S_\Gamma$ such that $S_\Gamma \ssm W$ is a collection of boundary parallel annuli. For curves or laminations $\lambda^\pm$ of $S$, we have lifts $\widetilde{\lambda}^\pm$ to $S_\Gamma$ and define $d_W(\lambda^-,\lambda^+) = d_{S_\Gamma}(\widetilde{\lambda}^-, \widetilde{\lambda}^+)$. Throughout this paper, when $\lambda,\lambda'$ are two laminations or arc/curve systems, we denote by $d_Y(\lambda,\lambda')$ the {\em minimal} distance between their images in $\A(Y)$, that is $$ d_Y(\lambda,\lambda') = \min\{d_Y(l,l') : l \in \pi_Y(\lambda), l' \in \pi_Y(\lambda') \}. $$ To denote the \emph{maximal} distance between $\lambda$ and $\lambda'$ in $\A(Y)$ we write $$ \mathrm{diam}_Y(\lambda,\lambda') = \mathrm{diam}_{\A(Y)}(\pi_Y(\lambda)\cup\pi_Y(\lambda')). $$ \subsection{Flat geometry} \label{AY in flat geometry} In this section we return to the singular Euclidean geometry of $(X,q)$ and describe a circle at infinity for the flat metric induced by $q$ on the universal cover $\til X$. We identify $\til X$ with $\HH^2$ after fixing a reference hyperbolic metric on $X$. Because of incompleteness of the flat metric at the punctures $\PP$, the connection between the circle we will describe and the usual circle at infinity for $\HH^2$ requires a bit of care. A related discussion appears in Gu\'eritaud \cite{gueritaud}, although he deals explicitly only with the fully-punctured case. With this picture of the circle at infinity we will be able to describe $\pi_Y$ in terms of $q$-geodesic representatives, and to describe a $q$-convex hull for essential subsurfaces of $X$. In this section we do not assume that $X$ is fully-punctured. The completion points $\hat \PP$ in $\hat X$ correspond to parabolic fixed points for $\pi_1(X)$ in $\partial \HH^2$, and we abuse notation slightly by identifying $\hat\PP$ with this subset of $\partial \HH^2$. 
A {\em complete $q$-geodesic ray} is either a geodesic ray $r:[0,\infty)\to\hat X$ of infinite length, or a finite-length geodesic segment that terminates in $\hat\PP$. A complete $q$-geodesic line is a geodesic which is split by any point into two complete $q$-geodesic rays. Our goal in this section is to describe a circle at infinity that corresponds to endpoints of these rays. \begin{proposition}\label{same compactification} There is a compactification $\beta(\til X)$ of $\til X$ on which $\pi_1(X)$ acts by homeomorphisms, with the following properties: \begin{enumerate} \item There is a $\pi_1(X)$-equivariant homeomorphism $\beta(\til X) \to \overline{\HH^2}$, extending the identification of $\til X$ with $\HH^2$ and taking $\hat \PP$ to the corresponding parabolic fixed points in $\partial \HH^2$. \item If $l$ is a complete $q$-geodesic line in $\hat X$ then its image in $\overline{\HH^2}$ is an embedded arc with endpoints on $\partial\HH^2$ and interior points in $\HH^2 \cup \hat \PP$. Conversely, every pair of distinct points $x,y$ in $\partial\beta(\til X) = \beta(\til X) \ssm \til X$ are the endpoints of a complete $q$-geodesic line. The termination point in $\partial\HH^2$ of a complete $q$-geodesic ray is in $\hat\PP$ if and only if it has finite length. \item The $q$-geodesic line connecting distinct $x,y\in\partial\beta(\til X)$ is either unique, or there is a family of parallel geodesics making up an infinite Euclidean strip. \end{enumerate} \end{proposition} One of the tricky points of this picture is that $q$-geodesic rays and lines may meet points of the boundary $\partial\beta(\til X)$ not just at their endpoints. \begin{proof} When $\PP = \emptyset$ and $X$ is a closed surface, $\til X$ is quasi-isometric to $\HH^2$ and the proposition holds for the standard Gromov compactification. We assume from now on that $\PP\ne\emptyset$. 
We begin by setting $\hat \HH^2 = \HH^2 \cup \hat \PP$ and endowing it with the topology obtained by taking, for each $p \in \hat \PP$, horoballs based at $p$ as a neighborhood basis for $p$. \begin{lemma}\label{hat same} The natural identification of $\widetilde{X}$ with $\HH^2$ extends to a homeomorphism from $\hat X$ to $\hat \HH^2$. \end{lemma} \begin{proof} First note that $\hat \PP$ is discrete as both a subspace of $\hat X$ and of $\hat \HH^2$. Hence, it suffices to show that a sequence of points $x_i$ in $\widetilde{X} = \HH^2$ converges to a point $p \in \hat \PP$ in $\hat X$ if and only if it converges to $p$ in $\hat \HH^2$. This follows from the fact that the horoball neighborhoods of $p$ descend to cusp neighborhoods in $X$ which form a neighborhood basis for the puncture that is equivalent to the neighborhood basis of $q$-metric balls. \end{proof} Our strategy now is to form the {\em Freudenthal space} of $\hat X$ and equivalently $\hat \HH^2$, which appends a space of {\em ends}. This space will be compact but not Hausdorff, and after a mild quotient we will obtain the desired compactification which can be identified with $\overline{\HH^2}$. Simple properties of this construction will then allow us to obtain the geometric conclusions in part (2) of the proposition. Let $\ep(\hat X)$ be the space of ends of $\hat X$, that is, the inverse limit of the system of path components of complements of compact sets in $\hat X$. The Freudenthal space $\operatorname{Fr}(\hat X)$ is the union $\hat X\cup \ep(\hat X)$ endowed with the topology generated by using path components of complements of compacta to describe neighborhood bases for the ends. Because $\hat X$ is not locally compact, $\operatorname{Fr}(\hat X)$ is not guaranteed to be compact, and we have to take a bit of care to describe it.
The construction can of course be repeated for $\hat\HH^2$, and the homeomorphism of \Cref{hat same} gives rise to a homeomorphism $\operatorname{Fr}(\hat X) \to \operatorname{Fr}(\hat\HH^2)$. Let us work in $\hat\HH^2$ now, where we can describe the ends concretely using the following observations: Every compact set $K\subset \hat\HH^2$ meets $\hat\PP$ in a finite set $A$ (since $\hat \PP$ is discrete in $\hat\HH^2$), and such a $K$ is contained in an embedded closed disk $D$ which also meets $\hat\PP$ at $A$. (This is not hard to see but does require attention to deal correctly with the horoball neighborhood bases). The components of $\hat\HH^2\ssm D$ determine a partition of $\ep(\hat\HH^2)$, which in fact depends only on the set $A$ and not on $D$ (if $D'$ is another disk meeting $\hat\PP$ at $A$, then $D\cup D'$ is contained in a third disk $D''$, and this common refinement of the neighborhoods gives the same partition). Thus we have a more manageable (countable) inverse system of neighborhoods in $\ep(\hat\HH^2)$, and with this description it is not hard to see that $\ep(\hat\HH^2)$ is a Cantor set. For each $p\in\hat\PP$ there are two distinguished ends $p^+,p^-\in \ep(\hat\HH^2)$ defined as follows: For each finite subset $A\subset\hat\PP$ with at least two points one of which is $p$, the two partition terms adjacent to $p$ in the circle (or equivalently, in the boundary of any $D\subset \hat \HH^2$ meeting $\hat\PP$ in $A$) define neighborhoods in $\ep(\hat\HH^2)$, and this pair of neighborhood systems determines $p^+$ and $p^-$ respectively. One can also see that $p^+$ (and $p^-$) and $p$ do not admit disjoint neighborhoods, and this is why $\operatorname{Fr}(\hat\HH^2)$ is not Hausdorff. We are therefore led to define the quotient space \[ \beta(\hat \HH^2) = \operatorname{Fr}(\hat\HH^2) / \sim , \] where we make the identifications $p^- \sim p \sim p^+$, for each $p\in\hat\PP$.
We can make the same definitions in $\hat X$, obtaining \[ \beta(\hat X) = \operatorname{Fr}(\hat X) / \sim , \] which we rename $\beta(\til X)$. Since the definitions are purely in terms of the topology of the spaces $\hat\HH^2$ and $\hat X$, the homeomorphism of \Cref{hat same} extends to a homeomorphism $\beta(\til X) \to \beta(\hat\HH^2)$. Part (1) of \Cref{same compactification} follows once we establish that the identity map of $\HH^2$ extends to a homeomorphism $$ \beta(\hat\HH^2) \cong \overline{\HH^2}. $$ This is not hard to see once we observe that the disks used above to define neighborhood systems can be chosen to be ideal hyperbolic polygons. Their halfspace complements serve as neighborhood systems for points of $\partial\HH^2\setminus\hat\PP$. A sequence converges in $\overline{\HH^2}$ to a point $p\in \hat\PP$ if it is eventually contained in any union of a horoball centered at $p$ and two half-planes adjacent to $p$ on opposite sides. This is modeled exactly by the equivalence relation $\sim$. For part (2), let $D_0$ be a fundamental domain for $\pi_1(X)$ in $\hat X$, which may be chosen to be a disk with vertices at points of $\hat\PP$, and of finite $q$-diameter. Translates of $D_0$ can be glued to build a sequence of nested disks $D_n$ exhausting $\hat X$, each of which meets $\hat\PP$ in a finite set of vertices, and whose boundary is composed of arcs of bounded diameter between successive vertices. A complete $q$-geodesic ray $r$ either has finite length and terminates in a point of $\hat\PP$, or has infinite length in which case it leaves every compact set of $\hat X$, and visits each point of $\hat\PP$ at most once. Thus it must terminate in a point of $\ep(\hat X)$ in the Freudenthal space. We claim that this point cannot be $p^+$ or $p^-$ for $p\in\hat\PP$. If $r$ terminates in $p^+$, then for each disk $D_n$ ($n$ large) it must pass through the edge of $\partial D_n$ adjacent to $p$ on the side associated to $p^+$.
Any two such consecutive edges meet in $p$ at one of finitely many angles (images of corners of $D_0$), and hence the accumulated angle between edges goes to $\infty$ with $n$. If we replace these edges by their $q$-geodesic representatives, the angles still go to $\infty$. This means that $r$ contains infinitely many disjoint subsegments whose endpoints are a bounded distance from $p$, but this contradicts the assumption that $r$ is a geodesic ray. The image of $r$ in the quotient $\beta(\til X)$ therefore terminates in a point of $\hat\PP$ when it has finite length, and a point in $\partial\beta(\til X)\ssm \hat\PP$ otherwise. The same is true for both ends of a complete $q$-geodesic line $l$, and we note that both ends of $l$ cannot land on the same point because then we would have a sequence of segments $l_n\subset l$ of length going to $\infty$ with both endpoints of $l_n$ on the same edge or on two consecutive edges of $\partial D_n$, a contradiction to the fact that $l_n$ is a geodesic and the arcs in $\partial D_n$ have bounded $q$-length. Now let $x,y$ be two distinct points in $\partial\beta(\til X)$. Assume first that neither is in $\hat\PP$. Then for large enough $n$, they are in separate components of the complement of $D_n$. If we let $x_i \to x$ and $y_i\to y$ be sequences in $\beta(\til X)$, then eventually $x_i$ and $y_i$ are in the same components of the complement of $D_n$ as $x$ and $y$, respectively. The geodesic from $x_i$ to $y_i$ must therefore pass through the corresponding boundary segments of $D_n$ and in particular through $D_n$, so we can extract a convergent subsequence as $i\to\infty$. Letting $n\to\infty$ and diagonalizing we obtain a limiting geodesic which terminates in $x,y$ as desired. If $x\in \hat\PP$ or $y\in\hat\PP$ the same argument works except that we can take $x_i \equiv x$ or $y_i \equiv y$. This establishes part (2). Now let $l$ and $l'$ be two $q$-geodesics terminating in $x$ and $y$. 
If $x$ and $y$ are in $\hat\PP$ then $l=l'$ since the metric is CAT(0). If $x\notin \hat\PP$ then both $l$ and $l'$ pass through infinitely many segments of $\partial D_n$ on their way to $x$. Since these segments have uniformly bounded lengths, $l$ and $l'$ remain a bounded distance apart. If $y\in\hat\PP$ then again CAT(0) implies that $l=l'$, and if $y\notin\hat\PP$ then $l$ and $l'$ must cobound an infinite flat strip. This establishes part (3). \qedhere \end{proof} With \Cref{same compactification} in hand we can consider each complete $q$-geodesic line $l$ in $\hat X = \hat \HH^2$ as an arc in the closed disk $\overline{\HH^2}$, which by the Jordan curve theorem separates the disk ${\HH^2}$ into at least $2$ components. Each component is an open disk whose closure meets $\partial\HH^2$ in a subarc of one of the complementary arcs of the endpoints of $l$. We call the union of disks whose closures meet one of these complementary arcs of the endpoints of $l$ an {\em open side} $\openside{l}$ of $l$. The closure of each open side in $\overline \HH^2$ is then a connected union of closed disks, attached to each other along the points of $\hat\PP$ that $l$ meets on the circle. We call the closure of the open side $\openside{l}$ of $l$ in $\overline{\HH^2}$ the {\em side} $\side{l}$. Note that $\openside{l} = \int (\side{l} \cap \HH^2) = \side{l} \ssm (\partial \HH^2 \cup l)$, and if $\side{l}$ and $\side{l}'$ are the two sides of $l$, then $\side{l} \cap \side{l}' = l$. See \Cref{line-disks}. \realfig{line-disks}{A complete $q$-geodesic line $l$ and its endpoints on $\partial \HH^2$.} With this picture we can state the following: \begin{corollary} \label{cor:side_coherence} Let $a,b$ be disjoint arcs in $\HH^2$ with well-defined, distinct endpoints on $\partial \HH^2$ and let $a_q,b_q$ be $q$-geodesic lines with the same endpoints as $a$ and $b$, respectively. Then $b_q$ is contained in a single side of $a_q$. 
\end{corollary} \realfig{ab-intersect}{Disjoint arcs with their $q$-geodesic representatives.} \begin{proof} Letting $L$ and $R$ be the arcs of $\partial\HH^2$ minus the endpoints of $a$, the endpoints of $b$ must lie in one of them, say $L$, since $a$ and $b$ are disjoint. Since $a_q$ and $b_q$ are geodesics in the CAT$(0)$ space $\hat X$, their intersection is connected. If their intersection is empty, then the corollary is clear. Otherwise, $b_q\ssm a_q$ is one or two arcs, each with one endpoint on $a_q$ and the other on $L$. It follows that $b_q\ssm a_q$ is on one open side of $a_q$, and the corollary follows. \end{proof} \subsection*{Subsurfaces and projections in the flat metric} Let $Y \subset X$ be an essential compact subsurface, and let $X_Y=\til X/\pi_1(Y)$ be the associated cover of $X$. (Here we have identified $\pi_1(X)$ with the deck transformations of $\widetilde{X} \to X$ and fixed $\pi_1(Y)$ within its conjugacy class.) For any lamination $\lambda$ in $X$, we want to show that the projection $\pi_Y(\lambda)$ can be realized by subsegments of the $q$-geodesic representative of $\lambda$. Recall that $X$ is not necessarily fully-punctured. We say a boundary component of $Y$ is {\em puncture-parallel} if it bounds a disk in $\bar X \ssm Y$ that contains a single point of $\PP$. We denote the corresponding subset of $\PP$ by $\PP_Y$ and refer to them as the \emph{punctures} of $Y$. Let $\til{\PP}_Y$ denote the subset of punctures of $X_Y$ which are encircled by the boundary components of the lift of $Y$ to $X_Y$. In terms of the completed space $\bar X_Y$, $\til \PP_Y$ is exactly the set of completion points which have finite total angle. Let $\partial_0Y$ denote the union of the puncture-parallel components of $\partial Y$ and let $\partial'Y$ denote the rest. Observe that the components of $\partial_0 Y$ are in natural bijection with $\PP_Y$ and set $Y' = Y\ssm\partial_0Y$. 
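We remark in passing that the endpoint condition in \Cref{cor:side_coherence} is purely combinatorial: two chords of the circle with four distinct endpoints cross exactly when their endpoint pairs interleave, which is why disjointness of $a$ and $b$ forces both endpoints of $b$ into a single complementary arc of the endpoints of $a$. A minimal illustration of this criterion (Python; a toy model with ideal points encoded as angles, all function names ours):

```python
import math

def interleave(a, b):
    """Chords of a circle: a = (a1, a2), b = (b1, b2), endpoints given as
    angles in [0, 2*pi), all four distinct.  The chords cross exactly when
    the endpoints of b separate those of a, i.e. exactly one endpoint of b
    lies on the open arc from a1 to a2 traversed counterclockwise."""
    a1, a2 = a
    def on_ccw_arc(x):  # is x on the open counterclockwise arc a1 -> a2?
        return (x - a1) % (2 * math.pi) < (a2 - a1) % (2 * math.pi)
    return on_ccw_arc(b[0]) != on_ccw_arc(b[1])

# Linked endpoint pairs: the chords cross.
print(interleave((0.0, math.pi), (math.pi / 2, 3 * math.pi / 2)))  # True
# Both endpoints of b in one arc of a: disjoint chords.
print(interleave((0.0, math.pi), (math.pi / 4, math.pi / 2)))      # False
```

The criterion is symmetric in $a$ and $b$, matching the fact that crossing of geodesics with distinct endpoint pairs is a symmetric relation.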
Identifying $\til X$ with $\HH^2$, let $\Lambda\subset \partial\HH^2$ be the limit set of $\pi_1(Y)$, $\Omega = \partial\HH^2 \ssm \Lambda$, and $\hat\PP_Y\subset \Lambda$ the set of parabolic fixed points of $\pi_1(Y)$. Let $C(X_Y)$ denote the compactification of $X_Y$ given by $(\HH^2 \cup \Omega\cup \hat\PP_Y)/\pi_1(Y)$, adding a point for each puncture-parallel end of $X_Y$, and a circle for each of the other ends. Now given a lamination (or foliation) $\lambda$, realized geodesically in the hyperbolic metric on $X$, its lift to $X_Y$ extends to properly embedded arcs in $C(X_Y)$, of which the ones that are essential give $\pi_Y(\lambda)$. \Cref{same compactification} allows us to perform the same construction with the $q$-geodesic representative of $\lambda$. Note that the leaves we obtain may meet points of $\til \PP_Y$ in their interior, but a slight perturbation produces properly embedded lines in $X_Y$ which are properly isotopic to the leaves coming from $\lambda$. If $Y$ is an annulus the same construction works, with the observation that the ends of $Y$ cannot be puncture-parallel and hence $C(X_Y)$ is a closed annulus and the leaves have well-defined endpoints in its boundary. We have proved: \begin{lemma} \label{q arcs for AY} Let $Y\subset X$ be an essential subsurface. If $\lambda$ is a proper arc or lamination in $X$ then the lifts of its $q$-geodesic representatives to $X_Y$, after discarding inessential components, give representatives of $\pi_Y(\lambda)$. \end{lemma} \subsection*{$q$-convex hulls} We will need a flat-geometry analogue of the hyperbolic convex hull. The main idea is simple -- pull the boundary of the regular convex hull tight using $q$-geodesics. The only difficulty comes from the fact that these geodesics can pass through parabolic fixed points, and fail to be disjoint from each other, so the resulting object may fail to be an embedded surface. 
Our discussion is similar to Section $3$ of Rafi \cite{rafi2005characterization}, but the discussion there requires adjustments to handle correctly the incompleteness at punctures. As above, identify $\til X$ with $\HH^2$. Let $\Lambda \subset \partial \HH^2$ be a closed set and let ${\operatorname{CH}}(\Lambda)$ be the convex hull of $\Lambda$ in $\HH^2$. We define ${\operatorname{CH}}_q(\Lambda)$ as follows. Assume first that $\Lambda$ has at least 3 points. Each boundary geodesic $l$ of ${\operatorname{CH}}(\Lambda)$ has the same endpoints as a (biinfinite) $q$-geodesic $l_q$. By part (3) of \Cref{same compactification}, $l_q$ is unique unless it is part of a parallel family of geodesics, making a Euclidean strip. The plane is divided by $l_q$ into two sides as in the discussion before \Cref{cor:side_coherence}, and one of the sides, which we call $\side{l}$, meets $\partial \HH^2$ in a subset of the complement of $\Lambda$. Recall that $\side{l}$ is either a disk or a string of disks attached along puncture points. If $l_q$ is one of a parallel family of geodesics, we include this family in $\side{l}$. After deleting from $\hat X$ the interiors of $\side{l}$ for all $l$ in $\partial {\operatorname{CH}}(\Lambda)$ (which are disjoint by \Cref{cor:side_coherence}), we obtain ${\operatorname{CH}}_q(\Lambda)$, the $q$-convex hull. If $\Lambda$ has 2 points then ${\operatorname{CH}}_q(\Lambda)$ is the closed Euclidean strip formed by the union of $q$-geodesics joining those two points. \medskip Now fixing a subsurface $Y$ we can define a $q$-convex hull for the cover $X_Y$, by taking a quotient of the $q$-convex hull of the limit set $\Lambda_Y$ of $\pi_1(Y)$. This quotient, which we will denote by ${\operatorname{CH}}_q(X_Y)$, lies in the completion $\bar{X}_Y$. Because ${\operatorname{CH}}_q(X_Y)$ may not be homeomorphic to $Y$, we pay explicit attention to a marking map between $Y$ and its hull. 
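In the toy case where $\Lambda$ is a finite set, the combinatorics of ${\operatorname{CH}}(\Lambda)$ are especially transparent: the boundary geodesics join angularly consecutive points of $\Lambda$, and the side $\side{l}$ of each boundary geodesic meets the circle in the complementary arc between its two endpoints, which contains no other point of $\Lambda$. A small sketch of this bookkeeping (Python; ideal points as angles, names ours):

```python
def hull_boundary_edges(angles):
    """Ideal points on the circle, given as angles in [0, 2*pi), at least
    three of them.  The boundary geodesics of their convex hull join
    angularly consecutive points; each returned pair (a, b) bounds the
    counterclockwise arc from a to b, which avoids the other points and is
    where the side of that boundary geodesic meets the circle."""
    pts = sorted(angles)
    n = len(pts)
    return [(pts[i], pts[(i + 1) % n]) for i in range(n)]

# An ideal quadrilateral: four boundary geodesics, one per gap.
quad = hull_boundary_edges([0.0, 1.5, 3.0, 4.5])
print(quad)  # [(0.0, 1.5), (1.5, 3.0), (3.0, 4.5), (4.5, 0.0)]
```

For infinite $\Lambda$ (e.g. a limit set) the same picture holds with the gaps of $\Lambda$ playing the role of the consecutive pairs.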
Let $\hat\iota:Y \to X_Y$ be the lift of the inclusion map to the cover. \begin{lemma} \label{q tight} The lift $\hat\iota:Y\to X_Y$ is homotopic to a map $\hat\iota_q:Y\to \bar X_Y$ whose image is the $q$-hull ${\operatorname{CH}}_q(X_Y)$ such that \begin{enumerate} \item The homotopy $(h_t)_{t \in[0,1]}$ from $\hat\iota$ to $\hat\iota_q$ has the property that $h_t(Y) \subset X_Y$ for all $t \in [0,1)$. \item Each component of $\partial_0 Y$ is taken by $\hat\iota_q$ to the corresponding completion point of $\til{\PP}_Y$. \item If $Y$ is an annulus then the image of $\hat\iota_q$ is either a maximal flat cylinder in $\bar X_Y$ or the unique geodesic representative of the core of $Y$ in $\bar X_Y$. \item If $Y$ is not an annulus then each component $\gamma$ of $\partial' Y$ is taken by $\hat\iota_q$ to a $q$-geodesic representative in $\bar X_Y$. If there is a flat cylinder in the homotopy class of $\gamma$ then the interior of the cylinder is disjoint from $\hat\iota_q(Y)$. \item There is a deformation retraction $r:\bar X_Y \to \hat\iota_q(Y)$. For each component $\gamma$ of $\partial'Y$, the preimage $r^{-1}(\hat\iota_q(\gamma))$ intersects $X_Y$ in either an open annulus or a union of open disks joined in a cycle along points in their closures. \item If the interior $\int({\operatorname{CH}}_q(\Lambda_Y))$ is a disk then $\hat\iota_q$ is a homeomorphism from $Y' = Y\ssm\partial_0Y$ to its image. \end{enumerate} \end{lemma} \begin{proof} Let $\Gamma = \pi_1 Y$ and let $\Lambda =\Lambda_Y \subset \partial \HH^2$ denote the limit set of $\Gamma$. As usual, ${\operatorname{CH}}(\Lambda)/\Gamma$ can be identified with $Y' = Y \ssm \partial_0 Y$. After isotopy we may assume $\hat\iota:Y'\to {\operatorname{CH}}(\Lambda)/\Gamma$ is this identification. First assume that $Y$ is not an annulus. Form ${\operatorname{CH}}_q(\Lambda)$ as above, and for a boundary geodesic $l$ of ${\operatorname{CH}}(\Lambda)$ define $l_q$ and its side $\side{l}$ as in the discussion above. 
The quotient of $l_q$ is a geodesic representative of a component of $\partial Y$, and the quotient of the open side $\openside{l}$ in $X_Y$ is either an open annulus or a union of open disks joined in a cycle along points in their completion. The $q$-geodesic may pass through points of $\hat \PP$, so that there is a homotopy from $l$ to $l_q$ rel endpoints which stays in $\HH^2$ until the last instant. We may equivariantly deform the identity to a map ${\operatorname{CH}}(\Lambda) \to {\operatorname{CH}}_q(\Lambda)$, which takes each $l$ to $l_q$: since ${\operatorname{CH}}_q(\Lambda)$ is contractible, it suffices to give a $\Gamma$-invariant triangulation of ${\operatorname{CH}}(\Lambda)$ and define the homotopy successively on the skeleta. This homotopy descends to a map from $Y'$ to ${\operatorname{CH}}_q(\Lambda)/\Gamma$, and can be chosen so that the puncture-parallel boundary components map to the corresponding points of $\PP_Y$. This gives the desired map $\hat\iota_q$ and establishes properties (1-4). Using the description of the sides $\side{l}$, we may equivariantly retract $\overline\HH^2$ to ${\operatorname{CH}}_q(\Lambda)$, giving rise to the retraction $r$ of part (5). Finally, if the interior of ${\operatorname{CH}}_q(\Lambda)$ is a disk, then its quotient is a surface. Our homotopy yields a homotopy-equivalence of $Y'$ to this surface which preserves peripheral structure and can therefore be deformed rel boundary to a homeomorphism. We let $\hat\iota_q$ be this homeomorphism, giving part $(6)$. When $Y$ is a (nonperipheral) annulus, $\Lambda_Y$ is a pair of points and we recall from above that ${\operatorname{CH}}_q(\Lambda)$ is either a flat strip in $\hat{X}$ which descends to a flat cylinder in $\bar X_Y$, or it is a single geodesic. The proof in the annular case now proceeds exactly as above. 
\end{proof} Let $\iota_q : Y \to \bar X$ be the composition of $\hat \iota_q$ with the (branched) covering $\bar X_Y \to \bar X$ and set $\partial_q Y = \iota_q(\partial' Y)$. Note that this will be a 1-complex of saddle connections and not necessarily a homeomorphic image of $\partial' Y$. \subsection{Fibered faces of the Thurston norm} A fibration $\sigma\colon M\to S^1$ of a finite-volume hyperbolic 3-manifold $M$ over the circle comes with the following structure: there is an integral cohomology class in $H^1(M;\mathbb{Z})$ represented by $\sigma_*:\pi_1M\to \mathbb{Z}$, which is the Poincar\'e dual of the fiber $F$. There is a representation of $M$ as a quotient $F\times\mathbb{R}/\Phi$ where $\Phi(x,t) = (f(x),t-1)$ and $f:F\to F$ is called the monodromy map. This map is pseudo-Anosov and has stable and unstable (singular) measured foliations $\lambda^+$ and $\lambda^-$ on $F$. Finally there is the suspension flow inherited from the natural $\mathbb{R}$ action on $F\times\mathbb{R}$, and suspensions $\Lambda^\pm$ of $\lambda^\pm$ which are flow-invariant 2-dimensional foliations of $M$. All these objects are defined up to isotopy. The fibrations of $M$ are organized by the {\em Thurston norm} $||\cdot||$ on $H^1(M;\mathbb{R})$ \cite{thurston1986norm} (see also \cite{candel2000foliations}). This norm has a polyhedral unit ball $B$ with the following properties: \begin{enumerate} \item Every cohomology class dual to a fiber is in the cone $\mathbb{R}_+\mathcal{F}$ over a top-dimensional open face $\mathcal{F}$ of $B$. \item If $\mathbb{R}_+\mathcal{F}$ contains a cohomology class dual to a fiber then {\em every} irreducible integral class in $\mathbb{R}_+\mathcal{F}$ is dual to a fiber. $\mathcal{F}$ is called a {\em fibered face} and its irreducible integral classes are called fibered classes. \item For a fibered class $\omega$ with associated fiber $F$, $||\omega||=-\chi(F)$. 
\end{enumerate} In particular if $\dim H^1(M;\mathbb{R})\ge 2$ and $M$ is fibered then there are infinitely many fibrations, with fibers of arbitrarily large complexity. We will abuse terminology a bit by saying that a fiber (rather than its Poincar\'e dual) is in $\mathbb{R}_+\mathcal{F}$. The fibered faces also organize the suspension flows and the stable/unstable foliations: If $\mathcal{F}$ is a fibered face then there is a single flow $\psi$ and a single pair $\Lambda^\pm$ of foliations whose leaves are invariant by $\psi$, such that {\em every} fibration associated to $\mathbb{R}_+\mathcal{F}$ may be isotoped so that its suspension flow is $\psi$ up to a reparameterization, and the foliations $\lambda^\pm$ for the monodromy of its fiber $F$ are $\Lambda^\pm\cap F$. These results were proven by Fried \cite{fried1982geometry}; see also McMullen \cite{mcmullen2000polynomial}. \subsection*{Veering triangulation of a fibered face} A key fact for us is that the veering triangulation of the manifold $M$ depends only on the fibered face $\mathcal{F}$ and not on a particular fiber. This was known to Agol for his original construction (see sketch in \cite{agol-overflow}), but Gu\'eritaud's construction makes it almost immediate. \begin{proposition}[Invariance of $\tau$] \label{prop:invariance} Let $M$ be a hyperbolic 3-manifold with fully-punctured fibered face $\mathcal{F}$. Let $S_1$ and $S_2$ be fibers of $M$ each contained in $\mathbb{R}_+ \mathcal{F}$ and let $\tau_1$ and $\tau_2$ be the corresponding veering triangulations of $M$. Then, after an isotopy preserving transversality to the suspension flow, $\tau_1 = \tau_2$. \end{proposition} \begin{proof} The suspension flow associated to $\mathcal{F}$ lifts to the universal cover $\til M$, and any fiber $S$ in $\mathbb{R}_+\mathcal{F}$ is covered by a copy of its universal cover $\til S$ in $\til M$ which meets every flow line transversely, exactly once. 
Thus we may identify $\til S$ with the leaf space ${\mathcal L}$ of this flow. The lifts $\til\Lambda^\pm$ of the suspended laminations project to the leaf space where they are identified with the lifts $\til\lambda^\pm$ of $\lambda^\pm$ to $\til S$. The foliated rectangles used in the construction of $\tau$ from $\til{q}$ on $\til{S}$ depend only on the (unmeasured) foliations $\til\lambda^\pm$. Thus the abstract cell structure of $\tau$ depends only on the fibered face $\mathcal{F}$ and not on the fiber. The map $\pi$ from each tetrahedron to its rectangle does depend a bit on the fiber, as we choose $q$-geodesics for the edges (and the metric $q$ depends on the fiber); but the edges are always mapped to arcs in the rectangle that are transverse to both foliations. It follows that there is a transversality-preserving isotopy between the triangulations associated to any two fibers. \end{proof} \subsection*{Fibers and projections} We next turn to a few lemmas relating subsurface projections over the various fibers in a fixed face of the Thurston norm ball. \begin{lemma}\label{lem:subgroup_projection} If $\mathcal{F}$ is a fibered face for $M$ and $Y \to S$ is an infinite covering where $S$ is a fiber in $\mathbb{R}_+\mathcal{F}$ and $\pi_1(Y)$ is finitely generated, then the projection distance $d_Y(\lambda^-,\lambda^+)$ depends only on $\mathcal{F}$ and the conjugacy class of the subgroup $\pi_1(Y) \le \pi_1(M)$ (and not on $S$). \end{lemma} Note that $Y$ need not correspond to an embedded subsurface of $S$. \begin{proof} As in the proof of \Cref{prop:invariance}, $\til S$ can be identified with the leaf space ${{\mathcal L}}$ of the flow in $\til M$. The action of $\pi_1(M)$ on $\til M$ descends to ${\mathcal L}$, and thus the cover $Y = \til S/\pi_1(Y)$ is identified with the quotient ${{\mathcal L}}/\pi_1(Y)$ and the lifts of $\lambda^\pm$ to $Y$ are identified with the images of $\til\Lambda^\pm$ in ${{\mathcal L}}/\pi_1(Y)$. 
Thus the projection $d_Y(\lambda^+,\lambda^-)$ can be obtained without reference to the fiber $S$. \end{proof} This lemma justifies the notation $d_Y(\Lambda^+,\Lambda^-)$ used in the introduction. We will also require the following lemma, where we allow maps homotopic to fibers which are not necessarily embeddings. \begin{lemma}\label{lem:flow_to_fiber_2} Let $F$ be a fiber of $M$. Let $Y\subset M$ be a compact surface and let $h \colon F \to M$ be a map which is homotopic to the inclusion. Suppose that $h(F) \cap Y$ is inessential in $Y$, i.e. each component of the intersection is homotopic into the ends of $Y$. Then the image of $\pi_1(Y)$ is contained in $\pi_1(F) \vartriangleleft \pi_1(M)$. \end{lemma} \begin{proof} Let $\zeta$ be the cohomology class dual to $F$. Since $h(F)$ meets $Y$ inessentially, every loop in $Y$ can be pushed off of $h(F)$ so $\zeta$ vanishes on $\pi_1(Y)$. But the kernel of $\zeta$ in $\pi_1(M)$ is exactly $\pi_1(F)$, so the image of $\pi_1(Y)$ is in $\pi_1(F)$. \end{proof} \section{Rectangle and triangle hulls} \label{hulls} In this section we discuss a number of constructions that associate a configuration of $\tau$-edges to a saddle connection of the quadratic differential $q$. These will be used later to show that subsurfaces with large projection are compatible with the veering triangulation in the appropriate sense. As a byproduct of our investigation, we prove the (to us) unexpected result (\Cref{th:total geodesic}) that the edges of the veering triangulation form a totally geodesic subgraph of the curve and arc graph of $X$. We emphasize that in \Cref{sec:rectangles_along_saddles} and \Cref{sec:t_hulls}, the surface $X$ is not necessarily fully-punctured. Thus by $\tau$ we mean the veering triangulation associated to the fully-punctured surface $X \ssm \mathrm{sing}(q)$. We will say that a saddle connection of $X$ is a {\em $\tau$-edge} if its interior is an edge of this veering triangulation. 
In particular this means that its lift to $\til X$ spans a singularity-free rectangle. \subsection{Maximal rectangles along a saddle connection} \label{sec:rectangles_along_saddles} Let $\sigma$ be a saddle connection, for the moment in the completed universal cover $\hat X$. Consider the set $\mathcal{R}(\sigma)$ of all rectangles which are {\em maximal with respect to the property that $\sigma$ passes through a diagonal}. Thus each $R\in\mathcal{R}(\sigma)$ contains singularities in at least two edges. Let $h(R)$ be the convex hull in $R$ of the singularities in the boundary of $R$ and let $h^{(1)}(R)$ denote its $1$-skeleton (see \Cref{rect-hulls}). \realfig{rect-hulls}{The eight possible (up to symmetry) convex hulls $h(R)$, assuming at most one singularity per leaf of $\lambda^\pm$. The saddle connection $\sigma$ is in blue.} Let $$ \rhull(\sigma) = \bigcup \{ h^{(1)}(R): R\in \mathcal{R}(\sigma) \}. $$ See \Cref{r-example} for an example. Note that all the saddle connections in $\rhull(\sigma)$ are edges of $\tau$ --- each of these arcs spans a singularity-free rectangle by construction. Moreover, $\rhull(\sigma) = \{\sigma\}$ if $\sigma$ is itself a $\tau$-edge. \realfig{r-example}{Example of $\rhull(\sigma)$ (in red)} The following lemma will play an important role throughout this paper. \begin{lemma}\label{r disjoint} If saddle connections $\sigma_1$ and $\sigma_2$ have no transversal intersections then neither do $\rhull(\sigma_1)$ and $\rhull(\sigma_2)$. \end{lemma} \begin{proof} Say that two rectangles meet {\em crosswise} if their interiors intersect, and no corners of one are in the interior of the other. Note that when two distinct rectangles meet crosswise, any two of their diagonals intersect. We say that the rectangles meet {\em properly crosswise} if they also do not share any corners, in which case any two diagonals intersect in the interior. 
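Since all rectangles here are aligned with the horizontal and vertical foliations, the crossing conditions of the following proof are easy to make concrete in coordinates. A minimal sketch (Python; a flat singularity-free chart modeled on $\mathbb{R}^2$, with all names and coordinates ours):

```python
# Toy model: an axis-aligned rectangle is ((x0, y0), (x1, y1)) with
# x0 < x1 and y0 < y1.  Following the text: two rectangles meet
# "crosswise" if their interiors intersect and no corner of one lies in
# the interior of the other; "properly crosswise" if in addition they
# share no corners.

def corners(r):
    (x0, y0), (x1, y1) = r
    return [(x0, y0), (x0, y1), (x1, y0), (x1, y1)]

def interiors_meet(r, s):
    (ax0, ay0), (ax1, ay1) = r
    (bx0, by0), (bx1, by1) = s
    return max(ax0, bx0) < min(ax1, bx1) and max(ay0, by0) < min(ay1, by1)

def in_interior(p, r):
    (x0, y0), (x1, y1) = r
    return x0 < p[0] < x1 and y0 < p[1] < y1

def crosswise(r, s):
    return (interiors_meet(r, s)
            and not any(in_interior(c, s) for c in corners(r))
            and not any(in_interior(c, r) for c in corners(s)))

def properly_crosswise(r, s):
    return crosswise(r, s) and not (set(corners(r)) & set(corners(s)))

# A tall rectangle and a wide rectangle meeting in a plus shape:
tall = ((0, -1), (1, 2))
wide = ((-1, 0), (2, 1))
print(crosswise(tall, wide), properly_crosswise(tall, wide))  # True True

# Overlapping rectangles with a corner inside: not crosswise.
print(crosswise(((0, 0), (2, 2)), ((1, 1), (3, 3))))  # False
```

The shared-corner case excluded by "properly crosswise" corresponds to configuration {\em (iii)} in the proof below.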
Let $\tau_1$ and $\tau_2$ be saddle connections in $\rhull(\sigma_1)$ and $\rhull(\sigma_2)$, respectively, and suppose that they intersect transversely. Hence their spanning rectangles $Q_1$ and $Q_2$ must cross as in \Cref{edges-cross}. Assume that $Q_1$ is the taller and $Q_2$ the wider. Now let $R_1$ and $R_2$ be the rectangles of $\mathcal{R}(\sigma_1)$ and $\mathcal{R}(\sigma_2)$ containing $Q_1$ and $Q_2$, respectively. Because of the singularities in the corners of $Q_1$ and $Q_2$, $R_2$ is no taller than $Q_1$ and $R_1$ is no wider than $Q_2$. Hence $R_1$ and $R_2$ meet crosswise. (See \Cref{crossing-rects}). \realfig{crossing-rects}{Three examples of the crossing pattern. The rectangles $R_1$ and $R_2$ are in blue, $\tau_1$ and $\tau_2$ are in red, and $Q_1$ and $Q_2$ are shaded. In {\em (i)} and {\em (ii)} the crossing is proper. In {\em (iii)} the corner $c$ is shared.} If they met properly crosswise then $\sigma_1$ and $\sigma_2$ would have an interior intersection, which is a contradiction. Hence $R_1$ and $R_2$ share a corner $c$. But the edges meeting at $c$ would have to pass through boundary edges of $Q_1$ and $Q_2$. Those edges already have the singularities of $\tau_1$ and $\tau_2$, and so $c$ cannot be a singularity. Thus if $c$ is the intersection of the diagonals contained in $\sigma_1$ and $\sigma_2$ it would be in the interior of both saddle connections, again a contradiction. We conclude that $\tau_1$ and $\tau_2$ cannot cross. \end{proof} An immediate consequence of \Cref{r disjoint} is that we can carry out the construction downstairs: If $\sigma$ is a saddle connection in $\bar X$ we can construct $\rhull(\hat \sigma)$ for each of its lifts $\hat \sigma$ to $\hat X$, and the lemma tells us none of them intersect transversally. Thus the construction projects downstairs to give a collection of $\tau$-edges with disjoint interior. 
Moreover if $K$ is {\em any} collection of saddle connections with disjoint interiors then $\rhull(K)$ makes sense as a subcomplex of $\tau$ supported on some section by \Cref{lem:extension}. Hence, we will continue to use $\rhull(\cdot)$ to denote the corresponding map on saddle connections of $\bar X$. We remark that although $\rhull(\cdot)$ takes collections of saddle connections with disjoint interiors to collections of $\tau$-edges with disjoint interiors, it may do so with multiplicity. \subsection{Triangle hulls} \label{sec:t_hulls} Now let us consider a similar operation that uses right triangles instead of rectangles, and associates to a transversely oriented saddle connection in the universal cover a homotopic path of saddle connections. If $\sigma$ is a saddle connection in $\hat X$ equipped with a transverse orientation, let $\cT(\sigma)$ denote the collection of Euclidean right triangles which are {\em maximal with respect to the property that they are attached along the hypotenuse to $\sigma$ along the side given by its transverse orientation}. A triangle $t$ in $\cT(\sigma)$ must have exactly one singularity in each of its legs, and so the convex hull $h(t)$ of these two singularities is a single saddle connection. The set $\cT(\sigma)$ must be finite, and its hypotenuses cover $\sigma$ in a sequence of non-nested intervals, ordered by their left (or right) endpoints. See \Cref{t-example}. Let $\thull(\sigma)$ be the union of segments $h(t)$ for $t\in\cT(\sigma)$. \realfig{t-example}{An example of $\thull(\sigma)$ and $P(\sigma)$.} \begin{lemma}\label{thull structure} Either $\thull(\sigma) = \sigma$ or $\sigma\cup\thull(\sigma)$ is the boundary of an embedded Euclidean polygon $P(\sigma)$ in $\hat X$ which is foliated by arcs of $\lambda^\pm$. \end{lemma} \begin{proof} Suppose that $t$ and $t'$ are triangles of $\cT(\sigma)$ and $p\in t\cap t'$ is in the interior of $t$. 
Let $l$ and $l'$ be the vertical line segments in $t$ and $t'$, respectively, joining $p$ to the respective hypotenuses ($l'$ could be a single point). If $l$ and $l'$ leave $p$ in opposite directions then $l\cup l'$ is a vertical geodesic connecting two points of $\sigma$, which contradicts the uniqueness of geodesics in $\hat X$. If they leave $p$ in the same direction but are not equal, then their difference is a vertical geodesic with endpoints on $\sigma$, again a contradiction. We conclude that if $t$ and $t'$ intersect they do so on a common subarc of their hypotenuses. This subarc spans a (nonmaximal) right triangle which is exactly $t\cap t'$. Now given $t\in\cT(\sigma)$, the vertical and horizontal legs of $t$ each contain a single singularity of $\hat X$; denote these singularities by $v_t$ and $h_t$, respectively. By construction of $\cT(\sigma)$, there is a unique triangle $t' \in \cT(\sigma)$ such that $h_{t'} = v_{t}$, unless $v_{t}$ is an endpoint of $\sigma$. Hence, given an orientation on $\sigma$, the edges of $\thull(\sigma)$ come with a natural ordering induced by moving along $\sigma$. By our observations above, we see that $\thull(\sigma)$ is an embedded arc and meets $\sigma$ only at its endpoints. Since $\hat X$ is contractible, $\sigma$ and $\thull(\sigma)$ must be homotopic and hence cobound a disk $P(\sigma)$. In fact this disk is foliated by both $\lambda^+$ and $\lambda^-$, as we can see by noting that each edge of $\thull(\sigma)$ cobounds a vertical (similarly a horizontal) strip with a segment in $\sigma$. Hence $P(\sigma)$ admits an isometry to a polygon in $\mathbb{R}^2$. \end{proof} Let us define a map $\thull^+_\sigma:\sigma\to \thull(\sigma)$ (resp. $\thull^-_\sigma$) which is the result of pushing the points of $\sigma$ along the vertical (resp. horizontal) foliation to the other side of $P(\sigma)$. 
If $f\colon I \to \hat X$ is an embedding of an oriented 1-manifold $I$ that parametrizes some union of saddle connections, we let \begin{equation}\label{thull f} \thull^+ f\colon I \to \hat X \end{equation} be the map that sends each $p\in I$ to $\thull^+_\sigma(f(p))$, where $\sigma$ is the saddle connection containing $f(p)$ with transverse orientation induced by the orientation on $I$. By composing with covering maps we can use the same notation for the resulting operation in quotients $\hat X_Y$ or $\bar X$. Unlike the rectangle hulls, the edges of $\thull(\sigma)$ are not necessarily $\tau$-edges. (See the upper-right red saddle connection in \Cref{t-example}.) Moreover, the $\thull$-version of \Cref{r disjoint} is in general not true. That is, the image of $\thull$ may not project to an embedded complex in $\bar X$ since $\sigma_1$ and $\sigma_2$ can be disjoint while $\thull(\sigma_1)$ and $\thull(\sigma_2)$ cross. However, we do have the following: \begin{lemma}\label{disjoint_thulls} Let $\sigma,\sigma'$ be saddle connections in $\hat X$ with disjoint interiors. Let $l$ be an arc of $\lambda^+$ with endpoints on $\sigma$ and $\sigma'$, and give $\sigma$ and $\sigma'$ the transverse orientation pointing toward the interior of $l$. Then the polygons $P(\sigma)$ and $P(\sigma')$ of $\hat X$ (from \Cref{thull structure}) have disjoint interiors. \end{lemma} \begin{proof} Suppose towards a contradiction that there is a point $p$ which is in the interior of each of the polygons $P = P(\sigma)$ and $P' = P(\sigma')$. Since $P$ and $P'$ are foliated by $\lambda^+$, let $m$ and $m'$ be the arcs of $\lambda^+$ which are properly embedded in $P$ and $P'$ respectively, and pass through $p$. Orient $m$ so that it begins in $\sigma$, and $m'$ so that it terminates in $\sigma'$. These orientations agree at $p$: if they did not we would obtain a contradiction by applying Gauss--Bonnet to the circuit passing through $m$,$\sigma$,$l$,$\sigma'$ and $m'$. 
Thus, the union $J=m\cup m'$ is an interval in a leaf of $\lambda^+$ with endpoints on $\sigma$ and $\sigma'$, with $p$ in the interior of $m\cap m'$. (If $p$ were in $l$ already then we would have $J=l$.) Orienting $J$ as $[y,y']$ where $y\in \sigma$ and $y'\in\sigma'$, we can write $m=[y,x]$ and $m'=[x',y']$, where $x = J\cap \thull(\sigma)$ and $x'= J\cap \thull(\sigma')$. These points appear, in order along $J$, as $y,x',p,x,y'$. \realfig{disjoint-P}{The point $p$ cannot lie in the interior of both $P(\sigma)$ and $P(\sigma')$.} Let $t$ and $t'$ be the triangles of $\cT(\sigma)$ and $\cT(\sigma')$ containing $x$ and $x'$, respectively. Then $p\in t\cap t'$. Let $\kappa$ and $\kappa'$ be the saddle connections of $\thull(\sigma)$ and $\thull(\sigma')$ spanning $t$ and $t'$, respectively (see \Cref{disjoint-P}). The fact that the endpoints of $\kappa$ and $\kappa'$ are disjoint from the intersection of $t$ and $t'$ implies that $\kappa\cap J$, which is $x$, lies below $\kappa'\cap J$, which is $x'$. This contradicts the ordering of the points in $J$. \qedhere \end{proof} \subsection{Retractions in $\A$} \label{sec:totally_geo} In this subsection, $X$ is fully-punctured. Let $\A(\tau) \subset \A(X)$ be the span of the vertices of $\A(X)$ which are represented by edges of $\tau$. We will construct a \emph{coarse 1-Lipschitz retraction} from $\A(X)$ to $\A(\tau)$. By this, we mean a coarse map which takes diameter $\le 1$ sets to diameter $\le 1$ sets and restricts to the identity on the 0-skeleton of $\A(\tau) \subset \A(X)$. First, let $\mathcal{SC}(q) \subset \A(X)$ be the set of arcs of $X$ which can be realized by saddle connections of $q$. Hence, $\A(\tau) \subset \mathcal{SC}(q) \subset \A(X)$. For any $a \in \A(X)$ define $\mathbf{s}(a)\subset \mathcal{SC}(q)$ as follows: If $a_q$ is the $q$-geodesic representative of $a$ in $\bar X$, then let $\mathbf{s}(a)$ be the set of saddle connections of $q$ composing $a_q$. 
If $a$ is a cylinder curve of $q$, then we take $\mathbf{s}(a)$ to be the set of saddle connections appearing in the boundary of the maximal cylinder of $a$. Note that if $a \in \A(X)$ is itself represented by a saddle connection of $q$, then $\mathbf{s}(a)=\{a\}$. The following lemma shows that $\mathbf{s}$ is well-defined and is a coarse $1$-Lipschitz retraction, in the above sense. \begin{lemma} \label{saddle_proj} For adjacent vertices $a,b \in \A(X)$, the vertices of $\mathbf{s}(a)$ and $\mathbf{s}(b)$ are pairwise adjacent or equal. \end{lemma} \begin{proof} Recall that adjacency of vertices in $\A(X)$ corresponds to disjointness of their hyperbolic geodesic representatives, and for vertices realized by saddle connections, this corresponds to the lack of transverse intersection of their interiors. But if any arcs of $\mathbf{s}(a)$ and $\mathbf{s}(b)$ have crossing interiors, \Cref{cor:side_coherence} implies that the hyperbolic geodesic representatives of $a$ and $b$ must cross as well. The lemma follows. \end{proof} Combining this lemma with \Cref{r disjoint} gives us the proof of \Cref{th:total geodesic}, which we restate here in somewhat more precise language: \restate{th:total geodesic}{ {\rm (Geodesically connected theorem).} Let $(X,q)$ be fully punctured with associated veering triangulation $\tau$. The composition $\rhull \circ \mathbf{s} \colon \A(X) \to \A(\tau)$ is a coarse $1$--Lipschitz retraction in the sense that it takes diameter $\le 1$ sets to diameter $\le 1$ sets, and is the identity on the $0$-skeleton of $\A(\tau)$. Hence, any two vertices in $\A(\tau)$ are joined by a geodesic of $\A(X)$ that lies in $\A(\tau)$.} \begin{proof} \Cref{saddle_proj} says that $\mathbf{s}:\A(X)\to\mathcal{SC}(q)$ is a coarse $1$-Lipschitz retraction. \Cref{r disjoint}, interpreted as a statement about the arc and curve complexes, says the same for $\rhull:\mathcal{SC}(q)\to\A(\tau)$. The theorem follows.
\end{proof} \section{Introduction} \label{intro} Let $M$ be a 3-manifold fibering over the circle with fiber $S$ and pseudo-Anosov monodromy $f$. The stable/unstable laminations $\lambda^+,\lambda^-$ of $f$ give rise to a function on the essential subsurfaces of $S$, $$ Y \mapsto d_Y(\lambda^+,\lambda^-), $$ where $d_Y$ denotes distance in the curve and arc complex of $Y$ between the lifts of $\lambda^\pm$ to the cover of $S$ homeomorphic to $Y$. This distance function plays an important role in the geometry of the mapping class group of $S$ \cite{MM2,BKMM, masur2013geometry}, and in the hyperbolic geometry of the manifold $M$ \cite{ECL1, ELC2}. In this paper we study the function $d_Y$ when $M$ is fixed and the fibration is varied. The fibrations of a given manifold are organized by the faces of the unit ball of Thurston's norm on $H_2(M,\partial M)$, where each {\em fibered face} $\mathcal{F}$ has the property that every irreducible integral class in the open cone $\RR_+\mathcal{F}$ represents a fiber. There is a pseudo-Anosov flow which is transverse to every fiber represented by $\mathcal{F}$, and whose stable/unstable laminations $\Lambda^\pm\subset M$ intersect each fiber to give the laminations associated to its monodromy. With this we note that the projection distance $d_Y(\lambda^+,\lambda^-)$ can be defined for any subsurface $Y$ of any fiber in $\mathcal{F}$. We use $d_Y(\Lambda^+,\Lambda^-)$ to denote this quantity. Our main results give explicit connections between $d_Y$ and the {\em veering triangulation} of $M$, introduced by Agol \cite{agol2011ideal} and refined by Gu\'eritaud \cite{gueritaud}, with the main feature being that when $d_Y$ satisfies explicit lower bounds, a thickening of $Y$ is realized as an embedded subcomplex of the veering triangulation. In this way, the ``complexity'' of the monodromy $f$ is visible directly in the triangulation in a way that is independent of the choice of fiber in the face $\mathcal{F}$. 
This is in contrast with the results of \cite{ELC2}, in which the estimates relating $d_Y$ to the hyperbolic geometry of $M$ are heavily dependent on the genus of the fiber. The results are cleanest in the setting of a {\em fully-punctured} fiber, that is, when the singularities of the monodromy $f$ are assumed to be punctures of the surface $S$ (one can obtain such examples by starting with any $M$ and puncturing the singularities and their flow orbits). All fibers in a face $\mathcal{F}$ are fully-punctured when any one is, and in this case we say that $\mathcal{F}$ is a {\em fully-punctured face.} In this setting, $M$ is a cusped manifold and the veering triangulation $\tau$ is an ideal triangulation of $M$. We obtain bounds on $d_W(\Lambda^+,\Lambda^-)$ that hold for $W$ in any fiber of a given fibered face: \begin{theorem}[Bounding projections over a fibered face] \label{th:bounding projections} Let $M$ be a hyperbolic 3-manifold with fully-punctured fibered face $\mathcal{F}$ and veering triangulation $\tau$. For any essential subsurface $W$ of any fiber of $\mathcal{F}$, \[ \alpha \cdot (d_W(\Lambda^- ,\Lambda^+) -\beta) < |\tau|, \] where $|\tau|$ is the number of tetrahedra in $\tau$, $\alpha = 1$ and $\beta = 10$ when $W$ is an annulus and $\alpha = 3|\chi(W)|$ and $\beta = 8$ when $W$ is not an annulus. \end{theorem} Note that this means that $d_W \le |\tau| +10$ for each subsurface $W$, no matter which fiber $W$ lies in. Further, the complexity $|\chi(W)|$ of any subsurface $W$ of any fiber of $\mathcal{F}$ with $d_W(\Lambda^+,\Lambda^-) \ge 9$ is also bounded in terms of $M$ alone.
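Both remarks follow by unwinding the inequality of \Cref{th:bounding projections} in each case. When $W$ is an annulus we have $\alpha = 1$ and $\beta = 10$, so
\[
d_W(\Lambda^-,\Lambda^+) \;<\; |\tau| + 10,
\]
and when $W$ is nonannular, dividing by $\alpha = 3|\chi(W)| \ge 3$ gives
\[
d_W(\Lambda^-,\Lambda^+) \;<\; \frac{|\tau|}{3|\chi(W)|} + 8 \;\le\; \frac{|\tau|}{3} + 8.
\]
For the second remark, if $W$ is nonannular with $d_W(\Lambda^-,\Lambda^+) \ge 9$, then $d_W(\Lambda^-,\Lambda^+) - 8 \ge 1$, so the theorem yields $3|\chi(W)| \le 3|\chi(W)|\big(d_W(\Lambda^-,\Lambda^+) - 8\big) < |\tau|$, that is, $|\chi(W)| < |\tau|/3$.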
In addition, given one fiber with a collection of subsurfaces of large $d_Y$, we obtain control over the appearance of high-distance subsurfaces in all other fibers: \begin{theorem}[Subsurface dichotomy] \label{th:sub_dichotomy_fully_punctured} Let $M$ be a hyperbolic 3-manifold with fully-punctured fibered face $\mathcal{F}$ and suppose that $S$ and $F$ are each fibers in $\mathbb{R}_+\mathcal{F}$. If $W$ is a subsurface of $F$, then either $W$ is isotopic along the flow to a subsurface of $S$, or $$3|\chi(S)| \ge d_W(\Lambda^-,\Lambda^+) -\beta,$$ where $\beta =10$ if $W$ is an annulus and $\beta = 8$ otherwise. \end{theorem} One can apply this theorem with $S$ taken to be the smallest-complexity fiber in $\mathcal{F}$. In this case there is some finite list of ``large'' subsurfaces of $S$, and for all other fibers and all subsurfaces $W$ with $d_W$ sufficiently large, $W$ is already accounted for on this finite list. For a sample application of \Cref{th:sub_dichotomy_fully_punctured}, let $W$ be an essential annulus with core curve $w$ in a fiber $F$ of $M$ and suppose that $d_W(\Lambda^-,\Lambda^+) \ge K$ for some $K > 10$. (We note that it is easy to construct explicit examples of $M$ with $d_W(\Lambda^-,\Lambda^+)$ as large as one wishes by starting with a pseudo-Anosov homeomorphism of $F$ with large twisting about the curve $w$.) If $w$ is trivial in $H_1(M)$, then \Cref{th:sub_dichotomy_fully_punctured} (or more precisely \Cref{always subsurface}) implies that $w$ is actually isotopic to a simple closed curve in \emph{every} fiber in the open cone $\RR_+\mathcal{F}$ containing $F$. When $w$ is nontrivial in $H_1(M)$ it determines a codimension-$1$ hyperplane $P_w$ in $H^1(M) = H_2(M,\partial M)$ consisting of cohomology classes which vanish on $w$. 
For each fiber $S$ of $\RR_+\mathcal{F}$, either $S$ is contained in $P_w$, in which case $w$ is isotopic to a simple closed curve in $S$ as before, or $S$ lies outside of $P_w$ and $|\chi(S)| \ge \frac{K-10}{3}$. We remark that the second alternative is non-vacuous so long as $H^1(M)$ has rank at least 2. The general (non-fully-punctured) setting is also approachable with our techniques, but a number of complications arise and the connection to the veering triangulation of the fully-punctured manifold is much less explicit. An extension of the results in this paper to the general setting will be the subject of a subsequent paper. \subsection*{Pockets in the veering triangulation} When $Y$ is a subsurface of a fiber $X$ in $\mathcal{F}$ and $d_Y(\Lambda^+,\Lambda^-)>1$, we show (\Cref{thm: tau-compatible}) that $Y$ is realized simplicially in the veering triangulation lifted to the cover $X\times\mathbb{R}$. If $d_Y(\Lambda^+,\Lambda^-)$ is even larger, then this realization can be thickened to a ``pocket'', which is a simplicial region bounded by two isotopic copies of $Y$. Under sufficiently strong hypotheses, this pocket can be made to embed in $M$ as well, and this is our main tool for connecting arc complexes to the veering triangulation and establishing Theorems \ref{th:bounding projections} and \ref{th:sub_dichotomy_fully_punctured}: \begin{theorem}\label{thm:pocket summary} Suppose $Y$ is a subsurface of a fiber $X$ with $d_Y(\lambda^-,\lambda^+) > \beta$, where $\beta=8$ if $Y$ is nonannular and $\beta=10$ if $Y$ is an annulus. Then there is an embedded simplicial pocket $V$ in $M$ isotopic to a thickening of $Y$, and with $d_Y(V^-,V^+) \ge d_Y(\lambda^-,\lambda^+) - \beta$. \end{theorem} In this statement, $V^+$ and $V^-$ refer to the triangulations of the top and bottom surfaces of the pocket, regarded as simplices in the curve and arc complex $\A(Y)$.
Also, $d_Y(V^-,V^+)$ denotes the smallest $d_Y$-distance between an arc of $V^-$ and an arc of $V^+$.\\ The veering triangulation in fact recovers a number of aspects of the geometry of curve and arc complexes in a fairly concrete way. As an illustration, we prove \begin{theorem}\label{th:total geodesic} In the fully punctured setting, the arcs of the veering triangulation form a geodesically connected subset $\A(\tau)$ of the curve and arc graph, in the sense that any two points in $\A(\tau)$ are connected by a geodesic that lies in $\A(\tau)$. \end{theorem} \subsection*{Hierarchies of pockets} One is naturally led to generalize \Cref{thm:pocket summary} from a result embedding one pocket at a time to a description of all pockets at once. Indeed, \Cref{prop:disjoint_pockets} tells us that whenever subsurfaces $Y$ and $Z$ of $X$ have large enough projection distances and are not nested, they have associated pockets $V_Y$ and $V_Z$ which are disjoint in $X \times \RR$. These facts, taken together with \Cref{th:total geodesic}, strongly suggest that the veering triangulation $\tau$ encodes the hierarchy of curve complex geodesics between $\lambda^\pm$ as introduced by Masur-Minsky in \cite{MM2}. We expect that, using a version of \Cref{th:total geodesic} that applies to subsurfaces and adapting the notion of ``tight geodesic'' from \cite{MM2}, one can carry out a hierarchy-like construction within the veering triangulation and recover much of the structure found in \cite{MM2}, with more concrete control, at least in the fully-punctured setting. We plan to explore this approach in future work. \subsection*{Related and motivating work} The theme of using fibered 3-manifolds to study infinite families of monodromy maps is deeply explored in McMullen \cite{mcmullen2000polynomial} and Farb-Leininger-Margalit \cite{farb-leininger-margalit}, where the focus is on Teichm\"uller translation distance.
Distance inequalities analogous to \Cref{th:sub_dichotomy_fully_punctured}, in the setting of Heegaard splittings rather than surface bundles, appear in Hartshorn \cite{hartshorn}, and then more fully in Scharlemann-Tomova \cite{scharlemann-tomova}. Bachman-Schleimer \cite{BSc} use Heegaard surfaces to give bounds on the curve-complex translation distance of the monodromy of a fibering. All of these bounds apply to entire surfaces and not to subsurface projections. In Johnson-Minsky-Moriah \cite{johnson-minsky-moriah:subsurface}, subsurface projections are considered in the setting of Heegaard splittings. A basic difficulty in these papers, which we do not encounter here, is the compressibility of the Heegaard surfaces, which makes it tricky to control essential intersections. On the other hand, unlike the surfaces and handlebodies that are used to obtain control in the Heegaard setting, the foliations we consider here are infinite objects, and the connection between them and finite arc systems in the surface is a priori dependent on the fiber complexity. The veering triangulation provides a finite object that captures this connection in a more uniform way. The totally-geodesic statement of \Cref{th:total geodesic} should be compared to Theorem 1.2 of Tang-Webb \cite{tang-webb}, in which Teichm\"uller disks give rise to quasi-convex sets in curve complexes. While the results of Tang-Webb are more general, they are coarse, and it is interesting that in our setting a tighter statement holds. Finally, we note that work by several authors has focused on geometric aspects of the veering triangulation, including \cite{hodgson2011veering,futer2013explicit,hodgson2016non}.
We spend some time in this section describing the flat geometry of a punctured surface with an integrable holomorphic quadratic differential, and in particular giving an explicit description of the circle at infinity of its universal cover (\Cref{same compactification}). While this is a fairly familiar picture, some delicate issues arise because of the incompleteness of the metric at the punctures. In \Cref{sections} we study {\em sections} of the veering triangulation, which are simplicial surfaces isotopic to $X$ in the cover $X\times\mathbb{R}$, and transverse to the suspension flow of the monodromy. These can be thought of as triangulations of the surface $X$ using only edges coming from the veering triangulation. We prove \Cref{lem:extension}, which says that a partial triangulation of $X$ using only edges from $\tau$ can always be extended to a full section, and \Cref{prop:connect}, which says that any two extensions of a partial triangulation are connected by a sequence of ``tetrahedron moves''. This is what allows us to define and study the ``pockets'' that arise between any two sections. In \Cref{hulls} we define two simple but useful constructions, rectangle and triangle hulls, which map saddle connections in our surface to unions of edges of the veering triangulation. An immediate consequence of the properties of these hulls is a proof of \Cref{th:total geodesic}. In \Cref{surface_reps} we apply the flat geometry developed in \Cref{background} to control the convex hulls of subsurfaces of the fiber, and then use \Cref{hulls} to construct what we call $\tau$-hulls, which are representatives of the homotopy class of a subsurface that are simplicial with respect to the veering triangulation. \Cref{thm: tau-compatible} states that quite mild assumptions on $d_Y(\lambda^+,\lambda^-)$ imply that the $\tau$-hull of $Y$ has embedded interior.
The idea here is that any pinching point of the $\tau$-hull is crossed by leaves of $\lambda^+$ and $\lambda^-$ that intersect each other very little. The main results of both \Cref{hulls} and \Cref{surface_reps} apply in a general setting and do not require that the surface $X$ be fully-punctured. In \Cref{pockets} we put these ideas together to prove our main theorems for fibered manifolds with a fully-punctured fibered face. In \Cref{Y pocket} we describe the maximal pocket associated to a subsurface $Y$ with $d_Y(\Lambda^+,\Lambda^-)$ sufficiently large (greater than 2, for nonannular $Y$). We then introduce the notion of an {\em isolated pocket}, which is a subpocket of the maximal pocket that has good embedding properties in the manifold $M$. The existence and embedding properties of these pockets are established in \Cref{lem:iso_pocket} and \Cref{prop:disjoint_pockets}, which together allow us to prove \Cref{thm:pocket summary}. From here, a simple counting argument gives \Cref{th:bounding projections}: the size of the embedded isolated pockets is bounded from below in terms of $d_Y(\Lambda^+,\Lambda^-)$ and $\chi(Y)$, and from above by the total number of veering tetrahedra. To obtain \Cref{th:sub_dichotomy_fully_punctured}, we use the pocket embedding results to show that, if $Y$ is a subsurface of one fiber $F$ and $Y$ essentially intersects another fiber $S$, then $S$ must cross every level surface of the isolated pocket of $Y$, and hence the complexity of $S$ gives an upper bound for $d_Y(\Lambda^+,\Lambda^-)$. To complete the proof we need to show that, if $Y$ does not essentially cross $S$, it must be isotopic to an embedded (and not merely immersed) subsurface of $S$. This is handled by \Cref{lem:embedding_fullly_punctured}, which may be of independent interest. It gives a uniform upper bound for $d_Y(\Lambda^+,\Lambda^-)$ when $Y$ corresponds to a finitely generated subgroup of $\pi_1(S)$, unless $Y$ covers an embedded subsurface. 
\subsection*{Acknowledgments} The authors are grateful to Ian Agol and Fran\c{c}ois Gu\'eritaud for explaining their work to us. We also thank Tarik Aougab, Jeff Brock, and Dave Futer for helpful conversations and William Worden for pointing out some typos in an earlier draft. Finally, we thank the referee for a thorough reading of our paper and comments which improved its readability. \section{Embedded pockets of the veering triangulation and bounded projections} \label{pockets} In this section, let $X$ be fully-punctured with respect to the foliations $\lambda^\pm$ of a pseudo-Anosov $f:X\to X$, and let $M$ be the mapping torus. Recall that every fiber associated to the fibered face $\mathcal{F}$ of $X$ must also be fully-punctured because all such fibers are transverse to the same suspension flow, and hence that $\mathcal{F}$ is a \emph{fully-punctured fibered face}. We now prove our two main theorems on the structure of subsurface projections in a fully-punctured fibered face, \Cref{th:bounding projections} and \Cref{th:sub_dichotomy_fully_punctured}. The main tools in the proofs are the structure and embedding theorems for pockets associated with high-distance subsurfaces, which we develop below. Recall that $\mathrm{diam}_Z(\cdot)$ denotes the diameter of $\pi_Z(\cdot)$ in $\A(Z)$ and that subsurfaces $Y$ and $Z$ \emph{overlap} if, up to isotopy, they are neither disjoint nor nested. \subsection{Projections and $\tau$--compatible subsurfaces} We begin by discussing projection to $\tau$-compatible subsurfaces. \begin{lemma} \label{lem:overlap_tau} Let $Y$ and $Z$ be $\tau$-compatible subsurfaces of $X$ and let $K \subset X$ be a disjoint collection of saddle connections which correspond to edges from $\tau$. Then \begin{enumerate} \item If $K$ meets $\int_\tau(Y)$, then $\pi_Y(K) \neq \emptyset$, and $\mathrm{diam}_Y(\pi_Y (K)) \le 1$. \item If $Y$ and $Z$ are disjoint, then so are $\int_\tau(Y)$ and $\int_\tau(Z)$.
\item If $Y$ and $Z$ overlap, then $\mathrm{diam}_Z(\partial Y \cup \partial_\tau Y) \le 1$. \item The subsurface $\int_\tau(Y)$ is in minimal position with the foliations $\lambda^\pm$. In particular, the arcs of $\int_\tau(Y) \cap \lambda^\pm$ agree with the arcs of $\pi_Y(\lambda^\pm)$. \end{enumerate} \end{lemma} \begin{proof} For item (1), the main point is to show that an edge of $K$ that meets $\int_\tau(Y)$ lifts to an {\em essential edge} in $\bar X_Y$. This is true for edges meeting $\int_q(Y)$, using the local CAT(0) geometry of $\bar X_Y$ and the fact that $\hat\iota_q(Y')$ is a locally convex embedding. Thus it will suffice to show that any $\tau$-edge $e$ meeting $\int_\tau(Y)$ must also meet $\int_q(Y)$. Suppose, on the contrary, that $e$ meets $\int_\tau(Y)$ but not $\int_q(Y)$. Then $e$ meets the interior of a polygon $P(\sigma)$ where $\sigma$ is an outward-oriented saddle connection in $\partial_q Y$ (recall from \Cref{rmk:fully} that, since $X$ is fully-punctured, the inner $\thull$ step in the construction of $\hat\iota_\tau$ is the identity, and the outer $\thull$ is in fact a rectangle hull). Let $R$ be the singularity-free rectangle spanned by $e$. If $e$ is contained in $P(\sigma)$ then $R$ can be extended to a rectangle whose diagonal lies in $\sigma$, and hence $e$ is one of the edges of $\rhull(\sigma)$; but this contradicts the assumption that $e$ meets $\int_\tau(Y)$. Thus $e$ crosses some edge $f$ of $\rhull(\sigma)$. However, $f$ is contained in a singularity-free triangle whose hypotenuse lies along $\sigma$ and so $\sigma$ must cross the rectangle $R$ either top-to-bottom or side-to-side. In either case, we see that $e$ crosses $\sigma \subset \partial_q Y$, a contradiction. We conclude that if a $\tau$-edge meets $\int_\tau(Y)$, then it also meets $\int_q(Y)$ and hence has a well-defined projection to $Y$. The diameter bound in item $(1)$ is then immediate since $K$ is a disjoint collection of essential arcs of $\A(X)$. 
For item $(2)$, first note that when $Y$ and $Z$ are disjoint subsurfaces of $X$, the interiors $\int_q(Y)$ and $\int_q(Z)$ are also disjoint. This follows from \Cref{cor:side_coherence} and the $q$-hulls construction in \Cref{q tight}. More precisely, let $\Lambda_Y$ and $\Lambda_Z$ be the limit sets of $Y$ and $Z$ in $\partial \HH^2$ (using our identifications from \Cref{AY in flat geometry}). Since $Y$ and $Z$ do not intersect, $\Lambda_Y$ and $\Lambda_Z$ do not link in $\partial \HH^2$ and so ${\operatorname{CH}}_q(\Lambda_Y)$ and ${\operatorname{CH}}_q(\Lambda_Z)$ have disjoint interiors by \Cref{cor:side_coherence}. This implies that $\int_q(Y)$ and $\int_q(Z)$ are disjoint in $X$. To obtain $\int_\tau (Y)$ from $\int_q (Y)$ we append to each saddle connection $\sigma$ in $\partial_q Y$ the (open) polygon $P(\sigma)$, where $\sigma$ is oriented out of $Y$. We obtain $\int_\tau (Z)$ from $\int_q (Z)$ by the same construction. Since $\int_q(Y)$ and $\int_q(Z)$ are disjoint in $X$, it suffices to show that $P(\sigma)$ and $P(\kappa)$ have disjoint interiors, where $\sigma \subset \partial_qY$ and $\kappa \subset \partial_q Z$. If $\sigma = \kappa$, then this saddle connection spans a singularity-free rectangle and $P(\sigma) = \sigma = \kappa = P(\kappa)$. Otherwise, $\sigma$ and $\kappa$ have disjoint interiors and \Cref{disjoint_thulls} implies that $P(\sigma)$ and $P(\kappa)$ have disjoint interiors, as required. This proves item (2). Since $\int_\tau(Y)$ is an embedded representative of the interior of $Y$, $\partial Y$ has a representative disjoint from the collection of saddle connections in $\partial_\tau Y$. Hence $\mathrm{diam}_Z(\partial Y \cup \partial_\tau Y) \le 1$, proving item $(3)$. For item $(4)$, first note that the subsurface $\int_q(Y)$ is in minimal position with the foliations $\lambda^\pm$. 
This is immediate from the local CAT$(0)$ geometry in $\bar X_Y$ and the fact that $\lambda^\pm$ are geodesic: any bigon in $\bar X_Y$ between $\hat \iota_q(\partial' Y)$ and a leaf of $\lambda^\pm$ would lift to a bigon in $\hat X$ bounded by two geodesic segments, a contradiction to uniqueness of geodesics in $\hat X$. The statement for $\int_\tau(Y)$ then follows from the fact that the homotopy from $\partial_q Y$ to $\partial_\tau Y$ can be taken to move either along vertical or along horizontal leaves, using either $\thull^+$ or $\thull^-$ as in the proof of \Cref{thm: tau-compatible}. \end{proof} \subsection{Pockets for a $\tau$-compatible subsurface} Suppose that $Y \subset X$ is \linebreak $\tau$--compatible. By \Cref{cor:top}, the set $T(\partial_\tau Y)$ of sections containing $\partial_\tau Y$ contains a top and a bottom section, denoted $T^+ = T^+(\partial_\tau Y)$ and $T^- = T^-(\partial_\tau Y)$, which between them bound a number of pockets. See \Cref{sections} for terminology related to sections and pockets. Our assumption on $d_Y(\lambda^-,\lambda^+)$ will imply that one of these pockets is isotopic to a thickening of $Y$, as explained in the following proposition: \begin{proposition}[Pockets in $\tau$]\label{Y pocket} Let $(X,q)$ be fully-punctured and $Y\subset X$ an essential nonannular subsurface. \begin{enumerate} \item If $d_Y(\lambda^-,\lambda^+) > 0$ then $d_Y(T^+,\lambda^+) = d_Y(T^-,\lambda^-) = 0$. \item If $d_Y(\lambda^-,\lambda^+) > 2$ then $T^+$ and $T^-$ bound a pocket $U_Y$ whose interior is isotopic to a thickening of $\int(Y)$. \end{enumerate} When $Y$ is an annulus, \begin{enumerate} \item If $d_Y(\lambda^-,\lambda^+) > 1$ then $d_Y(T^+,\lambda^+) = d_Y(T^-,\lambda^-) = 1$. \item If $d_Y(\lambda^-,\lambda^+) > 4$ then $T^+$ and $T^-$ bound a pocket $U_Y$ whose interior is isotopic to a thickening of $\int(Y)$. 
\end{enumerate} \end{proposition} \begin{proof} We begin with the following lemma: \begin{lemma}\label{near lambda} Suppose that $Y \subset X$ is $\tau$-compatible, let $e$ be an edge of $\partial_\tau Y$ and let $f$ be a $\tau$-edge crossing $e$ with $f>e$. Then $d_Y(f,\lambda^+) \le 1$ if $Y$ is an annulus and $d_Y(f,\lambda^+) =0$ otherwise. Similarly, if $f<e$, then the same statement holds for $d_Y(f,\lambda^-)$. \end{lemma} \realfig{f-above-e_2}{Local picture near the $\tau$-edge $e$ of $\partial_\tau Y$ with $\int_\tau(Y) \subset X$ shaded. When $f>e$, the edge $l^+$ of $Q$ represents $\pi_Y(\lambda^+)$ and is disjoint from $f$. Note that $Q$ is \emph{immersed} in $X$.} The key idea of the proof is pictured in \Cref{f-above-e_2}. Here it is shown that if $f$ crosses $e \subset \partial_\tau Y$ with $f>e$, then some component of the intersection of $f$ with $\int_\tau(Y)$ is disjoint from some arc in $\pi_Y(\lambda^+)$. However, the spanning rectangle $Q$ for $f$ is immersed in $X$ (and not necessarily embedded). To handle this issue, we work in the cover $\widetilde X$. \begin{proof} Let $C^{\mathrm{o}}$ be a component of the preimage of $\int_\tau(Y)$ under $\widetilde X \to X$ and choose a saddle connection $\til e$ in the boundary of $C^{\mathrm{o}}$ which projects to $e$. Further, let $\til f$ be any lift of $f$ which crosses $\til e$. Since $f$ is a $\tau$-edge, $\til f$ spans a singularity-free rectangle $\til Q$ whose immersed image in $X$ we denote by $Q$. Every $\tau$-edge which crosses $\til Q$ does so either top to bottom or side to side. Since $f>e$, $\til e$ must cross $\til Q$ from side to side (see \Cref{sections}). Since all $\tau$-edges in $\partial C^{\mathrm{o}}$ are disjoint, they all must cross $\til Q$ from side to side. Since $\int_\tau(Y)$ is in minimal position with $\lambda^+$ (\Cref{lem:overlap_tau}), $C^{\mathrm{o}}$ intersects each leaf of the vertical foliation in a connected set.
Together these observations imply that $\til Q \cap C^{\mathrm{o}}$ is a single polygon $\til B$, bounded by at least one edge crossing $\til Q$ from side to side (which we have called $\til e$). See \Cref{f-above-e_cover}. \begin{figure}[htbp] \begin{center} \includegraphics[scale = .7]{f-above-e_cover} \caption{The three possibilities for $\til B$. The lightly shaded region is part of $C^{\mathrm{o}}$ in $\widetilde X$.} \label{f-above-e_cover} \end{center} \end{figure} \begin{claim*} $\til B$ embeds in $\int_\tau (Y)$ under the covering $\til X \to X$. \end{claim*} \begin{proof}[Proof of claim] Since $\til B \subset C^{\mathrm{o}}$, the image of $\til B$ is contained in $\int_\tau (Y)$. Suppose that $x,y \in \til B$ map to the same point in $\int_\tau(Y)$, and denote by $l_x$ and $l_y$ the vertical leaf segments in $\til X$ starting at $x$ and $y$, respectively, and continuing to $\til e$. Since $\til B$ is convex, $l_x,l_y \subset \til B$. Suppose that $l_x$ is no longer than $l_y$ and let $l_y'$ be the subsegment of $l_y$ with length equal to that of $l_x$. Then $l_x$ and $l_y'$ are identified under the map $\til X \to X$. But the identification of $\partial l_x \ssm \{x\} \subset \til e$ and $\partial l_y' \ssm \{y\} \subset \til B \cup \til e \subset C^{\mathrm{o}} \cup \til e$ gives a contradiction, unless $x=y$: the edge $\til e$ is mapped injectively into $X$ with image $e\subset \partial_\tau Y$ disjoint from the image of $C^{\mathrm{o}}$, which is $\int_\tau (Y)$. \end{proof} Let $\til s$ be the vertex of $\til f$ which is on the same side of $\til e$ as $\til B$. Let $\til l$ be the vertical side of $\til Q$ starting at $\til s$. Let $B$ be the image of $\til B$ in $X$. By the claim, $B$ is a singularity-free quadrilateral in $X$ whose interior is contained in $\int_\tau(Y)$.
The images in $X$ of $\til f \cap \til B$ and $\til l \cap \til B$ are therefore disjoint proper arcs in $\int_\tau(Y)$, which by \Cref{lem:overlap_tau} are representatives of $\pi_Y(f)$ and $\pi_Y(\lambda^+)$, respectively. Moreover, these arcs are properly homotopic in $\int_\tau(Y)$ by a homotopy supported in $B$. Hence, when $Y$ is nonannular, we conclude that $d_Y(f,\lambda^+) =0$. If $Y$ is an annulus, we project the picture to the annular cover $X_Y$, where we note that the image $l$ of $\til l$, continued to infinity, cannot intersect $f$ without meeting $Q$, and hence $e$, again. Since $l$ can only meet $\partial_q Y$ once in the annular cover, we conclude it is disjoint from $f$ and so $d_Y(f,\lambda^+) =1$. The case $f<e$ is similar, so \Cref{near lambda} is proved. \end{proof} We return to the proof of \Cref{Y pocket}. Let $Y$ be nonannular. Note that by definition the only upward-flippable edges in $T^+$ must lie in $\partial_\tau Y$. Let $e$ be such an edge and consider the single flip move that replaces $e$ with an edge $f$. Then $f>e$, so by \Cref{near lambda}, $d_Y(f,\lambda^+) = 0$. On the other hand $f$ and $e$ are diagonals of a quadrilateral made of edges of $T^+$, at least one of which, $e'$, gives the same element of $\A(Y)$ as $f$. Hence $d_Y(T^+,\lambda^+) = 0.$ If $Y$ is an annulus, we note that $e'$ and the vertical leaf in the proof of \Cref{near lambda} give adjacent vertices of $\A(Y)$, so $d_Y(T^+,\lambda^+)\le 1$. Note that $d_Y(T^+,\lambda^+) \ne 0$ because no leaf of the foliation $\lambda^+$ has both its endpoints terminating at completion points. To prove the statements about pockets, let $K$ be the common edges of $T^+$ and $T^-$, viewed as a subcomplex of $X$. If $\int_\tau(Y)$ contains an edge of $K$ then from the triangle inequality, together with the first part of the proposition, we obtain $d_Y(\lambda^+,\lambda^-) \le 2$ when $Y$ is nonannular, and $d_Y(\lambda^+,\lambda^-) \le 4$ when $Y$ is an annulus. 
By our hypotheses this does not happen, so we conclude that $T^+,T^- \in T(\partial_\tau Y)$ have no common edges contained in $\int_\tau (Y)$. Hence $T^+$ and $T^-$ bound a pocket $U_Y$ whose base is $\int_\tau(Y)$. This completes the proof. \end{proof} \subsection{Isolated pockets and projection bounds} Let $X$ be a fiber in $\mathbb{R}_+\mathcal{F}$, and let $Y$ be a $\tau$--compatible subsurface of $X$ such that $d_Y(\lambda^-,\lambda^+)>4$. An \emph{isolated pocket} for $Y$ in $(X \times \mathbb{R}, \tau)$ is a subpocket $V = V_Y$ of $U_Y$ with base $\int_\tau (Y)$ such that \begin{enumerate} \item For each edge $e$ of $V$ which is not contained in $\partial_\tau Y$, \[ d_Y(e,\lambda^+) \ge 3 \quad \text{and} \quad d_Y(e,\lambda^-) \ge 3 \] if $Y$ is nonannular, and \[ d_Y(e,\lambda^+) \ge 4 \quad \text{and} \quad d_Y(e,\lambda^-) \ge 4 \] if $Y$ is an annulus. \item Denoting by $V^\pm$ the top and bottom of $V$ with their induced triangulations, \[ d_Y(V^-, V^+) \ge 1. \] \end{enumerate} Note that condition $(2)$ guarantees that $\mathrm{int}(V_Y) \cong \int_\tau(Y) \times (0,1)$ is still a pocket just as in \Cref{Y pocket}. The next lemma shows that for $Y$ with $d_Y(\lambda^-,\lambda^+)$ sufficiently large, $Y$ has an isolated pocket with $d_Y(V^-,V^+)$ roughly $d_Y(\lambda^-,\lambda^+)$. \begin{lemma} \label{lem:iso_pocket} Suppose that $Y$ is a nonannular subsurface of $X$ with $d_Y(\lambda^-,\lambda^+) > 8$. Then $Y$ has an isolated pocket $V$ with $d_Y(V^-,V^+) \ge d_Y(\lambda^-,\lambda^+) - 8$. If $Y$ is an annulus with $d_Y(\lambda^-,\lambda^+) > 10$, then $Y$ has an isolated pocket $V$ with $d_Y(V^-,V^+) \ge d_Y(\lambda^-,\lambda^+) - 10$. \end{lemma} \begin{proof} Let $c=4$ if $Y$ is an annulus and $c=3$ otherwise, and assume that $d_Y(\lambda^+,\lambda^-) > 2c+2$. 
Since the pocket $U = U_Y$ is connected (\Cref{prop:connect}), there is a sequence of sections $T^- = T_0, T_1, \ldots, T_N = T^+$ in $T(\partial_\tau Y)$ such that $T_{i+1}$ differs from $T_i$ by an upward diagonal exchange. From \Cref{Y pocket}, we know that $d_Y(T^- , \lambda^-) \le 1$ and $d_Y(T^+ , \lambda^+) \le 1$. Let $0 < a < N$ be the largest integer such that $d_Y(T_{a-1}, \lambda^-) < c$; hence $d_Y(T_{i}, \lambda^-) \ge c$ for all $i\ge a$. Now let $b<N$ be the smallest integer greater than $a$ such that $d_Y(T_{b+1},\lambda^+) < c$; then $d_Y(T_{i},\lambda^+) \ge c$ for all $a\le i \le b$. Note that these indices exist since $d_Y(\lambda^-,\lambda^+) \ge 2c+1$. Now let $V$ be the pocket between $T_a$ and $T_b$ with base contained in $\int_\tau(Y)$ and note that $V$ is a subpocket of $U$. Any edge $e$ of $V$ not contained in $\partial_\tau Y$ is contained in a section $T_i \in T(\partial_\tau Y)$ for $a \le i \le b$. Since we have $d_Y(T_i,\lambda^\pm)\ge c$, we have $d_Y(e,\lambda^\pm)\ge c$. Thus it only remains to get a lower bound on $d_Y(V^+,V^-)$. The triangle inequality (and diameter bound on $T_a$ and $T_b$) gives \[ d_Y (V^-,V^+) = d_Y(T_a,T_b) \ge d_Y(\lambda^-,\lambda^+) - 2c -2 \ge 1. \] This implies that $\int_\tau (Y)$ is the base of $V$ and completes the proof. \end{proof} The following proposition shows that isolated pockets coming from either disjoint or overlapping subsurfaces of $X$ have interiors which do not meet. \begin{proposition}[Disjoint pockets] \label{prop:disjoint_pockets} Suppose that $Y$ and $Z$ are subsurfaces of $X$ with isolated pockets $V_Y$ and $V_Z$. Then, up to switching $Y$ and $Z$, either $Y$ is nested in $Z$, or the isolated pockets $V_Y$ and $V_Z$ have disjoint interiors in $X \times \mathbb R$. \end{proposition} \begin{proof} If the subsurfaces $Y$ and $Z$ are disjoint, then $\int_\tau(Y)$ and $\int_\tau(Z)$ are also disjoint by \Cref{lem:overlap_tau}.
Hence, the maximal pockets $U_Y$ and $U_Z$ have disjoint interiors by definition. Now suppose that $Y$ is not an annulus. We claim that if $Y$ and $Z$ overlap then either \[ d_Y(\partial_\tau Z, \lambda^+)\le 1 \; \text{ or } \; d_Y(\partial_\tau Z,\lambda^-) \le 1. \] To see this, first note that there is some edge $f$ contained in $\int_\tau(Z)$ such that $f$ crosses some edges of $\partial_\tau Y$. Otherwise, every triangulation of $\int_\tau(Z)$ by $\tau$--edges would contain edges from $\partial_\tau Y$. But then applying this to $T^\pm(\partial_\tau Z)$ and using \Cref{Y pocket}, we would have that \[ d_Z(\lambda^-,\lambda^+) \le 2 + \mathrm{diam}_Z(\partial_\tau Y) \le 3, \] contradicting our assumption on the subsurface $Z$. Now if $f$ intersects an edge $e$ of $\partial_\tau Y$ and $f>e$, then by \Cref{near lambda}, $d_Y(\partial_\tau Z, \lambda^+)\le d_Y(\partial_\tau Z, f) \le 1$. If $f<e$ then \Cref{near lambda} gives $d_Y(\partial_\tau Z, \lambda^-)\le d_Y(\partial_\tau Z, f) \le 1$. Now suppose that $e$ is an edge of $U_Y \cap U_Z$ which is not contained in $\partial_\tau Y \cup \partial_\tau Z$. Then $e$, as a $\tau$-edge in $X$, is disjoint from $\partial_\tau Z$ and so $d_Y(e,\lambda^+) \le 2$ or $d_Y(e,\lambda^-) \le 2$. Hence $e$ cannot be contained in $V_Y$. We conclude that $V_Y \cap V_Z \subset \partial_\tau Y \cup \partial_\tau Z$. This completes the proof when $Y$ is not an annulus. When $Y$ is an annulus, then a similar argument using the annular case of \Cref{near lambda} shows that if $Y$ and $Z$ overlap then either \[ d_Y(\partial_\tau Z, \lambda^+)\le 2 \; \text{ or } \; d_Y(\partial_\tau Z,\lambda^-) \le 2. \] Hence, if $e$ is an edge of $U_Y \cap U_Z$ which is not contained in $\partial_\tau Y \cup \partial_\tau Z$, then $d_Y(e,\lambda^\pm) \le 3$. So again $e$ cannot be contained in $V_Y$ and we conclude that $V_Y \cap V_Z \subset \partial_\tau Y \cup \partial_\tau Z$ as required. 
\end{proof} We next prove that isolated pockets embed into the fibered manifold $M$. This is \Cref{thm:pocket summary}, which we restate here in more precise language. \restate{thm:pocket summary}{ {\rm (Embedding the pocket).} Suppose $Y$ is a subsurface of a fully-punctured fiber $X$ with $d_Y(\lambda^-,\lambda^+) > \beta$, where $\beta=8$ if $Y$ is nonannular and $\beta=10$ if $Y$ is an annulus. Then $Y$ has an isolated pocket $V_Y$ in $X \times \mathbb{R}$, and the covering map $X \times \mathbb{R} \to M$ restricts to an embedding of the subcomplex $V_Y \to M$. } \begin{proof} Let $\Phi$ be the simplicial isomorphism of $X \times \mathbb{R}$ induced by $f$ as in \Cref{gueritaud construction}. Note that if $T$ is a section of $\tau$, then $\Phi (T)$ is the section of $\tau$ whose corresponding triangulation of $X$ is $f(T)$. Hence, $\Phi(T(\partial_\tau Y)) = T(\partial_\tau{f (Y)})$. By \Cref{lem:iso_pocket}, $Y$ has an isolated pocket $V =V_Y$. Note that $V$ embeds into $M$ if and only if it is disjoint from its translates $V_i = \Phi^i(V)$ for each $i \neq 0$. By the remark above, each $V_i$ is itself an isolated pocket for the subsurface $Y_i = f^i(Y)$, and any two of these subsurfaces are either disjoint or overlap in $X$. Hence, by \Cref{prop:disjoint_pockets} the isolated pockets $V_i$ are disjoint as required. \end{proof} We will now prove \Cref{th:bounding projections}, whose statement we recall here: \restate{th:bounding projections}{ Let $M$ be a hyperbolic 3-manifold with fully-punctured fibered face $\mathcal{F}$ and veering triangulation $\tau$. For any subsurface $W$ of any fiber of $\mathcal{F}$, \[ \alpha \cdot (d_W(\lambda^- ,\lambda^+) -\beta) < |\tau|, \] where $|\tau|$ is the number of tetrahedra in $\tau$, $\alpha = 1$ and $\beta = 10$ when $W$ is an annulus and $\alpha = 3|\chi(W)|$ and $\beta = 8$ when $W$ is not an annulus. } \begin{proof} Suppose that $W$ is any nonannular subsurface of any fiber $F$ in $\mathbb{R}_+ \mathcal{F}$. 
We may assume that $d_W(\lambda^-,\lambda^+) >8$. Then \Cref{lem:iso_pocket} implies that $W$ has an isolated pocket $V_W$ in $(F \times \mathbb{R}, \tau)$ such that $d_W(V_W^-,V_W^+)\ge d_W(\lambda^-,\lambda^+) -8$. By \Cref{thm:pocket summary}, the isolated pocket $V_W \subset (F \times \mathbb{R}, \tau)$ embeds into $(M,\tau)$. Hence $|V_W| \le |\tau|$, where $|V_W|$ denotes the number of tetrahedra of $V_W$. Now each tetrahedron of $V_W$ corresponds to a diagonal exchange between the triangulations $V_W^-$ and $V_W^+$ of $W_\tau$ and each diagonal exchange replaces a single edge of the triangulation. There are at least $3|\chi(W)| + 1$ non-boundary edges in each triangulation of $W$, and the diameter in $\A(W)$ of an ideal triangulation is 1, so we conclude \begin{align} \label{ineq:pocket_growth} |\tau| &\ge |V_W| = \#\{\text{diagonal exchanges from } V_W^- \text{ to } V_W^+\}\\ &> 3|\chi(W)| \cdot d_W(V^-,V^+) \nonumber \\ &\ge 3|\chi(W)| \cdot (d_W(\lambda^-,\lambda^+) - 8) \nonumber. \end{align} This completes the proof when $W$ is nonannular. When $W$ is an annulus, we use the annular case of \Cref{lem:iso_pocket} to obtain an isolated pocket $V_W$ in $(F \times \mathbb{R}, \tau)$ such that $d_W(V_W^-,V_W^+)\ge d_W(\lambda^-,\lambda^+) -10$. Noting that a triangulation of the annulus contains at least 2 (non-boundary) edges, the same argument implies that \begin{align*} |\tau| &\ge |V_W| = \#\{\text{diagonal exchanges from } V_W^- \text{ to } V_W^+\}\\ &> d_W(V^-,V^+) \nonumber \\ &\ge d_W(\lambda^-,\lambda^+) - 10 \nonumber, \end{align*} as required. \end{proof} \subsection{Sweeping through embedded pockets} We are now ready to prove \Cref{th:sub_dichotomy_fully_punctured}, whose statement we reproduce below. This theorem relates subsurfaces with large projections among different fibers of a fixed face.
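Before proceeding, we record an equivalent formulation of the bound just established: rearranging the inequality of \Cref{th:bounding projections} caps every subsurface projection in terms of the size of the veering triangulation, namely
\[
d_W(\lambda^-,\lambda^+) < \frac{|\tau|}{3|\chi(W)|} + 8
\]
when $W$ is nonannular, and $d_W(\lambda^-,\lambda^+) < |\tau| + 10$ when $W$ is an annulus.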
\restate{th:sub_dichotomy_fully_punctured}{ Let $M$ be a hyperbolic 3-manifold with fully-punctured fibered face $\mathcal{F}$ and suppose that $S$ and $F$ are each fibers in $\mathbb{R}_+\mathcal{F}$. If $W$ is a subsurface of $F$, then either $W$ is isotopic along the flow to a subsurface of $S$, or $$3|\chi(S)| \ge d_W(\lambda^-,\lambda^+) -\beta,$$ where $\beta =10$ if $W$ is an annulus and $\beta = 8$ otherwise. } Recall from \Cref{lem:subgroup_projection} that we can identify $d_W(\lambda^+,\lambda^-)$ with $d_W(\Lambda^+,\Lambda^-)$, agreeing with the statement given in the introduction. We will require the following lemma, which essentially states that immersed subsurfaces with large projection are necessarily covers of subsurfaces. Recall that in \Cref{sec: arc_complex} we defined the distance $d_W(\lambda^+,\lambda^-)$ when $W$ is a compact core of a cover $X_\Gamma \to X$ corresponding to a finitely generated subgroup $\Gamma \le \pi_1(X)$. \begin{lemma}[Immersion to cover] \label{lem:embedding_fullly_punctured} Suppose that $(X,q)$ is a fully-punctured surface. Let $\Gamma$ be a finitely generated subgroup of $\pi_1(X)$ and let $W$ be a compact core of the cover $X_\Gamma \to X$. If $W$ is nonannular and $d_W(\lambda^-,\lambda^+) > 4$ or if $W$ is an annulus and $d_W(\lambda^-,\lambda^+) >6$, then there is a subsurface $Y$ of $X$ such that $W \to X$ is homotopic to a finite cover $W \to Y \subset X$. In particular, $\Gamma$ is a finite index subgroup of $\pi_1(Y)$. \end{lemma} \begin{proof} Suppose that $d_W(\lambda^-,\lambda^+) > 4$ if $W$ is nonannular and $d_W(\lambda^-,\lambda^+) >6$ if $W$ is an annulus. Let $p \colon \check X \to X$ be a finite cover to which $W \to X$ lifts to an embedding $W \to \check{X}$ (this exists since surface groups are LERF \cite{scott-LERF}), and identify $W$ with its image in $\check{X}$. Lift $q$ along with the veering triangulation to $(\check{X} \times \mathbb{R},\tau)$. 
By \Cref{thm: tau-compatible}, $W$ is a $\tau$--compatible subsurface of $\check X$, and by \Cref{thm: tau-compatible} and \Cref{prop:connect}, $T_{\check{X}}(\partial_\tau W)$ is nonempty and connected. To prove the lemma, we show that $\int_\tau(W) \to X$ covers a subsurface of $X$. For this, it suffices to prove that each edge of $p^{-1}(p(\partial_\tau W))$ is disjoint from $\int_\tau(W)$. Indeed, since $W$ is $\tau$--compatible, one component of $\check X \ssm \partial_\tau W$ is $\int_\tau(W)$. If $p^{-1}(p(\partial_\tau W))$ is disjoint from $\int_\tau(W)$, then $\int_\tau(W)$ is also a component of $\check X \ssm p^{-1}(p(\partial_\tau W))$. As components of $\check X \ssm p^{-1}(p(\partial_\tau W))$ cover components of $X \ssm p(\partial_\tau W)$, this will show that $\int_\tau(W) \to X$ covers a subsurface of $X$. Hence, we must show that each edge of $p^{-1}(p(\partial_\tau W))$ is disjoint from $\int_\tau(W)$. This is equivalent to the statement that no edge of $p^{-1}(p(\partial_\tau W))$ crosses $\partial_\tau W$ nor is contained in $\int_\tau(W)$. First suppose that $W$ is not an annulus. If $\check T$ is a section of $(\check{X} \times \mathbb{R},\tau)$ with an edge $f$ such that $f>e$ for an edge $e$ of $\partial_\tau W$, then \Cref{near lambda} implies that $d_W(\check T,\lambda^+) = 0$. Similarly if $f<e$ then $d_W(\check T,\lambda^-) = 0$. Hence, if $T$ is \emph{any section of} $(X \times \mathbb{R},\tau)$ such that $d_W(T,\lambda^\pm) \ge 1$, then its lift $\check{T} = p^{-1}(T)$ to $\check{X}$ must contain the edges of $\partial_\tau W$ and so $\check{T} \in T_{\check{X}}(\partial_\tau W)$. Moreover, such a section $T$ of $(X \times \mathbb{R},\tau)$ with $d_W(T,\lambda^\pm) \ge 1$ must exist. This is because by \Cref{gue-sweep}, we may sweep through $X\times\mathbb{R}$ with sections going from near $\lambda^-$ to near $\lambda^+$. 
If all sections were to have $d_W$--distance $0$ from either $\lambda^-$ or $\lambda^+$, then there would be a pair $T,T'$ differing by a single diagonal exchange such that $d_W(T, \lambda^-)= d_W(T',\lambda^+) = 0$. But this would imply that $d_W(\lambda^-,\lambda^+) \le 2$, contradicting our assumption on distance. Putting these facts together, we conclude that there exists a section $T$ of $(X \times \mathbb{R},\tau)$ with $d_W(T,\lambda^\pm)\ge 1$, and that for each such section \[ p^{-1}(T) \in T_{\check{X}}(p^{-1}(p(\partial_\tau W))). \] Note that this in particular implies that no edge of $p^{-1}(p(\partial_\tau W))$ crosses an edge of $\partial_\tau W$. We claim now that no edge $e$ in $p^{-1}(p(\partial_\tau W))$ can be contained in $\int_\tau (W)$. Such an edge would have a well-defined projection to $\A(W)$ and would necessarily appear in each section of $T_{\check{X}}(p^{-1}(p(\partial_\tau W)))$ (by definition of $T_{\check X}(\cdot)$). Using our conclusion from above, this would imply that $d_W(p^{-1}(T),e) = 0$ whenever $d_W(T,\lambda^\pm)\ge 1$. But just as before, by sweeping through $X\times\mathbb{R}$ with sections going from near $\lambda^-$ to near $\lambda^+$, we produce sections $T_1, T_2$ with $d_W(T_1, \lambda^-) =d_W(T_2,\lambda^+)=1$. Since each of these sections' preimage in $\check X$ contains the edge $e$, we get that $d_W(\lambda^\pm,e)\le 2$, which contradicts our hypothesis that $d_W(\lambda^+,\lambda^-) > 4$. This shows that no edge of $p^{-1}(p(\partial_\tau W))$ can meet $\int_\tau(W)$ and completes the proof when $W$ is nonannular. When $W$ is an annulus, one proceeds exactly as above using the annular version of \Cref{near lambda}. \end{proof} \begin{proof}[Proof of \Cref{th:sub_dichotomy_fully_punctured}] We may assume that $W$ is a subsurface of $F$ such that $d_W(\lambda^-,\lambda^+) > \beta$. First suppose that $\pi_1(W)$ is contained in $\pi_1(S)$. 
Then by \Cref{lem:embedding_fullly_punctured}, there is a subsurface $Y$ of $S$ such that, up to conjugation in $\pi_1(S)$, $\pi_1(W) \le \pi_1(Y)$ is a finite index subgroup; let $n \ge 1$ denote this index. If $\eta_F \colon \pi_1(M) \to \mathbb{Z}$ is the homomorphism representing the cohomology class dual to $F$, then $\eta_F | \pi_1(Y)$ vanishes on the index $n$ subgroup $\pi_1(W)$. Since $\mathbb{Z}$ is torsion-free we must have that $\eta_F$ vanishes on $\pi_1(Y)$ and hence $\pi_1(Y)$ is contained in $\pi_1(F)$. However, since the fundamental group of an embedded subsurface, in this case $W \subset F$, cannot be nontrivially finite-index inside another subgroup of $\pi_1(F)$, we see that $n=1$ and $\pi_1(W) = \pi_1(Y)$. That $W$ is isotopic along the flow in $M$ to $Y \subset S$ can be seen by lifting $W$ and $Y$ to the cover $S \times \mathbb{R} \to M$. Hence, we may suppose by \Cref{lem:flow_to_fiber_2} that the image of any $S \to M$ homotopic to the fiber $S$ intersects any isotope of $W \subset F$ essentially. Since $d_W(\lambda^-,\lambda^+) > \beta$, $W$ has a nonempty isolated pocket $V_W \subset F \times \mathbb R$ which simplicially embeds into $(M, \tau)$ by \Cref{thm:pocket summary}. Let $\{W_i\}$ denote a sequence of sections of $V_W$ from $V^-_W$ to $V^+_W$ with $W_{i+1}$ differing from $W_i$ by an upward diagonal flip. Also, fix a simplicial map $f \colon S \to (M,\tau)$ which is obtained by composing a section of $(S \times \mathbb{R},\tau)$ with the covering map $S \times \mathbb{R} \to M$. Note that for each $i$, $f(S)$ meets at least one edge of the interior of $W_i$. Otherwise, the image of $S$ in $M$ misses the interior of $W_i$, contradicting our assumption. In fact, even more is true: Call a component $c$ of $f(S) \cap W_i$ \emph{removable} if the triangles of $f(S)$ incident to the edges of $c$ lie locally to one side of $W_i$ in $M$.
If $c$ is removable, then there is an isotopy of $W_i$ supported in a neighborhood of $c$ which removes $c$ from the intersection $f(S) \cap W_i$. Hence, if we denote by $E_i$ the edges of $f(S) \cap W_i$ which do not lie in removable components, then $E_i$ must be nonempty for each $i$. We claim that for each $i$, $E_i$ shares an edge with $E_{i+1}$. Otherwise, both $E_i$ and $E_{i+1}$ consist of a single edge and the tetrahedron corresponding to the diagonal exchange from $W_i$ to $W_{i+1}$ has $E_i$ as its bottom edge and $E_{i+1}$ as its top edge. But then both of these edges must be removable, since pushing the bottom two faces of the tetrahedron slightly upward makes that intersection disappear, and similarly for the top. This contradicts our above observation and establishes that $E_i$ and $E_{i+1}$ have a common edge. We obtain a sequence in $\A(W)$, \[ V^-_W \supset E_0 , E_1, \ldots, E_n \subset V^+_W, \] having the property that for each edge $e_i$ of $E_i$ there is an edge $e_{i+1}$ of $E_{i+1}$ such that $e_i$ and $e_{i+1}$ are disjoint. We conclude that the number of distinct edges in the sequence $E_0 , E_1, \ldots, E_n$ is at least $d_W(V^-_W, V^+_W)$. Combining this with the fact that the number of edges in an ideal triangulation of $S$ is $3|\chi(S)|$ and \Cref{lem:iso_pocket}, we see that \[ 3|\chi(S)| \ge d_W(V^-_W, V^+_W) \ge d_W(\lambda^-,\lambda^+) - \beta, \] as required. \end{proof} \medskip We conclude the paper by recording the following corollary of \Cref{lem:embedding_fullly_punctured} and the proof of \Cref{th:sub_dichotomy_fully_punctured}. \begin{corollary}\label{always subsurface} Let $M$ be a hyperbolic manifold with fully-punctured fibered face $\mathcal{F}$. Let $W$ be a subsurface of a fiber $F\in\mathbb{R}_+\mathcal{F}$ such that $d_W(\Lambda^+,\Lambda^-) > 4$ if $W$ is nonannular and $d_W(\Lambda^+,\Lambda^-) > 6$ if $W$ is an annulus.
If $S$ is any fiber in $\mathbb{R}_+\mathcal{F}$ such that $\pi_1(W) < \pi_1(S)$, then $W$ is isotopic to a subsurface of $S$. \end{corollary} \section{Sections and pockets of the veering triangulation} \label{sections} In this section the surface $X$ is fully-punctured. A {\em section} of the veering triangulation $\tau$ is an embedding $(X,T) \to (X \times \mathbb{R}, \tau)$ which is simplicial with respect to an ideal triangulation $T$ of $X$, and is a section of the fibration $\pi \colon X \times \mathbb{R} \to X$ (hence transverse to the vertical flow). By \emph{simplicial} we mean that the map takes simplices to simplices. The edges of $T$ are saddle connections of $q$ that are also edges of $\tau$ (i.e. those which span singularity-free rectangles), and indeed any triangulation by $\tau$-edges gives rise to a section. We will abuse terminology a bit by letting $T$ denote both the triangulation and the section. A {\em diagonal flip} $T\to T'$ between sections is an isotopy that pushes $T$ through a single tetrahedron of $\tau$, either above it or below it. Equivalently, if $R$ is a maximal rectangle and $Q$ its associated tetrahedron, the bottom two faces of $Q$ might appear in $T$, in which case $T'$ would be obtained by replacing these with the top two faces. This is an upward flip, and the opposite is a downward flip. We will refer to the transition as both a \emph{diagonal flip/exchange} and a \emph{tetrahedron move}, depending on the perspective. An edge $e$ of $T$ can be flipped downward exactly when it is the tallest edge, with respect to $q$, among the edges in either of the two triangles adjacent to it. This makes $e$ the top edge of a tetrahedron (i.e. the diagonal of a quadrilateral that connects the horizontal sides of the corresponding rectangle). Similarly it can be flipped upward when it is the widest edge among its neighbors. See \Cref{flippability2}. 
\realfig{flippability2}{The edge $e$ is upward flippable, $g$ is downward flippable, and $f$ is not flippable.} In particular it follows that every section has to admit both an upward and downward flip -- simply find the tallest edge and the widest edge. However it is not a priori obvious that a section even exists. Gu\'eritaud gives an argument for this and more: \begin{lemma}[\cite{gueritaud}]\label{gue-sweep} There is a sequence of sections $\cdots \to T_i\to T_{i+1}\to\cdots$ separated by upward diagonal flips, which sweeps through the entire manifold $(X\times\mathbb{R},\tau)$. Moreover, when $(X\times\mathbb{R},\tau)$ covers the manifold $(M,\tau)$, this sequence is invariant by the deck translation $\Phi$. \end{lemma} We remark that Agol had previously proven a version of \Cref{gue-sweep} with his original definition of the veering triangulation \cite{agol2011ideal}. For an alternative proof that sections exist, see the second proof of \Cref{lem:extension}. We remark that \Cref{gue-sweep} does not give a complete picture of all possible sections of $\tau$. In this section we will establish a bit more structure. \medskip For a subcomplex $K \le \tau$, denote by $T(K)$ the collection of sections $T$ of $\tau$ containing the edges of $K$. A necessary condition for $T(K)$ to be nonempty is that $\pi(K)$ is an embedded complex in $X$ composed of $\tau$-simplices. We will continue to blur the distinction between $K$ and $\pi(K)$. Our first result states that the necessary condition is sufficient: \begin{lemma}[Extension lemma] \label{lem:extension} Suppose that $E$ is a collection of $\tau$-edges in $X$ with pairwise disjoint interiors. Then $T(E)$ is nonempty. \end{lemma} The second states that $T(K)$ is always connected by tetrahedron moves. This includes in particular the case of $T(\emptyset)$, the set of all sections. 
\begin{proposition}[Connectivity] \label{prop:connect} If $K$ is a collection of $\tau$-edges in $X$ with pairwise disjoint interiors, then $T(K)$ is connected via tetrahedron moves. \end{proposition} \subsection*{Finding flippable edges} Let $T$ be a section and let $\sigma$ be an edge of $\tau$, which is not an edge of $T$. Any edge $e$ of $T$ crossing $\sigma$ must do so from top to bottom ($e>\sigma$) or left to right ($e<\sigma$), as in \Cref{veering defs}, and we further note that all edges of $T$ that cross $\sigma$ do it consistently, all top-bottom or all left-right, since they are disjoint from each other. \begin{lemma} \label{lem:down_flip} Let $T$ be a section and suppose that an edge $\sigma$ of $\tau$ is crossed by an edge $e$ of $T$. If $e>\sigma$, then there is an edge of $T$ crossing $\sigma$ which is downward flippable. Similarly if $e<\sigma$ then there is an edge of $T$ crossing $\sigma$ which is upward flippable. \end{lemma} \begin{proof} Assuming the crossings of $\sigma$ are top to bottom, let $e$ be the edge crossing $\sigma$ that has largest height with respect to $q$. Let $D$ be a triangle of $T$ on either side of $e$ and let $f$ be its tallest edge. Drawing the rectangle $M$ in which $D$ is inscribed (\Cref{tallest-crossing}) one sees that $R$, the rectangle of $\sigma$, is forced to cross it from left to right. Hence, the edge $f$ must also cross $\sigma$. Therefore, $f=e$ by choice of $e$. It follows that $e$ is a downward flippable edge. \end{proof} \realfig{tallest-crossing}{The tallest $T$-edge crossing $\sigma$ must also be tallest in its own triangles.} \subsection*{Pockets} Let $T$ and $T'$ be two sections and $K$ their intersection, as a subcomplex in $X\times\mathbb{R}$. Because both sections are embedded copies of $X$ transverse to the suspension flow, their union $T \cup T'$ divides $X \times \mathbb{R}$ into two unbounded regions and some number of bounded regions. 
Each bounded region $U$ is a union of tetrahedra bounded by two isotopic subsurfaces of $T$ and $T'$, which correspond to a component $W$ of the complement of $\pi(K)$ in $X$. The isotopy is obtained by following the flow, and if it takes the subsurface of $T'$ upward to the subsurface of $T$ we say that {\em $T$ lies above $T'$ in $U$}. We call $U$ a \emph{pocket over $W$}, and sometimes write $U_W$. We call $W$ the \emph{base} of the pocket $U$. \begin{lemma}\label{lem:slope_drop} With notation as above, $T$ lies above $T'$ in the pocket $U_W$ if and only if, for every edge $e$ of $T$ in $W$ and edge $e'$ of $T'$ in $W$, if $e$ and $e'$ cross then $e>e'$. \end{lemma} Note that, for each edge $e$ of $T$ in $W$ there is in fact an edge $e'$ of $T'$ in $W$ which crosses $e$, since both $T$ and $T'$ are triangulations, with no common edges in $W$. \begin{proof} Suppose that $T$ lies above $T'$ in $U_W$ and let $e$ be an edge of $T$ in $W$; hence, it is in the top boundary of $U$. Let $Q$ be the tetrahedron of $\tau$ for which $e$ is the top edge. Via the local picture around $e$ (see \Cref{veering defs} and \Cref{edge-swing}), we see that $Q$ lies locally below $T$. Its interior is of course disjoint from $T$ and $T'$ (and the whole $2$-skeleton), hence it is inside $U$. Let $e_1$ be the bottom edge of $Q$. Note $e > e_1$. If $e_1$ is in $T'$, stop (with $e' = e_1$). Otherwise it is in the interior of $U$, and we can repeat with the tetrahedron for which $e_1$ is the top edge. We get a sequence of steps terminating in some $e'$ in $T'$, which must be in the boundary of $U$, and conclude $e > e'$ (by the transitivity of $>$ as in \Cref{veering defs}). Now from the paragraph before \Cref{lem:down_flip}, the same slope relation holds for every edge of $T'$ crossing $e$, hence giving the first implication of the lemma. For the other direction, exchange the roles of $T$ and $T'$ in the proof.
\end{proof} \subsection*{Connectedness of $T(K)$} We can now prove \Cref{prop:connect}. \begin{proof} Let us consider $T$, $T'$ in $T(K)$. Let $U$ be one of the pockets, and suppose $T$ lies above $T'$ in $U$. \Cref{lem:slope_drop} together with \Cref{lem:down_flip} implies that $T$ has a downward flippable edge $e$ which crosses an edge of $T'$ that is in $W$. In particular $e$ itself is in $W$. Performing this flip we reduce the number of tetrahedra contained in pockets. Thus a finite number of moves will take $T$ to $T'$, without disturbing $K$. \end{proof} As a consequence of \Cref{prop:connect} and its proof we have: \begin{corollary}\label{cor:top} If $K$ is a nonempty subcomplex of $\tau$ and $T(K) \neq \emptyset$, then there are unique sections $T^+(K)$ and $T^-(K)$ in $T(K)$ such that every $T \in T(K)$ can be upward flipped to $T^+(K)$ and downward flipped to $T^-(K)$. \end{corollary} \begin{proof} First note that $T(K)$ is finite: because $\tau$ is locally finite at the edges, there are only finitely many choices for a triangle adjacent to $K$. We then enlarge $K$ successively, noting that there is a bound on the number of triangles in a section. Thus there exists a section $T^+$ in $T(K)$ which is not upward flippable in $T(K)$. For any two sections $T_1,T_2\in T(K)$ there is a $T_3\in T(K)$ obtained as the union of the tops of the pockets of $T_1$ and $T_2$ and their intersection. Thus $T_1$ is upward flippable unless $T_1=T_3$, and similarly for $T_2$. This implies that $T^+$ is the unique section in $T(K)$ which is not upward flippable, and every other section is upward flippable to $T^+$. We define $T^-$ analogously. \end{proof} The section $T^+(K)$ is called the \emph{top of $T(K)$} and the section $T^-(K)$ is called the \emph{bottom of $T(K)$}. Note that any section obtained from $T^+(K)$ by upward diagonal exchanges is not in $T(K)$. \subsection*{Extension lemma} We conclude this section with two proofs of \Cref{lem:extension}. 
\begin{proof}[Proof one] \Cref{gue-sweep} gives us, in particular, the existence of at least one section $T_0$ which is disjoint from $E$, which we may assume lies above every edge of $E$. Then by \Cref{lem:down_flip} there is a downward flippable edge $e$ in $T_0$. The tetrahedron involved in the move lies above $E$, so $E$ still lies below (or is contained in) the new section $T_1$. We repeat this process, and at each stage every edge of $E$ is either contained in $T_i$ or crosses an edge of $T_i$ and lies below it. Thus by \Cref{lem:down_flip}, unless $E\subset T_i$ each $T_i$ contains a downward flippable edge that is not contained in $E$. Because $\tau$ is locally finite at each edge, {\em any} sequence of downward flips is a proper sweepout of the region below $T_0$, and hence must eventually meet every edge of $\tau$ below $T_0$. Thus we may continue until every edge of $E$ lies in $T_i$. \end{proof} \begin{proof}[Proof two] Our second proof does not use \Cref{gue-sweep}, and in particular it gives an independent proof of the existence of sections. Let $D$ be a component of the complement of $E$ which is not a triangle. Let $e$ be an edge of $\partial D$ and consider the collection of $\tau$-tetrahedra adjacent to $e$. These contain a sequence $Q_-,Q_1,\ldots Q_m,Q_+$, as in \Cref{edge-swing}, where $Q_-$ is the tetrahedron with $e$ as its top edge, $Q_+$ is the tetrahedron with $e$ as its bottom edge, and the rest are adjacent to $e$ on the same side as $D$ (if $D$ meets $e$ on two sides we just choose one). Two successive tetrahedra in this sequence share a triangular face. We claim that one of these faces must be contained in $D$. Equivalently we claim that one of the triangles is not crossed by any edge of $E$. Since each tetrahedron $Q$ is inscribed in a singularity free rectangle $R$, if an edge $f$ of $E$ crosses any edge of $Q$ its rectangle crosses all of $R$. 
It follows immediately, since the edges of $E$ have disjoint interiors, that they consistently cross $R$ all vertically, or all horizontally. Because successive tetrahedra in the sequence share a face it follows inductively that, if all the faces are crossed by $E$, then they are all consistently crossed horizontally, or all vertically. However, $Q_-$ can only be crossed vertically by $E$ (since $E$ does not cross $e$). Similarly $Q_+$ can only be crossed horizontally. It follows that there must be a triangular face $F$ that is {\em not} crossed by $E$. Thus $F$ is contained in $D$. Since $D$ is not a triangle, at least one edge of $F$ passes through the interior of $D$. We add this edge to $E$ and proceed inductively. \end{proof} \section{Projections and compatible subsurfaces} \label{surface_reps} In this section we show that if $Y\subset X$ is a compact essential subsurface of large projection distance $d_Y(\lambda^+,\lambda^-)$, then $Y$ has particularly nice representations with respect to, first, the quadratic differential $q$ and, second, the veering triangulation $\tau$. We emphasize that in this section, the surface $X$ is not necessarily fully-punctured. \subsection{Projection and $q$--compatibility} Recall the $q$-convex hull map $\hat\iota_q \colon Y \to \bar X_Y$ constructed in \Cref{q tight}. We say that $Y$ is {\em $q$-compatible} if $\hat\iota_q$ is an embedding of $Y' = Y\ssm \partial_0 Y$, as in part (6) of \Cref{q tight}. (Recall that $\partial_0 Y$ maps to completion points of $\til \PP_Y$). This condition implies a little more: \realfig{Y_q_cartoon_2}{The image of a $q$-compatible subsurface $Y$ in $\bar X_Y$ under $\hat \iota_q$. Open circles are points of $\til \PP_Y$ (corresponding to the image of $\partial_0 Y$) and dots are singularities not contained in $\til \PP_Y$. 
The ideal boundary of $X_Y$ is in blue.} \begin{lemma} \label{lem:int_embed} If $Y \subset X$ is $q$-compatible, then \begin{enumerate} \item the projection $\iota_q \colon Y \to \bar X$ of $\hat \iota_q$ to $\bar X$ is an embedding from $\int(Y)$ into $X$ which is homotopic to the inclusion, and \item $\hat\iota_q(\partial'Y)$ does not pass through points of $\til\PP_Y$. \end{enumerate} \end{lemma} Recall that $\partial'Y = \partial Y \ssm \partial_0Y$. \begin{proof} Recall from \Cref{q tight} that $q$-compatibility of $Y$ is equivalent to the statement that the interior of the $q$-hull ${\operatorname{CH}}_q(\Lambda) \subset \hat X$ is a disk (i.e. it is not pinched along singularities or saddle connections). If $\iota_q \colon \int(Y) \to X$ fails to be an embedding, then it must be that for some deck transformation $g$ of the universal covering $\til X \to X$ the interiors of ${\operatorname{CH}}_q(\Lambda)$ and $g \cdot {\operatorname{CH}}_q(\Lambda)$ are distinct and overlap. But then it follows immediately from \Cref{cor:side_coherence} that the distinct hyperbolic convex hulls ${\operatorname{CH}}(\Lambda)$ and $g \cdot {\operatorname{CH}}(\Lambda)$ overlap, contradicting that $Y$ is a subsurface of $X$. This proves part (1). For part (2), let $\beta$ be a component of $\partial_0 Y$. Since $\hat\iota_q$ embeds $Y\ssm \partial_0Y$, a collar neighborhood $U$ of $\beta$ in $Y$ maps to a neighborhood $V$ of the puncture $p = \hat\iota_q(\beta)$. Now if $\gamma$ is a component of $\partial'Y$, $q$-compatibility again implies its image must avoid $V\ssm p$. Since $\hat\iota_q(\gamma)$ cannot equal $p$, it must be disjoint from it. \end{proof} Note that $Y$ is a $q$-compatible annulus if and only if the core of $Y$ is a cylinder curve in $X$. In this case, the corresponding open flat cylinder in $X$ is $\iota_q(\int (Y))$. 
In general, if $Y$ is $q$-compatible then one component of $X \ssm \partial_q Y$ is an open subsurface isotopic to the interior of $Y$; this is the image $\iota_q(\int(Y))$ and is denoted $\int_q(Y)$. The following proposition shows that mild assumptions on $d_Y(\lambda^+,\lambda^-)$ imply that $Y$ is $q$-compatible. \begin{proposition}[$q$-Compatibility] \label{prop: q_compatible} Let $Y\subset X$ be an essential subsurface. If $Y$ is non-annular and $d_Y(\lambda^+,\lambda^-) > 0$, then $Y$ is $q$-compatible. If $Y$ is an annulus and $d_Y(\lambda^+,\lambda^-) > 1$, then $Y$ is $q$-compatible. In this case, $\int_q(Y)$ is a flat cylinder. \end{proposition} \begin{proof} We treat the non-annular case first. Suppose that $d_Y(\lambda^+,\lambda^-)>0$. Recall from \Cref{AY in flat geometry} that we have identified $\til X$ with $\HH^2$, set $\Lambda\subset \partial\HH^2$ to be the limit set of $\Gamma = \pi_1(Y)$, set $\Omega = \partial\HH^2\ssm \Lambda$, and defined $\hat\PP_Y\subset \Lambda$ to be the set of parabolic fixed points of $\pi_1(Y)$. Note that $\hat \PP_Y = \Lambda \cap \hat \PP$. Further recall from part (6) of \Cref{q tight} that the map from $Y'$ to ${\operatorname{CH}}_q(X_Y)$ is an embedding, provided the interior of ${\operatorname{CH}}_q(\Lambda)$ is a disk. Since ${\operatorname{CH}}_q(\Lambda)$ is the result of deleting the interior of the side $\side{l}$ from $\hat X$ for each hyperbolic geodesic line $l$ in $\partial {\operatorname{CH}}(\Lambda)$, it suffices to show that \begin{enumerate} \item for each geodesic line $l$ in $\partial {\operatorname{CH}}(\Lambda)$, the interior of the corresponding $q$-geodesic $l_q$ does not meet $\partial \HH^2 \ssm \side{l}$, and \item if $l$ and $l'$ are distinct geodesic lines in $\partial {\operatorname{CH}}(\Lambda)$ then $l_q$ and $l'_q$ do not meet in $\widetilde X$. 
\end{enumerate} First suppose that condition $(1)$ is violated for some geodesic line $l$ in $\partial {\operatorname{CH}}(\Lambda)$ and point $\hat p \in \partial \HH^2 \ssm \side{l}$. Set $p$ to be the image of $\hat p$ in $\bar X_Y$. Letting $\gamma$ be the boundary component of $\partial' Y$ that is the image of $l$ in $X_Y$, we see that the image of $l_q$ in $\bar X_Y$, which equals $\gamma_q = \hat \iota_q(\gamma)$, passes through the point $p$. Since $l_q$ is a geodesic in $\hat X$, we see that $\hat p$ is a completion point and so either $\hat p \in \hat \PP_Y$ or $\hat p \in \hat \PP \ssm \hat \PP_Y$. Assume that $\hat p\in \hat \PP_Y$. Then $p\in \widetilde \PP_Y$ corresponds to a puncture of $Y$. Recall that by \Cref{q tight}, the image of the open side $\openside{l} = \int(\side{l} \cap \widetilde X)$ in $X_Y$ is either an open annulus or a disjoint union of open disks; in either case, set $A_\gamma$ equal to the component which contains $p$ in its boundary. The angle at $p$ in $A_\gamma$ between the incoming and outgoing edges of $\gamma_q$ is at least $\pi$, which implies that $A_\gamma$ contains a horizontal and a vertical ray $l^-,l^+$ emanating from $p$ (see \Cref{p_gamma}). \realfig{p_gamma}{When $\hat\iota_q(\partial' Y)$ passes through a point of $\widetilde \PP_Y$, $d_Y(\lambda^+,\lambda^-) =0$.} These rays are proper $q$-geodesic lines in $X_Y$ (because $p$ is a puncture, not a point of $X_Y$), and hence by \Cref{q arcs for AY} represent vertices of $\pi_Y(\lambda^-)$ and $\pi_Y(\lambda^+)$, respectively. Further, since the rays only intersect within the annulus or disk $A_\gamma$ and $Y$ is itself nonannular, we see that $l^-$ and $l^+$ in fact represent the same point in $\A(Y)$. (Actually, if $A_\gamma$ does not contain a flat cylinder, then the interiors of $l^-$ and $l^+$ are disjoint as we show below). Either way, it follows that $$ d_Y(\lambda^+,\lambda^-) =0, $$ a contradiction. Next assume that $\hat p \in \hat \PP \ssm \hat \PP_Y$.
Since $\hat p \notin \side{l} \cap \partial \HH^2$ we may set $A$ to be the component of the image of $\openside{l}$ in $X_Y$ which contains $p \in \hat X_Y$ in its boundary. As before, the angle subtended by $\gamma_q$ at $p$ in the boundary of $A$ is at least $\pi$ (see \Cref{modified_pinch-puncture}). A pair of rays $l^\pm$ emanating from $p$ into $A$ are properly embedded lines and again represent the same vertex of $\A(Y)$, giving us $d_Y(\lambda^+,\lambda^-) =0$. \realfig{modified_pinch-puncture}{$Y_q$ is pinched at a completion point.} We conclude that condition $(1)$ is satisfied. Next suppose that geodesics $l$ and $l'$ in the boundary of ${\operatorname{CH}}(\Lambda)$ violate $(2)$, i.e. $l_q$ and $l'_q$ meet in $\til X$. Let $\til I = l_q \cap l'_q \subset \hat X$ which, since $\hat X$ is CAT$(0)$, is a connected subset of each of $l_q,l'_q$. In general, the intersection in $\hat X$ of two $q$-geodesic lines is either a single singularity (possibly a completion point) or a union of saddle connections. Because $l_q$ and $l'_q$ meet in $\widetilde X$, $\til I$ contains either a saddle connection or a singularity which is not a completion point. Let $\gamma,\gamma',\gamma_q,\gamma'_q, I$ be the images in $\bar X_Y$ of $l,l',l_q,l'_q,\til I$, respectively. Suppose first that $I$ contains a saddle connection $\sigma$. In this case, let $A$ be the component of the image of the open side $\openside{l}$ in $X_Y$ which contains $\sigma$ in its boundary, and define $A'$ similarly. (Note that it is possible that $A = A'$ and that $A$ and $A'$ meet along other saddle connections and singularities besides $\sigma$, but this will not change the discussion.) Any point of $\sigma$ is crossed by a pair $l^+,l^-$ of leaves of $\lambda^+,\lambda^-$, which as proper arcs of $X_Y$ determine the same vertex of $\A(Y)$.
Hence, we conclude once again that $d_Y(\lambda^+,\lambda^-) =0.$ \realfig{common-saddle}{$Y_q$ is pinched along a saddle connection.} Finally, suppose that $I$ contains a singularity $x$ in $X_Y$ (i.e. $x$ is not a completion point). Again, set $A$ to be the component of the image of $\openside{l}$ in $X_Y$ which contains $x$ in its boundary and $A'$ to be the component of the image of $\openside{l'}$ in $X_Y$ which contains $x$ in its boundary. As before, there is an angle of at least $\pi$ on the $A$ side of $\gamma_q$ and on the $A'$ side of $\gamma'_q$, so we can find pairs of rays $r_0^\pm$ emanating from $x$ on the $A$ side, and $r_1^\pm$ emanating on the $A'$ side (see \Cref{common-sing}). The unions $l^+ = r_0^+\cup_x r_1^+$ and $l^- = r_0^-\cup_x r_1^-$ are generalized leaves of $\lambda^+$ and $\lambda^-$, respectively, and again determine the same point in $\A(Y)$ so we conclude that $d_Y(\lambda^+,\lambda^-) =0. $ \realfig{common-sing}{$Y_q$ is pinched at a singularity which is not a completion point.} We conclude that if $Y$ is nonannular and $d_Y(\lambda^+,\lambda^-) >0$, then $Y$ is $q$-compatible. When $Y$ is an annulus, almost the same argument applies. The difference is that the arcs $l^\pm$ we obtain are not homotopic with fixed endpoints, and so do not determine the same vertex of $\A(Y)$. However, in each case we will show they have disjoint interiors, concluding $d_Y(l^+,l^-) \le 1$, and so $$ d_Y(\lambda^+,\lambda^-) \le 1. $$ To see this, let $\gamma$ denote the core of $Y$ and let $\gamma_q$ be a geodesic representative in $\bar X_Y$. Supposing that $\int_q(Y)$ is not a flat annulus, we first claim the following: For any singular point $p$ crossed by $\gamma_q$, if $l^+$ and $l^-$ are rays of $\lambda^+$ and $\lambda^-$, respectively, meeting with angle $\pi/2$ at $p$, then the interiors of $l^+$ and $l^-$ do not meet. \realfig{GB_annulus_2}{The $q$-geodesic $\gamma_q$ is the black hexagon. 
An interior intersection between $l^+$ and $l^-$ contradicts the Gauss--Bonnet theorem.} To establish the claim, assume that the interiors of $l^{\pm}$ meet and refer to \Cref{GB_annulus_2}. Let $A_\gamma$ be the complementary region of $\gamma_q$ in $\bar X_Y$ containing $p'$, the interior intersection of $l^{\pm}$. If $A_\gamma$ is a disk, then the claim follows immediately from the uniqueness of geodesics in a CAT(0) space. Hence, we may assume that $A_\gamma$ is an annulus. Let $l^+_\ep$ be a leaf of $\lambda^+$ parallel to $l^+$ and slightly displaced to the interior of $A_\gamma$, so that the region $R$ bounded by $\gamma_q$ and the segments of $l^-$ and $l^+_\ep$ is an annulus. The total curvature of the $l^-l^+_\ep$ boundary of $R$ is 0 since it is straight except for two right turns of opposite signs, and the total curvature of $\gamma_q$ as measured from inside $R$ is nonpositive (since each singularity on $\gamma_q$ subtends at least angle $\pi$ within $R$). Since $\chi(R)= 0$ and the Gaussian curvature in $R$ (including singularities) is nonpositive, the Gauss--Bonnet theorem implies that the total curvature of $\partial R$ is nonnegative. This implies that the total curvature of $\gamma_q$ is 0, which means that $\gamma_q$ bounds a flat cylinder, contradicting our assumption. This establishes the claim. We now return to the proof of the proposition. First suppose that $\gamma_q$ passes through a completion point $x$ of $\bar X_Y$. Then, just as in \Cref{modified_pinch-puncture}, we can find a pair of rays $l^\pm$ emanating from $x$ into $A_\gamma$. By the claim above, the interiors of these rays do not meet and so $d_Y(\lambda^+,\lambda^-) \le 1$ as desired. Finally, suppose that $\gamma_q$ remains in $X_Y$, i.e. it does not pass through any completion points. It must still pass through a singularity $x$, and we note that the total angle at $x$ is at least $3\pi$.
Recall that $\gamma_q$ subtends at least angle $\pi$ at $x$ to either of its sides and we note that some side of $\gamma_q$ sees angle at least $3\pi /2$ at $x$. Let $A$ denote this side of $\gamma_q$ and let $A'$ denote the other side. Note that $A \neq A'$ since $X_Y$ is an annulus which $\gamma_q$ separates. The angle of $3\pi/2$ tells us there are at least $3$ rays of $\lambda^\pm$ emanating into $A$. Now choose rays $r_0^\pm$ of $\l^\pm$ emanating from $x$ on the $A'$ side. Because the $3$ (or more) rays of $\lambda^\pm$ emanating from $x$ into $A$ alternate between $\lambda^+$ and $\lambda^-$, we can choose from them two rays $r_1^\pm$ of $\l^\pm$ such that $r_0^+,r_1^+,r_1^-,r_0^-$ are listed in the cyclic ordering of directions at $x$ (either clockwise or counterclockwise). The generalized leaves $l^+ = r_0^+\cup_x r_1^+$ and $l^- = r_0^-\cup_x r_1^-$ then represent arcs in the projections $\pi_Y(\l^+)$ and $\pi_Y(\l^-)$ and after a slight perturbation these leaves have disjoint interiors. Hence, again we see that $d_Y(\l^+,\l^-)\le 1$. We conclude that if $Y$ is an annulus with $d_Y(\l^+,\l^-) > 1$ then $Y$ is $q$-compatible. \qedhere \end{proof} \subsection{Projections and $\tau$-compatibility} \label{sec: hulls_punctured} We now show how to associate to a subsurface $Y$ of large projection distance a representative of $Y$ which is ``simplicial'' with respect to the veering triangulation. This will later be used to prove that such a subsurface induces a pocket of the veering triangulation $\tau$. Informally, we start with a $q$-compatible subsurface $Y \subset X$ and homotope $\hat \iota_q$ by pushing $\partial_q Y$ onto $\tau$-edges (this process is depicted locally in \Cref{thull-twice}). Formally, this is done in two steps using the map $\thull(\cdot)$ described in \Cref{sec:t_hulls}, although some care must be taken in order to ensure that the resulting object gives an embedded representative of $\int(Y)$.
Call a subsurface $Y \subset X$ \emph{$\tau$-compatible} if the map $\hat\iota_q:Y \to \bar X_Y$ is homotopic rel $\partial_0 Y$ to a map $\hat\iota_\tau:Y\to \bar X_Y$ which is an embedding on $Y' = Y\ssm \partial_0Y$ such that \begin{enumerate} \item $\hat\iota_\tau$ takes each component of $\partial'Y = \partial Y \ssm \partial_0 Y$ to a simple curve in $\bar X_Y \ssm \til{\PP}_Y$ composed of a union of $\tau$-edges and \item the map $\iota_\tau \colon Y \to \bar X$ obtained by composing $\hat \iota_\tau$ with $\bar X_Y \to \bar X$ restricts to an embedding from $\int(Y)$ into $X$. \end{enumerate} We will show that when $d_Y(\lambda^-,\lambda^+)$ is sufficiently large, the subsurface $Y$ is $\tau$-compatible and in this case we set $\partial_\tau Y = \iota_\tau(\partial'Y)$ which is a collection of $\tau$-edges with disjoint interiors. We call $\partial_\tau Y$ the \emph{$\tau$--boundary} of $Y$ and consider it as a $1$-complex of $\tau$-edges. Similar to the situation of a $q$-compatible subsurface, if $Y$ is $\tau$-compatible then one component of $X \ssm \partial_\tau Y$ is an open subsurface isotopic to the interior of $Y$; this is the image $\iota_\tau(\int(Y))$ and is denoted $\int_\tau (Y)$. \begin{theorem}[$\tau$-Compatibility]\label{thm: tau-compatible} Let $Y\subset X$ be an essential subsurface. \begin{enumerate} \item If $Y$ is nonannular and $d_Y(\lambda^+,\lambda^-) > 0$, then $Y$ is $\tau$-compatible. \item If $Y$ is an annulus and $d_Y(\lambda^+,\lambda^-) > 1$, then $Y$ is $\tau$-compatible. \end{enumerate} \end{theorem} \begin{proof} Suppose that $d_Y(\lambda^+,\lambda^-) > 0$ if $Y$ is nonannular and $d_Y(\lambda^+,\lambda^-) > 1$ otherwise. By \Cref{prop: q_compatible}, $Y$ is $q$-compatible and so $\hat \iota_q:Y\to \bar X_Y$ is an embedding on $Y'$. Let $Y_q$ denote its image. We first suppose that $Y$ is not an annulus. Give $\partial ' Y$ the transverse orientation pointing into $Y$. 
For any saddle connection $\sigma$ in $\hat\iota_q(\partial' Y)$ and any triangle $t \in \cT(\sigma)$ pointing into $Y$ (see \Cref{sec:t_hulls} for definitions), note that the singularities of $\bar X_Y$ in $\partial t$ are \emph{not} completion points of $\bar X_Y$, that is they do not correspond to punctures of $X$. This is because any completion point lying in $t$ is the endpoint of leaves $l^\pm$ of $\lambda^\pm$ whose initial segments lie in $t$. These leaves correspond to essential proper arcs of $X_Y$ which are homotopic giving $d_Y(\lambda^-,\lambda^+) =0$, a contradiction. Similarly, we can conclude that for each saddle connection $\sigma$ in $\hat\iota_q(\partial' Y)$ and any $t \in \cT(\sigma)$ pointing into $Y$, the triangle $t$ is entirely contained in $Y_q$. Otherwise, similar to the proof of \Cref{prop: q_compatible}, we find leaves $l^+$ and $l^-$ in $\bar X_Y$ whose intersection with $Y_q$ is contained in $t$ and hence whose projections to $\A(Y)$ are equal. See the left side of \Cref{modified_thull-overlap}. Since $d_Y(\lambda^+,\lambda^-)>0$ this is impossible. Hence, the map $\thull^+(\hat\iota_q|_{\partial' Y})$ (as defined in (\ref{thull f}) in \Cref{sec:t_hulls}) is homotopic to $\hat\iota_q|_{\partial' Y}$ in $\bar X_Y \ssm \til{\PP}_Y$ by pushing across the polygonal regions given by \Cref{thull structure} along leaves of $\lambda^+$. This extends to a homotopy of $\hat \iota_q$ to a map $\hat \iota' \colon Y \to \bar X_Y$ which we claim is still an embedding. (Note that, in the case that $X$ is fully-punctured, $\hat \iota' = \hat\iota_q$, since all singularities of fully-punctured surfaces are completion points.) To prove that $\hat\iota'$ is an embedding, let $C$ be a component of the preimage of $\hat\iota_q(Y')$ in $\hat X$ (using the notation of \Cref{AY in flat geometry}, $C$ is a translate of ${\operatorname{CH}}_q(\Lambda)$). 
If $\alpha$ is a geodesic segment in $\partial C$, the triangles used in the hull construction are attached to $\alpha$ and are contained in $C$. If such a triangle $t$ intersects a triangle $t'$ from a different segment $\alpha'$, they overlap as in the right side of \Cref{modified_thull-overlap}. Any two arcs $l^+,l^-$ of $\lambda^+$ and $\lambda^-$ passing through a point in the overlap must intersect both $\alpha$ and $\alpha'$. These arcs are at distance 0 in $\A(Y)$, since they can be isotoped to each other rel $\partial Y$. Hence $d_Y(\lambda^-,\lambda^+) = 0$, contradicting the hypothesis. Therefore, $t,t'$ cannot overlap. \realfig{modified_thull-overlap}{Left: If $t \in \cT(\sigma)$ (in red) is not contained in $Y_q$ then $d_Y(\lambda^+,\lambda^-) =0$. Right: An overlap of two hull triangles. Any completion point in the boundary of a hull triangle does not correspond to a puncture in $\til{\PP}_Y$.} We conclude that the polygonal regions of our homotopy are embedded and disjoint, and thus the homotopy can be chosen so that $\hat \iota'$ is an embedding. Since the image of $\hat \iota'$ is contained in the image of $\hat \iota_q$, we apply \Cref{lem:int_embed} to get that the projection $\iota' \colon Y \to \bar X$ restricts to an embedding on $\int(Y)$. Now orient $\partial'Y$ in the opposite direction, pointing out of the surface, and apply $\thull$ again, this time to $\hat \iota'(\partial' Y)$. The triangles in the construction now extend outside the surface, and the result of the operation is the rectangle hull $\rhull(\thull(\hat\iota_q(\partial' Y)))$, which is therefore composed of $\tau$-edges. Using the homotopy pushing $\hat \iota'|_{\partial' Y}$ outward along leaves of $\lambda^+$ to $\thull^+(\hat\iota'|_{\partial' Y})$ (again using \Cref{thull structure}) we obtain our final map $\hat\iota_\tau$. See \Cref{thull-twice}. It remains to show that $\hat\iota_\tau \colon Y \to \bar X_Y$ has the required properties. 
To prove this, let us recapitulate the construction in the universal cover. \realfig{thull-twice}{An inner $\thull$ followed by outer $\thull$ yields $\tau$-edges. This locally depicts the homotopy from $\hat \iota_q$ to $\hat \iota_\tau$.} As before, let $C = {\operatorname{CH}}_q(\Lambda)$. The map $\hat \iota'$ lifts to a $\pi_1(Y)$-equivariant homeomorphism $C \to C'$, where $C'$ is obtained by giving each saddle connection $\kappa$ in the boundary of $C$ the transverse orientation pointing into $C$ and removing the polygons $P(\kappa)$ given in \Cref{thull structure}. This map is equivariantly homotopic to the identity by pushing along leaves of the vertical foliation. The outward step of our construction then pushes back along leaves of the vertical foliation to obtain a $\pi_1(Y)$-equivariant map $C'\to C_\tau\subset \hat X$, so that the composition $C\to C'\to C_\tau$ is a lift of the map $\hat\iota_\tau \colon Y' \to \bar X_Y$. To show that $\hat\iota_\tau \colon Y' \to \bar X_Y$ is an embedding, it suffices to show that the composition $C\to C_\tau$ is a homeomorphism. For every non-singular point $p\in \partial C$ there is an arc $n_p$ in $\lambda^+$ such that the deformation of $C$ to $C_\tau$ is supported on the union $\bigcup n_p$, and preserves each $n_p$. Thus to show that $C\to C_\tau$ is a homeomorphism it suffices to show that $n_p \cap n_{p'} =\emptyset$ for each $p\ne p'$ in $\partial C$. The interior pieces, $n_p\cap C$, are already disjoint for distinct points, by our construction. Thus if $n_p$ intersects $n_{p'}$ their union is an interval $J$ in a leaf of $\lambda^+$ with some subinterval between $p$ and $p'$ lying outside $C$. This contradicts the convexity of $C$. To show that $\iota_\tau$ is an embedding when restricted to $\int(Y)$, it suffices to check that the interior of $C_\tau$ is disjoint from all its translations under the entire deck group $\pi_1(X)$. 
To see this, take $g \in \pi_1(X)$ so that $C_\tau$ and $g \cdot C_\tau$ are distinct and intersect. Since $\iota' \colon \int(Y) \to X$ is an embedding, $C'$ and $g \cdot C'$ meet only along their boundary. Further, if $\sigma$ is a saddle connection in $\partial C' \cap \partial (g \cdot C')$, then $\sigma$ is the hypotenuse of a singularity-free triangle pointing into $C'$ as well as one pointing into $g\cdot C'$. Hence, $\sigma$ is a $\tau$-edge and so is fixed under the map $C' \to C_\tau$. Now if the interiors of $C_\tau$ and $g\cdot C_\tau$ intersect there must be saddle connections $\sigma,\kappa \subset \partial C'$ such that $P(\sigma)$ and $P(g \cdot \kappa)$ have intersecting interiors. (Here, $\sigma,\kappa$ are oriented out of $C'$.) By the previous paragraph, $\sigma$ and $g\cdot \kappa$ are distinct. As $C'$ and $g \cdot C'$ meet only along their boundary, $\sigma$ and $g \cdot \kappa$ have disjoint interiors and any arc $l$ of $\lambda^+$ joining $\sigma$ to $g \cdot \kappa$ within $P(\sigma) \cup P(g \cdot \kappa)$ lives outside of $C'$ and $g \cdot C'$. In particular, the chosen transverse orientations on $\sigma$ and $g \cdot \kappa$ point to the interior of $l$. However, by \Cref{disjoint_thulls}, in this situation, the interiors of $P(\sigma)$ and $P(g \cdot \kappa)$ do not intersect. It follows that $\iota_\tau \colon \int(Y) \to X$ is an embedding. It only remains to prove property $(1)$ of the definition of $\tau$-compatible. Since $\hat \iota_\tau \colon Y' \to \bar X_Y$ is an embedding, it follows that $\hat \iota_\tau|_{\partial' Y}$ is an embedding, and its image does not meet $\widetilde \PP_Y$ by the same argument used to prove item $(2)$ of \Cref{lem:int_embed}. By construction the image $\hat \iota_\tau (\partial' Y)$ is composed of $\tau$-edges. Now suppose that $Y$ is an annulus. Then $\hat \iota_q(Y)$ is the (nondegenerate) maximal flat cylinder of $\bar X_Y$ by \Cref{prop: q_compatible}.
Choosing the inward-pointing orientation for $\partial Y$, we claim that $\thull^+(\hat \iota_q|_{\partial Y}) = \hat \iota_q|_{\partial Y}$: Otherwise, there must be a saddle connection $\sigma$ on the boundary of the flat annulus $\hat \iota_q(Y)$, and a triangle $t$ pointing into the annulus with hypotenuse on $\sigma$, which encounters a singularity or puncture $x$ on the other side of the annulus. The picture is similar to the left side of \Cref{modified_thull-overlap}. A variation on the Gauss--Bonnet argument in the annulus case of \Cref{prop: q_compatible} then produces vertical and horizontal leaves passing through $x$ which have disjoint representatives, and hence $d_Y(\lambda^+,\lambda^-) \le 1$, contradicting our hypothesis. Thus the inward step of the process is the identity, and the outward step and the rest of the proof proceed just as in the nonannular case. \end{proof} \begin{remark} \label{rmk:fully} From the proof of \Cref{thm: tau-compatible}, we record the fact that if $X$ is fully-punctured and $Y$ satisfies the hypotheses of \Cref{thm: tau-compatible}, then $\thull^+(\hat \iota_q|_{\partial Y}) = \hat \iota_q|_{\partial Y}$ and $\hat \iota' = \hat \iota_q$. Hence, in this case we have that $\partial_\tau Y = \rhull(\partial_q Y)$. \end{remark}
\section{Introduction} This paper focuses on distributed graph algorithms, particularly on the fundamental problem of deterministic and local ways to compute network decompositions and \emph{low-diameter clusterings}, which cluster at least half of the nodes in a given graph into non-adjacent clusters with small diameter. In particular, the paper describes a drastically simplified efficient deterministic distributed construction for computing such a low-diameter clustering with polylogarithmic diameter in polylogarithmic rounds of the distributed $\mathsf{CONGEST}\,$ model. \medskip Starting with the seminal work of Luby \cite{luby86_lubys_alg} from the 1980s, fast and simple $O(\log n)$-round \emph{randomized} distributed algorithms are known for many fundamental symmetry breaking problems like maximal independent set (MIS) or $\Delta+1$ vertex coloring. For a long time, this was in stark contrast with the state-of-the-art deterministic algorithms. For multiple decades, it was a major open problem in the area of distributed graph algorithms to get deterministic algorithms with round complexity $\operatorname{poly} \log (n)$ for such problems, e.g., MIS or $\Delta+1$ vertex coloring. A recent breakthrough of Rozhoň and Ghaffari \cite{rozhon_ghaffari2019decomposition} managed to resolve this open problem. In their work, the authors presented the first polylogarithmic-round deterministic algorithm for network decompositions using (a weak-diameter version of) low-diameter clusterings. Network decomposition is the object we get by repeatedly finding a low diameter clustering and removing all the nodes in the clustering, until no node remains. See \cref{subsec:background} for the formal definitions. It was long known that low-diameter clusterings are the up-to-then-missing fundamental tool required for a large class of $\mathsf{LOCAL}\,$ deterministic distributed algorithms.
The clustering construction of \cite{rozhon_ghaffari2019decomposition} directly implied, among others, the first efficient distributed algorithms for MIS (together with the work of \cite{censor2017deterministic_mis_congest}) and $\Delta+1$ vertex coloring (together with the work of \cite{bamberger2020efficient}) in the standard bandwidth-limited $\mathsf{CONGEST}\,$ model of distributed computing. The main difference between the natural low-diameter clustering problem defined above and the weaker version solved in \cite{rozhon_ghaffari2019decomposition} is that clusters are not necessarily connected or induce a low-diameter subgraph on their own but instead have low \emph{weak-diameter}. A cluster has weak-diameter at most $D$ if any two nodes in the cluster are connected by a path of length at most $D$ \emph{in the original graph $G$ instead of within the cluster itself}. Hence, a cluster may even be disconnected. While the weak-diameter guarantee is enough for derandomizing local computations without bandwidth limitations, including MIS and $\Delta+1$-coloring, the original -- strong-diameter -- clustering stated above is clearly the natural and right object to ask for: It is strictly stronger, easier to define, easier to use in applications, and requires fewer and simpler objects and notation. Indeed, in distributed models with bandwidth limitations, such as the standard $\mathsf{CONGEST}\,$ model in which message sizes are restricted, it is not sufficient that clusters have small weak-diameter but one also needs to guarantee that there exist so-called low-depth Steiner trees connecting the nodes of each cluster. The collection of these Steiner trees must furthermore satisfy additional low-congestion guarantees, i.e., each edge or each node in the graph is not used by too many trees (as a Steiner node). Algorithms must also be able to compute the Steiner forest of a weak-diameter clustering efficiently.
Lastly, there are several applications, e.g., low-stretch spanning trees, where strong-diameter clusterings are strictly required and the weak-diameter guarantee does not suffice \cite{elkin_haeupler_rozhon_gruanu2022Clusterings_LSST}. This motivated the later works of \cite{chang_ghaffari2021strong_diameter,elkin_haeupler_rozhon_gruanu2022Clusterings_LSST} to give low-diameter clustering algorithms with strong-diameter guarantees, typically first building a weak-diameter clustering and then using it either for communication or as a starting point for recursively building a strong-diameter clustering out of it. This multi-step process still requires defining and maintaining Steiner forests for weak-diameter clusterings during intermediate steps. In this work, we show that there is a much simpler and direct way to get strong-diameter guarantees by designing a natural clustering process that combines key ideas from \cite{rozhon_ghaffari2019decomposition} and \cite{elkin_haeupler_rozhon_gruanu2022Clusterings_LSST}. \subsection{Preliminaries: Distributed $\mathsf{CONGEST}\,$ Model and Low-Diameter Clusterings} \label{subsec:background} We will now briefly introduce the standard model for distributed message-passing algorithms -- the $\mathsf{CONGEST}\,$ model of distributed computing~\cite{peleg00} and also give the definitions of clustering that we use (see \cite{elkin_haeupler_rozhon_gruanu2022Clusterings_LSST} for more discussion). \paragraph{$\mathsf{CONGEST}\,$} Throughout the paper, we work with the $\mathsf{CONGEST}\,$ model, which is the standard distributed message-passing model for graph algorithms~\cite{peleg00}. The network is abstracted as an $n$-node undirected graph $G=(V, E)$ where each node $v\in V$ corresponds to one processor in the network. Communications take place in synchronous rounds. Per round, each node sends one $O(\log n)$-bit message to each of its neighbors in $G$.
At the end of the round, each node performs some computations on the data it holds, before we proceed to the next communication round. We also consider the relaxed variant of the model, called $\mathsf{LOCAL}\,$, in which message sizes are unbounded. We capture any graph problem in this model as follows: Initially, the network topology is not known to the nodes of the graph, except that each node $v\in V$ knows its own unique $O(\log n)$-bit identifier. It also knows a suitably tight (polynomial) upper bound on the number $n$ of nodes in the network. At the end of the computation, each node should know its own part of the output, e.g., in the graph coloring problem, each node should know its own color. Whenever we say that there is ``an efficient distributed algorithm'', we mean that there is a $\mathsf{CONGEST}\,$ algorithm for the problem with round complexity $\operatorname{poly}(\log n)$. \paragraph{Low Diameter Clustering} The main object of interest that we want to construct is a so-called \emph{low diameter clustering}, which we formally define after introducing a bit of notation. Throughout the whole paper we work with undirected unweighted graphs and write $G[U]$ for the subgraph of $G$ induced by $U \subseteq V(G)$. We use $d_G(u,v)$ to denote the distance of two nodes $u,v \in V(G)$ in $G$. We also simplify the notation to $d(u,v)$ when $G$ is clear from context and generalize it to sets by defining $d_G(U,W) = \min_{u \in U, w \in W} d_G(u,w)$ for $U,W \subseteq V(G)$. The diameter of $G$ is defined as $\max_{u,v \in V(G)} d_G(u,v)$. We use the term \emph{clustering of $G$} to denote any set of disjoint vertex subsets of $G$.
A low diameter clustering is a clustering with additional properties: \begin{definition}[Low Diameter Clustering] \label{def:low_diam_clustering} A low diameter clustering $\mathcal{C}$ with diameter $D$ of a graph $G$ is a clustering of $G$ such that: \begin{enumerate} \item No two clusters $C_1 \not= C_2 \in \mathcal{C}$ are adjacent in $G$, i.e., $d(C_1, C_2) \ge 2$. \item For every cluster $C \in \mathcal{C}$, the diameter of $G[C]$ is at most $D$. \end{enumerate} \end{definition} Similarly, we define a low diameter clustering with weak-diameter at most $D$ by replacing condition (2) with the requirement that for each cluster $C \in \mathcal{C}$ and any two nodes $u,v \in C$ we have $d_G(u,v) \le D$. Whenever we construct a low diameter clustering, we additionally want it to cover as many nodes as possible. Usually, we want to cover at least half of the nodes of $G$, or formally, we require that $\left| \bigcup_{C \in \mathcal{C}} C \right| \ge n/2$. Sometimes, it is also necessary to generalize (1) and require a larger separation of the clusters, but this is not considered in this paper. Let us now give a formal definition of network decomposition. \begin{definition}[Network Decomposition] \label{def:network_decomposition} A network decomposition with $C$ colors and diameter $D$ is a coloring of nodes with colors $1, 2, \dots, C$ such that each color induces a low-diameter clustering of diameter $D$. \end{definition} Notice that whenever we can construct a low-diameter clustering with diameter $D$ that covers at least $n/2$ nodes, we get a network decomposition by repeatedly constructing a low diameter clustering and removing it from the graph. This way, we achieve a network decomposition with $C = O(\log n)$ and diameter $D$. Since virtually all deterministic constructions of network decomposition work this way, we focus on constructing low-diameter clusterings from now on.
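The reduction just described -- repeatedly cluster at least half of the remaining nodes, assign them the next color, and remove them -- can be sketched in a few lines of sequential Python. This is only an illustration of the counting argument, not a $\mathsf{CONGEST}\,$ algorithm; the helper \texttt{low\_diameter\_clustering} is a hypothetical placeholder for any routine returning non-adjacent clusters of diameter at most $D$ covering at least half of the given node set.

```python
def network_decomposition(G, low_diameter_clustering):
    """Sequential sketch of the clustering-to-decomposition reduction.

    G: dict mapping each node to its set of neighbors.
    low_diameter_clustering(G, remaining): placeholder returning a list of
    disjoint, pairwise non-adjacent node sets covering >= half of `remaining`.
    Returns a color per node and the number of colors used.
    """
    remaining = set(G)
    color = {}
    c = 0
    while remaining:
        c += 1
        clustering = low_diameter_clustering(G, remaining)
        clustered = set().union(*clustering) if clustering else set()
        # the halving guarantee is what bounds the number of colors
        assert 2 * len(clustered) >= len(remaining)
        for v in clustered:
            color[v] = c
        remaining -= clustered
    return color, c  # c = O(log n) colors, since `remaining` halves each round
```

Since each iteration removes at least half of the remaining nodes, the loop runs $O(\log n)$ times, matching the $C = O(\log n)$ bound above.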
The reason why network decomposition is a useful object is that it corresponds to the canonical way of using clusterings in distributed computing. To give an example, we show how to use it to solve the maximal independent set problem in the less restrictive $\mathsf{LOCAL}\,$ model. Given access to a network decomposition, we iterate over the $C$ color classes and gradually build independent sets $I_1 \subseteq I_2 \subseteq \dots \subseteq I_C$ where $I_C$ is maximal. In the $i$-th step, each cluster $K$ of the low-diameter clustering induced by the $i$-th color computes a maximal independent set in the graph induced by all the nodes in $K$ that are not neighboring a node in $I_{i-1}$ and we define $I_i$ by adding these independent sets to $I_{i-1}$. The set $I_C$ is clearly maximal. Computing the maximal independent set inside one cluster $K$ can be done in $O(D)$ rounds of the $\mathsf{LOCAL}\,$ model as follows: One node of the cluster collects all the information about $G[K]$ and its neighborhood in $G$, then locally computes a maximal independent set, and afterwards broadcasts the solution to the nodes in the cluster. Hence, the overall algorithm has round complexity $O(CD)$. Hence, given a network decomposition with $C,D = \operatorname{poly}(\log n)$, one can compute a maximal independent set in $\operatorname{poly}(\log n)$ rounds. Note that this brute-force approach for computing a maximal independent set critically relies on the fact that the $\mathsf{LOCAL}\,$ model does not restrict the size of messages. In the more restrictive $\mathsf{CONGEST}\,$ model, computing a maximal independent set inside a low diameter cluster becomes nontrivial, but one can use the deterministic MIS algorithm of \cite{censor2017deterministic_mis_congest} with round complexity $O(D \cdot \operatorname{poly}(\log n))$ where $D$ is the diameter of the input graph. 
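The color-by-color recipe above can likewise be sketched sequentially. The sketch below is a centralized illustration under stated assumptions: the input decomposition is given explicitly as a list of clusterings (one per color), and a greedy sweep stands in for the per-cluster ``collect, solve locally, broadcast'' brute force, since any maximal independent set of the undominated part of a cluster works at that step.

```python
def mis_from_decomposition(G, clusters_by_color):
    """Sequential sketch of MIS via network decomposition.

    G: dict mapping each node to its set of neighbors.
    clusters_by_color: one clustering per color, each a list of disjoint,
    pairwise non-adjacent node sets; together they must cover all of G.
    """
    I = set()  # the growing independent set I_1 <= I_2 <= ... <= I_C
    for clustering in clusters_by_color:
        for K in clustering:
            # nodes of K not yet dominated by the current independent set
            free = [v for v in sorted(K) if not (G[v] & I)]
            # any maximal independent set of G[free] works; greedy suffices here
            for v in free:
                if not (G[v] & I):
                    I.add(v)
    return I  # maximal: every node is handled when its cluster is processed
```

Maximality follows exactly as in the text: when a node's cluster is processed, the node is either already dominated or considered by the per-cluster step, and clusters of the same color are non-adjacent, so they can be handled independently.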
\subsection{Comparison with Previous Work} We summarize the work on deterministic distributed low-diameter clusterings in the $\mathsf{CONGEST}\,$ model in \cref{table:papers}. \begin{table}[ht] \centering \begin{tabular}{||c | p{2.7cm} | c | p{2cm} | c||} \hline Paper & Fraction of clustered nodes & Diameter of clusters & Strong diameter? & Round complexity \\ [0.5ex] \hline\hline \cite{awerbuch89} & $2^{- \Omega(\sqrt{\log n \log \log n})}$& $2^{O(\sqrt{\log n \log \log n})}$ & \checkmark & $2^{O(\sqrt{\log n \log \log n})}$ \\ \hline \cite{ghaffari2019distributed_MIS_congest} & $2^{-\Omega(\sqrt{\log n})}$& $2^{O(\sqrt{\log n})}$ & \checkmark & $2^{O(\sqrt{\log n })}$ \\ \rowcolor{Gray} \hline \cite{rozhon_ghaffari2019decomposition} & $1/2$ & $O(\log^3 n)$ & $\times$ & $O(\log^7 n)$ \\ \hline \cite{ghaffari_grunau_rozhon2020improved_network_decomposition} & $1/2$ & $O(\log^2 n)$ & $\times$ & $O(\log^4 n)$ \\ \hline \cite{chang_ghaffari2021strong_diameter} & $1/2$ & $O(\log^2 n)$ & \checkmark & $O(\log^{10} n)$ \\ \hline \cite{chang_ghaffari2021strong_diameter} & $1/2$ & $O(\log^3 n)$ & \checkmark & $O(\log^7 n)$ \\ \rowcolor{Gray} \hline \cite{elkin_haeupler_rozhon_gruanu2022Clusterings_LSST} & $1/2$ & $O(\log^2 n)$ & \checkmark & $O(\log^4 n)$ \\ \hline \cite{ghaffari2022improved} & $\Omega(1 / \log\log n)$ & $O(\log n ) $ & \checkmark & $\log^2(n) \cdot \operatorname{poly}(\log\log n)$ \\ \hline \cite{ghaffari2022improved} & $1/2$ & $O(\log n \, \cdot \, \log\log\log n) $ & \checkmark & $\log^2(n) \cdot \operatorname{poly}(\log\log n)$ \\ \rowcolor{Gray} \hline this paper & $1/2$ & $O(\log^3 n) $ & \checkmark & $O(\log^6 n)$ \\ \hline \end{tabular} \caption{This table shows the previous work on distributed deterministic algorithms for low-diameter clusterings. We highlight the three results most relevant to this paper. 
} \label{table:papers} \end{table} There are three highlighted rows in the table: besides our result, we highlight the works of \cite{rozhon_ghaffari2019decomposition} and \cite{elkin_haeupler_rozhon_gruanu2022Clusterings_LSST}; the algorithm of this paper combines ideas from both of these papers. Let us now go through the rows of the table. The first two rows, together with the related results of \cite{panconesi-srinivasan,ghaffari_portmann2019improved,ghaffari_kuhn2018derandomizing_spanners_dominatingsets}, represent the results before the work of \cite{rozhon_ghaffari2019decomposition} and are not relevant to our paper. Next, there is the work of \cite{rozhon_ghaffari2019decomposition} and an improved variant of it by \cite{ghaffari_grunau_rozhon2020improved_network_decomposition}. These were the first efficient deterministic constructions of low-diameter clusterings; however, they suffer from only providing a weak-diameter guarantee. Next, the works of \cite{chang_ghaffari2021strong_diameter} and \cite{elkin_haeupler_rozhon_gruanu2022Clusterings_LSST} use the algorithm of \cite{ghaffari_grunau_rozhon2020improved_network_decomposition} as a black box and add further ideas on top of the weak-diameter algorithm to create strong-diameter clusterings. The row with \cite{elkin_haeupler_rozhon_gruanu2022Clusterings_LSST} is highlighted because our algorithm uses an idea similar to theirs. Finally, a very recent algorithm of \cite{ghaffari2022improved} manages to bring down the diameter of the clusters as well as the round complexity, using a very different technique from \cite{rozhon_ghaffari2019decomposition}. However, their algorithm is rather complicated. By far the simplest efficient algorithm among those in the table is the one from \cite{rozhon_ghaffari2019decomposition}. 
We show that with a small modification to their algorithm, in the spirit of the algorithm of \cite{elkin_haeupler_rozhon_gruanu2022Clusterings_LSST}, we can get a very simple algorithm computing strong-diameter clusters. Formally, we show the following result. \begin{theorem} \label{thm:main} There is a deterministic distributed algorithm that outputs a clustering $\mathcal{C}$ of the input graph $G$ consisting of separated clusters of diameter $O(\log^3 n)$ such that at least $n/2$ nodes are clustered. The algorithm runs in $O(\log^6 n)$ $\mathsf{CONGEST}\,$ rounds. \end{theorem} Recall that by repeatedly applying the above result we get the following corollary. \begin{corollary} \label{cor:main} There is a deterministic distributed algorithm that outputs a network decomposition with $C = O(\log n)$ colors and diameter $D = O(\log^3 n)$. The algorithm runs in $O(\log^7 n)$ $\mathsf{CONGEST}\,$ rounds. \end{corollary} \paragraph{Comparison of our algorithm with \cite{rozhon_ghaffari2019decomposition}} We now give a high-level explanation of the algorithm of \cite{rozhon_ghaffari2019decomposition} and afterwards compare it to our algorithm. In the algorithm of \cite{rozhon_ghaffari2019decomposition}, we start with the trivial clustering where every node forms its own cluster. Every cluster inherits the unique identifier of its starting node. During the algorithm, a cluster can grow or shrink, and some vertices are deleted from the graph and will not be part of the final output clustering. In the end, the nonempty clusters contain at least $n/2$ nodes in total and their weak-diameter is $O(\log^3 n)$. More concretely, the algorithm consists of $b = O(\log n)$ phases, where $b$ is the number of bits in the node identifiers. In phase $i$, we split the clusters into red and blue clusters based on the $i$-th bit of their identifier; the goal of the phase is to disconnect the red from the blue clusters by deleting at most $n/(2b)$ nodes of the graph. Here is how this is done. 
The $i$-th phase consists of $O(b \log n)$ steps. In general, red clusters can only grow and blue clusters can only shrink. More concretely, in each step every node in a blue cluster that neighbors a red cluster proposes to join an arbitrary neighboring red cluster. Now, for a given red cluster $C$, if the total number of proposing blue nodes is at least $|C|/(2b)$, then $C$ decides to grow by adding all the proposing blue nodes to the cluster. Otherwise, the proposing nodes are deleted, which results in $C$ not being adjacent to any blue nodes until the end of the phase. One can see that the number of deleted nodes per phase is at most $n/(2b)$ in total, as needed. On the other hand, each cluster can grow only $O(b \log n)$ times before it would have more than $n$ nodes, which implies that the weak-diameter of each cluster grows only by $O(b \log n) = O(\log^2 n)$ per phase. This concludes the description of the algorithm of \cite{rozhon_ghaffari2019decomposition}. Note that the clusters from their algorithm only have small weak-diameter, since the nodes in a cluster can leave it in the future and the cluster may then even disconnect. \textbf{Our strong-diameter algorithm: } To remedy the problem with the weak-diameter guarantee, we change the algorithm of \cite{rozhon_ghaffari2019decomposition} as follows: Instead of clusters, we will think in terms of their centers, which we call \emph{terminals}. Given a set of terminals $Q$ such that $Q$ is $R$-ruling, i.e., for every $u \in V(G)$ we have $d_G(Q, u) \le R$, we can always construct a clustering with strong-diameter $R$ by running a breadth-first search from $Q$. Hence, keeping a set of terminals is equivalent to keeping a set of strong-diameter clusters. Our algorithm starts with the trivial clustering where $Q = V(G)$. 
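The terminal-to-clustering correspondence just described can be sketched with a multi-source BFS. The following is an illustrative centralized version (names are ours, not the paper's): every node joins the cluster of a nearest terminal along a BFS tree edge, so each cluster is connected with strong radius at most $R$ whenever the terminal set is $R$-ruling.

```python
from collections import deque

def clusters_from_terminals(adj, terminals):
    """Multi-source BFS from `terminals`: every node joins the cluster
    of a nearest terminal (ties broken by BFS order)."""
    owner = {q: q for q in terminals}    # terminal owning each node
    parent = {q: None for q in terminals}
    queue = deque(terminals)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in owner:
                owner[v] = owner[u]      # inherit u's terminal
                parent[v] = u            # BFS tree edge inside the cluster
                queue.append(v)
    clusters = {}
    for v, q in owner.items():
        clusters.setdefault(q, set()).add(v)
    return clusters, parent
```

The `parent` map encodes exactly the rooted trees of radius at most $R$ that the algorithm maintains throughout a phase.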
During the algorithm, we keep a set of terminals $Q$ and in each of the $b$ phases we delete at most $n/(2b)$ nodes and turn some nodes of $Q$ into non-terminals such that the remaining terminals with their $i$-th bit equal to $0$ are in a different component than those with their $i$-th bit equal to $1$ (see \cref{fig:outer}). Moreover, we want that if at the beginning of the phase the set $Q$ is $R$-ruling, then it is $(R + O(b \log n))$-ruling at the end of the phase (cf. the $O(b \log n)$ increase in weak-diameter in the algorithm of \cite{rozhon_ghaffari2019decomposition}). At the beginning of each phase, we run a breadth-first search from the set $Q$, which gives us a clustering with strong diameter $R$ (see the left picture in \cref{fig:alg}). In fact, we think of each cluster as a rooted tree of radius $R$. We then implement the same growing process as \cite{rozhon_ghaffari2019decomposition}, but with a twist: whenever a blue node $v$ proposes to join a red cluster, the whole subtree rooted at $v$ proposes instead of just $v$ (see the middle picture in \cref{fig:alg}). This is because rehanging or deleting the whole subtree does not break the strong-diameter guarantee of the blue clusters. If a blue node joins a red cluster, it stops being a terminal. The only new argument needed is that the diameter of the red clusters does not grow by much, which is trivial in the algorithm of \cite{rozhon_ghaffari2019decomposition} and follows by a simple argument in our algorithm. We note that the algorithm of \cite{elkin_haeupler_rozhon_gruanu2022Clusterings_LSST} also keeps track of terminals. However, to separate the red and blue terminals in one phase, their algorithm relies on computing global aggregates, which can only be done efficiently on a low-diameter input graph. \section{Clustering Algorithm} In this section we prove \cref{thm:clustering_theorem} given below, which is a more precise version of \cref{thm:main}. 
\begin{theorem}[Clustering Theorem] \label{thm:clustering_theorem} Consider an arbitrary $n$-node network graph $G = (V,E)$ where each node has a unique $b = O(\log n)$-bit identifier. There is a deterministic distributed algorithm that, in $O(\log^6 n)$ rounds in the $\mathsf{CONGEST}\,$ model, finds a subset $V' \subseteq V$ of nodes, where $|V'| \geq |V|/2$, such that the subgraph $G[V']$ induced by the set $V'$ is partitioned into non-adjacent disjoint clusters of diameter $O(\log^3 n)$. \end{theorem} \begin{figure}[ht] \centering \includegraphics[width = .9\textwidth]{fig/outer.eps} \caption{The figure shows one phase of the algorithm from \cref{thm:clustering_theorem}. The left figure contains a $3$-ruling set of terminal nodes $Q_i$ that we start with at the beginning of phase $i$. We split $Q_i$ into red and blue terminals according to the $(i+1)$-th bit of their identifiers. Then, we implement one phase of the algorithm. As a result, some of the nodes are deleted (grey) and some blue terminals stop being terminals. The set of remaining terminals $Q_{i+1}$ is, on the one hand, $6$-ruling; on the other hand, the blue terminals in $Q_{i+1}$ are separated from the red terminals. } \label{fig:outer} \end{figure} \begin{figure}[ht] \centering \includegraphics[width = .9\textwidth]{fig/alg.eps} \caption{This figure explains one step of the algorithm of \cref{thm:clustering_theorem}, namely what happens between the middle and the right picture of \cref{fig:outer}. The left picture illustrates the beginning of the phase, where we compute a BFS forest $F_0$ from the set $Q$ of terminals. In the first (and any other) step (the middle picture), we construct a set $V_0^{propose}$. Some proposals are accepted and the respective blue nodes join red clusters, while other proposals are rejected and the respective blue nodes are deleted (the right picture). } \label{fig:alg} \end{figure} We start by describing the algorithm outline of \cref{thm:clustering_theorem}. 
The construction has $b = O(\log n)$ phases, corresponding to the number of bits in the identifiers. For $i \in [0,b-1]$, we denote by $V_i$ the set of living vertices at the beginning of phase $i$. Initially, all nodes are living and therefore $V_0 = V$. In each phase, at most $|V|/(2b)$ nodes die. Dead nodes remain dead and will not be contained in $V'$. Some of the living nodes are terminals. We denote the set of terminals at the beginning of phase $i$ by $Q_i$. Initially, all living nodes are terminals and therefore $Q_0 = V$. Slightly abusing notation, we let $V_b$ and $Q_b$ denote the set of living vertices and terminals at the end of phase $b-1$, respectively. We define $V'$ to be the final set of living nodes, i.e., $V' = V_b$, and each connected component of $G[V']$ will contain exactly one terminal in $Q_b$. To state the key invariants the algorithm satisfies, we need the following standard definition of a ruling set: \begin{definition}[Ruling set] We say that a subset $Q \subseteq V(G)$ is $R$-ruling in $G$ if every node $v \in V(G)$ satisfies $d_G(Q, v) \le R$. \end{definition} \paragraph{Construction invariants} The construction is such that, for each $i \in [0,b]$, the following three invariants are satisfied: \begin{enumerate}[label = \Roman*.] \item Ruling Invariant: $Q_i$ is $R_i$-ruling in $G[V_i]$ for $R_i = i \cdot O(\log^2 n)$. \item Separation Invariant: Let $q_1,q_2 \in Q_i$ be two nodes in the same connected component of $G[V_i]$. Then, the identifiers of $q_1$ and $q_2$ coincide in the first $i$ bits. \item Deletion Invariant: $|V_i| \geq \left(1 - \frac{i}{2b}\right)|V|$. \end{enumerate} Note that setting $V_0 = Q_0 = V$ indeed results in the invariants being satisfied for $i = 0$. In the end, we set $V' = V_b$. The deletion invariant for $i = b$ states that $|V'| \geq |V|/2$. The separation invariant implies that each connected component of $G[V']$ contains at most one node of $Q_b$. 
Together with the ruling invariant, which states that $Q_b$ is $R_b$-ruling in $G[V']$ for $R_b = O(\log^3 n)$, this implies that each connected component of $G[V']$ has diameter $O(\log^3 n)$. Next, in \cref{sec:outline_one_phase} we present the outline of one phase. Afterwards, in \cref{sec:analysis_phase} we prove the correctness of the algorithm and analyse the $\mathsf{CONGEST}\,$ complexity. \subsection{Outline of One Phase} \label{sec:outline_one_phase} In phase $i$, we compute a sequence of rooted forests $F_0, F_1, \ldots, F_t$ in $t = 2b^2 = O(\log^2 n)$ steps. At the beginning, $F_0$ is simply a BFS forest in $G[V_i]$ from the set $Q_i$. At the end, we set $V_{i+1} = V(F_t)$ and $Q_{i+1}$ is the set of roots of the forest $F_t$. Let $j \in \{0,1,\ldots,t-1\}$ be arbitrary. We now explain how $F_{j+1}$ is computed given $F_j$. In general, each node contained in $F_{j+1}$ is also contained in $F_j$, i.e., $V(F_{j+1}) \subseteq V(F_j)$, and each root of $F_{j+1}$ is also a root in $F_j$. We say that a tree in $F_j$ is a red tree if the $(i+1)$-th bit of the identifier of its root is $0$, and otherwise we refer to the tree as a blue tree. Also, we refer to a node in a red tree as a red node and to a node in a blue tree as a blue node. Each red node in $F_j$ will also be a red node in $F_{j+1}$. Moreover, the path to its root is the same in both $F_j$ and $F_{j+1}$. Each blue node in $F_j$ can either (1) be a blue node in $F_{j+1}$, in which case the path to its root is the same in both $F_j$ and $F_{j+1}$, (2) be deleted and therefore not be part of any tree in $F_{j+1}$, or (3) become a red node in $F_{j+1}$. Let $V_j^{propose}$ be the set of all nodes $v$ such that (1) $v$ is a blue node in $F_j$, and (2) $v$ is the only node on the path from $v$ to its root in $F_j$ that neighbors a red node (in the graph $G$). For a node $v \in V_j^{propose}$, let $T_v$ be the subtree rooted at $v$ with respect to $F_j$. 
Note that it directly follows from the way we defined $V_j^{propose}$ that $v$ is the only node in $T_v$ which is contained in $V_j^{propose}$. Each node in $V_j^{propose}$ proposes to an arbitrary neighboring red tree in $F_j$. Now, a given red tree $T$ in $F_j$ decides to grow if \[\sum_{\substack{v \in V^{propose}_j \colon \\ \text{$v$ proposes to $T$}}} |V(T_v)| \geq \frac{|V(T)|}{2b}.\] If $T$ decides to grow, then it accepts all the proposals it received, and otherwise $T$ declines all proposals it received. We now set \[V(F_{j+1}) = V(F_j) \setminus \left( \bigcup_{\substack{v \in V^{propose}_j, \\ \text{ the proposal of $v$ was declined}}} V(T_v) \right).\] Each node in $V(F_{j+1}) \setminus V^{propose}_j$ has the same parent in $F_{j+1}$ and $F_j$, or is a root in both $F_{j+1}$ and $F_j$. Each node in $V(F_{j+1}) \cap V^{propose}_j$, i.e., each node whose proposal got accepted by some red tree $T$ in $F_j$, changes its parent to be an arbitrary neighboring node in the tree $T$. Note that if a red tree $T$ decides to grow, then the corresponding tree in $F_{j+1}$ contains at least $\left(1 + \frac{1}{2b}\right)|V(T)|$ vertices. Moreover, if $T$ does not decide to grow, then $T$ is also a tree in $F_{j+1}$ and is not neighboring any blue tree in $F_{j+1}$. This follows from the fact that each blue node neighboring a red tree either becomes a red node or gets deleted. We have now fully specified how the rooted forests $F_0, F_1, \ldots, F_t$ are computed, and recall that in the end we set $V_{i+1} = V(F_t)$ and $Q_{i+1}$ is the set of roots of the forest $F_t$. \subsection{Analysis} \label{sec:analysis_phase} For each $j \in \{0,1,\ldots,t\}$ and $u \in V(F_j)$, we define $d_j(u)$ as the length of the path from $u$ to its root in $F_j$. Note that as $F_0$ is a BFS forest, for any neighboring nodes $w,v \in V(F_0)$ it holds that $d_0(w) \leq d_0(v) + 1$. 
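For concreteness, the propose/grow/delete step outlined above can be simulated centrally. The sketch below is an illustrative sequential version (not the $\mathsf{CONGEST}\,$ implementation): the forest $F_j$ is represented by `parent` and `root` maps, and the coloring is given by a predicate `is_red` on roots; all names are ours.

```python
def phase_step(adj, parent, root, is_red, b):
    """One propose/grow/delete step F_j -> F_{j+1} (centralized sketch).
    `parent`: node -> parent (None for roots); `root`: node -> its root.
    Returns updated (parent, root); declined subtrees are deleted."""
    alive = set(parent)
    red = {v for v in alive if is_red(root[v])}

    def near_red(v):
        return any(u in red for u in adj[v])

    # V_propose: blue nodes adjacent to a red node such that no other
    # node on their path to the root is adjacent to a red node.
    proposers = []
    for v in alive - red:
        if near_red(v):
            u = parent[v]
            while u is not None and not near_red(u):
                u = parent[u]
            if u is None:
                proposers.append(v)

    children = {v: [] for v in alive}
    for v in alive:
        if parent[v] is not None:
            children[parent[v]].append(v)

    def subtree(v):                      # nodes of T_v in F_j
        out, stack = set(), [v]
        while stack:
            x = stack.pop()
            out.add(x)
            stack.extend(children[x])
        return out

    tree_size = {}
    for v in alive:
        tree_size[root[v]] = tree_size.get(root[v], 0) + 1

    attach, proposals = {}, {}           # chosen red neighbor; red root -> proposers
    for v in proposers:
        w = next(u for u in adj[v] if u in red)
        attach[v] = w
        proposals.setdefault(root[w], []).append(v)

    for r, vs in proposals.items():
        if sum(len(subtree(v)) for v in vs) >= tree_size[r] / (2 * b):
            for v in vs:                 # grow: rehang whole subtrees
                parent[v] = attach[v]
                for x in subtree(v):
                    root[x] = r
        else:
            for v in vs:                 # decline: delete whole subtrees
                for x in subtree(v):
                    del parent[x]
                    del root[x]
    return parent, root
```

Because only whole subtrees are rehung or deleted, every surviving tree remains connected, matching the strong-diameter guarantee the analysis below establishes.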
\begin{claim}[Ruling Claim] \label{claim:distance_to_root} For every $j \in \{0,1,\ldots,t\}$, the following holds: \begin{enumerate}[leftmargin = 4cm] \item [Blue Property:\hspace{1em}] Every blue node $u$ in $F_j$ satisfies $d_j(u) = d_0(u)$. \item [Red Property:\hspace{1em}] Every red node $u$ in $F_j$ satisfies $d_j(u) \leq d_0(u) + 2j$. \end{enumerate} In particular, this implies that Invariant (I) is preserved. \end{claim} \begin{proof} The blue property directly follows from the fact that for any blue node the path to its root in $F_j$ is the same as the path to its root in $F_0$. We prove the red property by induction on $j$. The base case $j = 0$ trivially holds. For the induction step, consider an arbitrary $j \in \{0,1,\ldots,t-1\}$. We show that the statement holds for $j+1$ given that it holds for $j$. Consider an arbitrary red node $u$ in $F_{j+1}$. We have to show that $d_{j+1}(u) \leq d_0(u) + 2(j+1)$. If $u$ is also a red node in $F_j$, then we can directly use induction. Hence, it remains to consider the case that $u$ is a blue node in $F_j$. In that case, there exists a node $v \in V^{propose}_j$ such that $u \in V(T_v)$ and the proposal of $v$ was accepted. In particular, $v$'s parent in $F_{j+1}$ is some neighboring node $w$ which is part of some red tree in $F_j$ (see \cref{fig:rehanging}). The path from $u$ to its root $r$ in $F_{j+1}$ can be decomposed into a path from $u$ to $v$, an edge from $v$ to $w$, and a path from $w$ to its root $r$. The path from $u$ to $v$ in $F_{j+1}$ is the same as the path from $u$ to $v$ in $F_0$ and therefore has length $d_0(u) - d_0(v)$. The path from $w$ to $r$ in $F_{j+1}$ is the same as the path from $w$ to $r$ in $F_j$ and therefore has length $d_j(w)$ with $d_j(w) \leq d_0(w) + 2j$ according to the induction hypothesis. Moreover, we noted above that because $w$ and $v$ are neighbors, we have $d_0(w) \leq d_0(v) + 1$. 
Hence, we can upper bound the length of the path from $u$ to its root in $F_{j+1}$ by \[d_{j+1}(u) \leq \left( d_0(u) - d_0(v)\right) + 1 + \left( d_0(w) + 2j\right) \leq d_0(u) + 2(j+1),\] which finishes the induction proof. It remains to prove the last part of the claim. To that end, assume that the ruling invariant is satisfied for $i$, i.e., $Q_i$ is $R_i$-ruling in $G[V_i]$ for $R_i = i \cdot O(\log^2 n)$. Then, every node $u$ in $V(F_t) = V_{i+1}$ satisfies \[d_{G[V_{i+1}]}(Q_{i+1},u) \leq d_t(u) \leq d_0(u) + 2t \leq i \cdot O(\log^2 n) + O(\log^2 n) = (i+1) \cdot O(\log^2 n)\] and therefore the ruling invariant is satisfied for $i+1$. \end{proof} \begin{figure} \centering \includegraphics{fig/rehanging.eps} \caption{The figure shows the situation in the proof of \cref{claim:distance_to_root}. The path from $u$ to $r$ splits into three parts: from $u$ to $v$, then to $w$, then to $r$. The length of each part is upper bounded separately. } \label{fig:rehanging} \end{figure} \begin{claim}[Separation Claim] \label{claim:tree_size} No red node in $F_t$ is neighboring a blue node in $F_t$. In particular, this implies that Invariant (II) is preserved. \end{claim} \begin{proof} We observed during the algorithm description that each red tree that decides to grow grows by at least a $(1 + \frac{1}{2b})$-factor in a given step. Our choice of $t = 2b^2$ implies that \[\left(1 + \frac{1}{2b}\right)^{t} = \left( \left(1 + \frac{1}{2b}\right)^{2b} \right)^{(t/2b)} > 2^{t/2b} = 2^b \geq n,\] and therefore each tree eventually stops growing. However, once a tree decides not to grow, it is not neighboring any blue node and therefore no red node in $F_t$ is neighboring a blue node in $F_t$. In particular, this implies that each connected component of $G[V_{i+1}] = G[V(F_t)]$ either entirely consists of blue nodes in $F_t$ or entirely consists of red nodes in $F_t$. 
As the $(i+1)$-th bit of the identifier of each red root in $F_t$ is $0$ and the $(i+1)$-th bit of the identifier of each blue root in $F_t$ is $1$, we get that each connected component of $G[V_{i+1}]$ either contains no node in $Q_{i+1}$ with the $(i+1)$-th bit of the identifier being $0$ or no node in $Q_{i+1}$ with the $(i+1)$-th bit of the identifier being $1$, which implies that the separation invariant is preserved. \end{proof} \begin{claim}[Deletion Claim] \label{claim:deletion} It holds that $|V_{i+1}| = |V(F_t)| \geq \left(1 - \frac{1}{2b}\right)|V(F_0)| \geq |V_i| - \frac{|V|}{2b}$. In particular, this implies that Invariant (III) is preserved. \end{claim} \begin{proof} A node $u$ got deleted in step $j$, i.e., $u \in V(F_j) \setminus V(F_{j+1})$, because of some tree $T$ in $F_j$ which decided not to grow, as \[\sum_{v \in V_j^{propose} \colon \text{$v$ proposes to $T$}} |V(T_v)| < \frac{|V(T)|}{2b}.\] We blame this tree $T$ for deleting $u$. Note that $T$ only receives blame in step $j$ and at most $\frac{|V(T)|}{2b}$ deleted nodes blame $T$. During the algorithm description, we observed that $T$ is not neighboring any blue node in $F_{j+1}$ and therefore $T$ is also a tree in $F_t$. Hence, each deleted node in $V(F_0) \setminus V(F_t)$ can blame one tree $T$ in $F_t$ for being deleted in such a way that each such tree gets blamed by at most $\frac{1}{2b}|V(T)|$ nodes, which directly proves the claim. \end{proof} \begin{proof}[Proof of \cref{thm:clustering_theorem}] The algorithm has $O(\log n)$ phases, with each phase consisting of $O(\log^2 n)$ steps. It directly follows from the ruling claim that each step can be executed in $O(\log^3 n)$ $\mathsf{CONGEST}\,$ rounds. Hence, we can compute $V'$ in $O(\log^6 n)$ $\mathsf{CONGEST}\,$ rounds, which together with the previous discussion finishes the proof of \cref{thm:clustering_theorem}. \end{proof} \section{Acknowledgments} We want to thank Mohsen Ghaffari for many valuable suggestions. 
\bibliographystyle{alpha} \section{The Algorithm} Our algorithm is based on the following idea. Instead of talking about clusters, we will just talk about their centers that we call terminals. At the beginning, every node is a terminal. Then in each of the following $b = O(\log n)$ phases we delete at most $n/(2b)$ nodes of $G$ such that in the new graph, terminals with their $i$-th bit equal to $0$ cannot be in the same connected component with terminals with their $i$-th bit equal to $1$. This means that after $b$ phases every terminal is in a separate connected component and this component is the final cluster. To make this approach work, we need to control how close the nodes of $G$ are to terminals. This is controlled by the following well-known definition of a ruling set. \begin{definition}[Ruling set] We say that a subset $Q \subseteq V(G)$ is $R$-ruling if every node $v \in V(G)$ satisfies $d(Q, v) \le R$. \end{definition} Our main result is the following lemma that says that we can carry out our plan of deleting $n/(2b)$ nodes of the input graph so as to separate terminals of different $i$-th bit. Moreover, if the set of terminals was $R$-ruling, it is only $R + O(\log^2 n)$ ruling afterwards. \begin{lemma} \label{lemma:main} Assume that each node of the input graph is equipped with a unique $b$-bit identifier for some $b = O(\log n)$. Suppose we are given a set of red and blue \emph{terminals} $Q = Q^\mathcal{R} \sqcup Q^\mathcal{B} \subseteq V(G)$ such that $Q^\mathcal{R} \sqcup Q^\mathcal{B}$ is $R$-ruling for some $R = O(b^3)$. 
Then, there is a deterministic $\mathsf{CONGEST}\,$ algorithm running in $O(\log^5(n))$ rounds and which outputs some subset $Q'^\mathcal{B} \subseteq Q^{\mathcal{B}}$ of the blue terminals and a set $V' \subseteq V(G)$ which satisfies $Q^\mathcal{R} \cup Q'^\mathcal{B} \subseteq V'$ such that the following three properties are satisfied: \begin{enumerate} \item Ruling Property: $Q^\mathcal{R} \cup Q'^\mathcal{B}$ is $(R+4b^2)$-ruling in $G[V']$. \item Separation Property: Each connected component in $G[V']$ contains either no node from $Q^{\mathcal{R}}$ or no node from $Q'^\mathcal{B}$. \item Deletion Property: It holds that $|V(G) \setminus V'| \le n/(2b)$. \end{enumerate} \end{lemma} Before we prove \cref{lemma:main}, we first show that we can apply the algorithm of \cref{lemma:main} $O(\log n)$ times to compute a clustering which clusters at least half of the vertices into non-adjacent clusters of diameter $O(\log^3 n)$, thus proving MAIN CLUSTERING RESULT. \begin{proof}[Proof of MAIN CLUSTERING RESULT] The algorithm computes two sequences $V_0 := V(G) \supseteq V_1 \supseteq V_2 \supseteq \ldots \supseteq V_b$ and $Q_0 := V(G) \supseteq Q_1 \supseteq \ldots \supseteq Q_b$. In general, $Q_i$ is $(i4b^2)$-ruling in $G[V_i]$ for $i \in \{0,1,\ldots,b\}$. In the end, each connected component of $G[V_b]$ will form one cluster of the output clustering. Moreover, each such connected component will contain exactly one node in $Q_b$. As $Q_b$ is $4b^3$-ruling in $G[V_b]$, this implies that each cluster has diameter $O(\log^3 n)$. For $i \in \{0,1,\ldots,b-1\}$, given $(V_i,Q_i)$, we compute $(V_{i+1},Q_{i+1})$ as follows: first, let $Q_i^\mathcal{R}$ contain all nodes in $Q_i$ with the $(i+1)$-th bit of the identifier being equal to $0$ and $Q_i^\mathcal{B}$ contain all nodes in $Q_i$ with the $(i+1)$-th bit of the identifier being equal to $1$. Note that $Q_i = Q_i^\mathcal{R} \sqcup Q_i^\mathcal{B}$. 
We invoke \cref{lemma:main} with $Q^\mathcal{R}_{\Lref{lemma:main}} = Q^\mathcal{R}_i$, $Q^\mathcal{B}_{\Lref{lemma:main}} = Q^\mathcal{B}_i$ and $R_{\Lref{lemma:main}} = i(4b^2)$ on input graph $G_{\Lref{lemma:main}} = G[V_i]$. As an output, we obtain sets $Q'^\mathcal{B}$ and $V'$. We then define $Q_{i+1} = Q^\mathcal{R}_i \cup Q'^\mathcal{B}$ and $V_{i+1} = V'$. The guarantees of \cref{lemma:main} then imply that: \begin{enumerate} \item Ruling Property: $Q_{i+1}$ is $((i+1)4b^2)$-ruling in $G[V_{i+1}]$. \item Separation Property: Each connected component in $G[V_{i+1}]$ contains either no node from $Q_{i+1}$ with the $(i+1)$-th bit of the identifier being $0$ or no node from $Q_{i+1}$ with the $(i+1)$-th bit of the identifier being $1$. \item Deletion Property: $|V_{i+1}| \geq |V_i| - \frac{n}{2b}$. \end{enumerate} The deletion property together with the fact that $|V_0| = n$ directly implies that $|V_b| \geq \frac{n}{2}$ and therefore at least half of the nodes are clustered. The separation property directly implies that in the end each connected component of $G[V_b]$ contains at most one node of $Q_b$. We already discussed above why this fact together with the ruling property implies that each cluster has diameter $O(\log^3 n)$. To compute the output clustering, we invoke the algorithm of \cref{lemma:main} $b = O(\log n)$ times. Hence, the algorithm runs in $O(\log n) \cdot O(\log^5 n) = O(\log^6 n)$ rounds, as needed. \end{proof} \paragraph{Algorithm Description} \todo{How to structure it} Let $t = 2b^2$. The algorithm works in $t$ steps and computes a sequence of rooted forests $F_0, F_1, \ldots, F_t$. The rooted forest $F_0$ is simply a BFS forest rooted at the set of terminals $Q$. Next, consider an arbitrary $i \in \{0,1,\ldots,t-1\}$. We now explain how $F_{i+1}$ is computed given $F_i$. In general, each node contained in $F_{i+1}$ is also contained in $F_i$, i.e., $V(F_{i+1}) \subseteq V(F_i)$, and each root of $F_{i+1}$ is also a root in $F_i$. 
In particular, each root in $F_i$ is contained in $Q$. We say that a tree in $F_i$ is a red tree if its root is in $Q^\mathcal{R}$ and otherwise, if its root is in $Q^\mathcal{B}$, we refer to the tree as a blue tree. Also, we refer to a node in a red tree as a red node and a node in a blue tree as a blue node. Each red node in $F_i$ will also be a red node in $F_{i+1}$. Moreover, the path to its root is the same in both $F_i$ and $F_{i+1}$. Each blue node in $F_i$ can (1)\todo{Is this the correct way how to enumerate stuff} either be a blue node in $F_{i+1}$, in which case the path to its root is the same in both $F_i$ and $F_{i+1}$, (2) be deleted and therefore not be part of any tree in $F_{i+1}$, (3) become a red node in $F_{i+1}$. Let $V_i^{propose}$ be the set which contains each node $v$ which (1) is a blue node in $F_i$, and (2) $v$ is the only node neighboring a red node (in the graph $G$) in the path from $v$ to its root in $F_i$. For a node $v \in V_i^{propose}$, let $T_v$ be the subtree rooted at $v$ with respect to $F_i$. Note that it directly follows from the way we defined $V_i^{propose}$ that $v$ is the only node in $T_v$ which is contained in $V_i^{propose}$. Each node in $V_i^{propose}$ proposes to an arbitrary neighboring red tree in $F_i$. Now, a given red tree $T$ in $F_i$ decides to grow if \[\sum_{v \in V^{propose}_i \colon \text{$v$ proposes to $T$}} |V(T_v)| \geq \frac{|V(T)|}{2b}.\] If $T$ decides to grow, then it accepts all the proposals it received, and otherwise $T$ declines all proposals it received. We now set \[V(F_{i+1}) = V(F_i) \setminus \left( \bigcup_{v \in V^{propose}_i, \text{the proposal of $v$ was declined}} V(T_v) \right).\] \todo{define Vdeleted here?} Each node in $V(F_{i+1}) \setminus V^{propose}_i$ has the same parent in $F_{i+1}$ and $F_i$, or is a root in both $F_{i+1}$ and $F_i$. 
Each node in $V(F_{i+1}) \cap V^{propose}_i$, i.e., each node whose proposal got accepted by some red tree $T$ in $F_i$, changes its parent to be an arbitrary neighboring node in the tree $T$. \todo{Where do you wanna define $V^{deleted}_i$} Note that if a red tree $T$ decides to grow, then the corresponding tree in $F_{i+1}$ contains at least $\left(1 + \frac{1}{2b}\right)|V(T)|$ vertices. Moreover, if $T$ does not decide to grow, then $T$ is also a tree in $F_{i+1}$ and is not neighboring with any blue tree in $F_{i+1}$. This follows from the fact that each blue node neighboring a red tree either becomes a red node or gets deleted. We now have fully specified how the rooted forests $F_0, F_1, \ldots, F_t$ are computed. In the end, we set $V' = V(F_t)$ and we obtain the output set $Q'^\mathcal{B}$ by only keeping those nodes in $Q^\mathcal{B}$ that are the root of a blue tree in $F_t$. Note that $Q^\mathcal{R} \cup Q'^\mathcal{B} \subseteq V'$, which is one of the conditions of \cref{lemma:main}. \todo{Maybe some connective text} \paragraph{Analysis} For each $i \in \{0,1,\ldots,t\}$ and $u \in V(F_i)$, we define $d_i(u)$ as the length of the path from $u$ to its root in $F_i$. \todo{Add intuition to claims?} \begin{claim}[Ruling Claim] \label{claim:distance_to_root} For every $i \in \{0,1,\ldots,t\}$, the following holds: \begin{enumerate} \item If $u$ is in a blue tree in $F_i$, then $d_i(u) = d(Q,u)$. \item If $u$ is in a red tree in $F_i$, then $d_i(u) \leq d(Q,u) + 2i$. \end{enumerate} \end{claim} In particular, $Q^\mathcal{R} \cup Q'^\mathcal{B}$ is $(R+4b^2)$-ruling in $G[V']$. \begin{proof} We prove the statement by induction on $i$. The base case $i = 0$ trivially follows from the fact that $F_0$ is a BFS tree rooted at $Q$. For the induction step, consider an arbitrary $i \in \{0,1,\ldots,t-1\}$. We show that the statement holds for $i+1$ given that it holds for $i$. Consider an arbitrary $u \in V(F_{i+1})$. 
We have to show that $d_{i+1}(u) = d(Q,u)$ in case $u$ is a blue node in $F_{i+1}$ and $d_{i+1}(u) \leq d(Q,u) + 2(i+1)$ in case $u$ is a red node in $F_{i+1}$. The only case where this does not directly follow from the induction hypothesis is when $u$ is a blue node in $F_i$ and a red node in $F_{i+1}$. In that case, there exists a node $v \in V^{propose}_i$ such that $u \in T_v$ and the proposal of $v$ was accepted. In particular, $v$'s parent in $F_{i+1}$ is some neighboring node $w$ which is part of some red tree in $F_i$. As $w$ is $v$'s parent in $F_{i+1}$, we have $d_{i+1}(v) = 1 + d_{i+1}(w)$, and as $w$ is in a red tree in $F_i$, we have $d_{i+1}(w) = d_i(w)$. Moreover, the induction hypothesis gives $d_i(w) \leq d(Q,w) + 2i$, and as $v$ is a neighbor of $w$, it trivially holds that $d(Q,w) \leq d(Q,v) + 1$. Combining the four (in)equalities gives \[d_{i+1}(v) = 1 + d_{i+1}(w) = 1 + d_i(w) \leq 1 + d(Q,w) + 2i \leq d(Q,v) + 2(i+1).\] The path between $u$ and $v$ in $F_i$ is the same as the path between $u$ and $v$ in $F_{i+1}$. Hence, $d_{i+1}(u) - d_{i+1}(v) = d_i(u) - d_i(v)$ and therefore \[d_{i+1}(u) = d_{i+1}(u) - d_{i+1}(v) + d_{i+1}(v) \leq d_i(u) - d_i(v) + d(Q,v) + 2(i+1) \leq d_i(u) + 2(i+1) = d(Q,u) + 2(i+1),\] where the last equality follows from the induction hypothesis. This finishes the induction. In particular, $d_t(u) \leq d(Q,u) + 2t \leq R + 4b^2$ for every node $u \in V(F_t) = V'$, where we used that $Q$ is $R$-ruling in $G$. As the set of roots of $F_t$ is equal to $Q^\mathcal{R} \sqcup Q'^\mathcal{B}$, it follows that $Q^\mathcal{R} \cup Q'^\mathcal{B}$ is $(R+4b^2)$-ruling in $G[V']$. \end{proof} \begin{claim}[Separation Claim] \label{claim:tree_size} No red node in $F_t$ neighbors a blue node in $F_t$. In particular, each connected component in $G[V']$ contains either no node from $Q^\mathcal{R}$ or no node from $Q'^\mathcal{B}$.
\end{claim} \begin{proof} We observed during the algorithm description that each red tree that decides to grow grows by at least a $\left(1 + \frac{1}{2b}\right)$-factor in a given step. As \[\left(1 + \frac{1}{2b}\right)^{t} = \left( \left(1 + \frac{1}{2b}\right)^{2b} \right)^{t/(2b)} > 2^{t/(2b)} = 2^b \geq n,\] each tree eventually has to stop growing. However, once a tree decides not to grow, it no longer neighbors any blue node, and therefore no red node in $F_t$ neighbors a blue node in $F_t$. \end{proof} \begin{claim}[Deletion Claim] \label{claim:deletion} It holds that $|V(G) \setminus V(F_t)| = |V(G) \setminus V'| \leq \frac{n}{2b}$. \end{claim} \begin{proof} A node $u$ got deleted in step $i$, i.e., $u \in V(F_i) \setminus V(F_{i+1})$, because of some tree $T$ in $F_i$ which decided not to grow, as \[\sum_{v \in V_i^{propose} \colon \text{$v$ proposes to $T$}} |V(T_v)| < \frac{|V(T)|}{2b}.\] We blame this tree $T$ for deleting $u$. Note that $T$ only receives blame in step $i$ and at most $\frac{|V(T)|}{2b}$ deleted nodes blame $T$. During the algorithm description, we observed that $T$ does not neighbor any blue node in $F_{i+1}$ and therefore $T$ is also a tree in $F_t$. Hence, each deleted node can blame one tree $T$ in $F_t$ for its deletion in such a way that each such tree gets blamed by at most $\frac{1}{2b}|V(T)|$ nodes, which directly proves the claim. \end{proof} \begin{proof}[Proof of \cref{lemma:main}] \cref{claim:distance_to_root}, \cref{claim:tree_size} and \cref{claim:deletion}, together with the fact that $Q^\mathcal{R} \cup Q'^\mathcal{B} \subseteq V'$, imply that $(V',Q'^\mathcal{B})$ is a valid output. Moreover, each of the $O(\log^2 n)$ steps can be implemented in $O(\log^3 n)$ $\mathsf{CONGEST}\,$ rounds. This follows because the diameter of each tree in $F_i$ is $O(\log^3 n)$ according to \cref{claim:distance_to_root}.
Hence, $V'$ and $Q'^\mathcal{B}$ can indeed be computed in $O(\log^5 n)$ $\mathsf{CONGEST}$ rounds. \end{proof}
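The arithmetic behind the Separation and Deletion Claims is easy to sanity-check numerically. The following Python sketch (an illustration only, not part of the proof; the parameter choices $t = 2b^2$ and $n \leq 2^b$ are taken from the claims above) verifies the inequality $\left(1+\frac{1}{2b}\right)^{2b} > 2$ and that after $t$ growth steps a red tree would exceed $n$ vertices:

```python
# Sanity check for the growth arithmetic in the Separation Claim.
# A red tree that keeps growing gains a factor (1 + 1/(2b)) per step;
# after t = 2b^2 steps this exceeds 2^b >= n, so growth must stop.

def growth_factor(b: int, steps: int) -> float:
    """Total multiplicative growth after `steps` growth steps."""
    return (1 + 1 / (2 * b)) ** steps

for b in range(1, 16):
    t = 2 * b * b      # number of steps used by the algorithm
    n = 2 ** b         # the claims assume n <= 2^b
    assert (1 + 1 / (2 * b)) ** (2 * b) > 2
    assert growth_factor(b, t) > n
```

Since $\left(\left(1+\frac{1}{2b}\right)^{2b}\right)^{b} > 2^b$, the second assertion holds for every $b \geq 1$, not just the values checked here.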
\section{Introduction} We consider classes of linear multioperator algebras defined by operations and identities among them, that is, varieties of multioperator algebras. Given such a variety, a sequence of vector spaces is associated to it, called a {\it cocharacter sequence} or an {\it algebraic operad} associated to the variety. The sequence of dimensions of these vector spaces (the {\it codimension sequence}) and its (exponential) generating function (the {\em codimension series} of the variety, or the {\em generating series} of the operad) are among the most important invariants of an operad or a variety. In particular, the asymptotic growth of the cocharacter sequence is a measure of the growth of the operad. Both the generating series and the asymptotics of the coefficients have been studied for a number of varieties and operads; see~\cite{giza} and~\cite{zin} and references therein. Here we discuss the algorithmic approach to determining the asymptotic growth of operads. First, we should mention the recent progress in determining such growth in the case of varieties of algebras with one binary operation~\cite{giza}, where the theory of representations of symmetric groups is extremely useful. In particular, it is shown in~\cite{berele} that the codimension series of each variety of associative algebras is a holonomic function. In contrast, here we concentrate on the case of varieties of multi-operational algebras. In this case, operadic methods are extremely useful. These operadic methods lead to algorithms based on various symbolic computation concepts: the theory of Groebner bases in operads, combinatorial problems close to the enumeration of labelled trees avoiding certain patterns, and formal power series solutions of algebraic and algebraic differential equations together with the problem of determining the asymptotics of the coefficients of such solutions.
We try to explain here a chain of algorithms which leads from a list of operations and identities to either the asymptotic growth of the corresponding variety or bounds for the asymptotics. We also discuss theoretical obstructions preventing such algorithms from being applicable in the most general case. We briefly discuss a recent implementation of some algorithms from the chain (namely, algorithms for the calculation of Groebner bases in operads). More precisely, we first show that, in general, there does not exist an algorithm which always determines the growth asymptotics of the codimensions of a variety defined by a finite collection of operations and identities. Next, we give a lower bound for the generating series of the codimension sequence in cases when the number of defining identities is bounded in each arity. This lower bound is a formal power series solution of an algebraic equation whose coefficients encode the dimensions of the spaces of generators and relations of the operad. In particular, this gives explicit lower bounds for the codimensions. The bound becomes an equality (that is, it gives a formula for the generating function of the operad) if the operad satisfies a simple homological condition, namely, that it has right homological dimension at most two. Then, we give upper bounds for the generating functions of the codimension sequences of arbitrary operads. This upper bound is equal to the generating series of the monomial operad corresponding to any partial Groebner basis of a given operad. Under some mild conditions on the defining monomial relations of the monomial operad, this upper bound is the formal power series solution of a differential algebraic equation or a purely algebraic equation. So, this bound becomes an equality if the operad under consideration has a finite Groebner basis. Note that there are operads for which both of our bounds become equalities.
Consider, for example, the variety of alia algebras introduced by Dzhumadildaev, that is, the variety of non-associative algebras satisfying the identity $$ \{[x_1, x_2], x_3\} + \{[x_2, x_3], x_1\} + \{[x_3, x_1], x_2\} = 0, $$ where $[a,b] = ab-ba$ and $\{a,b\} = ab+ba$; these algebras are also referred to as 1-alia algebras~\cite{dzh}. It is shown in~\cite[Example~3.5.1]{kp} that the corresponding operad $\alia$ has a quadratic Groebner basis (in particular, it is Koszul). Its generating series $y=\alia(z)$ is equal to the generating series of the corresponding quadratic monomial operad. Moreover, the relations of the monomial operad are symmetric regular (see Section~\ref{sec:we}), so that $y$ satisfies the algebraic equation $y-y^2+y^3/6=z$. On the other hand, it follows that the generating series of the quadratic dual operad is $\alia^!(z) = z+z^2 +z^3/6$. It follows that the operad $\alia$ has homological dimension two, so that our lower bound becomes an equality too. Indeed, Corollary~\ref{cor:GS_quadr} implies that the lower bound $y=y(z)$ satisfies the same algebraic equation. The plan of the paper is as follows. In Section~\ref{sec1} we briefly recall what varieties of algebras and algebraic operads are. In particular, we give a brief `phrase-book' explaining operads in terms of varieties and vice versa. In Section~\ref{sec:undec}, we explain a theoretical obstruction to determining the growth of an operad. First, we show that for sufficiently large $n$ the set of generating functions of quadratic operads generated by $n$ binary operations (= the set of codimension series of varieties of algebras with $n$ binary operations) is infinite. This solves a conjecture by Bremner and Dotsenko negatively~\cite[Conjecture 10.4.1.1]{db}.
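As an aside, the power series solution of an equation like $y-y^2+y^3/6=z$ above can be computed term by term with exact arithmetic. The following Python sketch is an illustration only: it finds the compositional inverse of $f(t)=t-t^2+t^3/6$ by fixed-point iteration and checks the self-consistency of the equation; the computed coefficients are derived from the equation alone, not taken from the literature.

```python
from fractions import Fraction

N = 8  # truncation order; all series are coefficient lists a[0..N]

def mul(a, b):
    """Truncated product of two power series given as coefficient lists."""
    c = [Fraction(0)] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j <= N:
                    c[i + j] += ai * bj
    return c

# Solve y = z + y^2 - y^3/6 by fixed-point iteration; each pass fixes
# at least one more coefficient, so N passes suffice up to order N.
y = [Fraction(0)] * (N + 1)
y[1] = Fraction(1)                      # start from y = z
for _ in range(N):
    y2, y3 = mul(y, y), mul(mul(y, y), y)
    new = [Fraction(0)] * (N + 1)
    new[1] = Fraction(1)                # the `z` term
    for k in range(N + 1):
        new[k] += y2[k] - y3[k] / 6
    y = new

# Check f(y(z)) = z with f(t) = t - t^2 + t^3/6, i.e. y is f^{[-1]}.
y2, y3 = mul(y, y), mul(mul(y, y), y)
f_of_y = [y[k] - y2[k] + y3[k] / 6 for k in range(N + 1)]
assert f_of_y == [Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 1)
print([str(c) for c in y[:5]])  # ['0', '1', '1', '11/6', '25/6']
```

According to this expansion, the first exponential coefficients of $y$ are $1, 1, 11/6, 25/6, \dots$, consistent with the equation above.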
Moreover, we show that there does not exist an algorithm which, given a list of generators and relations of a non-symmetric operad known to have exponential growth, always decides whether the exponent of the growth is equal to a given rational number. This means that there is no algorithm to decide, given a set of generators and identities of a variety of multioperator algebras whose codimensions $c_n$ grow approximately as $ n! c^n $ for some $c\ge 1$, whether $c$ is equal to a given rational number. So, the asymptotic growth of a variety defined by a finite number of identities is not algorithmically recognizable in general. In Section~\ref{sec:est} we give estimates for the dimensions of the components of operads provided that the number of defining identities in each degree is bounded. In this case, the generating series of an operad is bounded below by an algebraic function depending only on the degrees and arities of the generators and identities. In particular, we use an operadic version of the Golod--Shafarevich theorem~\cite{kurosh} to establish lower and upper asymptotic bounds of type $c^n \frac{(2n)!}{n!} $ for quadratic operads generated by at least two non-symmetric operations with a bounded number of defining relations in each degree. In Section~\ref{sec:groeb} we briefly recall the foundations of the theory of Groebner bases for operads~\cite{dk}. Then we discuss operads with finite Groebner bases. The generating function of such an operad is equal to the generating function of the corresponding {\em monomial} operad. So, in the next Section~\ref{sec:we} we discuss finitely presented monomial operads. We recall that under some mild symmetry conditions the generating function of such an operad satisfies an algebraic differential equation. We also briefly describe an algorithm to generate such an equation based on the results of~\cite{kp}.
We focus on additional conditions that imply that the obtained equation is in fact algebraic or even rational. In the last two cases the asymptotics of the coefficients of the generating function can be recovered by standard computer algebra tools. \section{Operads and varieties} \label{sec1} For details on operads we refer the reader to the monographs~\cite{oper} and~\cite{Loday_Valet}; see also the textbook~\cite{db}. \subsection{A definition of an operad} We consider multioperator linear algebras over a field $\KK$ of zero characteristic. Let $W$ be a variety of $\KK$--linear algebras (without constants, with identity and without other unary operations) of some signature $\Omega$. We assume $\Omega$ is a finite union of finite sets $\Omega = \Omega_2 \cup \dots \cup \Omega_k$, where the elements $\omega$ of $\Omega_t$ act on each algebra $A \in W$ as $t$-linear operations $\omega:A^{\otimes t} \to A$. Recall that a variety is defined by two sets, the signature $\Omega$ and a set of defining identities $R$. By the linearization process, one can assume that $R$ consists of multilinear identities. Consider the free algebra $F^W (x)$ on a countable set of indeterminates $x= \{ x_1, x_2, \dots \}$. Let $ {\mathcal P} _n \subset F^W(x)$ be the subspace consisting of all multilinear generalized homogeneous polynomials in the variables $\{ x_1, \dots, x_n \}$, that is, $ {\mathcal P} _n$ is the component $F^W (x) [1, \dots, 1, 0, 0,\dots]$ with respect to the $\ensuremath{\mathbb Z}^\infty$-grading by the degrees in the $x_i$. \begin{defi} Given such a variety $W$, the sequence $ {\mathcal P} _W = {\mathcal P} := \{ {\mathcal P} _1, {\mathcal P} _2, \dots \}$ of vector subspaces of $F^W (x)$ is called an {\it operad}\footnote{More precisely, a symmetric connected $\KK$--linear operad with identity.}.
\end{defi} The $n$-th component $ {\mathcal P} _n$ may be identified with the set of all derived $n$-linear operations on the algebras of $W$; in particular, $ {\mathcal P} _n$ carries a natural structure of a representation of the symmetric group $S_n$. Such a sequence $Q = \{ Q(n) \}_{n \in \ensuremath{\mathbb Z}}$ of representations $Q(n)$ of the symmetric groups $S_n$ is called an {$\ensuremath{\mathbb S}$--module}, so that an operad carries the structure of an $\ensuremath{\mathbb S}$-module with $ {\mathcal P} _n = {\mathcal P} (n)$. Also, the compositions of operations (that is, the substitution of an argument $x_i$ by the result of another operation, with a subsequent monotone re-numbering of the inputs to avoid repetitions) give natural maps of $S_*$-modules $\circ_i : {\mathcal P} (n)\otimes {\mathcal P} (m) \to {\mathcal P} (n+m-1)$. Note that the axiomatization of these operations gives an abstract definition of operads; see~\cite{oper} for the discussion. Note that the signature $\Omega$ can be considered as a sequence of subsets of $ {\mathcal P} $ with $\Omega_n \subset {\mathcal P} _n$. Then $\Omega$ generates the operad $ {\mathcal P} $ with respect to the $\ensuremath{\mathbb S}$--module structure and the compositions $\circ_i$, so that it is called a {\em set of generators} of the operad. More generally, the $\ensuremath{\mathbb S}$-module $X$ generated by $\Omega$ is called the (minimal) {\em module of generators} of the operad $ {\mathcal P} $. It can also be defined independently of $\Omega$ as $X = {\mathcal P} _+/( {\mathcal P} _+\circ {\mathcal P} _+)$, where $ {\mathcal P} _+ = {\mathcal P} _2\cup {\mathcal P} _3 \cup \dots$ and $\circ$ denotes the span of all compositions of two $\ensuremath{\mathbb S}$-modules.
Then one can define a variety $W$ corresponding to a (formal) operad $ {\mathcal P} $ by picking a set $\Omega$ of generators of $X$ as the signature and taking all relations in $ {\mathcal P} $ as the defining identities of the variety, so that the variety $W$ can be recovered from $ {\mathcal P} $ ``up to a change of variables''. Moreover, one can consider the algebras of $W$ as vector spaces $V$ with actions $ {\mathcal P} (n): V^{\otimes n} \to V$ compatible with the compositions and the $\ensuremath{\mathbb S}$-module structures, so that the algebras of $W$ are recovered from $ {\mathcal P} $ up to isomorphism. Given an $\ensuremath{\mathbb S}$-module $X$, one can also define a {\em free operad} $\f(X)$ generated by $X$ as the span of all possible compositions of a basis of $X$ modulo the action of the symmetric groups. For example, the free operad $\f(\ensuremath{\mathbb S}\Omega)$ on the free $\ensuremath{\mathbb S}$-module $\ensuremath{\mathbb S}\Omega$ corresponds to the variety of all algebras of signature $\Omega$. One can define a simpler notion of a {\em non-symmetric operad} as a union $P = P_1\cup P_2\cup \dots$ with the compositions $\circ_i$ as above but without actions of the symmetric groups. To distinguish them, we refer to the operads defined above as {\em symmetric}. Each symmetric operad can be considered as a non-symmetric one. Moreover, to each non-symmetric operad $P$ one can assign a symmetric operad $ {\mathcal P} $ where $ {\mathcal P} _n = S_n P_n$ is the free $S_n$-module generated by $P_n$. Then $ {\mathcal P} $ is called the {\em symmetrization} of $P$. In particular, here $\mbox{dim\,} {\mathcal P} _n = n! \mbox{dim\,} P_n$. The $n$-th codimension of a variety $W$ is just the dimension of the respective operad component: $c_n(W) = \mbox{dim\,}_k {\mathcal P} _n$ for $ {\mathcal P} = {\mathcal P} _W$.
We consider both exponential and ordinary generating series for this sequence: \begin{equation} \label{eq::E::gen::ser} E_{ {\mathcal P} } (z) := \sum_{n \ge 1} \frac{\mbox{dim\,} {\mathcal P} (n)}{n!} z^n , \qquad G_{ {\mathcal P} } (z) := \sum_{n \ge 1} {\mbox{dim\,} {\mathcal P} (n)} z^n . \end{equation} For example, if $ {\mathcal P} $ is the symmetrization of a non-symmetric operad $P$, then $E_{ {\mathcal P} } (z) = G_P (z)$. By the {\em generating series} of a symmetric operad $ {\mathcal P} $ we mean the exponential generating function $ {\mathcal P} (z) = E_{ {\mathcal P} } (z)$. In contrast, for a non-symmetric operad $P$ its generating series is defined as the ordinary generating function $P(z) = G_{P} (z)$. In the case of varieties, both the ordinary and exponential versions of the codimension series are studied. If the set $\Omega $ is finite, then the series $ {\mathcal P} (z)$ defines an analytic function in a neighborhood of zero. For example, the non-symmetric operad $\mathrm{Ass}$ of associative algebras is the operad defined by one binary operation $m$ (multiplication) subject to the relation $m(m(x_1,x_2),x_3) = m(x_1,m(x_2,x_3))$, which is the associativity identity. Its $n$-th component consists of the single equivalence class of all arity-$n$ compositions of $m$ with itself modulo the relation, so that $ \mathrm{Ass}(z) = G_{\mathrm{Ass}} (z) = \frac{z}{1-z}. $ Its symmetrization is the symmetric operad $\: {\mathcal A}ss \:$ generated by two operations $m(x_1,x_2)$ and $m'(x_1,x_2)=m(x_2,x_1)$ with the $S_2$ action $(12) m' = m$, subject to all relations of the form $m(m(x_i,x_j),x_k) = m(x_i,m(x_j,x_k))$. By the above, we have $E_{\: {\mathcal A}ss \:}(z) = \: {\mathcal A}ss \: (z) = \mathrm{Ass}(z)= G_{\mathrm{Ass}} (z)$, so that $\mbox{dim\,} \: {\mathcal A}ss \:_n = n!$. For the reader's convenience, let us give an approximate translation table in a phrase-book style for the two languages of linear universal algebra.
Here the objects on the same line are in correspondence up to a choice of signature, while the last two rows represent equalities. \centerline{ \begin{tabular}{rcl} variety & --- & the category of all algebras \\ & & $\qquad$ over an operad \\ subvariety & --- & quotient operad \\ signature & --- & set of generators \\ identities & --- & relations \\ free algebra & --- & free operad \\ free algebra of a variety & --- & operad \\ $n$-th codimension & = & dimension of the $n$-th component \smallskip \\ (ordinary or exponential) & = & (ordinary or exponential) \\ codimension series $\phantom{ab}$ & & $\phantom{abc}$ generating series \end{tabular} } \section{General algorithmic undecidability} \label{sec:undec} In the next theorem, we assume that the field $\KK$ is computable. \begin{theorem} \label{th:undec} Consider the set of non-symmetric quadratic operads $P$ defined by a fixed finite set $x$ of generators and some set $r$ of quadratic relations on $x$. Let ${\mathcal H}(x)$ be the set of ordinary generating functions of all such operads with various $r$. Then there is a natural number $n$ such that if $|x| \ge n$ then (i) the set ${\mathcal H}(x)$ is infinite; (ii) for some $x$ and some rational function $Q(z)$ with integral coefficients, there does not exist an algorithm which takes as input a list $r$ such that there is a coefficient-wise inequality $P(z)\le Q(z)$ and returns {\em TRUE} if the equality $P(z)= Q(z)$ holds and {\em FALSE} if not; (iii) for some $x$ and some rational number $q$, there does not exist an algorithm which takes as input a list $r$ such that $c(P) \le q$ for $c(P)= \limsup_{n\to\infty} \sqrt[n]{\mbox{dim\,} P_n} $ and returns {\em TRUE} if $c(P) = q$ and {\em FALSE} if $c(P) < q$.
\end{theorem} Note that Bremner and Dotsenko conjectured that `for given arities of generators $a_1,\dots , a_d$ and weights of relations $w_1, \dots,w_r$, the set of possible Hilbert series of operads with these types of generators and relations is always finite'~(see \cite{db}, Conjecture 10.4.1.1). So part (i) of Theorem~\ref{th:undec} solves this conjecture negatively even in the case $a_1 = \dots =a_d=2$ and $w_1= \dots =w_r =3$. Using the family of algebras from~\cite[Example~3.1]{iyudu2017automaton} in place of the algebra $A$ in the proof below, we see that one can take $d=r=3$. \begin{proof} It is pointed out by Dotsenko~\cite{dots2016} that each graded connected associative algebra $A = A_0\oplus A_1\oplus\dots$ (where $A_0 = k$) can be considered as a non-symmetric operad $P = P(A)$ by putting $P_k = A_{k-1}$ for all $k\ge 1$, $A_m\circ_i A_n = 0$ for $i\ge 2$ and $a \circ_1 b = ab $ for homogeneous $a,b \in A$. Moreover, the operad $P$ is quadratic if $A$ is quadratic. In this case the relations of $P$ can easily be recovered from the relations of $A$. Obviously, the ordinary generating function of $P$ is $P(z) = zA(z)$, where $A(z) = \sum_{k\ge 0} \mbox{dim\,} A_k \, z^k$ is the Hilbert series of the algebra $A$. Moreover, in this case \begin{displaymath} c(P)= \limsup_{n\to\infty} \sqrt[n]{\mbox{dim\,} P_n} = \limsup_{n\to\infty} \sqrt[n]{\mbox{dim\,} A_{n-1}} = \lim_{n\to\infty} \sqrt[n]{\mbox{dim\,} A_{n}} \end{displaymath} is the exponent of growth (also known as the entropy) $h(A)$ of the algebra $A$. Parts (i) and~(ii) follow from the corresponding theorems on the Hilbert series of quadratic algebras in~\cite{an2}. The theorem on the exponent of growth of quadratic algebras analogous to~(iii) has been proved in~\cite{gr} using Anick's construction. So, part~(iii) follows as well. \end{proof} In fact, one can show even more in part~(iii).
It follows from the results of~\cite{an2} and~\cite{gr} that there is a quadratic polynomial $f(z) = 1-gz+rz^2$ with two rational roots $p^{-1} $ and $q^{-1} $ (where $q$ is the number from part~(iii) and $|p| < q$) such that either $\mbox{dim\,} P_n < \gamma c_2^n$ for all $n$, for some $\gamma>0$ and $0<c_2<q$, or $A(z) = 1/f(z)$. In the latter case, the sequence $\{\mbox{dim\,} P_n\}$ satisfies a linear recurrence of order 2, so that $\mbox{dim\,} P_n = \alpha q^n +\beta p^n \cong \alpha q^n$ for some $\alpha, \beta >0$. For the symmetrization $ {\mathcal P} $ of $P$ with $\mbox{dim\,} {\mathcal P} _n = n! \mbox{dim\,} P_n$, we have $\mbox{dim\,} {\mathcal P} _n < \gamma c_2^n n!$ in the first case and $\mbox{dim\,} {\mathcal P} _n \cong \alpha q^n n!$ in the second case. By part~(iii), there is no algorithm to separate these two cases. This means that there is no algorithm to recognize the asymptotic growth for either symmetric or non-symmetric operads. \section{Estimates for the growth of operads} \label{sec:est} In this section, we discuss an approach to lower bounds for generating series of symmetric operads based on the results of~\cite{kurosh}. Note that Dotsenko has obtained the same results and even more in the case of monomial shuffle operads~\cite[Section~3]{dots2012}. The next theorem is an operadic version of the famous Golod--Shafarevich theorem, which gives a criterion for an associative algebra to be infinite-dimensional~\cite{gs}. The proof of its first statement is sketched in~\cite[Theorem~4.1]{kurosh}. \begin{theorem} \label{th:GS_oper} Let $ {\mathcal P} $ be a symmetric operad minimally generated by an $\ensuremath{\mathbb S}$-module $X \subset {\mathcal P} $ with a minimal $\ensuremath{\mathbb S}$-module of relations $R \subset \f(X)$. We assume here that both these $\ensuremath{\mathbb S}$-modules are locally finite, that is, all their graded components are of finite dimension.
Suppose that the formal power series $t/f(t)$ has non-negative coefficients, where $f(t) = t - X(t) +R(t)$. Then the operad $ {\mathcal P} $ is infinite and there is a coefficient-wise inequality of formal power series $$ {\mathcal P} (z) \ge f^{[-1]} (z), $$ where $f^{[-1]} (z)$ is the compositional inverse of the power series $f(t)$. \end{theorem} \begin{proof} We work in the category of right graded modules over the operad $ {\mathcal P} $. Consider the trivial bimodule $I = {\mathcal P} / {\mathcal P} _+$ (where $ {\mathcal P} _+ = {\mathcal P} (2) \oplus {\mathcal P} (3) \oplus \dots$ is the maximal ideal of $ {\mathcal P} $, as before). For the generators of the beginning of its minimal free resolution, we have $\mbox{Tor\,}_0^ {\mathcal P} (I,I)\cong I$, $\mbox{Tor\,}_1^ {\mathcal P} (I,I)\cong X$ and $\mbox{Tor\,}_2^ {\mathcal P} (I,I)\cong R$, see~\cite[Sec.~3]{kurosh}. This means that the beginning of the resolution looks like \begin{equation} \label{resol_k} 0\to \Omega^3 \to R\circ {\mathcal P} \stackrel{d_2}{\to} X\circ {\mathcal P} \to {\mathcal P} \to I \to 0, \end{equation} where $\Omega^3$ is the kernel of $d_2$. Obviously, the formal power series $\Omega^3(z)$ has nonnegative coefficients. Taking the Euler characteristics of the exact sequence~(\ref{resol_k}), we get an equality of formal power series $$ \Omega^3(z) = (R\circ {\mathcal P} )(z) - (X\circ {\mathcal P} )(z) + {\mathcal P} (z) -I(z) \ge 0. $$ Since $I(z)=z$, we obtain a coefficient-wise inequality $$ R( {\mathcal P} (z)) - X( {\mathcal P} (z)) + {\mathcal P} (z) - z \ge 0, $$ or $$ f( {\mathcal P} (z)) = z+ \Omega^3(z) \ge z. $$ Let $y = z+ \Omega^3(z)$. By the Lagrange inversion formula, we have $$ {\mathcal P} (z) = f^{[-1]}(y) = \sum_{n\ge 0} \frac{\pi_n}{n!}y^n, $$ where $\pi_n$ is the value of the derivative $ \frac{d^{n-1}}{d t^{n-1}} \left( \frac{t}{f(t)} \right)^{n} $ at $t=0$.
Since the formal power series $t/f(t)$ has nonnegative coefficients, it follows that the series $\left( \frac{t}{f(t)} \right)^{n}$ and all its derivatives have this property as well. It follows that $\pi_n \ge 0$ for all $n\ge 0$. Moreover, if $a_k t^k$ is any positive summand in the decomposition of $t/{f(t)} $, then for every $n\ge 0$ we have $ \pi_{kn+1} \ge (kn)! \left( \begin{array}{c} kn+1 \\ n \end{array} \right) a_k > 0, $ so that the series $f^{[-1]}(y)$ is infinite with nonnegative coefficients. It follows that $ {\mathcal P} (z) = f^{[-1]}(z +\Omega^3(z)) \ge f^{[-1]} (z) $. In particular, the series $ {\mathcal P} (z)$ is infinite. \end{proof} \begin{cor} \label{cor:GS_binary} Let $ {\mathcal P} $ be a symmetric operad and let $X$ and $R$ be as above. Suppose that $ {\mathcal P} $ is generated by binary operations (that is, $X=X(2)$). Suppose that the function $$ \phi(z) = 1 - \frac{X(z)}{z} + \frac{R(z)}{z} $$ is analytic in a neighbourhood of zero (this is always the case if $X$ is finitely generated) and has a positive real root $z_0$ in this neighbourhood. Then the dimensions of the components of the operad satisfy the inequalities $$ \frac{(2n)!}{(n-1)!} z_0^{-n} \le \mbox{dim\,} {\mathcal P} _{n+1} \le \frac{(2n)!}{n!} (\mbox{dim\,} X/2)^n. $$ \end{cor} \begin{proof} In the above notation, we have $$ {\mathcal P} (z) = f^{[-1]}(y) = \sum_{n\ge 0} \frac{\pi_n}{n!}y^n \ge \sum_{n\ge 0} \frac{\pi_n}{n!}z^n , $$ where $\pi_n$ is the value of the derivative $ \frac{d^{n-1}}{d t^{n-1}} \left( \phi(z)^{-n} \right) $ at $z=0$. This means that $\pi_n /(n-1)!$ is the coefficient of $z^{n-1}$ in the formal power series $ \phi(z)^{-n}$. Put $a = \mbox{dim\,} X /2$. The series $\phi(z)$ has the form $1-az+ z^2r(z)$, where $a >0$ and the series $r(z)$ has nonnegative coefficients.
Then it follows from~\cite{me} that there is a coefficient-wise inequality $$\phi(z)^{-1} \ge \sum_{k\ge 0} z_0^{-k} z^k = (1-z/z_0)^{-1}.$$ Using the binomial theorem, we get $$ \phi(z)^{-n} \ge (1-z/z_0)^{-n} = \sum_{k\ge 0} \frac{(n+k-1)!}{k!(n-1)!} z_0^{-k} z^k .$$ So, we get the first inequality: $$ \mbox{dim\,} {\mathcal P} _{n+1} \ge \pi_{n+1} \ge n! \left( \frac{(2n)!}{n!(n-1)!} z_0^{-n}\right) = \frac{(2n)!}{(n-1)!} z_0^{-n}. $$ On the other hand, for the free operad $F$ generated by $X$ we have $R=0$ and $\Omega^3 =0$, so that $$ F (z) = (t-X(t))^{[-1]} = (t-at^2 )^{[-1]} = \frac{1-\sqrt{1-4a z}}{2a} $$ $$ = \sum_{n\ge 0} z^{n+1} C_n a^n, $$ where $C_n = \frac{(2n)!}{n!(n+1)!}$ is the $n$-th Catalan number. The coefficient-wise inequality $F(z) \ge {\mathcal P} (z)$ gives the second inequality $\mbox{dim\,} {\mathcal P} _{n+1} \le \frac{(2n)!}{n!} (\mbox{dim\,} X/2)^n$. \end{proof} Consider, in particular, the case of a quadratic operad, that is, an operad having generators of arity 2 and relations of arity 3 only. \begin{cor} \label{cor:GS_quadr} Let $ {\mathcal P} $ be a quadratic symmetric operad, that is, $X=X(2)$ and $R=R(3)$, and let $c= \mbox{dim\,} X$ and $d = \mbox{dim\,} R$. If $d\le 3 c^2/8$, then $$ {\mathcal P} (z) \ge (z-\frac{c}{2}z^2+\frac{d}{6}z^3)^{[-1]} . $$ In particular, $\frac{(2n)!}{(n-1)!} z_1^{-n} \le \mbox{dim\,} {\mathcal P} _{n+1} \le \frac{(2n)!}{n!} (c/2)^n$, where \linebreak $z_1 = 3\frac{c/2-\sqrt{c^2/4 - 2d/3}}{d}$. \end{cor} \begin{rema} Note that the inequality of Corollary~\ref{cor:GS_binary} can be re-written in the form $c_1^n n^n \le \mbox{dim\,} {\mathcal P} _n \le c_2^n n^n$, where the constants $c_1$ and $c_2$ are constructively recovered from the generators and relations of the operad $ {\mathcal P} $.
In general, this result cannot be effectively re-written in the form~$\mbox{dim\,} {\mathcal P} _n \sim c^n n^n$ because, even if such a $c$ exists, it cannot be algorithmically recognized according to~Theorem~\ref{th:undec}. Note that for infinitely presented operads such an asymptotic does not exist in general, even for operads generated by a single binary operation, that is, for a variety of non-associative algebras~\cite{zaitsev_no}. \end{rema} \section{Monomial bases and Groebner bases in operads} \label{sec:groeb} Groebner bases in operads were introduced in~\cite{dk}. We also refer the reader to~\cite{Loday_Valet} and~\cite{db} for the details. Here we briefly recall some basics. Fix a discrete set $\Omega$ of generators of a (symmetric or non-symmetric) free operad. A {\em non-symmetric monomial} is a multiple composition of operations from $\Omega$. A {\em symmetric monomial} is a non-symmetric monomial applied to pairwise different variables $x_{i_1}, x_{i_2}, \dots$ as a composite operation. Each monomial is represented by a rooted planar tree with internal vertices labelled by operations. We assume that the edges of the tree lead from the root to the leaves, which are free edges. In the case of a symmetric monomial, the leaves are also labelled by the variables $x_{i_1}, x_{i_2}, \dots$ All non-symmetric monomials (including the empty monomial corresponding to the identity operation) form a linear basis of the free non-symmetric operad generated by $\Omega$. In contrast, the symmetric monomials generate the free operad $\f = \f(\ensuremath{\mathbb S}\Omega)$ as a linearly dependent set. To describe a linear basis of $\f$, let us mark each edge in the tree by the minimal label of a leaf which can be reached from that edge. A symmetric monomial is called a {\em shuffle monomial} if for every internal vertex and for the root, the minimal edge leading from it is the leftmost one.
Then the shuffle monomials labelled by the sets $\{ x_1, \dots , x_n \}$ in some ordering (where for each monomial $n$ is equal to its arity) form a linear basis of $\f$. Two non-symmetric monomials are called isomorphic if they are isomorphic as labelled trees. Two symmetric monomials are isomorphic if the underlying non-symmetric monomials are isomorphic and the lists of labels of their leaves, collected from left to right, are isomorphic as ordered sets. A (non-)symmetric monomial $P$ is {\em divisible} by a (non-)symmetric monomial $Q$ if $Q$ is isomorphic to a submonomial of $P$, where `submonomial' means a labelled subtree with the labels on its leaves induced by the labels on the free edges. For example, see Figure~\ref{pic::shuffle_monom} (copied from~\cite{kp}) to verify that the shuffle monomial $g(f(f(x_1,x_3)$, $g(x_2,f(x_4,x_9)$, $g(x_5,x_6,x_{11}))),x_7,f(x_8,x_{10}))$ is divisible by the shuffle monomial $ f(x_1,g(x_2,x_3,x_4))$. \begin{figure*} \includegraphics{shuffle_monom1.eps} \caption{Divisibility of shuffle monomials} \label{pic::shuffle_monom} \end{figure*} A {\em shuffle composition} is a composition of two shuffle monomials (as trees) whose leaves are labelled in such a way that the composition is again a shuffle monomial and the composed monomials are isomorphic to the corresponding submonomials of the result. The symmetric operads considered as sequences of vector spaces generated by shuffle monomials, coupled with the set of all shuffle compositions, are called {\em shuffle operads}. In the theory of Groebner bases, the (abstract) shuffle operads are considered in place of the symmetric operads, so that the actions of the symmetric groups are not used here. There are families of orderings on the sets of non-symmetric and shuffle monomials which are compatible with the corresponding compositions. This defines the notion of the leading term of an element of the free operad and leads to a rich theory of Groebner bases.
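The divisibility notion just defined is easy to prototype. Below is a minimal Python model (our own toy encoding: non-symmetric monomials only, represented as nested tuples, with leaf labels and the shuffle conditions ignored):

```python
# Toy model of non-symmetric tree monomials: a leaf (free edge) is None,
# an internal vertex is a tuple (operation, child_1, ..., child_k).

def matches(pattern, tree):
    """Does `pattern` occur at the root of `tree`?  A pattern leaf
    matches any subtree, mirroring the definition of a submonomial."""
    if pattern is None:
        return True
    if tree is None or pattern[0] != tree[0] or len(pattern) != len(tree):
        return False
    return all(matches(p, t) for p, t in zip(pattern[1:], tree[1:]))

def divisible(tree, pattern):
    """Is `tree` divisible by `pattern`, i.e. does `pattern` occur
    as a submonomial at some vertex of `tree`?"""
    if tree is None:
        return pattern is None
    return matches(pattern, tree) or any(divisible(c, pattern) for c in tree[1:])

p = ('m', ('m', None, None), None)                     # m(m(x1,x2),x3)
t_left = ('m', ('m', ('m', None, None), None), None)   # left comb, arity 4
t_right = ('m', None, ('m', None, ('m', None, None)))  # right comb, arity 4
assert divisible(t_left, p) and not divisible(t_right, p)
```

For instance, the left comb $m(m(m(x_1,x_2),x_3),x_4)$ is divisible by $m(m(x_1,x_2),x_3)$, while the right comb $m(x_1,m(x_2,m(x_3,x_4)))$ is not.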
The theory includes a version of the Buchberger algorithm~\cite{dk} and even the triangle lemma~\cite{db}. We say that an operad $ {\mathcal P} $ has a finite Groebner basis (of relations) if the ideal of its relations admits a finite Groebner basis as an ideal of a free operad. Whereas a general operad does not have a finite Groebner basis, a number of important operads (including the classical operads of commutative, associative and Lie algebras) admit such bases. The only known implementation of Groebner bases algorithms for operads is the Haskell package {\sf Operads}~\cite{dv}. Its slightly improved version, with some bugs fixed by Andrey Lando, can be downloaded at https://github.com/Dronte/Operads . Experiments with operads of non-associative algebras (that is, operads generated in arity two by a two-dimensional subspace) provided by Lando show that the recent version of the package allows one to calculate (on a standard laptop) the Groebner basis of an ideal generated by identities of degree~3 up to degree~6. This degree is substantially smaller than the degrees reached in analogous calculations for associative algebras performed, e.~g., by BERGMAN. One could hope that new algorithms (including a possible F4 algorithm for operads, which could generalize the analogous algorithm for Groebner--Shirshov bases in associative algebras~\cite{nc_f4}) and new implementation principles will substantially improve the performance of computer algebra software for such calculations. \section{Growth and generating series for operads with finite Groebner bases} \label{sec:we} The generating series of an operad with a known Groebner basis is equal to the generating series of the corresponding {\em monomial} operad, that is, of the shuffle operad or non-symmetric operad whose relations are the leading monomials of the corresponding Groebner basis. 
The dimension of the $n$-th component of a monomial operad is equal to the number of monomials of arity $n$ which are not divisible by the monomial relations of the operad. In this section, we consider monomial operads only. Suppose that we know a (finite) subset $\widetilde G$ of the Groebner basis $G$ of an operad $ {\mathcal P} $. Then we get a coefficient-wise inequality $$\widetilde {\mathcal P} (z) \ge {\mathcal P} (z) $$ for the monomial operad $\widetilde {\mathcal P} $ whose relations are the leading monomials of the elements of $\widetilde G$, since $\widetilde {\mathcal P} $ is defined by fewer forbidden patterns. This upper bound for $ {\mathcal P} (z)$ complements the inequality of Theorem~\ref{th:GS_oper}. That is why the generating series of monomial operads defined by finite sets of monomial relations are of interest here. For such an operad, the calculation of the dimensions of its components is a purely combinatorial problem of enumerating the labelled trees which do not contain a subtree isomorphic to a relation as a submonomial (a pattern avoidance problem for labelled trees), see~\cite{dk-pattern}. Unfortunately, this problem is currently too hard to treat in its full generality. In this section we discuss some partial methods based on the results of~\cite{kp}. First, let us discuss the simpler case of non-symmetric operads. \begin{theorem}[\cite{kp}, Th.~{2.3.1}] \label{th-nonsym-intro} The ordinary generating series of a non-symmetric operad with a finite Gr\"obner basis is an algebraic function. \end{theorem} One of the methods for finding the algebraic equation for the generating series of a non-symmetric operad $P$ defined by a finite number of monomial relations $R$ is the following. We consider the monomials (called stamps) which are nonzero in $P$ and whose level is less than the maximal level of an element of $R$. For each stamp $m=m_i$, we consider the generating function $y_i(z)$ of the set of all nonzero monomials which are left divisible by $m_i$ and are not left divisible by $m_t$ with $t<i$. 
Then the sum of all $y_i(z)$ is equal to $P(z)$. The divisibility relations on the set of all stamps lead to a system of $N$ equations of the form $$ y_i = f_i(z, y_1, \dots, y_N) $$ for each $y_i = y_i(z)$, where $f_i$ is a polynomial and $N$ is the number of all stamps. Note that the degree $d_i$ of the polynomial $f_i$ does not exceed the maximal arity of the generators of the operad $P$. Then the elimination of variables leads to an algebraic equation of degree at most $d = d_1^2 \dots d_N^2$ for $P(z)$. A couple of similar algorithms which in some cases reduce either the number or the degrees of the equations are also discussed in~\cite{kp}. Knowing an algebraic equation for $P(z)$, one can evaluate the asymptotics of the coefficients $\mbox{dim\,} P_n$ by well-known methods~\cite[Theorem~D]{flj-slg-fn}. Let us now consider the case of shuffle operads. A set $M$ of shuffle monomials is called {\em shuffle regular} if for each $m\in M$ the set $M$ also contains all shuffle monomials which are obtained from $m$ by permutations of the labels on the leaves. Moreover, $M$ is called {\em symmetric regular} if for each other planar representation of the labelled tree $T$ defined by $m$, all shuffle monomials that are obtained from $T$ by permutations of the leaf labels also belong to $M$. \begin{theorem}[\cite{kp}, Cor.~{0.1.4} and Th.~3.3.2] \label{th:main_sym_intro} Let $ {\mathcal P} $ be a symmetric operad with a finite Gr\"obner basis, and let $M$ be the set of leading terms of the elements of the Groebner basis. (a) If the set $M$ is shuffle regular, then $ {\mathcal P} (z)$ is differentially algebraic, that is, it satisfies a non-trivial algebraic differential equation with polynomial coefficients. (b) If the set $M$ is symmetric regular, then $ {\mathcal P} (z)$ is algebraic. \end{theorem} The differential equation in part~(a) is obtained in a similar way to the algebraic equation in Theorem~\ref{th-nonsym-intro}. 
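As a toy illustration of the stamp method for non-symmetric operads described above (the example is ours, not from~\cite{kp}): take a single binary operation $\mu$ with the single monomial relation $\mu(\mu(\textrm{-},\textrm{-}),\textrm{-})$. A monomial is nonzero exactly when the left argument of every internal vertex is a leaf, the stamps are $\mbox{Id}$ and $\mu$, and the stamp system can be solved directly (a sketch assuming the sympy library):

```python
import sympy as sp

z, y1 = sp.symbols('z y1')

# Stamp equation for the stamp mu: the left argument must be a leaf
# (otherwise the monomial is divisible by mu(mu(-,-),-)), and the right
# argument is any nonzero monomial, so  y1 = z * (z + y1).
y1_sol = sp.solve(sp.Eq(y1, z * (z + y1)), y1)[0]   # z**2 / (1 - z)
P = sp.cancel(z + y1_sol)                           # P(z) = y0 + y1 = z / (1 - z)
print(sp.series(P, z, 0, 5))                        # z + z**2 + z**3 + z**4 + O(z**5)
```

Here $P(z)=z/(1-z)$ is rational, in agreement with the fact that this toy operad has exactly one nonzero monomial in every arity (the right comb).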
Similar arguments lead to a system of equations of the form $$ y_i = C_i(z, y_1, \dots, y_N), $$ where $C_i$ is a linear combination of multiple compositions of the operation $C(f,g)(z) := \int_0^z f'(w) g(w) \, dw$, where $f$ and $g$ are formal power series. The terms in the equations encode the overlappings of stamps, as is clear from the example considered below. Repeated differentiation then leads to a system of algebraic ordinary differential equations on $y_1, \dots, y_N$. The elimination of variables gives a single differential equation on $ {\mathcal P} (z)$. Note that in many examples the function $ {\mathcal P} (z)$ turns out to be much simpler than one might expect. Consider an example~\cite[Example 3.5.3]{kp}. Let $N$ be an operad with one binary operation (multiplication) subject to the identity $$ [x_1,x_2][x_3,x_4] = 0, $$ where $[a,b] = ab-ba$ (this identity holds in the ring of upper-triangular matrices of order two over a non-associative commutative ring). The corresponding shuffle operad ${\mathcal {N}}$ is generated by two binary generators, namely, the multiplication $\mu$ and $\alpha: (x_1,x_2) \mapsto [x_1,x_2]$. Then the above identity is equivalent to the pair of shuffle regular monomial identities $$ f_1 = \mu(\alpha(\textrm{-},\textrm{-}), \alpha(\textrm{-},\textrm{-}) )=0 \quad \text{ and } \quad f_2 = \alpha(\alpha(\textrm{-},\textrm{-}), \alpha(\textrm{-},\textrm{-}) )=0. 
$$ Therefore, the ideal of relations of the shuffle operad ${\mathcal {N}}$ is generated by the following six shuffle monomials obtained from $f_1$ and $f_2$ by substituting all shuffle compositions of four variables (which we denote for simplicity by 1,2,3,4): $$ \begin{array}{ll} m_1 = \mu(\alpha(1,2), \alpha(3,4) ), & m_2 = \mu(\alpha(1,3), \alpha(2,4) ), \\ m_3 = \mu(\alpha(1,4), \alpha(2,3) ), & m_4 = \alpha(\alpha(1,2), \alpha(3,4) ), \\ m_5 = \alpha(\alpha(1,3), \alpha(2,4) ), & m_6 = \alpha(\alpha(1,4), \alpha(2,3) ).\\ \end{array} $$ The operad ${\mathcal {N}}$ is monomial, so that the monomials $m_1, \dots, m_6$ form a Groebner basis for it. Let us describe the set $B$ of all stamps of all nonzero monomials in ${\mathcal {N}}$. Since the relations have their leaves at level 2, $B$ consists of all monomials of level at most one, that is, of the monomials $$ B_0 = \mbox{Id\,}, B_1 = \mu(\textrm{-},\textrm{-}), B_2 =\alpha(\textrm{-},\textrm{-}). $$ For the corresponding generating series $y_i = y_i(z)$ with $i=0,1,2$ we have $$ \left\{ \begin{array}{lll} y_0 & = &z, \\ y_1 &= & C(y_0,y_0) + C(y_1,z)+C(z,y_1) +C(y_2,z) \\ & & +C(z,y_2) + C(y_1,y_1)+ C(y_1,y_2)+ C(y_2,y_1), \\ y_2 & = &C(y_0,y_0) + C(y_1,z)+C(z,y_1) +C(y_2,z) \\ & & +C(z,y_2) + C(y_1,y_1)+ C(y_1,y_2)+ C(y_2,y_1). \end{array} \right. $$ Here each term corresponds to a composition of stamps; e.g., the term $C(y_1,y_2)$ in the second line encodes the fact that the stamp of the composition $\mu(B_1,B_2)$ is $B_1$, etc. We see that $y_1(z) = y_2(z)$ and ${\mathcal {N}}(z) = y(z) = y_0(z)+y_1(z)+y_2(z) = z+2y_1(z)$. The second equation of the above system gives, after differentiation, a differential equation on $y_1$ which is equivalent to the equation $$ (y'(z)-1)(2-z-3y(z)) = 4y(z) $$ on $y(z)$ with the initial condition $y(0) = 0$. 
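This differential equation can be checked coefficient by coefficient with a computer algebra system. The sketch below (assuming the sympy library) solves it order by order from the initial condition $y(0)=0$; at each step the relevant coefficient equation is linear in the newest unknown:

```python
import sympy as sp

z = sp.symbols('z')
coeffs, y = [], sp.Integer(0)
for k in range(1, 8):
    ak = sp.Symbol('a%d' % k)
    y_trial = y + ak * z**k
    # residual of the ODE (y' - 1)(2 - z - 3y) - 4y = 0
    F = sp.expand((sp.diff(y_trial, z) - 1) * (2 - z - 3 * y_trial) - 4 * y_trial)
    ak_val = sp.solve(F.coeff(z, k - 1), ak)[0]   # z^(k-1) coefficient is linear in ak
    coeffs.append(ak_val)
    y += ak_val * z**k
print(coeffs)   # [1, 1, 2, 19/4, 25/2, 281/8, 413/4]
```

The resulting coefficients agree with the series expansion of ${\mathcal N}(z)$ given below.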
Surprisingly, the solution of this non-linear differential equation is an algebraic function: $$ \begin{array}{l} {\mathcal {N}}(z) = y(z) = \frac{1}{3}\left( 2-z-2 \sqrt {1-4\,z+{z}^{2}} \right) \\ = z+{z}^{2}+2\,{z}^{3}+{\frac {19}{4}}{z}^{4}+{\frac {25}{2}}{z}^{5}+{ \frac {281}{8}}{z}^{6}+{\frac {413}{4}}{z}^{7}+ o(z^{7}). \end{array} $$ On the other hand, one can see that the set $\{f_1,f_2\}$ is symmetric regular. This leads to additional symmetries in the above integral equations. Using the identities $y_1=y_2$ and $y_0=z$ and applying the formula $C(f,g)+C(g,f) = fg$, we get the functional equation $$ 2y_1 = z^2 + 4zy_1+3y_1^2, $$ which immediately implies the same formula for ${\mathcal {N}}(z)$. In fact, in all examples considered in~\cite{kp} the resulting functions $ {\mathcal P} (z)$ are holonomic, that is, they satisfy linear differential equations with polynomial coefficients. This means that the asymptotics of their coefficients can be calculated by modern computer algebra tools, see in particular~\cite{kauers}. We see that in these cases the ODE system generated by our algorithm implies a linear ODE with polynomial coefficients. For more complicated examples, one could apply tools based on differential algebra elimination theory to obtain a single differential or functional equation for $ {\mathcal P} (z)$ in a simpler form. If the growth of the operad is bounded, our equations give even more. The next theorem explains, in particular, why such a simple operad as the operad Com of commutative algebras has the non-algebraic exponential generating function $e^z-1$. \begin{theorem} \label{th:intro_slow_growth} Let $ {\mathcal P} $ be an operad with a finite Gr\"obner basis and let $M$ be the set of its leading terms. 
Suppose that either \\ $\phantom{1111}(i)$ $ {\mathcal P} $ is non-symmetric and the numbers $\mbox{dim\,} {\mathcal P} (n)$ are bounded by some polynomial in $n$\\ or\\ $\phantom{1111}(ii)$ $M$ is shuffle regular and the dimensions $\mbox{dim\,} {\mathcal P} (n)$ are bounded by an exponential function $a^n$ for some $a>1$.\\ Then the ordinary generating series $ {\mathcal P} (z)$ is rational. \end{theorem} Of course, the methods for finding the asymptotics of the coefficients of such rational functions have been well known for centuries. In this case the sequence of codimensions is linearly recurrent. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Nonclassical states of bosonic modes, such as optical fields \cite{SZ}, motional states of trapped ions \cite{PhysRevLett.76.1796} or mechanical oscillators in optomechanical systems \cite{PhysRevA.56.4175}, are important resources for quantum-enhanced technologies \cite{loncar2019development}, including quantum communication \cite{Braunstein98,Milburn99,PhysRevLett.88.057902}, quantum computation \cite{PhysRevA.59.2631,Lloyd99,Bennett:2000aa,Knill:2001aa, PhysRevLett.119.030502}, and quantum metrology \cite{Caves81,Holland:1993aa,Braunstein:1994aa, Pezze08,Xu:2012aa,PhysRevLett.110.163604, Aasi:2013aa,Lang13, Liu:2013aa, TanPRA14,toth2014quantum, Demkowicz-Dobrzanski:2015aa, GePRL2018, Mccormick2018,PhysRevLett.124.171101,PhysRevLett.124.171102,PhysRevApplied.13.024037,polino2020photonic}. These applications are enabled by superpositions of coherent states, the most intriguing feature of nonclassical states. Due to its potential as an important resource, a quantitative understanding of single-mode nonclassicality is crucial to the field of quantum optics and related subjects. The minimum requirement for a nonclassicality measure is that the quantity be nonnegative for any state and zero only if the state is classical. Based on this requirement, there have been many different measures proposed for quantifying single-mode nonclassicality, including the nonclassical distance \cite{Marian:02, hillery1987nonclassical}, the nonclassicality depth \cite{Lee:91}, the entanglement potential \cite{AsbothPRL05}, and quantifications via the Schmidt rank \cite{GehrkePRA12, Vogel:14}. Many of these definitions capture some aspects of this intriguing nonclassical feature. For example, the quantifications using the Schmidt rank are defined to be proportional to the minimum number of coherent states superposed in a quantum state \cite{GehrkePRA12}. 
Recently, quantifying nonclassicality has been studied in a stricter sense based on resource theories (RTs) \cite{Tan17, Streltsov17, RevModPhys.91.025001}. According to RTs, all quantum states can be categorized into two groups, one being free states and the other being resource states. In addition, RTs define a set of operations as free operations such that they cannot increase the quantity of interest on any state. For the resource theory of nonclassicality \cite{Tan17}, the free states are classical states and the resource states are nonclassical states. A natural choice of free quantum operations for nonclassicality is the set of classical operations, such as applying phase shifts and beam splitters. Being a stricter definition, an RT nonclassicality measure has to both satisfy the aforementioned non-negativity and be monotonically non-increasing under any classical operation \cite{Streltsov17}. While RTs provide the basic methodology for defining a measure of nonclassical states in terms of resources \cite{RevModPhys.91.025001}, they do not necessarily require a measure to quantify the ability of quantum states to provide enhanced performance for certain tasks, such as precision sensing, which is referred to as ``operational'' \footnote{This definition of ``operational'' is different from the usual definition, where it means the ability to be converted and manipulated \cite{PhysRevResearch.2.012035, Winter16}.}. Recently, there have been some efforts devoted to the study of an operational resource theory (ORT) of nonclassicality \cite{YadinPRX18, Kwon19, ge2019operational}. An important RT measure of nonclassicality in terms of mean quadrature variance has been proposed by Yadin \emph{et al.}~\cite{YadinPRX18} and Kwon \emph{et al.}~\cite{Kwon19} independently. 
For pure states, the measure has the meaning of the metrological enhancement beyond the standard quantum limit for an averaged sensing task; however, it is unknown whether it has a direct operational meaning in terms of metrology for mixed states \cite{Kwon19}. Ge \emph{et al.}~\cite{ge2019operational} proposed the first operational resource theory measure of nonclassicality, which satisfies the minimal requirements of an RT, quantifies the ability to perform quadrature sensing for pure states, and is a tight upper bound for the latter for mixed states. Interestingly, this measure also quantifies the macroscopicity \cite{Frowis2018} of nonclassical states in terms of the averaged size of coherent superpositions. In this work, we apply the ORT measure to evaluate the nonclassicality of single-mode quantum states, both pure and mixed. While some pure-state examples have been given in Ref.~\cite{ge2019operational} to illustrate the concept and the crucial properties of the ORT of nonclassicality, some interesting questions remain about evaluating single-mode nonclassicality using the measure. First, the ORT measure suggests that the maximally nonclassical state for a fixed energy is a squeezed vacuum state \cite{ge2019operational}. An interesting question is then whether it is possible to find other states that achieve this maximum nonclassicality asymptotically in a certain limiting case. Second, are there any simple operations that greatly enhance the nonclassicality of a quantum state? Third, the ORT measure for a mixed state is based on a convex roof construction, where a minimization is performed over all possible decompositions of a quantum state. An important question is then how to calculate the nonclassicality of mixed states, at least for some classes of states. We answer these questions in this work by evaluating many examples of single-mode quantum states, including Fock states, squeezed coherent states, cat states, single-photon added states, and mixed states diagonal in the Fock basis. 
We categorize the sets of quantum states using the ORT measure and its relation to quadrature sensing. In particular, we divide the set of nonclassical pure states into three classes depending on their nonclassicality. We find that there is a class of states that are as nonclassical as a squeezed vacuum in the asymptotic limit of a large number of average excitations, which provides interesting alternatives for quantum metrology. We also investigate the nonclassicality of a quantum state before and after single-photon addition. Our results show that single-photon operations can greatly increase the nonclassicality of a state quantified by the ORT measure, which will be important for preparing strongly nonclassical states from weakly nonclassical ones. For mixed states with finite dimensions, we show some examples of calculating the nonclassicality by analytically finding the convex roof of the measure. Our results show that the nonclassicality of some Fock-state superpositions may not be affected by coupling to an environment, e.g., phase damping. For mixed states with infinite dimensions, we calculate lower bounds for the measure. The paper is organized as follows. In Sec. \ref{sec2}, we introduce the ORT measure of nonclassicality and its relation to two important quantum sensing tasks. In Sec. \ref{sec3}, we extensively investigate examples of nonclassical states using the ORT measure. In Sec. \ref{sec4}, we compare the ORT measure with some existing measures of nonclassicality. We summarize the main results of this work in Sec. \ref{sec5}. \section{Some Basics of Nonclassical States \label{sec2}} \subsection{Operational Nonclassicality Measure} We begin by introducing the definition of a nonclassical state. A single-mode quantum state $\hat{\rho}$ can be represented using the Glauber-Sudarshan $P$ function \cite{Glauber63, Sudarshan63} as \begin{eqnarray} \hat{\rho}=\int P(\alpha,\alpha^{\ast})\ket{\alpha}\bra{\alpha}d^2\alpha. 
\label{eq:P-function} \end{eqnarray} The state $\hat\rho$ is defined as classical if the probability distribution function $P(\alpha,\alpha^{\ast})$ is positive definite, mimicking a classical probability density over the coherent states $|\alpha\rangle$. The state is nonclassical if $P(\alpha,\alpha^{\ast})$ is singular or not positive definite \cite{Lee:91,SZ}. Now we introduce the operational resource theory measure of nonclassicality given by \cite{ge2019operational} \begin{align} \mathcal{N}\left(\hat{\rho}\right) & = \min_{\{p_j,\ket{\psi_j}\}}\biggl\{\max_{\mu}\sum_j p_j\langle \psi_j | (\Delta\hat{X}_{\mu})^2 | \psi_j \rangle \biggr\}-\frac{1}{2}\nonumber\\ &=\min_{\{p_j,\ket{\psi_j}\}} \biggl\{\sum_j p_j\left(\bar{n}_j-|\bar{\alpha}_j|^2\right)+\biggl|\sum_j p_j\left(\bar{\xi}_j-\bar{\alpha}_j^2\right)\biggr|\biggr\} , \label{eq:n-mixed} \end{align} where the minimization is over all possible ensembles with $\hat{\rho}=\sum_jp_j \ket{\psi_j}\bra{\psi_j}$ $\big(p_j>0$ and $\sum_jp_j=1\big)$ and the maximization is over all possible quadratures, defined by $\hat{X}_{\mu}=i\left(e^{-i\mu}\hat{a}^{\dagger}-e^{i\mu}\hat{a}\right)/\sqrt{2}$ in which $\hat{a}$ is the annihilation operator for the bosonic mode and $\mu\in [0,2\pi]$. In the second line of Eq. \eqref{eq:n-mixed}, we have used the moments $\bar{n}_j \equiv \bra{\psi_j}\hat{a}^{\dagger}\hat{a}\ket{\psi_j}$, $\bar{\xi}_j \equiv \bra{\psi_j}\hat{a}^2\ket{\psi_j}$, and $\bar{\alpha}_j \equiv \bra{\psi_j}\hat{a}\ket{\psi_j}$. Hence, one can see that $\mathcal{N}\left(\hat{\rho}\right)=0$ for a classical state by using the coherent-state decomposition from the definition in Eq. \eqref{eq:P-function}. Unlike the nonclassicality witness via squeezing (minimum variance) \cite{SZ}, the definition in Eq. \eqref{eq:n-mixed} involves the maximum quadrature variance, which gives rise to its relation to quantum-enhanced metrology. 
For pure states, the measure reduces to \begin{align} \mathcal{N}(|\psi\rangle) = \bar{n}-|\bar{\alpha}|^2+\left|\bar{\xi}-\bar{\alpha}^2\right|, \label{Npure} \end{align} where $\bar{n} \equiv \langle \hat{a}^{\dagger}\hat{a}\rangle$, $\bar{\xi}\equiv\langle \hat{a}^2\rangle$, and $\bar{\alpha}\equiv\langle \hat{a}\rangle$. In general, the definition $\mathcal{N}\left(\hat{\rho}\right)$ has been shown \cite{ge2019operational} to satisfy (i) Non-negativity: $\mathcal{N}\left(\hat{\rho}\right)\ge0$ for any state $\hat{\rho}$, where the equality holds if and only if $\hat{\rho}$ is classical; (ii) Weak monotonicity: $\mathcal{N}$ cannot be increased by any classical operation $\Lambda$, i.e., $\mathcal{N}\left(\Lambda[\hat{\rho}]\right) \le \mathcal{N}\left(\hat{\rho}\right)$; (iii) Convexity: $\sum_jp_j\mathcal{N}\left(\hat{\rho}_j\right)\ge\mathcal{N}\left(\sum_jp_j\hat{\rho}_j\right)$ for any quantum states $\hat{\rho}_j$ and probabilities $p_j$. A classical operation can be the augmentation by any number of classical states, the application of passive linear optical operations and displacements, or the tracing out of auxiliary modes. The first two conditions are the minimum requirements for a meaningful measure in the RT of nonclassicality. The concept of the ORT of nonclassicality \cite{YadinPRX18,Kwon19,ge2019operational} in addition requires an RT measure to have an operational meaning such that it relates to the ability to perform certain tasks. This will be discussed in the following when we introduce the tasks of quadrature sensing and phase sensing in a Mach-Zehnder interferometer. \subsection{Quantum Metrology with Nonclassical States} \begin{figure}[t] \leavevmode\includegraphics[width = 1\columnwidth]{sensing.pdf} \caption{Quantum sensing tasks. (a) Quadrature sensing using a single-mode quantum state $\hat\rho$. 
(b) Phase sensing in a balanced Mach-Zehnder interferometer using $\hat\rho$ with a classical resource, i.e., a coherent state $\ket{\alpha_r}$.} \label{fig:sensing} \end{figure} \subsubsection{Quadrature Sensing} In quadrature sensing, the task is to estimate an unknown parameter $\theta$ that is encoded in a state $\hat\rho$ via unitary dynamics, i.e., $\hat{\rho}(\theta)=e^{-i\hat{X}_{\mu}\theta}\hat\rho e^{i\hat{X}_{\mu}\theta}$, as shown in Fig.~\ref{fig:sensing}~(a). Quadrature sensing has many applications, such as continuous-variable quantum cryptography~\cite{PhysRevLett.88.057902} and mechanical displacement sensing \cite{Hoff:2013aa}. According to the quantum Cram\'er-Rao bound \cite{Braunstein:1994aa}, the estimation sensitivity of an unbiased estimator $\Theta$ for the parameter $\theta$ satisfies \begin{align} \Delta^2\Theta\ge \frac{1}{MF_X\left(\hat{\rho}\right)}, \label{eq:CRB} \end{align} where $M$ is the number of repetitions and $F_X\left(\hat{\rho}\right) = 4\max_\mu \Biggl[ \min_{\{p_j,\ket{\psi_j}\}}\biggl\{\sum_j p_j\langle \psi_j | (\Delta\hat{X}_\mu)^2 | \psi_j \rangle \biggr\} \Biggr]$ is the quantum Fisher information (QFI) optimized over $\mu$ for the state $\hat{\rho}(\theta)$ \cite{toth2014quantum, Demkowicz-Dobrzanski:2015aa}. For a classical state, $F_X\left(\hat{\rho}\right) \le 2$, and $\Delta\Theta=1/\sqrt{2M}$ is defined as the standard quantum limit (SQL) in quadrature sensing. Therefore, the metrological power for quadrature measurement is $\mathcal{W}\left(\hat{\rho}\right) \equiv \frac{1}{4}\max[F_X\left(\hat{\rho}\right)-2,0]$ \cite{Kwon19, ge2019operational}, quantifying the amount of metrological advantage beyond the SQL. 
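Since the pure-state expression in Eq.~\eqref{Npure} involves only the moments $\bar{n}$, $\bar{\alpha}$, and $\bar{\xi}$, it is straightforward to evaluate numerically in a truncated Fock basis. A minimal sketch (the truncation dimension and the encoding of states as Fock-amplitude vectors are our own choices, assuming numpy):

```python
import math
import numpy as np

def ort_nonclassicality(psi):
    """N(|psi>) = nbar - |abar|^2 + |xibar - abar^2| for a Fock-amplitude vector psi."""
    psi = np.asarray(psi, dtype=complex)
    n = np.arange(len(psi))
    abar = np.vdot(psi[:-1], np.sqrt(n[1:]) * psi[1:])             # <a>
    xibar = np.vdot(psi[:-2], np.sqrt(n[1:-1] * n[2:]) * psi[2:])  # <a^2>
    nbar = np.vdot(psi, n * psi).real                              # <a^dag a>
    return nbar - abs(abar)**2 + abs(xibar - abar**2)

dim = 40
fock3 = np.eye(dim)[3]                                   # |3>
sup02 = (np.eye(dim)[0] + np.eye(dim)[2]) / np.sqrt(2)   # (|0> + |2>)/sqrt(2)
coh = np.exp(-0.8**2 / 2) * np.array(
    [0.8**k / math.sqrt(math.factorial(k)) for k in range(dim)])  # |alpha = 0.8>
print(ort_nonclassicality(fock3))   # 3.0, since N(|n>) = n
print(ort_nonclassicality(sup02))   # 1 + 1/sqrt(2) ~ 1.707
print(ort_nonclassicality(coh))     # ~ 0 for a (truncated) coherent state
```

The coherent state gives essentially zero, as it must for a classical state, while the Fock state $\ket{3}$ gives $\mathcal{N}=3$.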
The operational meaning of the measure $\mathcal{N}$ is given by the following relation with the metrological power $\mathcal{W}(\hat{\rho})$ \cite{ge2019operational}: \begin{align} \mathcal{N}\left(\hat{\rho}\right) \ge \mathcal{W}(\hat{\rho}), \label{eq:ineq1} \end{align} where the equality holds when $\hat\rho$ is a pure state, meaning that every nonclassical pure state is useful for beating the SQL in quadrature sensing. Using the concepts of $\mathcal{N}$ and $\mathcal{W}$ and their relation, we can visualize different sets of states in Fig. \ref{fig:state}. For example, the set of classical states is given by $\mathcal{N}\left(\hat\rho_{\text{cl}}\right)=0$ and the set of metrologically useful states is given by $\mathcal{W}\left(\hat\rho_{\text{mp}}\right)>0$. In between, there is a region of nonclassical states with zero metrological power, i.e., $\mathcal{N}\left(\hat\rho_{\text{nc}}\right)>0$ and $\mathcal{W}\left(\hat\rho_{\text{nc}}\right)=0$, as shown in the figure. Since every nonclassical pure state has a nonzero metrological power, a nonclassical state in this region must be a mixed state, which will be discussed in Sec. \ref{sec3b}. \subsubsection{Phase Sensing\label{sec2b}} Now we introduce another metrological task, phase sensing in a balanced Mach-Zehnder interferometer (MZI), as shown in Fig.~\ref{fig:sensing}~(b). The MZI is a paradigmatic model in optical metrology \cite{TanPRA14, Demkowicz-Dobrzanski:2015aa, Caves81,Pezze08}, with the benchmark example of feeding a coherent state and a squeezed vacuum state \cite{Caves81,Pezze08,PhysRevLett.124.171101,PhysRevLett.124.171102}, which has been used in the LIGO experiments for quantum-enhanced sensitivity \cite{ Aasi:2013aa}. Here this idea is generalized by feeding a coherent state $\ket{\alpha_r}$ and a quantum state $\hat\rho$ \cite{ge2019operational}. 
For a given input state $\hat\rho$, it has been shown that the optimal precision in estimating the phase difference between the two paths of a MZI is achieved by choosing the first beam splitter to be $50/50$, i.e., balanced \cite{Jarzyna12,Hofmann:2009aa, ge2019operational}. The optimal QFI of the MZI is given by $F_{\theta}^{\text{MZI}}\left(\hat{\rho}\right)= N +\frac{\left|\alpha_r\right|^2}{2}\left[F_X\left(\hat{\rho}\right)-2\right]$, where $N=|\alpha_r|^2+\bar{n}$ is the mean number of total input photons. Similar to quadrature sensing, the QFI relates to the phase estimation sensitivity in the MZI as $\Delta^2\Theta\ge 1/\left[MF_{\theta}^{\text{MZI}}\left(\hat{\rho}\right)\right]$ \cite{Demkowicz-Dobrzanski:2015aa}. Therefore, two implications follow from the expression of the QFI \cite{ge2019operational}: \\ (i) $F_{\theta}^{\text{MZI}}\left(\hat{\rho}\right)>N\Leftrightarrow F_X\left(\hat{\rho}\right)>2$, meaning that achieving sensitivity beyond the SQL in quadrature sensing using $\hat\rho$ is equivalent to doing so in phase sensing in the MZI. \\ (ii) Heisenberg-limited phase sensing, i.e., $F_{\theta}^{\text{MZI}}\sim N^2$, can be achieved when $F_X\left(\hat{\rho}\right)-2\sim \bar{n}$. In particular, this condition is met when $\mathcal{N}(\hat\rho)\sim \bar{n}$ for pure states, by employing the equality in Eq. \eqref{eq:ineq1}. We define $\mathcal{N}_{\bar{n}}\left(\hat\rho\right)\equiv\mathcal{N}\left(\hat\rho\right)/\bar{n}$ as the nonclassicality per unit energy. Then Heisenberg-limited sensing can be achieved when $\mathcal{N}_{\bar{n}}\sim 1$. \section{Evaluating nonclassicality\label{sec3}} \begin{figure}[t] \leavevmode\includegraphics[width = 0.8 \columnwidth]{state-sets.pdf} \caption{Sets of states categorized by the ORT measure $\mathcal{N}$ and the metrological power $\mathcal{W}$. 
The white region between the black (the largest) and the green (the second largest) ovals corresponds to the set of classical states, i.e., $\mathcal{N}\left(\hat\rho_{cl}\right)=0$. The region inside the green (the second largest) oval is the set of nonclassical states, i.e., $\mathcal{N}\left(\hat\rho_{nc}\right)>0$. The region inside the red (the third largest) oval is the set of nonclassical states with nonzero metrological power, i.e., $\mathcal{W}\left(\hat\rho_{mp}\right)>0$. The region inside the blue (the smallest) oval is the set of nonclassical pure states, which is further categorized into three different classes according to their nonclassicality per unit energy (see the text in Sec.~\ref{sec3a} for details).} \label{fig:state} \end{figure} \subsection{Nonclassicality for pure states\label{sec3a}} We have shown that nonclassical pure states have the ability to achieve sensitivity beyond the SQL in both the quadrature sensing and the interferometric phase sensing schemes. According to Eq. \eqref{eq:ineq1}, the amount of nonclassicality of a pure state has a one-to-one correspondence with its power for quantum-enhanced metrology, which is the region in the blue oval (the smallest oval) in Fig. \ref{fig:state}. Furthermore, we can categorize the nonclassical pure states into three classes using the nonclassicality per unit energy:\\ \noindent (i)~~~Class 1: $\lim_{\bar{n}\rightarrow \infty}\mathcal{N}_{\bar{n}}=2$;\\ \noindent (ii)~~Class 2: $1\le\lim_{\bar{n}\rightarrow \infty}\mathcal{N}_{\bar{n}}<2$;\\ \noindent (iii)~Class 3: $0<\lim_{\bar{n}\rightarrow \infty}\mathcal{N}_{\bar{n}}<1$.\\ In addition, we study single-photon added states and how they fit into these categories. 
\subsubsection{Class 1: The asymptotically maximally nonclassical states $\lim_{\bar{n}\rightarrow \infty}\mathcal{N}_{\bar{n}}=2$} It has been shown that a squeezed vacuum state $\ket{\xi}$ has the maximum nonclassicality per unit energy \cite{ge2019operational}, with $\mathcal{N}_{\bar{n}}=1+\sqrt{1+1/\bar{n}}$. Therefore, it is the most useful state in the MZI for phase sensing with a coherent input \cite{Lang13, Pezze08}. Here we investigate the class of states that can achieve the asymptotic maximum nonclassicality, i.e., $\lim_{\bar{n}\rightarrow \infty}\mathcal{N}_{\bar{n}}=2$, which could provide alternatives to a squeezed vacuum in quantum-enhanced metrology. Consider a pure state $\ket{\psi}=\sum_{k=1}^L c_k\ket{\alpha_k}$ that is a superposition of $L$ coherent states $\ket{\alpha_k}$. The complex amplitudes $\alpha_k$ and the coefficients $c_k$ satisfy the normalization condition $\sum_{j,k}c_jc_k^{\ast}f_{jk}=1$, where $f_{jk}=\braket{\alpha_k|\alpha_j}$. Using this representation, we find a class of cat states that can achieve the asymptotic value of maximum nonclassicality $2\bar{n}$ in the limit $\bar{n}\gg1$. One way to obtain some of these states is to choose $\bar{\alpha}=0$ and $\bar{n}=|\bar{\xi}|$. Using the superposition of coherent states, we find \begin{align} \bar{\alpha}=\bra{\psi}\hat{a}\ket{\psi}&=\sum c_jc_k^{\ast}\alpha_jf_{jk}\approx\sum |c_j|^2\alpha_j, \nonumber\\ \bar{\xi}=\bra{\psi}\hat{a}\hat{a}\ket{\psi}&=\sum c_jc_k^{\ast}\alpha^2_jf_{jk}\approx\sum |c_j|^2\alpha^2_j, \nonumber\\ \bar{n}=\bra{\psi}\hat{a}^{\dagger}\hat{a}\ket{\psi}&=\sum c_jc_k^{\ast}\alpha_j\alpha_k^{\ast}f_{jk}\approx\sum |c_j|^2\left|\alpha_j\right|^2, \end{align} where the approximations hold in the limit $|\alpha_k-\alpha_j|\gg1$ for $j\neq k$, such that $f_{jk}\approx \delta_{jk}$. Then $\bar{\alpha}=0$ is equivalent to requiring the weighted sum of all coherent amplitudes to vanish. 
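Both conditions can be verified numerically, e.g., for even cat states $\propto\ket{\alpha}+\ket{-\alpha}$: in a truncated Fock basis one finds $\bar{\alpha}=0$, $\bar{n}\approx|\bar{\xi}|$, and $\mathcal{N}_{\bar{n}}\rightarrow2$ as $|\alpha|$ grows. A sketch (the truncation dimension is our own choice, assuming numpy):

```python
import math
import numpy as np

def coherent(alpha, dim):
    """Fock amplitudes of |alpha>, truncated to dim levels."""
    c = np.array([alpha**k / math.sqrt(math.factorial(k)) for k in range(dim)],
                 dtype=complex)
    return np.exp(-abs(alpha)**2 / 2) * c

dim = 60
for alpha in (1.0, 2.0, 3.0):
    psi = coherent(alpha, dim) + coherent(-alpha, dim)   # (unnormalized) even cat
    psi /= np.linalg.norm(psi)
    n = np.arange(dim)
    abar = np.vdot(psi[:-1], np.sqrt(n[1:]) * psi[1:])             # <a>
    xibar = np.vdot(psi[:-2], np.sqrt(n[1:-1] * n[2:]) * psi[2:])  # <a^2>
    nbar = np.vdot(psi, n * psi).real                              # <a^dag a>
    Nn = (nbar - abs(abar)**2 + abs(xibar - abar**2)) / nbar
    print(alpha, abs(abar), abs(xibar) / nbar, Nn)   # abar = 0, and Nn -> 2
```

Since only even Fock amplitudes are nonzero, $\bar{\alpha}$ vanishes identically, and $\mathcal{N}_{\bar{n}}$ approaches $2$ already for moderate $|\alpha|$.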
The condition $\bar{n}=|\bar{\xi}|$ limits the choice of the phases of the $\alpha_j$: all $\alpha_j^2$ must have the same orientation. \begin{figure}[t] \leavevmode\includegraphics[width = 0.8 \columnwidth]{Npure.pdf} \caption{Nonclassicality per unit energy $\mathcal{N}_{\bar{n}}$ for different pure states as a function of $\bar{n}$.} \label{fig:Npure} \end{figure} For concreteness, we list some of these states. They can be the even and odd cat states $\ket{\alpha}_{\pm}= N_{\pm}^{-1/2}\left(\ket{\alpha}\pm\ket{-\alpha}\right)$ \cite{Dodonov:1974aa} with $N_{\pm}=2\pm2 e^{-2|\alpha|^2}$, whose nonclassicalities per unit energy are given by $1+N_{\pm}/N_{\mp}$. Another example is a three-headed cat state $\ket{\psi}_{3h}=N_{3h}^{-1/2}\left(\ket{\alpha}+\ket{0}+\ket{-\alpha}\right)$ with $\mathcal{N}_{\bar{n}}\left(\ket{\psi}_{3h}\right)=1+(N_++2e^{-|\alpha|^2/2})/N_-$, where $N_{3h}=3+4e^{-|\alpha|^2/2}+2e^{-2|\alpha|^2}$. The values of $\mathcal{N}_{\bar{n}}$ of these states all approach $2$ for $\bar{n}\gg1$ (Fig. \ref{fig:Npure}). Our results show that there is a class of nonclassical states that are as useful as a squeezed vacuum for phase sensing in the MZI in the asymptotic limit, which is important for a number of experiments, such as gravitational-wave detection with LIGO \cite{PhysRevLett.124.171101,PhysRevLett.124.171102, Aasi:2013aa}. \subsubsection{Class 2: $1\le\lim_{\bar{n}\rightarrow \infty}\mathcal{N}_{\bar{n}}<2$} The second class of nonclassical states is also useful for Heisenberg-limited sensing in the MZI. To find states that belong to this class, we consider two approaches: (i) adding a coherent displacement $\mathcal{D}(\alpha)$ \cite{SZ} to the states in the first class; (ii) searching for states satisfying the conditions $\bar{\alpha}=0$ and $|\bar{\xi}|<\bar{n}$, according to Eq. \eqref{Npure}. 
\begin{figure}[t] \leavevmode\includegraphics[width = 0.8 \columnwidth]{coherent-squeezed.pdf} \caption{Nonclassicality per unit energy of squeezed coherent states $\ket{\alpha,\xi}$ as a function of $|\alpha|^2$ normalized with respect to $\left(\sinh 2r\right)/2$ at $r=5$. For $2|\alpha|^2\lesssim\sinh 2r$, $\mathcal{N}_{\bar{n}}\gtrsim1$, indicating the ability for Heisenberg-limited sensing.} \label{fig:SC} \end{figure} The nonclassicality $\mathcal{N}$ is unchanged by a coherent displacement since it is a classical operation \cite{Tan17}, while the nonclassicality per unit energy $\mathcal{N}_{\bar{n}}$ decreases. We give an example using the squeezed coherent states $\ket{\alpha,\xi}=\mathcal{D}(\alpha)\mathcal{S}(\xi)\ket{0}$. For a squeezed coherent state, we calculate its nonclassicality to be $\mathcal{N}\left(\ket{\alpha,\xi}\right)=\sinh^2r+\cosh r\sinh r$, where $\xi=re^{i\theta}$. Obviously, $\mathcal{N}\left(\ket{\alpha,\xi}\right)$ is independent of the coherent displacement. The nonclassicality per unit energy of the state is given by \begin{align} \mathcal{N}_{\bar{n}}\left(\ket{\alpha,\xi}\right)=\frac{\sinh^2r+\cosh r\sinh r}{|\alpha|^2+\sinh^2r}. \end{align} Increasing $|\alpha|$ thus reduces the nonclassicality per unit energy. For $0<|\alpha|^2\le\cosh r\sinh r$, we find $1\le\mathcal{N}_{\bar{n}}\left(\ket{\alpha,\xi}\right)<2$ for $\bar{n}\gg1$ (Fig. \ref{fig:SC}). In the second approach, we consider a few examples in which the conditions $\bar{\alpha}=0$ and $\bar{\xi}<\bar{n}$ are met. For example, Fock states have nonclassicalities simply given by $\mathcal{N}(\ket{n})=n$ according to Eq. \eqref{Npure}. They are an important class of nonclassical states which are also non-Gaussian \cite{PhysRevA.76.042327}, and they allow Heisenberg-limited phase sensing in an interferometer together with another input state, classical \cite{Xu:2012aa,PhysRevLett.110.163604} or nonclassical \cite{Holland:1993aa, GePRL2018}.
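The closed form for $\mathcal{N}_{\bar{n}}\left(\ket{\alpha,\xi}\right)$ above can be checked with a short numerical sketch; the value $r=5$ matches Fig. \ref{fig:SC}, and the crossover at $|\alpha|^2=\cosh r\sinh r=(\sinh 2r)/2$ is exact.

```python
import math

def ncl_per_energy(alpha2, r):
    """Closed form from the text for a squeezed coherent state |alpha, xi>,
    xi = r e^{i theta}: (sinh^2 r + cosh r sinh r) / (|alpha|^2 + sinh^2 r)."""
    num = math.sinh(r)**2 + math.cosh(r) * math.sinh(r)
    return num / (alpha2 + math.sinh(r)**2)

r = 5.0
a2_star = math.cosh(r) * math.sinh(r)    # crossover point (sinh 2r)/2
print(ncl_per_energy(a2_star, r))        # exactly 1: boundary of class 2
print(ncl_per_energy(0.0, r))            # 1 + coth(r), close to 2 for large r
```

For $|\alpha|^2$ beyond the crossover the value drops below $1$, which is the class-3 regime discussed below.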
A superposition of two Fock states \cite{Ryl17}, $\ket{\psi(n)}=1/\sqrt{2}\left(\ket{0}+\ket{n}\right)$, has nonclassicality per unit energy $\mathcal{N}_{\bar{n}}(\ket{\psi(n)})=1+\delta_{n,2}/\sqrt{2}$, which also allows Heisenberg-limited sensitivity \cite{Mccormick2018}. As another example, a four-headed cat state $\ket{\psi}_{4h}=N_{4h}^{-1/2}\left(\ket{\alpha}-\ket{i\alpha}+\ket{-\alpha}-\ket{-i\alpha}\right)$ with $N_{4h}=4-8e^{-|\alpha|^2}\cos|\alpha|^2+4e^{-2|\alpha|^2}$, which is also useful for quantum error correction \cite{PhysRevLett.119.030502}, has the same nonclassicality as a Fock state for the same amount of energy, i.e., $\mathcal{N}_{\bar{n}}\left(\ket{\psi}_{4h}\right)=1$. The nonclassicalities per unit energy of these states are shown in Fig. \ref{fig:Npure} as a function of $\bar{n}$. \begin{widetext} \renewcommand{\arraystretch}{1.4} \setlength{\tabcolsep}{7pt} \begin{table} \caption{Nonclassicality with single-photon addition} \begin{ruledtabular} \begin{tabular}{c c c c} \label{tb} & $\ket{\alpha}$ & $\ket{\xi}$ & $\ket{\alpha}_{\pm}$ \\ \hline $\mathcal{N}$ & $0$ & $\frac{1}{2}(e^{2r}-1)$ & $|\alpha|^2\left(1+N_{\mp}/N_{\pm}\right)$\\[0.75ex] $\mathlarger{\mathcal{N}_{\bar{n}}}$ & $0$ & $1+\sqrt{1+\frac{1}{\bar{n}}}$ & $1+N_{\pm}/N_{\mp}$\\[1.25ex] $\mathcal{N}[\hat{a}^{\dagger}]$ & $\mathlarger{\frac{1}{1+|\alpha|^2}}$ & $\frac{1}{2}(3e^{2r}-1)$ & $ \mathlarger{\frac{|\alpha|^2\left(|\alpha|^2+3\right)\left(1+\frac{N_{\mp}}{N_{\pm}}\right)+1}{|\alpha|^2N_{\mp}/N_{\pm}+1}}$\\[3ex] $\mathlarger{\mathcal{N}[\hat{a}^{\dagger}]_{\bar{n}}}$ & $\mathlarger{\frac{1}{\left(1+|\alpha|^2\right)^2}}$ & $1+\sqrt{1+\frac{1}{\bar{n}}-\frac{2}{\bar{n}^2}}$ & $\mathlarger{\frac{|\alpha|^2\left(|\alpha|^2+3\right)\left(1+\frac{N_{\mp}}{N_{\pm}}\right)+1}{|\alpha|^2\left(|\alpha|^2+3N_{\mp}/N_{\pm}\right)+1}}$ \end{tabular} \end{ruledtabular} \label{tab} \end{table} \end{widetext} \subsubsection{Class 3: $0<\lim_{\bar{n}\rightarrow \infty}\mathcal{N}_{\bar{n}}<1$}
This class of states is less nonclassical than the previous two classes. Nevertheless, these states can provide sensitivity beyond the SQL in sensing tasks. States that fall in this category must have nonzero coherence, i.e., $\bar{\alpha}\ne0$. For example, they can be squeezed coherent states with $|\alpha|^2>\cosh r\sinh r$, for which $0<\mathcal{N}_{\bar{n}}\left(\ket{\alpha,\xi}\right)<1$ (see Fig. \ref{fig:SC}). In summary, we have categorized nonclassical pure states into three classes using the nonclassicality measure per unit energy. This provides a useful reference for comparing the nonclassicality of arbitrary pure states in terms of quantum metrology. \subsubsection{Photon-added states} Now we study the nonclassicality of a state after single-photon addition \cite{PhysRevA.43.492, Zavatta660,PhysRevA.82.063833, Ryl17,PhysRevA.91.022317}, which can be achieved via a conditioned nonlinear interaction with a single two-level atom \cite{PhysRevA.43.492} or a parametric crystal \cite{Zavatta660,PhysRevA.82.063833}. Single-photon addition has been demonstrated to generate nonclassical features from classical states \cite{Zavatta660,PhysRevA.82.063833}. Here we examine quantitatively the nonclassicalities of a quantum state before and after single-photon addition (denoted by $\mathcal{N}[\hat{a}^{\dagger}]$ and $\mathlarger{\mathcal{N}[\hat{a}^{\dagger}]_{\bar{n}}}$ in Table \ref{tb}).
\begin{figure}[t] \leavevmode\includegraphics[width = 0.8 \columnwidth]{cat-addition.pdf} \caption{Nonclassicality of an even cat state $\ket{\alpha}_+$ and that after a single-photon addition as a function of $\bar{n}$.} \label{fig:Cat-A} \end{figure} \begin{figure}[t] \leavevmode\includegraphics[width = 0.8 \columnwidth]{cat-addition-per-energy.pdf} \caption{Nonclassicality per unit energy of an even cat state $\ket{\alpha}_+$ and that after a single-photon addition as a function of $\bar{n}$.} \label{fig:Cat-APer} \end{figure} We observe that the nonclassicality is increased for all the states we studied. For example, the single-photon-added coherent state $\ket{1,\alpha}\equiv\frac{\hat{a}^{\dagger}}{\sqrt{1+|\alpha|^2}}\ket{\alpha}$ \cite{PhysRevA.82.063833, Zavatta660} has a nonclassicality of $1/(1+|\alpha|^2)$, compared with zero for a coherent state. We note that the nonclassicality of $\ket{1,\alpha}$ is smaller than that of a coherently displaced single-photon state $\ket{\alpha,1}=\mathcal{D}(\alpha)\ket{1}$. The reason is that the former is a superposition of a coherent state (a classical state) and $\ket{\alpha,1}$, i.e., $\ket{1,\alpha}=\left(\alpha^{\ast}\ket{\alpha}+\ket{\alpha,1}\right)/\sqrt{1+|\alpha|^2}$, so that its nonclassicality lies between $0$ and $1$. Interestingly, for initial nonclassical states in Class 1 ($\lim_{\bar{n}\rightarrow \infty}\mathcal{N}_{\bar{n}}=2$), the nonclassicality enhancement, $\mathcal{N}[\hat{a}^{\dagger}]- \mathcal{N}$, after single-photon addition can be much greater than $1$. For example, the nonclassicality of the single-photon-added squeezed vacuum is almost three times larger than its value before the operation (Table \ref{tb}). Similarly, the nonclassicality of even cat states is increased to twice its value when $\bar{n}=2$, as shown in Fig. \ref{fig:Cat-A}. This result suggests a potential protocol for preparing a more strongly nonclassical state via single-photon addition.
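The value $\mathcal{N}(\ket{1,\alpha})=1/(1+|\alpha|^2)$ quoted above can be reproduced numerically by applying $\hat{a}^{\dagger}$ in a truncated Fock basis and evaluating the pure-state ORT expression; $\alpha=1.5$ and the truncation dimension are arbitrary choices for this sketch.

```python
import numpy as np

def coherent(alpha, dim):
    """Fock-space amplitudes of a coherent state |alpha>, truncated at dim."""
    c = np.zeros(dim, dtype=complex)
    c[0] = np.exp(-abs(alpha)**2 / 2)
    for n in range(1, dim):
        c[n] = c[n - 1] * alpha / np.sqrt(n)
    return c

def annihilate(psi):
    out = np.zeros_like(psi)
    out[:-1] = np.sqrt(np.arange(1, len(psi))) * psi[1:]
    return out

def create(psi):
    """Apply a† in the truncated Fock basis: (a† psi)_n = sqrt(n) psi_{n-1}."""
    out = np.zeros_like(psi)
    out[1:] = np.sqrt(np.arange(1, len(psi))) * psi[:-1]
    return out

alpha, dim = 1.5, 60
psi = create(coherent(alpha, dim))      # a†|alpha>, unnormalized
psi /= np.linalg.norm(psi)              # the state |1, alpha>

a_psi = annihilate(psi)
abar = np.vdot(psi, a_psi)
xibar = np.vdot(psi, annihilate(a_psi))
nbar = np.vdot(a_psi, a_psi).real
N = nbar - abs(abar)**2 + abs(xibar - abar**2)
print(N, 1 / (1 + alpha**2))            # the two values agree
```

The same routine applied to a squeezed vacuum or a cat state reproduces the corresponding entries of Table \ref{tb}.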
We also observe that the nonclassicality per unit energy, $\mathcal{N}_{\bar{n}}$, after single-photon addition lies between the value for the single-photon state and that of the state before the operation. For example, $0<\mathcal{N}_{\bar{n}}(\ket{1,\alpha})<1$, and $1<\mathcal{N}_{\bar{n}}(\ket{1,\xi})<1+\sqrt{1+\frac{1}{\bar{n}}}$. For Fock states, the nonclassicality per unit energy is the same before and after single-photon addition. For cat states, we provide the derivations in Table \ref{tb}. However, this feature is difficult to see from the analytical results, so we plot the even cat state before and after single-photon addition as an example in Fig. \ref{fig:Cat-APer}. The nonclassicality per unit energy decreases after single-photon addition for the even cat state, although the difference is marginal for $\bar{n}\gg1$. We have checked this statement for a single-photon-added odd cat state, and we conjecture it to be true for arbitrary pure states. \subsection{Nonclassicality for mixed states\label{sec3b}} Although pure states are ideal for various applications, mixed states are inevitable in practical situations due to coupling to the environment. According to Eq. \eqref{eq:n-mixed}, evaluating single-mode nonclassicality for mixed states is a nontrivial task, as one needs to find the minimum value of the expression among all possible decompositions of $\hat\rho$, while the number of decompositions can be infinite in principle. To evaluate the ORT measure, we analyze the structure of a special class of mixed states that can be written in the diagonal Fock basis. We study some properties of these states and derive the nonclassicalities of some nontrivial examples. In situations where directly evaluating the nonclassicality measure is challenging, for example for a mixed state of infinite dimension, we can lower bound $\mathcal{N}\left(\hat\rho\right)$ using the metrological power $\mathcal{W}\left(\hat\rho\right)$ from the QFI via Eq. \eqref{eq:ineq1}.
We provide an example using a single-photon-added thermal state. We consider those states that can be written in the diagonal Fock basis, i.e., $\hat\rho=\sum_{i=0}^L p_i\ket{i}\bra{i}$, where $\sum_{i=0}^L p_i=1$ and $p_i\ge0$. Obviously, $\{p_i,\ket{i}\}$ is one possible decomposition. Therefore, any decomposition of $\hat\rho$ can be given by \begin{align} \hat{\rho}=\sum_{j=0}^Kq_j\ket{\phi_j}\bra{\phi_j}, \end{align} where $\ket{\phi_j}=1/\sqrt{q_j}\sum_{i=0}^LU_{ij}\sqrt{p_i}\ket{i}$, $q_j=\sum_{i=0}^LU_{ij}U_{ij}^{\ast}p_i$, and $K\ge L$. The matrix $U$ is an isometry such that $\sum_{j=0}^KU_{ij}U_{i^{\prime}j}^{\ast}=\delta_{i,i^{\prime}}$. \begin{figure}[t] \leavevmode\includegraphics[width = 0.8 \columnwidth]{Wvsnbar.pdf} \caption{The metrological power $\mathcal{W}$ of the single-photon-added thermal state $\hat\rho_{\text{th},1}$ as a function of the mean thermal photon number $\bar{n}_{\text{th}}$. The metrological power transitions from positive values to zero as $\bar{n}_{\text{th}}$ increases.} \label{fig:Wnbar} \end{figure} For any decomposition, we have $\bar{n}_j=1/q_j\sum_{i=0}^LU_{ij}U_{ij}^{\ast}p_ii$, $\bar{\alpha}_j=1/q_j\sum_{i=0}^LU_{ij}U_{i-1,j}^{\ast}\sqrt{p_ip_{i-1}i}$, and $\bar{\xi}_j=1/q_j\sum_{i=0}^LU_{ij}U_{i-2,j}^{\ast}\sqrt{p_ip_{i-2}i(i-1)}$. Interestingly, we find that for any decomposition set $\{q_j,\ket{\phi_j}\}$, the following properties hold \begin{align} \sum_{j=0}^Kq_j\bar{n}_j=\braket{\hat{a}^{\dagger}\hat{a}},\qquad \sum_{j=0}^Kq_j\bar{\alpha}_j=0, \qquad \sum_{j=0}^Kq_j\bar{\xi}_j=0, \end{align} where $\braket{\hat{a}^{\dagger}\hat{a}}=\sum_{i=0}^Lp_ii$. According to the definition Eq. \eqref{eq:n-mixed}, we obtain \begin{align} \mathcal{N}\left(\hat\rho\right)=\braket{\hat{a}^{\dagger}\hat{a}}-\max_{\{q_j,\ket{\phi_j}\}}\biggl\{\sum_{j=0}^K q_j|\bar{\alpha}_j|^2-\biggl|\sum_{j=0}^K q_j\bar{\alpha}_j^2\biggr|\biggr\}\le\braket{\hat{a}^{\dagger}\hat{a}}.
\end{align} The equality is achieved when $\bar{\alpha}_j=0$; trivial solutions are states $\hat\rho$ with $p_ip_{i-1}=0$ for all $i$. For example, $\hat\rho=\sum_{i=0}^L p_{2i}\ket{2i}\bra{2i}$, which can be a completely phase-damped squeezed vacuum state with $p_{2i}=\left(\cosh r\right)^{-1}\frac{\left(2i\right)!}{\left(i!\right)^2}\left(\frac{1}{2}\tanh r\right)^{2i}$. Another example is the state $\hat\rho=p\ket{0}\bra{0}+(1-p)\ket{n}\bra{n}$ ($n>2$), whose nonclassicality equals that of its pure-state counterpart $\sqrt{p}\ket{0}+\sqrt{1-p}\ket{n}$. This suggests that the nonclassicality of certain Fock-state superpositions may not be affected by a phase-damping environment \cite{PhysRevA.62.053807}. On the other hand, the nonclassicality is lower bounded via the relation \begin{align} \mathcal{N}\left(\hat\rho\right)&\ge\braket{\hat{a}^{\dagger}\hat{a}}-\max_{\{q_j,\ket{\phi_j}\}}\sum_{j=0}^K q_j|\bar{\alpha}_j|^2\end{align} when $\sum_{j=0}^K q_j\bar{\alpha}_j^2=0$. We now consider some examples that achieve this lower bound. If the state consists of two neighboring Fock states, e.g., \begin{align} \hat\rho_{2F}=(1-p)\ket{n+1}\bra{n+1}+p\ket{n}\bra{n}, \end{align} it can be decomposed via $\ket{\phi_j}=1/\sqrt{2q_j}\left(\sqrt{1-p}e^{i\varphi_j}\cos\theta_j\ket{n+1}+\sqrt{p}\sin\theta_j \ket{n}\right)$ ($j=0,1,2,3$), choosing $\varphi_0=\varphi_1=\varphi_2-\pi/2=\varphi_3-\pi/2$ and $\theta_0=\theta_1-\pi/2=\theta_2=\theta_3-\pi/2$. We find \begin{align} &\max_{\{q_j,\ket{\phi_j}\}}\sum_{j=0}^K q_j|\bar{\alpha}_j|^2\nonumber\\ =&\max_\theta \left\{\frac{(n+1)p(1-p)\sin^2\theta\cos^2\theta}{\left[(1-p)\cos^2\theta+p\sin^2\theta\right]\left[p\cos^2\theta+(1-p)\sin^2\theta\right]}\right\}\nonumber\\ =&(n+1)p(1-p). \end{align} So the nonclassicality of the state is $\mathcal{N}\left(\hat\rho_{2F}\right)=(n+1)(1-p)^2+np$. For a mixed state with $L\ge3$ components, the analytical optimization can be more difficult.
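The maximization over $\theta$ for $\hat\rho_{2F}$ can be checked by a direct grid search over the objective above; the values $n=2$, $p=0.3$ below are arbitrary example parameters.

```python
import numpy as np

def coherence_term(n, p, theta):
    """Objective maximized in the text for rho_2F = (1-p)|n+1><n+1| + p|n><n|."""
    s2, c2 = np.sin(theta)**2, np.cos(theta)**2
    return ((n + 1) * p * (1 - p) * s2 * c2) / (
        ((1 - p) * c2 + p * s2) * (p * c2 + (1 - p) * s2))

n, p = 2, 0.3
theta = np.linspace(1e-4, np.pi / 2 - 1e-4, 100001)
fmax = coherence_term(n, p, theta).max()
print(fmax, (n + 1) * p * (1 - p))       # maximum equals (n+1)p(1-p), at theta = pi/4

# Resulting nonclassicality: <a†a> - max term = (n+1)(1-p)^2 + n p
N_2F = (n + 1) * (1 - p) + n * p - fmax
print(N_2F, (n + 1) * (1 - p)**2 + n * p)
```

Writing the objective as $(n+1)p(1-p)v/[p(1-p)+(2p-1)^2v]$ with $v=\sin^2\theta\cos^2\theta$ also shows analytically that the maximum sits at $\theta=\pi/4$.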
Instead, we use the operational relation Eq. \eqref{eq:ineq1} to lower bound the nonclassicality $\mathcal{N}$ of $\hat\rho=\sum_{i=0}^L p_i\ket{i}\bra{i}$, which is given by \begin{align} \mathcal{N}\left(\hat\rho\right)\ge\mathcal{W}\left(\hat\rho\right)=\max\left\{\sum_{i=1}^{L}\frac{ip_i\left(p_i-p_{i-1}\right)}{p_i+p_{i-1}},0\right\}. \end{align} \begin{widetext} \renewcommand{\arraystretch}{1.4} \setlength{\tabcolsep}{7pt} \begin{table} \caption{Comparison of nonclassicality measures for a pure state $\ket{\psi}=\sum_{k=1}^Lc_k\ket{\alpha_k}$} \begin{ruledtabular} \begin{tabular}{c c c c c} &Nonclassical depth $\tau$ \cite{Lee:91} & Degree of Nonclassicality \cite{GehrkePRA12} & RT measure $Q$ \cite{YadinPRX18, Kwon19} & ORT measure $\mathcal{N}$ \cite{ge2019operational} \\ \hline & Convolution parameter \\[-0.75ex] Definition & for a positive-definite & $E_\text{Ncl}=1-e^{-L+1}$ & $2\left(\bar{n}-|\alpha|^2\right)$ & $\bar{n}-|\bar{\alpha}|^2+\left|\bar{\xi}-\bar{\alpha}^2\right|$ \\[-0.75ex] &quasi-prob. distribution \\[-0.25ex] Operationality & unknown& unknown & Pure states only & An arbitrary state\\[0.25ex] Maximum value & $1$ & $1$ & $2\bar{n}$ & $\bar{n}+\sqrt{\bar{n}(\bar{n}+1)}$\\[0.25ex] Most nonclassical states & Non-Gaussian states & States with $r=\infty$ & $\ket{n},\ \ket{\xi}, \ \ket{\alpha}_{\pm}$, etc. & $\ket{\xi}$\\[0ex] Distinguishability of $\ket{n}$ & No& Yes & Yes & Yes\\ Distinguishability of $\ket{\xi}$ & Yes& No & Yes & Yes\\ \begin{tabular}{c} Distinguishability \\[-0.5ex] between $\ket{n}$ and $\ket{\xi}$ \end{tabular} & Yes& Yes & No & Yes \end{tabular} \end{ruledtabular} \label{tab2} \end{table} \end{widetext} \noindent We observe from the above expression that $\mathcal{W}$ can be zero even for a nonclassical mixed state, meaning that there is no metrological advantage in sensing tasks for certain nonclassical states; such states fall into the region between the red and the green ovals in Fig. \ref{fig:state}.
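The transition of $\mathcal{W}$ to zero can be located numerically for the single-photon-added thermal state treated in the following example, whose Fock probabilities can equivalently be written as $p_i=i\,\bar{n}_{\text{th}}^{\,i-1}/(1+\bar{n}_{\text{th}})^{i+1}$ with $p_0=0$; the cutoff below is an assumed truncation, sufficient for the occupations involved.

```python
import math

def p_dist(nth, cutoff):
    """Fock probabilities of the single-photon-added thermal state,
    p_i = i * nth^(i-1) / (1+nth)^(i+1), with p_0 = 0."""
    return [0.0] + [i * nth**(i - 1) / (1 + nth)**(i + 1)
                    for i in range(1, cutoff)]

def W(nth, cutoff=300):
    """Metrological-power lower bound from the expression in the text."""
    p = p_dist(nth, cutoff)
    s = sum(i * p[i] * (p[i] - p[i - 1]) / (p[i] + p[i - 1])
            for i in range(1, cutoff) if p[i] + p[i - 1] > 0)
    return max(s, 0.0)

# Bisect the thermal occupation at which the bound drops to zero
lo, hi = 0.3, 0.6          # W > 0 at 0.3, W = 0 at 0.6
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if W(mid) > 0 else (lo, mid)
root = 0.5 * (lo + hi)
print(root)                # close to the threshold quoted in the text
```

The sign change of the sum reproduces the threshold value quoted in the following paragraph.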
An example is the single-photon-added thermal state $\hat\rho_{\text{th},1}$ with $p_i=\frac{i}{\bar{n}_{\text{th}}^2}\left(\frac{\bar{n}_{\text{th}}}{\bar{n}_{\text{th}}+1}\right)^{i+1}$, which is known to be nonclassical for any positive value of $\bar n_{\text{th}}$ \cite{PhysRevA.75.052106, PhysRevA.83.032116}. However, $\mathcal{W}=0$ for $\bar{n}_{\text{th}}>0.4567$ (see Fig. \ref{fig:Wnbar}). \section{Discussion\label{sec4}} We summarize some interesting properties of the ORT measure of nonclassicality in comparison with those of some important existing measures in Table \ref{tab2}. As an example, we discuss the nonclassicalities of Fock states and squeezed vacuum states using these measures. According to the nonclassicality depth \cite{Lee:91}, all Fock states have the same nonclassical depth of $1$, which is always greater than that of any nonclassical Gaussian state, e.g., a squeezed vacuum state, whose nonclassical depth is between $0$ and $1/2$. The nonclassicality depth thus implies that Fock states are more nonclassical than squeezed vacuum states regardless of the squeezing strength. The conclusion is exactly the opposite using the definition via the Schmidt rank \cite{GehrkePRA12}. In that definition, any squeezed vacuum state has the maximum nonclassicality independent of its squeezing strength $r$, while those of Fock states $\ket{n}$ increase with $n$ and are bounded by the maximum value. Using the RT measure $Q$ \cite{YadinPRX18, Kwon19}, the nonclassicalities of Fock states and squeezed vacuum states are equal for a fixed energy and grow with the energy. Using the ORT measure, we draw different conclusions. First, the ORT nonclassicality measure can be used to compare non-Gaussian states (Fock states) with Gaussian states (squeezed vacuum states) on an operational footing.
Second, the measure for both Fock states and squeezed vacuum states increases with energy, but at different rates, such that squeezed vacuum states are more nonclassical for a fixed energy. One implication is that the nonclassicalities of these states can be the same when a Fock state has more energy than a squeezed vacuum. \section{Conclusion\label{sec5}} We have evaluated extensively the nonclassicality of single-mode quantum states using the ORT measure, which satisfies the minimum requirements of the resource theory of nonclassicality and relates directly to the ability to perform quantum-enhanced sensing tasks. For pure states, the ORT measure quantifies the ability of quadrature sensing and phase sensing in the balanced interferometer, and we have investigated the measure for different classes of states. In particular, we have identified a class of nonclassical states that can attain the maximum sensing ability in the asymptotic limit of large energy. We have also found that single-photon addition can greatly improve the nonclassicality of a quantum state, and hence its ability for quantum sensing. For mixed states, the measure is difficult to evaluate, as one needs to find the convex-roof expression over infinitely many possible decompositions of a quantum state $\hat\rho$. Nevertheless, we have studied some nontrivial examples by analytically deriving the measure, which provides guidance for evaluating nonclassicality for mixed states. Our work takes a step further in quantitatively understanding nonclassicality, which will be important for fundamental research in quantum optics and practical applications in quantum technologies. In particular, our results on evaluating single-mode nonclassicality will have useful applications in quantum sensing tasks. \begin{acknowledgments} This research is supported by a grant from King Abdulaziz City for Science and Technology (KACST). \end{acknowledgments}
\section{I. Introduction} Three-dimensional topological insulators (3D TIs) are predicted to feature helical topological surface states (TSS) with linear dispersion and time-reversal-symmetry protection \cite{PhysRevB.76.045302,PhysRevB.75.121306,PhysRevB.79.195322,RevModPhys.82.3045}. Experimentally, the first 3D TI was realized in Bi$_{1-x}$Sb$_x$ \cite{Hsieh2008}, sparking a vast amount of research, especially on a family of mostly bismuth-based compounds. The alloy Bi$_2$Se$_3$ (BS) was quickly identified as a promising member of this family. However, while ab-initio calculations showed a prototypical TI band structure \cite{Zhang2009}, angle-resolved photoemission spectroscopy (ARPES) measurements consistently revealed the Fermi energy (E$_\mathrm{F}$) to lie in the bulk conduction band because of donor-type selenium vacancies and/or Se$_\mathrm{Bi}$ anti-sites \cite{Xia2009,doi:10.1002/adma.201200187}. Electronic transport experiments are therefore often dominated by bulk states, making the full utilization of the unique TSS characteristics challenging. The presence of parasitic bulk conduction is generally shared by all Bi-based compounds, and the strategies to counteract this issue have been manifold. Successful compensation of unintentional dopants has for example been achieved in single-crystalline Bi-Sb-Te-Se solid solutions grown by the Bridgman technique, resulting in suppressed bulk conduction and surface-dominated transport \cite{Xu2014}.\\ Next to the Bridgman method, a widely spread approach to grow crystalline 3D TI samples is molecular beam epitaxy (MBE), which provides crucial advantages for many experimental and possible technological applications. For example, MBE offers quick adjustment of alloy stoichiometries, precise control of sample thickness down to single layers, and the capability of in-situ preparation of hybrid devices with well-defined interfaces, all while possibly opening a way to wafer-size scalability.
However, sample quality has lagged significantly behind that of other preparation methods; the issue of parasitic bulk conduction due to structural disorder has not been conclusively solved, and research, especially concerning the promising quaternary alloy (Bi$_{1-x}$Sb$_x$)$_2$(Te$_{1-y}$Se$_y$)$_3$ (BSTS), has stalled.\\ In this contribution, we investigate MBE-grown BS/BSTS heterostructures within a vertical p-n-type concept. We show that BS acts as an excellent seed layer for epitaxial BSTS preparation, already reducing unintentional doping through improved crystallinity. Furthermore, we deliberately tune BSTS into a slightly p-type regime via its stoichiometry and use the intrinsically n-type BS to create a band bending within the heterostructure by compensation of opposite excess charges. In a systematic study, we investigate the transport properties of such heterostructures grown on SrTiO$_3$ (STO) and provide a recipe for highly reproducible growth of BS/BSTS with minimized bulk conduction as-grown. Depending on the respective BS and BSTS thicknesses, we observe a strong suppression of the trivial bulk conduction of the BS layer and a separation of the topological surface states. The choice of highly dielectric STO furthermore allows us to tune the electronic properties of the samples via back-gating, leaving the top surface unoccupied for potential surface experiments or interfacing in hybrid devices. \section{II. Experimental} All samples presented in this work were grown by molecular beam epitaxy. The thicknesses of the layers were determined through reflection high-energy electron diffraction (RHEED) oscillations. ARPES characterization was performed at 77$\,$K with a spot size of 150$\,\mu$m$\,\times\,$50$\,\mu$m and a photon energy of 36$\,$eV for maximum photoemission intensity of the surface states with respect to bulk states. The ARPES samples were protected from oxidation by removable selenium capping layers.
The samples for magnetotransport measurements were capped in-situ by 7$\,$nm Al$_2$O$_3$. All magnetotransport measurements were carried out at 4.2$\,$K, utilizing a standard 4-point, low-frequency lock-in technique in a Hall bar geometry with the magnetic field applied perpendicular to the film. The Hall bar has a width of $w = 20\, \mu$m and a length of $l = 300\, \mu$m. The obtained sheet resistance is defined as $R_\mathrm{S} = \frac{U_{\mathrm{xx}}}{I}\cdot \frac{w}{l}$, where $I$ is the applied current and $U_{\mathrm{xx}}$ the measured longitudinal voltage. For electrostatic back-gating, a voltage was applied between the sample and the bottom of the chip carrier, with the STO substrate acting as a dielectric barrier. For additional front-gating, the samples were covered by an insulating bilayer of 30$\,$nm SiO$_2$ and 100$\,$nm of Al$_2$O$_3$ and a gold electrode. \section{III. Results} \subsection{A. BS as seed layer} \begin{figure} \includegraphics[scale=0.67]{Fig1_V2.pdf} \caption{RHEED patterns of BSTS directly grown on STO (a), with a BST seed layer (b), and with a BS seed layer (c). d)-f) RHEED patterns of BSTS (right) with BS seed layer on different substrates (middle).} \label{Fig1} \end{figure} For Bi-based TI alloys, the optimization of growth quality is crucial, since the electronic properties are largely governed by unintentional doping caused by lattice defects. The most widely investigated MBE-grown 3D TI is Bi$_2$Se$_3$. Its tetradymite crystal structure is built up of quintuple layers (1$\,$QL $\approx$ 1$\,$nm) with weak van der Waals (vdW) interlayer bonding between QLs, enabling successful growth on a variety of substrates via vdW epitaxy; high crystallinity has been achieved by precise optimization of growth conditions \cite{Ginley_2016,doi:10.1002/pssb.202000007}. MBE of related ternary (e.g. Bi$_2$(Se$_{1-x}$Te$_x$)$_3$ or (Bi$_x$Sb$_{1-x}$)$_2$Te$_3$) and especially quaternary compounds like BSTS has been less intensively studied.
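The sheet-resistance definition quoted above can be cast as a one-line helper; the voltage and current values in the example are illustrative, not measured data.

```python
def sheet_resistance(u_xx, current, w=20e-6, length=300e-6):
    """R_S = (U_xx / I) * (w / l) for the Hall bar geometry quoted in the text
    (w = 20 um, l = 300 um)."""
    return (u_xx / current) * (w / length)

# e.g. 1 mV longitudinal voltage at 1 uA excitation:
print(sheet_resistance(1e-3, 1e-6))   # 1000 Ohm * (20/300) ≈ 66.7 Ohm per square
```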
Expanding the alloy complicates the growth procedure: it increases the amount of atomic disorder naturally occurring in these systems and reduces the number of suitable substrates, since a large lattice mismatch induces crystal defects. \\ Investigating epitaxial preparation of BSTS directly on the STO(111) substrate, we were unable to find a reliable and reproducible regime of purely single-crystalline order and routinely observed patterns with poly-crystalline features in RHEED imaging during growth, as exemplarily shown in Fig. \ref{Fig1}a) for 6$\,$QL BSTS with $(x|y)=(70|90)$. Using a BST seed layer (Fig. \ref{Fig1}b) led to improvements, but caused 3D features in the RHEED pattern. In addition, different substrates or BSTS stoichiometries compel an adaptation of growth parameters. Introducing a BS seed layer, however, facilitates the growth of high-quality BSTS films, independent of the BSTS stoichiometry and the substrate used. Surprisingly, even a single BS layer acts as a highly oriented vdW seed and is sufficient to ease the vdW epitaxy of subsequent BSTS. The protocol to grow the BS seed layers is as follows: saturating the substrate surface with Se at 190$^\circ\,$C for 150$\,$s, growing the BS layers while ramping the substrate temperature from 190$^\circ\,$C to 250$^\circ\,$C within the first 2$\,$QL, followed by annealing under constant Se flux at 290$^\circ\,$C. At the BSTS growth temperature of 255$^\circ\,$C, the RHEED pattern shows pronounced oscillations and no indication of 3D or poly-crystalline features. Next to STO(111) (see Fig. \ref{Fig1}c), this protocol has successfully been applied to a variety of substrates, also beyond the common (111) orientation \cite{doi:10.1002/pssb.202000007}, without requiring changes to the growth protocol.
Figures \ref{Fig1}d)-f) exemplarily show RHEED patterns of TI samples (right) and the respective substrates (middle), demonstrating single-crystalline growth on Al$_2$O$_3$(0001) (d), GaAs(111) (e), and even disordered C(111) (f). In addition, successful growth was achieved on Al$_2$O$_3$(11-20), GaAs(001), and InP(111).\\ Crucially for the electronic properties, we found BS to also function as an ``electrostatic seed'' layer. Selenium vacancies and Se$_\mathrm{Bi}$ anti-sites lead to a large bulk donor level in BS, as shown in Fig. 2a) \cite{doi:10.1002/adma.201200187,Chen178,RUMANN2019258}. This pins the Fermi level to the bulk conduction band. It therefore reproducibly fixes the starting point for subsequent layers to an n-type foundation, independent of the substrate used. BSTS growth directly on a substrate led to strong variations of the samples' electronic properties, even for constant stoichiometry. Since the interface potential between sample and substrate is susceptible to minor fluctuations of the growth conditions, controlled positioning of E$_\mathrm{F}$ in the band gap throughout the complete sample thickness has proven to be challenging. Hence, the epitaxial and electrostatic seed-layer functionality of BS dramatically improved the quality, controllability, and especially the reproducibility of crystallographic and electronic sample properties, as we will demonstrate in the following. \subsection{B. Heterostructure concept} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{Fig2_V6_revised.pdf} \caption{Schematic band structure and density of states (DOS) of dopant levels in BS (a) and BSTS (b). c) Heterostructure concept: depending on the respective BS and BSTS thicknesses, band bending is introduced within the bilayer, leading to different sizes of the metal-like (m-bulk) and semiconductor-like (semiC) bulk contributions in addition to the conduction of the top (t-TSS) and bottom (b-TSS) topological surface states.
f) ARPES at 77$\,$K imaging the band-bending evolution for samples of 1 QL BS and 3 (i), 6 (ii), and 12 QL (iii) BSTS. The horizontal black dashed line represents the Fermi level. The black dashed triangles are a guide to the eye for the position of the t-TSS.} \end{figure*} While the implementation of a BS seed layer proved to be highly favorable for epitaxial BSTS preparation, its prevalent bulk donor level potentially adds a large contribution to the overall bulk conductance of the bilayer. On the other hand, (Bi$_{1-x}$Sb$_x$)$_2$(Te$_{1-y}$Se$_y$)$_3$ allows an engineering of key band structure features, especially the fine-tuning of the effective donor-to-acceptor ratio via the stoichiometric parameters $x$ and $y$ \cite{PhysRevB.84.165311,Arakane2012,PhysRevB.87.085442,PhysRevB.93.245149}. Based on a test series, we chose $x = 70-74\%$ and $y = 87-91\%$, aiming to maximize the BSTS band gap while creating a slight acceptor surplus (Fig. 2b). A heterostructure of the resulting p-type BSTS and the n-type BS generates a bending of the system's electronic bands \cite{doi:10.1002/pssr.201206391,Eschbach2015,PhysRevB.96.125125}, schematically pictured in Figs. 2c)-e). For very thin BSTS, opposite excess charges begin to compensate, but the effect is too small and the Fermi level stays above the conduction band minimum (CBM) (Fig. 2c), as revealed by ARPES on a heterostructure with 1$\,$QL BS and 3$\,$QL BSTS in Fig. 2f)i), where an occupation of the bulk conduction band is observed. Increasing the BSTS thickness enhances the band bending until E$_\mathrm{F}$ is pulled below the CBM into the energy gap at the top surface (Fig. 2d ii). The color grading towards iii) in Fig. 2d) indicates the evolution of the shift with increasing BSTS thickness. This behavior is verified by ARPES in Figs. 2f)ii) and iii).
Ideally, at some point the band bending is sufficient to pull E$_\mathrm{F}$ into the band gap almost throughout the whole heterostructure by completely depleting the BS layer (Fig. 2e). It is important to stress that while our ARPES measurements follow the trend expected for this thickness-dependent band bending, they only image the energy bands at the very surface of the sample. Electrical transport properties, however, are governed by the complete band structure throughout the whole sample thickness. In the most general case, illustrated in Fig. 2d), the sample can be divided into three segments contributing to transport: a semiconductor-like channel (semiC bulk) where E$_\mathrm{F}$ lies in the band gap, a trivial, metal-like bulk channel (m-bulk) where E$_\mathrm{F}$ intersects the conduction band, and the non-trivial top and bottom TSS (t-TSS, b-TSS). \subsection{C. Magnetotransport characterization} \begin{figure}[b] \centering \includegraphics[scale=0.85]{Fig3_V6.pdf} \caption{Sheet resistance as a function of temperature, normalized to room temperature, of the 1+$x$ (a), 2+$x$ (b), and 4+$x$ (c) series. d) Conductivity at 4.2$\,$K of all three series versus total sample thickness. The inset shows the 1+$x$ series versus 1/$t_{\mathrm{tot}}$ and a linear fit (light blue dashed line). The y-intercept yields the asymptote (black dashed line) in the main figure.} \end{figure} \noindent To study the contributions of these channels to electronic transport, systematic BSTS thickness series are investigated with 1, 2, and 4 QL of BS seed layers, in the following referred to as the 1+$x$, 2+$x$, and 4+$x$ series, with the BSTS thickness $x$ ranging from 2 to 43$\,$QL. Figures 3a)-c) show the temperature dependence of the normalized sheet resistance $R_\mathrm{S}^\mathrm{norm}$($T$) = $R_\mathrm{S}$($T$)/$R_\mathrm{S}$(300$\,$K) - 1 for the three series. The different transport contributions manifest themselves in the measurements through their different temperature dependencies.
For the semiconductor-like channel, activated carriers freeze out upon reducing the temperature and $R_\mathrm{S}$ increases. The trivial metal-like bulk conduction and the TSS, on the other hand, act like metals: $R_\mathrm{S}$ decreases towards lower temperatures due to the reduction of electron-phonon scattering \cite{Gao2012}. The competition of these three transport channels as a function of BSTS thickness is observed for the 1+$x$ (Fig. 3a) and 2+$x$ (Fig. 3b) series. Similar to many observations in bulk conducting TIs, the thinnest samples show strictly metallic behavior due to E$_\mathrm{F}$ lying above the conduction band edge, corresponding to Fig. 2c). As expected from the sketch of Fig. 2d), this trivial metallic contribution gradually diminishes with growing BSTS thickness, so that the semiconductor-like contribution begins to dominate $R_\mathrm{S}$(T) at high temperatures. For the thickest samples of Figs. 3a) and b), $R_\mathrm{S}$ increases upon cooling down to about 120$\,$K, before a small metallic decrease is observed. This behavior has been reported for fully bulk-compensated TIs. There, the drop of $R_\mathrm{S}$ at low temperatures is ascribed to dominant TSS transport \cite{Xu2014,Arakane2012}. A stark contrast is presented by the $R_\mathrm{S}$(T) behavior of the 4+$x$ series in Fig. 3c). Here, the thicker BS layer leads to a trivial metal-like bulk channel large enough to dominate transport for all BSTS thicknesses in the complete temperature range.\\ These observations are confirmed by plotting the conductivity $\sigma$ at 4.2$\,$K versus the total sample thickness $t_\mathrm{tot}$ (Fig. 3d, see supplemental material for resistivity values \cite{supplemental}). All three series show a significant decrease of $\sigma$ with increasing $t_\mathrm{tot}$.
The dashed line is obtained from the y-intercept of the linear fit of the 1+$x$ series in the 1/$t_{\mathrm{tot}}$ depiction (inset) and therefore represents the bulk conductivity of BSTS in the limit of $t\rightarrow \infty$ \footnote{Within each series, the BS thickness remains constant while the BSTS thickness gradually increases. The limit of $t\rightarrow \infty$ therefore means that the BSTS thickness approaches infinity: any thickness-independent contribution to conduction like the TSS or any contribution from the BS vanishes.}. We find a comparatively low value of $\approx$2500$\,$S/m. This non-zero bulk conductivity is commonly ascribed to randomly distributed charge puddles and thermally activated carriers from acceptor and donor levels \cite{PhysRevLett.109.176801,Skinner2013,PhysRevB.87.165119,PhysRevB.93.245149,Rischau_2016}. Any offset from the dashed line is expected to mainly stem from a trivial bulk contribution caused by the BS seed layer or from TSS conduction. The 1+$x$ series (blue circles) approaches the asymptote slightly more quickly than the 2+$x$ samples (red squares), but for thicknesses above $\sim$20$\,$QL both curves begin to converge. Again, the 4+$x$ series (grey triangles) provides a contrast in showing a significantly larger conductivity for all thicknesses. These observations confirm the conclusions already drawn from the $R_\mathrm{S}$(T) measurements: Using 4$\,$QL of BS induces a large metal-like bulk channel. It dominates the $R_\mathrm{S}$(T) behavior and also substantially contributes to the overall conductivity of all samples at 4.2$\,$K. With 1$\,$QL or 2$\,$QL a qualitatively different behavior is observed: For sufficient BSTS thickness ($>20\,$QL) the BS contribution seems to become negligible, yielding almost identical conductivities very close to the value of bulk BSTS. This indicates that we have approached the ideal case of Fig. 2e).
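The extrapolation used for the dashed asymptote in Fig. 3d) is a linear fit of $\sigma$ versus $1/t_{\mathrm{tot}}$ whose y-intercept gives the $t\rightarrow\infty$ bulk conductivity. A sketch of this step in Python, on synthetic data constructed to obey $\sigma = \sigma_\mathrm{bulk} + c/t_{\mathrm{tot}}$ (the thickness values and the sheet-like coefficient $c$ are illustrative, not our measured data):

```python
import numpy as np

def bulk_conductivity_limit(t_tot, sigma):
    """Linear fit of sigma vs. 1/t_tot; the y-intercept is the
    conductivity in the limit t_tot -> infinity (inset of Fig. 3d)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(t_tot),
                                  np.asarray(sigma), 1)
    return intercept, slope

# Synthetic series: bulk value 2500 S/m plus a 1/t surface-like term.
t_tot = np.array([3.0, 6.0, 12.0, 20.0, 40.0])   # thickness (QL)
sigma = 2500.0 + 3.0e4 / t_tot                   # conductivity (S/m)
sigma_bulk, _ = bulk_conductivity_limit(t_tot, sigma)
```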
\begin{figure} \centering \includegraphics[scale=0.85]{Fig4_V3.pdf} \caption{Magnetoresistance at 4.2$\,$K for the 1+$x$ (a) and 4+$x$ (b) series. c) HLN fits (white dotted lines) to $\Delta G(B)$ for the 1+$x$ series. d) $\alpha$ values from HLN fits for all three series versus BSTS thickness at 4.2$\,$K. The insets show band structure sketches for different BSTS thicknesses corresponding to the evolution of $\alpha$.} \end{figure} To further characterize the samples, we applied a perpendicular magnetic field B. We find that the significant contribution of the m-bulk channel in the 4+$x$ series is confirmed in the Hall resistance (see supplemental material \cite{supplemental}). Figures 4a) and b) compare the absolute magnetoresistance $MR\,$(B) = $R_\mathrm{S}$(B) - $R_\mathrm{S}$(0$\,$T) at 4.2$\,$K of the 1+$x$ and 4+$x$ series. In all measurements a characteristic cusp-like, positive MR around zero magnetic field is observed, commonly ascribed to weak anti-localization \cite{10.1117/12.2063426}. Transport mediated by TSS in TIs is expected to be especially sensitive to this effect, due to their spin helicity and the arising $\pi$ Berry phase \cite{10.1117/12.2063426, PhysRevB.86.035422}. Applying a perpendicular magnetic field breaks time-reversal symmetry and therefore lifts the enhanced delocalization, causing an increase of sample resistance with magnetic field. We conclude that the TSS are the main source of the MR, reasoning by exclusion among the three relevant transport-contributing channels introduced in Figs. 2c)-e) (metallic bulk channel, semiconductor bulk channel, TSS). We first compare the 1+$x$ series (small metallic bulk contribution) with the 4+$x$ series (larger metallic bulk contribution): if the metallic bulk channel were the dominant source of the observed MR, the absolute MR values for the 4+$x$ series should be larger than for 1+$x$. This is not the case, since the MR values of the 4+$x$ series are significantly smaller than those of the 1+$x$ series.
As a consequence, we exclude the metallic bulk contribution from being dominantly responsible for the observed MR. Considering the semiconductor contribution, we have previously discussed that it will increase with increasing BSTS thickness. In our data, however, the MR of 1+40 is identical to that of 1+12 and smaller than that of 1+20. This excludes the semiconducting channel from being responsible for the observed MR behavior. We thus interpret the observed MR behavior as a manifestation of the presence of the TSS. In addition to the cusp-signature around zero field, a transition to quadratic or linear behavior at higher magnetic fields is often reported in TIs. Whereas the quadratic behavior is widely accepted to stem from 3D bulk conduction \cite{Gao2012}, the linear MR is subject to more discussion. In the measurements of Figs. 4a) and b) we never observe signatures $\sim$B$^2$, again suggesting the absence of a sizeable 3D bulk contribution in all our samples. In the 4+$x$ series (Fig. 4b) the MR approaches a linear regime above $\approx$2.5$\,$T. In contrast, for 1$\,$QL seed layer (Fig. 4a) the cusp-behavior prevails in the complete investigated field range. To explain the origin of such a linear MR (LMR), mainly two models have been proposed: The quantum model of Abrikosov yields an LMR as a consequence of linear dispersion, which could link the observation to the linearly dispersive surface states of TIs \cite{PhysRevB.58.2788}. The classical model by Parish and Littlewood, however, shows an LMR to emerge in inhomogeneous two-dimensional trivial conductors \cite{Parish2003,PhysRevB.72.094417}. Since we only observe an LMR in the 4+$x$ series, we conclude that the classical model is the more likely explanation. As evaluated above, 4$\,$QL of BS seed layer lead to a significant, but very thin, metal-like bulk conduction channel largely dominating transport.
In the framework of Parish and Littlewood, this channel could be subject to an LMR that superimposes on the cusp-like WAL behavior of the TSS. This metallic bulk contribution could furthermore serve as an explanation for the strikingly smaller overall magnetoresistances observed in the 4+$x$ series. It acts as a channel parallel to the TSS and therefore reduces the share of the total transport carried by channels subject to weak anti-localization.\\ For a more detailed analysis of the WAL signature observed in TIs, the theory of Hikami, Larkin and Nagaoka (HLN) \cite{10.1143/PTP.63.707} is commonly applied in the literature to fit the measured data with \[ \Delta G_{\mathrm{HLN}}(B) = \alpha \frac{\mathrm{e}^2}{\mathrm{\pi} \mathrm{h}} \left[ \psi \left( \frac{\mathrm{\hbar}}{4\mathrm{e}Bl_\mathrm{\phi}^2} + \frac{1}{2} \right) - \ln \left(\frac{\mathrm{\hbar}}{4\mathrm{e}Bl_\mathrm{\phi}^2}\right)\right], \] assuming $G(B) \approx 1/R_\mathrm{S} (B)$ and $\Delta G_{\mathrm{HLN}}(B) \approx$ $\Delta G(B) \equiv$ $G(B)$ - $G$(0). Since $\Delta G(B)$ is directly obtained from the magnetoresistance measurements presented in Fig. 4a), it is important to note that it is, in general, not free from bulk contributions that are not subject to weak anti-localization. The origins and signatures of such additional contributions to the MR, as well as their influence on the accuracy of the HLN fits, warrant a more thorough investigation. In the above equation, e is the elementary charge, h Planck's constant and $\psi$ the digamma function. The free fit parameters are the phase coherence length $l_\phi$ and the dimensionless pre-factor $\alpha$. The symplectic case of the HLN theory, distinguished by strong spin-orbit coupling and the absence of magnetic scattering, is usually associated with topological insulators \cite{PhysRevB.86.035422}. It is expected to yield a value of $\alpha = -0.5$ per independent parallel channel contributing to conduction.
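As an illustration of this fitting procedure (a sketch, not the analysis code used for Fig. 4; the data below are synthetic and noise-free), the HLN formula can be fitted with scipy. The coherence length is parametrized in nanometers so that both fit parameters stay at comparable magnitudes:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import digamma

HBAR = 1.054571817e-34  # reduced Planck constant (J s)
E = 1.602176634e-19     # elementary charge (C)
H = 6.62607015e-34      # Planck constant (J s)

def delta_G_hln(B, alpha, l_phi_nm):
    """HLN magnetoconductance correction Delta G(B) in siemens.

    B: perpendicular field in tesla (B > 0);
    l_phi_nm: phase coherence length in nanometers.
    """
    l_phi = l_phi_nm * 1e-9
    x = HBAR / (4.0 * E * B * l_phi**2)
    return alpha * E**2 / (np.pi * H) * (digamma(0.5 + x) - np.log(x))

# Synthetic single-channel WAL data: alpha = -0.5, l_phi = 100 nm.
B = np.linspace(0.05, 1.0, 40)
dG = delta_G_hln(B, -0.5, 100.0)
(alpha_fit, l_phi_fit), _ = curve_fit(delta_G_hln, B, dG, p0=(-1.0, 50.0))
```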
Figure 4c) shows, as an example, the HLN fits (white dashed lines) to $\Delta G$ in the 1+$x$ series, demonstrating a very good agreement between the theory and the measured data within $\pm$1$\,$T. The $\alpha$-values for all three series obtained from this fit interval are plotted as a function of the respective BSTS thickness $t_{\mathrm{BSTS}}$ in Fig. 4d) and a striking resemblance, independent of the seed layer, is observed. For small $t_{\mathrm{BSTS}}$, $\alpha$ starts around a value of -0.5, before an increase sets in, approaching -1 above 20$\,$QL. For the smallest BSTS thickness we expect the Fermi level to lie above the conduction band edge throughout the complete heterostructure (see upper inset). Hence, the whole sample effectively acts as one conducting channel and $\alpha = -0.5$ is expected. It has furthermore been suggested that below a thickness of approximately 10$\,$QL, $\alpha = -0.5$ would even be expected for separated channels due to coupling of top and bottom TSS mediated by tunneling or hopping \cite{PhysRevLett.113.026801,Wang2016}. This could explain why the simultaneous increase in all three series starts around this threshold. The approach of $\alpha = -1$ above 20$\,$QL then suggests a true separation of two independent conduction channels. Contrary to a common interpretation, our data show that $\alpha = -1$ does not necessarily imply a completely insulating bulk in which the TSS at the top and bottom surfaces each contribute -0.5 to $\alpha$. We have shown that the 4+$x$ series clearly exhibits significant bulk conduction for all BSTS thicknesses. However, our analysis indicates that with increasing $t_{\mathrm{BSTS}}$ the upper TSS still decouples from this bulk channel regardless of seed layer thickness. The lower inset of Fig. 4d) illustrates this more general case with the bottom TSS in direct contact with bulk states and a separated top TSS. In Ref.
\cite{PhysRevB.86.035422} it has been theoretically predicted that this configuration can also yield $\alpha = -1$.\\ \begin{figure} \centering \includegraphics[scale=0.85]{Fig5_V3.pdf} \caption{Normalized sheet resistance against back-gate voltage at 4.2$\,$K for the 4+$x$ series (a) with a zoom-in for the thickest samples (b) and the 1+$x$ series (c). d) Dual-gated measurement of sheet resistance for sample 1+40. The white dashed line is a guide to the eye along the maximum of $R_\mathrm{S}^\mathrm{norm}$.} \end{figure} In addition to the optimization of as-grown electronic properties, the heterostructure approach using a BS seed layer enables direct epitaxial growth on SrTiO$_3$ (STO). Due to its ultrahigh relative permittivity at cryogenic temperatures \cite{PhysRev.174.613,PhysRevB.19.3593}, STO can function as a back-gate (BG) dielectric, allowing easy implementation of electrostatic gating to further investigate and modify the transport behavior by means of a field-effect-induced shift of the chemical potential. Figures 5a)-c) compare the relative change of $R_\mathrm{S}$ versus back-gate voltage V$_\mathrm{BG}$, defined as $R_\mathrm{S}^\mathrm{norm}$(V$_\mathrm{BG}$) = $R_\mathrm{S}$(V$_\mathrm{BG}$)/$R_\mathrm{S}$(0$\,$V) - 1, of the 4+$x$ (Figs. 5a and b) and 1+$x$ (Fig. 5c) samples at 4.2$\,$K. Applying a negative voltage causes an upward band bending. Samples 4+2, 4+6, and 4+12 (Fig. 5a) therefore show a steep increase of $R_\mathrm{S}^\mathrm{norm}$ for $V_\mathrm{BG}<0$, since the predominant n-type metallic bulk conduction in these samples is reduced by the induced field effect. $R_\mathrm{S}^\mathrm{norm}$(V$_\mathrm{BG}$) reaches values of 500\%, 200\%, and 100\%, respectively, but no maximum is observed as would be expected when tuning the chemical potential through the charge neutrality point (CNP).
We ascribe this to the effective screening of the BG-induced electric field by the large metal-like conduction channel at the STO/TI interface before the CNP is reached. Intriguingly, starting from a BSTS thickness of 20$\,$QL the behavior changes and a maximum of $R_\mathrm{S}^\mathrm{norm}$ with respect to the voltage is observed that shifts towards 0$\,$V with increasing BSTS thickness (Fig. 5b). We interpret this observation as direct evidence for the compensation of opposite excess charges within the p-n-heterostructure: for thin BSTS, the metal-like bulk conduction channel at the STO/TI interface induced by the 4$\,$QL of strongly n-type BS remains large enough to screen the static electric field. Increasing the BSTS thickness, however, can sufficiently deplete the BS layer and BG-tunability is achieved. Figure 5c) shows $R_\mathrm{S}^\mathrm{norm}$($V_\mathrm{BG}$) for the 1+$x$ series. From the measurements of Fig. 3, we conclude that sample 1+12 still shows a small remaining metal-like bulk channel as-grown that practically vanishes for 20$\,$QL and 40$\,$QL of BSTS. These observations directly manifest in the BG behavior of the respective sample: Applying a positive voltage yields a downward bending of the energy bands that hence increases the remaining n-type bulk channel in sample 1+12. For +30$\,$V$\geq$ $V_\mathrm{BG}$ $\gtrsim$ +5$\,$V this channel is then large enough to screen the field effect, equivalent to the observations in thin 4+$x$ samples. Thus, $R_\mathrm{S}^\mathrm{norm}$ remains constant. Below 5$\,$V the BG tunability is regained and $R_\mathrm{S}^\mathrm{norm}$ increases up to a pronounced maximum at -10.5$\,$V. In stark contrast, samples 1+20 and 1+40 only show a small increase with a maximum at $V_\mathrm{BG}\approx-2\,$V. This clearly indicates the as-grown depletion of the metal-like bulk conduction channel induced by the BS seed layer. 
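The position of the maximum of $R_\mathrm{S}^\mathrm{norm}$ in a gate sweep is what we read off as the charge neutrality point; a minimal sketch (toy Lorentzian gate response with a peak position chosen to mimic sample 1+12, not actual measured data):

```python
import numpy as np

def cnp_voltage(V_bg, R_s):
    """Back-gate voltage at which R_S peaks: the charge neutrality
    point, provided the field effect is not screened (Figs. 5b, c)."""
    return V_bg[np.argmax(R_s)]

# Toy gate sweep with a resistance maximum near -10.5 V.
V_bg = np.linspace(-30.0, 30.0, 241)            # gate voltage (V)
R_s = 1.0 / (1.0 + ((V_bg + 10.5) / 8.0) ** 2)  # arbitrary units
V_cnp = cnp_voltage(V_bg, R_s)
```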
As observed for the other samples, any remaining contribution of this channel to electrical transport would be largely tunable by the field effect and would therefore cause a significant increase of $R_\mathrm{S}$ for negative voltages in comparison to the unbiased measurement.\\ To gain deeper insight into the capability of BG-induced $R_\mathrm{S}$ tuning, Figure 5d) shows a dual-gate \footnote{To account for hysteresis effects, the front-gate was swept at 4.2$\,$K from +30$\,$V to -30$\,$V twice before the measurement was started. Then, the back-gate was set in steps of 1.5$\,$V in the positive direction. After 10 minutes, to prevent time-dependent influences, the front-gate was swept from +30$\,$V to -30$\,$V at 60$\,$mV/s. After the measurement, the sample was brought to room temperature to reset both gates and the procedure was repeated with the back-gate being set in the negative direction. Due to this required reset of the gates, a small discrepancy between the measurements for positive and negative back-gate direction is unavoidable. To account for this, the $R_\mathrm{S}$ values for positive back-gate voltage were shifted by 3.3$\,$V in front-gate voltage in Fig. 5d).} measurement of sample 1+40. The white dashed line follows the maximum of $R_\mathrm{S}$ with respect to both gate voltages. For +30$\,$V$\geq$ $V_\mathrm{BG}$ $\gtrsim$ +10$\,$V the value of the maximum of $R_\mathrm{S}$ stays almost constant and its position remains at $V_\mathrm{FG}\approx$16.5$\,$V. In this regime, the large positive BG again leads to a large metal-like conduction channel screening the BG field effect. Between a BG voltage of 10$\,$V and 0$\,$V the global maximum of $R_\mathrm{S}$ is reached. Furthermore, in the same $V_\mathrm{BG}$ range, the location of the maximum in terms of FG voltage starts to shift to higher values.
This shift shows a coupling between BG and FG field effects, demonstrating the BG's capability to tune the sample's electronic properties throughout its complete thickness even for 40$\,$QL of BSTS. \section{IV. Conclusion} In this study, we have presented an approach to band structure engineering of 3DTI thin films by means of epitaxial MBE growth. Introducing a BS seed layer as thin as a single QL led to a significant improvement of the BSTS growth quality, drastically reducing structural disorder and thereby unintentional bulk doping. By varying the respective thicknesses of the n-type BS and p-type BSTS, we were able to substantially tune the as-grown electronic properties of the heterostructures and to disentangle the contributions of the different channels to electronic transport by means of temperature-, magnetic-field-, and gate-dependent measurements. We have shown that the p-n-type architecture of our samples leads to a compensation of opposite excess charges, culminating in a complete depletion of metal-like bulk conduction for a 1$\,$QL BS seed layer and a BSTS thickness above 20$\,$QL. By applying the theoretical framework of Hikami, Larkin and Nagaoka \cite{10.1143/PTP.63.707}, we observed a gradual formation of two separated conduction channels with increasing BSTS thickness, independent of the seed layer thickness, revealing a decoupling of at least the top TSS from bulk states. The chosen STO substrate allowed the application of back-gating, which was shown to be capable of modifying the samples' electronic properties throughout the whole thickness. This tuning capability, without occupying the top surface, in combination with the decoupling of the top TSS, is particularly attractive for surface experiments or the implementation in hybrid devices. \section{Acknowledgments} \begin{acknowledgments} We acknowledge the financial support of the Deutsche Forschungsgemeinschaft through project ID~422~314695032-SFB1277 (subproject A01).
We thank Magdalena Marganska, Klaus Richter, Cosimo Gorini, and Michael Barth for fruitful discussions. \end{acknowledgments}
\section{Introduction}\label{section:Intro} Compactification of ten-dimensional supergravities on generalized manifolds with $G$-structure has been studied for some time.\footnote{For reviews on this subject see, for example, \cite{Grana:2005jc, Wecht:2007wu,Samtleben:2008pe} and references therein.} These manifolds are characterized by a reduced structure group $G$ which, when appropriately chosen, preserves part of the original ten-dimensional supersymmetry~\cite{Gauntlett:2002sc,Gauntlett:2003cy}. Furthermore, they generically have a non-trivial torsion which physically corresponds to gauge charges or mass parameters for some anti-symmetric tensor gauge potentials. Therefore, the low-energy effective action is a gauged or massive supergravity with a scalar potential which (partially) lifts the vacuum degeneracy present in conventional Calabi-Yau compactifications. The critical points of this scalar potential can further spontaneously break (some of) the left-over supercharges. As a consequence of this, such backgrounds are of interest both from a particle physics and a cosmological perspective. Most studies so far concentrated on six-dimensional manifolds with $\su{3}$ or more generally $\su{3}\times\su{3}$ structure. Compactifying the ten-dimensional heterotic/type~I supergravity on such manifolds leads to an $\mathcal{N}=1$ effective theory in four dimensions~\cite{Cardoso:2002hd,Becker:2003yv,Micu:2004tz,Gurrieri:2004dt,Benmachiche:2008ma}, while compactifying type II supergravity results in an $\mathcal{N}=2$ theory~\cite{Gurrieri:2002wz,Grana:2005ny,Grana:2006hr,Cassani:2007pq,Cassani:2008rb,Grana:2009im}. By employing an appropriate orientifold projection \cite{Benmachiche:2006df,Koerber:2007xk} or by means of spontaneous supersymmetry breaking \cite{Louis:2009xd,Louis:2010ui}, this $\mathcal{N}=2$ can be further broken to $\mathcal{N}=1$ (or $\mathcal{N}=0$). 
A similar study for six-dimensional manifolds with $\su{2}$ or $\su{2}\times \su{2}$ structure which generalize Calabi-Yau compactifications on $K3\times T^2$ has not been completed yet. In Refs.~\cite{Gauntlett:2003cy,Bovy:2005qq,Triendl:2009ap}, geometrical properties of such manifolds were studied and the scalar field space was determined. Furthermore, it was shown in Ref.~\cite{Triendl:2009ap} that manifolds with $\su{2}\times \su{2}$ structure cannot exist and therefore we only discuss the case of a single $\su{2}$ in this paper. In Ref.~\cite{Louis:2009dq}, the heterotic string was then compactified on manifolds with $\su{2}$ structure and the $\mathcal{N}=2$ low-energy effective action was derived. In \cite{Danckaert:2009hr}, type IIA compactifications on $\su{2}$ orientifolds were studied and again the corresponding $\mathcal{N}=2$ effective action was determined. Finally in Refs.~\cite{ReidEdwards:2008rd,Spanjaard:2008zz}, preliminary studies of the $\mathcal{N}=4$ effective action for type IIA compactification on manifolds with $\su{2}$ structure were conducted.\footnote{The effective action for IIA compactified on $K3\times T^2$ has been given in \cite{Duff:1995wd,Duff:1995sm}. $\mathcal{N}=4$ flux compactifications have been discussed for example in \cite{Aldazabal:2008zza,Dall'Agata:2009gv,Dibitetto:2010rg}.} The purpose of this paper is to continue these studies and in particular determine the bosonic $\mathcal{N}=4$ effective action of the corresponding gauged supergravity. One of the technical difficulties arises from the fact that frequently in these compactifications magnetically charged multiplets and/or massive tensors appear in the low-energy spectrum. Fortunately, the most general $\mathcal{N}=4$ supergravity covering such cases has been determined in Ref.~\cite{Schon:2006kz} using the embedding tensor formalism of Ref.~\cite{deWit:2005ub}. 
We therefore rewrite the action obtained from a Kaluza-Klein (KK) reduction in a form which is consistent with the results of \cite{Schon:2006kz}. As we will see, this amounts to a number of field redefinitions and duality transformations in order to choose an appropriate symplectic frame. The organization of this paper is as follows: In Section \ref{sec:SU2} we briefly review the relevant geometrical aspects of $\su{2}$--structure manifolds and set the stage for carrying out the compactification. Section~\ref{subsec:NS} deals with the reduction of the NS-sector, which in fact coincides with the heterotic analysis carried out in \cite{Louis:2009dq} and therefore we basically recall their results. In Section~\ref{subsec:RR} we compactify the RR-sector and give the effective action in the KK-basis. In Section \ref{sec:ConsistencyN=4} we perform the appropriate field redefinitions and duality transformations in order to compare the action with the results of Ref.~\cite{Schon:2006kz}. This allows us to determine the components of the embedding tensor parametrizing the $\mathcal{N}=4$ gauged supergravity action in terms of the intrinsic torsion. From the embedding tensor we then can easily compute the gauge group in Section~\ref{Killing}. Section~\ref{section:Conclude} contains our conclusions and some of the technical material is supplied in the Appendices \ref{sec:dualizations-appendix} and \ref{sec:so6-n-coset}. \section{SU(2) structures in six-manifolds} \label{sec:SU2} \subsection{General setting} \label{sec:setting} In this paper, we study type IIA space-time backgrounds of the form \begin{equation}\label{background} M_{1,3} \times \M \ , \end{equation} where $M_{1,3}$ denotes a four-dimensional Minkowski space-time and $\M$ a six-dimensional compact manifold.\footnote{Note that we do not consider warped compactifications in this work. 
For discussions of a non-trivial warp factor, see for instance~\cite{Koerber:2007xk,Martucci:2009sf}.} Furthermore, we focus on manifolds which preserve sixteen supercharges or in other words~$\mathcal{N}=4$ supersymmetry in four space-time dimensions. This implies that $\M$ admits two globally-defined nowhere-vanishing spinors $\eta^i$, $i=1,2$, that are linearly independent at each point of~$\M$. The necessity for this requirement can be most easily seen by considering the two ten-dimensional supersymmetry generators $\epsilon^1,\epsilon^2$, which are Majorana-Weyl and thus reside in the representation~$\mathbf{16}$ of the Lorentz group~$\so{1,9}$. For backgrounds of the form~\eqref{background}, the Lorentz group is reduced to~$\so{1,3} \times \so6$ and the spinor representation decomposes as \begin{equation} \mathbf{16} \to (\mathbf{2},\mathbf{4}) \oplus (\mathbf{\bar2},\mathbf{\bar4}) \ , \end{equation} where $\mathbf{2}$ and $\mathbf{4}$ denote respectively four- and six-dimensional Weyl-spinor representations, while ${\mathbf{\bar2}}$ and ${\mathbf{\bar4}}$ are the corresponding conjugates. In terms of spinors we thus have \begin{equation} \begin{aligned} \epsilon^1 & = \sum_{i=1}^2 (\xi^{1}_{i+} \otimes \eta^i_+ + \xi^{1}_{i-} \otimes \eta^i_-) \ , \\ \epsilon^2 & = \sum_{i=1}^2 (\xi^{2}_{i+} \otimes \eta^i_- + \xi^{2}_{i-} \otimes \eta^i_+) \ , \end{aligned} \end{equation} where the $\xi^{1,2}_i$ are the four $\mathcal{N}=4$ supersymmetry generators of $M_{1,3}$ and the subscript $\pm$ indicates both the four- and six-dimensional chiralities. The existence of two nowhere-vanishing spinors $\eta^i$ forces the structure group of $\M$ to be $\su2$. This can be seen as follows. Recall that the spinor representation for a generic six-dimensional manifold is the fundamental representation $\mathbf{4}$ of $\su4 \simeq \so6$. 
The existence of two singlets implies the decomposition \begin{equation} \mathbf{4} \to \mathbf{2} \oplus \mathbf{1} \oplus \mathbf{1} \ , \end{equation} which in turn leads to the fact that the structure group of the manifold is reduced to the subgroup acting on this $\mathbf{2}$, namely~$\su2$. \subsection{Algebraic structure} \label{sec:algebraic} Let us now briefly review the algebraic properties of $\su2$-structure manifolds. For a more detailed discussion, see~\cite{Triendl:2009ap}. Instead of using the spinors $\eta^i$, we can parametrize the~$\su2$~structure on a six-dimensional manifold by means of a complex one-form~$K$, a real two-form~$J$ and a complex two-form~$\Omega$ \cite{Gauntlett:2003cy,Bovy:2005qq}. The two-forms satisfy the relations \begin{equation}\label{relations_two_forms} \Omega \wedge \bar{\Omega} = 2 J \wedge J \ne 0 \ , \qquad \Omega \wedge J = 0 \ , \qquad \Omega \wedge \Omega = 0 \ , \end{equation} while the one-form is such that \begin{equation}\label{K_compatible} K \cdot K =0 \ , \qquad \bar{K} \cdot K = 2 \ , \qquad \iota_K J =0 \ , \qquad \iota_K \Omega = \iota_{\bar{K}} \Omega =0 \ . \end{equation} These forms can be expressed in terms of the spinors as follows, \begin{align} K_m & = \bar{\eta}^c_{2} \gamma_m \eta_{1} \ , \label{definition_one-form_K} \\ J_{mn} & = \tfrac{1}{2} \iu \left(\bar{\eta}_{1} \gamma_{mn} \eta_{1} + \bar{\eta}_{2} \gamma_{mn} \eta_{2}\right) \ ,\qquad \Omega_{mn} = \bar{\eta}_{2} \gamma_{mn} \eta_{1} \ , \label{definition_two-forms} \end{align} where $\gamma_m$, $m=1,\ldots,6$, are $\so6$ gamma-matrices and $\gamma_{mn} = \frac12(\gamma_m \gamma_n - \gamma_n \gamma_m)$. By using Fierz identities and assuming that each $\eta^i$ satisfies $\bar\eta^i \eta^i=1$, it can be checked that these definitions for $K$, $J$ and $\Omega$ indeed fulfill the relations \eqref{relations_two_forms} and \eqref{K_compatible}. 
The existence of the one-form~$K$ allows one to define an almost product structure~${P_m}^n$ on the manifold through the expression \begin{equation} \label{almost_product_structure} {P_m}^n = K_m \bar{K}^n + \bar{K}_m K^n - \delta_m^{\phantom{m}n} \ . \end{equation} Using \eqref{K_compatible}, it is easy to check that~${P_m}^n$ does square to the identity, that is \begin{equation}\label{P=1} P_m^{\phantom{m}n} P_n^{\phantom{n}p} = \delta_m^{\phantom{m}p} \ . \end{equation} From the definition~\eqref{almost_product_structure} and the first two relations in~\eqref{K_compatible}, it can be seen that $K_m$ and $\bar{K}_m$ are eigenvectors of $P_m^{\phantom{m}n}$ with eigenvalue $+1$. Also, all vectors simultaneously orthogonal to $K_m$ and $\bar{K}_m$ have eigenvalue $-1$. Thus $K_m$ and $\bar{K}_m$ span the $+1$ eigenspace and as a consequence the tangent space of $\M$ splits as \begin{equation}\label{tangent_space_splitting} T \M = T_2 \M \oplus T_4 \M \ , \end{equation} where $T_2 \M$ has a trivial structure group and is spanned by $\Re\,K^m$ and $\Im\,K^m$. We can then choose a basis of one-forms $v^i$, $i=1,2$ on $T_2 \M$ normalized as \begin{equation} \label{one-forms_relation} v^i \wedge v^j = \epsilon^{ij}\, \vol_2 \ , \end{equation} where $\vol_2$ is the volume form on $T_2 \M$. From the last constraints in~\eqref{K_compatible}, it follows that the two-forms~$J$ and~$\Omega$ have `legs' only along~$T_4 \M$. The three real two-forms $J^1=\Re\,\Omega$, $J^2=\Im\,\Omega$ and $J^3=J$ form a triplet of symplectic two-forms on~$T_4 \M$ and from~\eqref{relations_two_forms} we infer that \begin{equation}\label{j_wedge_j} J^\alpha \wedge J^\beta = 2 \delta^{\alpha\beta} \vol_4 \ ,\qquad \alpha,\beta =1,2,3\ , \end{equation} where $\vol_4$ denotes the volume form on~$T_4 \M$. Eq.~\eqref{j_wedge_j} states that the~$J^\alpha$ span a space-like three-plane in the space of two-forms on~$T_4 \M$. The triplet~$J^\alpha$ therefore defines an~$\su2$ structure on~$T_4 \M$. 
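For completeness, the property~\eqref{P=1} follows directly from the algebraic relations~\eqref{K_compatible} (note that $K \cdot K = 0$ also implies $\bar{K} \cdot \bar{K} = 0$):
\begin{equation*}
\begin{aligned}
P_m^{\phantom{m}n} P_n^{\phantom{n}p} &= \left(K_m \bar{K}^n + \bar{K}_m K^n - \delta_m^{\phantom{m}n}\right) \left(K_n \bar{K}^p + \bar{K}_n K^p - \delta_n^{\phantom{n}p}\right) \\
&= (\bar{K} \cdot K)\, K_m \bar{K}^p + (\bar{K} \cdot K)\, \bar{K}_m K^p - 2 K_m \bar{K}^p - 2 \bar{K}_m K^p + \delta_m^{\phantom{m}p} = \delta_m^{\phantom{m}p} \ ,
\end{aligned}
\end{equation*}
where the cross terms proportional to $K \cdot K$ and $\bar{K} \cdot \bar{K}$ have been dropped.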
Finally, note that any pair of spinors $\tilde \eta^i$ which is related to~$\eta^i$ by an~$\su2 \simeq \so3$ transformation defines the same $\su2$ structure \cite{ReidEdwards:2008rd}. The one-form~$K$ is invariant under this rotation but the two-forms~$J^\alpha$ transform as a triplet.\footnote{Note also that the phase of $K$ corresponds to the overall phase of the pair $\eta^i$.} Thus there is an~$\su2$ freedom in the parametrization of the~$\su2$ structure. This $\su2$ is a subgroup of the R-symmetry group $\su4$ of $\mathcal{N}=4$ supergravity. The case when all forms $K$, $J$ and $\Omega$ (or equivalently $v^i$ and $J^\alpha$) are closed corresponds to a manifold $\M$ having $\su2$ holonomy. This can be seen from Eq.~\eqref{definition_one-form_K} and~\eqref{definition_two-forms}, since these forms being closed translates into the spinors $\eta^i$ being covariantly constant with respect to the Levi-Civita connection. The only such manifold in six dimensions is the product manifold $K3 \times T^2$, that is the product of a $K3$ manifold with a two-torus. In that case, the almost product structure $P$ is trivially realized by the Cartesian product. \subsection{Kaluza-Klein data} \label{sec:KKdata} So far, we analyzed the parametrization of an~$\su2$ structure over a single point of~$\M$. This gives all deformations of the~$\su2$ structure. But in order to find the low-energy effective action we have to perform a Kaluza-Klein truncation of the spectrum and thereby eliminate all modes with a mass above the compactification scale. This we do in two steps. First, we have to ensure that there are no massive gravitino multiplets in the $\mathcal{N}=4$ theory. It can be shown that these additional gravitino multiplets are~$\su2$ doublets which must therefore be projected out \cite{Grana:2005ny,Triendl:2009ap}. This also automatically removes all one- and three-forms in the space of forms acting on tangent vectors in~$T_4 \M$. 
Furthermore, the splitting~\eqref{tangent_space_splitting} becomes rigid, since a variation of this splitting is parametrized by a two-form with one leg on~$T_2 \M$ and the other on~$T_4 \M$ over each point of~$\M$, but one-forms acting on~$T_4 \M$ are projected out. In the following, we will make the additional assumption that the almost product structure~\eqref{almost_product_structure} is integrable. This means that every neighborhood $\U$ of $\M$ can be written as a product $\U_2 \times \U_4$ such that $T_2 \M$ and $T_4 \M$ are tangent to $\U_2$ and $\U_4$, respectively. In other words, local coordinates $z^i,i=1,2$ and $y^a, a=1,\dotsc,4$ can be introduced on $\M$ such that $T_2 \M$ is generated by $\partial/\partial z^i$ and $T_4 \M$ by $\partial/\partial y^a$. The metric on~$\M$ can therefore be written in block-diagonal form as \begin{equation}\label{metric} \diff s^2 = g_{ij}(z,y)\, \diff z^i \diff z^j + g_{ab}(z,y)\, \diff y^a \diff y^b\ . \end{equation} In a second step, we truncate the infinite set of differential forms on $\M$ to a finite-dimensional subset. This chooses the light modes out of an infinite tower of (heavy) KK-states. This has to be done in a consistent way, \emph{i.e.}~such that only (but also all) scalars with masses below a chosen scale are kept in the low-energy spectrum. Let us denote by $\Lambda^2 T_4 \M$ the space of two-forms on $\M$ that vanish identically when acting on tangent vectors in~$T_2 \M$. The Kaluza-Klein truncation means that we only need to consider an $n$-dimensional subspace~$\Lambda_\mathrm{KK}^2 T_4 \M$ having signature~$(3,n-3)$ with respect to the wedge product. The two-forms $J^\alpha$ span a space-like three-plane in~$\Lambda_\mathrm{KK}^2 T_4 \M$ and therefore parametrize the space \cite{Triendl:2009ap} \begin{equation}\label{moduli_space} \mathcal{M}_{J^\alpha} = \frac{\so{3,n-3}}{\so3\times\so{n-3}} \end{equation} with dimension $3n-9$.
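The dimension quoted for~\eqref{moduli_space} follows from the standard coset count:

```latex
\dim \mathcal{M}_{J^\alpha}
  = \dim \so{3,n-3} - \dim \so3 - \dim \so{n-3}
  = \tfrac{n(n-1)}{2} - 3 - \tfrac{(n-3)(n-4)}{2}
  = 3n-9 \ .
```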
Together with the volume~$\vol_4 \sim \e^{-\rho}$ this gives~$3n-8$ geometric scalar fields on~$T_4 \M$. Let us choose a basis $\omega^I$, $I=1,\dots,n$ on $\Lambda_\mathrm{KK}^2 T_4 \M$ such that \begin{equation}\label{omega_wedge_omega} \omega^I \wedge \omega^J = \eta^{IJ} \e^{\rho} \vol_4 \ , \end{equation} with $\eta^{IJ}$ being the (symmetric) intersection matrix with signature~$(3,n-3)$. The factor~$\e^\rho$ was introduced in order to keep $\omega^I$ and~$\eta^{IJ}$ independent of the volume modulus. The remaining geometric scalars are parametrized by $K$. The latter is a complex one-form acting on~$T_2 \M$ which can be expanded in terms of the~$v^i$ fulfilling eq.~\eqref{one-forms_relation}. The overall real factor of~$K$ is proportional to the square root of~$\vol_2$, while the overall phase of~$K$ is not physical.\footnote{The overall phase of $K$ corresponds to the overall phase of the spinor pair $\eta^i$, which is of no physical relevance.} The other two degrees of freedom in $K$ parametrize the complex structure on~$T_2 \M$. This gives altogether three geometric scalars on~$T_2 \M$. On a generic manifold with $\su2$ structure, the one- and two-forms are not necessarily closed. On the truncated subspace we just introduced, one can generically have~\cite{Spanjaard:2008zz,ReidEdwards:2008rd} \begin{equation}\label{differential_algebra} \begin{aligned} \diff v^i &= \tor^i v^1\wedge v^2+ \torr^i_I \omega^I\ , \\ \diff \omega^I &= {\tilde\Tor}_{iJ}^I v^i \wedge \omega^J\ , \end{aligned} \end{equation} where the parameters $\tor^i$, $\torr^i_I$ and ${\tilde\Tor}_{iJ}^I$ are constant. Indeed, eqs.~\eqref{differential_algebra} state that $J^\alpha$ and $K$ are in general not closed, their differential being related to the torsion classes of the manifold~\cite{Gauntlett:2003cy}. The parameters in the r.h.s.~of~\eqref{differential_algebra} play the role of gauge charges in the low-energy effective supergravity, as we will see in section~\ref{subsec:NS}.
One can show that demanding integrability of the almost product structure~\eqref{almost_product_structure} forces~$\torr^i_I$ to vanish~\cite{Louis:2009dq}. The reason is that in such a case it is impossible to generate a form in $\Lambda^2 T_4 \M$ like $\omega^I$ by differentiating a one-form $v^i$ that acts non-trivially only on vectors in $T_2 \M$. We will therefore restrict the discussion in the following to this case and set $\torr^i_I=0$. On the other hand, the parameters~$\tor^i$ and~$\tilde\Tor^I_{iJ}$ are not completely arbitrary but constrained by Stokes' theorem and nilpotency of the $\diff$-operator. Acting with $\diff$ on eqs.~\eqref{differential_algebra} and using $\diff^2=0$ leads to \begin{equation}\label{constraint_quadratic} \tor^i {\tilde\Tor}^I_{iJ} - \epsilon^{ij} {\tilde\Tor}^I_{iK} {\tilde\Tor}^K_{jJ} = 0 \ , \end{equation} where we choose $\epsilon^{12}=1$. On the other hand, Stokes' theorem implies the vanishing of $\int_\M \diff (v^i \wedge\omega^I\wedge\omega^J)$ for any compact $\M$, which yields \begin{equation}\label{constraint_linear} \tor^i \eta^{IJ} - \epsilon^{ij} {\tilde\Tor}^I_{jK} \eta^{KJ} - \epsilon^{ij} {\tilde\Tor}^J_{jK} \eta^{KI} = 0 \ . \end{equation} This in turn implies that ${\tilde\Tor}^I_{iJ}$ can be written as \begin{equation}\label{expression_torsion_matrix} {\tilde\Tor}^I_{iJ} = \tfrac12\, \epsilon_{ij} \tor^j \delta^I_J + \Tor^I_{iJ} \ , \end{equation} with $\epsilon_{12} = -1$ and $\Tor^I_{iJ}$ satisfying \begin{equation}\label{so3n} \Tor^I_{iK} \eta^{KJ} = - \Tor^J_{iK} \eta^{KI}\ . \end{equation} It will be useful to define two $n \times n$ matrices $\Tor_i = (\Tor_{i})_J^I$, which due to \eqref{so3n} are in the algebra of $\so{3,n-3}$.
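These constraints can be sanity-checked numerically. The following sketch verifies that the decomposition~\eqref{expression_torsion_matrix} solves the Stokes constraint~\eqref{constraint_linear} identically, and that the $\diff^2=0$ constraint~\eqref{constraint_quadratic} collapses to a commutator condition on the traceless matrices; the choices $n=4$, $\eta=\mathrm{diag}(1,1,1,-1)$ and the random matrices are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Numerical check of the torsion constraints. Illustrative assumptions:
# n = 4, eta = diag(1,1,1,-1), conventions eps^{12} = +1 and eps_{12} = -1.
rng = np.random.default_rng(0)
n = 4
eta = np.diag([1.0, 1.0, 1.0, -1.0])     # intersection matrix eta^{IJ}
t = rng.normal(size=2)                   # the parameters t^i

# Matrices T_i in so(3, n-3): (T_i)^I_K eta^{KJ} is antisymmetric, eq. (so3n)
T = []
for _ in range(2):
    a = rng.normal(size=(n, n))
    T.append((a - a.T) @ np.linalg.inv(eta))

# Decomposition (expression_torsion_matrix): tildeT_i = 1/2 eps_{ij} t^j 1 + T_i
eps_lo = np.array([[0.0, -1.0], [1.0, 0.0]])     # eps_{ij}
tT = [0.5 * (eps_lo[i] @ t) * np.eye(n) + T[i] for i in range(2)]

# Stokes constraint (constraint_linear): t^i eta - M_i - M_i^T = 0,
# where M_i = eps^{ij} tildeT_j eta
M = [tT[1] @ eta, -tT[0] @ eta]
lin = [t[i] * eta - M[i] - M[i].T for i in range(2)]

# d^2 = 0 constraint (constraint_quadratic): t^i tildeT_i - [tildeT_1, tildeT_2],
# which coincides with t^i T_i - [T_1, T_2] once the trace part is split off
quad = t[0] * tT[0] + t[1] * tT[1] - (tT[0] @ tT[1] - tT[1] @ tT[0])
comm = t[0] * T[0] + t[1] * T[1] - (T[0] @ T[1] - T[1] @ T[0])
```

Both residuals `lin[0]` and `lin[1]` vanish identically, and `quad` coincides with `comm`, so~\eqref{constraint_quadratic} is equivalent to a commutation relation among the $\Tor_i$ alone.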
Finally, substituting $\torr^i_I = 0$ and~\eqref{expression_torsion_matrix} into the expressions~\eqref{differential_algebra} we are left with \begin{equation}\label{differential_algebra_simpler} \begin{aligned} \diff v^i &= \tor^i v^1\wedge v^2 \ , \\ \diff \omega^I &= \tfrac12\, \tor^i \epsilon_{ij} v^j \wedge \omega^I + \Tor_{iJ}^I v^i \wedge \omega^J\ , \end{aligned} \end{equation} where, according to eq.~\eqref{constraint_quadratic}, the matrices $\Tor_i$ satisfy the commutation relation \begin{equation}\label{XXXX} [\Tor_1, \Tor_2] = \tor^i \Tor_i \ . \end{equation} If all parameters $\tor^i$ and $\Tor^I_{iJ}$ vanish, we recover the case with closed forms $v^i$ and $J^\alpha$ and consequently the manifold is $K3 \times T^2$. In this case, the two-forms $\omega^I$ are harmonic and span the second cohomology of $K3$, their number being fixed to $n = 22$. \section{The low-energy effective action} \label{sec:eff_action} \subsection{The NS-NS sector} \label{subsec:NS} As already mentioned in the introduction, the reduction of the NS-NS sector closely parallels the one performed in Ref.~\cite{Louis:2009dq} for the heterotic string, so here we only recall the results. The massless fields arising from the NS-NS sector in type IIA supergravity are the metric $g_{MN}$, the two-form $\mathcal{B}_2$ and the dilaton $\Phi$. The ten-dimensional action governing the dynamics of these fields is given by \begin{equation}\label{action_NS_10} S_\mathrm{NS} = \tfrac12 \int_{M_{1,3} \times \M} \e^{-2\Phi} \big( \mathcal{R} \ast 1 + 4 \diff \Phi \wedge \ast \diff \Phi - \tfrac12 \mathcal{H}_3 \wedge \ast \mathcal{H}_3 \big) \ , \end{equation} where $\mathcal{R}$ is the Ricci scalar and $\mathcal{H}_3=\diff \mathcal{B}_2$ is the field-strength of the two-form~$\mathcal{B}_2$.
A KK ansatz for these fields can be written as \begin{equation}\label{KK_expansion_NS} \begin{aligned} \diff s^2 &= g_{\mu\nu} \diff x^\mu \diff x^\nu + g_{ij} \mathcal{E}^i \mathcal{E}^j + g_{ab} \diff y^a \diff y^b \ , \\ \mathcal{B}_2 &= B + B_i \wedge \mathcal{E}^i + b_{12} \mathcal{E}^1 \wedge \mathcal{E}^2 + b_I \omega^I \ , \end{aligned} \end{equation} where we have defined the `gauge-invariant' one-forms $\mathcal{E}^i = v^i - G^i_\mu \diff x^\mu$. The expansion of the ten-dimensional two-form~$\mathcal{B}_2$ leads to a set of four-dimensional fields: a two-form~$B$, two vectors or one-forms~$B_i$ and~$n+1$ scalar fields~$b_I$ and $b_{12}$.\footnote{Note that in this paper we do not consider background flux for $\mathcal{H}_3$. This situation has been discussed for example in \cite{Aldazabal:2008zza,Dall'Agata:2009gv,Dibitetto:2010rg} where it was shown that, as usual, the background fluxes appear as gauge charges in the effective action which gauge specific directions in the $\mathcal{N}=4$ field space.} In computing the low-energy effective action, one has to express the variation of the metric components~$g_{ab}$ in terms of the~$3n-8$ geometric moduli on $T_4 \M$ or, more precisely, one needs an expression for the line element $g^{ac} g^{bd} \delta g_{ab} \delta g_{cd}$. As a first step one expands the two-forms~$J^\alpha$ parametrizing the $\su2$ structure in terms of the basis $\omega^I$ according to \begin{equation}\label{expansion_j} J^\alpha = \e^{-\frac\rho2} \zeta^\alpha_I \omega^I \ . \end{equation} However, the~$3n$ parameters~$\zeta^\alpha_I$ are not all independent. Inserting the expansion~\eqref{expansion_j} into Eq.~\eqref{j_wedge_j}, and using the relation~\eqref{omega_wedge_omega}, one obtains the six independent constraints \begin{equation} \eta^{IJ} \zeta^\alpha_I \zeta^\beta_J = 2 \delta^{\alpha\beta} \ .
\end{equation} Moreover, an~$\so3$ rotation acting on the upper index of~$\zeta^\alpha_I$ gives new two-forms~$J^\alpha$ that are linear combinations of the old ones, defining therefore the same three-plane and leaving us at the same point of the moduli space. Altogether, we end up with the right number of $3n-9$ geometric moduli parametrizing~$\mathcal{M}_{J^\alpha}$ in Eq.~\eqref{moduli_space}. Furthermore, Ref.~\cite{Louis:2009dq} derived the line element to be \begin{equation}\label{line_element} g^{ac} g^{bd} \delta g_{ab} \delta g_{cd} = \delta\rho^2 + (2\eta^{IJ} - \zeta^{\alpha I} \zeta^{\beta J}) \delta\zeta^\alpha_I \delta\zeta^\beta_J \ , \end{equation} where~$\zeta^{\alpha I} = \eta^{IJ} \zeta^\alpha_J$. Note that this expression is indeed the metric on the coset \begin{equation} \mathbb{R}^+ \times \frac{\so{3,n-3}}{\so3 \times \so{n-3}} \ . \end{equation} With the last result at hand, it is straightforward to insert the ansatz~\eqref{KK_expansion_NS} into the action~\eqref{action_NS_10} and obtain the effective four-dimensional action \begin{equation}\label{action_NS} \begin{aligned} S_\mathrm{NS} = \tfrac12 \int_{M_{1,3}} & \Big[ R \ast 1 - \tfrac12 \e^{-4\phi} \big\vert {\mathcal{D}} B \big\vert^2 - \tfrac12 \e^{-2\phi-\eta} \tilde{g}_{ij} {\mathcal{D}} G^i \wedge \ast {\mathcal{D}} G^j \\ & - \tfrac12 \e^{-2\phi+\eta} \tilde{g}^{ij} \big( {\mathcal{D}} B_i - b_{12} \epsilon_{ik} {\mathcal{D}} G^k \big) \wedge \ast \big( {\mathcal{D}} B_j - b_{12} \epsilon_{jl} {\mathcal{D}} G^l \big) \\ & - \vert \diff \phi \vert^2 - \tfrac12 \e^{2\eta} \big( \vert {\mathcal{D}} b_{12} \vert^2 + \vert {\mathcal{D}} \e^{-\eta} \vert^2 \big) - \tfrac14 \tilde{g}^{ik} \tilde{g}^{jl} {\mathcal{D}} \tilde{g}_{ij} \wedge \ast {\mathcal{D}} \tilde{g}_{kl} \\ & - \tfrac14 \vert {\mathcal{D}}\rho \vert^2 - \tfrac14 (H^{IJ} - \eta^{IJ}) {\mathcal{D}}\zeta^\alpha_I \wedge \ast {\mathcal{D}}\zeta^\beta_J - \tfrac12 \e^\rho H^{IJ} {\mathcal{D}} b_I \wedge \ast {\mathcal{D}} b_J 
\\ & - \tfrac54 \e^{2\phi+\eta} \tilde{g}_{ij} \tor^i \tor^j + \tfrac18 \e^{2\phi+\eta} \tilde{g}^{ij} {[H, \Tor_i]^I}_J {[H, \Tor_j]^J}_I \\ & - \tfrac18 \e^{2\phi-\eta+\rho} \tilde{g}_{ij} \tor^i \tor^j H^{IJ} b_I b_J - \tfrac12 \e^{2\phi+\eta+\rho} \tilde{g}^{ij} H^{IJ} \Tor^K_{iI} \Tor^L_{jJ} b_K b_L \Big] \ , \end{aligned} \end{equation} where $R$ denotes the Ricci scalar in four-dimensions and we have introduced the notation $\vert f \vert^2 = f \wedge \ast f$ for any form~$f$. Moreover, the symmetric matrix~$H^{IJ}$ is defined according to $\omega^I \wedge \ast \omega^J = H^{IJ} \e^\rho \vol_4$, which can be expressed in terms of the parameters~$\zeta^\alpha_I$ by \cite{Louis:2009dq}\footnote{This expression can be derived by using the fact that the two-forms $J^\alpha$ are self-dual, $J^\alpha = \ast J^\alpha$, with all other orthogonal linear combinations of the $\omega^I$ being anti-self dual.} \begin{equation} H^{IJ} = - \eta^{IJ} + \zeta^{\alpha I} \zeta^{\alpha J} \ . \end{equation} (The commutators in \eqref{action_NS} use ${H^I}_J = H^{IK} \eta_{KJ}$.) In the two-dimensional metric $g_{ij}$ defined in \eqref{metric} we separated the overall volume $\e^{-\eta}$ from the other two independent (complex structure) degrees of freedom by introducing the rescaled metric~$\tilde{g}_{ij} = \e^\eta g_{ij}$. It satisfies $\det \tilde{g} = 1$ and can be expressed in terms of a complex-structure parameter~$\kappa$ as \begin{equation} \tilde{g}_{ij} = \frac1{\Im \kappa} \begin{pmatrix} 1 & \Re \kappa \\ \Re \kappa & \vert \kappa \vert^2 \end{pmatrix} \ . \end{equation} In order to write the action in the Einstein frame, we also performed the Weyl rescaling~$g_{\mu\nu} \to \e^{2\phi} g_{\mu\nu}$ of the four-dimensional metric, where $\phi = \Phi + \frac12 (\eta + \rho)$ is the four-dimensional dilaton. 
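The matrix $H^{IJ}$ introduced here obeys $H^{IK} \eta_{KL} H^{LJ} = \eta^{IJ}$, the defining property of an $\so{3,n-3}$ coset representative; this follows directly from the constraint $\eta^{IJ} \zeta^\alpha_I \zeta^\beta_J = 2\delta^{\alpha\beta}$. A numerical sanity check (a sketch; the values $n=4$, $\eta = \mathrm{diag}(1,1,1,-1)$ and the random seed vectors are illustrative assumptions):

```python
import numpy as np

# Verify that H^{IJ} = -eta^{IJ} + zeta^{alpha I} zeta^{alpha J} squares to
# the identity in the eta metric. Illustrative assumptions: n = 4 and
# eta = diag(1,1,1,-1); the seed vectors are random.
rng = np.random.default_rng(1)
n = 4
eta = np.diag([1.0, 1.0, 1.0, -1.0])

# Build zeta^alpha_I obeying eta^{IJ} zeta^alpha_I zeta^beta_J = 2 delta^{ab}
# by Gram-Schmidt in the eta inner product, starting from spacelike seeds
seeds = np.hstack([np.eye(3), np.zeros((3, 1))]) + 0.1 * rng.normal(size=(3, n))
zeta = []
for v in seeds:
    for u in zeta:
        v = v - (v @ eta @ u) / (u @ eta @ u) * u
    zeta.append(v / np.sqrt((v @ eta @ v) / 2.0))  # normalize to eta-norm 2
zeta = np.array(zeta)                              # rows are zeta^alpha_I

zr = zeta @ eta        # raised index: zeta^{alpha I} = eta^{IJ} zeta^alpha_J
H = -eta + zr.T @ zr   # the matrix H^{IJ}
```

One finds $H \eta^{-1} H = \eta$, so that ${H^I}_J = H^{IK}\eta_{KJ}$ squares to the identity, as required for a point on the coset~\eqref{moduli_space}.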
Finally, the various non-Abelian field-strengths and covariant derivatives in \eqref{action_NS} are given by \begin{subequations}\label{fieldstrengths_NS} \begin{align} {\mathcal{D}} B & = \diff B + B_i \wedge {\mathcal{D}} G^i \ , \\ {\mathcal{D}} G^i & = \diff G^i - \tor^i G^1 \wedge G^2 \ , \label{DGi} \\ {\mathcal{D}} B_i & = \diff B_i + \epsilon_{ij} \tor^k G^j \wedge B_k \ , \\ {\mathcal{D}} \tilde{g}_{ij} & = \diff \tilde{g}_{ij} + (\epsilon_{il} \tilde{g}_{jk} + \epsilon_{jl} \tilde{g}_{ik} - \epsilon_{kl} \tilde{g}_{ij}) \tor^k G^l \ , \\ {\mathcal{D}} \e^{-\eta} & = \diff \e^{-\eta} - \epsilon_{ij} \tor^j \e^{-\eta} G^i \ , \label{Deta} \\ {\mathcal{D}} b_{12} & = \diff b_{12} - \epsilon_{ij} \tor^j b_{12} G^i - \tor^i B_i \ , \label{Db12} \\ {\mathcal{D}}\rho & = \diff\rho - \epsilon_{ij} \tor^j G^i \ , \\ {\mathcal{D}}\zeta^\alpha_I & = \diff\zeta^\alpha_I + \Tor^J_{iI} \zeta^\alpha_J G^i \ , \\ {\mathcal{D}} b_I & = \diff b_I + \tilde{\Tor}^J_{iI} b_J G^i \ . \end{align} \end{subequations} As a next step let us turn to the R-R sector. \subsection{The R-R sector}\label{subsec:RR} So far, we have reduced the kinetic term for the NS fields. The remaining part of the ten-dimensional action for type IIA supergravity consists of the kinetic terms for the R-R fields and the Chern-Simons term, \begin{align}\label{action_RR_10} S_\mathrm{RR} & = -\tfrac14 \int_{ M_{1,3} \times \M } \big(\mathcal{F}_2\wedge \ast \mathcal{F}_2 + \tilde{\mathcal{F}}_4 \wedge \ast \tilde{\mathcal{F}}_4 \big) \ , \\ S_\mathrm{CS} & = - \tfrac14 \int_{ M_{1,3} \times \M } \mathcal{B}_2\wedge \mathcal{F}_4 \wedge \mathcal{F}_4 \ ,\label{action_CS_10} \end{align} where $\mathcal{F}_2 = \diff \mathcal{A}_1$ and $\mathcal{F}_4 = \diff \mathcal{C}_3$. $\tilde{\mathcal{F}}_4$ is the modified field strength of $\mathcal{C}_3$ defined as \begin{equation}\label{modified_fieldstrength} \tilde{\mathcal{F}}_4= \diff \mathcal{C}_3 - \mathcal{A}_1 \wedge \diff \mathcal{B}_2.
\end{equation} Analogously to the KK ansatz~\eqref{KK_expansion_NS}, we expand the ten-dimensional RR fields in the set of internal one-forms~${\mathcal{E}^i}$ and two-forms~$\omega^I$ as follows, \begin{equation}\label{KK_expansion_RR} \begin{aligned} \mathcal{A}_1 = & {} A + a_i {\mathcal{E}^i} \ , \\ \mathcal{C}_3 = & {} (C - A \wedge B) + (C_i - A \wedge B_i) \wedge {\mathcal{E}^i} \\ & {} + (C_{12} - b_{12} A) \wedge {\mathcal{E}^1} \wedge {\mathcal{E}^2} + (C_I - b_I A) \wedge \omega^I + c_{iI} {\mathcal{E}^i} \wedge \omega^I \ . \end{aligned} \end{equation} In terms of four-dimensional fields we thus have a three-form~$C$, two two-forms~$C_i$, $2+n$ vectors or one-forms~$A$, $C_{12}$ and $C_I$, and finally $2n+2$ scalars~$a_i$ and~$c_{iI}$.\footnote{As for the $B$-field, we also do not consider background fluxes for the RR field strengths in this paper. Their effect is similar to an $\mathcal{H}_3$ flux in that additional directions in the $\mathcal{N}=4$ field space become gauged \cite{Aldazabal:2008zza,Dall'Agata:2009gv,Dibitetto:2010rg}.} In the expansion of the three-form $\mathcal{C}_3$, it is convenient to introduce some mixing with the four-dimensional components from $\mathcal{A}_1$ and $\mathcal{B}_2$. The reason for this is that in this case the four-dimensional field strengths $\diff C$, $\diff C_i$, $\diff C_{12}$ and $\diff C_I$ remain invariant under the gauge transformations \begin{equation}\label{pformgaugetransf} \begin{aligned} \mathcal{A}_1 & \to \mathcal{A}_1 + \diff \Lambda\ , \\ \mathcal{B}_2 & \to \mathcal{B}_2 + \diff \Lambda_1\ ,\\ \mathcal{C}_3 & \to \mathcal{C}_3 + \diff \Lambda_2 + \Lambda \diff \mathcal{B}_2\ , \end{aligned} \end{equation} which is a symmetry of type IIA supergravity, as can be seen from the modified field-strength~\eqref{modified_fieldstrength}. Before we continue, let us pause and count the total number of light modes arising from the KK ansatz in the NS-NS plus RR-sector.
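This bookkeeping can be made mechanical. The following sketch tallies the multiplicities read off from the expansions~\eqref{KK_expansion_NS} and~\eqref{KK_expansion_RR} (the grouping of terms is ours, not taken from the text):

```python
from sympy import symbols, simplify

n = symbols('n', positive=True, integer=True)

# NS-NS sector, from eq. (KK_expansion_NS)
ns_vectors = 2 + 2                       # G^i and B_i
ns_scalars = (3*n - 8) + 3 + n + 1 + 1   # T4 moduli incl. rho, T2 moduli,
                                         # b_I, b_12, four-dimensional dilaton

# R-R sector, from eq. (KK_expansion_RR)
rr_vectors = 1 + 1 + n                   # A, C_12 and C_I
rr_scalars = 2 + 2*n                     # a_i and c_{iI}

# the two-forms B and C_i are dualized to three additional scalars
vectors = ns_vectors + rr_vectors
scalars = simplify(ns_scalars + rr_scalars + 3)
```

One finds $6+n$ vectors and $6n+2$ scalars, matching the bosonic content of the $\mathcal{N}=4$ gravity multiplet ($6$ vectors, $2$ scalars) coupled to $n$ vector multiplets ($n$ vectors, $6n$ scalars).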
From Eq.~\eqref{KK_expansion_NS} (and the subsequent analysis) we learn that the spectrum in the NS-sector contains the graviton, a two-form $B$, four vectors $G^i, B_i$ and $4n-3$ scalars. From Eq.~\eqref{KK_expansion_RR}, we see that two two-forms, $2+n$ vectors and $2n+2$ scalars arise in the RR-sector. After dualizing the three two-forms to scalars we thus have a total spectrum of a graviton, $6+n$ vectors and $6n+2$ scalars. As we review in the next section, this is indeed the spectrum of an $\mathcal{N}=4$ supergravity with $n$ vector multiplets. Substituting this expansion for the ten-dimensional fields into the action~\eqref{action_RR_10} and performing at the end the Weyl rescaling~$g_{\mu\nu} \to \e^{2\phi} g_{\mu\nu}$, we obtain \begin{equation}\label{action_RR_kin} \begin{aligned} S_\mathrm{RR}= -\tfrac14 \int_{M_{1,3}} & \Big[ \e^{-\eta-\rho} \big\vert \diff A - a_i {\mathcal{D}} G^i \big\vert^2 + \e^{-4\phi-\eta-\rho} \big\vert {\mathcal{D}} C - \diff A \wedge B \big\vert^2 \\ & + \e^{-2\phi-\rho} \tilde{g}^{ij} \big( {\mathcal{D}} C_i - \diff A \wedge B_i + a_i {\mathcal{D}} B \big) \wedge \\ & \hspace{2in} \wedge \ast \big( {\mathcal{D}} C_j - \diff A \wedge B_j + a_j {\mathcal{D}} B \big) \\ & + \e^{\eta-\rho} \big\vert {\mathcal{D}} C_{12} - b_{12} \diff A - a_i (\epsilon^{ij} {\mathcal{D}} B_j - b_{12} {\mathcal{D}} G^i) \big\vert^2 \\ & + \e^{-\eta} H_{IJ} \big( {\mathcal{D}} C^I - b^I \diff A - c_i^I {\mathcal{D}} G^i \big) \wedge \ast \big( {\mathcal{D}} C^J - b^J \diff A - c_j^J {\mathcal{D}} G^j \big) \\ & + \e^{2\phi} \tilde{g}^{ij} H_{IJ} \big( {\mathcal{D}} c_i^I + a_i {\mathcal{D}} b^I \big) \wedge \ast \big( {\mathcal{D}} c_j^J + a_j {\mathcal{D}} b^J \big) \\ & + \e^{2\phi-\rho} \tilde{g}^{ij} {\mathcal{D}} a_i \wedge \ast {\mathcal{D}} a_j + \e^{4\phi+\eta-\rho} (\tor^i a_i)^2 \ast 1 \\ & + \e^{4\phi + \eta} H_{IJ} \big[\epsilon^{ij} \Tor^I_{iK} (c_j^K + a_j b^K) - \tor^i (c_i^I - a_i b^I) \big] \cdot \\ & \hspace{1.5in} \cdot 
\big[\epsilon^{kl} \Tor^J_{kL} (c_l^L + a_l b^L) - \tor^k (c_k^J - a_k b^J) \big] \ast 1 \Big] \ . \end{aligned} \end{equation} On the other hand, the Chern-Simons term \eqref{action_CS_10} gives the following contribution \begin{equation}\begin{aligned}\label{action_RR_CS} S_\mathrm{CS} = - \tfrac14 \int_{M_{1,3}} & \Big[ \, 2 \epsilon^{ij} c^J_i \tilde{\Tor}^I_{jJ} b_I \big( {\mathcal{D}} C - \diff A \wedge B \big) \\ & - 2 \big({\mathcal{D}} C_i - \diff A \wedge B_i \big) \wedge \epsilon^{ij} b_I {\mathcal{D}} c_j^I + b_{12} \eta_{IJ} {\mathcal{D}} C^I \wedge {\mathcal{D}} C^J \\ & + 2 \big( {\mathcal{D}} C_{12} - b_{12} \diff A \big) \wedge b_I \big({\mathcal{D}} C^I - \tfrac12 b^I \diff A - c_i^I {\mathcal{D}} G^i \big) \\ & - {\mathcal{D}} B \wedge \epsilon^{ij} c_{iI} \big( {\mathcal{D}} c_j^I - \tilde{\Tor}_{jJ}{}^I C^J \big) + 2 B_i \wedge \epsilon^{ij} \tilde{\Tor}_{jIJ} C^I \wedge {\mathcal{D}} C^J \\ & - 2 \big( {\mathcal{D}} B_i - b_{12} \epsilon_{ik} {\mathcal{D}} G^k \big) \wedge \epsilon^{ij} c_{jI} \big( {\mathcal{D}} C^I - \tfrac12 c_l^I {\mathcal{D}} G^l \big) \Big] \ . \end{aligned}\end{equation} The non-Abelian field-strengths and covariant derivatives of all four-dimensional RR-fields are given by \begin{subequations}\label{fieldstrengths_RR} \begin{align} {\mathcal{D}} C & = \diff C - C_i \wedge {\mathcal{D}} G^i \ , \\ {\mathcal{D}} C_i & = \diff C_i + \epsilon_{ij} \tor^k G^j \wedge C_k + \epsilon_{ij} C_{12} \wedge {\mathcal{D}} G^j \ , \\ {\mathcal{D}} C_{12} & = \diff C_{12} + \tor^i C_i - \epsilon_{ij} \tor^j G^i \wedge C_{12} \ , \label{DC12} \\ {\mathcal{D}} C^I & = \diff C^I + \tilde{\Tor}_{iJ}{}^I G^i \wedge C^J\ , \label{DCI} \\ {\mathcal{D}} a_i & = \diff a_i + \epsilon_{ij} \tor^k a_k G^j \ , \\ {\mathcal{D}} c_i^I & = \diff c_i^I + \epsilon_{ij} \tor^k c_k^I G^j - \tilde{\Tor}_{jJ}{}^I c_i^J G^j + \tilde{\Tor}_{jJ}{}^I C^J \ . \end{align} \end{subequations} Let us summarize.
The bosonic part of the low-energy four-dimensional effective action arising from the compactification of type IIA supergravity on~$\su2$-structure manifolds is given by the sum of the contribution from the NS-NS sector, Eq.~\eqref{action_NS}, and the contribution from the RR sector, Eqs.~\eqref{action_RR_kin} and~\eqref{action_RR_CS}, that is \begin{equation}\label{totalaction} S_\mathrm{eff} = S_\mathrm{NS} + S_\mathrm{RR} + S_\mathrm{CS} \ . \end{equation} The covariant derivatives and field strengths corresponding to the various four-dimensional fields are given in Eqs.~\eqref{fieldstrengths_NS} and~\eqref{fieldstrengths_RR}. The next step is to establish the consistency of this action with four-dimensional $\mathcal{N}=4$ supergravity. To do this, we will bring the action into the canonical form proposed in Ref.~\cite{Schon:2006kz} by performing a series of field redefinitions. \section{Consistency with $\mathcal{N}=4$ supergravity} \label{sec:ConsistencyN=4} The gravity multiplet of $\mathcal{N}=4$ supergravity in four dimensions contains as bosonic degrees of freedom the metric, six massless vectors and two real scalars, while a vector multiplet consists of a massless vector field and six real scalars. $\mathcal{N}=4$ supergravity coupled to $n$ vector multiplets has a global symmetry $\mathrm{SL}(2) \times \so{6,n}$ and the scalar fields of the theory assemble into a complex field $\tau$ describing an $\mathrm{SL}(2)/\so2$ coset and a $(6+n)\times(6+n)$ matrix $M_{MN}$ parametrizing the coset \begin{equation} \frac{\mathrm{SO}(6,n)}{\mathrm{SO}(6)\times \mathrm{SO}(n)} \ . \end{equation} In Ref.~\cite{Schon:2006kz}, the action of the most general gauged $\mathcal{N}=4$ supergravity is given using the embedding tensor formalism. All possible gaugings are encoded in two tensors,~$f_{\alpha MNP}$ and~$\xi_{\alpha M}$, where $\alpha$ is an~$\mathrm{SL}(2)$ index taking the values $+$ and $-$.
As it turns out, for the effective action \eqref{totalaction} both $f_{-MNP}$ and $\xi_{-M}$ vanish, and therefore we choose to start with the formulas of Ref.~\cite{Schon:2006kz} adapted to this case. In order to simplify the notation, we omit the $\alpha=+$ index in the couplings~$f_{+MNP}$ and~$\xi_{+M}$ and write simply $f_{MNP}$ and $\xi_M$ for the non-trivial couplings. With this in mind, the action for gauged $\mathcal{N}=4$ supergravity can be divided into three parts, \begin{equation}\label{N=4general} S_{\mathcal{N}=4} = S_\mathrm{kin} + S_\mathrm{top} + S_\mathrm{pot} \ , \end{equation} that is kinetic, topological and potential terms. The part of the action containing the kinetic terms reads \begin{multline}\label{N=4_canonical_kin} S_\mathrm{kin} = \tfrac12 \int_{M_{1,3}} \big[ R \ast 1 + \tfrac18 {\mathcal{D}} M_{MN} \wedge \ast {\mathcal{D}} M^{MN} - \tfrac12 (\Im \tau)^{-2} {\mathcal{D}} \tau \wedge \ast {\mathcal{D}} \bar\tau \\ {} - (\Im \tau) \, M_{MN} {\mathcal{D}} V^{M+} \wedge \ast {\mathcal{D}} V^{N+} + (\Re \tau) \, \eta_{MN} {\mathcal{D}} V^{M+} \wedge {\mathcal{D}} V^{N+} \big] \ , \end{multline} where the constant matrix $\eta_{MN}$ is an $\mathrm{SO}(6,n)$ metric and the non-Abelian field-strengths for the electric vector fields $V^{M+}$ are given by the expression \begin{equation}\label{fieldstrengths_embedding_tensor} {\mathcal{D}} V^{M+} = \diff V^{M+} - \tfrac12 {\hat{f}}_{NP}{}^M V^{N+} \wedge V^{P+} + \tfrac12 \xi^M B^{++} \ , \end{equation} where $B^{++}$ is an auxiliary two-form whose role we soon explain.\footnote{As noted above, we omit the $+$ index of Ref.~\cite{Schon:2006kz} in the couplings $f_{MNP}$ and~$\xi_{M}$, but we do keep it for the gauge fields and denote the electric vectors by $V^{M+}$ while the magnetic vectors are $V^{M-}$.} The covariant derivatives of the scalar fields are defined as \begin{align} {\mathcal{D}} \tau & = \diff \tau + \xi_M \tau V^{M+} + \xi_M V^{M-} \ , \label{Dtau} \\ {\mathcal{D}} M_{MN} & =
\diff M_{MN} + \Theta_{PM}{}^Q M_{NQ} V^{P+} + \Theta_{PN}{}^Q M_{MQ} V^{P+} \ . \end{align} In these expressions, the following useful shorthands were used, \begin{align} \hat{f}_{MNP} & = f_{MNP} - \tfrac12 \xi_M \eta_{PN} + \tfrac12 \xi_P \eta_{MN} - \tfrac32 \xi_N \eta_{MP} \ , \\ \Theta_{MNP} & = f_{MNP} - \tfrac12 \xi_N \eta_{PM} - \tfrac12 \xi_P \eta_{NM} \ . \end{align} As we can see, the presence of an auxiliary two-form field~$B^{++}$ is related to the fact that the complex scalar~$\tau$ is charged with respect to the magnetic duals~$V^{M-}$ of the electric vector fields~$V^{M+}$. The two-form~$B^{++}$ acts as a Lagrange multiplier, in the sense that its equation of motion merely ensures that~$V^{M-}$ and~$V^{M+}$ are related by an electric-magnetic duality. This follows from the last term in the topological part of the $\mathcal{N}=4$ supergravity action \begin{equation}\label{N=4top} \begin{aligned} S_\mathrm{top} = -\tfrac12 \int_{M_{1,3}} & \big[ \xi_M \eta_{NP} V^{M-} \wedge V^{N+} \wedge \diff V^{P+} - \tfrac14 \hat{f}_{MNR} \hat{f}_{PQ}{}^R V^{M+} \wedge V^{N+} \wedge V^{P+} \wedge V^{Q-} \\ &\quad - \xi_M B^{++} \wedge \big( \diff V^{M-} - \tfrac12 \hat{f}_{QR}{}^M V^{Q+} \wedge V^{R-} \big) \big] \ . \end{aligned} \end{equation} Finally, there is also a potential energy that contributes to the action as \begin{equation}\label{N=4pot} \begin{aligned} S_\mathrm{pot} = - \tfrac1{16} \int_{M_{1,3}} & (\Im \tau)^{-1} \big[ 3 \xi^M \xi^N M_{MN} \\ & + f_{MNP} f_{QRS} \big( \tfrac13 M^{MQ} M^{NR} M^{PS} + (\tfrac23 \eta^{MQ} - M^{MQ}) \eta^{NR} \eta^{PS} \big) \big] \ . \end{aligned} \end{equation} \subsection{Field dualizations} \label{sec:dualizations} The action $S_\mathrm{eff}$ that was obtained in \eqref{totalaction} does not have the same structure as the action given in Eq.~\eqref{N=4general}. Most obviously, the spectrum currently contains two-form fields, which we must replace by their dual scalar fields. 
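The mechanism behind this replacement is the standard four-dimensional dualization of a two-form into a scalar. Schematically, for a two-form $B$ with field strength $H$ and a generic coupling $\e^{-2U}$ (the normalization and the name $U$ are illustrative; the precise couplings are worked out in Appendix~\ref{sec:dualizations-appendix}):

```latex
% Treat H as an unconstrained three-form and enforce H = dB through a
% Lagrange multiplier b:
\mathcal{L} = -\tfrac12\, \e^{-2U} H \wedge \ast H + H \wedge \diff b \ .
% The algebraic equation of motion for H gives \ast H = \e^{2U} \diff b;
% substituting back (with signature-dependent signs) yields
\mathcal{L}^\prime = -\tfrac12\, \e^{2U} \diff b \wedge \ast \diff b \ ,
% i.e. the kinetic coupling of the dual scalar is inverted.
```

This inversion is visible, up to normalization, when comparing the $\e^{-4\phi}$ coupling of $B$ in~\eqref{action_NS} with the $\e^{4\phi}$ coupling of its dual $\beta$ in~\eqref{n=4scalar}.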
Furthermore, as can be easily verified, the quadratic couplings of the vector field-strengths are not of the simple form seen in Eq.~\eqref{N=4_canonical_kin}, which implies that some of the vector fields must also be traded for their dual fields. Our strategy will be the following. First we remove the (non-dynamical) three-form field~$C$ from the theory and dualize the two-forms~$B$ and~$C_i$ to scalars~$\beta$ and~$\gamma^i$, respectively. In a second step, we determine the correct electric-magnetic duality frame in which the action for the vector fields takes the form \eqref{N=4_canonical_kin}. This we can do by setting to zero the parameters~$\Tor^I_{iJ}$ and~$\tor^i$ determining the charges, which makes it easier to perform electric-magnetic duality transformations on the vector fields. Once we have identified the correct electric-magnetic duality frame, we can read off the~$\so{6,n}$ coset matrix~$M_{MN}$, the complex scalar~$\tau$ and the metric~$\eta_{MN}$. The final step is then to turn on the charges and use the information obtained in the previous steps to determine the components of the embedding tensor. Using the embedding tensor, we can then find the full expressions for the electric field strengths in the canonical action~\eqref{N=4_canonical_kin}, as well as the correct topological terms~\eqref{N=4top}. We can then verify that the action obtained in this way is equivalent to~$S_\mathrm{eff}$ by elimination of the extra two-form~$B^{++}$ introduced by the embedding tensor formalism. As already mentioned, the four-dimensional three-form~$C$ carries no degrees of freedom. We can integrate it out using its equation of motion.
{}From the part of the effective action $S_\mathrm{eff}$ that depends on~$C$, namely \begin{equation}\label{action_threeform} S_{C} = - \tfrac14 \int_{M_{1,3}} \Big[ \e^{-4\phi-\eta-\rho} \big\vert {\mathcal{D}} C - \diff A \wedge B \big\vert^2 - 2 \epsilon^{ij} b_I \tilde{\Tor}^I_{iJ} c_j^J \big( {\mathcal{D}} C - \diff A \wedge B \big) \Big] \ , \end{equation} follows the equation of motion \begin{equation}\label{eqmotion_threeform} {\mathcal{D}} C - \diff A \wedge B = - \e^{4\phi+\eta+\rho} \epsilon^{ij} b_I \tilde{\Tor}^I_{iJ} c_j^J \ast 1 \ . \end{equation} Substituting this back into the action \eqref{action_threeform}, we obtain the potential term \begin{equation}\label{potential_threeform} S^\prime_C= - \tfrac14 \int_{M_{1,3}} \e^{4\phi+\eta+\rho} \big( \epsilon^{ij} b_I \tilde{\Tor}^I_{iJ} c_j^J \big)^2 \ast 1\ . \end{equation} Next, we trade the two-forms~$C_i$ and~$B$ for their dual scalars. In contrast to the three-form~$C$, the two-forms~$C_i$ do not appear in the Lagrangian exclusively in the form~$\diff C_i$. As can be seen in the expression~\eqref{DC12} for the covariant field strength~${\mathcal{D}} C_{12}$, they are also present as a St\"uckelberg-like mass term~$\tor^i C_i$, making it necessary to dualize the vector field~$C_{12}$ as well. Therefore, we dualize the~$C_i$ into scalar fields~$\gamma^i$ while at the same time dualizing the vector field~$C_{12}$ to a vector field~$\tilde{C}$. As already mentioned, the scalar field dual to $B$ will be called $\beta$. We present the details of this calculation in Appendix~\ref{sec:dualizations-appendix}. After these steps, we arrive at an action~$S^\prime_\mathrm{eff}$ containing only scalar and vector fields (apart from the metric).
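To make the elimination of $C$ fully explicit, write $\mathcal{D} C - \diff A \wedge B = f \ast 1$ for a scalar $f$, abbreviate $X = \epsilon^{ij} b_I \tilde{\Tor}^I_{iJ} c_j^J$, and use $\ast(\ast 1) = -1$ in Lorentzian signature; the action~\eqref{action_threeform} then becomes

```latex
S_C = -\tfrac14 \int_{M_{1,3}}
      \big( -\e^{-4\phi-\eta-\rho}\, f^2 - 2\, X f \big) \ast 1 \ .
% Extremizing over f reproduces eq. (eqmotion_threeform),
%   f = -\e^{4\phi+\eta+\rho} X ,
% and substituting back gives
S^\prime_C = -\tfrac14 \int_{M_{1,3}} \e^{4\phi+\eta+\rho}\, X^2 \ast 1 \ ,
% in agreement with eq. (potential_threeform).
```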
The total action can be split into three components \begin{equation}\label{Seff} S^\prime_\mathrm{eff} = S_{\mathrm{scalar}} + S_{\mathrm{vector}} + S_{\mathrm{potential}}\,, \end{equation} where the kinetic terms for the scalar fields (and the four-dimensional metric) are \begin{equation}\label{n=4scalar} \begin{aligned} S_{\mathrm{scalar}} = -\tfrac12 \int_{M_{1,3}} & \Big[ -R \ast 1 + \vert \diff \phi \vert^2 + \tfrac12 \e^{2\eta} \big( \vert {\mathcal{D}} b_{12} \vert^2 + \vert {\mathcal{D}} \e^{-\eta} \vert^2 \big) + \tfrac14 \tilde{g}^{ik} \tilde{g}^{jl} {\mathcal{D}} \tilde{g}_{ij} \wedge \ast {\mathcal{D}} \tilde{g}_{kl} \\ & + \tfrac14 \vert {\mathcal{D}}\rho \vert^2 + \tfrac14 (H^{IJ} - \eta^{IJ}) {\mathcal{D}}\zeta^\alpha_I \wedge \ast {\mathcal{D}}\zeta^\beta_J + \tfrac12 \e^\rho H^{IJ} {\mathcal{D}} b_I \wedge \ast {\mathcal{D}} b_J \\ & + \e^{2\phi-\rho} \tilde{g}^{ij} {\mathcal{D}} a_i \wedge \ast {\mathcal{D}} a_j + \e^{2\phi} \tilde{g}^{ij} H_{IJ} ({\mathcal{D}} c_i^I + a_i {\mathcal{D}} b^I) \wedge \ast ({\mathcal{D}} c_j^J + a_j {\mathcal{D}} b^J) \\ & + \e^{2\phi+\rho} \tilde{g}^{ij} ({\mathcal{D}} \gamma_i + b^I {\mathcal{D}} c_{iI}) \wedge \ast ({\mathcal{D}} \gamma_j + b^J {\mathcal{D}} c_{jJ}) \\ & + \e^{4\phi} \big\vert {\mathcal{D}} \beta - \epsilon^{ij} (a_i {\mathcal{D}} \gamma_j + a_i b_I {\mathcal{D}} c_j^I - \tfrac12 c_{iI} {\mathcal{D}} c_j^I) \big\vert^2 \Big] \ . \end{aligned} \end{equation} The covariant derivatives $\mathcal{D}\gamma_i$ and $\mathcal{D}\beta$ are given by \begin{subequations}\label{covderivs_gamma_beta} \begin{align} {\mathcal{D}} \gamma_i & = \diff \gamma_i - \epsilon_{ij} \tor^j (\gamma_k G^k + \tilde{C}) \ , \\ {\mathcal{D}} \beta & = \diff \beta + \tfrac12 c_{iJ} \tilde{\Tor}_{iI}{}^J C^I \ .
\end{align} \end{subequations} The kinetic and topological terms for the vector fields are \begin{equation}\label{n=4vector} \begin{aligned} S_{\mathrm{vector}} = -\tfrac14 \int_{M_{1,3}} & \Big[ \e^{-2\phi-\eta} \tilde{g}_{ij} {\mathcal{D}} G^i \wedge \ast {\mathcal{D}} G^j + \e^{-\eta-\rho} \vert \diff A - a_i {\mathcal{D}} G^i \vert^2 \\ & + \e^{-2\phi+\eta} \tilde{g}^{ij} ({\mathcal{D}} B_i - b_{12} \epsilon_{ik} {\mathcal{D}} G^k) \wedge \ast ({\mathcal{D}} B_j - b_{12} \epsilon_{jl} {\mathcal{D}} G^l) \\ & + \e^{-\eta+\rho} \big\vert {\mathcal{D}} \tilde{C} - \gamma_i {\mathcal{D}} G^i + b_I ({\mathcal{D}} C^I - \tfrac12 b^I \diff A - c_k^I {\mathcal{D}} G^k) \big\vert^2 \\ & + \e^{-\eta} H_{IJ} \big( {\mathcal{D}} C^I - b^I \diff A - c_i^I {\mathcal{D}} G^i \big) \wedge \ast \big( {\mathcal{D}} C^J - b^J \diff A - c_j^J {\mathcal{D}} G^j \big) \\ & + \ b_{12} \eta_{IJ} {\mathcal{D}} C^I \wedge {\mathcal{D}} C^J + 2 b_{12} \diff A \wedge {\mathcal{D}} \tilde{C} \\ & - 2 ({\mathcal{D}} B_i - b_{12} \epsilon_{il} {\mathcal{D}} G^l) \wedge \epsilon^{ij} \big[ (c_{jI} + a_j b_I) {\mathcal{D}} C^I + (\gamma_j - \tfrac12 a_j b_I b^I) \diff A \\ & \hspace{1.2in} {} + a_j {\mathcal{D}} \tilde{C} - (\epsilon_{jk} \beta + a_j \gamma_k + \tfrac12 c_{jI} c_k^I + a_j b_I c_k^I) {\mathcal{D}} G^k \big] \\ & + 2 B_i \wedge \big( \epsilon^{ij} \tilde{\Tor}_{jIJ} C^I \wedge {\mathcal{D}} C^J + \tor^i \tilde{C} \wedge \diff A \big) \Big] \ . \end{aligned} \end{equation} Here, the non-Abelian field-strength for the vector field $\tilde{C}$ is \begin{equation}\label{DCtilde} {\mathcal{D}} \tilde{C} = \diff \tilde{C} + \epsilon_{ij} \tor^j G^i \wedge \tilde{C} \ . 
\end{equation} Finally, the total potential reads \begin{equation}\label{n=4potential} \begin{aligned} S_{\mathrm{potential}}= -\tfrac14 \int_{M_{1,3}} & \Big[ \e^{4\phi+\eta+\rho} \big( \epsilon^{ij} b_I \tilde{\Tor}^I_{iJ} c_j^J \big)^2 + \tfrac52 \e^{2\phi+\eta} \tilde{g}_{ij} \tor^i \tor^j + \tfrac14 \e^{2\phi-\eta+\rho} \tilde{g}_{ij} \tor^i \tor^j H^{IJ} b_I b_J \\ & + \tfrac14 \e^{2\phi+\eta} \tilde{g}^{ij} {[H, \Tor_i]^I}_J {[H, \Tor_j]^J}_I + \e^{2\phi+\eta+\rho} \tilde{g}^{ij} H^{IJ} \Tor^K_{iI} \Tor^L_{jJ} b_K b_L \\ & + \e^{4\phi + \eta} H_{IJ} \big[\epsilon^{ij} \Tor^I_{iK} (c_j^K + a_j b^K) - \tor^i (c_i^I - a_i b^I) \big] \cdot \\ & \hspace{1in} \cdot \big[\epsilon^{kl} \Tor^J_{kL} (c_l^L + a_l b^L) - \tor^k (c_k^J - a_k b^J) \big] \Big] \ast 1 \ . \end{aligned} \end{equation} \subsection{Determination of the embedding tensor} \label{sec:embedding_tensor} At this point, we can identify which vector fields in the effective action~\eqref{Seff} correspond to the electric vector fields $V^{M+}$ in the canonical action~\eqref{N=4general} and which vector fields should be dualized. Setting the parameters~$\Tor^I_{iJ}$ and~$\tor^i$ to zero in the action~\eqref{Seff}, we can very easily trade vector fields for their electric-magnetic duals via the usual dualization procedure. It turns out that exchanging the vector fields~$B_i$ with their dual fields~$B^{\bar{\imath}}$ suffices to bring the (ungauged) Lagrangian into the form~\eqref{N=4_canonical_kin}.\footnote{Note that turning off the parameters $\Tor^I_{iJ}$ and~$\tor^i$ corresponds to compactifications on $K3 \times T^2$. The effective action for this case has been determined in \cite{Spanjaard:2008zz,Duff:1995wd,Duff:1995sm}.} The computation of the action for the fields~$B^{\bar{\imath}}$ is given in section~\ref{sec:dual-Bi} of the Appendix. 
From the action for the dualized fields we can determine the $\so{6,n}$ metric~$\eta_{MN}$ as well as the complex scalar~$\tau$ and the coset matrix~$M_{MN}$ which determine the canonical action~\eqref{N=4_canonical_kin}. If we choose to arrange the electric vectors into the fundamental representation of $\so{6,n}$ as \begin{equation}\label{electric_vectors} V^{M+} = (G^i, B^{\bar{\imath}}, A, \tilde{C}, C^I) \ , \end{equation} we find that the $\so{6,n}$ metric $\eta_{MN}$ is given by \begin{equation}\label{SO6nmetric} \eta_{MN} = \left( \begin{array}{ccccc} 0 & \delta_{i\bar{\jmath}} & 0 & 0 & 0\\ \delta_{\bar{\imath} j} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & \eta_{IJ} \end{array} \right)\ , \end{equation} and that the scalar factor in the topological vector field couplings is given by \begin{equation}\label{imtau} \Re\tau = -\tfrac12 b_{12} \ . \end{equation} We can find the imaginary part of $\tau$ by checking the kinetic term for $b_{12}$ in the action~\eqref{action_NS}, since according to~\eqref{N=4_canonical_kin} this should contain a factor $(\Im\tau)^{-2}$. In this way, we determine that the complex scalar $\tau$ is given by \begin{equation}\label{tau} \tau = \tfrac12 (-b_{12} + \iu \e^{-\eta}) \ . \end{equation} For completeness, the matrix $M_{MN}$ is given in Appendix \ref{sec:so6-n-coset}. We now have enough information to determine the embedding tensor from the covariant derivatives and the non-Abelian field strengths in the action~\eqref{Seff}. We start by determining the components~$\xi_{\alpha M}$ from the covariant derivative of $\tau$. Comparing Eqs.~\eqref{Deta} and~\eqref{Db12} with the general formula~\eqref{Dtau}, we conclude that \begin{equation}\label{embedding_tensor_xi} \xi_i = - \epsilon_{ij} \tor^j\ , \end{equation} and $\xi_{\bar{\imath}} = \xi_5 = \xi_6 = \xi_I = 0$. 
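As a cross-check of \eqref{tau} (schematic, up to the overall normalization of \eqref{N=4_canonical_kin}): with $\tau = \tfrac12(-b_{12} + \iu \e^{-\eta})$ one has

```latex
\diff\tau = -\tfrac12 \big( \diff b_{12} + \iu\, \e^{-\eta}\, \diff\eta \big) \ ,
\qquad \Im\tau = \tfrac12\, \e^{-\eta} \ ,
% so the canonical kinetic term reproduces the b_12 and eta kinetic terms:
\frac{\diff\tau \wedge \ast\, \diff\bar\tau}{(\Im\tau)^2}
 = \e^{2\eta}\, \diff b_{12} \wedge \ast\, \diff b_{12}
 + \diff\eta \wedge \ast\, \diff\eta \ ,
```

which matches the combination $\tfrac12 \e^{2\eta} \big( \vert {\mathcal{D}} b_{12}\vert^2 + \vert {\mathcal{D}} \e^{-\eta} \vert^2 \big)$ appearing in \eqref{n=4scalar}, since $\e^{2\eta} \vert \diff \e^{-\eta} \vert^2 = \vert \diff\eta \vert^2$.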
On the other hand, the components $f_{MNP}$ of the embedding tensor are most easily determined from the non-Abelian field strengths of the vector fields~$V^{M+}$. It turns out that setting \begin{subequations}\label{embedding_tensor_f} \begin{align} f_{ij\bar{\imath}} & = - \tfrac12 \epsilon_{ij} \delta_{\bar{\imath} k} \tor^k \ , \\ f_{i56} & = \tfrac12 \epsilon_{ij} \tor^j \ , \\ f_{iIJ} & = - \Tor_{iIJ} \ , \end{align} \end{subequations} in the general formula~\eqref{fieldstrengths_embedding_tensor} leads to agreement with the field strengths computed in~\eqref{DGi}, \eqref{DCI} and \eqref{DCtilde}. Moreover, it can be checked that~\eqref{embedding_tensor_xi} and \eqref{embedding_tensor_f} satisfy the quadratic constraints of Ref.~\cite{Schon:2006kz}, \begin{equation} \xi^M \xi_M = 0 \ , \qquad \xi^M f_{MNP} = 0 \ , \qquad 3 f_{R[MN} f_{PQ]}{}^R - 2 \xi_{[M} f_{NPQ]} = 0 \ , \end{equation} where square brackets denote antisymmetrization of the corresponding indices. That the first two constraints are satisfied follows trivially from the expressions~\eqref{embedding_tensor_xi} and \eqref{embedding_tensor_f} with the metric \eqref{SO6nmetric}. The third one follows from the commutation relation satisfied by the matrices $\Tor^I_{iJ}$ given in Eq.~\eqref{XXXX}, which, as we saw, is a consequence of demanding nilpotency of the exterior differential acting on the two-forms $\omega^I$. We now have all the information we need in order to write down the action with charged fields in the electric frame. 
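For instance, the first two constraints can be spelled out directly (a quick check, with all index placements as in \eqref{SO6nmetric}). Since $\eta_{MN}$ pairs the $i$- and $\bar\imath$-blocks, raising the index of $\xi_M$ moves its only non-vanishing components into the $\bar\imath$-block, where $\xi_{\bar\imath} = 0$:

```latex
\xi^{\bar\imath} = \delta^{\bar\imath j}\, \xi_j \ , \qquad
\xi^{i} = \xi^{5} = \xi^{6} = \xi^{I} = 0
\qquad \Longrightarrow \qquad
\xi^M \xi_M = \xi^{\bar\imath}\, \xi_{\bar\imath} = 0 \ .
% The only candidate contraction in the second constraint involves f_{ij\bar k}
% (equal to f_{\bar k ij} by total antisymmetry):
\xi^{\bar k} f_{\bar k ij}
 = \big( {-\delta^{\bar k l} \epsilon_{lm} \tor^m} \big)
   \big( {-\tfrac12\, \epsilon_{ij}\, \delta_{\bar k p}\, \tor^p} \big)
 = \tfrac12\, \epsilon_{ij}\, \epsilon_{pm}\, \tor^m \tor^p = 0 \ ,
% since the antisymmetric epsilon is contracted with the symmetric product t^m t^p.
```

The third constraint is the one that genuinely requires the commutation relations of the $\Tor_i$ matrices.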
The total field strength for the electric vector field $B^{\bar{\imath}}$ in the action~\eqref{N=4_canonical_kin} is then \begin{equation}\label{fieldstrength_B_electric} F^{\bar{\imath} +} = \diff B^{\bar{\imath}} + \tfrac12 \delta^{i\bar{\imath}} \big[ \epsilon_{ik} \tor^k (\delta_{j\bar{\jmath}} G^j \wedge B^{\bar{\jmath}} - A \wedge \tilde{C}) + \Tor_{iIJ} C^I \wedge C^J - \epsilon_{ij} \tor^j B^{++}\big] \ , \end{equation} while the topological term is given by \begin{equation}\begin{aligned}\label{topological_action} S_\mathrm{top} = \tfrac14 \int_{M_{1,3}} & \Big[ B^{++} \wedge \tor^j {\mathcal{D}} B_j - \tor^i {\mathcal{D}} B_i \wedge (\delta_{j\bar{\jmath}} B^{\bar{\jmath}} \wedge {\mathcal{D}} G^j + \tilde{C} \wedge \diff A) \\ & \qquad + 2 \tor^i B_i \wedge (\delta_{j\bar{\jmath}} G^j \wedge B^{\bar{\jmath}} + A \wedge \tilde{C} + \tfrac12 \eta_{IJ} C^I \wedge {\mathcal{D}} C^J) \Big] \ . \end{aligned}\end{equation} Using the expressions for $f_{MNP}$, $M_{MN}$ and $\eta_{MN}$, it can be shown that the potential in \eqref{N=4pot} agrees with the potential \eqref{n=4potential} obtained from the KK reduction. To summarize, we have obtained an action of the form given in \eqref{N=4_canonical_kin}, \eqref{N=4top} and \eqref{N=4pot}. In order to write the action in this form, we had to introduce extra vector fields $B^{\bar{\imath}}$, as well as a tensor field $B^{++}$, which appears in the field strength $F^{\bar{\imath}+}$. To see that this form of the action is equivalent to the action given in equations \eqref{n=4scalar}, \eqref{n=4vector} and \eqref{n=4potential}, one can use the equations of motion for $B^{++}$ to eliminate $B^{++}$ and $B^{\bar{\imath}}$. This reduces the action for the vector fields to the one in \eqref{n=4vector}. \subsection{Killing vectors and gauge algebra}\label{Killing} Finally, let us determine the gauge group which arises from the compactifications studied in this paper. 
It will be useful to collectively denote all $(6n+2)$ scalar fields in the effective action by \begin{equation} \varphi^\Lambda = (b_{12}, \eta, \phi, \tilde{g}_{ij}, \rho, \zeta^x_I, a_i, \gamma_i, c_i^I, \beta, b_I)\ , \qquad \Lambda = 1, \ldots, 6n+2 \ . \end{equation} Then the Killing vectors $k_{M\alpha} = k^\Lambda_{M\alpha}(\varphi) \frac\partial{\partial\varphi^\Lambda}$ can be read off from the covariant derivatives of these fields in Eqs.~\eqref{fieldstrengths_NS}, \eqref{fieldstrengths_RR} and~\eqref{covderivs_gamma_beta} by comparing with the general formula \begin{equation} {\mathcal{D}} \varphi^\Lambda = \diff \varphi^\Lambda - k^\Lambda_{M\alpha}(\varphi) V^{M\alpha} \ . \end{equation} Doing this, we obtain the following expressions for the Killing vectors, \begin{equation} \begin{aligned} k_{i+} = {} & \epsilon_{ij} \tor^j \Big( b_{12} \frac\partial{\partial b_{12}} - \frac\partial{\partial \eta} + \frac\partial{\partial \rho} \Big) - \Tor^J_{iI} \zeta^x_J \frac\partial{\partial \zeta^x_I} + \tor^j (\epsilon_{ik} \tilde{g}_{jl} + \epsilon_{il} \tilde{g}_{jk} - \epsilon_{ij} \tilde{g}_{kl}) \frac\partial{\partial \tilde{g}_{kl}} \\ & \qquad + \epsilon_{ij} \tor^k a_k \frac\partial{\partial a_j}+ \epsilon_{jk} \tor^k \gamma_i \frac\partial{\partial \gamma_j} + \big(\epsilon_{ij} \tor^k \delta_I^J - \delta_j^k \tilde{\Tor}_{iI}{}^J \big) c_k^I \frac\partial{\partial c_j^J} - \tilde{\Tor}^J_{iI} b_J \frac\partial{\partial b_I} \ , \\ k_{6+} = {} & \epsilon_{ij} \tor^j \frac\partial{\partial \gamma_i} \ , \qquad\quad k_{I+} = {} \tilde{\Tor}_{iI}{}^J \Big(\frac\partial{\partial c_i^J} - \tfrac12 \epsilon^{ij} c_{jJ} \frac\partial{\partial \beta} \Big) \ , \qquad k_{i-} = {} \epsilon_{ij} \tor^j \frac\partial{\partial b_{12}} \ , \\ k_{\bar{\imath}\pm} = {} & k_{5\pm} = k_{6-} = k_{I-} = 0 \ . 
\end{aligned} \end{equation} Now we can compute the Lie brackets for this set of vectors to obtain \begin{equation}\label{Lie_brackets} \begin{aligned} {}[k_{i+}, k_{j+}] & = -\epsilon_{ij} \tor^k k_{k+} \ , \qquad [k_{i+}, k_{6+}] = -\epsilon_{ij} \tor^j k_{6+} \ , \\ [k_{i+}, k_{I+}] & = -\tilde{\Tor}_{iI}{}^J k_{J+} \ , \qquad [k_{i+}, k_{j-}] = \epsilon_{jk} \tor^k k_{i-} \ , \end{aligned} \end{equation} with all other brackets vanishing. Inspecting \eqref{differential_algebra}, we see that by choosing appropriate linear combinations of $v^1$ and $v^2$ we can set $\tor^1=0$ without loss of generality and then rename $\tor^2\equiv\tor$. If we do this, $k_{2-}$ is zero, and the non-vanishing Lie brackets~\eqref{Lie_brackets} read \begin{equation} \begin{aligned} {}[k_{1+}, k_{2+}] & = \tor k_{2+} \ , \qquad [k_{1+}, k_{1-}] = -\tor k_{1-} \ , \\ [k_{1+}, k_{6+}] & = \tor k_{6+} \ , \qquad [k_{1+}, k_{I+}] = \tfrac12 \tor k_{I+} + \Tor^J_{1I} k_{J+} \ , \\ [k_{2+}, k_{I+}] & = \Tor^J_{2I} k_{J+} \ . \end{aligned} \end{equation} This corresponds to the solvable algebra $(\mathbb{R}_{k_{6+}} \times \mathbb{R}_{k_{1-}} \times (\mathbb{R}^n_{k_{I+}} \rtimes \mathbb{R}_{k_{2+}})) \rtimes \mathbb{R}_{k_{1+}}$, where in the first semi-direct product, $\mathbb{R}_{k_{2+}}$ acts on $\mathbb{R}^n_{k_{I+}}$ by means of the matrix $\Tor^J_{2I}$, while in the second, $\mathbb{R}_{k_{1+}}$ acts on $\mathbb{R}_{k_{6+}} \times \mathbb{R}_{k_{1-}} \times (\mathbb{R}^n_{k_{I+}} \rtimes \mathbb{R}_{k_{2+}})$ through the matrix \begin{equation} \mathrm{diag} (\tor, -\tor, \tfrac12 \tor \delta^J_I + \Tor^J_{1I}, \tor) \ . 
\end{equation} That the algebra \eqref{Lie_brackets} is indeed consistent with gauged $\mathcal{N} = 4$ supergravity can be seen by defining the following matrices~\cite{Schon:2006kz} \begin{equation}\label{X_matrices} X_{M+} = \begin{pmatrix} X_{M+N+}{}^{P+} & 0 \\ 0 & X_{M+N-}{}^{P-} \end{pmatrix} \ , \qquad X_{M-} = \begin{pmatrix} 0 & X_{M-N+}{}^{P-} \\ 0 & 0 \end{pmatrix} \ , \end{equation} with non-vanishing entries given in terms of the embedding tensors by \begin{equation} \begin{aligned} X_{M+N+}{}^{P+} & = -f_{MN}{}^P - \tfrac12 (\delta^P_M \xi_N - \delta^P_N \xi_M - \eta_{MN} \xi^P) \ , \\ X_{M+N-}{}^{P-} & = -f_{MN}{}^P - \tfrac12 (\delta^P_M \xi_N + \delta^P_N \xi_M - \eta_{MN} \xi^P) \ , \\ X_{M-N+}{}^{P-} & = - \delta^P_N \xi_M \ . \end{aligned} \end{equation} As discussed in Ref.~\cite{Schon:2006kz}, the non-Abelian gauge algebra of $\mathcal{N} = 4$ supergravity should be reproduced by the commutators \begin{equation}\label{comm_X} \begin{aligned} {} [X_{M+}, X_{N+}] & = X_{M+N+}{}^{P+} X_{P+} \ , \\ [X_{M+}, X_{N-}] & = X_{M+N-}{}^{P-} X_{P-} = -X_{N-M+}{}^{P-} X_{P-} \ , \\ [X_{M-}, X_{N-}] & = 0 \ . \end{aligned} \end{equation} Indeed, using the expressions \eqref{embedding_tensor_xi} and \eqref{embedding_tensor_f} for the embedding tensor in the formulas \eqref{X_matrices}--\eqref{comm_X}, the algebra \eqref{Lie_brackets} is recovered. \section{Conclusions}\label{section:Conclude} In this paper, we considered type IIA supergravity compactified on a specific class of six-dimensional manifolds which have $\su2$ structure. Such manifolds admit a pair of globally defined spinors and can be further characterized by their non-trivial intrinsic torsion. Among the $\su2$-structure manifolds one also finds the Calabi-Yau manifold $K3\times T^2$, for which the intrinsic torsion vanishes. 
Furthermore, the entire class of six-dimensional $\su2$-structure manifolds necessarily has an almost product structure of a four-dimensional component times a two-dimensional component, generalizing the Calabi-Yau case. However, in order to simplify the analysis in this paper, we confined our attention to torsion classes which lead to an integrable almost product structure. For this class of compactifications (with the additional requirement of the absence of massive gravitino multiplets) we determined the resulting four-dimensional $\mathcal{N}=4$ low-energy effective action by performing a Kaluza-Klein reduction. By appropriate dualizations of one- and two-forms it was possible to go from the `natural' field basis of the KK reduction to a supergravity field basis where the consistency with the `standard' $\mathcal{N}=4$ form as given in \cite{Schon:2006kz} could be established. In that process, we determined the components of the embedding tensor, or in other words the couplings of the $\mathcal{N}=4$ action, in terms of the intrinsic torsion. The resulting gauge group is solvable, as is usually the case for these compactifications. \section*{Acknowledgments} B.S.~would like to acknowledge useful discussions with Stefan Groot-Nibbelink, Olaf Hohm, Andrei Micu, Ron Reid-Edwards, Henning Samtleben and Martin Weidner. This work was partly supported by the German Science Foundation (DFG) under the Collaborative Research Center (SFB) 676 ``Particles, Strings and the Early Universe''. The work of H.T.\ is supported by the DSM CEA/Saclay, the ANR grant 08-JCJC-0001-0 and the ERC Starting Independent Researcher Grant 240210 - String-QCD-BH. \vfill \newpage
\section{Introduction} Counterexamples to the assertion that every open 3-manifold embeds in a compact 3-manifold have been known for over 60 years. Indeed, there are plenty of such examples even for open manifolds which are algebraically very simple (e.g., contractible). A rudimentary version of such examples can be traced back to \cite{Whi35} (the first stage of the construction is depicted in Figure \ref{whitehead}), where Whitehead, surprisingly, found the first example of a contractible open 3-manifold different from $\mathbb{R}^3$. However, the Whitehead manifold does embed in $S^3$. In 1962, Kister and McMillan gave the first counterexample in \cite{KM62}, where they proved that an example proposed by Bing (see Figure \ref{3_1knot}) does not embed in $S^3$ although every compact subset of it does. They also conjectured that Bing's example is a desired counterexample, i.e., that it embeds in no compact 3-manifold. This conjecture was later confirmed by Haken using his famous finiteness theorem \cite{Hak68}, which states that there is an upper bound on the number of incompressible, pairwise non-parallel surfaces in a compact 3-manifold. Similar examples can readily be derived from Haken's finiteness theorem (or see \cite[Thm. 2.3]{MW79}). In 1977, an interesting example (see Figure \ref{sternfeld}) was given in Sternfeld's PhD dissertation \cite{Ste77}. Instead of using Haken's finiteness theorem, Sternfeld applied covering space theory to produce a contractible open $n$-manifold ($n\geq 3$) that embeds in no compact $n$-manifold\footnote{It doesn't appear that Haken's finiteness theorem can be used to produce high-dimensional examples.}. 
His constructions can be viewed as a modification of Bing's\footnote{A connection between Bing's and Sternfeld's examples is illustrated in \S \ref{section: questions}.}, but he claimed that his examples cannot embed as open subsets in any compact, locally connected and locally 1-connected metric space, which is much more general than a compact manifold. More importantly, at the time of writing, Sternfeld's constructions are the only known examples of this phenomenon in high dimensions. \begin{remark} There is an error in Sternfeld's dissertation which directly affects his whole argument. In the process of proving our main theorem, we correct this error, thereby confirming the validity of his example (see Remark \ref{Error} in \S \ref{section: The surjection} for details). \end{remark} It is natural to ask whether Bing's example can embed in a more general compact space, say, a compact absolute neighborhood retract or a compact, locally connected and locally 1-connected 3-dimensional metric space. Here we answer the above question in the negative; throughout, $W^3$ denotes Bing's example, whose construction is reviewed in \S \ref{section: The constructiion of a 3-dimensional example}. \begin{theorem}\label{Thm: W^3 embeds in no compact ANR} $W^3$ embeds as an open subset in no compact, locally connected, locally 1-connected metric space. In particular, $W^3$ embeds in no compact $3$-manifold. \end{theorem} Making use of the high-dimensional construction developed in \cite{Ste77}, we extend Theorem \ref{Thm: W^3 embeds in no compact ANR} to all finite dimensions. \begin{theorem}\label{Thm: high dimensional collection} There exists a contractible open $n$-manifold $W^n$ ($n\geq 4$) which embeds as an open subset in no compact, locally connected, locally 1-connected metric $n$-space. Hence, $W^n$ embeds in no compact $n$-manifold. \end{theorem} The strategy of our proof relies heavily on the techniques and results from Sternfeld's dissertation \cite{Ste77}. 
Succinctly speaking, the key is to show that the union of $W^3$ and a 3-ball (advertised as a knot complement $K_j$) has a finite cover which contains infinitely many pairwise disjoint incompressible surfaces. Many results from \cite{Ste77} will not be re-proved here, but we will take shortcuts afforded by knot theory and the software GAP \cite{GAP18}. The outline of this paper is as follows: \S \ref{section: The constructiion of a 3-dimensional example} gives a detailed review of the construction of Bing's example and discusses its crucial connection with a knot space $K_j$. That is, showing that Bing's example embeds in no compact, locally connected and locally 1-connected metric space amounts to showing that the ranks of the groups $\pi_1(K_j)$ are unbounded. Towards that goal, in \S \ref{section: A presentation} we find the Wirtinger presentation of $\pi_1(K_j)$ and in \S \ref{section: The surjection}, we define an important surjection of $\pi_1(K_j)$ onto $\mathbb{A}_5$. Meanwhile, we fix an error in Sternfeld's dissertation. \S \ref{section: properties of cube hole} paves the road for \S \ref{section: proof of proposition} by showing that the key is to focus on an object called a cube with a trefoil-knotted hole. \S \ref{section: proof of proposition} proves Theorem \ref{Thm: W^3 embeds in no compact ANR} using the results obtained in \S \ref{section: The constructiion of a 3-dimensional example}--\S \ref{section: properties of cube hole}. The proof of Theorem \ref{Thm: high dimensional collection} is presented at the end of this section. In \S \ref{section: questions}, we discuss some questions related to this work. \section{The construction of a 3-dimensional example}\label{section: The constructiion of a 3-dimensional example} First, we reproduce the example originally proposed by Bing, i.e., a 3-dimensional contractible open manifold $W^3$. Let $\{T_l|l = 0,1,2,\dots \}$ be a collection of disjoint solid tori standardly embedded in $S^3$. 
Let the solid torus $T_l'$ be embedded in $\operatorname{Int}T_l$ as in Figure \ref{3_1knot}.\footnote{Changing the cube with a trefoil-knotted hole $C_l$ shown in Figure \ref{3_1knot} can result in a different contractible open manifold. For instance, one can replace $C_l$ by a cube with a square-knotted hole. Proposition \ref{Prop: W is contractible} is true for all contractible manifolds constructed in this fashion.} Let the oriented simple closed curves $\alpha_l$, $\beta_l$, $\gamma_l$ and $\delta_l$ be as shown in Figure \ref{3_1knot}. The curves $\alpha_l$ and $\beta_l$ are transverse in $\partial T_l$ and meet at the point $q_l \in \partial T_l$. Similarly, the curves $\gamma_l$ and $\delta_l$ are transverse in $\partial T_l'$ and meet at the point $p_l \in \partial T_l'$. For $l \geq 1$, let $L_l = T_l \backslash \operatorname{Int}T_l'$. Define an embedding $h_{l+1}^{l}: T_l \to T_{l+1}$ so that $T_l$ is carried onto $T_{l+1}'$ with $h_{l+1}^{l}(\alpha_l) = \delta_{l+1}$ and $h_{l+1}^{l}(\beta_l) = \gamma_{l+1}$. $W^3$ is the direct limit of the $T_l$'s, denoted $W^3 = \lim\limits_{l\to \infty}(T_l,h_{l+1}^{l})$. Equivalently, one can view $W^3$ as the quotient space $\sqcup_l T_l \xrightarrow{q} W^3$, where $\sqcup_l T_l$ is the disjoint union of the $T_l$'s and $q$ is the quotient map induced by the relation $\sim$ on $\sqcup_l T_l$: if $x\in T_i$ and $y\in T_j$, then $x \sim y$ iff there exists a $k$ larger than $i$ and $j$ such that $h_{k}^{i}(x) = h_{k}^{j}(y)$, where $h_{t}^{s} = h_{t}^{t-1} \circ h_{t-1}^{t-2} \circ \cdots \circ h_{s+2}^{s+1} \circ h_{s+1}^{s}$ for $t > s$. Let $\iota_l: T_l \hookrightarrow \sqcup_l T_l$ be the obvious inclusion map. The composition $q \circ \iota_l$ embeds $T_l$ in $W^3$ as a closed subset. Injectivity follows from the injectivity of the maps $h_{k+1}^{k}$; the image is closed since for $j > l$ the set $h_{j}^{l}(T_l)$ is closed in $T_j$. Let $T_l^*$ denote $q \circ \iota_l(T_l)$. 
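The equivalence relation defining the direct limit can be made concrete in a toy model (purely illustrative, with a hypothetical injective bonding map on integers standing in for the embeddings $h_{l+1}^{l}$ of solid tori):

```python
def h(x):
    # A hypothetical injective bonding map X_l -> X_{l+1}; the doubling map
    # stands in for the embeddings h_{l+1}^l of the construction.
    return 2 * x

def push(level, x, target):
    # The composite map h_target^level = h o h o ... o h (target - level times).
    for _ in range(target - level):
        x = h(x)
    return x

def equivalent(p, q):
    # x in X_i and y in X_j represent the same point of the direct limit
    # iff h_k^i(x) = h_k^j(y) for some k >= i, j (any such k works, since
    # the bonding maps are injective).
    (i, x), (j, y) = p, q
    k = max(i, j)
    return push(i, x, k) == push(j, y, k)

assert equivalent((0, 3), (1, 6))       # 3 in X_0 maps to 6 in X_1
assert not equivalent((0, 3), (1, 5))
```

Injectivity of the bonding maps is what makes the relation independent of the chosen $k$, exactly as for the embeddings $h_{k}^{i}$ above.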
$T_l^*$ is embedded in $T_{l+1}^*$ just as $h_{l+1}^{l}(T_l)$ ($= T_{l+1}'$) is embedded in $T_{l+1}$. Hence, Figure \ref{3_1knot} can be viewed as a picture of the embedding of $T_l^*$ in $T_{l+1}^{*}$. In general, for $k > l$, $T_l^*$ is embedded in $T_k^*$ just as $h_{k}^{l} (T_l)$ is embedded in $T_k$. \begin{figure}[h!] \centering \includegraphics[ width=8cm, height=10cm]{3_1knot} \caption{$L_l = T_l \backslash T_l'$. The ``inner'' boundary component of $L_l$ is $\partial T_l'$. The ``outer'' boundary component of $L_l$ is $\partial T_l$.} \label{3_1knot} \end{figure} \begin{proposition}\label{Prop: W is contractible} $W^3$ is a contractible, connected open $3$-manifold. \end{proposition} \begin{proof} By the construction described above, $W^3$ is an expanding union of the $T_l^*$'s, hence connected. The interior of each $h_{j}^{l}(T_l)$ is open in $T_j$, so $\operatorname{Int}T_l^*$ is open in $W^3$. Since $T_l^*$ is contained in $\operatorname{Int}T_{l+1}^*$, $W^3$ is an open 3-manifold. To show the contractibility of $W^3$, we first triangulate $W^3$ by choosing, for each $T_l$ ($l\geq 0$), a simplicial subdivision such that each embedding $h_{k+1}^{k}$ ($k \geq 0$) is simplicial with respect to the chosen subdivisions of its domain and range. Let $H: W^3 \times [0,1] \to W^3$ be the contraction to be constructed. We define $H$ inductively on the skeleta of $W^3 \times [0,1]$. Pick $p\in W^3$ to be the point to which we want to contract. Map each set $v \times [0,1]$, $v$ a vertex, to a path beginning at $v$ and ending at $p$. Let $\Delta^{(1)}$ be a 1-simplex of $W^3$. Define the restriction $H|_{\Delta^{(1)}\times \{0\}}$ to be the identity and $H|_{\Delta^{(1)}\times \{1\}}$ to be the constant map taking all points to $p$. Since $\partial \Delta^{(1)}$ lies in the 0-skeleton of $W^3$, $H$ has already been defined on $\partial (\Delta^{(1)} \times [0,1]) = (\partial \Delta^{(1)} \times [0,1]) \cup (\Delta^{(1)} \times \{0,1\})$. Note that $T_l^*$ contracts in $T_{l+1}^*$ (see Figure \ref{3_1knot}). 
$H$ can then be extended to the rest of $\Delta^{(1)} \times [0,1]$ using the fact that $H|_{\partial (\Delta^{(1)}\times [0,1])}$ contracts in $W^3$. Doing this for all 1-simplices defines $H$ on the 1-skeleton times $[0,1]$. One extends over the 2- and 3-skeleta times $[0,1]$ inductively. \end{proof} \begin{definition} A topological space $X$ is \emph{locally \emph{1}-connected at the point} $x\in X$ if for each neighborhood $U$ of $x$ there is a neighborhood $V$ of $x$, $V\subset U$, such that every loop in $V$ contracts in $U$. We say that $X$ is \emph{locally \emph{1}-connected} if $X$ is locally 1-connected at each of its points. \end{definition} Our approach to proving Theorem \ref{Thm: W^3 embeds in no compact ANR} does not rely on Haken's finiteness theorem \cite{Hak68}. Instead, we take advantage of the covering space argument in \cite{Ste77}. Suppose there is a compact, locally connected, locally 1-connected metric space $U$ such that $U$ contains $W^3$ as an open subset. By taking the component of $U$ containing $W^3$ we may assume that $U$ is connected. Then the following result ensures that $\pi_1(U \backslash \operatorname{Int}T^*_0)$ must be finitely generated. \begin{lemma}\cite[Lemma 1.1, P.7]{Ste77} If $X$ is a compact, connected, locally connected, locally $1$-connected metric space, then $\pi_1(X)$ is finitely generated. \end{lemma} Instead of working with $\pi_1(U\backslash \operatorname{Int}T^*_0)$ directly, it is easier to focus on a knot space $K_j = S^3 \backslash \operatorname{Int}h_j^0(T_0)$ ($j \geq 1$).\footnote{In \cite{Ste77}, $K_i$ (instead of our $K_j$) denotes the knot space corresponding to his 3-dimensional example $W$. In addition, $K_i$ is homeomorphic to an amalgamation $A_i$ in his thesis. At the end of this section, we also decompose $K_j$ into an amalgamation (see (\ref{amalgamation of K})).} Combining with Claim \ref{Claim}, we obtain the following observation. 
\begin{claim} $\pi_1(K_j)$ is a homomorphic image of $\pi_1(U \backslash \operatorname{Int}T^*_0)$. \end{claim} \begin{proof} Let $p_j$ and $p_j'$ be the quotient maps in the commutative diagram (see Figure \ref{Commutative diagram}). \begin{figure}[h] \begin{center} \begin{tikzpicture}[>=angle 90] \matrix(a)[matrix of math nodes, row sep=3em, column sep=2.5em, text height=1.5 ex, text depth=0.25ex] {T_j^*\backslash \operatorname{Int}T_0^* & U \backslash \operatorname{Int}T_0^* \\ (T_j^*\backslash \operatorname{Int}T_0^*)/\partial T_j^* & (U\backslash \operatorname{Int}T_0^*)/(U \backslash \operatorname{Int}T_j^*) \\}; \path[->](a-1-1) edge node[above]{$\iota_j$} (a-1-2); \path[->](a-2-1) edge node[above]{$g_j$} node[below]{$\approx$} (a-2-2); \path[->] (a-1-2) edge node[right]{$p_j$} (a-2-2); \path[->] (a-1-1) edge node[right]{$p_j'$} (a-2-1); \end{tikzpicture} \end{center} \caption{Commutative diagram} \label{Commutative diagram} \end{figure} The inclusion $\iota_j$, followed by $p_j$, induces the map $g_j$ since the restriction of $p_j$ to $T_j^* \backslash \operatorname{Int}T_{0}^{*}$ collapses $\partial T_{j}^{*}$ to a point. It is not hard to see that $g_j$ is actually a homeomorphism. Since $\partial T_{j}^{*}$ is collared in $T_{j}^{*}\backslash \operatorname{Int}T_{0}^{*}$, Lemma \ref{lemma: collar} implies that $p_j'$ induces a surjection on fundamental groups. By the commutativity of the diagram in Figure \ref{Commutative diagram}, $p_{j^*}' = g_{j^*}^{-1}p_{j^*}\iota_{j^*}$, where $p_{j^*}'$, $g_{j^*}$, $p_{j^*}$ and $ \iota_{j^*}$ are the homomorphisms induced by the maps $p_j'$, $g_j$, $p_j$ and $\iota_j$, respectively. Since $p_{j^*}'$ is a surjection, so is $g_{j^*}^{-1}p_{j^*}$. Hence, $\pi_1((T_j^*\backslash \operatorname{Int}T_0^*)/\partial T_j^*)$ is a homomorphic image of $\pi_1(U\backslash\operatorname{Int}T_0^*)$. According to the construction of $W^3$, the pair $(T_j^*,T_0^*)$ is homeomorphic to the pair $(T_j,h_j^0 (T_0))$. 
Then the claim follows from Claim \ref{Claim}. \end{proof} Since the rank\footnote{When we say the \emph{rank} of a group $G$, denoted by $\operatorname{Rank}G$, we mean the smallest cardinality of a generating set for $G$.} of a group must be at least as large as that of any homomorphic image, it suffices to show that the rank of $\pi_1(K_j)$ is unbounded. The space $K_j$ is advertised as a ``knot space'' because it can be viewed as a knot complement. To see this, we need two important tools for producing knots. The first one is \begin{definition} Let $K_P$ be a non-trivial knot in $S^3$ and $V_P$ an unknotted solid torus in $S^3$ with $K_P\subset V_P \subset S^3$. Let $K_C \subset S^3$ be another knot and let $V_C$ be a tubular neighborhood of $K_C$ in $S^3$. Let $h: V_P \to V_C$ be a homeomorphism and let $K_W$ be $h(K_P)$. We say $K_C$ is a \emph{companion} of any knot $K_W$ constructed (up to knot type) in this manner. If $h$ is \emph{faithful}, meaning that $h$ takes the preferred longitude\footnote{``Preferred longitude'' means that $K_W$ has writhe number zero.} and meridian of $V_P$ respectively to the preferred longitude and meridian of $V_C$, we say $K_W$ is an \emph{untwisted Whitehead double} of $K_C$. Otherwise, $K_W$ is a \emph{twisted Whitehead double}. For instance, Figure \ref{whiteheaddouble_3_1} shows a 3-twisted Whitehead double of a trefoil knot. The pair $(V_P, K_P)$ is the \emph{pattern} of $K_W$. \end{definition} \begin{figure}[h!] \centering \includegraphics[ width=10cm, height=6cm]{whiteheaddouble_3_1} \caption{A 3-twisted Whitehead double of a trefoil knot} \label{whiteheaddouble_3_1} \end{figure} The second tool is based on a type of connected sum of a pair of manifolds $(M_{1}^{m},N_{1}^{n})\#(M_{2}^m,N_{2}^{n})$, where $N_{i}^{n}$ is a locally flat submanifold of $M_{i}^{m}$. Treat the above pair as $(S^3, k_1) \# (S^3,k_2)$ where the $k_i$ are tame knots. 
Remove a standard ball pair $(B_i^3,B_i^1)$ from each $(S^3,k_i)$ and glue the resulting pairs by a homeomorphism $h: (\partial B_2^3,\partial B_2^1) \to (\partial B_1^3,\partial B_1^1)$ to form the pair connected sum. For convenience, we write $k_1 \# k_2$ rather than using pairs of manifolds. See \cite{Rol76} for details. To give readers a better feeling for the group $\pi_1(K_j)$, we show that $\pi_1(K_j)$ is isomorphic to $\pi_1 \left( (T_j \backslash \operatorname{Int} h_j^0 (T_0)) /\partial T_j \right)$. Geometrically, $K_j$ is the space obtained by sewing the solid torus $S^3 \backslash \operatorname{Int}T_j$ to $T_j \backslash \operatorname{Int}h_{j}^0 (T_0)$ along $\partial T_j$. We decompose $S^3 \backslash \operatorname{Int}T_j$ into two 3-cells $B_1$ and $B_2$, i.e., $S^3 \backslash \operatorname{Int}T_j = B_1 \cup B_2$, where $B_1$ is a thickened meridional disk $D$ in $S^3 \backslash \operatorname{Int}T_j$ with $\partial D = \alpha_j$ (see Figure \ref{fig 3}) and $B_2$ is the closure of the complement of $B_1$ in $S^3 \backslash \operatorname{Int}T_j$. First, sew $B_1$ to $T_j \backslash \operatorname{Int}h_{j}^0 (T_0)$ along an annular neighborhood of $\alpha_j$ in $\partial T_j$. By Seifert-van Kampen, the inclusion $T_j \backslash \operatorname{Int}h_{j}^0 (T_0) \hookrightarrow (T_j \backslash \operatorname{Int}h_j^0 (T_0)) \cup B_1$ induces a surjection on fundamental groups whose kernel is the normal closure of the curve $\alpha_j$ in $\pi_1(T_j \backslash \operatorname{Int}h_{j}^0 (T_0))$. \begin{figure}[h!] \centering \includegraphics[ width=8cm, height=10cm]{thicken_torus_3_1} \caption{$\beta_j$ contracts in $S_{j-1}^{3}\backslash \operatorname{Int}h_{j}^{0}(T_0)$, where $h_{j}^{0}(T_0)$ is not pictured.} \label{fig 3} \end{figure} Adding $B_2$ to $(T_j \backslash \operatorname{Int}h_j^0 (T_0)) \cup B_1$ to form the knot complement $K_j$ does not affect the fundamental group. This follows readily from Seifert-van Kampen. 
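The two gluing steps can be summarized as follows (a sketch, writing $X = T_j \backslash \operatorname{Int}h_j^0(T_0)$ and $A$ for the annular neighborhood of $\alpha_j$ along which $B_1$ is attached):

```latex
% B_1 is a 3-cell and A is an annulus, so pi_1(B_1) = 1 and pi_1(A) = <alpha_j>:
\pi_1(X \cup B_1) \;\cong\; \pi_1(X) \ast_{\pi_1(A)} \pi_1(B_1)
 \;\cong\; \pi_1(X) / \langle\!\langle \alpha_j \rangle\!\rangle \ .
% B_2 is simply connected and meets X cup B_1 in a 2-sphere (the complementary
% annulus of A in the boundary of T_j, capped off by the two meridional disks
% in the boundary of B_1), so attaching it changes nothing:
\pi_1(K_j) \;\cong\; \pi_1(X \cup B_1) \ast_{\pi_1(S^2)} \pi_1(B_2)
 \;\cong\; \pi_1(X) / \langle\!\langle \alpha_j \rangle\!\rangle \ .
```

Both amalgamations are over basepoints chosen in the respective intersections.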
Hence, the inclusion $T_j \backslash \operatorname{Int}h_j^0 (T_0) \hookrightarrow K_j$ induces a surjection on fundamental groups whose kernel is the normal closure of the curve $\alpha_j$ in $\pi_1(T_j \backslash \operatorname{Int}h_{j}^0 (T_0))$. \begin{claim}\label{Claim} $\pi_1(K_j)$ is isomorphic to $\pi_1 \left( (T_j \backslash \operatorname{Int} h_j^0 (T_0)) /\partial T_j \right)$. \end{claim} \begin{proof} It suffices to show that the meridian $\beta_j$ of $T_j$ is trivial in $\pi_1(K_j)$. In other words, we will show that $\beta_j$ contracts in the complement of $h_j^0(T_0)$. Consider Figure \ref{fig 3}. The image $h_{j}^0(T_0)$ (not pictured) is contained in $h_{j}^{j-1}(T_{j-1})$, which is in turn contained in the solid torus $A$. Since $A$ is an unknotted solid torus, $\beta_j$ bounds a disk in $S^3 \backslash A$. \end{proof} It is clear that $\pi_1(K_1)$ is isomorphic to the trefoil knot group. \begin{claim} $\pi_1(K_2)$ is isomorphic to the knot group of the connected sum of a trefoil knot and a $3$-twisted Whitehead double of a trefoil knot. \end{claim} \begin{proof} By the construction of $W^3$, $T_{1}^{*}$ embeds in $T_{2}^{*}$ just as $T_{0}^{*}$ embeds in $T_{1}^{*}$ (as shown in Figure \ref{3_1knot}). Note that the space $K_2 = S^3 \backslash \operatorname{Int}h_2^0(T_0)$ can be decomposed into $$(S^3 \backslash \operatorname{Int}T_{2}^{*})\cup (T_{2}^{*}\backslash \operatorname{Int}T_{1}^{*})\cup (T_{1}^{*}\backslash \operatorname{Int}T_{0}^{*}).$$ Since the solid torus $S^3 \backslash \operatorname{Int}T_{2}^{*}$ is glued to $T_{2}^{*}\backslash \operatorname{Int}T_{1}^{*}$ along $\partial T_{2}^{*}$, one can unlink the clasped portion of $T_{1}^{*}$ while keeping the way $T_{0}^{*}$ embeds in $T_{1}^{*}$ via an ambient isotopy $\Psi_t$ of $S^3$ starting at $\Psi_0 = \operatorname{Id}_{S^3}$. The image $\Psi_1(T_{1}^{*})$ is a tubular neighborhood of a trefoil knot, denoted $\mathcal{K}_{*}$.
Denote by $\mathcal{K}_{*}^{Wh}$ a twisted Whitehead double of $\mathcal{K}_{*}$ (as shown in Figure \ref{whiteheaddouble_3_1}). Restrict $\Psi_1$ to $T_{0}^{*}$ and deformation retract $\Psi_1(T_{0}^{*})$ onto its core. The core of $\Psi_1(T_{0}^{*})$ is the connected sum of $\mathcal{K}_{*}^{Wh}$ with a small trefoil knot; denote the small trefoil knot by $\mathcal{K}_{**}$. Consider the knot of Figure \ref{double of trefoil}. In this case, $\mathcal{K}_{1}^{Wh}$ is $\mathcal{K}_{*}^{Wh}$ and $\mathcal{K}_{1}$ is $\mathcal{K}_{**}$. It follows easily that $K_2$ is homotopy equivalent to $S^3 \backslash (\mathcal{K}_{*}^{Wh}\# \mathcal{K}_{**})$. \end{proof} \begin{figure}[h!] \centering \includegraphics[ width=9cm, height=8cm]{double_3_1} \vspace{-2em} \caption{The connected sum of a twisted Whitehead double of $\mathcal{K}_1$ and $\mathcal{K}_1 (\approx$ trefoil knot). Here ``$\approx$'' stands for homeomorphic.} \label{double of trefoil} \end{figure} Let $\mathcal{K}_1$ be a trefoil knot corresponding to the knot space $K_1$. Denote by $\mathcal{K}_2$ the knot $\mathcal{K}_1^{Wh} \# \mathcal{K}_1$, so that $\pi_1(S^3 \backslash \mathcal{K}_2) \cong \pi_1(K_2)$. Similarly, one can further find a knot $\mathcal{K}_3$ at the 3rd stage which is a connected sum of a twisted Whitehead double of $\mathcal{K}_2$ and $\mathcal{K}_1$. By iteration, a knot $\mathcal{K}_j$ can be viewed as $\mathcal{K}_{j-1}^{Wh}\# \mathcal{K}_1$. Let $G_{3_1}$ and $G^{Wh}_{j-1}$ be the knot groups of $\mathcal{K}_1$ and $\mathcal{K}^{Wh}_{j-1}$ respectively. By the definition of connected sum, there is a tame 2-sphere $S^2$ dividing $S^3$ into two balls $B_{Wh}$ and $B_1$ containing $\mathcal{K}^{Wh}_{j-1}$ and $\mathcal{K}_1$ respectively. The intersection of $\mathcal{K}^{Wh}_{j-1}$ and $\mathcal{K}_1$ is an arc $\zeta$ lying in $S^2$. View $\mathcal{K}_j = \mathcal{K}^{Wh}_{j-1}\#\mathcal{K}_1$ as the union of $\mathcal{K}^{Wh}_{j-1}$ and $\mathcal{K}_1$ minus $\operatorname{Int}\zeta$ (see Figure \ref{double of trefoil}).
Then we have the ``pushout'' commutative diagram of Figure \ref{pushout diagram}. \begin{figure}[h] \begin{center} \begin{tikzpicture}[>=angle 90] \matrix(a)[matrix of math nodes, row sep=6em, column sep=1em, text height=1.5 ex, text depth=0.25ex] { & \pi_1(S^2 \backslash \mathcal{K}_j) \cong \mathbb{Z} & \\ \pi_1(B_{Wh} \backslash \mathcal{K}^{Wh}_{j-1}) \cong G^{Wh}_{j-1} & & \pi_1(B_1 \backslash \mathcal{K}_1) \cong G_{3_1} \\ & \pi_1(S^3 \backslash \mathcal{K}_j) & \\}; \path[right hook->](a-1-2) edge (a-2-1); \path[right hook->](a-1-2) edge (a-2-3); \path[right hook->](a-2-1) edge node[right]{$\text{ }\iota_1$} (a-3-2); \path[right hook->](a-2-3) edge node[left]{$\iota_2$\text{ }} (a-3-2); \end{tikzpicture} \end{center} \caption{``Pushout'' commutative diagram} \label{pushout diagram} \end{figure} Clearly, the two upper homomorphisms in Figure \ref{pushout diagram} are injective. By the Seifert-van Kampen theorem, the other two homomorphisms $\iota_1,\iota_2$ are also injective. That means $$G_{j}= \pi_1(S^3 \backslash \mathcal{K}_j)=G^{Wh}_{j-1} \ast_{\langle \lambda \rangle} G_{3_1}$$ is a free product with amalgamation along an infinite cyclic group, where $[\lambda]$ is a generating loop class of $\pi_1(S^2 \backslash \mathcal{K}_j)$. In this set-up, $G^{Wh}_{j-1}$ and $G_{3_1}$ are two subgroups of $G_j$ and $\langle \lambda \rangle$ is a subgroup of both $G^{Wh}_{j-1}$ and $G_{3_1}$. Since both $G^{Wh}_{j-1}$ and $G_{3_1}$ abelianize to $\langle \lambda \rangle \cong \mathbb{Z}$, $G_j$ is a split amalgamated free product. Although the work in \cite{Wei99} guarantees the lower bound $\operatorname{Rank}G_{j} \geq 2$, the ultimate goal is to show that $\operatorname{Rank} G_{j}$ has no upper bound as $j\to \infty$.
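The fact that a knot group abelianizes to $\mathbb{Z}$ can be read off from any Wirtinger presentation: every relator is a conjugation, so in $H_1$ it merely equates two meridional generators. The following small sketch (our own illustration, not part of the construction) records the exponent-sum computation for the trefoil relator $b^{-1}a^{-1}b^{-1}aba$ (cf. Proposition \ref{Prop: presentation of 3_1 knot}):

```python
from collections import Counter

def exponent_sums(word):
    """Exponent sum of each generator in a group word.

    A word is a sequence of (generator, exponent) pairs read left to
    right; only these sums survive in the abelianization.
    """
    sums = Counter()
    for gen, exp in word:
        sums[gen] += exp
    return dict(sums)

# Trefoil relator b^{-1} a^{-1} b^{-1} a b a.
relator = [("b", -1), ("a", -1), ("b", -1), ("a", 1), ("b", 1), ("a", 1)]

# Abelianized, the relator reads a - b = 0, i.e. a = b, so H_1 is
# generated by a single meridian class and is infinite cyclic.
assert exponent_sums(relator) == {"a": 1, "b": -1}
```

Every relator of a Wirtinger presentation has this shape, which is why each factor of $G_j$ abelianizes onto $\langle \lambda \rangle \cong \mathbb{Z}$.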
At the time of writing, we do not know whether there is a direct knot-theoretic approach to this. So, we use covering space theory as developed by Sternfeld in \cite{Ste77}. We start by constructing a surjective homomorphism $\Phi_j: G^{Wh}_{j-1} \ast_{\langle \lambda \rangle} G_{3_1} \twoheadrightarrow \mathbb{A}_5$, where $\mathbb{A}_5$ is the alternating group on 5 letters. To that end, by the definition of $W^3$, we decompose $K_j$ into an amalgamation of $L_l$'s. That is, for $j \geq 1$, \begin{equation}\label{amalgamation of K} K_j \approx (S^3 \backslash \operatorname{Int}T_j) \cup_{\operatorname{Id}} L_j \cup_{h_{j}^{j-1}} L_{j-1} \cup_{h_{j-1}^{j-2}}\cdots \cup_{h_{2}^{1}} L_1, \end{equation} where the sewing homeomorphism $h_{l+1}^{l}$ identifies the boundary component $\partial T_l$ of $L_l$ to the boundary component $\partial T'_{l+1}$ of $L_{l+1}$. It is clear that $\pi_1(K_j)\cong G_j$. So, we convert the problem to finding a surjection $\pi_1(K_j) \twoheadrightarrow \mathbb{A}_5$, which will be discussed in the following two sections. \section{A presentation of $\pi_1(K_j)$}\label{section: A presentation} First we spell out a Wirtinger presentation, similar to what Sternfeld did in \cite[P.20--26]{Ste77}, for $\pi_1(L_l)$, where $l \geq 1$. Let $\Sigma_l$ and $\Omega_l$ be polyhedral simple closed curves contained in $S^3$ such that $S^3 \backslash (\Sigma_l \cup \Omega_l)$ deformation retracts onto $L_l$. $\Sigma_l$ and $\Omega_l$ can be viewed as cores of the solid tori $T_l'$ and $S^3 \backslash \operatorname{Int}T_l$ respectively (see Figures \ref{3_1knot} and \ref{double of 3_1_link}). Let the arc $\mu_l$ in Figure \ref{double of 3_1_link} run from one endpoint $p_l \in \partial T_l'$ to the other endpoint $q_l \in \partial T_l$; note that $\mu_l$ is properly embedded in $L_l$. \begin{figure}[h!] \centering \includegraphics[ width=15cm, height=10cm]{double_3_1_link} \vspace{-2em} \caption{A projection of $\Sigma_l\cup \Omega_l$ into the plane.
The subscript $l$ ($1 < l \leq j$) on the arrows $a,b,c,\dots,i$ corresponding to $L_l$ is suppressed.} \label{double of 3_1_link} \end{figure} Hence, a presentation of $\pi_1(S^3 \backslash (\Sigma_l \cup \Omega_l),p_l)$ is \begin{equation}\label{group presentation1} \text{Generators: } a,b,c,\dots, i \end{equation} \begin{equation*} \text{Relators:} \begin{cases} \begin{aligned} R_{l,1}: b &= c^{-1}ac \\ R_{l,2}: c &= a^{-1}ba \\ R_{l,3}: d &= b^{-1}cb \\ R_{l,4}: e &= gdg^{-1} \\ R_{l,5}: f &= heh^{-1} \\ R_{l,6}: g &= efe^{-1} \\ R_{l,7}: a &= h^{-1}gh \\ R_{l,8}: h &= g^{-1}ig \\ R_{l,9}: i &= fhf^{-1}, \end{aligned} \end{cases} \end{equation*} where the subscripts $l$ are suppressed. Write the loop classes $[\alpha_l], [\beta_l], [\gamma_l]$ and $[\delta_l]$ as words in the generators $a_l,b_l,\dots,i_l$ of (\ref{group presentation1}): \begin{equation}\label{words} \begin{cases} \begin{aligned} &[\alpha_l] = h_l \\ &[\beta_l] = f_{l}^{-1}g_{l}\\ &[\gamma_l] = a_l \\ &[\delta_l] = c_la_lb_lg_{l}^{-1}h_{l}^{-1}e_{l}^{-1}h_l \\ \end{aligned} \end{cases} \end{equation} where $[\alpha_l]$ is determined by the oriented simple closed curve $\alpha_l$ lying in $\partial L_l$ (see Figures \ref{3_1knot} and \ref{double of 3_1_link}) and the arc $\mu_l$ connecting $\alpha_l$ to the base point $p_l$. The classes $[\beta_l]$, $[\gamma_l]$ and $[\delta_l]$ are defined in the same manner. Deformation retract $S^3\backslash(\Sigma_l \cup \Omega_l)$ onto $L_l$. It is clear that Presentation (\ref{group presentation1}) is a presentation of $\pi_1(L_l,p_l)$. Consider the loop classes $a_l,b_l,\dots, i_l$ in $\pi_1(L_l,p_l)$ (represented by the same loops as before) as loops in $L_l$. At the same time, $[\alpha_l], [\beta_l], [\gamma_l]$ and $[\delta_l]$ may be written as the same words (\ref{words}) in the generators of $\pi_1(L_l,p_l)$.
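Once candidate images of the generators in a finite group are chosen, compatibility with the nine relators can be checked mechanically. As an illustration (our own sketch, using SymPy's permutations, whose products compose left to right exactly as GAP's right-action convention does), the assignment of Table \ref{Table 2} satisfies $R_{l,1},\dots,R_{l,9}$:

```python
from sympy.combinatorics import Permutation

# Images from Table 2 (l = j - 1 - 4T); points 1..4, with 0 unused.
a = b = c = d = Permutation([[1, 2, 3]], size=5)
e = Permutation([[2, 4, 3]], size=5)
f = Permutation([[1, 3, 4]], size=5)
g = Permutation([[1, 4, 2]], size=5)
h = Permutation([[1, 2], [3, 4]], size=5)
i = Permutation([[1, 3], [2, 4]], size=5)

# SymPy evaluates p*q by applying p first, so a word such as g d g^{-1}
# is transcribed literally, left to right.
relators_hold = all([
    b == ~c * a * c,   # R_{l,1}
    c == ~a * b * a,   # R_{l,2}
    d == ~b * c * b,   # R_{l,3}
    e == g * d * ~g,   # R_{l,4}
    f == h * e * ~h,   # R_{l,5}
    g == e * f * ~e,   # R_{l,6}
    a == ~h * g * h,   # R_{l,7}
    h == ~g * i * g,   # R_{l,8}
    i == f * h * ~f,   # R_{l,9}
])
assert relators_hold
```

The analogous checks for the other tables, and for the relators $S_{l,1}$, $S_{l,2}$ across consecutive levels, run the same way.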
Recall from the previous section that we have the knot space \begin{equation*} K_j \approx (S^3 \backslash \operatorname{Int}T_j) \cup_{\operatorname{Id}} L_j \cup_{h_{j}^{j-1}} L_{j-1} \cup_{h_{j-1}^{j-2}}\cdots \cup_{h_{2}^{1}} L_1, \end{equation*} where the sewing homeomorphism $h_{l+1}^{l}$ identifies the boundary component $\partial T_l$ of $L_l$ to the boundary component $\partial T'_{l+1}$ of $L_{l+1}$ such that the transverse oriented simple closed curves $\alpha_l$ and $\beta_l$ of $\partial T_l$ are mapped in an orientation preserving manner to the transverse oriented simple closed curves $\delta_{l+1}$ and $\gamma_{l+1}$ respectively in $\partial T'_{l+1}$. Using the words (\ref{words}), this can be described by the following relators \begin{equation}\label{relators} \text{Relators} \begin{cases} \begin{aligned} &S_{l,1}: h_{l-1} = c_la_lb_lg_{l}^{-1}h_{l}^{-1}e_{l}^{-1}h_l \text{ for } j\geq l \geq 2 \\ &S_{l,2}: f_{l-1}^{-1}g_{l-1} = a_l \text{ for } j \geq l \geq 2. \\ \end{aligned} \end{cases} \end{equation} Combining (\ref{group presentation1}) and (\ref{relators}), we obtain \begin{proposition}\label{Prop: presentation} $\pi_1(K_j,p_1)$, $j\geq 1$, has the following presentation \begin{equation}\label{group_presentation2} \text{Generators: } a_l,b_l,c_l,\dots, i_l \text{ for } j\geq l \geq 1 \end{equation} \begin{equation*} \text{Relators:} \begin{cases} \begin{aligned} &R_{l,k} \text{ for } j\geq l \geq 1 \text{ and } 9 \geq k \geq 1 \\ &S_{l,1} \text{ for } j\geq l \geq 2 \\ &S_{l,2} \text{ for } j \geq l \geq 2 \\ &h_j = 1, \end{aligned} \end{cases} \end{equation*} where the generators $a_l,\dots,i_l$ of Presentation (\ref{group_presentation2}) correspond to those of Presentation (\ref{group presentation1}) conjugated by the path $\mu_l$. \end{proposition} \begin{proof} The proof is an easy modification of the proof of Proposition 4.1 in \cite{Ste77}.
\end{proof} \section{The surjection of $\pi_1(K_j,p_1)$ onto $\mathbb{A}_5$}\label{section: The surjection} Here we shall define a homomorphism $\Phi_j: \pi_1(K_j,p_1) \to \mathbb{A}_5$, where $j\geq 1$. It suffices to define $\Phi_j$ on the generators of Presentation (\ref{group_presentation2}) of $\pi_1(K_j,p_1)$ and check that the definition is compatible with the relators of the presentation. That is, if the word equation $$w(a_1,b_1,\dots,i_1,\dots,a_j,b_j,\dots,i_j) = w'(a_1,b_1,\dots,i_1,\dots,a_j,b_j,\dots,i_j)$$ is a relator of the presentation, then $$w(\Phi_j(a_1),\dots,\Phi_j(i_1),\dots, \Phi_j(a_j),\dots,\Phi_j(i_j))= w'(\Phi_j(a_1),\dots,\Phi_j(i_1),\dots, \Phi_j(a_j),\dots,\Phi_j(i_j)) $$ must hold in $\mathbb{A}_5$. Consider an extreme case by ``unknotting'' every small trefoil knot in the link (corresponding to $L_l$) as shown in Figure \ref{double of 3_1_link}. The link in Figure \ref{double of 3_1_link} can be viewed as a connected sum of a Whitehead link and a trefoil knot. Thus, we can abelianize the trefoil knot group to $\langle a_l \rangle$ while keeping the remaining structure of the group of the link complement fixed. We inherit the definitions of $T_l$, $T'_{l}$ and $h_{l+1}^{l}$ from the construction of the knot space $K_j$. Unknot the trefoil-knotted hole in $C_l$ when we embed $T'_l$ into $T_l$. For convenience, we still call the re-embedded torus $T'_l$. Similar to the construction of the knot space $K_j$, the sewing homeomorphism $h_{l+1}^{l}$ identifies the boundary component $\partial T_l$ of $L_l$ to the boundary component $\partial T'_{l+1}$ of $L_{l+1}$ such that the transverse oriented simple closed curves $\alpha_l$ and $\beta_l$ of $\partial T_l$ are mapped in an orientation preserving manner to the transverse oriented simple closed curves $\delta_{l+1}$ and $\gamma_{l+1}$ respectively in $\partial T'_{l+1}$.
Furthermore, when $\alpha_l$ of $\partial T_l$ is mapped to $\delta_{l+1}$ in $\partial T'_{l+1}$, $h_{l+1}^{l}$ first gives 3 compensating half-twists to $T_l$, because the writhe of the trefoil knot (before abelianization) in $T'_{l+1}$ is 3. In other words, the new knot space is a concatenation of Whitehead links with 3 half-twists. Denote the corresponding knot space by $K_{j}^{**}$. By the above procedure, $\pi_1(K_{j}^{**})$ can be obtained by adding the relators $a_l = b_l$, $b_l = c_l$ and $c_l = d_l$ to the presentation of $\pi_1(K_j)$ in Proposition \ref{Prop: presentation} \begin{equation}\label{group_presen} \text{Generators: } a_l,b_l,c_l,\dots, i_l \text{ for } j\geq l \geq 1 \end{equation} \begin{equation*} \text{Relators:} \begin{cases} \begin{aligned} & a_l = b_l, b_l = c_l, c_l = d_l \\ &R_{l,k} \text{ for } j\geq l \geq 1 \text{ and } 9 \geq k \geq 1 \\ &S_{l,1} \text{ for } j\geq l \geq 2 \\ &S_{l,2} \text{ for } j \geq l \geq 2 \\ &h_j = 1. \end{aligned} \end{cases} \end{equation*} Clearly, there is a surjection $\psi_j: \pi_1(K_j) \twoheadrightarrow \pi_1(K_j^{**})$ sending $a_l,\dots, d_l$ in Presentation (\ref{group_presentation2}) to $a_l$ in Presentation (\ref{group_presen}). So, it suffices to find a surjection $\phi_j$ of $\pi_1(K_j^{**})$ onto $\mathbb{A}_5$. We shall define $\phi_j$ inductively on the generators of Presentation (\ref{group_presen}). If $j=1$, we use GAP \cite{GAP18} to define a surjection $\phi_1$ on $a_1,\dots, i_1$ by Table \ref{Table 1}. This definition is compatible with the relators $R_{1,k}$ and $h_1 = 1$, where $1\leq k\leq 9$. If $j=2$, both Tables \ref{Table 1} and \ref{Table 2} are used. Besides the relators $R_{1,k}, R_{2,k}$ and $h_2 =1$, the definition is also compatible with the relators $S_{2,1}$ and $S_{2,2}$. Similarly, if $j =3$ (resp. $j=4$), Tables \ref{Table 1}-\ref{Table 3} (resp. \ref{Table 1}-\ref{Table 4}) are applied. When $j\geq 5$, Tables \ref{Table 1}-\ref{Table 5} will be applied periodically.
That is, extend $\phi_j$ to the generators $a_l,\dots,i_l$ according to Table \ref{Table 1} if $l = j$, according to Table \ref{Table 2} if $l = j - 1 - 4T$, according to Table \ref{Table 3} if $l = j - 2 - 4T$, according to Table \ref{Table 4} if $l = j - 3 - 4T$ and according to Table \ref{Table 5} if $l = j - 4 - 4T$, where $T \in \mathbb{N}$ and $0 \leq T \leq (j - 1)/4$. One can check, either with GAP \cite{GAP18} or simply by hand, that such an extension is compatible with the relators in Presentation (\ref{group_presentation2}). Hence, the composition $\Phi_j = \phi_j \circ \psi_j$ is the desired surjection. \begin{table}[h] \centering \caption{} \subfloat[$l= j$]{ \begin{tabular}{|c|c|} \hline Generators & Image \\ \hline $a_l$ & (1,2)(3,4)\\ $b_l$ & (1,2)(3,4)\\ $c_l$ & (1,2)(3,4)\\ $d_l$ & (1,2)(3,4)\\ $e_l$ & (1,2)(3,4)\\ $f_l$ & (1,2)(3,4)\\ $g_l$ & (1,2)(3,4)\\ $h_l$ & ()\\ $i_l$ & ()\\\hline \end{tabular} \label{Table 1}} \subfloat[$l= j - 1 - 4T$]{ \begin{tabular}{|c|c|} \hline Generators & Image \\ \hline $a_l$ & (1,2,3)\\ $b_l$ & (1,2,3)\\ $c_l$ & (1,2,3)\\ $d_l$ & (1,2,3)\\ $e_l$ & (2,4,3)\\ $f_l$ & (1,3,4)\\ $g_l$ & (1,4,2)\\ $h_l$ & (1,2)(3,4)\\ $i_l$ & (1,3)(2,4)\\ \hline \end{tabular} \label{Table 2}} \subfloat[$l= j - 2 - 4T$]{ \begin{tabular}{|c|c|} \hline Generators & Image \\ \hline $a_l$ & (1,3)(4,5) \\ $b_l$ & (1,3)(4,5)\\ $c_l$ & (1,3)(4,5)\\ $d_l$ & (1,3)(4,5)\\ $e_l$ & (1,2)(4,5)\\ $f_l$ & (1,3)(4,5)\\ $g_l$ & (2,3)(4,5)\\ $h_l$ & (1,2,3)\\ $i_l$ & (1,3,2)\\ \hline \end{tabular} \label{Table 3}} \end{table} \begin{table}[h] \centering \caption{} \subfloat[$l = j- 3 - 4T$]{ \begin{tabular}{|c|c|} \hline Generators & Image \\ \hline $a_l$ & (3,4,5) \\ $b_l$ & (3,4,5)\\ $c_l$ & (3,4,5)\\ $d_l$ & (3,4,5)\\ $e_l$ & (1,3,5)\\ $f_l$ & (1,4,3)\\ $g_l$ & (1,5,4)\\ $h_l$ & (1,3)(4,5)\\ $i_l$ & (1,5)(3,4)\\ \hline \end{tabular} \label{Table 4}} \subfloat[$l= j- 4 - 4T$]{ \begin{tabular}{|c|c|} \hline Generators & Image \\ \hline $a_l$ & (1,2)(3,4)\\ $b_l$ &
(1,2)(3,4)\\ $c_l$ & (1,2)(3,4)\\ $d_l$ & (1,2)(3,4)\\ $e_l$ & (1,2)(4,5)\\ $f_l$ & (1,2)(3,4)\\ $g_l$ & (1,2)(3,5)\\ $h_l$ & (3,4,5)\\ $i_l$ & (3,5,4)\\\hline \end{tabular} \label{Table 5}} \end{table} \begin{remark} \label{Error} In line 16 \cite[P.28]{Ste77}, the author claims that the definition of $\Phi_i: \pi_1(A_i) \to A$ given in Table 1 \cite[P.29]{Ste77} is compatible with the relators $S_{j,1},S_{j,2}$ for $l\geq j \geq 2$, where $A$ is the alternating group on 5 letters $v$, $w$, $x$, $y$ and $z$. However, for $l <i$, $\Phi(o_{l-1}^{-1}h_{l-1}f_{l-1}^{-1}q_{l-1})$ is not equal to $\Phi(a_{l})$. That is, using Table 1 \cite[P.29]{Ste77}, $\Phi(o_{l-1}) = (vy)(wz)$, $\Phi(h_{l-1})=(vy)(xz)$, $\Phi(f_{l-1}) = (wx)(yz)$ and $\Phi(q_{l-1}) = (vw)(yz)$. Hence, $\Phi(o_{l-1}^{-1}h_{l-1}f_{l-1}^{-1}q_{l-1})=(vw)(xz)=\Phi(r_{l})\neq \Phi(a_{l}) =(vw)(xy)$. That means the claimed definition of $\Phi_i$ is not compatible with the relators $S_{j,1},S_{j,2}$ for $l\geq j \geq 2$. This error directly affects the following statement \cite[P.52]{Ste77}: ``The composition $\pi_1(C_j,x_j)\xrightarrow{k_\ast} \pi_1(A_i,x_j)\xrightarrow{M_j} \pi_1(A_i,x_i) \xrightarrow{\Phi_i} A$ has image isomorphic to $\mathbb{Z}_2$ in $A$ since $\Phi_i$ maps $a_j$ and $b_j$ to the same element of order 2 in $A$. Thus, the kernel of $\Phi_i \circ M_j \circ k_\ast$ has index 2 in $\pi_1(C_j,x_j)$.'' To fix this error, we provide a series of correct tables here. We have to use at least 3 tables (instead of 2 tables) so that the definition of $\Phi_i$ is compatible with all the relators. Similar to how we defined the surjection $\pi_1(K_j,p_1) \twoheadrightarrow \mathbb{A}_5$ at the beginning of this section, with the assistance of GAP \cite{GAP18}, the following tables provide a surjection $\Phi_i: \pi_1(A_i,x_1) \twoheadrightarrow \mathbb{A}_5$. If $i = 1$, we define $\Phi_i$ on $a_1,\dots,u_1$ by Table \ref{Table 6}. If $i=2$, then Tables \ref{Table 6} and \ref{Table 7} are used.
Otherwise, when $ i\geq 3$, Tables \ref{Table 6}, \ref{Table 7} and \ref{Table 8} are applied. That is, extend $\Phi_i$ to the generators $a_l,\dots,u_l$ according to Table \ref{Table 6} if $l=i$, according to Table \ref{Table 7} at $l = i-1 - 2T$ and according to Table \ref{Table 8} at $l = i-2 - 2T$, where $T \in \mathbb{N}$ and $0 \leq T \leq (i - 1)/2$. \begin{table} \caption{} \subfloat[$l= i$]{ \begin{tabular}{|c|c|} \hline Generators & Image \\ \hline $a_l$ & (1,2)(3,5) \\ $b_l$ & (1,2)(3,5) \\ $c_l$ & (1,2)(3,5)\\ $d_l$ & (1,2)(3,5)\\ $e_l$ & (1,2)(4,5)\\ $f_l$ & (1,2)(4,5)\\ $g_l$ & (1,2)(4,5)\\ $h_l$ & (1,2)(3,5)\\ $i_l$ & (1,2)(3,5)\\ $j_l$ & (1,2)(4,5)\\ $k_l$ & (1,2)(3,4)\\ $l_l$ & (1,2)(4,5)\\ $m_l$ & (1,2)(3,4)\\ $n_l$ & (1,2)(3,4)\\ $o_l$ & (1,2)(3,4)\\ $p_l$ & (1,2)(3,4)\\ $q_l$ & (1,2)(3,5)\\ $r_l$ & ()\\ $s_l$ & ()\\ $t_l$ & ()\\ $u_l$ & ()\\ \hline \end{tabular} \label{Table 6}} \subfloat[$l = i-1 -2T$]{ \begin{tabular}{|c|c|} \hline Generators & Image \\ \hline $a_l$ & (1,2)(4,5)\\ $b_l$ & (1,2)(4,5)\\ $c_l$ & (1,2)(4,5)\\ $d_l$ & (1,2)(4,5)\\ $e_l$ & (1,3)(4,5)\\ $f_l$ & (2,5)(3,4)\\ $g_l$ & (1,5)(2,4)\\ $h_l$ & (1,4)(3,5)\\ $i_l$ & (2,4)(3,5)\\ $j_l$ & (1,3)(2,5)\\ $k_l$ & (2,3)(4,5)\\ $l_l$ & (1,3)(4,5)\\ $m_l$ & (1,3)(4,5)\\ $n_l$ & (1,5)(2,4)\\ $o_l$ & (1,4)(2,3)\\ $p_l$ & (1,5)(2,3)\\ $q_l$ & (1,2)(3,4)\\ $r_l$ & (1,2)(3,5)\\ $s_l$ & (1,2)(4,5)\\ $t_l$ & (1,5)(2,3)\\ $u_l$ & (2,5)(3,4)\\ \hline \end{tabular} \label{Table 7}} \subfloat[$l= i - 2 - 2T$]{ \begin{tabular}{|c|c|} \hline Generators & Image \\ \hline $a_l$ & (1,2)(3,5)\\ $b_l$ & (1,2)(3,5) \\ $c_l$ & (1,2)(3,5) \\ $d_l$ & (1,2)(3,5) \\ $e_l$ & (1,4)(3,5) \\ $f_l$ & (2,5)(3,4) \\ $g_l$ & (1,5)(2,3) \\ $h_l$ & (1,3)(4,5) \\ $i_l$ & (2,3)(4,5) \\ $j_l$ & (1,4)(2,5) \\ $k_l$ & (2,4)(3,5) \\ $l_l$ & (1,4)(3,5)\\ $m_l$ & (1,4)(3,5)\\ $n_l$ & (1,5)(2,3)\\ $o_l$ & (1,3)(2,4)\\ $p_l$ & (1,5)(2,4)\\ $q_l$ & (1,2)(3,4)\\ $r_l$ & (1,2)(4,5)\\ $s_l$ & (1,2)(3,5)\\ $t_l$ & (1,5)(2,4)\\ $u_l$ 
& (2,5)(3,4)\\\hline \end{tabular} \label{Table 8}} \end{table} \end{remark} \section{Properties of a cube with a trefoil-knotted hole} \label{section: properties of cube hole} One of the key ingredients in proving Theorem \ref{Thm: W^3 embeds in no compact ANR} is to understand the covering spaces of a cube with a trefoil-knotted hole as shown in Figure \ref{3_1knot}. In this section, we collect a number of important properties of cubes with a trefoil-knotted hole. Let $C$ be the cube with a trefoil-knotted hole as shown in Figure \ref{cubehole_3_1}. Here $C$ is the complement in $S^3$ of the interior of a regular neighborhood of the polyhedral simple closed curve $\Gamma$. There is a deformation retraction of $S^3\backslash \Gamma$ onto $C$. A presentation of $\pi_1(S^3 \backslash \Gamma)$ (i.e., the trefoil knot group) is a presentation of $\pi_1(C,p_0)$, where $p_0$ is a base point. Hence, one can use the Wirtinger presentation of $\pi_1(S^3 \backslash \Gamma)$ to obtain the following proposition. \begin{figure}[h!] \centering \includegraphics[ width=9cm, height=12cm]{cubehole_3_1} \caption{The cube with a trefoil-knotted hole. $C$ is the complement in $S^3$ of the interior of a regular neighborhood of the trefoil-knotted simple closed curve $\Gamma$.} \label{cubehole_3_1} \end{figure} \begin{proposition}\label{Prop: presentation of 3_1 knot} $\pi_1(C,p_0)$ has presentation $$\langle a, b| b^{-1}a^{-1}b^{-1}aba = 1 \rangle,$$ where $a = [A]$ and $b = [B]$ as shown in Figure \ref{cubehole_3_1}. \end{proposition} \begin{corollary} \label{Corollary: rank of pi_1(C)} $\operatorname{Rank}\pi_1(C,p_0) = 2$. \end{corollary} \begin{proof} Obviously, $\operatorname{Rank} \pi_1(C,p_0)\leq 2$. Since $\mathbb{A}_5$ is not cyclic, $\operatorname{Rank} \mathbb{A}_5 = 2$. Using GAP \cite{GAP18}, one can find a surjection of $\pi_1(C,p_0)$ onto $\mathbb{A}_5$ by $(a,b) \mapsto \big((1,3,5,4,2),(1,2,3,4,5)\big) $.
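The GAP computation can be double-checked with a short SymPy sketch (our own, not part of the original verification): the images satisfy the relator of Proposition \ref{Prop: presentation of 3_1 knot} and generate a subgroup of order 60, i.e., all of $\mathbb{A}_5$.

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Images of the generators a, b under the claimed map onto A_5
# (points 1..5; point 0 is unused padding).
a = Permutation([[1, 3, 5, 4, 2]], size=6)
b = Permutation([[1, 2, 3, 4, 5]], size=6)

# The trefoil relator b^{-1} a^{-1} b^{-1} a b a must map to the
# identity; SymPy composes permutations left to right, as GAP does.
word = ~b * ~a * ~b * a * b * a
assert word.is_Identity

# The images generate a subgroup of order 60, i.e. all of A_5,
# so the homomorphism is onto.
G = PermutationGroup([a, b])
assert G.order() == 60
```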
That means $\operatorname{Rank} \pi_1(C,p_0)$ has to be greater than or equal to 2. Hence, $\operatorname{Rank} \pi_1(C,p_0)= 2$. \end{proof} \begin{proposition}\cite[Prop.6.3]{Ste77}\label{Prop: 2-fold cover} $C$ has a unique $2$-fold cover $\tilde{C}^2$; the boundary $\partial \tilde{C}^2$ is connected, and the quotient map $$ Q: \tilde{C}^2 \to \tilde{C}^2/\partial \tilde{C}^2$$ induces a surjection on fundamental groups. \end{proposition} \begin{lemma}\cite[Lemma 1.3]{Ste77} \label{lemma: collar} Let $B$ be a subspace of $X$. Let $B$ and $X$ be path connected. If $B$ is collared in $X$, then the quotient map $q: X \to X/B$ induces a surjection of fundamental groups whose kernel is the normal closure in $\pi_1(X)$ of $i_\ast \pi_1(B)$, where $i_*$ denotes the inclusion induced homomorphism. \end{lemma} The following result generalizes Proposition \ref{Prop: 2-fold cover} to the $k$-fold cyclic cover of $C$. \begin{proposition}\label{Prop: k-fold cover} Let $\tilde{C}^k$ be the $k$-fold cyclic cover of $C$. Then $\partial \tilde{C}^k$ is connected and the quotient map $$ Q: \tilde{C}^k \to \tilde{C}^k/\partial \tilde{C}^k$$ induces a surjection on fundamental groups. \end{proposition} \begin{proof} First, we show $\partial \tilde{C}^k$ is connected. Let $P: \tilde{C}^k \to C$ be the $k$-fold cyclic cover. The restriction of $P$ to each component of $P^{-1}(\partial C)$ is a covering map of $\partial C$. Note that the $k$-fold cyclic cover is defined to be the one which corresponds to the kernel of the composite $$\pi_1(C)\xrightarrow{\text{abelianization}}\mathbb{Z}\xrightarrow{\text{projection}}\mathbb{Z}_k.$$ The uniqueness of the abelianization and the projection assures that the simple closed curve $A$ (see Figure \ref{cubehole_3_1}) in $\partial C$ based at a point $p_0$ has a lift $\tilde{A}$ which is not a loop, since the loop $[A]$ corresponding to the generator $a$ in Proposition \ref{Prop: presentation of 3_1 knot} is not in the kernel.
Therefore, the component of $\partial \tilde{C}^k$ that contains $\tilde{A}$ must be at least a double cover of $\partial C$, since the two end points of $\tilde{A}$ cover $p_0$. Since each point of $C$ has precisely $k$ preimages in $\tilde{C}^k$, the component of $\partial \tilde{C}^k$ that contains $\tilde{A}$ must be all of $\partial \tilde{C}^k$. Thus $\partial \tilde{C}^k$ is (path) connected. Applying Lemma \ref{lemma: collar} finishes the proof. \end{proof} \begin{proposition} \label{Prop: Rank of 2-fold cover} $\pi_1(\tilde{C}^2/\partial \tilde{C}^2) \cong \mathbb{Z}_3.$ \end{proposition} \begin{proof} The proof is a standard covering space argument. See the proof of Prop.6.4 in \cite[P.39-46]{Ste77}. \end{proof} \begin{proposition} \label{Prop: rank of } Let $\tilde{C}^3$ be the $3$-fold cyclic cover of $C$. Then $\operatorname{Rank}\pi_1(\tilde{C}^3/\partial \tilde{C}^3) \geq 1.$ \end{proposition} \begin{proof} A standard cyclic cover argument \cite[Ch.6]{Rol76} shows that the first homology group $H_1(\tilde{C}^3) \cong \mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}$. Modding out the generators corresponding to the boundary $\partial\tilde{C}^3$ can reduce the rank by at most 2; hence, $\operatorname{Rank}\pi_1(\tilde{C}^3/\partial \tilde{C}^3) \geq 3 - 2 =1$. \end{proof} \section{Proof of Theorem \ref{Thm: W^3 embeds in no compact ANR}} \label{section: proof of proposition} Recall that in Section \ref{section: The constructiion of a 3-dimensional example} we pointed out that the key to proving Theorem \ref{Thm: W^3 embeds in no compact ANR} is to show that $\operatorname{Rank}\pi_1(K_j, p_1)$ is not bounded. Since $\mathbb{A}_5$ has order 60 and $\Phi_j: \pi_1(K_j,p_1) \to \mathbb{A}_5$ is onto, $\ker \Phi_j$ has index 60 in $\pi_1(K_j,p_1)$. Then the following formula guarantees that it suffices to show that $\operatorname{Rank}\ker \Phi_j$ is not bounded. The formula can be viewed as a corollary of the Schreier index theorem.
A detailed proof utilizing covering space theory can be found in \cite[Lemma 1.4]{Ste77}. \begin{lemma}\label{Lemma: rank bound} Let $G$ be a group and $H$ be a subgroup of index $i$. If $\operatorname{Rank}H \geq m$, then $\operatorname{Rank}G \geq \frac{m-1}{i}+1$. \end{lemma} Let $P_j: (\tilde{K}_j,\tilde{p}_1) \to (K_j,p_1)$ be the covering map such that the induced map $P_{j\ast} : \pi_1(\tilde{K}_j,\tilde{p}_1) \to \pi_1(K_j,p_1)$ is an isomorphism onto $\ker \Phi_j$. By Lemma \ref{Lemma: rank bound}, it remains to show that $\operatorname{Rank}\ker \Phi_j$ is not bounded above as $j\to \infty$; for this it suffices to show that $\operatorname{Rank}\pi_1(\tilde{K}_j,\tilde{p}_1)\geq 25j $ (resp. $5(5j+1)$) when $j$ is even (resp. odd). The key is the fact that $K_j$ contains $j$ pairwise disjoint incompressible cubes with a trefoil-knotted hole. Figure \ref{3_1knot} shows that each $L_l$, $l\geq 1$, contains a cube with a trefoil-knotted hole $C_l$. Recalling that $$K_j \approx (S^3 \backslash \operatorname{Int}T_j) \cup_{\operatorname{Id}} L_j \cup_{h_{j}^{j-1}} L_{j-1} \cup_{h_{j-1}^{j-2}}\cdots \cup_{h_{2}^{1}} L_1,$$ we see that $K_j$ contains $C_1,C_2,\dots,C_j$, pairwise disjoint cubes with a trefoil-knotted hole. The disjointness follows from the fact that each $C_l$ lies in its own $L_l$ and touches only the ``inner'' boundary of its $L_l$. In $K_j$, when we sew two adjacent $L_l$'s together, only the ``outer'' boundary of one is glued to the ``inner'' boundary of the next. Next, we shall show that the preimage of each $C_l$ under the covering map $P_j$ consists of either 30 disjoint double covers or 20 disjoint triple covers of $C_l$, depending on $l$. The proof relies heavily on the argument given in \cite[P.50-55]{Ste77}. For the convenience of readers, we spell out the proof in detail. Consider $p_l \in C_l$. See Figures \ref{double of 3_1_link} and \ref{cubehole_3_1}.
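Before spelling out the covering-space argument, we record the numerology behind these counts (our own sketch, not part of \cite{Ste77}): over a cube $C_l$ whose level has $\mathbb{Z}_2$ (resp. $\mathbb{Z}_3$) image, the regular 60-fold cover restricts to $60/2 = 30$ double (resp. $60/3 = 20$ triple) covers, and by Tables \ref{Table 1}-\ref{Table 5} the $\mathbb{Z}_2$ levels are exactly those $l$ with the parity of $j$.

```python
def cover_counts(j):
    """Count the double and triple covers of the cubes C_1, ..., C_j in
    the 60-fold cover of K_j: a level l with the parity of j has image
    Z_2 (60/2 = 30 double covers of C_l), the other levels have image
    Z_3 (60/3 = 20 triple covers of C_l)."""
    doubles = 30 * sum(1 for l in range(1, j + 1) if (j - l) % 2 == 0)
    triples = 20 * sum(1 for l in range(1, j + 1) if (j - l) % 2 == 1)
    return doubles, triples

for j in range(1, 13):
    d, t = cover_counts(j)
    if j % 2 == 0:  # j even: 15j doubles and 10j triples, 25j in total
        assert (d, t) == (15 * j, 10 * j)
        assert d + t == 25 * j
    else:           # j odd: 15(j+1) doubles and 10(j-1) triples
        assert (d, t) == (15 * (j + 1), 10 * (j - 1))
        assert d + t == 5 * (5 * j + 1)
```

Summing over $l = 1,\dots,j$ recovers the totals $25j$ and $5(5j+1)$ used above.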
From the Wirtinger presentation (\ref{group_presentation2}), a loop class with subscript $l$ is the class of a loop formed by conjugating a loop in $L_l$ based at $p_l$ by the path $\mu_l^1$ running from $p_1$ to $p_l$ in $K_j$. Define a change-of-basepoint isomorphism $M_l: \pi_1(K_j,p_l) \to \pi_1(K_j,p_1)$ given by conjugation by $\mu_l^1$. By Figures \ref{3_1knot} and \ref{double of 3_1_link}, the loop classes $M_l^{-1}(a_l)$, $M_l^{-1}(b_l)$ can be viewed as loop classes of $\pi_1(C_l,p_l)$, where $1\leq l \leq j$. Then Figures \ref{double of 3_1_link}-\ref{cubehole_3_1} and Proposition \ref{Prop: presentation of 3_1 knot} assure that the set $\{M_l^{-1}(a_l), M_l^{-1}(b_l)\}$ generates $\pi_1(C_l,p_l)$. Let $\iota_*: \pi_1(C_l,p_l) \to \pi_1(K_j,p_l)$ be the inclusion induced homomorphism. Combining the results from \S \ref{section: The surjection}, we obtain the composition $$\pi_1(C_l,p_l)\xrightarrow{\iota_*} \pi_1(K_j,p_l)\xrightarrow{M_l} \pi_1(K_j,p_1) \xrightarrow{\Phi_j} \mathbb{A}_5,$$ which has image isomorphic to $\mathbb{Z}_2$ (resp. $\mathbb{Z}_3$) in $\mathbb{A}_5$ when $l = j, j - 2 -4T$ and $j - 4 - 4T$ (resp. $l = j -1-4T$ and $j - 3 - 4T$). See Tables \ref{Table 1}, \ref{Table 3} and \ref{Table 5} (resp. \ref{Table 2} and \ref{Table 4}). That is because $\Phi_j$ maps $a_l$ and $b_l$ of $\pi_1(C_l,p_l)$ to the same element of order 2 (resp. 3) in $\mathbb{A}_5$. It follows that the kernel of $\Phi_j \circ M_l \circ \iota_*$ has index either 2 or 3 in $\pi_1(C_l,p_l)$. Let $q: (\tilde{C}_l^2,\hat{p}_l) \to (C_l,p_l)$ be a 2-fold cover of $(C_l,p_l)$ corresponding to the kernel. \begin{claim} Each $\tilde{C}_l^2$ embeds in $\tilde{K}_j$. \end{claim} \begin{proof} Note that there exists a lift $\tilde{p}_l$ of $p_l$ in $\tilde{K}_j$ so that $P_{j*}(\pi_1(\tilde{K}_j,\tilde{p}_l)) = \ker (\Phi_j \circ M_l)$.
The lift is obtained by lifting $\mu_l^1$ to a path $\tilde{\mu}_l^1$ so that $\tilde{\mu}_l^1(0) = \tilde{p}_1$, and the point $\tilde{p}_l$ is defined to be $\tilde{\mu}_{l}^{1}(1)$. Since $\iota_*q_*(\pi_1(\tilde{C}_l^2,\hat{p}_l))\subseteq P_{j*}(\pi_1(\tilde{K}_j,\tilde{p}_l))$, we have the following commutative diagram with $\iota$ lifted to $\tilde{\iota}$ \[ \begin{array}[c]{ccc} (\tilde{C}_l^2,\hat{p}_l) & \xrightarrow{\tilde{\iota}} & (\tilde{K}_j,\tilde{p}_l)\\ \downarrow q & & \downarrow P_j \\ (C_l,p_l) & \xrightarrow{\iota} & (K_j,p_l) \end{array} \] We shall apply standard covering space theory to show $\tilde{\iota}$ is an embedding. It suffices to prove that $\tilde{\iota}$ is 1-1. Suppose $x$ and $y$ are two elements of $\tilde{C}_l^2$ such that $\tilde{\iota}(x) = \tilde{\iota}(y)$. The commutativity of the diagram above implies that $q(x) = q(y)$. Connect $x$ to $y$ by a path $\alpha$ and $x$ to $\hat{p}_l$ by a path $\beta$ with $\beta(0)= \hat{p}_l$ and $\beta(1) = x$. Lift $q(\beta)$ to $\tilde{\beta}$ so that $\tilde{\beta}(1) = y$. Suppose $x \neq y$. Then $\tilde{\beta}$ and $\beta$ are distinct lifts of $q(\beta)$. That means $\beta(0) \neq \tilde{\beta}(0)$. So, $\beta \alpha \tilde{\beta}^{-1}$ is not a loop. However, $\tilde{\iota}(\beta \alpha \tilde{\beta}^{-1})$ is a loop in $\tilde{K}_j$. Since $\tilde{\iota}(x) = \tilde{\iota}(y)$, $\tilde{\iota}\beta$ and $\tilde{\iota}\tilde{\beta}$ have to be the same lift of $\iota q(\beta)$. By commutativity of the diagram, $\iota q(\beta \alpha \tilde{\beta}^{-1}) = P_j \tilde{\iota}(\beta \alpha \tilde{\beta}^{-1})$. Hence, $q(\beta \alpha \tilde{\beta}^{-1})$ is a loop whose class lies in $\iota_*^{-1}P_{j*}(\pi_1(\tilde{K}_j,\tilde{p}_l))$. Thus, $q(\beta \alpha \tilde{\beta}^{-1})$ must lift to a loop at $\hat{p}_l$, a contradiction. \end{proof} \begin{remark} The above argument also works for the 3-fold cover $\tilde{C}_l^3$, which will be defined shortly.
\end{remark} Since $\tilde{\iota}$ is an embedding, for $l = j$, $j - 2 - 4T$ and $j - 4 - 4T$ the restriction map $P_j|: \tilde{\iota}(\tilde{C}_l^2) \to C_l$ is a 2-fold cover of $C_l$. Since $\ker \Phi_j$ has index 60 in $\pi_1(K_j)$, the covering space $P_j: \tilde{K}_j \to K_j$ has 60 covering translations. The components of $P_j^{-1}(C_l)$ are the homeomorphic images of $\tilde{\iota}(\tilde{C}_l^2)$ under the 60 covering translations of $P_j$. Thus, every component of $P_j^{-1}(C_l)$ is a 2-fold cover of $C_l$ (i.e., a 2-fold cover of the trefoil knot). By \S \ref{section: The constructiion of a 3-dimensional example}, each $K_j$ contains $j$ pairwise disjoint cubes with trefoil-knotted hole $C_l$, where $1\leq l\leq j$. Hence, $\tilde{K}_j$ must contain $15j$ (resp. $15(j+1)$) pairwise disjoint 2-fold covers of the trefoil knot when $j$ is even (resp. odd). Likewise, let $q': (\tilde{C}_l^3,\hat{p}_l) \to (C_l,p_l)$ be the 3-fold cover of $(C_l,p_l)$ corresponding to the kernel of $\Phi_j \circ M_l \circ \iota_*$. When $l = j - 1 - 4T$ and $j - 3 - 4T$, the restriction map $P_j|: \tilde{\iota}(\tilde{C}_l^3) \to C_l$ is a 3-fold cover of $C_l$. \begin{claim} $P_j|: \tilde{\iota}(\tilde{C}_l^3) \to C_l$ yields a unique $3$-fold (cyclic) cover of $C_l$. \end{claim} \begin{proof} Since the 60-fold covering space of $K_j$ is clearly regular, the restriction of the covering projection to each $C_l$ is also a regular covering. Thus, the induced map $P_{j\ast}|: \tilde{\iota}_\ast(\pi_1(\tilde{C}_l^3)) \to \pi_1(C_l)$ maps onto an index 3 normal subgroup (with quotient $\mathbb{Z}_3$). Note that $\pi_1(\tilde{C}_l^3)$ corresponds to the kernel of the composite $\pi_1(C_l)\xrightarrow{\text{abelianization}} \mathbb{Z} \xrightarrow{\text{projection}} \mathbb{Z}_3$. The claim then follows immediately from the uniqueness of the abelianization and the projection. \end{proof} When $j$ is even (resp. odd), let $D$ be the complement of the interior of the $15j$ (resp. $15(j+1)$) double covers and $10j$ (resp.
$10(j-1)$) triple covers of the trefoil knot in $\tilde{K}_j$. Let $Q_j: \tilde{K}_j \to \tilde{K}_j/D$ be the quotient map. The quotient space $\tilde{K}_j/D$ is the wedge of the $25j$ (resp. $5(5j+1)$, when $j$ is even (resp. odd)) pairwise disjoint 2-fold and 3-fold covers of the trefoil knot modulo their boundaries, joined at the point to which their boundaries are identified. By Propositions \ref{Prop: Rank of 2-fold cover} and \ref{Prop: rank of }, $\pi_1(\tilde{K}_j/D)$ has rank at least $25j$ (resp. $5(5j+1)$) when $j$ is even (resp. odd). Then Propositions \ref{Prop: 2-fold cover} and \ref{Prop: k-fold cover} assure that $Q_j$ induces a surjection of $\pi_1(\tilde{K}_j)$ onto $\pi_1(\tilde{K}_j/D)$; hence, $\operatorname{Rank}\pi_1(\tilde{K}_j) \geq 25j$ (resp. $5(5j+1)$) when $j$ is even (resp. odd). This completes the proof of Theorem \ref{Thm: W^3 embeds in no compact ANR}. \begin{proof}[Proof of Theorem \ref{Thm: high dimensional collection}] Using our building block $W^3$, one can apply the standard ``drilling tunnel'' and ``piping'' techniques to generate high-dimensional examples $W^n$. We give only an outline; the detailed argument in \cite[P.56-62]{Ste77} can readily be applied. Recall from \S \ref{section: A presentation} that there is an arc $\mu_l^1$ connecting the base points $p_l \in \partial T_l'$ and $q_l \in \partial T_l$ (see Figure \ref{double of 3_1_link}). The sewing homeomorphism $h_{l+1}^{l}$ identifies $q_l$ with $p_{l+1}$. By the construction of $W^3$, those arcs fit together to form a (base) ray $R$ in $W^3$. Find a regular neighborhood $N$ of $R$ such that $W^+ = W^3 \backslash \operatorname{Int}N$ is a PL manifold with $\partial W^+$ homeomorphic to $\mathbb{R}^2$ and $\operatorname{Int}W^+$ homeomorphic to $W^3$. The $n$-dimensional example $W^n$ is defined to be $W^n = \partial (B^{n-2}\times W^+)= (B^{n-2}\times \partial W^+) \cup (\partial B^{n-2}\times W^+)$, where $B^{n-2}$ is a codimension 2 ball.
The openness and contractibility follow from standard PL topology arguments. Define the solid torus $T_l^+ \subset W^+$ by $T_l^+ = T_l^* \backslash \operatorname{Int}N$. Then $W^+$ can be expressed as $\cup T_l^+$. Let $p_2: B^{n-2} \times W^+ \to W^+$ be the projection of $B^{n-2}\times W^+$ onto its second factor, and let $p: W^n \to W^+$ be the restriction of $p_2$. Suppose there is a compact, locally connected, locally 1-connected metric space $U$ that contains $W^n$ as an open set. Then it suffices to show that $\pi_1(U\backslash p^{-1}(\operatorname{Int}T_0^+))$ is not finitely generated, just as in the proof of Theorem \ref{Thm: W^3 embeds in no compact ANR}. By the definition of $N$, $T_0^+ = T_0^*$. Let $q$ be the quotient map $$q: T_j^+ \backslash \operatorname{Int}T_0^+ \to (T_j^+ \backslash \operatorname{Int}T_0^+)/\partial T_j^+.$$ Extend $q$ to a map $Q: U \backslash p^{-1}(\operatorname{Int}T_0^+) \to (T_j^+ \backslash \operatorname{Int}T_0^+)/\partial T_j^+$. There is no difficulty in doing so because $U \backslash p^{-1}(\operatorname{Int}T_0^+)$ can be decomposed into the union of $U \backslash p^{-1}(\operatorname{Int}T_j^+)$ and $p^{-1}(T_j^+ \backslash \operatorname{Int}T_0^+)$. Then $Q$ can be defined as the union of the constant map $l: U \backslash p^{-1}(\operatorname{Int}T_j^+) \to (T_j^+ \backslash \operatorname{Int}T_0^+)/\partial T_j^+$ and the restriction map $q \circ p|_{p^{-1}(T_j^+ \backslash \operatorname{Int}T_0^+)}$. By Lemma \ref{lemma: collar}, $q \circ p|_{p^{-1}(T_j^+ \backslash \operatorname{Int}T_0^+)}$ induces a surjection on fundamental groups, and hence so does $Q$. Note that $(T_j^+ \backslash \operatorname{Int}T_0^+)/\partial T_j^+$ and $(T_j^* \backslash \operatorname{Int}T_0^*)/\partial T_j^*$ are homeomorphic.
Thus, showing that $\operatorname{Rank}\pi_1(U\backslash p^{-1}(\operatorname{Int}T_0^+))$ is unbounded reduces to proving that $\operatorname{Rank}\pi_1\big((T_j^*\backslash \operatorname{Int}T_0^*)/\partial T_j^*\big) = \operatorname{Rank}\pi_1(K_j)$, which is just an application of Theorem \ref{Thm: W^3 embeds in no compact ANR}. \end{proof} \section{Questions}\label{section: questions} Recall the construction of $W^3$ in \S \ref{section: The constructiion of a 3-dimensional example}: \begin{equation}\label{Decomposition of W^3} W^3 = \lim_{j\to \infty} L_j \cup_{h_{j}^{j-1}} L_{j-1} \cup_{h_{j-1}^{j-2}}\cdots \cup_{h_{2}^{1}} L_1, \end{equation} where the sewing homeomorphism $h_{l+1}^{l}$ identifies the boundary component $\partial T_l$ of $L_l$ to the boundary component $\partial T'_{l+1}$ of $L_{l+1}$. Unknotting the cube with trefoil-knotted hole as shown in Figure \ref{3_1knot} results in a cobordism $L^\ast$, which is widely known as the first stage of the construction of a Whitehead manifold. See Figure \ref{whitehead}. \begin{figure}[h!] \centering \includegraphics[ width=7cm, height=9cm]{whitehead} \caption{$L^\ast = T \backslash T'$. The ``inner'' boundary component of $L^\ast$ is $\partial T'$. The ``outer'' boundary component of $L^\ast$ is $\partial T$.} \label{whitehead} \end{figure} Consider a variation of $W^3$ obtained by placing $L^*$ ahead of $L_j$ or inserting $L^*$ between adjacent $L_l$ and $L_{l+1}$ in (\ref{Decomposition of W^3}): \begin{equation}\label{Decomposition2} W^{\ast} = \lim_{j\to \infty} L_j \cup_{H_{j}^{\ast}} L^\ast \cup_{H_{\ast}^{j-1}}L_{j-1}\cdots \cup_{h_{2}^{1}} L_1, \end{equation} where the sewing homeomorphism $H_{*}^{l}$ identifies the boundary component $\partial T_l$ of $L_l$ to the boundary component $\partial T'$ of $L^\ast$ and the sewing homeomorphism $H_{l+1}^{*}$ identifies the boundary component $\partial T$ of $L^\ast$ to the boundary component $\partial T'_{l+1}$ of $L_{l+1}$.
Then we obtain an infinite collection $\mathcal{C}$ by inserting $L^*$'s in (\ref{Decomposition of W^3}). The following result gives an example of a manifold in $\mathcal{C}$. \begin{proposition} The $3$-dimensional example $W$ constructed by Sternfeld belongs to the collection $\mathcal{C}$. \end{proposition} \begin{proof} The manifold $W$ constructed by Sternfeld is homeomorphic to $L^\ast \cup_{H_{*}^{j}} L_{j} \cup_{H_{j}^{*}}L^\ast \cdots $, i.e., it is obtained by inserting $L^*$ in (\ref{Decomposition of W^3}) in every other slot. See Figure \ref{sternfeld}. If one ignores the grey curves shown in Figure \ref{sternfeld}, then the picture is exactly the one given in \cite[P.4]{Ste77}. In other words, the solid tori $T$ and $T_{j-1}'$ form the first stage of Sternfeld's construction. \end{proof} \begin{remark} Let $K_j$ and $K_i$ be the corresponding knot spaces of $W^3$ and $W$ respectively. Although both $W^3$ and $W$ contain a cube with a trefoil-knotted hole at each stage of the construction, the corresponding 60-fold covers of $K_j$ and $K_i$ are different. That is, the 60-fold cover of $K_j$ contains both embedded 2-fold covers and embedded 3-fold covers of the incompressible cube with a trefoil-knotted hole in $K_j$, whereas the 60-fold cover of $K_i$ contains only embedded 2-fold covers of the incompressible cube with a trefoil-knotted hole in $K_i$. \end{remark} \begin{figure}[h!] \centering \includegraphics[ width=6.8cm, height=8cm]{sternfeld} \caption{The difference of the solid tori $T$ (blue) and $T'$ (grey) is $L^\ast$. Here $L_{j-1}$ is the region between $\partial T_{j-1}$ (which has been identified with $\partial T'$) and $\partial T_{j-1}'$. } \label{sternfeld} \end{figure} \begin{question} Does $\mathcal{C}$ contain an infinite subcollection $\mathcal{C}'$ of contractible open 3-manifolds such that each manifold in $\mathcal{C}'$ embeds in no compact, locally connected and locally 1-connected metric $3$-space?
\end{question} \begin{question} The cube with trefoil-knotted hole $C_l$ plays the key role in this paper. Let $K$ be an arbitrary (nontrivial) knot. Can $C_l$ be replaced by a cube with a $K$-knotted hole? More specifically, if we replace $C_l$ at each stage of the construction of $W^3$ by a cube with a $K$-knotted hole, can the resulting contractible open manifold $W'$ embed in some compact, locally connected and locally 1-connected metric 3-space? \end{question} \section*{Acknowledgements} I would like to thank Professor Craig Guilbault for bringing Bing's and Sternfeld's examples to my attention and for many helpful discussions on this work. I also thank the referee for the comments and for giving this paper a very close reading.
\section{Introduction} Marginal treatment effects (MTEs) have unified the identification theory of several policy parameters. While the MTE framework is essentially non-parametric,\footnote{Linearity is sometimes assumed to facilitate estimation. See, e.g., Appendix B in \cite{urzua2006}.} it is required that the recipient's participation in treatment follows a (generalized) Roy model. This is often referred to as additive separability: an ``additive'' comparison of costs and benefits determines selection. On the other hand, identification of the MTE is achieved via the local instrumental variable (LIV) approach (\cite{Heckman2001,Heckman2005}). An excellent survey is provided by \cite{mogstad2018}. An early effort to analyze the MTE under misspecification can be found in the appendix of the seminal paper by \cite{Heckman2001}. They consider a case where additive separability in the selection equation does not hold. The most serious consequence is that the LIV approach does not identify the MTE curve. In this paper we analyze a different type of misspecification. We model a situation in which, under additive separability, a proportion of the population does not take into account the instrumental variable when deciding whether or not to take up treatment. We refer to them as non-responders. To analyze the resulting bias, we define a pseudo-MTE curve which results from the LIV approach. Under no misspecification, the pseudo-MTE curve would coincide with the MTE curve. The resulting bias can be interpreted as a location-scale change of the MTE curve, parameterized by the proportion of non-responders and their propensity score. We have two main results. The first one shows that the ability to recover the conditional average treatment effect (CATE) for the subpopulation of responders depends on the proportion of non-responders only through the support of the responders' propensity score.
Indeed, when the support of the propensity score is the unit interval, it is possible to identify the CATE \emph{without} having to recover the true MTE curve in the first place. In a nutshell, ignoring misspecification and integrating under the pseudo-MTE curve over the support of the observed propensity score yields the correct CATE for the subpopulation of responders. While the previous identification result for the CATE is independent of the proportion of non-responders, this is not true of the MTE curve and other parameters derived from it, such as the LATE and the MPRTE. However, in our second result, we show how to recover the MTE curve for responders by undoing the location-scale change induced by the presence of non-responders. The correction is based on an estimate of the support of the propensity score and requires only observable data. It gives an estimator of the policy parameter of interest that is simple to implement. Cases where the propensity score is fully supported are relevant in practice. For a recent example, see the survey approach of \cite{Briggs2020}, where the probability of having a child is supported on the full unit interval. Recently, \cite{kedadni2021} and \cite{vitor2021} focus on the effect of measurement error in treatment status on the MTE curve. We complement such results by noting that a simple change to our setup can cover the case of misclassification. In a setting where treatment status is misclassified, the observed outcome is generated with the true treatment status, whereas in our setting the observed outcome can be regarded as a mixture of responders' and non-responders' outcomes. The proportion of non-responders is analogous to the proportion of misreporters. Indeed, our results also hold if, instead of having a fraction of non-responders, we have a fraction of misreporters. Another consequence of the presence of non-responders in the sample is that the effect of the instrumental variable on the propensity score is attenuated.
Motivated by this, we model a situation where the proportion of non-responders approaches 1, analogous to the setting of weak instruments of \cite{stockstaiger1997}. Thus, we can derive weak-instrument-like asymptotic distributions for the parameters derived from the MTE curve. The rest of the paper is organized as follows: section \ref{sec:misc_and_mte} introduces the model; section \ref{sec:recover} contains the main identification results; section \ref{sec:bounds} provides bounds for the case where the propensity score is not fully supported in the unit interval; section \ref{sec:weak_iv} traces the connection to the weak IV literature; and section \ref{sec:conclusion} concludes. While this paper only deals with identification, we expect to extend our results to cover estimation and inference. \section{Misspecification and MTE}\label{sec:misc_and_mte} In this section we introduce our model for misspecification in the MTE framework (\cite{Bjorklund1987}, \cite{Heckman2001,Heckman2005}) and analyze the consequences of misspecification from the identification point of view. \subsection{The Model} We start with a general non-separable potential outcome model \begin{align*} Y(0)&=h_0(X,U_0),\\ Y(1)&=h_1(X,U_1),\\ Y&=D^*Y(1)+(1-D^*)Y(0), \end{align*} where $D^*$ is the observed treatment status, $X$ are observable covariates with support denoted by $\mathcal X$, and $\left\{Y(0),Y(1)\right\}, Y$ are potential and observed outcomes, respectively. The functions $h_0$ and $h_1$ are unknown. We model misspecification as a situation where there are two types of individuals: responders and non-responders. Responders select into treatment taking into account the incentives in $Z$. Their selection equation is given by $ D=\mathds 1\left\{ \mu(X,Z)\geq V\right\}$. Non-responders, on the other hand, do not react to incentives in $Z$ at all. Their selection equation is given by $\tilde D=\mathds 1\left\{ \tilde \mu(X)\geq \tilde V\right\}$.
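As an illustration of the contrast between the two selection rules, the following Python sketch (with made-up functional forms and parameter values used only for this example) simulates both types and confirms that the responders' take-up reacts to the instrument while the non-responders' take-up does not:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
z = rng.integers(0, 2, n)                 # a binary instrument (illustrative)

# Responders: D = 1{ mu(X,Z) >= V }, with a toy mu that shifts with Z.
v = rng.random(n)
d = (0.3 + 0.4 * z >= v).astype(float)

# Non-responders: D~ = 1{ mu~(X) >= V~ }; Z does not enter the rule.
v_tilde = rng.random(n)
d_tilde = (0.5 >= v_tilde).astype(float)

# Take-up rates by instrument value.
take_up_d = [d[z == k].mean() for k in (0, 1)]              # moves with Z
take_up_d_tilde = [d_tilde[z == k].mean() for k in (0, 1)]  # flat in Z
```

In this toy design the responders' take-up rises from about 0.3 to about 0.7 as the instrument switches on, while the non-responders' take-up stays near 0.5 at both instrument values.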
Notice how $Z$ does not feature in $\tilde{\mu}(\cdot)$: for the non-responders, $Z$ fails the relevance condition of the standard MTE model. Let $S$ be the latent status of an individual: $S=1$ for a responder and $S=0$ for a non-responder. The observed treatment status $D^*$ is given by: \begin{align}\label{eq:mixture_d} D^* = S \cdot D + (1-S) \cdot \tilde{D}. \end{align} We allow the proportion of non-responders to vary with $X$. To this end, we define $\delta_X = \Pr(S=0|X)=\Pr(D^*= \tilde{D}|X)$. Thus, for every subpopulation with characteristics $X=x$ there is a proportion $\delta_x = \Pr(S=0|X=x) \in [0,1)$ of non-responders. We consider values where $\sup_{x \in \mathcal{X}} \delta_x < 1$ to avoid a situation where no one responds to the instrumental variable. \begin{remark} We observe $Y$ according to $Y=D^*Y(1)+(1-D^*)Y(0)$, which is given by the actual choice $D^*$. If, instead, we have $Y=DY(1)+(1-D)Y(0)$, then we can interpret $D^*$ as a misclassified treatment status. In this case, all individuals decide according to $D=\mathds 1\left\{ \mu(X,Z)\geq V\right\}$, but a fraction of them reports according to $ \tilde D=\mathds 1\left\{ \tilde \mu(X)\geq \tilde V\right\}$. See \cite{kedadni2021} and \cite{vitor2021} for recent studies on the MTE under misclassification. \end{remark} The econometrician observes a cross section of $(Y_i, D^*_i, X_i, Z_i)$. When $\delta_X=0$ almost surely, then $D^*=D$ and we are in the familiar MTE framework of \cite{Heckman2001,Heckman2005}. Otherwise, if $\delta_X \neq 0$, then for an observation of $D^*_i$ we do not know whether we are observing the treatment status of a non-responder or of a responder. That is, it is unknown whether we are observing $D_i$ or $\tilde D_i$. \begin{assumption} \textbf{Type Independence.} \label{Assumption_type} $S\perp Z\mid X$.
\end{assumption} Assumption \ref{Assumption_type} states that, once we control for $X$, the latent status of an individual does not vary with the instrumental variable $Z$. \begin{assumption} \textbf{Relevance and Exogeneity} \label{Assumption_heckman} \begin{enumerate}% \item \label{relevance}$\mu(X,Z)$ is a nondegenerate random variable conditional on $X$. \item \label{exogeneity} $(U_{0},U_{1},V, \tilde V)$ are independent of $Z$ conditional on $X$. \end{enumerate} \end{assumption} Note that, for the subpopulation of non-responders, the instrument is valid but totally irrelevant. The larger the value of $\delta_x$, the ``weaker'' the instrument $Z$, since most participants with $X=x$ are non-responders. With the exception of the requirement that $\tilde V\perp Z\mid X$, these are the same conditions as in \cite{Heckman2001,Heckman2005}. Our additional requirement covers the subpopulation of non-responders: neither the ``cost'' of treatment $\tilde V$ nor the ``benefit'' $\tilde \mu(X)$ depends on $Z$ conditional on $X$. \begin{example} To fix ideas, we can think of a two-part cost of providing the incentive: a fixed cost associated with targeting a particular subpopulation with covariates $X=x$, and the cost of the incentive itself. If $Z$ is a voucher, there could be administrative costs associated with making it available to the subpopulation $X=x$. For non-responders, who do not redeem the voucher, the cost of the incentive is zero. Such a scenario would satisfy Assumption \ref{Assumption_heckman}. \end{example} The mixture structure of Equation \eqref{eq:mixture_d} allows us to define three different propensity scores: an observed/identified one, based on the observables $(D^*,X,Z)$, and two latent/unobserved ones: one for the responders and one for the non-responders.
Formally, they are given by \begin{align*} P^*(X,Z)&:=\Pr(D^*=1|X,Z) & \textbf{(Observed)}\\ P(X,Z)&:=\Pr(D=1|S=1,X,Z) & \textbf{(Responders)} \\ \tilde P(X)&:=\Pr(\tilde D=1|S=0,X) & \textbf{(Non-responders)} \end{align*} The next result takes advantage (mainly) of Assumption \ref{Assumption_type} to derive a useful linear relation between them. \begin{lemma}\label{lemma_prop_score} Under Assumptions \ref{Assumption_type} and \ref{Assumption_heckman}.\ref{exogeneity}, we can relate the different propensity scores by \begin{align}\label{eq:obs_ps} P^*(X,Z)=(1-\delta_X)\cdot P(X,Z)+\delta_X \cdot\tilde P(X). \end{align} \end{lemma} \begin{proof} Starting with the model in \eqref{eq:mixture_d}, we can write \begin{align*} \Pr(D^*=1|X,Z)&=\Pr(S=1|X,Z)\cdot \Pr(D=1|S=1,X,Z)\\& + \Pr(S=0|X,Z)\cdot \Pr(\tilde D=1|S=0,X,Z). \end{align*} Assumption \ref{Assumption_type} simplifies the mixing probabilities to $\Pr(S=1|X)=1-\delta_X$ and $\Pr(S=0|X)=\delta_X$. We obtain \begin{align*} \Pr(D^*=1|X,Z)=(1-\delta_X)\cdot \Pr(D=1|S=1,X,Z) + \delta_X\cdot \Pr(\tilde D=1|S=0,X,Z). \end{align*} To see that $\Pr(\tilde D=1|S=0,X,Z)=\Pr(\tilde D=1|S=0,X)$, note that, by Assumptions \ref{Assumption_type} and \ref{Assumption_heckman}.\ref{exogeneity}: \begin{align*} \Pr(\tilde D=1|S=0,X,Z) = \Pr(\tilde \mu(X)\geq\tilde V|S=0,X,Z)=\Pr(\tilde \mu(X)\geq\tilde V|X)= \Pr(\tilde D=1|S=0,X). \end{align*} Therefore \begin{align*} \Pr(D^*=1|X,Z)&=(1-\delta_X)\cdot \Pr(D=1|S=1, X,Z) + \delta_X\cdot \Pr(\tilde D=1|S=0,X)\\ &=(1-\delta_X)\cdot P(X,Z)+\delta_X \cdot\tilde P(X). \end{align*} \end{proof} For a fixed $X=x$, the result in Lemma \ref{lemma_prop_score} shows that the observed propensity score (still random through $Z$) is a linear transformation of the propensity score for the responders.
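As a quick numerical check of Lemma \ref{lemma_prop_score}, the Python sketch below (all parameter values are illustrative) simulates the mixture in \eqref{eq:mixture_d} for a single covariate cell and compares the simulated observed propensity score with the linear formula above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
delta = 0.3                   # illustrative proportion of non-responders
p_tilde = 0.5                 # illustrative non-responders' propensity score
p_resp = {0: 0.2, 1: 0.8}     # illustrative responders' propensity P(x, z)

def observed_propensity(z):
    """Monte Carlo estimate of P*(x, z) under the mixture D* = S D + (1-S) D~."""
    s = rng.random(n) < 1 - delta        # S = 1 marks a responder
    d = rng.random(n) < p_resp[z]        # responders' choice reacts to z
    d_tilde = rng.random(n) < p_tilde    # non-responders' choice ignores z
    return np.where(s, d, d_tilde).mean()

p_star = {z: observed_propensity(z) for z in (0, 1)}
# Lemma: P*(x,z) = (1 - delta) P(x,z) + delta P~(x), so differences across
# instrument values are attenuated by the factor (1 - delta).
```

With these values the simulated $P^*(x,z)$ matches $(1-\delta_x)P(x,z)+\delta_x\tilde P(x)$ up to Monte Carlo error, and the difference across instrument values shrinks by the factor $1-\delta_x$.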
If, additionally, we take two different values of $Z$, say $z$ and $z'$, we can remove the contribution of $\tilde P(X)$, which is invariant with respect to $z$, and obtain\footnote{We write $P^*(x,z)$ for $\Pr(D^*=1|X=x,Z=z)$, and $P(x,z)$ for $\Pr(D=1|S=1,X=x,Z=z)$.} \begin{align}\label{eq:disc_pz} P^*(x,z)-P^*(x,z')=(1-\delta_x)\cdot \left[P(x,z)-P(x,z')\right]. \end{align} Equation \eqref{eq:disc_pz} says that the changes in the observed propensity score induced by varying $Z$ are proportional to the changes in the true propensity score induced by varying $Z$. Thus, if we knew $\delta_x$, we could recover the change in the propensity score for the responders. When $Z$ is continuous, we can take a limiting version of this argument, \textit{e.g.}, as $z'\to z$, to obtain \begin{align}\label{eq:der_pz} \frac{\partial P^*(x,z)}{\partial z}=(1-\delta_x)\cdot \frac{\partial P(x,z)}{\partial z}. \end{align} Both the discrete (Equation \eqref{eq:disc_pz}) and the continuous (Equation \eqref{eq:der_pz}) changes in the propensity score play a role in the relationship between the MTE curve (defined below) and certain parameters of interest. \subsection{The MTE for Responders} For the subpopulation of responders, the standard MTE framework holds. This motivates us to define an MTE curve for this subpopulation. In doing so, we are implicitly assuming that this is our object of interest. The reason for this is that many times we can also control the instrumental variable $Z$; thus, to assess the effects of manipulations of $Z$, we look at the MTE curve for responders. Let $\mathcal P_x$ and $\mathcal P^*_x$ denote the supports of $P(x,Z):=\Pr(D=1|X=x,Z)$ and $P^*(x,Z):=\Pr(D^*=1|X=x,Z)$, respectively. For the subpopulation of responders, we rewrite the selection equation as $D=\mathds 1\left\{ P(X,Z)\geq U_D \right\}$ where $U_D\sim U_{(0,1)}$.\footnote{This follows from $D=\mathds 1\{ F_{V|S,X,Z}(\mu(X,Z)|1,X,Z)\geq F_{V|S,X,Z}(V|1,X,Z)\}$.
By Assumptions \ref{Assumption_heckman}.(\ref{exogeneity}) and \ref{Assumption_type}, we then have $D=\mathds 1\{ P(X,Z)\geq F_{V|S,X}(V|1,X)\}$. Finally, we take $U_D:=F_{V|S,X}(V|1,X)$.} Thus, we define the MTE curve for responders as $$\text{MTE}(u,x):=\mathbb E\left[Y(1)-Y(0)|S=1,U_D=u,X=x\right].$$ By the LIV approach we have the following equivalence result:\footnote{See \cite{Heckman2001} for sufficient conditions.} \begin{equation}\label{eq:mte_liv} \text{MTE}(u,x)=\frac{\partial \mathbb E\left[Y|S=1,P(X,Z)=u,X=x\right]}{\partial u}\text{ for }u\in \mathcal P_x. \end{equation} Since we do not observe $P(X,Z)$, this is \emph{not} an identification result in our setting. In a similar fashion, we \emph{define} the following pseudo-MTE curve: \begin{align}\label{eq:pseudo_mte} \text{MTE}^*(u,x;\delta_x):=\frac{\partial \mathbb E\left[Y|P^*(X,Z)=u,X=x\right]}{\partial u} \text{ for }u\in \mathcal P^*_x. \end{align} We emphasize that the pseudo-MTE curve is indexed by $\delta_x$ because it depends implicitly on the proportion of non-responders. From the data, we can only compute $\text{MTE}^*(u,x;\delta_x)$, not $\text{MTE}(u,x)$. The pseudo-MTE curve is the curve that would be mistakenly taken to be the MTE curve. Indeed, in the absence of non-responders, $\text{MTE}^*(u,x;0)=\text{MTE}(u,x)$. If non-responders are present in the $X=x$ subpopulation, that is, if $\delta_x> 0$, the observed $\text{MTE}^*(u,x;\delta_x)$ does not identify $\text{MTE}(u,x)$. In other words, the LIV approach is biased. We can now fully characterize the bias induced by $\delta_x$ on the MTE curve. \begin{lemma}\label{lemma:bias_mte} Under Assumptions \ref{Assumption_type} and \ref{Assumption_heckman}, we can write \begin{align}\label{eq:equivalence_2} \text{MTE}(v,x)=(1-\delta_x) \text{MTE}^*\left ( (1-\delta_x)v+\delta_x\tilde P(x),x;\delta_x\right ) \text{ for }v\in \mathcal P_x.
\end{align} \end{lemma} \begin{proof} Using \eqref{eq:obs_ps}, for $u\in \mathcal P^*_x$, we can write \begin{align*} \mathbb E\left[Y|P^*(X,Z)=u,X=x\right] &= \mathbb E\left[Y|(1-\delta_x)\cdot P(X,Z)+\delta_x\cdot\tilde P(X)=u,X=x\right] \\ &= \mathbb E\left[Y\bigg | P(X,Z)=\frac{u-\delta_x \tilde P(x)}{1-\delta_x},X=x\right] \end{align*} Differentiating with respect to $u$, we obtain \begin{align}\label{eq:mte_mte} \text{MTE}^*(u,x;\delta_x)=\frac{1}{1-\delta_x} \text{MTE}\left (\frac{u-\delta_x \tilde P(x)}{1-\delta_x},x\right ) \text{ for }u\in \mathcal P^*_x, \end{align} since $\frac{u-\delta_x \tilde P(x)}{1-\delta_x}\in \mathcal P_x$ by \eqref{eq:obs_ps}. Alternatively, we can write \begin{align*} \text{MTE}(v,x)=(1-\delta_x) \text{MTE}^*\left ( (1-\delta_x)v+\delta_x\tilde P(x),x;\delta_x\right ) \text{ for }v\in \mathcal P_x. \end{align*} \end{proof} Lemma \ref{lemma:bias_mte} shows that the bias takes the form of both a location and a scale change. Equation \eqref{eq:mte_mte}, which is equivalent to Equation \eqref{eq:equivalence_2},\footnote{Note the change in the domains between \eqref{eq:equivalence_2} and \eqref{eq:mte_mte}.} shows that $\text{MTE}^*$ is obtained by changing the location from $u$ to $u-\delta_x\tilde P(x)$, and rescaling by $(1-\delta_x)^{-1}$. Thus, as in a location-scale family of densities, we can regard $\text{MTE}^*$ as a family of curves, defined over $\mathcal P^*_x$, indexed by $\delta_x$ and $\tilde P(x)$. \section{Automatic and explicit de-biasing}\label{sec:recover} We now introduce our two main results. We show that, for any subpopulation $X=x$ where the instrument is strong enough to induce a propensity score supported on the full unit interval $[0,1]$, the associated $\text{CATE}(x)$ can be identified for responders. This is true even if the $\text{MTE}^*(u,x;\delta_x)$ curve is biased for $\text{MTE}(u,x)$. We note that the identified $\text{CATE}(x)$ parameter corresponds to the subpopulation of responders.
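Before turning to these results, the location-scale relation of Lemma \ref{lemma:bias_mte} can be illustrated numerically. The Python sketch below (with illustrative values of $\delta_x$ and $\tilde P(x)$ and a toy linear MTE curve) constructs the pseudo-MTE through the forward map \eqref{eq:mte_mte} and checks that the map in \eqref{eq:equivalence_2} undoes it exactly:

```python
import numpy as np

delta, p_tilde = 0.25, 0.4        # illustrative delta_x and P~(x)

def mte(v):
    """A toy (linear) MTE curve for responders on [0, 1]."""
    return 2.0 - 3.0 * v

def mte_star(u):
    """Pseudo-MTE implied by the location-scale relation: shift and rescale."""
    return mte((u - delta * p_tilde) / (1 - delta)) / (1 - delta)

def debias(v):
    """Undo the location-scale change, as in the lemma."""
    return (1 - delta) * mte_star((1 - delta) * v + delta * p_tilde)

v = np.linspace(0.0, 1.0, 101)
max_err = float(np.max(np.abs(debias(v) - mte(v))))
```

The maximum discrepancy between the de-biased curve and the original toy MTE curve is zero up to floating-point error, which is exactly the content of the two equivalent formulas in the lemma.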
\begin{assumption}\label{full_support}\textbf{Full Support.} The support of $P(x,Z)$ is $\mathcal P_x=[0,1]$ for every $x$ in a subset $\mathcal{X}_B \subseteq \mathcal{X}$. \end{assumption} Assumption \ref{full_support} says that the incentive in the instrument $Z$ is strong enough to induce any individual in the $X=x$ subpopulation into or out of treatment. Perhaps surprisingly, the $\text{CATE}(x)$ can be recovered by resorting only to the full support assumption. That is, to correctly compute the $\text{CATE}(x)$ we do not need to recover the true MTE curve for responders. \begin{theorem}\label{theorem:cate} Let Assumptions \ref{Assumption_type}, \ref{Assumption_heckman}, and \ref{full_support} hold. Then, for any $x \in \mathcal{X}_B$: \begin{align*} \text{CATE}(x) =\int_{\inf \mathcal P^*_x}^{\sup \mathcal P^*_x} \text{MTE}^*(u,x;\delta_x)du. \end{align*} \end{theorem} \begin{proof} The conditional average treatment effect, $\text{CATE}(x)$, could be computed using the true MTE curve (if it were observed) as \begin{align*} \text{CATE}(x) =\int_0^1 \text{MTE}(u,x)du. \end{align*} Given that $\mathcal P_x=[0,1]$, we have $\mathcal P^*_x= [\underline{p_x^*} , \overline{p_x^*}]$, where $\underline{p_x^*}:=\inf \mathcal P^*_x=\delta_x\tilde P(x)$ and $\overline{p_x^*}:=\sup \mathcal P^*_x=(1-\delta_x)+\delta_x\tilde P(x)$. Consider integrating the pseudo-MTE curve over the support of the observed propensity score: \begin{align*} \int_{\delta_x\tilde P(x)}^{(1-\delta_x)+\delta_x\tilde P(x)} \text{MTE}^*(u,x;\delta_x)du.
\end{align*} Using \eqref{eq:mte_mte}, we have \begin{align*} \int_{\delta_x\tilde P(x)}^{(1-\delta_x)+\delta_x\tilde P(x)} \text{MTE}^*(u,x;\delta_x)du &= \int_{\delta_x\tilde P(x)}^{(1-\delta_x)+\delta_x\tilde P(x)} \frac{1}{1-\delta_x} \text{MTE}\left (\frac{u-\delta_x \tilde P(x)}{1-\delta_x},x\right ) du\\ &=\int_{0}^{1}\text{MTE}(u,x)du\\ &=\text{CATE}(x) \end{align*} where we have used the change of variables \begin{align*} v = \frac{u-\delta_x \tilde P(x)}{1-\delta_x}. \end{align*} \end{proof} \begin{remark} The result of Theorem \ref{theorem:cate} states that integrating the observed (and biased) marginal treatment effect curve over the support of the observed (and biased) propensity score yields the $\text{CATE}(x)$, provided that the propensity score for responders has full support. Thus, under the type of misspecification described in \eqref{eq:mixture_d}, $\text{CATE}(x)$ is robust to $\delta_x\neq 0$. \end{remark} \begin{remark} This result also holds in a setting of misclassification and was our original motivation. That is, in a setting where, instead of $Y=D^*Y(1)+(1-D^*)Y(0)$, we have $Y=DY(1)+(1-D)Y(0)$ and we interpret $D^*$ as a misclassified treatment status. \end{remark} Unfortunately, the automatic ``de-biasing'' in Theorem \ref{theorem:cate} does not hold for the other policy parameters that can be obtained via the MTE curve. On the other hand, we show that the full support assumption can be used to identify $\delta_x$, which allows an explicit ``de-biasing'' procedure. Given that $\mathcal P^*_x:= [\underline{p_x^*} , \overline{p_x^*} ] =[\delta_x\tilde P(x), (1-\delta_x)+ \delta_x\tilde P(x)]$, we can actually identify both $\delta_x$ and $\tilde P(x)$. It then follows from Lemma \ref{lemma:bias_mte} that we can recover the $\text{MTE}(u,x)$ curve. \begin{proposition} \label{prop:identification of delta_x} Let Assumptions \ref{Assumption_type}, \ref{Assumption_heckman}, and \ref{full_support} hold.
Then $\delta_x$ is identified for any $x \in \mathcal{X}_B$ through: \begin{equation*} \delta_x = 1-(\overline{p_x^*} -\underline{p_x^*} ). \end{equation*} \end{proposition} \begin{proof} According to Equation \eqref{eq:obs_ps}, the range of the observed propensity score is given by $\mathcal P^*_x=[\delta_x\tilde P(x), (1-\delta_x)+ \delta_x\tilde P(x)]$. For each $x$, the observed propensity score $P^*(\cdot)$ can be viewed as an affine function of $P(\cdot)$, parameterized by $\delta_x$ and $\tilde{P}(x)$. For the endpoints $\underline{p}_x$ and $\overline{p}_x$ of the true propensity score, we have the mappings: \begin{align*} \underline{p_x} \mapsto (1-\delta_x) \underline{p_x} + \delta_x \tilde{P}(x) \\ \overline{p_x} \mapsto (1-\delta_x) \overline{p_x} + \delta_x \tilde{P}(x) \end{align*} The images of these mappings are observed: they are the endpoints of the observed propensity score $P^*(Z,x)$. If the endpoints of the true $P(\cdot)$ are known to be $\underline{p_x} = 0$ and $\overline{p_x} =1$, as stated in Assumption \ref{full_support}, the mapping above can be recovered from the following system of two equations in the two unknowns $\tilde P(x)$ and $\delta_x$: \begin{align*} \underline{p_x^*} &= \delta_x\tilde P(x)\\ \overline{p_x^*} &=(1-\delta_x)+ \delta_x\tilde P(x) \end{align*} which implies that \begin{align*} \delta_x &=1-(\overline{p_x^*} -\underline{p_x^*}) \\ \tilde P(x)&=\underline{p_x^*} \cdot \frac{1}{\delta_x} \end{align*} \end{proof} The intuition for this result is simple. Because the true propensity score $P(Z,x)$, for any fixed $x$, is supported on the full unit interval, the observed support $\mathcal P^*_x=[\underline{p_x^*} , \overline{p_x^*} ]$ contains enough information to identify $\delta_x$. This is summarized in Figure \ref{fig:MTE_graph}.
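Both identification results can be verified numerically. In the Python sketch below (toy values; the pseudo-MTE is generated directly from the location-scale relation \eqref{eq:mte_mte}), $\delta_x$ and $\tilde P(x)$ are recovered from the observed support endpoints alone, and integrating the pseudo-MTE over $\mathcal P^*_x$ returns the CATE as in Theorem \ref{theorem:cate}:

```python
import numpy as np

delta, p_tilde = 0.2, 0.6                # illustrative truth: delta_x and P~(x)

def mte(v):
    """Toy responders' MTE curve; its integral over [0, 1] (the CATE) is 0.5."""
    return 2.0 - 3.0 * v

def mte_star(u):
    """Pseudo-MTE generated from the location-scale relation."""
    return mte((u - delta * p_tilde) / (1 - delta)) / (1 - delta)

# Observed support endpoints when P(x, Z) is supported on [0, 1].
p_lo = delta * p_tilde                   # inf of P*(x, Z)
p_hi = (1 - delta) + delta * p_tilde     # sup of P*(x, Z)

# Proposition: recover delta_x and P~(x) from the observed support alone.
delta_hat = 1 - (p_hi - p_lo)
p_tilde_hat = p_lo / delta_hat

# Theorem: integrating the pseudo-MTE over [p_lo, p_hi] returns the CATE.
u = np.linspace(p_lo, p_hi, 200_001)
f = mte_star(u)
cate_hat = float(np.sum((f[:-1] + f[1:]) * np.diff(u)) / 2.0)  # trapezoid rule
```

Here the recovered $\hat\delta_x$ and $\hat{\tilde P}(x)$ match the toy truth, and the integral of the biased curve over the observed support equals the CATE of the toy responders' MTE curve, with no knowledge of $\delta_x$ used in the integration step.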
\begin{figure} \centering \includegraphics[width=\textwidth]{MTE_graph.png} \caption{Identifying $\delta_x$: The figure shows the link between the non-responders' propensity score, the proportion of non-responders, and the observed propensity score. Because the non-responders' propensity score does not vary with the instrument $Z$ and $supp(P(Z,x)) =[0,1]$, the proportion $\delta_x$ can be recovered from the discrepancy between the observed support of $P^*(Z,x)$ and $[0,1]$. The figure illustrates this for one point, $x_0$. } \label{fig:MTE_graph} \end{figure} Having identified $\delta_x$, we use Equation \eqref{eq:mte_mte} to identify the \text{MTE} curve. \begin{corollary}\label{cor:mte_id} Let Assumptions \ref{Assumption_type}, \ref{Assumption_heckman}, and \ref{full_support} hold. Then, the \text{MTE} curve is identified: \begin{align*} \text{MTE}(v,x)=(\overline{p_x^*} -\underline{p_x^*} ) \text{MTE}^*\left ( (\overline{p_x^*} -\underline{p_x^*} )v+\underline{p_x^*},x;1-(\overline{p_x^*} -\underline{p_x^*})\right ) \text{ for }v\in \mathcal P_x = [0,1], \end{align*} where $\underline{p_x^*} = \inf \mathcal P^*_x$ and $ \overline{p_x^*} = \sup \mathcal P^*_x.$ \end{corollary} This corollary provides the correct ``de-biasing'' to be performed on the observed MTE curve to match the true MTE curve. However, it is possible to recover parameters that are based on the MTE curve \emph{without} having to recover the MTE curve in the first place. We provide two examples. \begin{example}[LATE]\label{example_late} Consider the $\text{LATE}$, for $P(x,z')<P(x,z)$ with $z,z'\in\mathcal Z$, which can be obtained from the MTE curve as \begin{align*} \text{LATE}(x, P(x,z), P(x,z'))=\frac{1}{P(x,z)-P(x,z')}\int_{P(x,z')}^{P(x,z)}\text{MTE}(u,x)du.
\end{align*} Under misspecification, for the same $z,z'\in\mathcal Z$, we have \begin{align*} \text{LATE}^*(x, P^*(x,z), P^*(x,z'))&=\frac{1}{P^*(x,z)-P^*(x,z')}\int_{P^*(x,z')}^{P^*(x,z)}\text{MTE}^*(u,x;\delta_x)du\\ &=\frac{(1-\delta_x)^{-1}}{P(x,z)-P(x,z')}\int_{(1-\delta_x)P(x,z')+\delta_x\tilde P(x)}^{(1-\delta_x)P(x,z)+\delta_x\tilde P(x)}\frac{1}{1-\delta_x}\\ &\times \text{MTE}\left(\frac{u-\delta_x\tilde P(x)}{1-\delta_x},x\right)du. \end{align*} Note that to go from $\text{MTE}^*$ to $\text{MTE}$ we used Lemma \ref{lemma:bias_mte}. We did not use Corollary \ref{cor:mte_id}. Defining the change of variables $\tilde u = \frac{u-\delta_x\tilde P(x)}{1-\delta_x}$, we get $(1-\delta_x)d\tilde u = du.$ We then write \begin{align*} \text{LATE}^*(x, P^*(x,z), P^*(x,z'))&=\frac{(1-\delta_x)^{-1}}{P(x,z)-P(x,z')}\int_{(1-\delta_x)P(x,z')+\delta_x\tilde P(x)}^{(1-\delta_x)P(x,z)+\delta_x\tilde P(x)}\frac{1}{1-\delta_x}\\ &\times \text{MTE}\left(\frac{u-\delta_x\tilde P(x)}{1-\delta_x},x\right)du\\ &=\frac{(1-\delta_x)^{-1}}{P(x,z)-P(x,z')}\int_{P(x,z')}^{P(x,z)} \text{MTE}(u,x)du\\ &=\frac{1}{1-\delta_x}\text{LATE}(x, P(x,z), P(x,z')). \end{align*} Now, since $\delta_x =1-(\overline{p_x^*} -\underline{p_x^*})$ by Proposition \ref{prop:identification of delta_x}, the explicit de-biasing is achieved by \begin{align*} (\overline{p_x^*}-\underline{p_x^*})\text{LATE}^*(x, P^*(x,z), P^*(x,z'))&=\text{LATE}(x, P(x,z), P(x,z')). \end{align*} The left hand side can be computed from the data. \end{example} \begin{example}[MPRTE]\label{example_mprte} The marginal policy relevant treatment effect (MPRTE) is an average of the $\text{MTE}(u,x)$ along the margin of indifference: when $U_D=P(X,Z)$. 
It is given by \begin{align*} \text{MPRTE}(x) = \int_{\mathcal Z} \text{MTE}(P(x,z),x)\frac{\partial P(x,z)}{\partial z} \left(E\left[\frac{\partial [P(x,Z)]}{\partial z}\right]\right)^{-1}f_{Z|X}(z|x)dz. \end{align*} Then, using Equations \eqref{eq:der_pz} and \eqref{eq:equivalence_2}, we get \begin{align*} \text{MPRTE}^*(x) &= \int_{\mathcal Z} \text{MTE}^*(P^*(x,z),x;\delta_x)\frac{\partial P^*(x,z)}{\partial z} \left(E\left[\frac{\partial [P^*(x,Z)]}{\partial z}\right]\right)^{-1}f_{Z|X}(z|x)dz\\ &= \int_{\mathcal Z} \frac{1}{1-\delta_x} \text{MTE}(P(x,z),x)\frac{\partial P(x,z)}{\partial z} \left(E\left[\frac{\partial [P(x,Z)]}{\partial z}\right]\right)^{-1}f_{Z|X}(z|x)dz\\ &=\frac{1}{1-\delta_x}\text{MPRTE}(x). \end{align*} Thus, again, by Proposition \ref{prop:identification of delta_x}, we obtain \begin{align*} (\overline{p_x^*}-\underline{p_x^*})\text{MPRTE}^*(x) &=\text{MPRTE}(x) . \end{align*} \end{example} In the previous examples, proceeding as if there were no misspecification yields biased parameters. Thus, the automatic ``de-biasing'' in CATE is the exception rather than the rule. \section{Bounds under limited support}\label{sec:bounds} Instead of assuming full support, we now allow for limited support of the propensity score $P(x,Z)$, but we still require that its support is an interval. \begin{assumption}\label{limited_support}\textbf{Limited Support.} The support of $P(x,Z)$ is $\mathcal P_x=[\underline{p_x},\overline{p_x}]\subset[0,1]$. \end{assumption} \noindent Under Assumption \ref{limited_support}, and using \eqref{eq:obs_ps}, we have that the observed support of $P^*(X,Z)$ is \begin{align*} [\underline{p_x^*} , \overline{p_x^*} ]= [(1-\delta_x)\underline{p_x} + \delta_x\tilde P(x), (1-\delta_x)\overline{p_x} + \delta_x\tilde P(x)]. \end{align*} Taking the difference, we obtain that $\overline{p_x^*} - \underline{p_x^*}=(1-\delta_x)(\overline{p_x} -\underline{p_x} )$.
Since $\overline{p_x} -\underline{p_x}\leq 1$, then $\overline{p_x^*} - \underline{p_x^*}\leq (1-\delta_x)$, so that a lower bound for $\delta_x$ is $\delta_x\geq 1-(\overline{p_x^*} - \underline{p_x^*})$. In general, it is not possible to provide an upper bound for $\delta_x$. This is similar to the case of misclassification. Following that literature (see Assumption 4 in \cite{kedadni2021}, and references therein), we assume it is known that for some $\overline \delta_x$: $\delta_x\leq \overline \delta_x<1$. Thus, we can write $1-(\overline{p_x^*} - \underline{p_x^*})\leq \delta_x\leq \overline\delta_x.$ The correction factor in Examples \ref{example_late} and \ref{example_mprte} is $(1-\delta_x)$. Now, it is bounded by $ 1-\overline \delta_x \leq 1-\delta_x\leq \overline{p_x^*} - \underline{p_x^*}$. Thus, we can bound both the LATE and the MPRTE using these inequalities: \begin{align*} (1-\overline\delta_x)\text{LATE}^*(x, P^*(x,z), P^*(x,z')) &\leq \text{LATE}(x, P(x,z), P(x,z'))\\&\leq (\overline{p_x^*} - \underline{p_x^*})\text{LATE}^*(x, P^*(x,z), P^*(x,z')), \end{align*} and \begin{align*} (1-\overline\delta_x)\text{MPRTE}^*(x)\leq \text{MPRTE}(x) \leq (\overline{p_x^*} - \underline{p_x^*})\text{MPRTE}^*(x). \end{align*} Naturally, if $\overline \delta_x$ is not known, we can only provide upper bounds. Again, we stress that it is not necessary to bound the MTE curve in the first place. Such a bound can be complicated to obtain since, by Lemma \ref{lemma:bias_mte}, $\delta_x$ enters in three different ways in the observed MTE curve. \section{Misspecification as a weak instrument}\label{sec:weak_iv} We can frame our model as the triangular scheme of \cite{stockstaiger1997} and consider a sequence $\left\{\delta_{x,n}\right\}_{n=1}^{\infty}$ such that $\lim_{n\to\infty}\delta_{x,n}=1$ at a certain rate as $n\to\infty$. Thus, as $n\to\infty$, the instrument becomes irrelevant in the model.
A possible indicator of the presence of a large value of $\delta_{x,n}$ can be the average derivative of the observed propensity score, which equals an attenuated version of the average derivative of the true propensity score. For a given value of $\delta_{x,n}$, by equation \eqref{eq:der_pz}, we have \begin{align*} E\left[ \frac{\partial P^*(x,Z)}{\partial z} \right] = (1-\delta_{x,n})E\left[ \frac{\partial P(x,Z)}{\partial z} \right]. \end{align*} Thus a ``small'' value can be an indication that $\delta_{x,n}$ is close to 1. This is similar to a first stage regression in the linear model. We take the derivative with respect to $z$ to eliminate the non-responders' propensity score, which does not respond to $Z$. We average because this is likely to be a non-linear expression. Thus, $(1-\delta_{x,n})$ can be thought of as the counterpart of $C/\sqrt T$ in the notation of \cite{stockstaiger1997}. Indeed, define \begin{align*} Cov_x(Z,D^*):=E[ZD^*|X=x] -E[Z|X=x]E[D^*|X=x]. \end{align*} We have \begin{align*} E[ZD^*|X=x] &= E[ZSD|X=x] + E[Z(1-S)\tilde D|X=x]\\ &=E[ZSD|X=x] + E[Z|X=x]E[(1-S)\tilde D|X=x] \end{align*} and \begin{align*} E[D^*|X=x] = E[SD|X=x] + E[(1-S)\tilde D|X=x]. \end{align*} Thus, \begin{align*} Cov_x(Z,D^*)&= E[ZSD|X=x] - E[Z|X=x]E[SD|X=x]\\ &+ E[Z|X=x]E[(1-S)\tilde D|X=x] -E[Z|X=x]E[(1-S)\tilde D|X=x] \\ &=Cov_x(Z,SD), \end{align*} which is the covariance between the instrument and treatment status for the responders with $X=x$.
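The covariance identity above can be checked numerically. The following sketch (illustrative, not from the paper; the responder share, the non-responders' treatment probability, and the sample size are assumed values) simulates responders whose treatment reacts to $Z$ and non-responders whose treatment does not, and compares the two covariances:

```python
import numpy as np

# Numerical sketch of Cov_x(Z, D*) = Cov_x(Z, S D): the non-responders'
# treatment D~ is independent of Z, so it drops out of the covariance.
# All parameter values below are illustrative assumptions.
rng = np.random.default_rng(1)
n = 500_000

Z = rng.uniform(size=n)
S = rng.uniform(size=n) < 0.6                        # responders, independent of Z
D = (rng.uniform(size=n) < Z).astype(float)          # responders' treatment reacts to Z
D_tilde = (rng.uniform(size=n) < 0.5).astype(float)  # non-responders: no reaction to Z

D_star = np.where(S, D, D_tilde)                     # observed treatment status

cov_zdstar = np.cov(Z, D_star)[0, 1]
cov_zsd = np.cov(Z, S * D)[0, 1]
print(round(cov_zdstar, 4), round(cov_zsd, 4))
```

Up to sampling noise, the two covariances coincide, illustrating that only the responders' treatment choices contribute to the instrument's first-stage strength.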
To see the role of the rate at which $\delta_{x,n}$ converges to 1, suppose for a second that we know the functional form of $P^*(x,Z)$, and we estimate the average derivative using a sample mean: \begin{align*} \hat E\left[ \frac{\partial P^*(x,Z)}{\partial z} \right] = \frac{1}{n}\sum_{i=1}^n \frac{\partial P^*(x,Z_i)}{\partial z} =(1-\delta_{x,n}) \frac{1}{n}\sum_{i=1}^n \frac{\partial P(x,Z_i)}{\partial z} \end{align*} Then \begin{align*} \hat E\left[ \frac{\partial P^*(x,Z)}{\partial z} \right] - E\left[ \frac{\partial P^*(x,Z)}{\partial z} \right] = (1-\delta_{x,n})\left ( \frac{1}{n}\sum_{i=1}^n \frac{\partial P(x,Z_i)}{\partial z}-E\left[ \frac{\partial P(x,Z)}{\partial z} \right]\right) \end{align*} In order to investigate possible discontinuities in the limiting distributions, we follow \cite{kuersteiner2002}, and we let $(1-\delta_{x,n})=n^{\nu_{x}}$, for $\nu_{x}<0$. We obtain \begin{align*} \hat E\left[ \frac{\partial P^*(X,Z)}{\partial z} \right] - E\left[ \frac{\partial P^*(X,Z)}{\partial z} \right] = O_p(n^{\nu_{x}-1/2}). \end{align*} Then, we obtain a degenerate limit: \begin{align*} \sqrt n\left(\hat E\left[ \frac{\partial P^*(X,Z)}{\partial z} \right] - E\left[ \frac{\partial P^*(X,Z)}{\partial z} \right]\right) = o_p(1) \end{align*} Now consider the MPRTE. Recall that, by Example \ref{example_mprte}, under the full support guaranteed by Assumption \ref{full_support}, \begin{align*} n^{\nu_{x}}\text{MPRTE}^*(x) =\text{MPRTE}(x). \end{align*} Assume that, if $\delta_x=0$, there exists $\hat{\text{MPRTE}}(x)$, a $\sqrt n$-consistent estimator of $\text{MPRTE}(x)$ such that \begin{align*} \hat{\text{MPRTE}}^*(x)-\text{MPRTE}^*(x) = n^{-\nu_{x}}\left (\hat{\text{MPRTE}}(x) - \text{MPRTE}(x) \right). \end{align*} Thus, if $\nu_{x}=-1/2$, then $\hat{\text{MPRTE}}^*(x)$ does not converge in probability. In future work, we will use these results to construct confidence intervals for the parameters of interest. 
\section{Conclusion}\label{sec:conclusion} In this paper we use the MTE framework to model a proportion of individuals who do not respond to the incentives of the instrumental variable. We show that in the special case where the observed propensity score is fully supported on the unit interval, i) the CATE is automatically identified despite the presence of non-responders, and ii) we can identify the proportion of non-responders and use it to recover the MTE curve and any parameter associated with it. We show that for some parameters, such as the LATE and the MPRTE, it is even possible to bypass the recovery of the MTE curve and directly recover these parameters. Moreover, if the propensity score has limited support, we find bounds for the LATE, the MPRTE, and the MTE curve. When we let the proportion of non-responders approach 1 at a certain rate, the framework resembles that of weak instruments. In future research we hope to leverage the results in this literature to construct valid confidence intervals for the MTE curve and related parameters. \bibliographystyle{econometrica}
\section{Introduction} Multi-Armed Bandit (MAB) algorithms are often used in web services \citep{agarwal2016making, li2010contextual}, sensor networks \citep{tran2012long}, medical trials \citep{badanidiyuru2018bandits,rangi2019unifying}, and crowdsourcing systems \citep{rangi2018multi}. The distributed nature of these applications makes these algorithms prone to third party attacks. For example, in web services decision making critically depends on reward collection, and this is prone to attacks that can impact observations and monitoring, delay or tamper with rewards, produce link failures, and generally modify or delete information through hijacking of communication links \citep{agarwal2016making,cardenas2008secure,rangi2021learning}. Making these systems secure requires an understanding of the regime where the systems can be attacked, as well as designing ways to mitigate these attacks. In this paper, we study both of these aspects in a stochastic MAB setting. We consider a data poisoning attack, also referred to as a man-in-the-middle (MITM) attack. In this attack, there are three agents: the environment, the learner (MAB algorithm), and the attacker. At each discrete time-step $t$, the learner selects an action $i_t$ among $K$ choices, the environment then generates a reward $r_t(i_t)\in[0,1]$ corresponding to the selected action, and attempts to communicate it to the learner. However, an adversary intercepts $r_t(i_t)$ and can contaminate it by adding noise $\epsilon_t(i_t)\in [-r_t(i_t),1-r_t(i_t)]$. It follows that the learner observes the contaminated reward $r^o_t(i_t)=r_t(i_t)+\epsilon_t(i_t)$, and $r^o_t(i_t)\in [0,1]$. Hence, the adversary acts as a ``man in the middle'' between the learner and the environment.
We present an upper bound on both the amount of contamination, which is the total amount of additive noise injected by the attacker, and the number of attacks, which is the number of times the adversary contaminates the observations, sufficient to ensure that the regret of the algorithm is $\Omega(T)$, where $T$ is the total time of interaction between the learner and the environment. Additionally, we establish that this upper bound is order-optimal by providing a lower bound on the number of attacks and the amount of contamination. A typical way to protect a distributed system from a MITM attack is to employ a secure channel between the learner and the environment \citep{asokan2003man,sieka2007establishing,callegati2009man}. These secure channels ensure the CIA triad: confidentiality, integrity, and availability \citep{ghadeer2018cybersecurity,doddapaneni2017secure,goyal2019security}. Various ways to establish these channels have been explored in the literature \citep{asokan2003man,sieka2007establishing, haselsteiner2006security, callegati2009man}. An alternative way to provide security is by auditing, namely performing data verification \citep{karlof2003secure}. The idea of data verification or using trusted information is also embraced in the learning literature, where a small number of observations is verified \citep{charikar2017learning,bishop2020optimal}. Establishing a secure channel, an effective auditing method, or access to trusted information is generally costly \citep{sieka2007establishing}. Hence, it is crucial to design algorithms that achieve security, namely whose performance is unaltered (or minimally altered) in the presence of an attack, while limiting the usage of these additional resources. Motivated by these observations, we consider a \emph{reward verification} model in which the learner can access verified (i.e. uncontaminated) rewards from the environment.
This verified access can be implemented through a secure channel between the learner and the environment, or using auditing. At any round $t$, the learner can decide whether to access the possibly contaminated reward $r^o_t(i_t)=r_t(i_t)+\epsilon_t(i_t)$, or to access the verified reward $r^o_t(i_t)=r_t(i_t)$. Since verification is costly, the learner faces a tradeoff between its performance in terms of regret and the number of times it accesses a verified reward. Moreover, the learner needs to decide when to access a verified reward during the learning process. We design an order-optimal bandit algorithm which strategically plans the verifications, and makes no assumptions on the attacker's strategy. Against this background, we make the following contributions in this paper: \begin{itemize} \item First, in Section~\ref{sec: characterization of attacks} we provide a tight characterization of the total (expected) number of contaminations needed for a successful attack. Specifically, while it is well-known that with $O(\log{T})$ expected number of contaminations, a strong attacker can successfully attack \emph{any} bandit algorithm (see Section~\ref{subsec: upper bound successful attack} for a more detailed discussion), it is not known to date whether this amount of contamination is necessary. We fill this gap by providing a matching lower bound on the amount of contamination (Theorem 1). This result is based on a novel insight into UCB's behaviour, which may be of independent interest. Specifically, we show that for arbitrary (even adversarial) reward sequences, UCB will pull every arm at least $\log(T/2)$ times for sufficiently large $T$. This conservativeness property of UCB guarantees its robustness against any attack strategy with $o(\log T)$ contaminations.
Note that we also extend the state-of-the-art results on the sufficient condition by proposing a simpler yet optimal attack scheme, which is oblivious to the bandit algorithm's actual behaviour (Proposition 1). \item We then consider bandit algorithms with verification as a means of defense against these attacks. In our first set of investigations, we consider the case of having an unlimited number of verifications (Section~\ref{subsec: unlimited verification}). We first show that the minimum number of verifications needed to recover from any strong attack is $\Theta(\log{T})$ (Theorem 2 and Corollary 2). We then propose an Explore-Then-Commit (ETC) based method, called Secure-ETC, that can achieve full recovery from any attack with this optimal number of verifications (Observation 1). While Secure-ETC is simple, it might not stop the exploration phase before exceeding the time horizon. To avoid this situation, we also propose a UCB-like method called Secure-UCB, which also enjoys full recovery under an optimal verification scheme (Theorem 3). \item Finally, we consider the case when the number of verifications is bounded above by a budget $B$. We first show that if the attacker has an unlimited contamination budget, it is impossible to fully recover from the attack if the verification budget $B$ is $o(T)$ (Theorem 4). However, when the attacker also has a finite contamination budget $C$, as typically assumed in the literature, we propose Secure-BARBAR, which achieves $\tilde{O}\bigg(\min\Big\{C, { T\log{({2}/{\beta})}}/{\sqrt{B}}\Big\}\bigg)$ regret against a weaker attacker (who has to place the contamination before seeing the actual pull of the bandit algorithm). It remains an intriguing open question whether there exist efficient but limited verification schemes against stronger attackers.
\end{itemize} \section{Preliminaries and Problem Statement} \label{sec: problem description} \subsection{Poisoning Attacks on Stochastic Bandits} We consider the classical stochastic bandit setting under data poisoning attacks. In this setting, a learner can choose from a set of $K$ actions for $T$ rounds. At each round $t$, the learner chooses an action $i_t\in [K]$, triggers a reward $r_{t}(i_t)\in [0,1]$ and observes a possibly corrupted (and thus altered) reward $r^o_{t}(i_t)\in [0,1]$ corresponding to the chosen action. The reward $r_t(i)$ of action $i$ is sampled independently from a fixed unknown distribution of action $i$. Let $\mu_i$ denote the expected reward of action $i$ and $i^*=\mbox{argmax}_{i\in[K]}\mu_i$.\footnote{For convenience, we assume $i^*$ is unique though all our conclusions hold when there are multiple optimal actions.} Also, let $\Delta(i)=\mu_{i^*}-\mu_i$ denote the difference between the expected reward of actions $i^*$ and $i$. Finally, we assume that $\{\mu_i\}_{i\in[K]}$ are unknown to both the \emph{learner} and the \emph{attacker}. The reward $r^o_{t}(i_t)$ observed by the learner and the true reward $r_t(i_t)$ satisfy the following relation \begin{equation} r^o_{t}(i_t)=r_t(i_t)+\epsilon_t(i_t), \end{equation} where the contamination $\epsilon_t(i_t)$ added by the attacker can be a function of $\{i_n\}_{n=1}^t$ and $\{r_n(i_n)\}_{n=1}^t$. Additionally, since $r^o_{t}(i_t)\in [0,1]$, we have that $\epsilon_t(i_t)\in [-r_t(i_t),1-r_t(i_t)]$. If $\epsilon_t(i_t) \neq 0$, then the round $t$ is said to be \emph{under attack}. Hence, the {\emph{number of attacks}} is $\sum_{t=1}^T\mathbf{1}(\epsilon_t(i_t)\neq 0)$ and the \emph{amount of contamination} is $\sum_{t=1}^T |\epsilon_t(i_t)|$. 
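To fix ideas, the following toy sketch (our own illustrative choices, not from the paper: the round count, noise scale, and attack pattern are all assumptions) implements the feasibility constraint $\epsilon_t(i_t)\in[-r_t(i_t),1-r_t(i_t)]$ and computes the two attack-cost measures just defined:

```python
import numpy as np

# Bookkeeping sketch for the attack model: the contamination eps_t is
# clamped to [-r_t, 1 - r_t] so the observed reward stays in [0, 1]; we
# then track the number of attacks and the amount of contamination as
# defined in the text. All simulation parameters are illustrative.
rng = np.random.default_rng(2)

T = 1000
r = rng.uniform(size=T)                  # true rewards r_t(i_t) in [0, 1]
eps_raw = rng.normal(0.0, 0.3, size=T)   # attacker's intended perturbations
eps_raw[::2] = 0.0                       # attack only every other round

eps = np.clip(eps_raw, -r, 1.0 - r)      # feasibility constraint on eps_t
r_obs = r + eps                          # observed (contaminated) rewards

num_attacks = int(np.count_nonzero(eps))     # sum of 1(eps_t != 0)
contamination = float(np.abs(eps).sum())     # sum of |eps_t|
print(num_attacks, round(contamination, 2))
```

Since $|\epsilon_t(i_t)|\leq 1$ for every round, the amount of contamination can never exceed the number of attacks, which is why a bound on the number of attacks immediately bounds the contamination as well.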
The regret $R^{\mathcal{A}}(T)$ of a learning algorithm $\mathcal{A}$ is the difference between the total expected reward of the best fixed action and the total expected \emph{true} reward collected by the algorithm over $T$ rounds, namely \begin{equation}\label{eq:RegretOfAlg} R^{\mathcal{A}}(T)=T\mu_{i^*}-\mathbb{E}[\sum_{t=1}^T r_{t}(i_t)]. \end{equation} The objective of the learner is to minimize the regret $R^{\mathcal{A}}(T)$. In contrast, the objective of the attacker is to increase the regret to at least $\Omega(T)$. As a convention, we say the attack is ``successful'' only when it leads to $\Omega(T)$ regret \citep{jun2018adversarial,liu2019data}. The first question we address is the following. \noindent {\bf Question 1: } {\it Is there a \emph{tight characterization} of the amount of contamination and the number of attacks leading to a regret of~$\Omega(T)$ in stochastic bandits?} \subsection{Remedy via Limited Reward Verification} It is well known that no stochastic bandit algorithm can be resilient to data poisoning attacks if the attacker has a sufficiently large amount of contamination \citep{liu2019data}. Therefore, to guarantee sub-linear regret when the attacker has an unbounded amount of contamination, it is necessary for the bandit algorithm to exploit additional (and possibly costly) resources. We consider one of the most natural resources --- \emph{verified rewards}. Namely, we assume that at any round $t$, the learner can choose to access the true, uncontaminated reward of the selected action $i_t$; namely, when \emph{round $t$ is verified} we have $r^o_t(i_t)=r_t(i_t)$. This process of accessing true rewards is referred to as \emph{verification}. If the learner performs verification at each round, then it is clear that the regret of any bandit algorithm is unaltered in the presence of the attacker. Unfortunately, this is unrealistic because verification is costly in practice.
Therefore, the learner has to carefully balance the regret and the number of verifications. This naturally leads to the second question that we aim to answer in this paper: \noindent {\bf Question 2: } {\it Is there a \emph{tight characterization} of the number of verifications needed by the learner to guarantee the optimal $O (\log T)$ regret for \emph{any} poisoning attack? } Finally, we consider the case of a limited amount of contamination from the attacker and a limited number of verifications from the bandit algorithm. In the direction of studying this trade-off between contamination and verification, the third question that we aim to answer in this paper is: \noindent {\bf Question 3: } {\it Can we improve upon the $\Omega(C)$ regret lower bound if the attacker's contamination budget is at most $C$, and the number of verifications that can be used by a bandit algorithm is also bounded above by a budget $B$? } In this paper we answer the three questions above. \section{Tight Characterization for the Cost of Poisoning Attack} \label{sec: characterization of attacks} In this section we show that if an attack can successfully induce $\Omega(T)$ regret for any bandit algorithm, both its expected number of attacks and its expected amount of contamination must be $\Omega(\log T)$; together with a matching attack strategy, this yields a tight $\Theta(\log T)$ characterization. In other words, there exists a ``robust'' stochastic bandit algorithm that cannot be successfully attacked by any attacker with only $o(\log T )$ expected amount of contamination, and we show the celebrated UCB algorithm satisfies this property. The key technical challenge in proving the above result is to show the sublinear regret of UCB against an \emph{arbitrary} poisoning attack using at most $o(\log T)$ amount of contamination. In order to prove this strong result, we discover a novel ``conservativeness'' property of the UCB algorithm which may be of independent interest and has already found application in completely different tasks \cite{Shi2021Neurips}.
To complement and also to match the above lower bounds of any successful attack, we design a data poisoning attack that can indeed use $O(\log T)$ expected number of attacks to induce $\Omega(T)$ regret for any order-optimal bandit algorithm, namely any algorithm which has $O(\log T)$ regret in the absence of attacks. Since $r_t^o(i_t)\in [0,1]$, this implies that the attack would require at most $O(\log T)$ expected amount of contamination. \subsection{Lower Bound on the Contaminations} We show that there exists an order-optimal bandit algorithm --- in fact, the classical UCB algorithm --- which cannot be attacked with $o(\log T)$ amount of contamination by \emph{any} poisoning attack strategy. This implies that if an attacking strategy is required to be successful for all order-optimal bandit algorithms, then the amount of contamination needed is at least $\Omega(\log T)$. Since the amount of contamination is bounded above by the number of attacks, this also implies that any attacker requires at least $\Omega(\log T)$ number of attacks to be successful. While adversarial attacks to bandits have been extensively studied recently, to our knowledge such a lower bound on the attack strategy is novel and not known before; previous results have mostly studied the upper bound, i.e., how much contamination is needed for successful attacks \cite{jun2018adversarial,liu2019data}. Here we briefly describe the well-known UCB algorithm \citep{auer2002finite}, and defer its details to Algorithm \ref{alg:UCB} in Appendix \ref{append:AlgUCB}. At each round $t\leq K$, UCB selects an action in a round-robin manner.
At each round $t>K$, the selected action $i_t$ has the maximum \emph{upper confidence bound}, namely \begin{equation}\label{eq:ucb-def} i_t= \mbox{argmax}_{i\in[K]} \bigg( \hat{\mu}_{t-1}(i)+ \sqrt{\frac{8\log t}{N_{t-1}(i)}} \bigg), \end{equation} where $N_t(i)=\sum_{n=1}^t\mathbf{1}(i_n=i)$ is the number of rounds action $i$ is selected until (and including) round $t$, and \begin{equation} \hat{\mu}_{t}(i)=\frac{ \sum_{n=1}^t r^o_{n}(i_n) \mathbf{1}(i_n = i)}{N_{t}(i)}, \end{equation} is the empirical mean of action $i$ until round $t$. Note that the algorithm uses the \emph{observed} rewards. The following Theorem \ref{thm:lowerBoundonUCB} establishes that the UCB algorithm will have sublinear regret $o(T)$ under any poisoning attack if the amount of contamination is $o(\log T)$. The proof of Theorem \ref{thm:lowerBoundonUCB} crucially hinges on the following ``conservativeness'' property about the UCB algorithm, which may be of independent interest.\footnote{Indeed, Lemma \ref{lemma:Min number of pulls} has been applied in \citep{Shi2021Neurips} to the task of incentivized exploration in order to show that a \emph{principal} can get sufficient feedback from every arm even if the \emph{agent} who pulls arms has completely different preferences from the principal.} \begin{lemma}[Conservativeness of UCB] \label{lemma:Min number of pulls} Let $t_0$ be such that ${t_0}/{(\log (t_0))^2} \geq 36K^2$. Then for all $ t \geq t_0$ and any sequence of rewards $\{r^o_n(i)\}_{i\in [K],n\leq t}$ in $[0,1]$ (can even be adversarial), UCB will select every action at least $ \log (t/2)$ times up until round $t$. \end{lemma} Lemma \ref{lemma:Min number of pulls} is inherently due to the design of the UCB algorithm. { Its proof does \emph{not} rely on the rewards being stochastic, and it holds deterministically --- i.e., at any time $t \geq t_0$, UCB will pull each action at least $ \log (t/2)$ times. } This lemma leads to the following theorem. 
\begin{theorem}\label{thm:lowerBoundonUCB} For all $0<\epsilon<1$ and $\alpha>0$ such that $0<\epsilon\alpha\leq 1/2$, and for all $T > \max\{(t_0)^{\frac{1}{1-\alpha \epsilon}}, \exp{(4^\alpha)}\}$, if the total \emph{amount} of contamination by the attacker is $\sum_{n=1}^T |\epsilon_n(i_n)|\leq {(\log T)^{1-\epsilon}}$, then there exists a constant $c_1$ such that the expected regret of the UCB algorithm is \begin{equation} R^{UCB}(T)\leq c_1\big( T^{1-\alpha \epsilon} \max_i\Delta(i)+ \sum_{i \not = i^*}\log T/\Delta(i)\big), \end{equation} which implies the regret $R^{UCB}(T)$ is $o(T)$. \end{theorem} The constant $\alpha$ in Theorem \ref{thm:lowerBoundonUCB} is an adjustable \emph{parameter} that controls the tradeoff between the scale of the time horizon $T$ ($T \geq \max\{(t_0)^{\frac{1}{1-\alpha \epsilon}}, \exp{(4^\alpha)}\}$) and the dominating term $(T^{1-\alpha \epsilon} \max_i\Delta(i))$ in the regret. If $\epsilon$ is small, then a larger $\alpha$ leads to a smaller regret; however, $T$ must be sufficiently large for this bound to take effect. The upper bound on the expected regret in Theorem \ref{thm:lowerBoundonUCB} holds if the total {amount} of contamination is at most $(\log T)^{1-\epsilon}$. Furthermore, if the total number of attacks is at most $(\log T)^{1-\epsilon}$, then using $|\epsilon_t(i_t)|\leq 1$, we have that $\sum_{n=1}^T |\epsilon_n(i_n)|\leq {(\log T)^{1-\epsilon}}$. Hence, Theorem \ref{thm:lowerBoundonUCB} also establishes that if the total number of attacks is $o(\log T)$, then the expected regret of UCB is $o(T)$. Thus, the attacker requires at least $\Omega(\log T)$ amount of contamination (or number of attacks) to ensure its success.
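As a numerical illustration of this robustness (a simulation sketch under assumed parameters: the arm means, horizon, and the five-round attack budget are our own choices, not the paper's), one can run the UCB index of \eqref{eq:ucb-def} on Bernoulli rewards while an attacker zeroes out a constant number of the best arm's rewards:

```python
import math
import numpy as np

# Simulation sketch: the UCB index rule from the text, run on contaminated
# rewards. With only a constant (hence o(log T)) number of corrupted rounds,
# the best arm still dominates. All parameter values are illustrative.
rng = np.random.default_rng(3)

K, T = 3, 20_000
means = np.array([0.9, 0.5, 0.4])  # assumed Bernoulli arm means; arm 0 is optimal
budget = 5                         # attacker may corrupt only 5 rounds

counts = np.zeros(K)
sums = np.zeros(K)
attacks_left = budget

for t in range(1, T + 1):
    if t <= K:
        i = t - 1                  # round-robin initialization
    else:
        ucb = sums / counts + np.sqrt(8.0 * math.log(t) / counts)
        i = int(np.argmax(ucb))
    r = float(rng.uniform() < means[i])      # Bernoulli reward draw
    if i == 0 and attacks_left > 0:          # attacker zeroes the best arm's reward
        r, attacks_left = 0.0, attacks_left - 1
    counts[i] += 1
    sums[i] += r

print(counts.astype(int))
```

In this run the optimal arm collects the vast majority of the pulls despite the contamination, and every arm is pulled far more than $\log(t/2)$ times, consistent with the conservativeness property in Lemma \ref{lemma:Min number of pulls}.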
The lower bound on the amount of contamination in Theorem~\ref{thm:lowerBoundonUCB} cannot be directly compared with the upper bound in Proposition~\ref{thm:constantAttack} since the former assumes that the amount of contamination is bounded above by $o(\log{T})$ \emph{almost surely}, while the latter is a bound on the \emph{expected} amount of contamination. Instead, we consider the following corollary, which can be easily derived from Theorem~\ref{thm:lowerBoundonUCB} using Markov's inequality, and establishes the lower bound on the expected amount of contamination necessary for a successful attack. \begin{corollary} \label{corr:PAC lower bound of attacker for UCB} For all $\epsilon\in (0,1)$ and $T$ such that the conditions in Theorem~\ref{thm:lowerBoundonUCB} are satisfied, if the expected amount of contamination by the attacker is at most $(\log{T})^{1-\epsilon}$, in other words $o(\log T)$, then the regret of UCB is $o(T)$. \end{corollary} \subsection{Matching Upper Bound on Contamination} \label{subsec: upper bound successful attack} We now show that there indeed exist attack strategies that succeed with $O(\log T)$ attacks. Consider an attacker who tries to ensure that a target action $i_A\in [K]$ is selected by the bandit algorithm at least $\Omega (T)$ times in expectation. This implies that the expected regret of the bandit algorithm is $\Omega(T)$ if $i_A\neq i^*$. We consider the following simple attack, which pulls the observed reward down to $0$ whenever the target suboptimal action $i_A$ is not selected. Namely, \begin{equation}\label{eq:attackStrategy1} r_t^o(i_t)=\begin{cases} r_t(i_t)&\mbox{ if } i_t=i_A,\\ 0 &\mbox{ if } i_t\neq i_A. \end{cases} \end{equation} Equivalently, the attacker adds $\epsilon_t(i_t)=-r_t(i_t)\mathbf{1}(i_t\neq i_A)$ to the true reward $r_t(i_t)$.
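The following simulation sketch of the oblivious attack \eqref{eq:attackStrategy1} (the arm means, horizon, and target choice are illustrative assumptions of ours) runs the attack against the UCB index defined earlier and tracks how often the attack actually fires:

```python
import math
import numpy as np

# Simulation sketch of the oblivious attack: every observed reward of a
# non-target action is forced to zero. The suboptimal target action i_A ends
# up pulled on the vast majority of rounds, while the attack itself fires
# only on the (logarithmically many) non-target pulls with nonzero reward.
# All parameter values are illustrative.
rng = np.random.default_rng(4)

K, T = 3, 20_000
means = np.array([0.9, 0.5, 0.4])  # assumed Bernoulli arm means
i_A = 1                            # suboptimal target action

counts = np.zeros(K)
sums = np.zeros(K)
num_attacks = 0

for t in range(1, T + 1):
    if t <= K:
        i = t - 1                  # round-robin initialization
    else:
        ucb = sums / counts + np.sqrt(8.0 * math.log(t) / counts)
        i = int(np.argmax(ucb))
    r = float(rng.uniform() < means[i])
    if i != i_A and r != 0.0:      # attack: observed reward forced to 0
        r, num_attacks = 0.0, num_attacks + 1
    counts[i] += 1
    sums[i] += r

print(int(counts[i_A]), num_attacks)
```

The target arm dominates the pull counts even though it is suboptimal, while the number of rounds on which the attacker intervenes stays a small fraction of $T$, in line with Proposition \ref{thm:constantAttack}.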
Unlike the attacks in \cite{jun2018adversarial,liu2019data}, the attack in \eqref{eq:attackStrategy1} is oblivious to rewards, since it overwrites every reward observation of a non-target action with zero. The following proposition establishes an upper bound on the expected number of attacks sufficient to be successful. \begin{proposition}\label{thm:constantAttack} For any stochastic bandit algorithm $\mathcal{A}$ with expected regret in the \emph{absence} of attack given by \begin{equation}\label{eq:algo1} R^{\mathcal{A}}(T)=O\bigg(\sum_{i\neq i^*}\frac{\log^\alpha(T)}{(\Delta(i))^\beta}\bigg), \end{equation} where $\alpha\geq 1$ and $\beta\geq 1$; and for any target action $i_A\in [K]$; if an attacker follows strategy \eqref{eq:attackStrategy1}, then it will use an expected number of attacks \begin{equation}\label{eq:numConntam} \mathbb{E}[\sum_{t=1}^T \mathbf{1}(\epsilon_t(i_t)\neq 0)]=O\bigg({(K-1)\log^{\alpha}(T)}/{\mu_{i_A}^{\beta+1}}\bigg), \end{equation} an expected amount of contamination \begin{equation} \mathbb{E}[\sum_{t=1}^T|\epsilon_t(i_t)|]=O\bigg({(K-1)\log^{\alpha}(T)}/{\mu_{i_A}^{\beta+1}}\bigg), \end{equation} and it will force $\mathcal{A}$ to select the action $i_A$ at least $\Omega(T)$ times in expectation, namely $ \mathbb{E}[\sum_{t=1}^T \mathbf{1}(i_t= i_A)]=\Omega(T)$. \end{proposition} Proposition \ref{thm:constantAttack} provides a relationship between the regret of the algorithm without attack and the number of attacks (or amount of contamination) sufficient to ensure that the target action $i_A$ is selected $\Omega(T)$ times, which also implies $R^{\mathcal{A}}(T)=\Omega(T)$ if $i_A\neq i^*$. Another important consequence of the proposition is that for an order-optimal algorithm such as UCB, we have that $\alpha=1$ and $\beta=1$ in \eqref{eq:algo1}. Thus, the expected number of attacks and the expected amount of contamination are $O(\log T)$.
A small criticism of the attack strategy \eqref{eq:attackStrategy1} might be that it pulls down the reward ``too much''. This turns out to be fixable. In Appendix \ref{append:gap-attack}, we prove that a different type of attack, which pulls the reward of any action $i \not = i_A$ down by an \emph{estimated} gap $\Delta = 2 \max \{ \mu_{i} - \mu_{i_A}, 0 \} $ (similar to the ACE algorithm in \cite{ma2018data}), will also succeed. However, the number of attacks will now be inversely proportional to $\min_{i\neq i_A}|\mu_i-\mu_{i_A}|^{\beta+1}$, rather than $ \mu_{i_A}^{\beta+1}$ as in Proposition \ref{thm:constantAttack}. \section{Verification based Algorithms} \label{sec: verification} In this section we explore the idea of using verification to rescue our bandit model from reward contamination. In particular, we first investigate the case when the amount of verification is not limited, and therefore our main goal is to minimize the number of verifications (along with restoring the order-optimal logarithmic regret bound). We then discuss the case when the number of verifications is bounded above by a budget $B$ (typically $o(T)$). \subsection{Saving Bandits with Unlimited Verifications} \label{subsec: unlimited verification} In this setting we assume that the number of verifications is not bounded above, and therefore our goal is to minimize the number of verifications required to restore the logarithmic regret bound. To do so, we first show that any successful verification based algorithm (i.e., one that can restore the logarithmic regret) requires $\Omega(\log{T})$ verifications. In particular, the following theorem establishes that for every consistent learning algorithm\footnote{A learning algorithm is consistent \citep{kaufmann2016complexity} if for all $t$, the action $i_{t+1}$ (a random variable) is measurable given the history $\mathcal{F}_{t}=\sigma (i_1,r^o_1(i_1), i_2,r^o_2(i_2) \ldots, i_{t},r^o_{t}(i_{t}))$.
} $\mathcal{A}$ and sufficiently large $T$, if the algorithm $\mathcal{A}$ uses $O((\log T)^{1-\alpha})$ verifications with $0<\alpha < 1$, then the expected regret is $\Omega((\log T)^{\beta})$ with $\beta>1$ in the MAB setting with verification. \begin{theorem}\label{thm:lowBoundVerification} Let $KL(i_1,i_2)$ denote the KL divergence between the reward distributions of actions $i_1$ and $i_2$. For all $0<\alpha<1$, $\beta>1$ and every consistent learning algorithm $\mathcal{A}$, there exists a time $t^*$ and an attacking strategy such that for all $T\geq 2t^*$ satisfying $(\log T)^{1-\alpha}+\beta\log (4\log T)\leq \log T,$ if the total number of verifications $N^s_T$ until round $T$ is \begin{equation}\label{eq:boundOnVeri} N^s_T<(\log T)^{1-\alpha} /\min_{i_1,i_2\in [K]}KL(i_1,i_2), \end{equation} then the expected regret of $\mathcal{A}$ is at least $\Omega((\log T)^\beta)$. \end{theorem} Theorem \ref{thm:lowBoundVerification} establishes that $\Omega(\log T)$ verifications are necessary to obtain $O(\log T)$ regret. Here, we assume that the number of verifications is bounded above \emph{almost surely}. Nevertheless, if instead the \emph{expected} number of verifications is bounded, we obtain the following similar bound. \begin{corollary} \label{corr:verification expected lower bound} For all $0<\alpha<1$, $\beta>1$, every consistent learning algorithm $\mathcal{A}$ and sufficiently large $T$ such that the requirements in Theorem \ref{thm:lowBoundVerification} are satisfied, there exists an attacking strategy such that if the \emph{expected number of verifications} $N^s_T$ until round $T$ satisfies $\mathbb{E}[N^s_T]<(\log T)^{1-\alpha} /\min_{i_1,i_2\in [K]}KL(i_1,i_2)$, then the expected regret of $\mathcal{A}$ is at least $\Omega((\log T)^\beta)$. \end{corollary} We now move to design an algorithm that matches this optimal number of verifications.
Our algorithm is based on the following simple idea: contamination is only effective when the contaminated reward is used for estimating the mean reward value of the arms, and therefore for influencing the learnt order of the arms. As such, any algorithm that does not need these estimates most of the time would not suffer much from the contamination, provided the remaining pulls (those whose observed rewards are used for mean estimation) are properly secured via verification. This idea naturally leads us to the explore-then-commit (ETC) type of bandit algorithms~\citep{garivier2016explore}, where in the first phase, the algorithm aims to learn the optimal arm by solving a best arm identification (BAI) problem (exploration phase), and in the second (commit) phase, it simply pulls the learnt best arm repeatedly~\citep{kaufmann2016complexity}. It is clear that if the first phase is fully secured (i.e., every single pull within that phase is verified), then we can learn the best arm with high probability, and thus can ignore the contaminations within the second phase. The choice of the BAI algorithm for the exploration phase is important though. In particular, BAI with a fixed pulling budget would not work here, as it cannot guarantee logarithmic regret bounds~\citep{garivier2016explore}. On the other hand, BAI with fixed confidence will suffice. In particular, we state the following: \begin{observation} \label{verification upper bound for Secure-ETC} Any ETC algorithm, where the exploration phase uses BAI with fixed confidence $\delta = \frac{1}{T}$ and every single pull in that phase is verified, enjoys an expected regret bound of $O\Big(\sum_{i \neq i^*}{\log{T}}/{\Delta_i}\Big)$. In addition, the expected number of verifications is bounded above by $O\Big(\sum_{i \neq i^*}{\log{T}}/{\Delta^2_i}\Big)$. \end{observation} We refer to the ETC algorithm enhanced with verification described in the above observation as Secure-ETC.
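The following Python sketch illustrates the Secure-ETC template. Successive elimination is our (assumed) choice of fixed-confidence BAI subroutine, and the arm means, horizon, and seed are illustrative only; every exploration pull is verified, i.e., the learner reads the true reward, so the attacker's contamination is irrelevant in both phases.

```python
import math
import random

random.seed(1)

K, T = 3, 20000
means = [0.9, 0.5, 0.3]           # arm 0 is the best arm
delta = 1.0 / T                   # BAI confidence, as in the observation

def true_reward(i):
    return 1.0 if random.random() < means[i] else 0.0

# Exploration: successive elimination in which EVERY pull is verified,
# so the empirical means below are built from uncontaminated rewards.
alive = list(range(K))
counts = [0] * K
sums = [0.0] * K
verifications = 0
while len(alive) > 1:
    for i in alive:
        sums[i] += true_reward(i)  # verified pull: attacker cannot interfere
        counts[i] += 1
        verifications += 1
    n = counts[alive[0]]           # all surviving arms share the same count
    radius = math.sqrt(math.log(4.0 * K * n * n / delta) / (2.0 * n))
    best_mean = max(sums[i] / counts[i] for i in alive)
    alive = [i for i in alive
             if sums[i] / counts[i] >= best_mean - 2.0 * radius]

committed = alive[0]
# Commit phase: pull `committed` for the remaining rounds; observed rewards
# (which the attacker may contaminate at will) are simply ignored.
```

The number of verified pulls scales with the squared inverse gaps, far below the horizon $T$, matching the bound in Observation~\ref{verification upper bound for Secure-ETC}.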
The proof of Observation~\ref{verification upper bound for Secure-ETC} is simple and hence omitted from the main paper. Note that this result, together with Theorem \ref{thm:lowBoundVerification}, shows that Secure-ETC uses an order-optimal number of verifications and enjoys an order-optimal expected regret, irrespective of the attacker's strategy. The main drawback of the Secure-ETC algorithm is that there is a positive probability that the algorithm may keep exploring until the end time $T$. While this small-probability event turns out not to be an issue for the expected regret, one might prefer an algorithm that properly mixes exploration and exploitation. For such interested readers, we propose another algorithm, named Secure-UCB (for Secure Upper Confidence Bound), which integrates verification into the classical UCB algorithm, and also enjoys similar order-optimal regret bounds and an order-optimal expected number of verifications. Due to space limitations, we defer both the detailed description of Secure-UCB and its theoretical analysis to the appendix (see Appendix~\ref{appendix:verify with Secure-UCB} for more details). However, for the sake of completeness, we state the following theorem. \begin{theorem} \label{thm:SUCB_simple} For all $T$ such that {$T\geq c_2\log T/\min_{i\neq i^*}\Delta^2(i)$}, Secure-UCB performs $O(\log T)$ verifications in expectation, and the expected regret of the algorithm is $O(\log T)$ irrespective of the attacker's strategy. Namely, \begin{equation} \begin{split} \sum_{i\in [K]}\mathbb{E}[N^s_T(i)]&\leq c_3\big(\sum_{i\neq i^*}{\log T}/{\Delta^2(i)}\big), \end{split} \end{equation} \begin{equation} \begin{split} R(T)&\leq c_4\big(\sum_{i\neq i^*}{\log T}/{\Delta(i)}\big), \end{split} \end{equation} where $N^s_T(i)$ is the total number of verifications for arm $i$ until round $T$ and $c_2$, $c_3$ and $c_4$ are numerical constants (concrete values can be found in the appendix).
\end{theorem} It is worth noting that due to the sequential nature of UCB, designing a UCB-like algorithm with verification is far from trivial, and its technical analysis is therefore significantly more involved. \subsection{Saving Bandits with Limited Verifications} \label{subsec:limited verification} While unlimited verification can completely restore the original regret bounds, we show next that this is unfortunately not the case if the number of verifications is bounded. In particular, we state the following negative result. \begin{theorem}\label{thm:LowerBoundFixed Budget} Consider an attacker with unlimited contamination budget. For any $T$, $K\geq 2$ and $N^s_T\geq K$, if the total number of verifications performed until round $T$ is at most $N^s_T$, then there exists a distribution over the assignment of rewards such that the expected \emph{gap-independent} regret of any learning algorithm is at least \begin{equation} R(T)\geq cT\sqrt{K/{N^s_T}}, \end{equation} where $c$ is a numerical constant. In addition, for any $T$, $K\geq 2$, and $N^s_T\geq K$, there exists a distribution over the assignment of rewards such that the expected cost, defined as the sum of the expected regret and the number of verifications, of any learning algorithm is at least $\Omega(T^{2/3})$. \end{theorem} We remark that the goal of Theorem \ref{thm:LowerBoundFixed Budget} is to demonstrate that, unlike the unlimited verification case in subsection \ref{subsec: unlimited verification}, here it is impossible to fully recover from the attack --- in the sense of achieving order optimal regret bounds as in the original bandit setting without attacks --- if $B \in o(T)$, and this motivates our following study (Theorem \ref{thm:Secure-BARBAR regret bound}) of developing regret bounds that scale with the budget $B$. For this purpose, it suffices to have a gap-independent lower bound as in Theorem \ref{thm:LowerBoundFixed Budget}.
Nevertheless, we acknowledge that an interesting research question is to see whether one can achieve a gap-dependent lower bound. This is out of the scope of our current paper and is an independent open question. \begin{algorithm}[t] \begin{algorithmic}[1] \STATE \textbf{Input}: confidences $\beta, \delta \in (0,1)$, time horizon $T$, verification budget $B$ \STATE Set $n^B_i = \Big\lfloor {B}/{K} \Big \rfloor$, $T_0 = B$, $\Delta^0_i = 1$ for all $i \in [K]$, and $\lambda = 1024\ln(\frac{8K}{\delta}\log_2{T})$ \FOR{ epochs $m = 1,2,\dots$} \STATE Set $n_i^m = \lambda (\Delta_i^{m-1})^{-2}$ for all $i \in [K]$, $N_m = \sum_{i=1}^{K}n^m_i$, and $T_m = T_{m-1} + N_m$ \FOR{$t= T_{m-1}$ \TO $T_m$ } \STATE choose arm $i$ with probability $n^m_i/N_m$ and pull it \STATE if $n^B_i > 0$ then \emph{verify the pull} (i.e., \emph{observe the true reward}), and reduce $n^B_i$ by $1$ \ENDFOR \STATE Let $S^m_i$ be the total \emph{observed} rewards from pulls of arm $i$ within epoch $m$ (including both verified and unverified ones) \STATE \textbf{If} $\;$ \emph{all} the pulls of arm $i$ were verified in epoch $m$ \textbf{then} $r^m_i = S^m_i/n^m_i$ \STATE \textbf{Else if} $S^m_i/n^m_i \geq \mu_i^B$ \textbf{then} $r^m_i = \min \Big\{S^m_i/n^m_i, \mu^B_i + \frac{\Delta^{m-1}_i}{16} + \sqrt{\frac{\ln{2/\beta}}{2n_B}}\Big\}$ \STATE \textbf{Else} $r^m_i = \max \Big\{S^m_i/n^m_i, \mu^B_i - \frac{\Delta^{m-1}_i}{16} - \sqrt{\frac{\ln{2/\beta}}{2n_B}}\Big\}$ \STATE Set $r^{m}_{*} = \max_{i}\{r_i^{m} - \Delta_i^{m-1}/16\}$, $\Delta^m_i = \max\{2^{-m}, r^{m}_{*} - r_i^{m}\}$ \ENDFOR \caption{Secure-BARBAR} \label{alg:Secure-BARBAR} \end{algorithmic} \end{algorithm} Now, this impossibility result relies on the assumption that the attacker has an unlimited contamination budget (or amount of contamination). 
One might ask what would happen if the attacker is also limited by a contamination budget $C$, as typically assumed in the relevant literature~\citep{gupta2019better,bogunovic2020stochastic,lykouris2018stochastic}. We now investigate this setting in more detail, where the contamination budget is at most $C$. To start with, we assume that the attacker can only place the contamination before seeing the actual actions of the bandit algorithm. We refer to this type of attacker as a \emph{weak} attacker, as opposed to the ones we have been dealing with so far in this paper (see Section 5 for a comprehensive comparison of different attacker models). We describe an algorithm that addresses this case in a provably efficient way. In particular, we introduce Secure-BARBAR (Algorithm~\ref{alg:Secure-BARBAR}), which is built on top of the BARBAR algorithm proposed by~\cite{gupta2019better}. The key differences are: (i) Secure-BARBAR sets up a verification budget $n^B_i$ for each arm $i$ and verifies that arm until this budget is depleted (lines $6-7$); and (ii) it uses the verified rewards to adjust the reward estimates (lines $9-13$). By doing so, we achieve the following result: \begin{theorem} \label{thm:Secure-BARBAR regret bound} With probability at least $1-\delta - \beta$, the regret of Secure-BARBAR against any weak attacker with contamination budget $C$ is bounded by \begin{equation} \begin{split} &O\bigg(K\min\Big\{C, \frac{T \log{\frac{2}{\beta}}\ln(\frac{8K}{\delta}\log_2{T})}{\sqrt{B/K}}\Big\} \\ &\qquad \qquad + \sum_{i \neq i^*}\frac{\log{T}}{\Delta_i}\log{\Big(\frac{K}{\delta}\log{T}\Big)}\bigg). \end{split} \end{equation} \end{theorem} The regret bound is $\tilde{O}\bigg(\min\Big\{C, { T\log{({2}/{\beta})}}/{\sqrt{B}}\Big\}\bigg)$, which breaks the known $\Omega(C)$ lower bound of the non-verified setting if $C$ is large \cite{gupta2019better}.
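The clipping step of Secure-BARBAR (the \textbf{If}/\textbf{Else} steps of Algorithm~\ref{alg:Secure-BARBAR}) can be isolated as a small function. The Python sketch below is our paraphrase with hypothetical variable names and numbers, not part of the algorithm's formal specification; it shows how an observed epoch mean is pulled back toward the verified mean $\mu_i^B$.

```python
import math

def clipped_estimate(observed_mean, verified_mean, all_verified,
                     delta_prev, n_b, beta):
    """Epoch estimate r_i^m (sketch of the Secure-BARBAR clipping step).

    observed_mean : S_i^m / n_i^m, mean of (possibly contaminated) rewards
    verified_mean : mu_i^B, mean of the n_b verified pulls of arm i
    delta_prev    : Delta_i^{m-1}, previous epoch's gap estimate
    beta          : confidence parameter of the concentration term
    """
    if all_verified:                     # every pull verified: trust the mean
        return observed_mean
    slack = delta_prev / 16.0 + math.sqrt(math.log(2.0 / beta) / (2.0 * n_b))
    if observed_mean >= verified_mean:   # clip an inflated mean from above
        return min(observed_mean, verified_mean + slack)
    return max(observed_mean, verified_mean - slack)  # clip from below

# An attacker inflating arm i's observed mean to 1.0 is clipped back to
# within `slack` of the verified mean 0.4 (all numbers arbitrary).
est = clipped_estimate(1.0, 0.4, False, delta_prev=0.5, n_b=50, beta=0.1)
```

Since the attacker cannot touch the verified pulls, the amount by which it can bias any epoch estimate is capped by `slack`, which is the mechanism behind the $\min\{C, \cdot\}$ term in Theorem~\ref{thm:Secure-BARBAR regret bound}.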
\paragraph{A note on efficient verification schemes against strong attackers.} In the case of strong attackers, with a careful combination of the idea described in Secure-BARBAR of incorporating the verified pulls into the estimate of the average reward at each round (lines $9-12$ in Algorithm~\ref{alg:Secure-BARBAR}) and the techniques used in the proof of Theorem 1 from~\cite{bogunovic2020stochastic}\footnote{The key step is to replace Lemma 1 from~\cite{bogunovic2020stochastic} with a verification aware version, using similar ideas applied in the proof of Theorem~\ref{thm:Secure-BARBAR regret bound}.}, we can prove the following result: with probability at least $1-\delta - \beta$, we can achieve a regret upper bound of $\tilde{O}\bigg(\min\big\{C, { T\log{({2}/{\beta})}}/{\sqrt{B}}\big\}\log{T}\bigg)$. This can be done by modifying the Robust Phase Elimination (RPE) algorithm described in~\cite{bogunovic2020stochastic} with the verification and estimation steps from Algorithm~\ref{alg:Secure-BARBAR}. The drawback of this approach is that it only works when the contamination budget $C$ is known in advance. Although~\cite{bogunovic2020stochastic} have also provided a method against strong attackers with unknown contamination budget $C$, their method can only achieve a regret of $\tilde{O}(C^2)$ under some restrictive constraints (e.g., $C$ has to be sufficiently small). In addition, it is not clear how to incorporate the ideas introduced for Secure-BARBAR into that approach in an efficient way (i.e., so as to significantly improve on the $\tilde{O}(C^2)$ regret bound). Given this, it remains future work to derive an efficient verification method against strong attackers with unknown contamination budget $C$ that yields regret bounds better than $\tilde{O}(C^2)$.
\section{Comparison of Attacker Models} \label{sec:comparison of attacker models} This section provides a more detailed comparison between the different attacker models from the (robust bandits) literature and their corresponding performance guarantees. In particular, at each round $t$, a \emph{weak attacker} has to place the contamination \emph{before} the actual action is chosen. On the other hand, a \emph{strong attacker} can observe both the chosen action and the corresponding reward before placing the contamination. From the perspective of the contamination budget (or the amount of contamination), it can either be bounded above surely by a threshold, or that bound may only hold in expectation. We refer to the former as a \emph{deterministic budget}, while we call the latter an \emph{expected budget}. To date, the following three attacker models have been studied: (i) weak attacker with deterministic budget; (ii) strong attacker with deterministic budget; and (iii) strong attacker with expected budget. \paragraph{Weak attacker with deterministic budget.} For this attacker model, \cite{gupta2019better} have proposed a robust bandit algorithm (called BARBAR) that provably achieves $O(KC + (\log{T})^2)$ regret against a weak attacker with (unknown) deterministic budget $C$. They have also proved a matching regret lower bound of $\Omega(C)$. These results imply that in order to successfully attack BARBAR (i.e., to force $\Omega(T)$ regret), a weak attacker with deterministic budget would need a contamination budget of $\Omega(T)$. \paragraph{Strong attacker with deterministic budget.} \cite{bogunovic2020stochastic} have shown that there is a phased elimination based bandit algorithm that achieves $O(\sqrt{T} + C\log{T})$ regret if $C$ is known to the algorithm, and $O(\sqrt{T} + C\log{T} + C^2)$ if $C$ is unknown.
Note that by moving from the weak attacker model to the strong one, we suffer an extra loss in terms of achievable regret (i.e., from $O(C)$ to $O(C^2)$) in the case of unknown $C$. While the authors have also proved a matching regret lower bound of $\Omega(C)$ for the known budget case, they have not provided any similar results for the case of unknown budget. Nevertheless, their results show that in order to successfully attack their algorithm, an attacker of this type would need a contamination budget of $\Omega(T)$ for the case of known contamination budget, and $\Omega(\sqrt{T})$ if that budget is unknown. \paragraph{Strong attacker with expected budget.} Our Proposition~\ref{thm:constantAttack} shows that this attacker can successfully attack any order-optimal algorithm with an $O(\log{T})$ expected contamination budget (note that~\cite{liu2019data} have also proved a similar, but somewhat weaker result). We have also provided a matching lower bound on the necessary expected contamination budget against UCB. {It is worth noting that if the rewards are unbounded, then the attacker may use an even smaller amount of contamination (e.g., $O(\sqrt{\log{T}})$) to achieve a successful attack~\citep{zuo2020near}.} \paragraph{{Saving} bandit algorithms with verification.} The above mentioned results also indicate that if an attacker uses a contamination budget $C$ (either deterministic or expected), the regret that any (robust) algorithm would suffer is $\Omega(C)$. A simple implication of this is that if an attacker has a budget of $\Theta(T)$ (e.g., it can contaminate all the rewards), then no algorithm can maintain a sub-linear regret if it can only rely on the observed rewards. Secure-ETC, Secure-UCB, and Secure-BARBAR break this barrier of $\Omega(C)$ regret with verification.
In particular, the former two still enjoy an order-optimal regret of $O(\log{T})$ against any attacker (even one with a $\Theta(T)$ contamination budget) while only using $O(\log{T})$ verifications. The latter, when playing against a weak attacker, still suffers an increase in regret as $C$ grows, but this increase is not linear in $C$ as in the non-verified setting. \section{Conclusions} \label{sec: conclusions} In this paper we introduced a reward verification model for bandits to counteract data contamination attacks. Our contributions can be grouped as follows. We first revisited the analysis of the strong attacker and proved the first lower bound of $\Omega(\log{T})$ on the expected number of contaminations needed for a successful attack. This lower bound is shown to be tight via our oblivious attack scheme, whose contamination matches the lower bound. We then moved to verification based approaches with unlimited verification, where we first provided two algorithms, Secure-ETC and Secure-UCB, which can recover from any attack with a logarithmic number of verifications. We also provided a matching lower bound on the number of verifications. For the case of limited verifications, we first showed that full recovery is impossible if the attacker has an unlimited contamination budget, unless the verification budget $B = \Theta(T)$. In case the attacker is also limited by a budget $C$, we proposed Secure-BARBAR, which breaks the $\Omega(C)$ regret barrier when used against a weak attacker. For future research, when facing a strong attacker with contamination budget $C$, we briefly discussed how a similar idea from Secure-BARBAR with limited verification can be used to achieve a regret bound better than $O(C\log{T})$. However, this idea requires that $C$ is known in advance.
It is an open question whether for the case of unknown $C$ we can obtain a similar regret bound that improves on the one achievable in the non-verified case. Second, since bounding the contamination in expectation and almost surely leads to different results (see Section \ref{sec:comparison of attacker models}), it would be interesting to study the setting where the number of verifications is bounded almost surely. Third, another interesting extension is a \emph{partial feedback verification} model, where the learner can only request feedback about whether the observed reward is corrupted or not, but cannot see the true reward. Finally, extending our study to RL is an intriguing future direction. \bibliographystyle{unsrtnat}
\section{Introduction} \par The conjunction of density functional theory (DFT) \cite{Kohn1965} and the non--equilibrium Green's functions (NEGF) method \cite{DiVentra2008, Xue2001} has afforded a tool of unprecedented utility for the computational description of electrical transport in nanoscale devices. Fruitful applications have extended from metallic and semiconducting constrictions to molecular junctions, with transmission regions spanning a broad swath of chemical parameter space. While inherently a single--particle approach, many--body corrections may be phenomenologically included through the DFT+$U$ method or through direct modification of self--energy terms appearing in the NEGF expansion \cite{Timoshevskii2014}. The scope of these extensions suggests a universal framework for the atomic--resolution simulation of transport in technologically--relevant materials, which is necessary for the engineering of functional nanoelectronic components. The flexibility and simplicity of this method nonetheless comes at a price, as the calculated conductances are generally one to two orders of magnitude greater than those observed experimentally \cite{Lindsay2007}. Furthermore, NEGF+DFT calculations employ static, ground--state electronic structure calculations by construction, and hence there is no possibility of calculating time--dependent response properties within this framework. \par In a first--order attempt to circumvent this limitation, the NEGF method has been expanded to include time--dependent density functional theory (TDDFT) \cite{Runge1984, Stefanucci2004a, Kurth2004}. While this method is sufficient for model Hamiltonians, self--consistent calculations are difficult to execute \cite{Ke2010}, and self--consistency is requisite for the study of real materials. One appealing alternative to the NEGF+TDDFT method entails direct propagation of the electronic wavefunction with real--time TDDFT (RT--TDDFT) \cite{Varga2011}.
This scheme likewise ameliorates the cost of NEGF+TDDFT calculations as the numerically expensive determination of Green's functions in the lead regions is no longer necessary \cite{Driscoll2008}. Nonetheless, a known difficulty associated with RT--TDDFT propagation is the treatment of boundary conditions at the edge of the simulation cell, which must correspond to those of an open quantum system. Recent investigations with both NEGF+TDDFT \cite{Driscoll2008, Varga2009, Zhang2013} and RT--TDDFT propagation \cite{Varga2011} have employed a complex absorbing potential in this region to attenuate the wavefunction and avoid spurious reflections. While previously proposed for transport problems in model systems \cite{Ferry1999, Zhang2007}, these investigations comprise the first application to a realistic case. The complex potential is itself a non--Hermitian extension to the Hamiltonian that diminishes the net electron density in the system as a function of simulation time. Reducing the number of electrons within the leads will alter the contribution from Hartree and exchange--correlation terms in the DFT Hamiltonian, and lead to a jamming process in which transport no longer occurs as the system becomes ionized. Thus, the absorbing potentials only comprise half of the framework required for a comprehensive treatment of transport, as generating potentials for incoming charge carriers are also required. \par The addition of a complex potential $\hat{V}_\text{cplx}$ to a quantum system has an unusual effect on the time evolution of a state vector. Consider a non--Hermitian Hamiltonian $\hat{H} = \hat{H}_0 + \hat{V}_\text{cplx}$ in which $\hat{V}_\text{cplx}$ may be arbitrarily applied, and let $\hat{H}_0$ be a Hermitian Hamiltonian which is applicable at all times. Furthermore, let $\ket{\psi(x,t = 0)}$ be an initial eigenstate of $\hat{H}_0$ when $\hat{V}_\text{cplx}$ is zero. 
As $\ket{\psi(x,t)}$ propagates, assume that a purely imaginary $\hat{V}_\text{cplx} \approx i\Gamma \neq 0$ is turned on starting at time $t_1$ and turned off at time $t_2 > t_1$. In the course of this process, the state vector evolution is afforded by the operator $\hat{U} (t', t) = \exp[-i\hat{H}(t' - t)/\hbar]$ so that $\ket{\psi(x,t_2)} = \hat{U}(t_2, t_1) \ket{\psi(x,t_1)}$, or explicitly \begin{eqnarray} \ket{\psi(x,t_2)} &=& \exp[-i(\hat{H}_0 + i\Gamma) (t_2 - t_1) / \hbar] \ket{\psi(x,t_1)}\\ &=& \exp[-i\hat{H}_0 (t_2 - t_1) / \hbar] \exp[\Gamma (t_2 - t_1) / \hbar] \ket{\psi(x,t_1)}. \end{eqnarray} \noindent The first term in the product is simply the time evolution operator for the system under the action of $\hat{H}_0$ alone, whereas the second term characterizes the effect of the complex potential. Taking the inner product $\braket{\psi(x,t_2) \vert \psi(x,t_2)} = \exp[2\Gamma(t_2 - t_1) / \hbar] \braket{\psi(x,t_1) \vert \psi(x,t_1)}$, it is clear that the norm of the particle is rescaled by a factor of $\exp[2\Gamma(t_2 - t_1) / \hbar]$. Furthermore, just as the Hamiltonian is no longer Hermitian in the presence of $\hat{V}_\text{cplx}$, the evolution operator $\hat{U}(t',t)$ ceases to be unitary. \par If the complex potential strength $\Gamma < 0$, the norm of the state vector is decreased and the effective particle number in the system is diminished. This behavior is key when using the complex potential to mimic open boundary conditions that accommodate an incoming or outgoing particle flux \cite{Varga2007, Paul2007, Berggren2010, Wibking2012, Fortanier2014, Wahlstrand2014, Zhu2014} as well as for the treatment of resonances in wavepacket propagation \cite{Moiseyev1998, Muga2004, Moiseyev2011}, and in atomic \cite{Sahoo2000} and nuclear systems \cite{Masui2002}.
Conversely, if the sign of the potential is flipped so that $\Gamma > 0$, the potential then generates norm for a given state, which may be conceptualized as the addition of particles to the system. The presence of such ``source'' and ``sink'' terms is a general property of simple non--Hermitian extensions \cite{Ferry1999, Berggren2010, Wahlstrand2014}. One particularly useful category of these theories is the $\mathcal{PT}$--symmetric Hamiltonians, in which the condition of Hermiticity is relaxed in favor of symmetry under conjugation by the product of the parity $\hat{\mathcal{P}}$ and time reversal $\hat{\mathcal{T}}$ operators \cite{Bender1998, Bender2002, Bender2007}. The construction of $\mathcal{PT}$--symmetric theories extends the spectrum of effective Hamiltonians which may be employed to describe open quantum systems, and has led to several experimentally verified predictions in optics \cite{Guo2009,Ruter2010, Sun2014}.
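The rescaling $\exp[2\Gamma(t_2 - t_1)/\hbar]$ derived above is easily checked numerically. The following Python sketch (a toy scalar model of our own; $E$, $\Gamma$, the times, and the amplitude are arbitrary, with $\hbar = 1$) propagates a single eigenstate amplitude under $\hat{H}_0 + i\Gamma$ and compares the resulting norm to the analytical factor.

```python
import cmath
import math

hbar = 1.0
E, gamma = 2.0, -0.3          # eigenenergy and (negative => absorbing) Gamma
t1, t2 = 0.0, 1.5
psi_t1 = 0.6 + 0.8j           # amplitude of the eigenstate at time t1

# Evolution under H = H0 + i*Gamma acting on an eigenstate of H0:
psi_t2 = cmath.exp(-1j * (E + 1j * gamma) * (t2 - t1) / hbar) * psi_t1

norm_t1 = abs(psi_t1) ** 2
norm_t2 = abs(psi_t2) ** 2
# The norm should be rescaled by exactly exp(2*Gamma*(t2 - t1)/hbar):
factor = norm_t2 / norm_t1
```

Flipping the sign of $\Gamma$ to a positive value turns the same line into a source, with a rescaling factor greater than one.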
\section{Analytical Considerations} \subsection{$\mathcal{PT}$--Symmetric Quantum Mechanics} \par In a $\mathcal{PT}$--symmetric quantum system, the requirement that the Hamiltonian be Hermitian is relaxed to a more general conjugation condition \cite{Bender2002, Bender2007}. Specifically, a new operator $\hat{\mathcal{P}}\hat{\mathcal{T}}$ is introduced as a product of the parity $\hat{\mathcal{P}}: \{\hat{p}, \hat{x}, i\mathbb{Id}\} \mapsto \{-\hat{p}, -\hat{x}, i\mathbb{Id}\}$ and time--reversal $\hat{\mathcal{T}}: \{\hat{p}, \hat{x}, i\mathbb{Id}\} \mapsto \{-\hat{p}, \hat{x}, -i\mathbb{Id}\}$ operations, such that the composite operator $\hat{\mathcal{P}}\hat{\mathcal{T}}$ and the new Hamiltonian $\hat{H}_{PT}$ share a common set of eigenfunctions. Invariance under Hermitian conjugation is replaced with the commutator $[\hat{\mathcal{P}}\hat{\mathcal{T}},\hat{H}_{PT}] = 0$. Note that $\hat{H}_{PT}$ need not commute with the action of $\hat{\mathcal{P}}$ and $\hat{\mathcal{T}}$ alone, but only with the operator product. When these conditions are collectively satisfied, the system is said to possess exact or unbroken $\mathcal{PT}$--symmetry \cite{Bender1998,Bender2002}. If the potential is only a function of the particle position, the Hamiltonian $\hat{H}_{PT}$ may be written in the elementary form $\hat{H}_{PT} = \hat{p}^2 / 2m + \hat{V}_{PT}(\hat{x})$, whereupon $\mathcal{PT}$--symmetry requires that $\hat{V}_{PT}(\hat{x}) = \hat{V}_{PT}^* (-\hat{x})$ with the asterisk denoting complex conjugation. Accordingly, the potential may be expanded as $\hat{V}_{PT}(\hat{x}) = \text{Re}[\hat{V}_{PT}(\hat{x})] + i\text{Im}[\hat{V}_{PT}(\hat{x})]$, where the real and imaginary parts are even and odd functions of $\hat{x}$, respectively. \par Despite the presence of an imaginary potential, the unbroken symmetry phase of a $\mathcal{PT}$--symmetric theory is characterized by a real eigenvalue spectrum. 
Conversely, in the so--called broken symmetry phase, $\hat{\mathcal{P}}\hat{\mathcal{T}}$ and $\hat{H}_{PT}$ cease to share a common eigenfunction space and the roots of the eigenvalue problem become complex. This spectral behavior has been systematically investigated for several potentials, including those of the form $V(x) = \alpha x^2(ix)^\nu$ with $\alpha \in \mathbb{R}$ and $\nu \in \mathbb{N}$ \cite{Bender1998, Bender2012}. It is conjectured that an arbitrary $\mathcal{PT}$--symmetric complex potential $V(x)$ must be analytic to possess a real spectrum \cite{Dorey2001, Bender2008b}, though other more stringent requirements may also apply \cite{Bender2007}. This surprising observation of a well--defined real eigenvalue spectrum led to the proposition that $\mathcal{PT}$ symmetry could represent a generalization of quantum mechanics, especially when formulated in terms of an inner product structure with additional symmetries \cite{Bender2002}. \par Nonetheless, a local $\mathcal{PT}$ symmetry allows arbitrarily fast quantum state evolution \cite{Bender2007}, including superluminal propagation \cite{Lee2014}, thus limiting the applicability of such Hamiltonians as a fundamental extension of quantum mechanics. Furthermore, global $\mathcal{PT}$ symmetric Hamiltonians are isomorphic to conventional Hermitian Hamiltonians for finite--dimensional systems \cite{Mostafazadeh2002, Mostafazadeh2002b, Mostafazadeh2002c, Mostafazadeh2003, Mostafazadeh2007}, differing only in their unconventional definition of the inner product. 
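For the family $V(x) = \alpha x^2 (ix)^\nu$ mentioned above, the defining $\mathcal{PT}$ condition $V(x) = V^*(-x)$ can be verified directly. The short Python sketch below (with arbitrary $\alpha$ and integer $\nu$ of our choosing) checks the condition at a few sample points on the real axis.

```python
alpha, nu = 1.3, 3            # V(x) = alpha * x**2 * (i x)**nu, nu integer

def V(x):
    # For nu = 3 this reduces to -i*alpha*x**5: Re[V] even (zero), Im[V] odd.
    return alpha * x ** 2 * (1j * x) ** nu

# PT-symmetry requires V(x) = conj(V(-x)) for real x.
checks = [abs(V(x) - V(-x).conjugate()) for x in (-2.0, -0.5, 0.7, 1.9)]
```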
In spite of such restrictions, these structures afford a mathematically useful framework for effective theories in the condensed matter realm, particularly for open quantum systems \cite{ Varga2007, Paul2007, Rotter2009, Berggren2010,Wibking2012, Fortanier2014, Wahlstrand2014, Zhu2014}, for the computational treatment of resonances in wavepacket dynamics and scattering \cite{Moiseyev1998, Muga2004, Moiseyev2011} and for light propagation in certain optical lattices \cite{Musslimani2008, Mostafazadeh2009, Ramezani2010, Lin2011}. Several experimental realizations of $\mathcal{PT}$--symmetry have been explored in this optical context, including loss--induced optical transparency \cite{Guo2009} and left--right asymmetric power oscillations \cite{Ruter2010} in a nonlinear optical device, unidirectional invisibility in a $\mathcal{PT}$--symmetric optical lattice, and the existence of coherent perfect absorbers assembled using passive optical components \cite{Sun2014}. \subsection{Quantum Transport in $\mathcal{PT}$--Symmetric Potentials} \par A characteristic of $\mathcal{PT}$--symmetric non--Hermitian theories is the presence of `source' and `sink' terms for the wavefunction norm \cite{Ferry1999,Berggren2010, Wahlstrand2014}. While a self--consistent norm has been devised for $\mathcal{PT}$--symmetry \cite{Bender2002}, we are interested in physical systems for which $\hat{H}_{PT}$ is an effective Hamiltonian and thus do not adopt this definition. Accordingly, denote by $\mathcal{N} = \braket{\psi(x,t) \vert \psi(x,t)}$ the $\mathbb{L}^2 (\mathbb{R})$ norm of $\psi(x,t)$ in a Hilbert space $\mathcal{H}$. 
Calculating the time dependence directly in terms of the $\mathcal{PT}$--symmetric potential $V_{PT}(x) = \text{Re}[V_{PT}(x)] + i\text{Im}[V_{PT}(x)]$ yields \begin{eqnarray} \label{attenrate} \frac{d\mathcal{N}(t)}{dt} &=& \int_{-\infty}^\infty dx\, \frac{\partial}{\partial t} \left(\psi^* (x,t) \psi(x,t)\right) \\ &=& \int_{-\infty}^\infty dx\, \left(\frac{\partial\psi^*(x,t)}{\partial t} \psi(x,t) + \psi^*(x,t) \frac{\partial\psi(x,t)}{\partial t}\right) \\ &=& \frac{1}{i\hbar} \int_{-\infty}^\infty dx\,\left(\psi^*(x,t) \hat{H}_{PT} \psi(x,t) - [\hat{H}_{PT}\psi(x,t)]^* \psi(x,t)\right) \\ &=& \frac{1}{i\hbar} \braket{\psi\vert(\hat{H}_{PT} - \hat{H}_{PT}^\dagger)\vert\psi} \\ &=& \frac{2}{\hbar} \braket{\psi \vert \text{Im}(\hat{V}_{PT}) \vert \psi} \end{eqnarray} \noindent where the final line follows from $\hat{H}_{PT} - \hat{H}_{PT}^\dagger = 2i\,\text{Im}(\hat{V}_{PT})$. The condition for norm attenuation, $d\mathcal{N}(t)/dt < 0$, requires that $\braket{\psi \vert \text{Im}(\hat{V}_{PT}) \vert \psi} < 0$; similarly, norm is generated when $\braket{\psi \vert \text{Im}(\hat{V}_{PT}) \vert \psi} > 0$, indicating that the imaginary part alone determines the `source' or `sink' behavior. Interestingly, since this process is contingent upon an expectation value, the source term is incapable of generating norm in the absence of some finite probability amplitude within the spatial extent of the potential. Once a state is completely attenuated, it may never be recovered by a generating term. \par This relation has an important association with transport properties.
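Before turning to transport, Eq. (\ref{attenrate}) can be checked numerically. The following is a minimal sketch (not part of the original formulation) using a standard split-step Fourier propagator with $\hbar = m = 1$ and illustrative parameters: the central-difference estimate of $d\mathcal{N}/dt$ for a Gaussian packet sitting on a purely absorbing Gaussian potential is compared against $(2/\hbar)\braket{\psi\vert\text{Im}(\hat{V})\vert\psi}$.

```python
import numpy as np

hbar, m = 1.0, 1.0  # natural units; all parameters below are illustrative

# Grid and unit-norm Gaussian state overlapping the potential
N, L = 1024, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)
psi = np.pi**-0.25*np.exp(-x**2/2)

# Purely absorbing (negative imaginary) Gaussian potential
V = -1j*0.1*np.exp(-x**2)

def step(p, dt):
    """One Strang split-step of i*hbar dpsi/dt = (p^2/2m + V) psi."""
    p = np.exp(-1j*V*dt/(2*hbar))*p
    p = np.fft.ifft(np.exp(-1j*hbar*k**2*dt/(2*m))*np.fft.fft(p))
    return np.exp(-1j*V*dt/(2*hbar))*p

def norm(p):
    return float(np.sum(np.abs(p)**2)*dx)

dt = 1e-3
# Central difference of the norm about t = 0
rate_numeric = (norm(step(psi, dt)) - norm(step(psi, -dt)))/(2*dt)
# Right-hand side of the attenuation-rate identity
rate_analytic = (2/hbar)*float(np.sum(np.abs(psi)**2*V.imag)*dx)
```

Flipping the sign of the imaginary part turns the same sketch into a norm-generating `source' term.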
Specifically, the net outgoing change in norm for all particles in a many--particle system $\partial \mathcal{N}_T / \partial t = \sum_i \partial \mathcal{N}_i / \partial t$ due to the presence of the imaginary potential must be equal to the integrated divergence of the current through the system $-\int_V \nabla \cdot \vec{j}(\vec{x},t) dV = \partial \mathcal{N}_T(t) / \partial t$, where $\vec{j}(\vec{x},t)$ is the local probability current density \begin{equation} \vec{j}(\vec{x},t) = \frac{\hbar}{2mi} \sum_{i} \left[\psi_i^* (\vec{x},t) \nabla \psi_i(\vec{x},t) - \psi_i (\vec{x},t) \nabla \psi_i^*(\vec{x},t)\right]. \end{equation} \noindent This relationship between norm and net current is obtained by directly integrating the continuity equation for the particle density. \subsection{Complex Absorbing Potentials} \par Complex absorbing potentials have been systematically developed using functional forms including linear and step potentials \cite{Neuhasuer1989}, higher--order polynomials \cite{Vibok1992, Riss1996, Ge1997, Poirier2003, Poirier2003b}, exponential \cite{Vibok1992, Vibok1992b} and hyperbolic functions \cite{Kosloff1986}, as well as through functions with singular behavior at isolated points in the complex plane \cite{Brouard1994, Manolopoulos2002}. These investigations do not suggest a universal `optimal absorbing potential'; however, the criteria necessary for an effective absorber may be distinguished. In particular, complex polynomial potentials significantly enhance absorption over purely imaginary polynomial terms, particularly in low energy cases where the de Broglie wavelength of the incident wavepacket is comparable to the characteristic length of the absorbing region \cite{Ge1997}. Adding a negative real component to the potential will increase the local kinetic energy of the incident particle and thereby reduce the wavelength of the packet, enhancing norm attenuation by the imaginary part while concurrently reducing reflection.
It should be noted that the Wentzel--Kramers--Brillouin--Jeffreys (WKBJ) approximation yields quantitatively inaccurate results where $\lambda / L \geq 1$, which is the domain of interest for most applications of absorbing potentials \cite{Ge1997}. Despite these limitations, potentials optimized in the semiclassical limit will be utilized as--is, with the assumption that general trends in absorbing efficiency are transferable. This approximation is found to be sufficient, provided that the numerical parameters defining the potential are adjusted at runtime. \par A further consideration is related to the specific application of a given potential within a simulation. In the first case, a complex potential may be located at the boundary of the simulation cell to absorb particles leaving the system [Fig. \ref{absorb_schematic}]. Such a potential should switch on smoothly outside of the interaction region and attain larger values as the distance from this region increases. A smooth profile is essential to minimize reflections, as any discontinuous step will be reflection generating \cite{Poirier2003, Poirier2003b}. While satisfied by simple cases such as complex polynomials, a particularly efficacious attenuator is the potential $V_{A, \text{edge}} (w) = -i E_{\text{min}} f(w)$, with \begin{equation} f(w) = \left(1 - \frac{16}{c^3}\right)w - \frac{1}{c^2}\left(1 - \frac{17}{c^3}\right)w^3 + 4\left(\frac{1}{(c-w)^2} - \frac{1}{(c+w)^2}\right) \end{equation} \noindent where the variable $w = 2 \delta k_\text{min} (x-x_i)$ has been introduced. In this case, $x_i$ marks the incoming boundary of the potential, $x_f$ corresponds to the edge of the simulation cell, $E_\text{min}$ is the lowest energy of interest for an incident particle, $k_\text{min} = \sqrt{2 m E_\text{min} / \hbar^2}$ is the corresponding wavevector, and $\delta$ is a dimensionless scaling parameter.
In order that the potential become singular as $x \longrightarrow x_f$, it is necessary to set $\delta = c / (2 k_\text{min} L)$, where $L = (x_f - x_i)$ is the length of the potential and $c$ is a dimensionless constant \cite{Manolopoulos2002}. This particular functional form was constructed as the solution of a semiclassical differential equation derived for plane wave scattering from a complex potential. The sum of reflection and transmission coefficients $\vert R\vert^2 + \vert T \vert^2$ was minimized as a constraint during construction, thereby ensuring optimal absorption. Furthermore, the divergent growth as $x \longrightarrow x_f$ ensures complete attenuation before the cell boundary is reached [Fig. \ref{absorb_sim}(a)]. \par While the aforementioned potential is ideal for boundary attenuation, it may also become necessary to attenuate the wavefunction within the interaction region [Fig. \ref{absorb_schematic}]. Such a potential must be symmetric to ensure isotropic scattering from each side and bidirectionally smooth to minimize reflections. The simplest such choice is a Gaussian function \begin{equation} V_{A,\text{int}}(x) = -i V_0 e^{-(x - x_0)^2 / 2\alpha^2} \end{equation} \noindent where $\alpha^2$ delimits the spread of the Gaussian. Since the spatial extent of this potential is infinite, it must be defined on a piecewise subdomain $\vert x - x_0 \vert \leq L/2$, where $\alpha$ and $L$ are chosen so that the Gaussian becomes sufficiently small at $x = x_0 \pm L/2$ [Fig. \ref{absorb_sim}(b)]. Taken together, these two functional forms comprise a sufficient armamentarium of absorbing potentials to handle any scenario that may be encountered in a routine transport simulation. As a final note, if necessary and irrespective of the application, a negative real part may always be added to decrease the de Broglie wavelength of the incident particle and enhance absorption.
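Both absorbers are straightforward to tabulate on a grid. The sketch below implements the profiles exactly as written above; the numerical value of $c \approx 2.62206$ is the one commonly quoted in the literature for the transmission-free form and should be treated as an assumption here, and the function names are purely illustrative.

```python
import numpy as np

c = 2.62206  # dimensionless constant of the transmission-free absorber
             # (assumed value from the literature)

def f(w):
    """Profile of the edge absorber V = -i*E_min*f(w); w runs from 0 to c."""
    return ((1 - 16/c**3)*w
            - (1/c**2)*(1 - 17/c**3)*w**3
            + 4*(1/(c - w)**2 - 1/(c + w)**2))

def V_edge(x, x_i, x_f, E_min):
    """Boundary absorber on [x_i, x_f); diverges as x -> x_f."""
    w = c*(x - x_i)/(x_f - x_i)   # w = 2*delta*k_min*(x - x_i), 2*delta*k_min = c/L
    return -1j*E_min*f(w)

def V_interior(x, x0, V0, alpha, L):
    """Truncated Gaussian absorber for use inside the interaction region."""
    V = -1j*V0*np.exp(-(x - x0)**2/(2*alpha**2))
    return np.where(np.abs(x - x0) <= L/2, V, 0.0)
```

Note that $f(0) = 0$ (smooth switch-on at $x_i$) while $f(w)$ grows monotonically and diverges as $w \to c$, i.e. as $x \to x_f$.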
\subsection{$\mathcal{PT}$--Symmetric Generating Potentials} \par Just as a negative imaginary potential attenuates the wavefunction norm, a positive imaginary potential will increase the wavefunction norm. Consider a simple experiment in which a Gaussian wavepacket of unit norm $\braket{\psi(t_1) \vert \psi(t_1)} = 1$ is incident on a potential of the form \begin{equation} V(x) = \left\{ \begin{array}{ccc} iV_0 e^{-(x-x_0)^2/2\alpha^2}, & & \vert x - x_0 \vert \leq L/2 \\ 0, & & \text{otherwise} \end{array} \right. \end{equation} \noindent at $t = t_1$, and emerges from the potential later at time $t = t_2$. During transmission, the norm of the packet will have been increased in magnitude; however, the shape of the packet will be unchanged [Fig. \ref{gaussgrow}]. If an additional unit of norm is added to the packet so that $\braket{\psi(t_2) \vert \psi(t_2)} = 2$, this may be interpreted as the addition of a second particle to the system. Nonetheless, this scenario is unphysical as the particles coincide spatially and copropagate under time evolution. To avoid such complications, the use of these potentials in transport calculations requires systematic system-- and bias--dependent tuning \cite{Varga2007, Wibking2012}. \par A more suitable option is provided through $\mathcal{PT}$--symmetric potentials possessing anisotropic transmission resonances (ATR), also known as `unidirectional invisibility.' Such potentials were first theoretically investigated in the context of optical heterostructures and Bragg gratings characterized by alternating gain / loss regions \cite{Lin2011}, with subsequent experimental realization in a temporal optical lattice \cite{Regensburger2012}. At the spontaneous $\mathcal{PT}$--symmetry breaking point, these systems permit near--perfect transmission of a wave incident from either side while simultaneously reflecting waves at one boundary and being reflectionless at the other.
This anisotropy is a manifestation of the generalized unitarity condition satisfied by $\mathcal{PT}$--symmetric potentials \cite{Ge2012}. Furthermore, the reflecting side of such an optical structure may exhibit enhanced gain; that is, the reflected wave may have an amplitude greater than that of the incident wave. This phenomenon has direct implications for matter--wave scattering, as the paraxial approximation to the equation of motion for propagation of the electric field $E(x)$ in a medium is formally equivalent to the Schr\"{o}dinger equation. In the case of an optical heterostructure, the variation in the index of refraction $n$ occurs longitudinally to the incident wave, and the equation of motion assumes the form of a Helmholtz equation \begin{equation} \frac{\partial^2 E(x)}{\partial x^2} + k^2 \left(\frac{n}{n_0}\right)^2 E(x) = 0 \end{equation} \noindent where the wavevector $k = n_0 \omega / c$, the index of refraction of the surrounding medium is $n_0$, $c$ is the speed of light in vacuum, and $\omega$ is the angular frequency of the wave. Introducing the convention that $(n / n_0)^2 = (1 + 2 V_\text{ATR}(x))$ establishes a formal connection to the Schr\"{o}dinger equation and the quantum case. To mimic the aforementioned heterostructures, assume that the complex potential, and hence index of refraction, acts over a range $0 \leq x \leq L$ and has the functional form \begin{eqnarray} \label{ATRpot} V_\text{ATR}(x) &=& V_A \cos (2\beta x) + i V_B \sin(2\beta x)\\ &=& V_0 e^{2i\beta x} \end{eqnarray} \noindent where $V_A = V_B = V_0$ is assumed in the second line and $\beta = \pi / \Lambda$ for a lattice of spatial periodicity $\Lambda$. It is clear that this potential satisfies the condition $V_\text{ATR}(x) = V_\text{ATR}^*(-x)$ as required for $\mathcal{PT}$--symmetry. Note that the choice $V_A = V_B$ places the system at the critical point for $\mathcal{PT}$--symmetry breaking, with a real energy spectrum retained for $V_B / V_A \leq 1$.
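The $\mathcal{PT}$--symmetry condition on Eq. (\ref{ATRpot}) is trivial to verify on a grid; the short sketch below (illustrative unit parameters) confirms $V_\text{ATR}(x) = V_\text{ATR}^*(-x)$ numerically, i.e. an even real part and an odd imaginary part.

```python
import numpy as np

V0, Lam = 1.0, 1.0        # illustrative strength and lattice period
beta = np.pi/Lam

def V_atr(x):
    """ATR potential V0*exp(2i*beta*x) = V0*[cos(2*beta*x) + i*sin(2*beta*x)]."""
    return V0*np.exp(2j*beta*x)

x = np.linspace(-3.0, 3.0, 601)
# PT-symmetry condition V(x) = V*(-x)
pt_defect = float(np.max(np.abs(V_atr(x) - np.conj(V_atr(-x)))))
```

The defect is zero to machine precision for any real $V_0$, and becomes nonzero as soon as $V_A \neq V_B$ phases are introduced asymmetrically.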
This ratio likewise controls the left/right--reflection asymmetry. Within the coupled--mode approximation \cite{Lin2011}, the deviation of the transmission coefficient from unity is found to vanish for wavevectors with $\delta = \beta - k = 0$, as does the reflection coefficient for left--incident (right--incident) plane waves. Conversely, the reflection coefficient for right--incident (left--incident) plane waves grows as $L^2 (k V_A)^2$. Note that, unlike the Schr\"{o}dinger equation, the ``potential'' appearing in the Helmholtz equation is energy dependent through the $k$ terms. For shallow gratings this dependence is negligible and hence the equivalence is effectively exact \cite{Kulishov2005}. \par The aforementioned analysis of invisibility is nonetheless performed in an approximate regime. An exact solution of the Schr\"{o}dinger equation at the $\mathcal{PT}$--symmetry breaking point \begin{equation} \frac{\partial^2 \psi(x,t)}{\partial x^2} + \frac{2m}{\hbar^2} \left(E - \hat{V}_{ATR}(x)\right)\psi(x,t) = 0 \end{equation} \noindent with $\hat{V}_{ATR}(x) = V_0 \left[\cos(2\beta x) + i\sin(2\beta x)\right]$ for $0 < x < L$ may be obtained. Performing a change of variables to $y = (\Lambda \sqrt{V_0} / \pi) \exp[i \pi x / \Lambda]$, the Schr\"{o}dinger equation becomes \begin{equation} y^2 \frac{d^2 \psi}{dy^2} + y \frac{d\psi}{dy} - (y^2 + \nu^2)\psi = 0 \end{equation} \noindent where $\nu = k \Lambda / \pi$, and the convention that $\hbar = 2m = 1$ has been adopted for convenience of notation. In this case $k$ is the wavevector associated with the momentum of the quantum particle through $p = \hbar k$. This is a Bessel equation with solutions $\psi_k (x) = I_\nu (y)$ and $\psi_{-k} (x) = I_{-\nu} (y)$ given in terms of the modified Bessel functions of the first kind. These functions remain linearly independent provided that $k \Lambda / \pi$ is not an integer \cite{Graefe2011,Longhi2011,Jones2012}.
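The linear-independence claim can be checked directly through the standard Wronskian identity for the modified Bessel pair, $I_\nu(y) I_{-\nu}'(y) - I_\nu'(y) I_{-\nu}(y) = -2\sin(\pi\nu)/(\pi y)$, which is nonzero precisely when $\nu$ is not an integer. A numerical sketch (scipy assumed available; $\nu$ and $y$ illustrative):

```python
import numpy as np
from scipy.special import iv, ivp

# nu = k*Lambda/pi; chosen non-integer so I_nu and I_{-nu} are independent
nu, y = 0.7, 1.3

# Wronskian of the modified Bessel pair:
#   I_nu(y) I_{-nu}'(y) - I_nu'(y) I_{-nu}(y) = -2 sin(pi*nu)/(pi*y)
W = iv(nu, y)*ivp(-nu, y) - ivp(nu, y)*iv(-nu, y)
W_exact = -2*np.sin(np.pi*nu)/(np.pi*y)
```

As $\nu$ approaches an integer the Wronskian vanishes, signaling the degeneracy at the exceptional points discussed next.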
Significantly, these solutions are not orthogonal in the conventional sense; however, they are orthogonal under the $\mathcal{PT}$--symmetric inner product. At exceptional points where $\nu = n \in \mathbb{N}$, there exists a spectral singularity \cite{Mostafazadeh2009, Longhi2010}, whereupon $\psi_{\pm k}(x)$ become degenerate. To resolve this situation, the solutions must be extended by the addition of Jordan associated functions \cite{Graefe2011}. This general solution will be neglected herein; in subsequent calculations, the solutions at all points are approximated by Bessel functions. As before, $\beta = \pi/\Lambda$ with the provision that $L=N \Lambda$ with $N \in \mathbb{N}$, corresponding to a $\mathcal{PT}$--symmetric crystal $N$ cells in length. \par Analysis of these solutions indicates that invisibility is not exact for $\delta = \beta - k \neq 0$, with a nontrivial breakdown of this assumption particularly apparent beyond a critical length $L_c = (2\pi^3 / V_0^2 \Lambda^3)$ \cite{Longhi2011}. This observation is consistent with the modified unitarity condition for $\mathcal{PT}$--symmetric potentials, $T - 1 = \pm \sqrt{R_L R_R}$, which bounds the deviation from ideal behavior \cite{Ge2012}. Numerical results further indicate that the transmission $T$ and unenhanced reflection coefficient $R_L$ oscillate rapidly as a function of $\delta$; however, the amplitude of this oscillation remains small. The enhanced reflection coefficient $R_R$, on the other hand, affords a strong enhancement only within a narrow window of values about $\delta = 0$ \cite{Jones2012}. \par A unique phenomenon is observed when a Gaussian wavepacket is incident on an ATR potential [Fig. \ref{packet_saturate}]. Assuming the packet is incident on the generating interface of the ATR, the reflected wave eventually saturates in amplitude, emerging with an extended, flattened peak.
This extrusion process occurs during the entire period for which the maximum of the incident packet remains under the barrier. This phenomenon was first observed in numerical simulations and perturbative calculations \cite{Longhi2010} and later rationalized in terms of the Jordan--block structure of the eigenfunction space for the potential \cite{Graefe2011}. Physically, this saturation occurs due to the presence of spectral singularities, with the resultant spectral broadening causing a saturation in the secular growth of scattered waves. While a linear scaling behavior would be expected at this point, the excited Jordan associated functions grow linearly to precisely compensate the decrease in contribution from the nondegenerate states, leading to the stalled growth. Formally, this corresponds to an incident Gaussian packet being reflected as a sum of error functions \cite{Jones2011} and thus the incident pulse is lengthened into an extended packet of peak width $\sim L$ upon reflection \cite{Jones2012}. \subsection{Wavepacket Propagation and Transmission / Reflection Coefficients} \par Consider a barrier penetration problem in which a free particle is incident on an isolated potential $\hat{V}_{PT}$ of width $L$, with the potential permitting anisotropic transmission resonances as per Eq. (\ref{ATRpot}). In a first order approach, the wavefunctions in the left and right regions may be expanded in terms of plane--wave eigenstates \begin{equation} \psi(x) = \left\{ \begin{array}{lcc} \psi_{L,k}(x) = \frac{1}{\sqrt{2\pi}}(A_L e^{ikx} + B_L e^{-ikx}) && x \leq 0 \\ \psi_{R,k}(x) = \frac{1}{\sqrt{2\pi}}(A_R e^{ik(x-L)} + B_R e^{-ik(x-L)})&& x \geq L \end{array}\right.
\end{equation} \noindent The wavefunctions on either side of the scattering region are linked through the transfer matrix $\hat{M}(k)$ with components \begin{equation} \left( \begin{array}{c} A_R \\ B_R\end{array}\right) = \left( \begin{array}{cc} M_{11}(k) & M_{12}(k) \\ M_{21}(k) & M_{22}(k) \end{array}\right) \left( \begin{array}{c} A_L \\ B_L\end{array}\right) \end{equation} \noindent from which the transmission amplitude $t_R = 1 / M_{22}$ as well as left $r_L = -M_{21} / M_{22}$ and right $r_R = M_{12} / M_{22}$ reflection amplitudes are readily obtained. Evaluating the Bessel function solutions for this potential at the boundaries, the transfer matrix $M(k)$ is constructed explicitly \cite{Longhi2011}: \begin{eqnarray} M_{11}(k) &=& \cos (kL) + i \frac{\Lambda \sin(kL)}{2k \sin(\pi \nu)} \left(k^2 Q_1 Q_2 - V_0 D_1 D_2\right) \\ M_{12}(k) &=& -i \frac{\Lambda \sin(kL)}{2k \sin(\pi \nu)} \left(V_0 D_1 D_2 + k^2 Q_1 Q_2 + k \sqrt{V_0} \left(D_1 Q_2 + D_2 Q_1\right) \right) \\ M_{21}(k) &=& i \frac{\Lambda \sin(kL)}{2k \sin(\pi \nu)} \left(V_0 D_1 D_2 + k^2 Q_1 Q_2 - k \sqrt{V_0} \left(D_1 Q_2 + D_2 Q_1\right) \right) \\ M_{22}(k) &=& \cos (kL) - i \frac{\Lambda \sin(kL)}{2k \sin(\pi \nu)} \left(k^2 Q_1 Q_2 - V_0 D_1 D_2\right) \end{eqnarray} \noindent where the notation \begin{equation} \begin{array}{cc} Q_1 = I_\nu (\Delta), & D_1 = \partial_x I_\nu(\Delta) \\ Q_2 = I_{-\nu} (\Delta), & D_2 = \partial_x I_{-\nu}(\Delta) \end{array} \end{equation} \noindent has been introduced with $\Delta = \Lambda \sqrt{V_0} / \pi$ and $2m = 1$. Similar solutions for other masses may be recovered through the substitution $k \mapsto k / \sqrt{2m}$ and consistent rescaling. \par These relations afford the reflection and transmission coefficients for plane--wave scattering through the $\mathcal{PT}$--symmetric media. 
Nonetheless, the corresponding result for wavepacket transmission will differ substantially, especially when the packet width is comparable to the extent of the $\mathcal{PT}$--symmetric region. To derive the corresponding coefficients for a finite--width packet, create an initial Gaussian envelope of variance $\sigma^2$ and wavevector $k_0$ centered at $x=a$: \begin{equation} \phi(x,0) = \frac{1}{(\pi\sigma^2)^{1/4}} e^{i k_0 (x - a)} e^{-(x-a)^2 / 2\sigma^2}, \end{equation} \noindent from which the momentum--space representation may be obtained via a Fourier transform \begin{eqnarray} \phi(k,0) &=& \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \phi(x,0) e^{-ikx} \,dx\\ &=& \left(\frac{\sigma^2}{\pi}\right)^{1/4} e^{-(k-k_0)^2\sigma^2 / 2} e^{-ika}. \end{eqnarray} \noindent Using this expression, the wavefunction may be synthesized in terms of the plane wave eigenfunctions $\psi_k(x)$ \begin{equation} \label{packeteqn} \psi(x,t) = \int \psi_k(x) \phi(k,0) e^{-iE(k)t} \, dk, \end{equation} \noindent where $E(k) = k^2$ is the energy of a given plane--wave component. For illustrative purposes, assume that $\psi_k(x) = C(k) (2\pi)^{-1/2} e^{ikx}$ for a right--moving packet. Expanding Eq. (\ref{packeteqn}) explicitly affords \begin{eqnarray} \psi(x,t) &=& \frac{1}{\sqrt{2\pi}} \left(\frac{\sigma^2}{\pi}\right)^{1/4} \int e^{-(k-k_0)^2\sigma^2 / 2} e^{-ika} e^{-ik^2 t} C(k) e^{ikx} \, dk \\ &=& \int \left(\frac{1}{\sqrt{2\pi}}e^{ikx} \right) \phi'(k,t) \, dk, \end{eqnarray} \noindent where in the second line $\psi(x,t)$ was rewritten in terms of the Fourier transform of a function $\phi'(k,t) = C(k) \exp [-ik^2 t] \phi(k,0)$ comprising a Gaussian envelope with amplitude $C(k)$ inherited from the plane wave. Using this representation, the norm of the packet is simply \begin{eqnarray} \mathcal{N} = \vert\vert \phi'(k,t) \vert\vert^2 &=& \int [\phi'(k,t)]^* \phi'(k,t) \, dk \\ &=& \left(\frac{\sigma^2}{\pi}\right)^{1/2} \int e^{-(k-k_0)^2\sigma^2 } \vert C(k) \vert^2 \, dk.
\end{eqnarray} \noindent Assuming unit incident norm, the norm of the transmitted or reflected packet equates to the transmission or reflection coefficient, respectively. To compute this explicitly, let a packet be incident on the right side of the $\mathcal{PT}$--symmetric region ($B_R = 1$ and $A_L = 0$) so that the amplitude of the reflected wave is $A_R = M_{12}(k) / M_{22}(k)$. Then the reflection coefficient for the wave on the right side is given by \begin{equation} R_R = \left(\frac{\sigma^2}{\pi}\right)^{1/2}\int e^{-(k-k_0)^2\sigma^2} \left| \frac{M_{12}(k)}{M_{22}(k)} \right|^2 \, dk, \end{equation} \noindent and, with the transmitted amplitude $B_L = 1 / M_{22}(k)$, we have the transmission coefficient \begin{equation} \label{wptranscoeff} T_R = \left(\frac{\sigma^2}{\pi}\right)^{1/2}\int e^{-(k-k_0)^2\sigma^2 } \left| \frac{1}{M_{22}(-k)} \right|^2 \, dk, \end{equation} \noindent with the sign change due to the opposite motion of the plane wave, though this is strictly formal since $\vert M_{22}(k)\vert$ is an even function of $k$ for the given potential. Note that these integrals are well defined with a removable singularity at $k = 0$. The accuracy of this framework requires that the barrier width and phase factor are suitably chosen so that the packet does not spread appreciably on the traversal timescale. One caveat of this analysis is that the reflected packet must maintain a Gaussian profile, a condition which is only satisfied for a certain range of parameters due to the saturation of anisotropic transmission resonances. \subsection{Potential Structure} \par Having developed a toolkit containing both absorbing and generating potentials, these components may be assembled to afford an effective simulation method for open systems.
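The momentum-space machinery above is easy to validate numerically. The sketch below (illustrative parameters, not tied to any particular barrier) first checks the Gaussian envelope $|\phi(k,0)| = (\sigma^2/\pi)^{1/4}\exp[-(k-k_0)^2\sigma^2/2]$ against a discrete Fourier transform, and then evaluates the Gaussian-weighted averages defining $R_R$ and $T_R$ for a $k$-independent plane-wave coefficient, for which the weight must integrate to unity.

```python
import numpy as np

sigma, k0, a = 1.0, 3.0, 2.0          # illustrative packet parameters

# Position-space packet on a grid
N, L = 4096, 80.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
phi_x = (np.pi*sigma**2)**-0.25*np.exp(1j*k0*(x - a) - (x - a)**2/(2*sigma**2))

# DFT approximation to phi(k) = (2 pi)^{-1/2} * Int phi(x) e^{-ikx} dx
k = 2*np.pi*np.fft.fftfreq(N, d=dx)
phi_k = np.fft.fft(phi_x)*dx/np.sqrt(2*np.pi)

# Analytic magnitude of the momentum-space envelope
envelope = (sigma**2/np.pi)**0.25*np.exp(-(k - k0)**2*sigma**2/2)
ft_error = float(np.max(np.abs(np.abs(phi_k) - envelope)))

# Gaussian-weighted average defining the packet coefficients; for a
# k-independent coefficient |r(k)|^2 = 0.5 the result must be 0.5 exactly,
# since the normalized weight integrates to one
dk = 2*np.pi/L
w = np.sqrt(sigma**2/np.pi)*np.exp(-(k - k0)**2*sigma**2)
R_flat = float(np.sum(0.5*w)*dk)
```

For a nontrivial barrier one simply replaces the constant $0.5$ by $|M_{12}(k)/M_{22}(k)|^2$ or $|1/M_{22}(-k)|^2$ evaluated on the same $k$ grid.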
The most intuitive construction entails placing an edge absorbing potential $\hat{V}_{A,\text{edge}}$ at the boundary of the simulation cell, which is assumed to lie within an infinite square well, and an ATR generating potential $\hat{V}_{ATR}$ near the other boundary [Fig. \ref{edgegen_geom}]. The generating face of the ATR potential is oriented toward the scattering region, so that any particle incident on this region will transmit and be compensated by an additional reflected particle. The transmitted particle will ultimately reflect off the square well boundary at $x_\text{min}$ and reenter the system. Within the center of the cell these wavepackets encounter a scattering region in which the particles interact with static potentials or through many--body interactions. After traversing this region, the particle reaches $\hat{V}_{A,\text{edge}}$, where it is completely attenuated. This establishes a net current from the generating region to the absorbing region. Note that neither the generating nor attenuating potentials overlap with the scattering region, ensuring that the transport processes are unperturbed. \par In this scenario, the absorbing potential $\hat{V}_{A,\text{edge}}$ is chosen so that any packet entering this region is completely attenuated before reaching the edge of the square well. If the potential is sufficiently strong, reflections and transmission will be minimized, thereby eliminating a source of artifacts. The definition of the ATR potential is slightly more complicated. Due to the saturable reflections inherent in ATR potentials, the generated packet will only be Gaussian (and not an extended packet) if the width of the incident wavepacket is greater than the region $L_\text{ATR} = \vert g_1 - g_2 \vert$ over which $\hat{V}_{ATR}$ is defined.
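A minimal sketch of this first geometry on a grid is given below. All placements and strengths are illustrative, and a smooth quadratic ramp stands in for the singular transmission-free edge absorber; the essential structural constraints are that the ATR and absorber occupy disjoint regions near opposite boundaries and that the scattering window between them remains potential-free.

```python
import numpy as np

# Illustrative 1-D cell [0, 1); the scattering region is left untouched
N = 10**4
x = np.linspace(0.0, 1.0, N, endpoint=False)
V = np.zeros(N, dtype=complex)

# ATR generator: N_cells periods of V0*exp(2i*pi*(x - g1)/Lam) on [g1, g2)
Lam, N_cells, V0_atr = 0.02, 5, 1.0
g1 = 0.10
g2 = g1 + N_cells*Lam
atr = (x >= g1) & (x < g2)
V[atr] = V0_atr*np.exp(2j*np.pi*(x[atr] - g1)/Lam)

# Edge absorber on the last 10% of the cell (smooth quadratic stand-in for
# the singular transmission-free form)
a1, W0 = 0.90, 50.0
edge = x >= a1
V[edge] += -1j*W0*((x[edge] - a1)/(1.0 - a1))**2

# The interval (g2, a1) is reserved for the scatterers
scatter = (x > g2) & (x < a1)
```

Static scattering potentials are then added only on the `scatter` window, so that neither the generating nor the attenuating region perturbs the transport physics.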
Furthermore, the width of this region and the distance $L_d = \vert x_\text{min} - g_1 \vert$ determine the interpacket spacing or the delay time $2 (L_d + L_\text{ATR}) / v_g$ between packet arrivals. Note that, by construction, this method requires the density from within the scattering region to impinge on the generating potential in order to afford a positive flux of norm. Thus, the bias across the simulation must be suitably small so that backscattered packets continue to reach $\hat{V}_{ATR}$. For steady--state current (no time--dependent potentials or charge accumulation) we require that the total norm within the cell remain constant at all times, and hence $\langle\partial \mathcal{N}_G / \partial t\rangle = -\langle\partial \mathcal{N}_A / \partial t\rangle$, where $\mathcal{N}_G$ is the generated norm and $\mathcal{N}_A$ is the attenuated norm for the system. \par A second scenario may be envisioned, in which a generating potential consistently adds a stream of packets with fixed delay spacing to the system. In such a configuration, the outgoing particles are once again attenuated by a potential at the cell boundary $\hat{V}_{A,\text{edge}}$, however, a second absorbing potential $\hat{V}_{A,1}$ and generating potential $\hat{V}_{ATR}$ are utilized to create a pulse generator [Fig. \ref{bounce_geom}]. Specifically, a wavepacket $\psi_G(x,t)$ of norm $\braket{ \psi_G(x,t) \vert \psi_G(x,t)} = \mathcal{N}_g$ with $\mathcal{N}_g > 1$ is placed between the ATR and the cell boundary, with the generating face of the ATR facing toward the cell edge. During simulation, the packet $\psi_G(x,t)$ impinges on the reflecting surface of the ATR causing an identical packet to be reflected into the delay region accompanied by transmission of the incident packet toward the scattering region. The transmitted packet then passes through $\hat{V}_{A,1}$, which attenuates the particle to unit norm before it interacts with the scatterers. 
In the same manner, $\hat{V}_{A,1}$ attenuates any packets passing from this interaction region toward the pulse generator, isolating it from the simulation. The new packet generated in the delay region reflects off the cell boundary at $x_\text{min}$ and propagates back toward the generating surface at $g_1$ to begin this process anew. The use of pulse trains generated in this manner is particularly appealing for situations where non--equilibrium charge accumulation, $\langle\partial \mathcal{N}_G / \partial t\rangle \neq -\langle\partial \mathcal{N}_A / \partial t\rangle$, is desirable, such as in capacitive charging. \par In the packet generator configuration, the pulse generation delay time is given by $2 L_d / v_g$, and hence is a tunable parameter. The norm $\mathcal{N}_g$ of the generating packet must be adjusted for the given absorbing potential $\hat{V}_{A,1}$, as the attenuation rate is a function of both the potential itself and the norm of the incident wavefunction (Eq. \ref{attenrate}). Note that the attenuation rate is larger for a packet with a larger norm, and thus the rate of absorption for a reflected packet incident from the scattering region will be less than that for a probe packet incident from the packet generator. The applicable timescale for this method is likewise limited by the scheme utilized to maintain the wavepacket(s) in the generating region, as they will ultimately broaden under time evolution in the absence of measurement. In this scheme, a complication regarding transferability to different biases results from the use of generating potentials. At a finite bias voltage $V$ the energy $E_0$ of a given particle will undergo a shift to $E = E_0 + eV$. This corresponds to a new wavevector $k = \sqrt{2m(E_0 + eV)}/\hbar$, and hence a new group velocity for the packet $v_g = \hbar k / m = \sqrt{2m(E_0 + eV)} / m$.
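The bias-induced rescaling of the wavevector and group velocity is elementary but worth encoding explicitly, since every bias point requires it. A sketch in natural units ($\hbar = m = e = 1$, matching the conventions used in the numerical section):

```python
import numpy as np

hbar, m, e = 1.0, 1.0, 1.0   # natural units (illustrative)

def shifted_kinematics(E0, V_bias):
    """Wavevector and group velocity after the bias shift E -> E0 + e*V."""
    E = E0 + e*V_bias
    k = np.sqrt(2.0*m*E)/hbar
    v_g = hbar*k/m
    return k, v_g

# Representative values from this section: E0 = 1.25e5 gives k = 500
k0, v0 = shifted_kinematics(1.25e5, 0.0)
k1, v1 = shifted_kinematics(1.25e5, 5.0e3)
```

Both $k$ and $v_g$ grow monotonically with bias, which is what forces the re-optimization of the absorbing and generating region widths discussed next.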
Thus, the absorbing potential parameters need to be reoptimized at each finite bias, or the bias range chosen to be sufficiently narrow, to ensure the addition of unit norm packets with minimal reflection. This is less of a concern for the boundary absorbing potential, as the strength and width may be initially chosen so as to attenuate any incident packets for a range of energies $E_0 \pm eV$. Nonetheless, the widths of absorbing and generating regions must be altered for both propagation schemes since the shift in group velocity affects the extent of norm generation or loss. Specifically, the net change in norm due to the absorbing region is \begin{eqnarray} \mathcal{N}_{A} &=& \int_{t_i}^{t_f} dt \,\frac{\partial \mathcal{N}_{A}}{\partial t} \\ &=& \frac{2}{\hbar}\int_{t_i}^{t_f} dt \, \braket{ \psi(\vec{x},t) \vert \text{Im}(\hat{V}_{PT})\vert \psi(\vec{x},t)} \\ &=& \frac{2}{\hbar}\int_{t_i}^{t_f} dt \int_\mathcal{V} d^N x \, \psi^*(\vec{x},t)\, [\text{Im}(V_{PT}(\vec{x}))] \, \psi(\vec{x},t) \end{eqnarray} \noindent such that $\Delta t = t_f - t_i = L / v_g$ is the duration for which the attenuating potential acts, $\mathcal{V}$ is the volume of the absorbing region, and $N$ is the dimensionality of the system. The delay region must likewise be modified to ensure a proper inter--packet delay time. Finally, an ultimate time scale must be assigned to the stability of these simulations due to aberrant accumulation or loss of norm. This deviation may arise from imperfect transmission, reflection, or generation, or from the inevitable spread of the generating wavepacket. \section{Numerical Results} \subsection{Propagation Parameters} \par Numerical simulations are performed through real--time propagation of an initial wavepacket. The propagation method, detailed in the Appendix, employs a forward finite difference algorithm to propagate real and imaginary components of the wavefunction.
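As a sanity check on the discretization described in this subsection (assuming the parameter values quoted below: $\sigma^2 = 0.001$, $k_0 = 500$, $10^4$ lattice points with $\Delta x = 10^{-4}$), the initial packet can be assembled on the grid and its norm, width, and phase-resolution condition verified:

```python
import numpy as np

# Parameters quoted in this subsection
sigma2, k0, x0 = 0.001, 500.0, 0.5
N, dx = 10**4, 1.0e-4
x = dx*np.arange(N)

# Normalized Gaussian times monochromatic plane wave
psi = ((np.pi*sigma2)**-0.25
       *np.exp(-(x - x0)**2/(2*sigma2))
       *np.exp(1j*k0*(x - x0)))

norm = float(np.sum(np.abs(psi)**2)*dx)

# FWHM of the amplitude envelope: 2*sigma*sqrt(2 ln 2)
half = np.abs(psi) >= 0.5*np.max(np.abs(psi))
fwhm = float(np.count_nonzero(half)*dx)
fwhm_exact = 2.0*np.sqrt(sigma2)*np.sqrt(2.0*np.log(2.0))

# Phase oscillation length 2*pi/k0 should comfortably exceed the lattice spacing
phase_len = 2*np.pi/k0
```

The checks confirm unit norm to numerical precision and that the lattice spacing resolves the phase oscillation of the packet, as claimed in the text.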
The initial wavepacket is described by the product of a normalized, unit mass Gaussian centered at $x_0$ and a monochromatic plane wave as \begin{equation} \psi(x,0) = \frac{1}{(\pi \sigma^2)^{1/4}} e^{-(x-x_0)^2 / 2\sigma^2} e^{ik_0 (x-x_0)} \end{equation} \noindent where $k_0 = \sqrt{2E}$ is the initial wavevector for a particle of energy $E$ and $2\sigma \sqrt{2 \log 2}$ is the full--width at half--maximum (FWHM) spatial extent of the packet. The packet propagates in the direction of $k_0$ with frequency $\omega = k^2 /2$ and group velocity $v_g = \partial \omega / \partial k = k$. The wavepacket is discretized on a spatial lattice comprising $N = 1 \times 10^4$ elements and integrated with finite temporal and spatial steps, $\Delta t = 1.0 \times 10^{-9}$ and $\Delta x = 1.0 \times 10^{-4}$ respectively. This ensures that the lattice spacing is smaller than the phase oscillation length of the packet for a typical choice of parameters ($\sigma^2 = 0.001$ and $k_0 = 500$). Arbitrary potentials are defined within the confines of the lattice, with infinite square--well boundary conditions ensuring that the wavefunction vanishes at the edges of the cell. \subsection{ATR Potential Numerics} \par Scattering from an ATR potential was simulated via real--time propagation of an initial Gaussian wavepacket ($k_0 = -500$, $\sigma^2 = 0.001$, $x_0 = 0.80$) incident on an ATR region of width $L = 20 \Lambda = \pi / 25$ centered about $x = 0.50$. The scaling behavior of the reflection and transmission coefficients exhibits good agreement with analytical calculations, with a few notable deviations [Fig. \ref{coefficients}]. In particular, the right (enhanced) reflection coefficient $R_\text{right}$ and the transmission coefficient $T$ are found to be nontrivially smaller than the analytical result when the ATR potential strength is greater than $V_0 \sim 6.0 \times 10^{-3}$. 
This corresponds to a regime for which $R_\text{right} > 1.0$, and hence where the wavefunction norm is more than doubled. The discrepancy may arise from the approximation of eigenfunctions within the ATR region as modified Bessel functions of the first kind, and thus the neglect of Jordan associated functions. Additional deviations are due to the spread of the wavepacket during propagation, as the FWHM no longer corresponds to that defined by $\sigma^2 = 0.001$ in the initial distribution. Nonetheless, calculations in which wavepacket propagation was initiated as close as possible to the ATR region demonstrate that violations of the quasistatic approximation arising from wavefunction spread do not account for these large discrepancies in the data. \par There is a strong dependence of the enhanced reflection coefficient on the incident wavevector when scattering from a grating with fixed ATR mode wavevector $\Lambda = \pi / k_\text{grating} = \pi / 500$ [Fig. \ref{enhanced_vs_k}]. Nonetheless, the reflection coefficient $R_\text{right}$ is reduced by no more than 10\% for wavevectors $k_0 = 500 \pm 10$, corresponding to incident packet energies in the range $E = E_0 \pm 5000$. Thus, if used as a generating potential in transport calculations, this ATR configuration would ensure greater than 90.0\% generation for bias values of $eV = \pm 5000$, or $\sim 4.0 \%$ of the incident packet energy. Such a dispersion is more than suitable for most transport applications, in which the bias need not exceed a few electron volts. \par The enhanced reflection coefficient ($R_\text{right}$) is found to exhibit an initial quadratic dependence on the number of $\mathcal{PT}$--symmetric ATR unit cells, followed by a linear increase at cell numbers $N \geq 5$ [Fig. \ref{length_dependence}]. The transmission coefficient drops below unity for large ATR crystals; however, the overall magnitude of this effect is rather small ($T \sim 0.989$ at $N = 30$). 
For simulation purposes, it is desirable to keep the length of the ATR region smaller than the width of the Gaussian to prevent extrusion of the generated packet. For $\sigma^2 = 0.001$, which represents a rather broad packet, this requires $N \leq 20$. \par The dependence of transmission properties on $\sigma^2$ is important for the stability of a packet generator, yet this is difficult to quantify numerically due to the spread of the packet during real--time propagation. Taking the analytical results as a guide, a strong dependence is found between the enhanced reflection coefficient $R_\text{right}$ and the incident packet width [Fig. \ref{sgsq_dependence}]. As the packet broadens spatially under time evolution, $\sigma^2 \longrightarrow \infty$, the momentum distribution narrows and $R_\text{right}$ asymptotically approaches the value expected for an incident plane wave. Thus, if a broad initial packet is chosen, there will be little change in the enhanced reflection coefficient as a function of time, leading to a longer timescale for stable packet generation. Conversely, if a narrow initial packet is chosen, the enhanced reflection coefficient will vary substantially between subsequent generation events as $\sigma^2$ grows, lending inconsistency to the simulation. \subsection{Transport Through A Scattering Region} \par Calculations employing the ATR generator were performed using broad wavepackets ($\sigma^2 = 0.01$) with large norm $\mathcal{N} = 8$ and an incident wavevector $k = 500$ corresponding to an energy of $E = 1.25 \times 10^5$. The generating wavepacket was situated between the edge of the simulation cell and an ATR generating potential of width $L = 5 \Lambda = \pi / 100$. The potential was numerically optimized to yield $V_0 = 16049.5$, which affords unit generation and transmission of the incident packet. 
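The narrowing of the momentum distribution with increasing $\sigma^2$ can be checked directly: for the Gaussian packet defined above, $\Delta k = 1/\sqrt{2\sigma^2}$, which a discrete Fourier transform of the sampled packet reproduces (a sketch, assuming NumPy):

```python
import numpy as np

# Momentum-space width of the sampled Gaussian packet versus sigma^2.
# For psi ~ exp(-(x - x0)^2 / 2 sigma^2) e^{i k0 x}, the analytic spread is
# Delta k = 1 / sqrt(2 sigma^2): broader packets have narrower k-distributions.
def momentum_spread(sigma2, N=10_000, dx=1.0e-4, x0=0.5, k0=500.0):
    x = np.arange(N) * dx
    psi = (np.pi * sigma2) ** -0.25 \
        * np.exp(-((x - x0) ** 2) / (2.0 * sigma2)) * np.exp(1j * k0 * x)
    k = 2.0 * np.pi * np.fft.fftfreq(N, dx)   # angular wavevector grid
    p = np.abs(np.fft.fft(psi)) ** 2
    p /= p.sum()                               # normalized momentum distribution
    k_mean = np.sum(k * p)
    return np.sqrt(np.sum((k - k_mean) ** 2 * p))

for s2 in (0.001, 0.01):
    print(s2, momentum_spread(s2), 1.0 / np.sqrt(2.0 * s2))
```

The computed spreads track the analytic $1/\sqrt{2\sigma^2}$ closely, confirming that the broader ($\sigma^2 = 0.01$) packet used for transport carries the narrower momentum distribution.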
A Gaussian absorbing potential was situated between the ATR and the scattering region, and defined over a distance $L_\text{Gau} = 0.10$ with $\alpha^2 = 1.0 \times 10^{-4}$. The magnitude of the absorber was numerically optimized to yield $V_\text{Gau} = 520.1$, which attenuates an incident $\mathcal{N} = 8$ wavepacket to unit norm. The use of a large incident packet permits a large Gaussian filter, which reduces the penetration of reflected wavepackets into the ATR region. Wavepackets were attenuated at the outgoing boundary of the scattering region using a singular absorbing potential with $c = 2.0$, $k_\text{min} = 250$, and a width $L_\text{att} = 0.250$. The scattering region was occupied by a rectangular potential barrier $L_\text{barrier} = 0.095$ units in extent and evaluated at a variety of potential strengths $V_\text{barrier}$ to determine conductance characteristics. All components were enclosed in an infinite square well measuring 2.0 units in spatial extent [Fig. \ref{bounce_geom}]. The spatial integration step was taken to be $\Delta x = 2.0 \times 10^{-4}$ in this case. \par Calculations performed with $V_\text{barrier} = 0$ reveal that the norms of the generating and transmitted packets are well maintained, with a deviation of $\sim$ 10\% observed after generation of twelve packets, corresponding to over $5.0 \times 10^6$ integration timesteps [Fig. \ref{stability}]. This is comparable to the divergence expected for the simple first--order integration scheme employed herein. Accordingly, the only factor that varies substantially between subsequent iterations aside from this systematic error is the spread of the wavepacket. To ascertain the current flow through a scattering region, the probability current was averaged over two small spatial windows, measuring 0.04 units in width, placed on either side of the rectangular barrier and a bias potential $V_\text{bias}$ was added to the incident packet $E = E_{k_0} + V_\text{bias}$. 
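The window-averaged probability current used to extract $J_I$ and $J_T$ is $J = \text{Im}[\psi^* \partial_x \psi]$ with $\hbar = m = 1$. A minimal sketch (assuming NumPy) for a plane wave, where $J = k$ exactly, illustrates the central--difference evaluation and its small discretization bias:

```python
import numpy as np

# Window-averaged probability current J = Im[psi* d(psi)/dx] (hbar = m = 1),
# illustrated for a plane wave (J = k exactly); dx = 2e-4 and the 0.04-unit
# probe window mirror the transport geometry described in the text.
dx, k = 2.0e-4, 500.0
x = np.arange(10_000) * dx
psi = np.exp(1j * k * x)

# central-difference current on interior lattice points
J = np.imag(np.conj(psi[1:-1]) * (psi[2:] - psi[:-2]) / (2.0 * dx))
window = slice(4000, 4200)        # 200 points * 2e-4 = 0.04 units
J_avg = J[window].mean()
print(J_avg / k)                  # ~0.998: central-difference bias sin(k dx)/(k dx)
```

The relative error of the window average is $\sin(k\,\Delta x)/(k\,\Delta x) - 1 \approx -(k\,\Delta x)^2/6$, well below the deviations discussed for the conductance determinations.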
This configuration conceptually resembles a conventional four--probe conductivity measurement. The conductance $\mathcal{G}$ of the scattering region at a given bias $eV$ may be obtained as a function of the transmission coefficient $T$: \begin{eqnarray} \mathcal{G}(eV) = \frac{e^2}{\pi \hbar} T(eV) = \frac{e^2}{\pi \hbar} \frac{\vert J_T(eV) \vert}{\vert J_I (eV) \vert} \end{eqnarray} \noindent where $J_I$ is the incident wavepacket current and $J_T$ is the transmitted wavepacket current \cite{Imry1999}. The spread of the wavepacket in the ATR region has a demonstrable effect on successive transmitted packets as measured at zero applied bias, which manifests through a decrease in the peak current density [Fig. \ref{successive}]. Nonetheless, the conductance values remain remarkably stable even as the barrier strength is increased, with the first four transmission events affording nearly identical conductance determinations (Table \ref{conduct}). When including the full set of nine transmission events the calculated conductance varies by only 6.4\% of the value calculated from the first event. As a point of reference, the conductance was analytically determined using the transmission coefficient for a plane wave through a square barrier \begin{equation} T = \left( 1 + \frac{V_\text{barrier}^2 \sin^2 (k' L_\text{barrier})}{4E(E-V_\text{barrier})}\right)^{-1} \end{equation} \noindent in conjunction with Eq. (\ref{wptranscoeff}). In this context, $V_\text{barrier}$ is the barrier height, $L_\text{barrier}$ is the barrier width, and $k' = \sqrt{2(E-V_\text{barrier})}$ is the wavevector within the barrier, continued analytically for $E < V_\text{barrier}$. The analytically--determined values agree closely with those obtained numerically for low barriers, with a slight departure from analytical results in the high--barrier case. This discrepancy likely arises from deviation of the generated packet shape from a proper Gaussian, an effect made more apparent by reflection from a stronger potential. 
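The plane--wave reference values follow from the closed form directly. The sketch below (assuming NumPy) uses the textbook expression with the wavevector inside the barrier, $k' = \sqrt{2(E - V)}$, which continues analytically to the tunneling regime for $E < V$; the conductance of the single channel is then $(e^2/\pi\hbar)\,T$:

```python
import numpy as np

# Plane-wave transmission through a rectangular barrier (hbar = m = 1).
# The sine's argument involves the wavevector inside the barrier,
# k' = sqrt(2(E - V)); complex arithmetic handles E < V (tunneling),
# where sin^2 -> -sinh^2 and the ratio remains real and positive.
def barrier_T(E, V, L):
    kp = np.sqrt(complex(2.0 * (E - V)))          # barrier wavevector
    s2 = (np.sin(kp * L) ** 2).real               # real for either sign of E - V
    return 1.0 / (1.0 + V ** 2 * s2 / (4.0 * E * (E - V)))

# conductance in units of e^2 / (pi hbar) is just T itself
print(barrier_T(1.5, 1.0, np.pi))   # k'L = pi: transmission resonance, T = 1
print(barrier_T(0.5, 1.0, 2.0))     # tunneling regime, T < 1
```

Resonances occur whenever $k' L_\text{barrier} = n\pi$, and $T \rightarrow 1$ as $V \rightarrow 0$, providing simple checks of the implementation.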
In either case, the magnitude of this deviation never exceeds 10\% of the analytical value affording an accuracy beyond other numerical schemes for conductance determination (Table \ref{conduct}). \par The ATR packet generating scheme employed herein is essentially a response formalism, in which the reaction of a system to a probe packet is measured. Accordingly, there exists a nonzero current at zero applied bias, which comprises the reference state for such determinations. Physically, the zero--bias state in a material is associated with zero net current, and hence an isotropic movement of charge carriers in the system. The formalism herein corresponds to the short time limit, in which a single carrier has passed in a given direction but before an additional compensatory carrier may pass in the opposite direction. To demonstrate the scaling of transport with applied bias, it is more instructive to consider the relative conductance versus bias than the raw transmitted current. The relative conductance $\mathcal{G}_\text{Rel} (V_\text{bias})$ is defined as \begin{eqnarray} \mathcal{G}_\text{Rel}(V_\text{bias}) &=& \frac{\mathcal{G}(V_\text{bias}) - \mathcal{G}(0)}{\mathcal{G}_0 (V_\text{bias}) -\mathcal{G}_0 (0)} \\ &=& \left. \left(\frac{J_T(V_\text{bias})}{J_I(V_\text{bias})} - \frac{J_T(0)}{J_I(0)}\right) \middle/ \left(\frac{J_{0,T}(V_\text{bias})}{J_{0,I}(V_\text{bias})} -\frac{J_{0,T}(0)}{J_{0,I}(0)}\right) \right. \end{eqnarray} \noindent where a subscript of zero indicates the current or conductance calculated in the absence of a barrier. The normalization of the transmitted current by the incident current is required for comparative purposes between calculations with different barriers, as the presence of the barrier itself introduces a boundary condition which may alter the incident flux. 
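The relative conductance reduces to a ratio of current ratios; a direct transcription of the definition above (the argument names are illustrative, not taken from the simulation code):

```python
# Relative conductance from window-averaged currents; a subscript 0 denotes
# the barrier-free (free-propagation) reference calculation. The numerical
# arguments below are synthetic values chosen purely to exercise the formula.
def relative_conductance(JT_V, JI_V, JT_0, JI_0,
                         J0T_V, J0I_V, J0T_0, J0I_0):
    with_barrier = JT_V / JI_V - JT_0 / JI_0
    no_barrier = J0T_V / J0I_V - J0T_0 / J0I_0   # free-propagation reference
    return with_barrier / no_barrier

# synthetic check: transmission ratio rises by 0.1 under bias with the
# barrier and by 0.2 without it, giving a relative conductance of 1/2
print(round(relative_conductance(0.6, 1.0, 0.5, 1.0,
                                 0.9, 1.0, 0.7, 1.0), 12))   # 0.5
```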
Furthermore, as all determinations are taken with respect to a probe packet, the conductance must be measured relative to that observed in the absence of a barrier to provide a reference point for free propagation and accommodate variation in peak--to--peak current due to packet spread. The result of this analysis is in some sense analogous to the I--V curves typically presented in the context of experimental transport measurements. The scaling of the relative conductance $\mathcal{G}_\text{Rel}(V)$ exhibits the expected correlation with increasing bias and increasing barrier strength [Fig. \ref{ivc}]. Notably, the increase in barrier strength affords a greater slope for $d\mathcal{G}_\text{Rel}(V) / dV_\text{bias}$, consistent with the expected scaling for the transmission coefficient through an increasingly strong rectangular barrier. The formalism herein thus affords the conductance at both zero and finite bias with no additional computational cost. \section{Computational Limitations of Complex Potentials in Many--Body Systems} \par While evolution under the action of $V_{PT}$ mimics a multiparticle state, this does not embody all the requisite properties for a true many--body configuration. To see this, assume a simple system with a wavefunction given by the product ansatz $\ket{\Psi(t)} = \ket{\psi_0 (t)} \otimes \ket{\psi_1(t)}$, where $\ket{\Psi(t)} \in \mathcal{H}^{(2)} = \mathcal{H} \otimes \mathcal{H}$ is a two--particle Hilbert space. For now we ignore the effects of symmetrization, as this elementary form is sufficient for illustrative purposes. 
The full Hamiltonian for this system is \begin{equation} \hat{H} = \hat{H}_0 \otimes \mathbb{I} + \mathbb{I} \otimes \hat{H}_0 + \hat{V}_{A/G} \otimes \mathbb{I} + \hat{V}_{2p} \end{equation} \noindent where $\hat{H}_0$ is the Hamiltonian for an isolated particle, $\hat{V}_{A/G}$ is the complex potential term acting only on $\ket{\psi_0 (t)}$, and $\hat{V}_{2p}$ is a two--particle interaction defined by \begin{eqnarray} \label{twobody} \hat{V}_{2p} &=& \sum_{ij} (\ket{\psi_i} \otimes \ket{\psi_j}) V_{2p}^{ij} (\bra{\psi_i} \otimes \bra{\psi_j}) \\ &=& \sum_{ij} (\ket{\psi_i} \otimes \ket{\psi_j}) U_{2p}^{ij} (\delta_{ij} - 1) (\bra{\psi_i} \otimes \bra{\psi_j}) \end{eqnarray} \noindent which approximates the Hartree--like term in an electronic structure method. Assume once again that $\hat{V}_\text{A/G}$ may be turned on or off arbitrarily, or asymptotically localized to a region of space, so that the interaction will apply to $\ket{\psi_0 (t)}$ only when it traverses this region. The latter scenario is representative of the complex absorbing and generating potentials utilized herein. To further simplify discussion, take $\hat{V}_\text{A/G}$ to be entirely imaginary, as the real part of this potential may be absorbed into $\hat{H}_0$ as a single--particle potential term. The time evolution operator decomposes in this formalism as \begin{equation} \hat{U} = \hat{U}_0 (\hat{U}_{A/G} \otimes \mathbb{I}) \end{equation} \noindent where $\hat{U}_0(t_2,t_1) = \exp[-i(\hat{H}_0 \otimes \mathbb{I} + \mathbb{I} \otimes \hat{H}_0 + \hat{V}_{2p}) (t_2 - t_1) /\hbar]$ is the evolution in the absence of the complex potential and $\hat{U}_{A/G} = \exp[-i\hat{V}_{\text{A/G}} (t_2 - t_1) / \hbar]$ is the nonunitary evolution afforded by the Hermiticity--breaking term. \par Assume that $\ket{\Psi(t)}$ evolves in the absence of $\hat{V}_\text{A/G}$ up to a time $t_1$ after which $\ket{\psi_0(t_1)}$ enters the interaction region. 
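The tensor--product bookkeeping can be made concrete numerically (a sketch, assuming NumPy, and assuming the standard additive form $\hat{H}_0 \otimes \mathbb{I} + \mathbb{I} \otimes \hat{H}_0$ for the noninteracting part): the two--particle spectrum is the set of pairwise sums of single--particle energies, while the $(\delta_{ij} - 1)$ factor in Eq. (\ref{twobody}) removes all diagonal couplings, so a state never interacts with itself through $\hat{V}_{2p}$:

```python
import numpy as np

# Noninteracting two-particle Hamiltonian H0 (x) I + I (x) H0: its spectrum
# is the set of pairwise sums of single-particle energies (illustrative values).
e = np.array([0.0, 1.0, 4.0])
H0 = np.diag(e)
I3 = np.eye(3)
H2 = np.kron(H0, I3) + np.kron(I3, H0)

pair_sums = np.sort((e[:, None] + e[None, :]).ravel())
print(np.allclose(np.sort(np.linalg.eigvalsh(H2)), pair_sums))   # True

# Two-body term of Eq. (twobody): the (delta_ij - 1) factor kills the
# diagonal, so <ii|V2p|ii> = 0 for every state i (U is an arbitrary coupling).
U = np.full((3, 3), 0.7)
V2p = sum(U[i, j] * (float(i == j) - 1.0)
          * np.outer(np.kron(I3[i], I3[j]), np.kron(I3[i], I3[j]))
          for i in range(3) for j in range(3))
ket00 = np.kron(I3[0], I3[0])
print(ket00 @ V2p @ ket00)                                       # 0.0
```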
Furthermore, let the interaction, $\hat{V}_\text{A/G} \ket{\psi_0(t_1)} = i\Gamma \ket{\psi_0(t_1)}$, end at a later time $t_2$. During this propagation, the wavefunction is carried to the final state $\ket{\Psi(t_2)} = \ket{\psi'_0 (t_2)} \otimes \ket{\psi_1(t_2)}$, where $\ket{\psi'_0 (t_2)} = \exp[\Gamma (t_2 - t_1) / \hbar] \ket{\psi_0 (t_2)}$, so that $\ket{\psi_0 (t_2)}$ corresponds to time evolution under the Hermitian part of the Hamiltonian. Defining $\alpha = \exp[2\Gamma (t_2 - t_1) / \hbar]$, it is clear that $0 \leq \alpha \leq 1$ for an attenuating potential $\hat{V}_A$ and $1 \leq \alpha < \infty$ for a generating potential $\hat{V}_G$. Focusing on the latter case, it is desirable to choose $\hat{V}_G$ such that $\alpha \in \mathbb{N}$, thereby ensuring that norm generation occurs in units of a single particle. If the norm of $\ket{\psi'_0 (t_2)}$ is enhanced to correspond to a two--particle state ($\alpha = 2$), then the interaction with $\ket{\psi_1 (t_2)}$ is scaled accordingly as \begin{equation} \braket{\Psi(t_2) \vert \hat{V}_{2p} \vert \Psi(t_2)} = 2 U_{2p}^{01} \end{equation} \noindent which corresponds to the doubling of the potential term due to the interaction of a single particle in $\ket{\psi_1 (t_2)}$ with the two ``particles'' in $\ket{\psi'_0 (t_2)}$. This result is not physically meaningful, as the system is now analogous to a three--particle problem in which two of the particles interact with the third particle, but not with each other. The origin of this fault arises from the nature of the generating potential itself, which superposes additional norm onto an existing wavefunction instead of adding an additional state vector as required for a true multiparticle configuration. 
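The linear scaling of the two--body expectation with $\alpha$ can be verified in a two--level toy model (a sketch, assuming NumPy; the coupling $U_{01}$ is an illustrative value and only the magnitude of the doubling is demonstrated):

```python
import numpy as np

# Toy illustration that enhancing the norm of |psi0> by alpha = e^{2 Gamma t}
# scales the two-body expectation linearly: a norm-doubled state (alpha = 2)
# doubles its interaction with |psi1>, mimicking two particles that interact
# with a third but not with each other.
d = 2
basis = np.eye(d)
U01 = 0.7                                    # illustrative coupling strength
# only the 0-1 pair couples; sign follows the (delta_ij - 1) convention
pair = np.kron(basis[0], basis[1])
V2p = -U01 * np.outer(pair, pair)

psi1 = basis[1]
vals = []
for alpha in (1.0, 2.0):
    psi0p = np.sqrt(alpha) * basis[0]        # |psi0'> carrying norm alpha
    Psi = np.kron(psi0p, psi1)
    vals.append(Psi @ V2p @ Psi)
print(round(vals[1] / vals[0], 12))          # 2.0: the doubled potential term
```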
Since the net effect of the complex potential is only to elongate a state vector and not to create a new state, the presence of a nonzero coupling term for the new particle and its parent can only be achieved by artificially introducing a self--interaction term in the Hamiltonian. For any conventional two--body potential, however, the vanishing diagonal terms in Eq. (\ref{twobody}) will prevent these states from acting as a true multiparticle configuration. These observations collectively impose strong limitations on the computational scope of any calculation that employs complex generating potentials. Specifically, the use of generating potentials excludes any wavefunction based method, or any method that includes Hartree--Fock exchange, from consideration in this context. \par These limitations may be circumvented through the use of theories that are formulated in terms of the norm of constituent states, such as DFT. The DFT Hamiltonian is defined solely in terms of the single--particle density $\rho(\vec{x})$ such that \begin{equation} \rho(\vec{x}) = N \int d^3 \vec{x}_2 \dots d^3 \vec{x}_N \vert \Psi_0(\vec{x}, \vec{x}_2, \dots, \vec{x}_N) \vert^2 \end{equation} \noindent where $\Psi_0(\vec{x}, \vec{x}_2, \dots, \vec{x}_N)$ is the $N$--particle ground--state wavefunction characterizing the system. In this scheme, terms that are pathological for generating potential--modified wavefunctions, such as the Hartree interaction \begin{equation} V_\text{Hartree}[\rho(\vec{x})] = \frac{e^2}{2} \int\int d^3 \vec{x} \, d^3 \vec{x}'\, \frac{\rho(\vec{x})\rho(\vec{x}')}{\vert\vert \vec{x} - \vec{x}' \vert\vert} \end{equation} \noindent cease to be problematic as there is no explicit dependence on single--particle state vectors. The role of an absorbing or generating potential is then to modulate $\rho(\vec{x})$ in a manner that adds density to or subtracts density from the system. Note that these considerations apply only to pure DFT. 
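The Hartree term is evaluated from the density alone, so it is insensitive to how that density was produced. A sketch on a one--dimensional grid (assuming NumPy; the softened kernel with parameter $a = 0.1$ is an illustrative regularization of the $1/\vert x - x' \vert$ singularity in one dimension):

```python
import numpy as np

# Hartree energy evaluated directly from a 1D density on a grid. The
# softening parameter a = 0.1 is an illustrative assumption regularizing
# the 1/|x - x'| kernel; units with e = 1 are used throughout.
x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]
rho = np.exp(-x ** 2) / np.sqrt(np.pi)          # unit-norm model density
kernel = 1.0 / np.sqrt((x[:, None] - x[None, :]) ** 2 + 0.1 ** 2)

def hartree(density):
    # V_Hartree = (1/2) \int\int rho(x) rho(x') / |x - x'| dx dx'
    return 0.5 * dx * dx * density @ kernel @ density

# Bilinearity in rho: modulating the density by alpha scales the energy
# by alpha**2, independent of the kernel details.
print(hartree(rho) > 0.0, round(hartree(2.0 * rho) / hartree(rho), 10))
```

The $\alpha^2$ scaling under $\rho \rightarrow \alpha\rho$ follows directly from the bilinearity of the functional, which is the behavior an absorbing or generating potential modulating the density must be reconciled with.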
Hybrid methods, which incorporate a degree of exact exchange from Hartree--Fock theory, will suffer from the same failures as full wavefunction methods. \par The results for the propagation of a single wavepacket considered in this manuscript are directly applicable to DFT by construction. In the single--particle limit, the particle density from DFT reduces to $\rho(x) = \psi^*(x) \psi(x)$, and thus the rescaling induced by the absorbing or generating potential transforms the density in a manner identical to the wavefunction norm discussed herein. Furthermore, the choice of Gaussian wavepackets underscores the correspondence with DFT, in which Gaussian functions are a popular functional form in localized and hybrid localized/delocalized basis set schemes. Thus, the single--packet simulations are analogous to a valence electron traversing the system boundaries in the limit of vanishing coupling to the other electrons and ions. \section{Conclusions} \par The computational methods developed herein outline a path through which $\mathcal{PT}$--symmetric potentials may be employed to afford open boundary conditions in the context of RT--TDDFT transport calculations. Existing methods have utilized absorbing boundary conditions to attenuate wavefunction norm at simulation boundaries; however, this does not permit the complementary positive probability density flux required for a physically realistic system. A judicious assembly of ATR regions permits construction of a wavepacket pulse generator that can inject a train of probe wavepackets into the scattering region. By measuring the ratio of outgoing to incoming current, the transmission coefficient and hence conductance are calculated as those of a single conducting channel \cite{Imry1999}. As an ancillary benefit, the zero--bias and finite--bias conductance may be readily determined in the presence of time dependent processes including, but not limited to, the oscillatory electric fields associated with photoexcitation. 
This transport formalism is demonstrated to exhibit excellent agreement with analytical results, paralleling the recent success using similar $\mathcal{PT}$--symmetric methods to describe open quantum dots \cite{Berggren2010, Wahlstrand2014}, dipolar Bose--Einstein condensates in open double--well potentials \cite{Fortanier2014}, and the topologically trivial and nontrivial phases of the Su--Schrieffer--Heeger model with open chain boundaries \cite{Zhu2014}. Because a complex generating potential rescales the norm of an existing state rather than introducing a new state vector, these methods cannot generate a true many--body wavefunction, and hence this formalism is not applicable to Hartree--Fock or explicit multireference methods. Nonetheless, these limitations do not apply to modulation of the probability density, so that $\mathcal{PT}$--symmetric potential terms may be employed without restriction in any DFT--based formalism. \par This method is numerically robust, exhibiting stable transmission characteristics for up to nine transfer events in a simple model system. This exceeds the timescale accessible through prior real--time propagation calculations by several orders of magnitude; in those calculations, only a fraction of a carrier may be transferred before the simulation becomes unstable due to carrier depletion \cite{Varga2011}. Furthermore, the temporal upper limit for RT--TDDFT calculations in actual materials is limited by the highest phonon frequency of the material. On this timescale, the lattice undergoes spatial translation, electron--phonon coupling terms become nontrivial, and the adiabatic approximation ceases to hold. This corresponds to only a few carrier transfer events. Thus, the framework herein affords boundary conditions for RT--TDDFT throughout its range of physical applicability. \par Nonequilibrium Green's function methods currently comprise the mainstay for explicit quantum transport calculations, although time--dependent phenomena are inaccessible in this context owing to the static nature of the approach. 
Conductances calculated in this scheme likewise deviate from experimentally determined values by one to two orders of magnitude, limiting this method to use as a qualitative tool that indicates physical mechanism through scaling behavior. RT--TDDFT ameliorates the restrictions imposed by the quasi--static approximation, while affording conductance values within 10\% of analytical results for a model system. Thus the conjunction of RT--TDDFT with ATR potentials is a firm step toward the development of broadly applicable and quantitatively accurate electronic structure methods for quantum transport in real materials. \section{Acknowledgements} \par This research was supported by the start--up grant and University Facilitating Fund of George Washington University. Computational resources utilized in this research were provided by the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory under Department of Energy Contract DE--AC02--06CH11357 and by the Extreme Science and Engineering Discovery Environment (XSEDE) at the Texas Advanced Computing Center under National Science Foundation contract TG--CHE130008. \section{Appendix: Wavepacket Propagation} \par The behavior of a wavepacket in the presence of a complex potential is readily determined through a real--time propagation scheme. 
Writing the packet wavefunction and complex potentials in terms of their real and imaginary parts, $\psi(x,t) = \text{Re}[\psi(x,t)] + i\text{Im}[\psi(x,t)]$ and $V(x) = \text{Re}[V(x)] + i\text{Im}[V(x)]$ \cite{Visscher1991}, respectively, and substituting these into the single--particle Schr\"{o}dinger equation (with $\hbar = m = 1$) \begin{equation} i \frac{\partial \psi(x,t)}{\partial t} = -\frac{1}{2} \frac{\partial^2 \psi(x,t)}{\partial x^2} + \hat{V}(x) \psi(x,t), \end{equation} \noindent a coupled pair of equations for wavepacket evolution is obtained after equating real and imaginary parts: \begin{eqnarray} \frac{\partial}{\partial t}\left[\text{Im}[\psi(x,t)]\right] &=& \frac{1}{2} \frac{\partial^2}{\partial x^2} \left[\text{Re}[\psi(x,t)]\right] + (\text{Im}[V(x)])(\text{Im}[\psi(x,t)]) \\ &&- (\text{Re}[V(x)])(\text{Re}[\psi(x,t)])\\ \frac{\partial}{\partial t}\left[\text{Re}[\psi(x,t)]\right] &=& -\frac{1}{2} \frac{\partial^2}{\partial x^2} \left[\text{Im}[\psi(x,t)]\right] + (\text{Im}[V(x)])(\text{Re}[\psi(x,t)]) \\ &&+ (\text{Re}[V(x)])(\text{Im}[\psi(x,t)]). \end{eqnarray} \noindent For the purposes of numerical evaluation, the time derivative is approximated by a forward finite difference, consistent with the forward propagation scheme noted above, while the spatial derivative is evaluated in a centered--difference approximation. Within this scheme, the time derivative of the wavefunction is given by \begin{equation} \frac{\partial}{\partial t} \psi(x,t) \approx \frac{\psi(x,t + \Delta t) - \psi(x,t)}{\Delta t} \end{equation} \noindent while the second spatial derivative is \begin{equation} \frac{\partial^2}{\partial x^2} \psi(x,t) \approx \frac{\psi(x + \Delta x,t) - 2\psi(x,t) + \psi(x-\Delta x, t)}{(\Delta x)^2}. 
\end{equation} \noindent Given these approximations, the imaginary propagation equation becomes \begin{multline} [\text{Im} \, \psi(x,t+\Delta t)] = [\text{Im} \, \psi(x,t)] + s ([\text{Re} \, \psi(x + \Delta x, t)] - 2 [\text{Re} \, \psi(x,t)] +\\ [\text{Re} \, \psi(x - \Delta x, t)]) + (\Delta t) ([\text{Im} \, V(x)][\text{Im}\, \psi(x,t)] - [\text{Re} \, V(x)][\text{Re}\, \psi(x,t)]) \end{multline} \noindent where $s = \Delta t / [2(\Delta x)^2]$ has been introduced as the parameter controlling integration. The real term is evaluated similarly \begin{multline} [\text{Re} \, \psi(x,t+ \Delta t)] = [\text{Re} \, \psi(x,t)] - s ([\text{Im} \, \psi(x + \Delta x, t )] -2 [\text{Im} \, \psi(x,t)] + \\ [\text{Im} \, \psi(x - \Delta x, t )]) + (\Delta t)([\text{Re} \, V(x)] [\text{Im} \, \psi(x,t)] \\ + [\text{Im}\, V(x)] [\text{Re} \, \psi(x,t)]). \end{multline} \noindent \clearpage \begin{figure} \centering \includegraphics[scale=1.0]{./two_absorbers.eps} \caption{Contexts for the use of complex absorbing potentials in an infinite square well. The potential $V_{A,\text{edge}}(x)$ is employed to absorb wavepackets impinging on the simulation cell boundary to mimic the effect of a particle leaving an open system. Conversely, $V_{A,\text{int}}(x)$ absorbs wavefunction norm incident from either side of the potential, with a net effect of ``disconnecting'' two regions of the simulation.} \label{absorb_schematic} \end{figure} \clearpage \begin{figure} \centering \includegraphics[scale=1.0]{./packet_edge.eps} \caption{Wavepacket attenuation for two distinct classes of complex potentials, as demonstrated through numerical wavepacket propagation. (A) Propagation of a Gaussian wavepacket with $k_0 = 500$, $\sigma^2 = 0.001$, and $x_0 = 0.5$ (blue) into a potential $V_{A,\text{edge}}(x)$ (orange) with singularity at the cell boundary. The potential switches on at $x = 0.75$ with a width $L = 0.25$ and a strength of $E_\text{min} = 4.0$. 
Each packet envelope corresponds to a configuration advanced by $\Delta t = 2.5 \times 10^4$ units. (B) Propagation of a right--moving Gaussian wavepacket $k_{0,R} = 500$ and $x_{0,R} = 0.2$ (blue) alongside a left--moving Gaussian wavepacket $k_{0,L} = -k_{0,R}$ and $x_{0,L} = 0.8$ (yellow) into a Gaussian absorbing potential $V_{A,\text{int}}(x)$ (orange). The potential is applied for all $x \in [0.4,0.6]$ with a width $\alpha^2 = 1.0 \times 10^{-4}$ and strength $V_0 = 5.0 \times 10^3$. Each packet envelope corresponds to a configuration advanced by $\Delta t = 2.5 \times 10^4$ units, with an additional configuration shown at $t = 1.35 \times 10^5$ units for the right--moving packet. Packets are propagated using default propagation parameters. } \label{absorb_sim} \end{figure} \clearpage \begin{figure} \centering \includegraphics[scale=1.0]{./packet_growth.eps} \caption{Enhancement of packet norm by a complex generating potential of the form $\hat{V}_{G,\text{Gau}} = iV_0 e^{-(x-x_0)^2 / 2 \alpha^2}$. The incident packet has a wavevector $k_0 = 500$ and width $\sigma^2 = 0.001$ with the envelope initially centered at $x_0 = 0.2$, while the potential parameters are $V_0 = 250$ and $\alpha^2 = 1.0 \times 10^{-4}$, with a center at $x_0 = 0.5$. In the course of propagation, the norm of the packet increases so that $\braket{\psi(x,t_f) \vert \psi(x,t_f)} = 2.72$. Wavepackets are plotted at time steps ranging from $t_i = 0$ to $t_f = 2.1 \times 10^5$ in units of $\Delta t = 3.0 \times 10^4$ using default parameters. } \label{gaussgrow} \end{figure} \clearpage \begin{figure} \centering \includegraphics[scale=1.0]{./packet_saturate.eps} \caption{Interaction of a wavepacket with a complex potential $\hat{V}_{ATR}(x)$ exhibiting anisotropic transmission resonances on a scale broader than the incident wavepacket. Note that the reflected, generated packet (purple) is enhanced in width. 
The incident packet (yellow) has a wavevector $k_0 = 500$ and width $\sigma^2 = 0.001$ with the envelope initially centered at $x_0 = 0.2$, while the potential strength is $V_0 = 6500$, the unit cell spacing is $\Lambda = 6.28 \times 10^{-3} = \pi / k$, and the total width is $L = 20 \Lambda$ with a center at $x_0 = 0.5$. Wavepackets are plotted at time steps of $t_i = 0$, $t = 1.5 \times 10^5$, and $t = 2.5 \times 10^5$. Wavepacket propagation is performed using default parameters. } \label{packet_saturate} \end{figure} \clearpage \begin{figure} \centering \includegraphics[scale=1.0]{./edgegen_geom.eps} \caption{Cross--sectional geometry for wavepacket propagation with boundary wavepacket generation. The interaction region with scattering potential $\hat{V}_\text{scatter}$ is situated between an absorbing potential $\hat{V}_{A,\text{edge}}$ and a generating potential $\hat{V}_{ATR}$. The edge absorbing potential $\hat{V}_{A,\text{edge}}$ completely attenuates any wavepacket that enters this region, while the $\mathcal{PT}$--symmetric ATR potential $\hat{V}_{ATR}$ has a generating surface oriented toward the scattering region. Any wavepacket that crosses the ATR edge causes a new counter--propagating packet to be reflected, while itself passing through the potential and reflecting off the wall of the infinite square well.} \label{edgegen_geom} \end{figure} \clearpage \begin{figure} \centering \includegraphics[scale=1.0]{./bounce_geom.eps} \caption{Cross--sectional geometry for wavepacket propagation from an ATR pulse train generator. The interaction region with scattering potential $\hat{V}_\text{scatter}$ is situated between an absorbing potential $\hat{V}_{A,\text{edge}}$ and the pulse generator comprising an ATR potential $\hat{V}_{ATR}$, a Gaussian absorbing potential $\hat{V}_{A,1}$ and the seed wavepacket $\psi_G (x,t)$. 
The reflecting surface of the ATR potential faces $\psi_G(x,t)$, ensuring a packet will remain in the generator while affording a pulse stream toward the interaction region. } \label{bounce_geom} \end{figure} \clearpage \begin{figure} \centering \includegraphics[scale=1.0]{./coefficients.eps} \caption{Analytical (solid lines) and simulated (points) transmission $T$, standard reflection $R_\text{left}$ and enhanced reflection $R_\text{right}$ coefficients for passage of a wavepacket ($k_0 = 500$, $\sigma^2 = 0.001$) through an ATR potential region ($L_\text{ATR} = 20 \Lambda$) as a function of the potential strength $V_0$.} \label{coefficients} \end{figure} \clearpage \begin{figure} \centering \includegraphics[scale=1.0]{./enhanced_vs_k.eps} \caption{Analytical (solid lines) and simulated (points) scaling of the enhanced reflection coefficient $R_\text{right}$ as a function of incident wavevector $k_0$ for a Gaussian wavepacket ($\sigma^2 = 0.01$) impinging on an ATR potential ($V_0 = 5500$, $L_\text{ATR} = 20 \Lambda$). 
} \label{enhanced_vs_k} \end{figure} \clearpage \begin{figure} \centering \includegraphics[scale=1.0]{./r_vs_length.eps} \caption{Analytical (solid lines) and simulated (points) transmission $T$, standard reflection $R_\text{left}$ and enhanced reflection $R_\text{right}$ coefficients for passage of a wavepacket ($k_0 = 500$, $\sigma^2 = 0.001$) through an ATR potential region of variable length $L = N\Lambda$ where $\Lambda = \pi / 500$.} \label{length_dependence} \end{figure} \clearpage \begin{figure} \centering \includegraphics[scale=1.0]{./sgsq.eps} \caption{Calculated transmission $T$, standard reflection $R_\text{left}$ and enhanced reflection $R_\text{right}$ coefficients for passage of a Gaussian wavepacket ($k_0 = 500$) through an ATR potential region of length $L = 20\Lambda$ and $V_0 = 5500$ as a function of the wavepacket extent $\sigma^2$.} \label{sgsq_dependence} \end{figure} \clearpage \begin{figure} \centering \includegraphics[scale=1.0]{./stability.eps} \caption{Evolution of wavefunction norm for generated (reflected) and transmitted packets arising from the ATR packet generator during successive enhanced reflection events.} \label{stability} \end{figure} \clearpage \begin{figure} \centering \includegraphics[scale=1.0]{./successive.eps} \caption{Incident $J_I$ and transmitted $J_T$ current arising from an ATR potential and incident on a rectangular barrier with $V_\text{barrier} = 2.5 \times 10^4$ and $L_\text{barrier} = 0.1$ units. The initial generating packet is a Gaussian with wavevector $k_0 = 500$ and $\sigma^2 = 0.01$ characterizing the packet width.} \label{successive} \end{figure} \clearpage \begin{figure} \centering \includegraphics[scale=1.0]{./ivc.eps} \caption{Relative conductance $(\mathcal{G}(V) - \mathcal{G}(0)) / (\mathcal{G}_0(V) - \mathcal{G}_0(0))$ calculated as a function of bias $E = E_{k_0} + V_\text{bias}$ for several rectangular barrier strengths $V_\text{barrier}$. 
The initial generating packet is a Gaussian with $k_0 = 500$ as the initial wavevector and $\sigma^2 = 0.01$ characterizing the packet width, while the scattering potential has an extent of $L_\text{barrier} = 0.1$ units.} \label{ivc} \end{figure} \clearpage \begin{table}[h] \begin{center} {\small \begin{tabular}{|c||c|c|c|c|} \hline $V_\text{barrier}$ & $\mathcal{G}_\text{Ana}$ & $\mathcal{G}_\text{P1}$ & $\mathcal{G}_\text{P2--4}$ & $\mathcal{G}_\text{P1--9}$\\ \hline \hline $2.50 \times 10^5$ & 0.314 & 0.315 & 0.315 & 0.315 \\ $5.00 \times 10^5$ & 0.306 & 0.306 & 0.307 & 0.309 \\ $7.50 \times 10^5$ & 0.262 & 0.281 & 0.282 & 0.288 \\ $10.0 \times 10^5$ & 0.232 & 0.219 & 0.212 & 0.233 \\ \hline \end{tabular} } \end{center} \caption{Comparison of analytically determined conductances $\mathcal{G}_\text{Ana}$ with simulation-derived conductances for the first transmission event $\mathcal{G}_\text{P1}$, the mean of the subsequent three events $\mathcal{G}_\text{P2--4}$, and the collective mean of all simulated events $\mathcal{G}_\text{P1--9}$ for transmission through a barrier with $L_\text{barrier} = 0.1$ units at several barrier strengths $V_\text{barrier}$.} \label{conduct} \end{table} \clearpage
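The analytic coefficients in Table \ref{conduct} derive from plane-wave scattering off a rectangular barrier. As an illustrative sketch only (assuming units with $\hbar = m = 1$; the function and parameter values below are generic, not the simulation code or parameters of this work), the standard rectangular-barrier transmission probability can be evaluated as:

```python
import math

def transmission(E, V0, L):
    """Plane-wave transmission probability through a rectangular barrier
    of height V0 and width L, in units with hbar = m = 1."""
    if E <= 0.0:
        return 0.0
    if E < V0:  # tunnelling regime: evanescent wave inside the barrier
        kappa = math.sqrt(2.0 * (V0 - E))
        s = math.sinh(kappa * L)
        return 1.0 / (1.0 + V0 ** 2 * s ** 2 / (4.0 * E * (V0 - E)))
    if E == V0:  # limiting case E -> V0
        return 1.0 / (1.0 + V0 * L ** 2 / 2.0)
    k2 = math.sqrt(2.0 * (E - V0))  # propagating regime above the barrier
    s = math.sin(k2 * L)
    return 1.0 / (1.0 + V0 ** 2 * s ** 2 / (4.0 * E * (E - V0)))
```

Weighting such plane-wave coefficients over a packet's momentum distribution is one standard route to packet-level transmission estimates; note the above-barrier formula exhibits resonances ($T = 1$ whenever $k_2 L = n\pi$).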
\section{Introduction} \noindent In 1977, John Cahn gave simple illuminating arguments to describe the interaction between solids and liquids. His model was based on a generalized van der Waals theory of fluids treated as attracting hard spheres \cite{Cahn}. It entailed assigning an energy to the solid surface that is a functional of the liquid density \emph{at the surface}. It was thoroughly examined in a review paper by de Gennes \cite{de Gennes}. Three hypotheses are implicit in Cahn's picture: \ \emph{i)} \ The liquid density is taken to be a smooth function of the distance from the solid surface; that surface is assumed to be flat on the scale of molecular sizes, and the correlation length is assumed to be greater than intermolecular distances; \ \emph{ii)} \ The forces between solid and liquid are of short range with respect to intermolecular distances; \ \emph{iii)} \ The liquid is considered in the framework of a mean-field theory. This means, in particular, that the free energy of the liquid is a classical so-called \emph{gradient square functional}.\newline The point of view that the liquid in an interfacial region may be treated as a bulk phase, with a local free-energy density and an additional contribution arising from the nonuniformity that may be approximated by a gradient expansion truncated at second order, is most likely to be successful, and perhaps even quantitatively accurate, near the liquid critical point \cite{Rowlinson}. We use this approximation, which enables us to compute the liquid density profiles analytically. Nevertheless, we take surface effects and repulsive forces into account by adding density functionals at boundary surfaces.
In mean-field theory, the London potentials of the liquid-liquid and liquid-solid molecular interactions are \begin{equation*} \left\{ \begin{array}{c} \displaystyle\;\;\;\;\;\;\varphi _{ll}=-\frac{c_{ll}}{r^{6}}\;,\text{ \ when\ }r>\sigma _{l}\;\;\text{and }\;\ \varphi _{ll}=\infty \text{ \ when \ }r\leq \sigma _{l}\, ,\ \\ \displaystyle\;\;\;\;\;\;\varphi _{ls}=-\frac{c_{ls}}{r^{6}}\;,\text{ \ when\ }r>\delta \;\;\text{and }\;\ \varphi _{ls}=\infty \text{ \ when \ }r\leq \delta \;, \ \end{array} \right. \end{equation*} where $c_{ll}$ and $c_{ls}$ are two positive constants associated with Hamaker constants, $\sigma _{l}$ and $\sigma _{s}$ respectively denote the liquid and solid molecular diameters, and $\delta =\frac{1}{2}(\sigma _{l}+\sigma _{s})$ is the minimal distance between the centers of liquid and solid molecules \cite{Israelachvili}. We consider the interaction between a liquid and a solid surface that is flat at a molecular scale (but curved at the scale of several nanometers) by means of a continuous model. The density functional of the energy $E$ of the inhomogeneous liquid in a domain $D$ with differentiable boundary $S$ (external forces being neglected) is taken in the form \begin{equation*} E=E_{f}+E_{S}\qquad \mathrm{with}\qquad E_{f}=\int \int \int_{D}\rho \,\varepsilon \ dv ,\quad E_{S}=\int \int_{S}\phi \ ds. \label{E} \end{equation*} The first integral (energy of the volume) is associated with the square-gradient approximation when we introduce a specific free energy of the fluid at a given temperature, $\varepsilon =\varepsilon (\rho ,\beta )$, as a function of the liquid density $\rho $ and $\beta =(\mathrm{grad\,\rho )^{2}}$. The specific free energy $\varepsilon $ jointly characterizes the fluid's \emph{compressibility} and the \emph{molecular capillarity} of interfaces.
In accordance with the kinetic theory of gases \cite{Rocard}, the scalar $\lambda =2\rho \,\varepsilon _{,\beta }(\rho ,\beta )$ (where $\varepsilon _{,\beta }$ denotes the partial derivative with respect to $\beta $) is assumed to be constant at a given temperature, and \begin{equation*} \rho \,\varepsilon =\rho \,\alpha (\rho )+\frac{\lambda }{2}\,(\text{grad\ }\rho )^{2}, \end{equation*} where the term $({\lambda }/{2})\,(\mathrm{grad\ \rho )^{2}}$ is added to the volume free energy $\rho \,\alpha (\rho )$ of a compressible fluid. We denote the pressure term by $P(\rho )=\rho ^{2}\alpha ^{\,\prime }(\rho ). $ The second integral (energy of the surface) is such that the free energy per unit surface $\phi $ is \cite{de Gennes}, \begin{equation} \phi (\rho )=-\gamma _{1}\rho +\frac{1}{2}\,\gamma _{2}\,\rho ^{2}. \label{surface energy} \end{equation} Here $\rho $ denotes the limit value of the liquid density at the surface $S$. The constants $\gamma _{1}$, $\gamma _{2}$ and $\lambda $ are positive and given by the relations \cite{Gouin}, \begin{equation*} \gamma _{1}=\frac{\pi c_{ls}}{12\delta ^{2}m_{l}m_{s}}\;\rho _{sol},\quad \gamma _{2}=\frac{\pi c_{ll}}{12\delta ^{2}m_{l}^{2}},\quad \lambda =\frac{2\pi c_{ll}}{3\sigma _{l}\,m_{l}^{2}}, \label{coefficients} \end{equation*} where $m_{l}$ and $m_{s}$ respectively denote the masses of the liquid and solid molecules, and $\rho _{sol}$ is the solid density. In this paper, we first develop the boundary conditions for the general case of the interaction between a non-homogeneous liquid and a curved solid surface, with a surface energy due to intermolecular interactions and depending on the fluid volume deformation. Then, for a surface energy of the form (\ref{surface energy}), we study the stress vector distribution on a surface where bumps and hollows are periodically distributed. Finally, we estimate the stress effects for a silicon surface, with curvature radii in the several-nanometer range, in contact with water.
\section{Boundary conditions} The equation of equilibrium and boundary conditions are obtained by using the virtual power principle \cite{Germain,Maugin}. For example, virtual displacements ${\mathbf{\zeta}} =\delta \mathbf{x}$ are defined in a classical way by Serrin \cite{Serrin}, page 145, where $\mathbf{x}=\{x^{i}\},(i=1,2,3)$ denotes the Euler variables in a Galilean or fixed system of coordinates.\newline \begin{figure}[h] \begin{center} \includegraphics[width=7cm]{1.eps} \end{center} \caption{ Vector $\mathbf{n}$ is the unit normal vector to $S$ exterior to $D$; vector $\mathbf{t}$ is the unit tangent vector to $\Gamma $ with respect to $\mathbf{n}$; ${\mathbf{n}^{\prime }}={\mathbf{t}}\times {\mathbf{n}}$.} \label{fig1} \end{figure} A liquid (in drop form) occupying a domain $D$ of the physical space lies on a solid surface $S$ (the liquid is also partially bordered by a gas); the edge $\Gamma $ (or contact line) is the curve common to $S$ and the boundary of $D$ (see Fig. 1). All the surfaces and curves are oriented differential manifolds (\footnote{\textit{Transposed} mappings being denoted by $^{T}$, for any vectors ${\mathbf{a}},{\mathbf{b}}$, we write ${\mathbf{a}}^{T}\,\mathbf{b}$\ for their \textit{scalar product} (the line vector is multiplied by the column vector) and ${\mathbf{a}}{\mathbf{b}}^{T}$ or ${\mathbf{a}}\otimes {\mathbf{b}}$ for their \textit{tensor product} (the column vector is multiplied by the line vector). The image of vector ${\mathbf{a}}$ by a mapping $B$ is denoted by $B\,{\mathbf{a}}$. Notation ${\mathbf{b}}^{T}\,{B}\,$ means the covector ${\mathbf{c}}^{T}$ defined by the rule ${\mathbf{c}}^{T}=(B^{T}\,{\mathbf{b}})^{T}$. The divergence of a linear transformation $B $ is the covector $\mathrm{div}B$ such that, for any constant vector ${\mathbf{a}},$ $(\mathrm{div}\,B)\,{\mathbf{a}}=\mathrm{div}\,(B\ {\mathbf{a}})$.
If $f$ is a real function of $\mathbf{x}$, $\displaystyle{\partial f}/{\partial {\mathbf{x}}}$ is the linear form associated with the gradient of $f $ and $\displaystyle{\partial f}/{\partial x^{i}}=({\partial f}/{\partial {\mathbf{x}}})_{i}$\thinspace ; consequently, $\displaystyle({\partial f}/{\partial {\mathbf{x}}})^{T}=\mathrm{grad}\,f$. The identity tensor is denoted by $I$.}). \subsection{Variation of the density-functional of energy $E$} The density in the fluid has a limit value at the wall $S$. Then, on $S$, \begin{equation*} \delta \phi =\phi ^{\prime }(\rho )\,\delta \rho =-\rho \,\phi ^{\prime }(\rho )\,\mathrm{div}\,{\mathbf{\zeta }}, \end{equation*} where $\delta \rho +\rho \,\mathrm{div}\,{\mathbf{\zeta }}=0$ \cite{Serrin}. Let us denote \begin{equation*} G=-\rho \,\phi ^{\prime }(\rho )\,,\quad H=\phi (\rho )-\rho \,\phi ^{\prime }(\rho ). \end{equation*} The function $H$ is the Legendre transform of $\phi$ with respect to $\rho$. For any virtual displacement ${\mathbf{\zeta }}$ null on $\Gamma $, Rel. (\ref{A1}) of the Appendix yields, \begin{equation*} \int \int_{S}\delta \phi \ ds=\int \int_{S}G\ \mathrm{div}\,{\mathbf{\zeta }}\ ds\equiv \displaystyle\int \int_{S}\ \left\{ G\,\frac{d\zeta _{n}}{dn}-\left( \frac{2G}{R_{m}}\,{\mathbf{n}}^{T}+\mathrm{grad}_{tg}^{T}G\right) {\mathbf{\zeta }}\right\} ds , \end{equation*} where now $S$ is the imprint of $D$ on the solid surface.
Consequently, from the calculations in the Appendix, we obtain:\newline \emph{For any virtual displacement null on the complementary boundary of $D$ with respect to $S$ and null on the edge $\Gamma $}, the variation of $E$ is, \begin{equation*} \begin{array}{ll} \delta E=\displaystyle-\int \int \int_{D}(\mathrm{div}\,\sigma )\,{\mathbf{\zeta }}\,dv\ & \\ +\displaystyle\int \int_{S}(G-A)\frac{d{\zeta }_{n}}{dn}+\left( \frac{2(A-H)}{R_{m}}\,{\mathbf{n}}^{T}+\mathrm{grad}_{tg}^{T}(A-H)+{\mathbf{n}}^{T}\sigma \right) {\mathbf{\zeta }}\ ds, & \end{array} \end{equation*} where \begin{equation} \displaystyle\ \sigma =-p\,{I}-\lambda \ \mathrm{grad}\;\rho \otimes \mathrm{grad}\;\rho \equiv -p\,{I}-\lambda \,\left( \frac{\partial \rho }{\partial {\mathbf{x}}}\right) ^{T} \frac{\partial \rho }{\partial {\mathbf{x}}} \label{sigma} \end{equation} is the symmetric stress tensor of the inhomogeneous liquid, with $p=\rho ^{2}\varepsilon _{,\rho }-\rho \ \mathrm{div}(\lambda \mathrm{\ grad}\,\rho ) $ ; $A=\lambda \,\rho \,\left({d\rho }/{dn}\right) $ with$\ {d\rho }/{dn} =\left({\partial\rho }/{\partial{\mathbf{x}}}\right)\, {\mathbf{n}}$ ; ${\zeta }_{n}={\mathbf{n}}^{T}\,{\mathbf{\zeta }}$ ; $2/R_m$ is the mean curvature of $S$, and $\mathrm{grad}_{tg}$ denotes the tangential part of the gradient relative to $S$. \subsection{The virtual work of forces exerted on $D$} The virtual work of elastic stresses on $S$ is \begin{equation*} \delta \tau _{e}=\int \int_{S}{\mathbf{\kappa }}^{T}{\mathbf{\zeta }}\ ds\,, \end{equation*} where ${\mathbf{\kappa }}={-\sigma _{e}}\,{\mathbf{n}}$ is the loading vector associated with the stress tensor $\sigma _{e}$ on the wall in the classical theory of continuum mechanics (\footnote{It is important to note that the external unit normal to $S$ with respect to the solid is $\mathbf{-n}$.}).
Then, the virtual work of forces $\delta \tau $ exerted on $D$ is $ \delta \tau =-\,\delta E+\delta \tau _{e}\, $ and, \begin{equation*} \begin{array}{ll} \delta \tau=\displaystyle\int \int \int_{D}(\mathrm{div}\,\sigma )\,{\mathbf{\zeta }}\,dv\ & \\ -\displaystyle\int \int_{S}(G-A)\frac{d{\zeta }_{n}}{dn}+\left( \frac{2(A-H)}{R_{m}}\,{\mathbf{n}}^{T}+\mathrm{grad}_{tg}^{T}(A-H)+{\mathbf{n}}^{T}\sigma-{\mathbf{\kappa }}^{T} \right) {\mathbf{\zeta }}\ ds. & \end{array} \end{equation*} \subsection{Results} The fundamental lemma of the calculus of variations, applied to the relation $\delta \tau =0$ for all previous virtual displacements, yields: $\bullet$ \ The well-known equation of equilibrium for capillary fluids \cite{Casal}, \begin{equation} \mathrm{div}\,\sigma =0, \label{motion5} \end{equation} $\bullet$ \ The boundary conditions on $S$, \begin{equation} \forall \ {\mathbf{x}}\in S,\ \ \ \ \ \ \ \ \ \left\{ \begin{array}{l} G-A=0, \\ {\mathbf{\kappa }}=\,\displaystyle\frac{2(A-H)}{R_{m}}\;{\mathbf{n}}+\mathrm{grad}_{tg}(A-H)+\sigma \;{\mathbf{n}}.\end{array}\right. \label{conditions} \end{equation} Equation (\ref{conditions})$_1$ yields a condition relative to the surface energy (\ref{surface energy}), which depends on the fluid density at the surface and on the quality of the solid wall: \begin{equation} \lambda \ \frac{d\rho }{dn}+\phi ^{\prime }(\rho )=0 \qquad \rm{or}\qquad \lambda \,\frac{d\rho }{dn}=\gamma _{1}-\gamma _{2}\,\rho \,. \label{conditions5.1} \end{equation} Equation (\ref{conditions5.1}) expresses an embedding effect for the liquid density. Such a condition appears for a simpler geometry in \cite{Cahn,Seppecher}. \newline Condition (\ref{conditions})$_2$ appears in the literature \cite{Germain,Casal2}, but without the terms corresponding to the molecular model (\ref{surface energy}) of the surface free energy.
Conditions of this type also appear in interfacial problems with other solid surface energies, but with null curvature, as in \cite{Cahn,Seppecher}. In the Cauchy theory, we are back to the classical equation ${\mathbf{\kappa }}=\sigma \;{\mathbf{n}}$.\newline The definition (\ref{sigma}) of $\sigma $ implies\ $ \sigma \,{\mathbf{n}}=-p\,{\mathbf{n}}-\lambda \; ({d\rho }/{dn})\ \mathrm{grad}\;\rho \,. $ Then, for an elastic wall, taking Rel. (\ref{conditions5.1}) into account, the vector $\mathbf{\kappa }$ is normal to $S$, \begin{equation}{\mathbf{\kappa }}=\kappa _{n}\,{\mathbf{n}} \quad \rm{with}\quad \kappa _{n}\ =\mathbf{n}^T\sigma\,\mathbf{n}-\frac{2\,\phi }{R_{m}}\, . \label{conditions5.2} \end{equation} We obtain the stress vector values of the solid at the elastic wall (which is opposite to the action ${\mathbf{\tau }}$ of the liquid on the elastic wall). Relation (\ref{conditions5.2}) looks like the Laplace formula for fluid interfaces. Nonetheless, we will see in the next section some differences between the results for fluid interfaces and for liquid-solid interfaces. \section{An example of elastic effect on a solid surface} \subsection{General considerations} For a bibliography on elastic effects in surface physics, one may refer to the review article \cite{Muller}.\newline The aim of this section is to present an example of a system in which the mesoscopic effects of a liquid locally generate large molecular stress vectors on a solid surface. We consider a periodic domain such that the substrate solid surface has an alternated structure. The solid surface can be considered as a flat domain at the Angstr\"{o}m scale because the roughness and undulations are only of several nanometers in extent (such a model is presented in Fig. 2).
\begin{figure}[h] \begin{center} \includegraphics[width=8cm]{2.eps} \end{center} \caption{ We consider the model consisting of a surface $S$ with bumps and hollows periodically distributed with a period $L$ of several nanometers in two directions such that, with respect to the third axis, the bump and hollow levels are opposite. The extrema of the surface mean curvature are located at points A and B; curve $C$ is the limit curve of the periodic rectangular parallelepiped. Surface $S^{\prime }$ delimits the liquid bulk (at a distance $h$ of many nanometers from surface $S$). Surface $\Sigma $ is the lateral boundary of $D$. Vector $\mathbf{k}$ is normal to $S^{\prime }$ and $z$ is directed along $\mathbf{k}$.} \label{fig2} \end{figure} At level 0 with respect to the third axis, the lateral boundary of domain $D$ follows the curve $C$ of the bulging surface. Due to the axial symmetries around the lines A$\mathbf{k}$ and B$\mathbf{k}$, in local coordinates with these lines as third axis, $\mathrm{{grad}\,\rho =(d\rho /dz)\mathbf{k}}$ and on these lines the stress tensor $\sigma $ of the inhomogeneous liquid takes the form \begin{equation*} \mathbf{\sigma }=\left[ \begin{array}{ccc} a_{1} & 0 & 0 \\ 0 & a_{2} & 0 \\ 0 & 0 & a_{3}\end{array}\right] ,\quad \mathrm{with}\quad \left\{ \begin{array}{lll} a_{1} & = & a_{2}=-p,\quad \displaystyle p=P(\rho )-\frac{\lambda }{2}\left( \frac{d\rho }{dz}\right) ^{2}-\lambda \,\rho \,\Delta \rho \\ a_{3} & = & \displaystyle{-p-\lambda \left( \frac{d\rho }{dz}\right) ^{2},}\end{array}\right. \end{equation*} where $\Delta$ is the Laplace operator. Consequently, on these lines, Eq. (\ref{motion5}) yields a constant value for the eigenvalue $a_{3}$, \begin{equation} p+\lambda \left( \frac{d\rho }{dz}\right) ^{2}=P_{l}, \label{equilibrium1b} \end{equation} where $P_{l}$ denotes the uniform pressure in the liquid bulk of density $\rho _{l}$ bounding the liquid layer at level $h$.
\newline -\ \ Due to the symmetries of domain $D$, we deduce that the average stress actions of the liquid on $S$ and $S^{\prime }$ are opposite and numerically equal to the pressure $P_{l}$. \newline -\ \ From Rels. (\ref{conditions5.1}-\ref{equilibrium1b}) we obtain, at points A and B, a stress vector ${\mathbf{\tau }}=-{\mathbf{\kappa }}$, the action of the liquid on the elastic wall, in the same form as the Laplace formula for fluid interfaces, \begin{equation} {\mathbf{\tau }}=\left( P_{\ell }+\frac{2\,\phi }{R_{m}}\right) \,\mathbf{n}. \label{Laplace} \end{equation} -\ \ We must emphasize that Rel. (\ref{Laplace}) is only valid at points A and B. In fact, Rel. (\ref{conditions5.2}) yields \begin{equation} {\mathbf{\tau }}=\left( -\mathbf{n}^T\sigma\,\mathbf{n} +\frac{2\,\phi }{R_{m}}\right)\,\mathbf{n}, \label{Laplacebis} \end{equation} but for points which are not the summits of bumps or the bottoms of hollows, $-\mathbf{n}^T\sigma\,\mathbf{n}\equiv p+\lambda \left({d\rho }/{dn}\right) ^{2} \neq P_{\ell }$,\ where\ $\lambda({d\rho }/{dn})^2 \neq \lambda\left( \mathrm{grad}\rho\right) ^{2}$. Consequently, at a mesoscopic scale, due to the anisotropy of the liquid on curved solid surfaces, Rel. (\ref{Laplacebis}) replaces the Laplace formula of fluid interfaces. \newline -\ \ The stress vector is directed along $\mathbf{k}$ at points A and B. Due to the axial symmetries around the surface extrema at points A and B, and to the opposite mean curvatures, when we neglect $P_{l}$ with respect to $2\phi/R_m$, the stress vector associated with the hollow corresponding to point A is a vector $\mathbf{T}$ parallel to $\mathbf{k}$ and the stress vector associated with the bump corresponding to point B is a vector $\mathbf{-T}$; the two vectors generate a torque on the surface. This result is in accordance with the results in \cite{Prevot}, where the interaction between liquid and solid is represented as localized dipoles and monopoles depending on the bumps and hollows of the surface $S$.
\subsection{Application to explicit materials} At $\theta= 20 {{}^\circ}$ Celsius, we consider water wetting a silicon wall. The experimental estimates of the coefficients defined in Section 1 are presented in Table 1. \newline Far from the liquid critical point, the liquid density at the wall is close to the liquid density in the bulk \cite{Gouin1}. If we consider a mean radius of curvature of surface S, $R_m=-10^{-6}$ cm at point A and $R_m=10^{-6}$ cm at point B, when we neglect $P_l$, we immediately obtain $\tau_{n} \equiv {\mathbf{n}}^T{\mathbf{\tau }}= 10^8$ cgs (or 100 atmospheres), corresponding to stress effects of large magnitude between the areas around points A and B. \begin{table}[tbp] \centering $\begin{tabular}{|c|c|c|c|c|} \hline\hline \multicolumn{1}{||c|}{\scriptsize Physical constants} & $c_{ll}$ & $\sigma_{l}$ & $m_{l}$ & \multicolumn{1}{|c||}{$\rho_{l}$} \\ \hline \multicolumn{1}{||c|}{Water} & $1.4\times 10^{-58}$ & $2.8\times 10^{-8}$ & $2.99\times 10^{-23}$ & \multicolumn{1}{|c||}{$0.998$} \\ \hline\hline \multicolumn{1}{||c|}{\scriptsize Physical constants} & $c_{ls}$ & $\sigma_{s}$ & $m_{s}$ & \multicolumn{1}{|c||}{$\rho_{sol}$} \\ \hline \multicolumn{1}{||c|}{Silicon} & $1.4\times 10^{-58}$ & $2.7\times 10^{-8}$ & $4.65\times 10^{-23}$ & \multicolumn{1}{|c||}{$2.33$} \\ \hline\hline\hline\hline \multicolumn{1}{||c|}{\scriptsize Deduced constants} & $\delta$ & $\lambda$ & $\gamma_1$ & \multicolumn{1}{|c||}{$\gamma_2$} \\ \hline \multicolumn{1}{||c|}{\scriptsize Results (water-silicon)} & $2.75\times 10^{-8}$ & $1.17\times 10^{-5}$ & $81.2$ & \multicolumn{1}{|c||}{$54.2$} \\ \hline\hline \end{tabular} $ \vskip 0.5cm \caption{ The physical values associated with water and silicon are obtained from references \protect\cite{Israelachvili,Handbook} and expressed in \textbf{c.g.s. units} (centimeter, gram, second).
No information is available for water-silicon interactions; we assume that $c_{ll}=c_{ls}$.} \label{TableKey1} \end{table} The elastic effects of a liquid on a solid surface result from the topology of the contact interface. It is remarkable that a solid surface, considered as an interface between solid and liquid, requires no new concept, only a supplementary surface energy together with the surface morphology.\newline An important assumption in the previous calculations is that three length scales enter the surface physics: a length scale of one nanometer associated with molecular effects and the expression of the surface energy, a length scale of ten nanometers associated with the size of undulations and surface roughness, and a length scale of one hundred nanometers associated with the distance of the liquid bulk from the surface $S$.
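The deduced constants in Table 1 follow directly from the expressions for $\delta$, $\lambda$, $\gamma_1$ and $\gamma_2$ given in the Introduction, and the quoted $10^8$ cgs stress follows from $\tau_n \approx 2\phi/R_m$ when $P_l$ is neglected. A short numerical check in c.g.s. units (a sketch reusing the tabulated water-silicon inputs; the variable names are ours):

```python
import math

# Tabulated inputs for water and silicon (c.g.s. units)
c_ll = c_ls = 1.4e-58              # London constants (c_ls = c_ll assumed in the text)
sigma_l, sigma_s = 2.8e-8, 2.7e-8  # molecular diameters
m_l, m_s = 2.99e-23, 4.65e-23      # molecular masses
rho_l, rho_sol = 0.998, 2.33       # liquid and solid densities

# Deduced constants, from the relations of Section 1
delta = 0.5 * (sigma_l + sigma_s)                                # minimal approach distance
gamma1 = math.pi * c_ls * rho_sol / (12 * delta ** 2 * m_l * m_s)
gamma2 = math.pi * c_ll / (12 * delta ** 2 * m_l ** 2)
lam = 2 * math.pi * c_ll / (3 * sigma_l * m_l ** 2)

# Surface energy phi(rho) and the normal stress 2*phi/R_m at an extremum,
# neglecting the bulk pressure P_l
phi = -gamma1 * rho_l + 0.5 * gamma2 * rho_l ** 2
R_m = 1e-6                                                        # |mean curvature radius| (cm)
tau_n = abs(2 * phi / R_m)

print(delta, lam, gamma1, gamma2, tau_n)
```

The computed values reproduce the table entries ($\delta \simeq 2.75\times 10^{-8}$, $\lambda \simeq 1.17\times 10^{-5}$, $\gamma_1 \simeq 81.2$, $\gamma_2 \simeq 54.2$) and a normal stress of order $10^8$ cgs.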
\section{Introduction} Although binary data are often considered a simple concept, they are typically difficult to model in statistics. Traditionally, binary random variables are modelled independently via a Bernoulli distribution with a single probability of success ($\pi$). However, in many underlying processes, true independence of outcomes is not guaranteed. The correlation often depends on the distance between outcomes; given a series of binary trials in a one-dimensional space, successes in nearby trials are often more highly correlated than those in trials far apart. For example, in ecology \citep{Pielou1984, Pielou1969} we may be modelling the absence or presence of individuals in a population, such as trees or bacteria (denoted 0 or 1 respectively). Clearly, there is a much higher chance of observing trees/bacteria close to spatial locations that are known to already contain trees/bacteria. The aim of this work is to model binary data from a correlated process with a spatially (or temporally) varying probability of success; the probability of obtaining a $1$ is higher having observed other $1$'s (and similarly for $0$'s). It is then expected that the successes ($1$'s) will cluster (or anti-cluster in the case of negative correlation). Methods such as logistic regression classification or generalised linear models \citep{Diggle1998, Chang2015} are typically used in these scenarios. They allow the marginal probability ($\pi$) of a $1$ to vary smoothly in space, and therefore at any point $x$ we have a marginal Bernoulli distribution with probability of success equal to $\pi(x)$. However, because only the marginal probability of a success at each trial is specified, sampled sequences neglect any form of spatial correlation. Correlated multivariate Bernoulli distributions have been discussed in the literature, where it is possible to draw a sample of $n$ binary random variables jointly.
\cite{Teugels1990} introduces a multivariate Bernoulli distribution such that, given a sequence of $n$ Bernoulli trials, the joint probability of each of the possible $2^{n}$ sequences of $0$'s and $1$'s is specified. The distribution is then parametrised in terms of the central moments, so that its specification requires $2^{n}-1$ parameters. Although the work by \cite{Teugels1990} successfully defines a multivariate Bernoulli distribution, it has two drawbacks. First, a large number of parameters is required even for short sequences of Bernoulli trials. Second, there is no mechanism to control the spread of correlation across the binary variables. Similar approaches have been seen in both \cite{Society2017a} and \cite{Society2017}. Graphical models have also proven to be useful in simulating correlated variables. Similar to the structures used in \cite{Teugels1990}, \cite{Dai2013} use a multivariate Bernoulli distribution to estimate the structure of graphs with binary nodes. The binary nodes are described as random variables, where pairwise correlation is defined in terms of the edges of the graph. Variables are conditionally independent if the associated nodes are not linked by an edge. The authors further extend the model to take account of higher-order interactions (not just pairwise interactions), allowing a higher level of structure to be incorporated. Another alternative is to use a Markov chain \citep{Billingsley1961, Hillier2006, Isaacson1976}, so that each variable is dependent on the previous variable in a graph structure. However, we find that the Markov property by itself is fairly limiting in the possible range of correlation structures. We also point readers to the image processing literature, where a similar problem arises when restoring fuzzy images \citep{Besag1986, Besag1974, Abend1965, Bishop2006, Li2016}. This also includes literature relating to cellular automata \citep{Wolfram1959, Agapie2014, Agapie2004}.
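To make the joint-specification idea concrete, consider a toy joint distribution over all $2^n$ binary sequences (a hypothetical construction of ours for illustration, not Teugels' parametrisation) in which sequences whose neighbouring entries agree receive larger weight; such a table carries $2^n - 1$ free parameters, and correlation decays with distance:

```python
from itertools import product

n = 3
# Hypothetical weights: double the weight for each agreeing adjacent pair,
# then normalise to a probability distribution over all 2^n sequences.
raw = {seq: 2.0 ** sum(seq[i] == seq[i + 1] for i in range(n - 1))
       for seq in product((0, 1), repeat=n)}
Z = sum(raw.values())
joint = {seq: w / Z for seq, w in raw.items()}

def moment(idx):
    """E[prod_{i in idx} X_i] under the joint distribution."""
    return sum(p for seq, p in joint.items() if all(seq[i] == 1 for i in idx))

def corr(i, j):
    """Pearson correlation between the binary coordinates X_i and X_j."""
    mi, mj = moment([i]), moment([j])
    cov = moment([i, j]) - mi * mj
    return cov / ((mi * (1 - mi) * mj * (1 - mj)) ** 0.5)

# Adjacent variables are more strongly correlated than distant ones.
print(corr(0, 1), corr(0, 2))
```

For this weighting, the adjacent-pair correlation works out to $1/3$ while the distance-two correlation is $1/9$, so dependence weakens with separation, but the table itself gives no single parameter controlling that decay, which is the gap the de Bruijn construction below addresses.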
In this paper, we propose a novel framework for simulating binary trials from a correlated multivariate Bernoulli process. We define this process, the de Bruijn process (DBP), so named since we incorporate a distance correlation between variables using the structure of de Bruijn graphs \citep{Woude1946, Good1946, Golomb1967, Fredricksen1992}. These are directed graphs whose nodes consist of sequences of $m$ symbols. We denote the symbols the `letters' of the de Bruijn graph, and the sequence of these letters at each node a de Bruijn `word' of length $m$. We let the set of letters be $\{0,1\}$ and incorporate a generalised Markov property \citep{Billingsley1961, Hillier2006, Isaacson1976} within the graph. The length $m$ sequences of binary letters then become the states (or words) of a Markov chain where the spread of correlation is controlled by varying $m$. Figure \ref{DBG23} shows two de Bruijn graphs with the set of letters $\{0,1\}$, and word lengths $m=2$ (top) and $m=3$ (bottom). The nodes consist of all the possible length two or three sequences of $0$'s and $1$'s; there are two edges into and two edges out of each node, giving a total of $2^{m}$ nodes and $2^{m+1}$ edges. By defining a distinct probability for each possible transition that can occur in the process (each of the edges), we can control the amount of clustering of each letter when simulating from the generalised Markov chain. Other work involving de Bruijn graphs can be seen in \cite{Hauge1996, Hauge1996a} and \cite{Hunt2002}. \begin{figure}[ht] \centering \includegraphics[scale=1]{dbgraphs_3.png} \caption{Examples of length 2 and 3 de Bruijn graphs with two letters: 0 and 1.} \label{DBG23} \end{figure} With a DBP, we are able to generate a variety of sequences in which the $0$'s or $1$'s can be very clustered together or, alternatively, very structured so as to continuously alternate (anti-clustered).
We hence analyse the run lengths of letters to help quantify how clustered a sequence is likely to be. In this paper, we define a run length, $R$, to be the number of consecutive $1$'s (or $0$'s) in a row, bounded by a 0 (or a 1) at both ends, and use this to calculate a run length distribution. The distribution gives the probability of a run of length $r$ for any $r \in \mathbb{N}^{+}$ in terms of the word length $m$ and the transition probabilities (as given by the edges in Figure \ref{DBG23}). From this we can then calculate expected run lengths, variances of run lengths and generating functions. As previously mentioned, in ecology there is a large amount of research on the measurement of aggregation (e.g. the level of clustering of trees in a one-dimensional direction). Currently, this often relies only on testing data for randomness rather than quantifying levels of clustering. Hence, by analysing expected run lengths, it will be possible to make better predictions of levels of aggregation. In this paper we apply our run length distribution for a DBP to precipitation data collected daily from a station in Eskdalemuir, UK. We translate the data to binary form such that $1$ is recorded when precipitation is present and $0$ otherwise. To remove any seasonal effects, we separate the data into seasons and analyse them separately. In each season, we show that the data favours a DBP structure rather than an independent Bernoulli equivalent. The remainder of this paper is organised as follows. De Bruijn graphs are explained further in Section \ref{DBG}. This includes details and relationships that come from analysing the stationary distribution. By introducing correlation between binary variables in a sequence, we can control how clustered the $0$'s and $1$'s are. Hence, Section \ref{RLD} gives details of a run length distribution of consecutive numbers of $1$'s in the sequence. This includes definitions for the expectation, variance, and generating functions.
Examples, including an application to precipitation data, are given in Section \ref{Expp}. Finally, Section \ref{Conclu} concludes with details of future work. \section{De Bruijn Graphs} \label{DBG} De Bruijn graphs \citep{Woude1946, Good1946, Golomb1967, Fredricksen1992} are directed graphs consisting of overlapping sequences of symbols. Given a set of $s$ symbols, $V=\{v_{1}, ..., v_{s}\}$, the vertices or nodes of the graph consist of all the possible sequences of $V$. Each graph has $s^{m}$ vertices, where $m$ is the length of each possible sequence given the set of symbols, $V$. The possible nodes are as follows: \begin{equation} \begin{split} V^{m} = \{ (v_{1}&, ..., v_{1},v_{1}), (v_{1}, ..., v_{1}, v_{2}), ..., (v_{1}, ..., v_{1}, v_{s}), (v_{1}, ..., v_{2}, v_{1}), ... ,\\ & (v_{s}, ..., v_{s}, v_{s}) \}. \end{split} \end{equation} Edges in de Bruijn graphs are drawn between node pairs in such a way that the connected nodes overlap in $m-1$ symbols. An edge is created by removing the first symbol from the node sequence and adding a new symbol from $V$ to the end of the sequence. Thus, from each vertex, $(v_{1}, ..., v_{m}) \in V^{m}$, there is an edge to vertex $(v_{2}, ..., v_{m}, v) \in V^{m}$ for every $v \in V$. There are exactly $s$ directed edges going into each node and $s$ directed edges going out from each node. We denote the symbols, $v$, the `letters' of the de Bruijn graph, and the sequence of these letters at each node a de Bruijn `word' of length $m$. By travelling along a path through the de Bruijn graph, chains of letters are generated such that each letter is dependent on the $m$ letters that come before \citep{Fredricksen1992, Ayyer2011, Rhodes2017}. Hence, by altering $m$, we are able to change the dependence structure of the de Bruijn graph. De Bruijn graphs are used in a number of applications, including genome sequencing \citep{Tesler2017} and the mathematics of juggling \citep{Ayyer2015}.
Let $X = \{X_{1}, X_{2}, \ldots, X_{n}\}$ be a length $n$ sequence of binary random variables such that each $X_{i}$ takes values in the set of $s=2$ letters, $V = \{0,1\}$. The sequence $X$ can be written in terms of words of length $m$ specified by the de Bruijn structure above; $X = \{W_{1}, W_{2}, \ldots, W_{n-m+1}\}$, where each $W_{i}$ is an overlapping sequence of $m$ letters. $X$ is defined to be a sequence of order $m$. We thus formally define a de Bruijn process in Definition \ref{DBPdef}. \begin{definition}[de Bruijn Process (DBP)] \label{DBPdef} \rm{Let the random words $W_{1}, W_{2}, \ldots$ consist of length $m$ sequences of `letters' from the set $V=\{0,1\}$. For any positive integer, $t$, and possible states (words), $i_{1}, i_{2}, \ldots, i_{t}$, $X$ is described by a stochastic process in the form of a Markov chain with,} \begin{equation} \begin{split} P \left( W_{t} = i_{t} | W_{t-1} = i_{t-1}, W_{t-2} = i_{t-2}, \ldots, W_{1} = i_{1} \right) &= P \left( W_{t} = i_{t} | W_{t-1} = i_{t-1} \right) \\ &= p_{i_{t-1}}^{i_{t}}, \end{split} \end{equation} \rm{where $p_{i_{t-1}}^{i_{t}}$ is the probability of transitioning from the word $i_{t-1}$ to the word $i_{t}$.} \end{definition} For the remainder of this paper $p_{i}^{j}$ is used to denote the probability of transitioning from the word (state) $i$ to the word (state) $j$. Since $X$ has the Markov property \citep{Billingsley1961, Hillier2006, Isaacson1976} on the word and not the letter, far more structure can be incorporated into the sequence. The length of each word (or order) defines how far the correlation spreads over nearby letters (the distance over which letters are correlated). For a length $m = 2$ de Bruijn graph, each letter is dependent on the two letters before it and, for a length $m = 3$ de Bruijn graph, each letter is dependent on the three letters before it. This structure can be observed in the graph in Figure \ref{DBG23}.
Each word can only transition to other words in which the initial $m-1$ letters are equivalent to the last $m-1$ letters of the previous word. For example, if $m=3$, then the word $001$ can only transition to either the word $010$ or $011$. If $m = 1$, then the model collapses down to be classically Markov (Markov on the letter), and if $m=0$, this is equivalent to independent Bernoulli trials. The main aim in developing this de Bruijn process is to be able to model sequences of $0$'s and $1$'s so that the amount of correlation between variables can be controlled. This is possible by altering the word length and associated transition probabilities, to control the amount of clustering between like letters. For example, letting the probabilities be $\{p_{00}^{01} =0.1, p_{01}^{11} =0.9, p_{10}^{01} =0.1, p_{11}^{11} =0.9\}$ in a length $m=2$ DBP (see the top graph in Figure \ref{DBG23}) ensures that there is a high level of correlation, forcing the letters to cluster together for both $0$'s and $1$'s and avoiding changes between values. Conversely, the model can be made very anti-clustered by choosing the transition probabilities that retain the current letter to be small. Examples are given in Section \ref{Examples1} to emphasise the full range of flexibility. \subsection{Stationary Distribution} Let $X = \{X_{1}, \ldots, X_{n}\}$ be a sequence of binary random variables such that $X$ can be specified in terms of its de Bruijn words of length $m$, $X = \{W_{1}, \ldots, W_{n-m+1}\}$. The de Bruijn structure for letters implies a Markov chain for words.
Hence, the stationary distribution \citep{Isaacson1976, Jones2001} of a word or state $j \in V^{m}$ from an irreducible, persistent, aperiodic Markov chain \citep{Norris1997, Ross2014} with word length $m$ de Bruijn structure is defined as: \begin{equation} \pi^{m}(j) = \text{lim}_{t \rightarrow \infty} \hspace{0.2cm} {p_{i}^{j}}^{(t)} , \end{equation} where ${p_{i}^{j}}^{(t)} = P[W_{k+t} = j | W_{k} = i]$ are the $t$-step transition probabilities. The stationary probabilities, $\pi^{m}$, must also satisfy the following: \begin{equation} \label{eqSD} \begin{split} \pi^{m}(j) > 0, \hspace{1.5cm}&\hspace{1.5cm} \sum_{j \in V^{m}} \pi^{m}(j) = 1 \hspace{0.5cm} \\ \\ \text{and} \hspace{0.5cm} \pi^{m}(j) &= \sum_{i \in V^{m}} \pi^{m}(i) p_{i}^{j} \\ \Longrightarrow \pi^{m} &= \pi^{m} T \end{split} \end{equation} for transition matrix $T$, with ${p_{i}^{j}}^{(t+1)} = \sum_{k} {p_{i}^{k}}^{(t)} p_{k}^{j}$ for $k \in V^{m}$ and by letting $t \rightarrow \infty$. The stationary distribution is hence found when ${\pi^{m}}^{(t)}$ remains unchanged as $t$ increases, and gives the marginal or long-run probabilities of the de Bruijn words. Given a long enough sequence $X$ with a sufficient burn-in period, $\pi^{m}$ states the proportion of each word occurring in the sequence. We first consider the stationary distribution of the simple $m=2$ case, where $\pi^{2} = (\pi^{2}(00), \pi^{2}(01), \pi^{2}(10), \pi^{2}(11))$.
Using Equation \eqref{eqSD} and the probability transition matrix, we solve the following: \begin{equation} \left(\begin{array}{cccc} \pi^{2}(00) & \pi^{2}(01) & \pi^{2}(10) & \pi^{2}(11) \end{array}\right) \left(\begin{array}{cccc} p_{00}^{00} & p_{00}^{01} & 0 & 0\\ 0 & 0 & p_{01}^{10} & p_{01}^{11} \\ p_{10}^{00} & p_{10}^{01} & 0 & 0 \\ 0 & 0 & p_{11}^{10} & p_{11}^{11} \end{array}\right) = \left(\begin{array}{c} \pi^{2}(00) \\ \pi^{2}(01) \\ \pi^{2}(10) \\ \pi^{2}(11) \end{array}\right)^{T} \end{equation} After expanding this out, and with some simple rearranging using conservation of probability, we arrive at the following results: \begin{equation} \begin{split} \pi^{2}(00) &= \frac{p_{10}^{00}}{1-p_{00}^{00}} \pi^{2}(10) \\ \Rightarrow \pi^{2}(00) &= \frac{p_{10}^{00}}{p_{00}^{01}} \pi^{2}(10), \end{split} \hspace{3cm} \begin{split} \pi^{2}(11) &= \frac{p_{01}^{11}}{1-p_{11}^{11}} \pi^{2}(01) \\ \Rightarrow \pi^{2}(11) &= \frac{p_{01}^{11}}{p_{11}^{10}} \pi^{2}(01), \end{split} \end{equation} \begin{equation} \begin{split} p_{00}^{01} \pi^{2}(00) + p_{10}^{01} \pi^{2}(10) &= \pi^{2}(01) \\ \Rightarrow p_{00}^{01} \frac{p_{10}^{00}}{p_{00}^{01}} \pi^{2}(10) + p_{10}^{01} \pi^{2}(10) &= \pi^{2}(01) \\ \Rightarrow \left( p_{10}^{00} + p_{10}^{01} \right) \pi^{2}(10) &= \pi^{2}(01) \\ \Rightarrow \pi^{2}(10) &= \pi^{2}(01). \end{split} \end{equation} Therefore, if we let $\pi^{2}(01) = \pi^{2}(10) = \alpha$, the system solution becomes $\left(\pi^{2}(00), \pi^{2}(01), \pi^{2}(10), \pi^{2}(11)\right) = \left( \frac{p_{10}^{00}}{p_{00}^{01}} \alpha, \alpha, \alpha, \frac{p_{01}^{11}}{p_{11}^{10}} \alpha \right)$. This shows that the word $01$ has equal probability of occurring in the sequence to the word $10$. This is to be expected since every consecutive sequence of $1$'s (or $0$'s) is bounded by these two words; hence they will occur with equal probability.
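These closed-form relationships can be checked numerically. The following Python sketch (illustrative only, not the paper's code; variable names are ours) recovers $\pi^{2}$ by power iteration $\pi \leftarrow \pi T$, which converges here because the given transition probabilities make the word chain irreducible and aperiodic.

```python
# Sketch: m = 2 stationary distribution by power iteration, for the
# clustered example p_00^01 = p_10^01 = 0.1 and p_01^11 = p_11^11 = 0.9.
p00_01, p01_11, p10_01, p11_11 = 0.1, 0.9, 0.1, 0.9

# rows/columns ordered (00, 01, 10, 11); zeros are forbidden transitions
T = [[1 - p00_01, p00_01,     0.0,        0.0],
     [0.0,        0.0,        1 - p01_11, p01_11],
     [1 - p10_01, p10_01,     0.0,        0.0],
     [0.0,        0.0,        1 - p11_11, p11_11]]

pi = [0.25] * 4                      # any starting distribution works
for _ in range(500):                 # iterate pi^(t+1) = pi^(t) T
    pi = [sum(pi[i] * T[i][j] for i in range(4)) for j in range(4)]
```

The iterate settles on $\pi^{2}(01)=\pi^{2}(10)$ and $\pi^{2}(00) = \frac{p_{10}^{00}}{p_{00}^{01}}\pi^{2}(10)$, agreeing with the solution above.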
The likelihood of the word $00$ occurring is dependent on the transition $p_{10}^{00}$ taking place in conjunction with the transition $p_{00}^{01}$ (to give the sequence $1001$). Hence, the probability of $00$ occurring is the ratio between these two transitions, $\frac{p_{10}^{00}}{p_{00}^{01}}$. The probability of the word $11$ occurring follows by the same reasoning. The stationary distribution for words of length $m \ge 2$ is given in Theorem \ref{SD1}. To simplify the notation, and to apply to a general word length, the words have been written in terms of the decimal representation of their binary values. This form is repeatedly used for the remainder of this paper and is formally written as, $\sum_{i=1}^{m} k_{i} \hspace{0.1cm} 2^{m-i}$, where $k_{i} \in \{0,1\}$ is the $i$th letter in the word; e.g. for the word 010, $\hspace{0.2cm} \sum_{i=1}^{m} k_{i} \hspace{0.1cm} 2^{m-i} = k_{1}2^{2} + k_{2}2^{1} + k_{3}2^{0} = 0 \cdot 4 + 1 \cdot 2 + 0 \cdot 1 = 2$. \begin{theorem}[Stationary Distribution $m \ge 2$] \label{SD1} \rm{For a general case $m$, the system of equations for the marginal probabilities, $\pi^{m}$, can be expressed as follows:} \begin{equation} \begin{split} p_{\frac{1}{2} (i - i \text{ mod } 2)}^{i} &\hspace{0.2cm} \pi^{m} \left( \frac{1}{2} (i - i \text{ mod } 2) \right) \\ &+ p_{\frac{1}{2} (i - i \text{ mod } 2) + 2^{m-1}}^{i} \hspace{0.2cm} \pi^{m} \left( \frac{1}{2} (i - i \text{ mod } 2) + 2^{m-1} \right) = \pi^{m}(i) , \end{split} \end{equation} for $\hspace{0.2cm} i=0:(2^{m}-1)$.
\\ Solving these equations gives the following relationships: \begin{equation} \begin{split} \pi^{m}(0) &= \frac{p_{2^{m-1}}^{0}}{p_{0}^{1}} \hspace{0.2cm} \pi^{m}(2^{m-1}) \\ \pi^{m}(1) &= \pi^{m}(2^{m-1}) \\ \pi^{m}(i) + \pi^{m}(i + 2^{m-1}) &= \pi^{m}(2i) + \pi^{m}(2i + 1) \hspace{1cm} \text{for} \hspace{0.5cm} i=1:(2^{m-1}-2) \\ \pi^{m}(2^{m}-2) &= \pi^{m}(2^{m-1}-1) \\ \pi^{m}(2^{m}-1) &= \frac{p_{2^{m-1}-1}^{2^{m}-1}}{p_{2^{m}-1}^{2^{m}-2}} \hspace{0.2cm} \pi^{m}(2^{m-1}-1) \end{split} \end{equation} \end{theorem} \begin{proof} See Appendix \end{proof} There are also relationships between the stationary distribution and the transition probabilities which we can express using the law of total probability \citep{Feller1950}. The law of total probability states that $\pi^{1}(j) = \sum_{n} P(j | i_{n}) \pi^{m}(i_{n})$, so that the probability of a letter ($0$ or $1$) is the weighted average over all the possible words that could generate that letter. In other words, we average over all of the possible starting words to get the probability of obtaining a single $0$ or $1$. For example, if $m=2$ we know the following relationships are true: \begin{equation} \label{mareqn} \begin{split} &\pi^{1}(1) = p_{00}^{01}\pi^{2}(00) + p_{01}^{11}\pi^{2}(01) + p_{10}^{01}\pi^{2}(10) + p_{11}^{11}\pi^{2}(11) , \\ &\pi^{1}(0) = p_{00}^{00}\pi^{2}(00) + p_{01}^{10}\pi^{2}(01) + p_{10}^{00}\pi^{2}(10) + p_{11}^{10}\pi^{2}(11) , \end{split} \end{equation} where $\pi^{1}(0) + \pi^{1}(1) = 1$. If we further let the transition probabilities be $\{p_{00}^{01}, p_{01}^{11}, p_{10}^{01}, p_{11}^{11}\} = \{0.1,0.9,0.1,0.9\}$, the marginal probabilities for the words are: $\{\pi^{2}(00), \pi^{2}(01), \pi^{2}(10), \pi^{2}(11)\} = \{0.45, 0.05, 0.05, 0.45\}$, and the marginal probabilities for the letters are: $\pi^{1}(0) = 0.5$ and $\pi^{1}(1)=0.5$.
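The worked example of Equation \eqref{mareqn} can be reproduced directly; the following is an illustrative sketch (variable names are ours, not from the paper) applying the law of total probability to the word marginals.

```python
# Sketch: letter marginals pi^1 from the word marginals pi^2 for the
# example {p_00^01, p_01^11, p_10^01, p_11^11} = {0.1, 0.9, 0.1, 0.9}.
p00_01, p01_11, p10_01, p11_11 = 0.1, 0.9, 0.1, 0.9
pi2 = {'00': 0.45, '01': 0.05, '10': 0.05, '11': 0.45}

# pi^1(1): weighted average over all words that can emit a 1 next
pi1_one = (p00_01 * pi2['00'] + p01_11 * pi2['01']
           + p10_01 * pi2['10'] + p11_11 * pi2['11'])
# pi^1(0): complementary transitions emit a 0 next
pi1_zero = ((1 - p00_01) * pi2['00'] + (1 - p01_11) * pi2['01']
            + (1 - p10_01) * pi2['10'] + (1 - p11_11) * pi2['11'])
```

Both marginals evaluate to $0.5$, confirming the values stated above.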
There is a $50\%$ chance of getting a $0$ or a $1$, but by including the de Bruijn graph structure into the Markov chains, we are forcing correlation to be accounted for, and so the $0$'s and $1$'s will appear in clustered blocks. If we remove the de Bruijn structure and generate a sequence of independent random Bernoulli trials, we would observe the same proportions of each letter, but they would no longer be grouped in blocks. We can expand Equation \eqref{mareqn} to be applicable for any word length, $m$, as follows: \begin{equation} \begin{split} \pi^{1}(1) &= p_{0 \ldots 00}^{0 \ldots 01} \, \pi^{m}(0 \ldots 0) + p_{0 \ldots 001}^{0 \ldots 011} \, \pi^{m}(0 \ldots 01) + \ldots + p_{1 \ldots 110}^{1 \ldots 101} \, \pi^{m}(1 \ldots 10) \\ & \hspace{1cm} + p_{1 \ldots 1}^{1 \ldots 1} \, \pi^{m}(1 \ldots 1) \\ &= \sum_{i=0}^{2^{m}-1} p_{i}^{(2i + 1) \hspace{0.1cm} \text{mod } 2^{m}} \pi^{m}(i), \end{split} \end{equation} where $\pi^{1}(0) + \pi^{1}(1) = 1$. \subsection{Examples} \label{Examples1} We now present several examples of how the transition probabilities ($p_{i}^{j}$) affect the distribution of letters in sequences produced from de Bruijn processes. It is assumed that the binary sequences are not conditioned on their starting letters. We hence simulate for a sufficient amount of time to be in the steady state, taking a large enough burn-in to ensure that this does not affect any results. Each panel in Figure \ref{Samp4} gives one of four examples of running a word length $m = 2$ DBP for $n = 200$ time steps. The marginal probabilities are kept the same ($\pi^{1}(0) = \pi^{1}(1) = 0.5$) so that we can make a better comparison of the spread of letters. We would expect that like letters cluster in larger blocks when the probabilities for remaining at the same letter are kept close to one.
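Simulations of this kind can be produced by running the word-level Markov chain directly. The sketch below is illustrative only (it is not the code used for the figures, and the function names are ours): it simulates an $m=2$ DBP from the four free transition probabilities and summarises clustering by the empirical mean run length.

```python
# Sketch: simulate an m = 2 de Bruijn process and measure clustering.
import random

def simulate_dbp2(p00_01, p01_11, p10_01, p11_11, n, seed=0):
    """Simulate n letters; complements give the remaining transitions."""
    # probability that the next letter is a 1, given the current word
    p_next_one = {(0, 0): p00_01, (0, 1): p01_11,
                  (1, 0): p10_01, (1, 1): p11_11}
    rng = random.Random(seed)
    seq = [rng.randint(0, 1), rng.randint(0, 1)]  # arbitrary starting word
    while len(seq) < n:
        seq.append(1 if rng.random() < p_next_one[(seq[-2], seq[-1])] else 0)
    return seq

def mean_run_length(seq):
    """Average length of the maximal blocks of identical letters."""
    runs, current = [], 1
    for prev, cur in zip(seq, seq[1:]):
        if cur == prev:
            current += 1
        else:
            runs.append(current)
            current = 1
    runs.append(current)
    return sum(runs) / len(runs)
```

With the clustered probabilities $\{0.1, 0.9, 0.1, 0.9\}$ the empirical mean run length is far larger than with the anti-clustered $\{0.9, 0.1, 0.9, 0.1\}$, in line with the panels of Figure \ref{Samp4}.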
The transition probabilities ($\{p_{00}^{01},p_{01}^{11},p_{10}^{01},p_{11}^{11}\}$) for the four examples (from top to bottom) take the values $\{0.9, 0.1, 0.9, 0.1\}$, $\{0.5, 0.5, 0.5, 0.5\}$, $\{0.25, 0.75, 0.25, 0.75\}$ and $\{0.1, 0.9, 0.1, 0.9\}$ respectively. The light blue areas represent simulated $0$'s and the dark blue areas represent simulated $1$'s. As the sequences progress from top to bottom, the letters become far more clustered together in blocks. Each sequence has equal numbers of $0$'s and $1$'s ($50\%$ of each), but the sequences vary in how the letters are distributed. The top sequence is designed to be constantly swapping between letters (anti-clustered), the second sequence is equivalent to independent Bernoulli trials, the third sequence is slightly more clustered to like letters, and the final sequence is very clustered into blocks of $0$'s and $1$'s. \begin{figure}[ht] \centering \includegraphics[scale=0.55]{examples1.png} \caption{Four samples from de Bruijn processes with letters $0$ and $1$ to show the effects of changing the transition probabilities. From top to bottom, the transition probabilities, $\{ p_{00}^{01}, p_{01}^{11}, p_{10}^{01}, p_{11}^{11} \}$, are: $\{0.9, 0.1, 0.9, 0.1\}$, $\{0.5, 0.5, 0.5, 0.5\}$, $\{0.25, 0.75, 0.25, 0.75\}$ and $\{0.1, 0.9, 0.1, 0.9\}$.} \label{Samp4} \end{figure} Figure \ref{Samp4b} shows that it is not necessary to keep the marginal probabilities of the letters equal to 0.5. In this plot we have two sequences from $m=2$ de Bruijn processes, where the top sequence has equal marginal probabilities ($\pi^{1}(0)=\pi^{1}(1)=0.5$), but the bottom sequence is set to have $\pi^{1}(0)=0.2$ and $\pi^{1}(1)=0.8$. The corresponding transition probabilities for these are $\{0.1,0.8,0.2,0.9\}$ and $\{0.775,0.8,0.8,0.9\}$ respectively. The transition probabilities in this example are defined such that we have $p_{01}^{11}=0.8$ and $p_{11}^{11}=0.9$ for both, but the frequency of $1$'s differs.
Hence the lengths of runs of consecutive $1$'s are comparable between the two sequences, but such runs occur far more often in the latter. \begin{figure}[ht] \centering \includegraphics[scale=0.47]{DGB28plot.png} \caption{Two samples from de Bruijn processes with marginal probabilities $\pi^{1}(0)=0.5$, $\pi^{1}(1)=0.5$ (top) and $\pi^{1}(0)=0.2$, $\pi^{1}(1)=0.8$ (bottom). From top to bottom, the transition probabilities, $\{ p_{00}^{01}, p_{01}^{11}, p_{10}^{01}, p_{11}^{11} \}$, are: $\{ 0.1,0.8,0.2,0.9 \}$ and $\{ 0.775,0.8,0.8,0.9 \}$.} \label{Samp4b} \end{figure} The word length of the de Bruijn process also has an effect on the distribution of the letters across the sequence, but it is not as easy to make comparisons due to the different numbers of transition probabilities required for each word length. Selecting a longer word length means that more structure can be incorporated through the more intricate transition probabilities, and so it is a mixture of both the word length and the transition probabilities that has the largest impact on the clustering of the sequence. \section{Run Length Distribution} \label{RLD} A run length, $R$, is defined as the number of consecutive $1$'s (or $0$'s) in a row bounded by a $0$ (or $1$) at both ends. To quantify how clustered a sequence generated from a de Bruijn process is, we can observe the run lengths of letters. Without loss of generality, we will only quantify run lengths of $1$'s, but all of the following results can be extended to be in terms of run lengths of $0$'s. We also note that all results are conditional on a run existing (i.e. at least a run of $R=1$). The distribution of run lengths $R$ for a word length $m=2$ de Bruijn process with transition probabilities $\{ p_{01}^{10}, p_{01}^{11}, p_{11}^{10}, p_{11}^{11} \}$ is given in Lemma \ref{RLD2}. This distribution gives the probability of a run of $1$'s of length $R=n$, for any $n \in \mathbb{N}^{+}$.
The probability of a single $1$ is simply the transition probability $p_{01}^{10}$, since this gives the probability of transitioning from the word $01$ to the word $10$ (giving the sequence $010$). For higher run lengths we again start with the word $01$, but now transition to the word $11$, with probability $p_{01}^{11}$. For a run of length $R=2$, we finish with the probability $p_{11}^{10}$, whilst for a run length of $R \ge 3$, we would further transition to the same word $11$ with transition probability $p_{11}^{11}$ until a run length of $R=n$ is reached. The transition probability is raised to the power $n-2$ since a run length of two is already created with the other two transition probabilities in the equation. Every run must finish with the probability $p_{11}^{10}$ since a zero signifies the end of the run. \begin{lemma}[Run Length Distribution, $m=2$] \label{RLD2} \rm{Let the random variable $R$ denote the run length of a sequence of $1$'s from a sequence $X$. The probability density function of $R$ in terms of length $m=2$ de Bruijn words is given as:} \begin{equation} P(R = n) = \begin{cases} p_{01}^{10} &\quad \rm{for} \hspace{0.2cm} n=1 \\ p_{01}^{11}(p_{11}^{11})^{n-2}p_{11}^{10} &\quad \rm{for} \hspace{0.2cm} n \ge 2 , \end{cases} \end{equation} \end{lemma} \begin{proof} See Appendix \end{proof} In Theorem \ref{RLD3} the run length distribution is extended for de Bruijn words of length $m \ge 3$, where the decimal representation of the binary notation is used. The general form of the distribution is the same as that seen in Lemma \ref{RLD2}. We retain the term that represents consecutively drawing $1$'s $\left( p_{2^{m}-1}^{2^{m}-1} \right)$, however there is now a longer burn-in period until this point is reached. We refer the reader to Figure \ref{RLimage}. Each line of this figure shows how a run of length $n$ $1$'s evolves through subsequent transitions. 
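Before moving to the general word length, the $m=2$ distribution of Lemma \ref{RLD2} can be sanity-checked numerically. The sketch below is illustrative (the helper name is ours); it encodes the two cases of the lemma, with the complements supplying $p_{01}^{10}$ and $p_{11}^{10}$.

```python
# Sketch of the m = 2 run length pmf from Lemma (RLD2): a run of one 1
# has probability p_01^10; longer runs start with p_01^11, repeat with
# p_11^11, and terminate with p_11^10.
def run_length_pmf(p01_11, p11_11, n):
    """P(R = n) for runs of 1's in an m = 2 de Bruijn process."""
    p01_10, p11_10 = 1 - p01_11, 1 - p11_11
    return p01_10 if n == 1 else p01_11 * p11_11 ** (n - 2) * p11_10
```

For example, with $p_{01}^{11} = p_{11}^{11} = 0.9$ this gives $P(R=1)=0.1$ and $P(R=2)=0.09$, and the probabilities sum to one over $n \in \mathbb{N}^{+}$.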
To start a sequence of $1$'s off, the first word must take the form $* \hspace{0.1cm} 01$, where $*$ represents any possible sequence of length $m-2$. Since there are $2^{m-2}$ possibilities for the letters represented by $*$, we must average over all possibilities using the law of total probability \citep{Feller1950} to get the full run length distribution as follows: \begin{equation} \label{LTP1} P(R = n) = \sum_{i=0}^{2^{m-2}-1} P(R = n | *_{i}) \pi^{m-2}(*_{i}), \end{equation} where $\pi^{m-2}(*_{i})$ is the marginal probability for the $i$th initial sequence of letters. Since $*$ is of length $m-2$, we must then represent each $\pi^{m-2}(*_{i})$ in terms of the length $m$ de Bruijn words. We apply the law of total probability again, such that: \begin{equation} \pi^{m-2}(*_{i}) = \sum_{j=0}^{2^{m}-1} P(*_{i} | j) \pi^{m}(j). \end{equation} For longer word lengths $m$, there are not only more possible starting sequences (given by $*$), but the burn-in periods are also longer (as shown in Figure \ref{RLimage}). If the run does not reach the state containing all $1$'s before a $0$ is drawn, we finish by transitioning to a word of the form $*10$. Alternatively, all runs of length $R=n \ge m$ will end up reaching the all $1$ state with probability $p_{2^{m}-1}^{2^{m}-1}$. The run then ends with the first $0$, resulting in a word of the form $11..10$ with transition probability $p_{2^{m}-1}^{2^{m}-2}$. The transition probability for the state with all $1$'s must also be raised to the power $n-m$ since a run of $m$ $1$'s is already obtained in the run-up period. \begin{theorem}[Run Length Distribution, $m \ge 3$] \label{RLD3} \rm{Let the random variable $R$ denote the run length of a sequence of $1$'s from a sequence $X$.
The probability density function of $R$ in terms of length $m$ de Bruijn words is given as:} \begin{equation} \begin{split} &P(R = n) \\ & = \begin{cases} \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) \hspace{0.1cm} p_{4i+1}^{2^{3}(i \hspace{0.1cm} \text{mod } 2^{m-3}) + 2} &\quad \rm{for} \hspace{0.2cm} n=1\\ \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) \hspace{0.1cm} \left[ \prod_{j=1}^{n} p_{2^{j+1}(i \hspace{0.1cm} \text{mod } 2^{m-j-1}) + (2^{j}-1)} ^{2^{j+2} (i \hspace{0.1cm} \text{mod } 2^{m-j-2}) + (2^{j+1}-1) - \textbf{1}_{j=n} } \right] &\quad \rm{for} \hspace{0.2cm} n=2:m-1\\ \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) \hspace{0.1cm} \left[ \prod_{j=1}^{m-1} p_{2^{j+1}(i \hspace{0.1cm} \text{mod } 2^{m-j-1}) + (2^{j}-1)} ^{2^{j+2} (i \hspace{0.1cm} \text{mod } 2^{m-j-2}) + (2^{j+1}-1) } \right] &\quad \rm{for} \hspace{0.2cm} n \ge m .\\ \hspace{1.2cm} \times \left[ \left( p_{2^{m}-1}^{2^{m}-1} \right)^{n-m} p_{2^{m}-1}^{2^{m}-2} \right] \end{cases} \end{split} \end{equation} \rm{where,} \begin{equation} \pi^{m-2}(i) = \sum_{j=0}^{2^{m}-1} \prod_{k=0}^{m-3} \left[ p^{2^{k+1} (j \hspace{0.1cm} \text{mod } 2^{m-k-1}) + \sum_{s=1}^{k+1} 2^{k-s+1} [ ( \frac{1}{2^{m-s-2}} ( i - (i \hspace{0.1cm} \text{mod } 2^{m-s-2}))) \text{mod } 2 ]}_{2^{k} (j \hspace{0.1cm} \text{mod } 2^{m-k}) + \sum_{s=1}^{k} 2^{k-s} [ ( \frac{1}{2^{m-s-2}} ( i - (i \hspace{0.1cm} \text{mod } 2^{m-s-2}))) \text{mod } 2 ]} \right] \pi^{m}(j) \end{equation} \end{theorem} \begin{proof} See Appendix \end{proof} \begin{figure}[ht] \centering \includegraphics[scale=0.4]{RLimage.png} \caption{Diagram representing the burn-in period for sequences of $0$'s and $1$'s which generate a run of length $n$ $1$'s. 
Letters represented by $*$ can take either a $0$ or a $1$ for the run to be valid.} \label{RLimage} \end{figure} The discrete geometric distribution, given by $P(X=k) = (1-p)^{k} p$ for $k=0,1,2,...$ \citep{Johnson2005, Feller1950}, gives the probability that the first success ($1$) from a set of independent Bernoulli trials, $X$, with probability of success, $p$, happens on the $(k+1)^{\text{th}}$ trial. We can consider the run length distribution as a generalised geometric distribution with the additional burn-in period of a run. Although the geometric distribution considers independent trials, we still retain correlation through the Markov property on the words, which ensures the transitions have a unique ordering to match the words to the sequence. We will be using this link to the geometric distribution in the following calculations.\newline \noindent{\textbf{Expectation and Variance of Run Length}} From the run length distribution, we can now calculate subsequent measures including the expectation \citep{Feller1950}. The expected run length for a length $m=2$ de Bruijn process is given in Lemma \ref{ERL2} and the expected run length for a de Bruijn process with word length $m \ge 3$ is given in Theorem \ref{ERLM}. The expectation of a geometric random variable is utilised in both expressions. \begin{lemma}[Expected Run Length, $m=2$] \label{ERL2} \rm{Given the $m=2$ run length distribution in Lemma \ref{RLD2} for random variable $R$, the expected run length is given as:} \begin{equation} \mathbb{E}[R] = p_{01}^{10} +\frac{p_{01}^{11} \left( 1 - (p_{11}^{10})^{2} \right)}{p_{11}^{11}p_{11}^{10}}.
\end{equation} \end{lemma} \begin{proof} See Appendix \end{proof} \begin{theorem}[Expected Run Length, $m \ge 3$] \label{ERLM} \rm{Given the run length distribution in Theorem \ref{RLD3} for random variable $R$, the expected run length is given as:} \begin{equation} \begin{split} \mathbb{E}[R] &= \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) \hspace{0.1cm} p_{4i+1}^{2^{3}(i \hspace{0.1cm} \text{mod } 2^{m-3}) + 2} \\ & \hspace{1cm} + \sum_{n=2}^{m-1} n \left[ \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) \hspace{0.1cm} \left[ \prod_{j=1}^{n} p_{2^{j+1}(i \hspace{0.1cm} \text{mod } 2^{m-j-1}) + (2^{j}-1)} ^{2^{j+2} (i \hspace{0.1cm} \text{mod } 2^{m-j-2}) + (2^{j+1}-1) - \textbf{1}_{j=n} } \right] \right] \\ & \hspace{1cm} + \frac{1}{\left( p^{2^{m}-1}_{2^{m}-1} \right)^{m}} \left[ \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) \hspace{0.1cm} \left[ \prod_{j=1}^{m-1} p_{2^{j+1}(i \hspace{0.1cm} \text{mod } 2^{m-j-1}) + (2^{j}-1)} ^{2^{j+2} (i \hspace{0.1cm} \text{mod } 2^{m-j-2}) + (2^{j+1}-1) } \right] \right] \\ & \hspace{2cm} \times \left[ \frac{p_{2^{m}-1}^{2^{m}-1}}{p_{2^{m}-1}^{2^{m}-2}} - \sum_{n=1}^{m-1} n \left( p_{2^{m}-1}^{2^{m}-1} \right)^{n} p_{2^{m}-1}^{2^{m}-2} \right] \end{split} \end{equation} \rm{where,} \begin{equation} \pi^{m-2}(i) = \sum_{j=0}^{2^{m}-1} \prod_{k=0}^{m-3} \left[ p^{2^{k+1} (j \hspace{0.1cm} \text{mod } 2^{m-k-1}) + \sum_{s=1}^{k+1} 2^{k-s+1} [ ( \frac{1}{2^{m-s-2}} ( i - (i \hspace{0.1cm} \text{mod } 2^{m-s-2}))) \text{mod } 2 ]}_{2^{k} (j \hspace{0.1cm} \text{mod } 2^{m-k}) + \sum_{s=1}^{k} 2^{k-s} [ ( \frac{1}{2^{m-s-2}} ( i - (i \hspace{0.1cm} \text{mod } 2^{m-s-2}))) \text{mod } 2 ]} \right] \pi^{m}(j) \end{equation} \end{theorem} \begin{proof} See Appendix \end{proof} By using the well known result, $\text{Var}[R] = \mathbb{E}[R^{2}] - (\mathbb{E}[R])^{2}$ (as well as results from the geometric distribution), we can calculate the variance for the run length distribution.
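The closed form of Lemma \ref{ERL2} can be verified against a direct (truncated) sum over the run length distribution. The following is an illustrative sketch (function names are ours, not from the paper):

```python
# Sketch checking the m = 2 expected run length of Lemma (ERL2) against
# a truncated direct sum of n * P(R = n).
def expected_run_length(p01_11, p11_11):
    """Closed form: E[R] = p_01^10 + p_01^11 (1 - (p_11^10)^2) / (p_11^11 p_11^10)."""
    p01_10, p11_10 = 1 - p01_11, 1 - p11_11
    return p01_10 + p01_11 * (1 - p11_10 ** 2) / (p11_11 * p11_10)

def expected_run_length_numeric(p01_11, p11_11, n_max=3000):
    """Direct truncated sum over the pmf of Lemma (RLD2)."""
    p01_10, p11_10 = 1 - p01_11, 1 - p11_11
    return p01_10 + sum(n * p01_11 * p11_11 ** (n - 2) * p11_10
                        for n in range(2, n_max))
```

The two agree to numerical precision across the transition probabilities used in the examples of Section \ref{Examples1}.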
The squared expectation of the run length distribution for the $m=2$ word length case is given in Lemma \ref{VRL2} and the squared expectation of the run length distribution for word lengths $m \ge 3$ is shown in Theorem \ref{VRLM}. \begin{lemma}[Squared Expectation of Run Length, $m=2$] \label{VRL2} \rm{Given the $m=2$ run length distribution in Lemma \ref{RLD2} for random variable $R$, the squared expectation of the run length is given as:} \begin{equation} \mathbb{E}[R^{2}] = p_{01}^{10} + \frac{p_{01}^{11} \left( 2 - p_{11}^{10} - \left( p_{11}^{10} \right)^{3} \right)}{p_{11}^{11} \left(p_{11}^{10}\right)^{2}} \end{equation} \end{lemma} \begin{proof} See Appendix \end{proof} \begin{theorem}[Squared Expectation of Run Length, $m \ge 3$] \label{VRLM} \rm{Given the run length distribution in Theorem \ref{RLD3} for random variable $R$, the squared expectation of the run length is given as:} \begin{equation} \begin{split} \mathbb{E}[R^{2}] &= \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) \hspace{0.1cm} p_{4i+1}^{2^{3}(i \hspace{0.1cm} \text{mod } 2^{m-3}) + 2} \\ & \hspace{1cm} + \sum_{n=2}^{m-1} n^{2} \left[ \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) \hspace{0.1cm} \left[ \prod_{j=1}^{n} p_{2^{j+1}(i \hspace{0.1cm} \text{mod } 2^{m-j-1}) + (2^{j}-1)} ^{2^{j+2} (i \hspace{0.1cm} \text{mod } 2^{m-j-2}) + (2^{j+1}-1) - \textbf{1}_{j=n} } \right] \right] \\ & \hspace{1cm} + \frac{1}{\left( p^{2^{m}-1}_{2^{m}-1} \right)^{m}} \left[ \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) \hspace{0.1cm} \left[ \prod_{j=1}^{m-1} p_{2^{j+1}(i \hspace{0.1cm} \text{mod } 2^{m-j-1}) + (2^{j}-1)} ^{2^{j+2} (i \hspace{0.1cm} \text{mod } 2^{m-j-2}) + (2^{j+1}-1) } \right] \right] \\ & \hspace{2cm} \times \left[ \frac{p_{2^{m}-1}^{2^{m}-1} (2 - p_{2^{m}-1}^{2^{m}-2})}{(p_{2^{m}-1}^{2^{m}-2})^{2}} - \sum_{n=1}^{m-1} n^{2} \left( p_{2^{m}-1}^{2^{m}-1} \right)^{n} p_{2^{m}-1}^{2^{m}-2} \right] .
\end{split} \end{equation} \rm{where,} \begin{equation} \pi^{m-2}(i) = \sum_{j=0}^{2^{m}-1} \prod_{k=0}^{m-3} \left[ p^{2^{k+1} (j \hspace{0.1cm} \text{mod } 2^{m-k-1}) + \sum_{s=1}^{k+1} 2^{k-s+1} [ ( \frac{1}{2^{m-s-2}} ( i - (i \hspace{0.1cm} \text{mod } 2^{m-s-2}))) \text{mod } 2 ]}_{2^{k} (j \hspace{0.1cm} \text{mod } 2^{m-k}) + \sum_{s=1}^{k} 2^{k-s} [ ( \frac{1}{2^{m-s-2}} ( i - (i \hspace{0.1cm} \text{mod } 2^{m-s-2}))) \text{mod } 2 ]} \right] \pi^{m}(j) \end{equation} \end{theorem} \begin{proof} See Appendix \end{proof} \noindent{\textbf{Generating Functions of Run Length}} Generating functions \citep{Wilf1994, Johnson2005} are a powerful tool, often used as an alternative to a distribution when the distribution is too complex to work with. The following generating functions are formed from recurrence relationships which give the general form for the sequence of run lengths. Again, since there is a relationship between the de Bruijn run length distribution and a geometric distribution, both the probability and moment generating functions for the geometric distribution can be utilised. The probability generating function for the run length distribution takes the form $G_{R}(t) = \mathbb{E}[t^{R}] = \sum_{n=1}^{\infty} t^{n} P(R = n)$. The result for $m=2$ is given in Lemma \ref{PGF2} and the result for $m \ge 3$ is given in Theorem \ref{PGFM}. The latter has the same structure as both the geometric and $m=2$ cases, with the slight difference that there is now an additional polynomial which comes from the initial burn-in period before the all-$1$ word is reached. Once we have reached this word, and continue drawing $1$'s to the chain, this becomes equivalent to the geometric distribution.
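The $m=2$ closed form of Lemma \ref{PGF2} can be compared directly with the defining series. The sketch below is illustrative only (function names are ours):

```python
# Sketch comparing the closed-form m = 2 probability generating function
# of Lemma (PGF2) with the series definition sum_n t^n P(R = n).
def pgf_closed(p01_11, p11_11, t):
    p01_10, p11_10 = 1 - p01_11, 1 - p11_11
    return (((p01_11 * p11_10 - p01_10 * p11_11) * t ** 2 + p01_10 * t)
            / (1 - p11_11 * t))

def pgf_series(p01_11, p11_11, t, n_max=3000):
    p01_10, p11_10 = 1 - p01_11, 1 - p11_11
    return p01_10 * t + sum(t ** n * p01_11 * p11_11 ** (n - 2) * p11_10
                            for n in range(2, n_max))
```

Both agree for $|t| \le 1$ (with $p_{11}^{11} t < 1$), and $G_{R}(1) = 1$ as required of a probability generating function.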
\begin{lemma}[Run Length Probability Generating Function, $m=2$] \label{PGF2} \rm{Given the $m=2$ run length distribution in Lemma \ref{RLD2} for random variable $R$, the probability generating function of the run length is given as follows:} \begin{equation} G_{R}(t) = \frac{\left( p_{01}^{11}p_{11}^{10} - p_{01}^{10}p_{11}^{11} \right)t^{2} + p_{01}^{10}t }{1 - p_{11}^{11}t} \end{equation} \end{lemma} \begin{proof} See Appendix \end{proof} \begin{theorem}[Run Length Probability Generating Function, $m \ge 3$] \label{PGFM} \rm{Given the run length distribution in Theorem \ref{RLD3} for random variable $R$, the probability generating function of the run length is given as follows:} \begin{equation} G_{R}(t) = \sum_{s=1}^{m} a_{s}t^{s} + \frac{p_{2^{m}-1}^{2^{m}-1} a_{m} t^{m+1}}{1 - p_{2^{m}-1}^{2^{m}-1}t } \end{equation} \rm{where, } \begin{equation} \begin{split} a_{1} &= \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) p_{4i+1}^{2^{3}(i \hspace{0.1cm} \text{mod } 2^{m-3}) + 2} \\ a_{2} &= \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) p_{4i+1}^{2^{3}(i \hspace{0.1cm} \text{mod } 2^{m-3})+3} p_{2^{3}(i \hspace{0.1cm} \text{mod } 2^{m-3}) + (2^{2}-1)} ^{2^{4} (i \hspace{0.1cm} \text{mod } 2^{m-4}) + (2^{3}-1) - 1 } \\ &\vdots \hspace{3cm} \\ a_{m-1} &= \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) \left[ \prod_{j=1}^{m-1} p_{2^{j+1}(i \hspace{0.1cm} \text{mod } 2^{m-j-1}) + (2^{j}-1)} ^{2^{j+2} (i \hspace{0.1cm} \text{mod } 2^{m-j-2}) + (2^{j+1}-1) - \textbf{1}_{j=m-1} } \right] \\ a_{m} &= \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) \left[ \prod_{j=1}^{m-1} p_{2^{j+1}(i \hspace{0.1cm} \text{mod } 2^{m-j-1}) + (2^{j}-1)} ^{2^{j+2} (i \hspace{0.1cm} \text{mod } 2^{m-j-2}) + (2^{j+1}-1) } \right] p_{2^{m}-1}^{2^{m}-2} \end{split} \end{equation} \end{theorem} \begin{proof} See Appendix \end{proof} We can also calculate the moment generating function of the run length distribution, which takes the form $M_{R}(t) = \mathbb{E}[e^{Rt}] = \sum_{n=1}^{\infty} e^{nt} P(R = n)$.
The moment generating functions when $m=2$ and $m \ge 3$ are given in Lemma \ref{MGF2} and Theorem \ref{MGFM} respectively. As for the case in Theorem \ref{PGFM}, we can see the similarities to the geometric case, but with the addition of a polynomial term representing the burn-in period to the de Bruijn word consisting of all $1$'s. \begin{lemma}[Run Length Moment Generating Function, $m=2$] \label{MGF2} \rm{Given the $m=2$ run length distribution in Lemma \ref{RLD2} for random variable $R$, the moment generating function of the run length is given as follows:} \begin{equation} M_{R}(t) = \frac{\left( p_{01}^{11}p_{11}^{10} - p_{01}^{10}p_{11}^{11} \right)e^{2t} + p_{01}^{10}e^{t} }{1 - p_{11}^{11}e^{t} } \end{equation} \end{lemma} \begin{proof} See Appendix \end{proof} \begin{theorem}[Run Length Moment Generating Function, $m \ge 3$] \label{MGFM} \rm{Given the run length distribution in Theorem \ref{RLD3} for random variable $R$, the moment generating function of the run length is given as follows:} \begin{equation} M_{R}(t) = \sum_{s=1}^{m} a_{s}e^{st} + \frac{p_{2^{m}-1}^{2^{m}-1} a_{m} e^{(m+1)t}}{1 - p_{2^{m}-1}^{2^{m}-1}e^{t} } \end{equation} \rm{where,} \begin{equation} \begin{split} a_{1} &= \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) p_{4i+1}^{2^{3}(i \hspace{0.1cm} \text{mod } 2^{m-3}) + 2} \\ a_{2} &= \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) p_{4i+1}^{2^{3}(i \hspace{0.1cm} \text{mod } 2^{m-3})+3} p_{2^{3}(i \hspace{0.1cm} \text{mod } 2^{m-3}) + (2^{2}-1)} ^{2^{4} (i \hspace{0.1cm} \text{mod } 2^{m-4}) + (2^{3}-1) - 1 } \\ &\vdots \hspace{3cm} \\ a_{m-1} &= \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) \left[ \prod_{j=1}^{m-1} p_{2^{j+1}(i \hspace{0.1cm} \text{mod } 2^{m-j-1}) + (2^{j}-1)} ^{2^{j+2} (i \hspace{0.1cm} \text{mod } 2^{m-j-2}) + (2^{j+1}-1) - \textbf{1}_{j=m-1} } \right] \\ a_{m} &= \sum_{i=0}^{2^{m-2}-1} \pi^{m-2}(i) \left[ \prod_{j=1}^{m-1} p_{2^{j+1}(i \hspace{0.1cm} \text{mod } 2^{m-j-1}) + (2^{j}-1)} ^{2^{j+2} (i \hspace{0.1cm} \text{mod } 2^{m-j-2}) + 
(2^{j+1}-1) } \right] p_{2^{m}-1}^{2^{m}-2} \end{split} \end{equation} \end{theorem} \begin{proof} See Appendix \end{proof} \section{Examples} \label{Expp} Consider the four $m=2$ examples in Figure \ref{Samp4} from Section \ref{Examples1}. We can compare the theoretical results obtained from the de Bruijn run length against the actual run lengths generated from these simulations. Table \ref{RLProb} shows the probabilities of getting a run of length $n$ for $n = 1, \ldots, 10$ from the run length distribution in Lemma \ref{RLD2}. They are presented in the same order as above so that DBP 1 refers to the de Bruijn process with transition probabilities $\{0.9,0.1,0.9,0.1\}$ and DBP 4 refers to probabilities $\{0.1, 0.9, 0.1, 0.9\}$. The distributions of run lengths from each sequence are also depicted in the histograms in Figure \ref{RLhist} along with the theoretical run length distributions as seen in Table \ref{RLProb}. \begin{table}[h!] \centering \begin{tabular}{||c | c c c c ||} \hline Run Length, n & DBP 1 & DBP 2 & DBP 3 & DBP 4 \\ [0.5ex] \hline\hline 1 & $0.9$ & 0.5 & 0.25 & 0.1 \\ 2 & $0.09$ & 0.25 & 0.188 & 0.09 \\ 3 & $0.009$ & 0.125 & 0.141 & 0.081 \\ 4 & $0.0009$ & 0.0625 & 0.105 & 0.0729 \\ 5 & $9 \times 10^{-4}$ & 0.0313 & 0.0791 & 0.0656 \\ 6 & $9 \times 10^{-5}$ & 0.0156 & 0.0593 & 0.0590 \\ 7 & $9 \times 10^{-6}$ & 0.00781 & 0.0445 & 0.0531 \\ 8 & $9 \times 10^{-7}$ & 0.00391 & 0.0334 & 0.0478 \\ 9 & $9 \times 10^{-8}$ & 0.00195 & 0.0250 & 0.0430 \\ 10 & $9 \times 10^{-9}$ & 0.000977 & 0.0188 & 0.0387 \\ [1ex] \hline \end{tabular} \caption{The probabilities of getting run lengths of $n = 1, ..., 10$ for four different de Bruijn processes of word length $m=2$. 
The corresponding transition probabilities ($\{ p_{00}^{01}, p_{01}^{11}, p_{10}^{01}, p_{11}^{11} \}$) for these four processes are as follows: DBP 1: $\{0.9, 0.1, 0.9, 0.1\}$, DBP 2: $\{0.5, 0.5, 0.5, 0.5\}$, DBP 3: $\{0.25, 0.75, 0.25, 0.75\}$, DBP 4: $\{0.1, 0.9, 0.1, 0.9\}$.} \label{RLProb} \end{table} DBP 1 has the highest chance of a short run length, with the probability of a run length $R \ge 2$ quickly becoming very small. There is a $90\%$ chance of a run length of $R=1$, since this process is designed to alternate constantly between the two letters. DBP 2 has a $50\%$ chance of a run of length $R=1$. This probability continues to drop across the remaining processes, until DBP 4 has only a $10\%$ chance of a run of length $R=1$. We also notice that the probabilities for DBP 3 and DBP 4 remain fairly similar across all run lengths, whereas those for DBP 1 and DBP 2 decay at a fast rate. The expected run lengths and variances of run lengths for the four examples are given in Table \ref{RLExVar}. Here we have given both the analytical results from Theorems \ref{ERLM} and \ref{VRLM} and the simulated results from the length $n=200$ sequences in Figure \ref{Samp4}. In all cases the two values agree, with an average difference of $2.18\%$. Hence, this gives confidence that both the simulated and analytic run lengths are calculated correctly. The expected run length of the structured example (DBP 1) is $1.11$, which is to be expected since the de Bruijn process is designed to alternate letters constantly. The variance for this process is $0.12$, which confirms that the process is very structured and does not deviate much from an average run length close to $R=1$. Both values match the theoretical results exactly. The random Bernoulli sequence has both an expected run length of $2$ and a variance of $2$.
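The analytical expectations and variances above can be recovered directly from the $m=2$ generating function in Lemma \ref{PGF2}, since $\mathbb{E}[R] = G_{R}'(1)$ and $\text{Var}(R) = G_{R}''(1) + G_{R}'(1) - G_{R}'(1)^{2}$. A minimal sketch in Python, assuming the complement notation $p_{01}^{10} = 1 - p_{01}^{11}$ and $p_{11}^{10} = 1 - p_{11}^{11}$:

```python
def m2_run_length_moments(p01_11, p11_11):
    """Mean and variance of the m=2 run length via derivatives of the PGF.

    Write G_R(t) = (A t^2 + B t) / (1 - q t) with q = p11_11, B = 1 - p01_11
    and A = p01_11*(1 - q) - B*q; differentiating the quotient twice and
    evaluating at t = 1 gives the first two factorial moments.
    """
    q = p11_11
    B = 1.0 - p01_11
    A = p01_11 * (1.0 - q) - B * q
    d = 1.0 - q                                   # denominator value at t = 1
    G1 = ((2 * A + B) * d + q * (A + B)) / d ** 2           # G'(1) = E[R]
    G2 = (2 * A * d ** 2 + 2 * q * (2 * A + B) * d
          + 2 * q * q * (A + B)) / d ** 3                   # G''(1) = E[R(R-1)]
    return G1, G2 + G1 - G1 ** 2
```

For the four example processes this returns approximately $(1.11, 0.12)$, $(2, 2)$, $(4, 12)$ and $(10, 90)$, matching the analytical columns of Table \ref{RLExVar}.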
For DBP 3 and DBP 4, since the de Bruijn processes are designed to generate more clustered sequences, the expected run lengths are also shown to increase. This is to be expected since the transition probabilities strongly favour remaining at the same letter. The variance for these de Bruijn processes also increases significantly; such processes are more likely to generate several very long runs as well as a few short ones by chance. We observe this effect in the histograms in Figure \ref{RLhist}, where the spread of run lengths becomes much larger for DBP 3 and DBP 4. Occasionally we observe a very long run, but we also occasionally draw very short ones. There is also likely to be more variance in the simulations, especially if the de Bruijn Markov chains are not run for a sufficient amount of time. \begin{figure}[ht] \centering \includegraphics[scale=0.6]{denplot1.png} \caption{Histograms showing the distributions of run lengths of 1's from the de Bruijn process examples in Figure \ref{Samp4}. Black dots give the theoretical run length distribution derived from Lemma \ref{RLD2}. The run lengths increase as the de Bruijn process gets increasingly clustered. } \label{RLhist} \end{figure} Table \ref{RLExVar} also shows values for two standard deviations of the sample expected run lengths and two standard deviations of the sample variance of run lengths. These are calculated using the expressions $\sqrt{\frac{\sigma^{2}}{n}}$ and $\sqrt{\frac{\mu_{4}}{n} - \frac{\sigma^{4} (n-3)}{n(n-1)}}$ respectively, where $n = 200$ and $\mu_{4}$ is the fourth central moment of the run length distribution. It can be obtained from the cumulant generating function of the run length distribution, since the fourth cumulant (the fourth derivative evaluated at zero) equals $\mu_{4} - 3\sigma^{4}$. All the differences between the analytical and simulated expectations and variances are within $\pm$ two standard deviations. \begin{table}[h!] \centering \begin{tabular}{||c | c c c c c c||} \hline D. B. Process & A. Exp & S. Exp & 2 Sd(S. E.) & A.
Var & S. Var & 2 Sd(S. V.) \\ [0.5ex] \hline\hline DBP 1 & 1.11 & 1.11 & 0.05 & 0.12 & 0.12 & 0.063 \\ DBP 2 & 2 & 2.02 & 0.20 & 2 & 2.21 & 0.66 \\ DBP 3 & 4 & 4.10 & 0.49 & 12 & 12.07 & 3.83 \\ DBP 4 & 10 & 9.47 & 1.34 & 90 & 93.57 & 28.52 \\ [1ex] \hline \end{tabular} \caption{The theoretical expectation (A. Exp), simulated expectation (S. Exp), two standard deviations of the simulated expectation (2 Sd(S. E.)), analytical variance (A. Var), simulated variance (S. Var) and two standard deviations of the simulated variance (2 Sd(S. V.)) of the run length distribution given a sequence of length 200 for four different de Bruijn processes from Figure \ref{Samp4}.} \label{RLExVar} \end{table} \subsection{Application: Precipitation} As an example application, we finally investigate whether the amount of precipitation observed can be successfully described using a de Bruijn process structure. In order to do this, we have obtained daily weather recordings from a human-facilitated observation station used in the Global Historical Climatology Network-Daily (GHCN) database. There are 455 daily measurements of precipitation recorded at a station in Eskdalemuir, UK from January 2021 to March 2022. The data give the total amount of precipitation (rain, melted snow, etc.) recorded in inches in a 24 hour period ending at the observation time. We further translate this to binary data, recording whether precipitation was observed ($1$) or not ($0$). The full data are given in Figure \ref{Rain}. \begin{figure}[ht] \centering \includegraphics[scale=0.35]{raindata.png} \caption{Daily precipitation data recorded at a station in Eskdalemuir, UK.
Dark blue (or $1$) represents that there was precipitation recorded in the 24 hour period, whilst light blue (or $0$) corresponds to no precipitation recorded that day.} \label{Rain} \end{figure} Since the amount of precipitation can depend on the current season, the data are first split into the corresponding seasons (spring, summer, autumn and winter). The corresponding distributions of run lengths are presented in the histograms in Figure \ref{RLhist2}, in the above order of seasons. The distributions of run lengths differ between seasons: in the colder seasons there are both many single days of rain and long periods of rain lasting up to 27 days. A de Bruijn process is then fitted separately to each season, estimating both the word length and the associated transition probabilities. See the companion paper (PAPER 2) for more information on model fitting and inference. Both spring and summer were estimated to have a word length of $m=2$, whilst autumn and winter were instead estimated to have a word length of $m=3$. The black dots in Figure \ref{RLhist2} correspond to the theoretical run lengths generated by applying the estimated transition probabilities to Lemma \ref{RLD2} and Theorem \ref{RLD3}. In general, the theoretical run lengths match well with the run lengths from the data, implying that the data can indeed be represented by a de Bruijn structure. We have further compared the theoretical run lengths from the de Bruijn processes with those of a geometric distribution. The black crosses in Figure \ref{RLhist2} give the probabilities of a run of length $n$ generated from a geometric distribution fitted to each given season. It is evident that in each season the observed run lengths are matched far better by the de Bruijn process than by the geometric distribution.
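The geometric benchmark used in this comparison can be sketched as follows. The run lengths and de Bruijn parameters below are purely illustrative (they are not the Eskdalemuir values); the geometric success probability is fitted by maximum likelihood as the reciprocal of the mean run length, while the de Bruijn probabilities follow the $m=2$ form of Lemma \ref{RLD2}:

```python
import statistics

def geometric_pmf(n, p):
    # memoryless benchmark: P(R = n) = p * (1 - p)^(n - 1)
    return p * (1.0 - p) ** (n - 1)

def debruijn_m2_pmf(n, p01_11, p11_11):
    # m=2 de Bruijn run length probabilities (Lemma RLD2)
    if n == 1:
        return 1.0 - p01_11
    return p01_11 * (1.0 - p11_11) * p11_11 ** (n - 2)

# illustrative run lengths of consecutive wet days (not the observed data)
runs = [1, 1, 2, 3, 1, 5, 2, 8, 1, 4]
p_hat = 1.0 / statistics.mean(runs)        # geometric MLE: p = 1/mean
fit = [(n, geometric_pmf(n, p_hat), debruijn_m2_pmf(n, 0.7, 0.65))
       for n in range(1, 11)]
```

Plotting the two fitted probability mass functions over a run-length histogram, as in Figure \ref{RLhist2}, makes the extra flexibility of the de Bruijn process visible: the geometric fit is forced to decay at a single rate, whereas the de Bruijn fit decouples the probability of a run of length one from the tail decay.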
We can conclude that the precipitation recorded has correlation structures that are far more complex than those of Bernoulli trials or a Markov property. \begin{figure}[ht] \centering \includegraphics[scale=0.7]{Raindenplot2.png} \caption{Histograms showing the seasonal distributions of run lengths of recorded precipitation in a 24 hour period at a station in Eskdalemuir. From top to bottom, the data correspond to the seasons spring, summer, autumn and winter from January 2021 to March 2022. Black dots give the theoretical run length distributions derived from the estimated de Bruijn processes. Black crosses give a geometric density fitted to the data.} \label{RLhist2} \end{figure} \section{Discussion} \label{Conclu} We have introduced a novel method for modelling correlated binary random variables, where the strength of the correlation is related to distance. It is often the case that variables close in distance are more likely to be highly correlated than those further away. Hence, by altering the correlation we are able to generate both clustered and anti-clustered sequences of $0$'s and $1$'s. The process uses structures from de Bruijn graphs. These are directed graphs in which the nodes are length $m$ sequences of $0$'s and $1$'s. If a Markov property is placed on these length $m$ sequences rather than on the variables themselves, then we are able to control the amount of correlation included in the neighbourhood. Examples are presented throughout, as well as details of the stationary distribution for the length $m$ sequences. We have also presented a run length distribution in order to determine how clustered a sequence is likely to be for a given de Bruijn process. The run length distribution gives the probability of a run of $1$'s of length $n$ bounded by a $0$ at each end. From this, we are then able to calculate the expected run length, the variance of the run length and the generating functions.
There are many areas for future work on the topic of de Bruijn graphs and correlated Bernoulli processes. Firstly, we need to establish a method of inference. Given a sequence of binary variables known to be generated from a de Bruijn process, it is important that we have a method to estimate both the de Bruijn word length (the length $m$ sequence of each node) and the associated transition probabilities. We will then be able to test the proposed de Bruijn process on real-life applications. This is presented in a companion paper (PAPER 2). Secondly, we must also consider de Bruijn processes in two (or more) dimensions, as well as non-directional de Bruijn processes. Many applications (particularly in ecology) are defined on a 2-dimensional spatial grid. However, when considering the form of a de Bruijn word in two dimensions, it becomes very difficult to work in a specific direction (something that is very important for de Bruijn graphs). Hence, we believe further research should focus on trying to remove this natural direction in simulation. De Bruijn graphs can quickly become very complicated, with many possible transition probabilities for large word lengths. Therefore, we believe that it would be useful to try to limit the number of transition probabilities, which could be particularly important for inference. We also intend to develop non-stationary de Bruijn processes, where we would be able to create chains that are clustered in some places and anti-clustered in others. \bibliographystyle{abbrvnat}
\section{Introduction} Galaxies may experience interactions and mergers throughout their lifetimes. Tidal forces distort galaxy shapes, leading to the formation of different structures and substructures, e.g. shells, rings and tails, and to the onset of star formation inside and outside galaxies \citep[e.g.][]{Toomre77,Mendes04,Schiminovich13,Ueda14,Ordenes16}, depending on the nature and evolutionary stage of the on-going tidal interaction. In particular, after a close encounter of two gas-rich systems of similar mass, the gas may be stripped from the interacting galaxies, forming long filaments or tidal tails driven by gravity torques (e.g. the Antennae galaxies, the nearest example of merging disk galaxies in the Toomre 1977 sequence), while the stars mostly remain in the system, given their higher velocity dispersions and their collisionless dynamics. Once the gas has been removed, it can cool, self-gravitate and form new stars \citep{Duc12}. Thus, these systems are ideal laboratories to study star formation in extreme environments, in particular outside galaxies, in regions where, under normal conditions, the gas density would have been too low for star formation to occur \citep{Maybhate07,Sengupta15}. The details of the processes capable of triggering star formation in the low-density environments of galaxy outskirts and the intergalactic medium are still not fully understood. Numerical simulations carried out to study the dynamics of interacting and merging galaxies \citep[e.g.][]{Bournaud08,Bournaud10,Escala13,Renaud15} have shown that young stellar substructures are formed in the outskirts of merger remnants as well as outside galaxies, in gas clouds stripped during interactions. These simulated objects have properties similar to the observed ones: the largest objects are usually formed at the tip of the tails, and the objects have low M/L ratios and high metallicities.
Indeed the actively star-forming regions associated with the galaxy outskirts or intergalactic medium \citep[e.g.][]{Neff04,Lisenfeld07,Knierman12,Mullan13} have high metallicities for their luminosities, given that they are formed by gas that was pre-enriched in the ``parent" galaxy \citep[e.g.][]{Mendes04,Duc07,deMello12,Torres12,Torres14}. The evolution of the newly formed systems is mainly driven by gravitational turbulence and instabilities around the Jeans scale \citep{Bournaud10}. Most of the studies on star-forming regions in the outskirts of galaxies are based on the analysis of ongoing wet mergers, where H{\sc i}-rich tidal debris and tidal structures are present, and the interacting galaxies are still separate entities \citep[e.g.][]{Oosterloo04,Mendes04,Mendes06,Ryan04,Boquien07,Torres12,deMello12,Rodruck16,Lee16}. However, the environments of peculiar merger-candidate elliptical galaxies, with H{\sc i} outside their main optical body, have not received as much attention. This is an interesting variation given that these systems are in advanced stages of evolution. \cite{Rampazzo07} have shown a few examples of ``rejuvenated'' elliptical galaxies, which display young bursts of star formation. The object of study in this paper is one of these rejuvenated ellipticals. NGC~2865 is a genuinely peculiar elliptical galaxy with a surface brightness profile consistent with r$^{1/4}$ inside its effective radius \citep{Jorgensen92}, but deep images show shells and the disturbed morphology present in merging systems \citep{Rampazzo07}. NGC~2865, at a distance of $\sim$38 Mpc, has an extended tidal tail of H{\sc i} gas, settled in a ring around the galaxy, with a low surface brightness optical counterpart. The fine structures present around the galaxy are shells, very faint filaments and an outer loop, which are indicative of an advanced stage of interaction of $\sim$ 4 Gyr \citep{Malin83,Hau99}.
In \cite{Urrutia14} (hereafter UV14) we obtained a complete census of H$\alpha$-emitting sources in the southeastern region of the H{\sc i} ring of NGC~2865 using the multi-slit imaging spectroscopy (MSIS) technique \citep{Gerhard05,Arnaboldi07}. Using this technique (a combination of a mask of parallel multiple slits with a narrow-band filter centered around the H$\alpha$ line, see UV14 for details), seven candidate intergalactic H{\sc ii} regions were detected. Due to the short wavelength interval ($\sim$ 80 \AA{}) of the spectra, only one emission line (H$\alpha$) was typically detected. We were therefore not able to confirm the redshifts or compute metallicities from just one line. Thus, here we revisited NGC~2865 using multi-object spectroscopy of five of the seven H{\sc ii} regions previously detected. Over one of these regions we placed two slits, which resolved it into two different regions. The large wavelength interval used in the new observations, from 4500 \AA{} to 7000 \AA{}, allowed us to confirm the nature of the sources as newly formed objects and to derive their main physical parameters. The paper is organized as follows. In Section 2 we describe the observations and the data reduction. The analysis and results are described in Section 3. We discuss our results in Section 4 and present our conclusions in Section 5. Throughout this paper we use H$_0$ = 75 km s$^{-1}$ Mpc$^{-1}$, which results in a distance to NGC~2865 of 38 Mpc. At this distance, 1' = 11 kpc. The systemic velocity of NGC~2865 is 2627 km s$^{-1}$. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{Figure1.jpg} \caption{Gemini g'-band images; the regions are marked and a zoom is shown for each one. The size of the zoom images is 6.5" $\times$ 6.5". A radius of 1" was used for the photometry, as indicated in the zoom images.
North is up and East is to the left.} \label{fig:image} \end{figure*} \section{Observation and Reduction} \subsection{Observation} The data were collected with the Gemini Multi-Object Spectrograph \citep[][hereafter GMOS]{Hook04} mounted on the Gemini South telescope in Chile in queue mode (Program ID. GS-2011A-Q-55). Given that we wanted to compare the results with those obtained with the MSIS technique, our field of view is the same as observed in UV14. We observed the southeastern H{\sc i} tail of NGC~2865 ($\alpha$(2000) = 9$^h$23$^m$37$\fs$13, $\delta$(2000) = -23${\degr}$11$^m$54$\fs$34) in the g' filter on February 1 2011 (UT) to build the multi-slit mask. The spectra were observed between April 13 and April 27 2013, under gray and photometric conditions, and with a seeing ranging between 0\farcs8 and 1\arcsec. We centered the slits on 20 sources across the GMOS field of view, 5 of which were previously identified in UV14. For the region $IG\_04$ we set two slits, one on the stellar cluster (or main source) and the other on the tail, as defined by UV14. To avoid confusion, we re-defined the ID of the region $IG\_04$ as $IG\_04\_main$ and $IG\_04\_tail$ according to where we set the slit (see Fig. \ref{fig:image}). The spectra in the mask were observed using the R400 grating, 1\arcsec slits, 2 $\times$ 2 binning, and centered at 6550\AA{}. A total of 12 exposures of 1150 seconds each were obtained. An offset of 50\AA{} towards the blue or the red was performed between successive exposures, such that the central wavelength ranged from 6400\AA{} to 6650\AA{}, to avoid losing any important emission lines that could fall, by chance, in the gaps between CCDs. Spectroscopic flat fields and CuAr comparison lamp spectra were taken before and after each science exposure.
\subsection{Data reduction} All spectra were reduced with the Gemini GMOS package version 1.8 inside IRAF\footnote{{\sc iraf} is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the NSF.} following the standard procedures for MOS observations. Science exposures, spectroscopic flats and CuAr comparison lamps were overscan/bias-subtracted and trimmed. The two-dimensional science spectra were flat fielded, wavelength calibrated, rectified (S-shape distortion corrected) and extracted to one-dimensional format. The cosmic rays were removed using the Laplacian Cosmic Ray Identification algorithm \citep{vanDokkum01}. The final spectra have a resolution of $\sim$ 7.0\AA{} (as measured from the sky lines 5577\AA{} and 6300\AA{}) and a dispersion of 1.36 \AA{} pixel$^{-1}$, with a typical wavelength coverage between $\sim$ 4500\AA{} and $\sim$7000\AA{} (see Fig. 2). The science spectra were flux calibrated using the spectrum of the spectrophotometric standard star LTT~7379, observed during the night of April 11, 2011 UT, under a different observing program and with the same instrument setup used for the science spectra. The standard star being observed on a different night ensures good relative flux calibration, although with a probably uncertain absolute zero point. The flux standard was reduced following the same procedures used for the science frames.
\begin{table} \centering \begin{minipage}{80mm} \caption{Physical properties of the intergalactic H{\sc ii} regions.} \label{table:obs} \begin{tabular}{@{}lc c c c @{}} \hline \multicolumn{1}{c}{ID} & $\lambda H\alpha$ & $V_{sys}$ & D$_{projected}$ & Line ratio \\ & \AA{} & km s$^{-1}$ & kpc & ${H\alpha}/{H\beta}$ \\ \hline \hline {\it IG\_04\_main } & 6618 & 2571 & 16 & 7.27 \\ {\it IG\_04\_tail } & 6616 & 2471 & 16 & 6.84 \\ {\it IG\_17\_P1} & 6617 & 2461 & 15 & --- \\ {\it IG\_51\_P3} & 6615 & 2360 & 26 & 4.06 \\ {\it IG\_85\_P6} & 6614 & 2341 & 40 & --- \\ {\it IG\_52\_P7} & 6614 & 2331 & 37 & 6.00 \\ \hline \end{tabular} \end{minipage} \end{table} \subsection{Emission lines} The six spectra exhibit strong emission lines of H$\alpha$ and [N{\sc ii}]$\lambda$6583\AA{}. Four of the six spectra also show H$\beta$ and the forbidden lines: [O{\sc iii}]$\lambda\lambda$4959, 5007\AA{} and [S{\sc ii}]$\lambda\lambda$6717, 6731\AA{}. No significant underlying continuum was measured in any of the six cases, hence no continuum subtraction was done. The principal parameters for each spectrum presented in Table \ref{table:obs} are: column (1): ID; column (2), the central wavelength for the H$\alpha$ emission line; column (3), the heliocentric velocity, obtained with the task {\sc rvidline} from {\sc iraf} (using at least three emission lines); column (4), the projected distance (from the center of NGC~2865) in kpc; and column (5), the line ratio H$\alpha$/H$\beta$. The latter was used to estimate the color excess, $E(B-V)$ (see section 3.1). The errors for the velocities listed in column (3) were estimated using Monte Carlo simulations (100 runs), and were found to be $\sim$ 40 km s$^{-1}$ for each spectrum.
\begin{table*} \centering \caption{Line intensities of the Intergalactic H{\sc ii} regions around NGC~2865.} \label{table:lines_flux} \centering \begin{tabular}{l c c c c c c} \hline \multicolumn{1}{c}{ID} & Flux$_{H\beta}$ & Flux$_{[OIII]\lambda5007}$ & Flux$_{H\alpha}$ & Flux$_{[NII]\lambda6583}$ & Flux$_{[SII]\lambda6717}$ & Flux$_{[SII]\lambda6731}$ \\ \hline \hline {\it IG\_04\_main} & 0.49 $\pm$ 0.06 & 1.12 $\pm$ 0.36 & 1.27 $\pm$ 0.67 & 0.52 $\pm$ 0.16 & 0.22 $\pm$ 0.03 & 0.20 $\pm$ 0.07 \\ {\it IG\_04\_tail} & 2.41 $\pm$ 0.97 & 7.00 $\pm$ 0.76 & 7.63 $\pm$ 1.23 & 2.15 $\pm$ 0.36 & 1.16 $\pm$ 0.35 & 0.69 $\pm$ 0.23 \\ {\it IG\_17\_P1} & --- & 0.12 $\pm$ 0.05 & 0.13 $\pm$ 0.02 & 0.03 $\pm$ 0.01 & --- & --- \\ {\it IG\_51\_P3} & 0.04 $\pm$ 0.01 & --- & 0.18 $\pm$ 0.05 & 0.05 $\pm$ 0.01 & --- & --- \\ {\it IG\_85\_P6} & --- & --- & 0.55 $\pm$ 0.13 & 0.15 $\pm$ 0.03 & 0.09 $\pm$ 0.06 & 0.07 $\pm$ 0.09 \\ {\it IG\_52\_P7} & 0.16 $\pm$ 0.35 & 0.10 $\pm$ 0.74 & 0.29 $\pm$ 0.46 & 0.08 $\pm$ 0.02 & 0.06 $\pm$ 0.01 & 0.04 $\pm$ 0.02 \\ \hline \end{tabular} \tablefoot{The fluxes are in units of 10$^{-15}$ erg s$^{-1}$ cm$^{-2}$.} \end{table*} \section{Analysis and Results} For each of the six regions, when possible, we derived the following parameters: i) color excess $E(B-V)$, ii) oxygen abundance, 12 + log(O/H), and iii) electron density, $n_e$. These results are used in the following analysis of the physical and chemical properties of the H{\sc ii} regions around NGC~2865. \subsection{Reddening correction E(B-V)} \label{sec:redd} Dust extinction in each region was estimated from the line ratio H$\alpha$/H$\beta$ \citep{Calzetti94}. The intrinsic value for the H$\alpha_0$/H$\beta_0$ ratio for an effective temperature of 10$^4$ K and N$_e$ = 10$^2$ cm$^{-3}$ is 2.863 \citep{Osterbrock06}. 
Thus, we estimated the extinction using the following relation \citep{Osterbrock06}: \begin{eqnarray} \frac{I_\lambda}{I_{H\beta}} & = & \frac{I_{\lambda0}}{I_{H\beta0}}10^{-c[f(\lambda)-f(H\beta)]} \end{eqnarray} \noindent where I$_\lambda$ and I$_{\lambda0}$ are the observed and the theoretical fluxes, respectively, $c$ is the reddening coefficient and $f(\lambda)$ is the reddening curve. The line ratios H$\alpha$/H$\beta$ used to estimate the dust extinction values are tabulated in Table \ref{table:obs}. The relation between $E(B - V)$ and $c$ depends on the shape of the extinction curve; assuming a Galactic standard extinction curve (R = 3.1, typical of the diffuse interstellar medium) and the H$\alpha$ line ($\lambda$ = 6563\AA{}), the values for each parameter are: f(H$\beta$) = 1.164, f(H$\alpha$) = 0.818. Thus, the color excess is given by $E(B - V) \approx 0.77c$. For the spectra {\it IG\_04\_main}, {\it IG\_04\_tail}, {\it IG\_51\_P3} and {\it IG\_52\_P7} the color excess was obtained using the H$\alpha$ and H$\beta$ lines, as described above. Given that H$\beta$ emission is not detected in the spectra of {\it IG\_17\_P1} and {\it IG\_85\_P6}, we used in these cases a correction equal to the average of the color excesses obtained for the other four regions. We corrected each spectrum for interstellar dust extinction using the extinction function given by \cite{Calzetti00}. This is thought to be appropriate for de-reddening the spectrum of a source whose output radiation is dominated by massive stars, as we assume the regions observed here are. The six extracted spectra after the extinction correction are shown in Figure \ref{fig:spectra}. The lines used are marked in each spectrum. In Table \ref{table:lines_flux} we list the resulting fluxes, after the reddening correction is applied. The four H{\sc ii} regions for which it was possible to estimate the reddening correction present high values of color excess, E(B-V) $>$ 0.33.
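The reddening estimate reduces to a one-line calculation once the observed Balmer decrement is known. A sketch in Python, using the intrinsic ratio and reddening-curve values quoted above (input ratios taken from Table \ref{table:obs}):

```python
import math

INTRINSIC_HA_HB = 2.863            # case B value for T = 1e4 K, N_e = 1e2 cm^-3
F_HBETA, F_HALPHA = 1.164, 0.818   # reddening curve at Hbeta and Halpha

def color_excess(ha_over_hb):
    """E(B-V) from the observed Halpha/Hbeta ratio, using E(B-V) ~ 0.77 c."""
    # solve I(Ha)/I(Hb) = 2.863 * 10^{-c [f(Ha) - f(Hb)]} for c
    c = math.log10(ha_over_hb / INTRINSIC_HA_HB) / (F_HBETA - F_HALPHA)
    return 0.77 * c
```

For {\it IG\_04\_main} (H$\alpha$/H$\beta$ = 7.27) this returns $E(B-V) \approx 0.90$, consistent with Table \ref{table:result}, and an unreddened ratio of 2.863 gives $E(B-V) = 0$ as expected.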
These values indicate a considerably high dust absorption, typical of a nebula where newborn stars are being formed \citep{Arias06}. Indeed, these four regions display quite young ages, as derived by UV14, ranging from $\sim$ 2 to 50 Myr, which strongly suggests that these sources are the birthplace of young stars. \begin{figure} \centering \includegraphics[width=\columnwidth]{spectra_paper.jpg} \caption{Spectra for the 6 intergalactic regions around NGC~2865, including IG\_04\_tail. } \label{fig:spectra} \end{figure} \subsection{Oxygen abundance} The most commonly measured tracers of metallicity are oxygen and iron, given their bright spectral lines. Iron is usually used as a metallicity indicator when old stars are the main component, while oxygen is the metallicity tracer of choice in the interstellar medium \citep{Henry99}. In ionized gas the most accurate estimator used to derive the chemical abundance is the electron temperature, T$_e$ \citep{Osterbrock06}. In the optical, T$_e$ can be estimated using sensitive lines such as [O{\sc iii}]$\lambda$4363\AA{}. However, the auroral lines are difficult to detect in extragalactic star forming regions, where cooling is efficient and the temperature is low. Thus, strong nebular lines must be used to derive oxygen abundances \citep[e.g.][]{Denicolo02,Pilyugin01,Pettini04}. The most frequently used empirical methods are based on the N$_2$ and O$_3$N$_2$ indices, \begin{equation} N_2 \equiv log \left( \frac{I([NII]\lambda 6583)}{I(H\alpha)} \right) \end{equation} \begin{equation} O_3N_2 \equiv log\left( \frac{I([OIII]\lambda 5007)}{I(H\beta)} \Big/ \frac{I([NII]\lambda 6583)}{I(H\alpha)} \right) \end{equation} \noindent We calculated the oxygen abundances using the empirical methods N$_2$ and, when possible, also O$_3$N$_2$, as proposed and calibrated by \cite{Pettini04}. These ``empirical'' methods are adequate for estimating oxygen abundances in extragalactic H{\sc ii} regions.
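These two indices, combined with the linear \cite{Pettini04} calibrations adopted below, turn the line fluxes of Table \ref{table:lines_flux} directly into oxygen abundances. A sketch in Python (flux arguments may be in any common unit, since only ratios enter):

```python
import math

def n2_abundance(f_nii, f_halpha):
    """12 + log(O/H) from the N2 index (Pettini & Pagel 2004 calibration)."""
    n2 = math.log10(f_nii / f_halpha)
    return 8.90 + 0.57 * n2

def o3n2_abundance(f_oiii, f_hbeta, f_nii, f_halpha):
    """12 + log(O/H) from the O3N2 index (Pettini & Pagel 2004 calibration)."""
    o3n2 = math.log10((f_oiii / f_hbeta) / (f_nii / f_halpha))
    return 8.73 - 0.32 * o3n2
```

For {\it IG\_04\_main} ([N{\sc ii}] = 0.52, H$\alpha$ = 1.27, [O{\sc iii}] = 1.12, H$\beta$ = 0.49) the two estimators give 12+log(O/H) $\approx$ 8.68 and 8.49 respectively, in line with Table \ref{table:result}.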
We used the linear relation between the oxygen abundance and the N$_2$ index given by: \begin{equation} 12 + log(O/H) = 8.90 + 0.57 \times N_2 \end{equation} \noindent Three regions have all four lines; for these we were also able to use the O$_3$N$_2$ index and the linear relation between O$_3$N$_2$ and the oxygen abundance given by: \begin{equation} 12 + log(O/H) = 8.73 - 0.32 \times O_3N_2 \end{equation} \noindent The uncertainties in the calibration of these methods are 0.18 dex and 0.14 dex (at the 68$\%$ confidence interval) for N$_2$ and O$_3$N$_2$, respectively. Table \ref{table:result} summarizes the oxygen abundances calculated for all of our H{\sc ii} regions, including the values of the N$_2$ and O$_3$N$_2$ indices for each region. We found an excellent agreement between the values derived from the N$_2$ and the O$_3$N$_2$ indices. The average value of the oxygen abundance for the six regions is 12+log(O/H) $\sim$ 8.6, which is very similar to typical values for extragalactic H{\sc ii} regions reported in the literature for other interacting systems \citep[e.g.][]{Werk10,Torres12,deMello12,Olave15}. \subsection{Electron density} The electron density, N$_e$, is one of the fundamental physical parameters used to characterize H{\sc ii} regions. It is a reliable physical quantity that is not sensitive to flux calibration errors or reddening. It is largely independent of metallicity, and only weakly depends on the electron temperature. The emission lines most commonly used to measure N$_e$ are [O{\sc ii}]$\lambda$3729\AA{}/$\lambda$3726\AA{} and [S{\sc ii}]$\lambda$6716\AA{}/$\lambda$6731\AA{}. Given the wavelength range of our spectra, N$_e$ was determined for four H{\sc ii} regions based on the measured intensity ratio [S{\sc ii}]$\lambda6716\AA{}$/$\lambda6731\AA{}$. We used the task {\sc temden}, available in the {\sc nebular} package of {\sc stsdas}, inside {\sc iraf V2.16}.
For the calculation we assume an electron temperature of 10$^4$ K, which is a representative value for H{\sc ii} regions \citep{Osterbrock06}. This assumption was necessary given that we have no independent measurements of the temperatures of these regions. It was possible to obtain the electron density for four of the six H{\sc ii} regions (see Table \ref{table:result}). We estimated values ranging from 90 to 327 cm$^{-3}$, which are typical values of the electron density observed in the Magellanic Clouds \citep{Feast64}, and in particular in bright regions of 30 Doradus \citep{Osterbrock06}. Table \ref{table:result} lists the physical parameters described above using the [O{\sc iii}], H$\beta$, [N{\sc ii}], H$\alpha$ and [S{\sc ii}] lines of the intergalactic H{\sc ii} regions. All parameters were obtained after applying the reddening correction. \begin{table*} \centering \caption{Physical parameters derived from spectral lines for the intergalactic H{\sc ii} regions.}\label{table:result} \begin{tabular}{@{}l c c c c c c c@{}} \hline \multicolumn{1}{c}{ID} & M$_{g'}$&N$_e$ & E(B-V) & N$_2$ & O$_3$N$_2$ & \multicolumn{2}{c}{12+log(O/H)} \\ & mag & cm$^{-3}$ & & & & N$_2$ & O$_3$N$_2$ \\ \hline \hline {\it IG\_04\_main} & -15.76 & 327 & 0.91 & -0.39 & 0.74 & 8.67 $\pm$ 0.18 & 8.49 $\pm$ 0.14\\ {\it IG\_04\_tail} & -15.31 & 90 & 0.84 & -0.54 & 1.01 & 8.58 $\pm$ 0.18 & 8.40 $\pm$ 0.14\\ {\it IG\_17\_P1} & --- & --- & 0.70$^*$ & -0.58 & --- & 8.56 $\pm$ 0.18 & --- \\ {\it IG\_51\_P3} & -14.58 & --- & 0.33 & -0.54 & --- & 8.58 $\pm$ 0.18 & --- \\ {\it IG\_85\_P6} & -14.22 & 176 & 0.70$^*$ & -0.54 & --- & 8.58 $\pm$ 0.18 & --- \\ {\it IG\_52\_P7} & -14.51 & 176 & 0.75 & -0.65 & 1.29 & 8.52 $\pm$ 0.18 & 8.31 $\pm$ 0.14 \\ \hline \end{tabular} \tablefoot{* Given that the H$\beta$ emission line was not detected in two cases, the color excess, E(B-V), was assumed to be an average of the values derived for the other four spectra.} \end{table*} \subsection{Confirming the MSIS
technique as an effective tool to find emission-line regions} The MSIS technique has been successfully used to detect extragalactic planetary nebulae in the Coma and Hydra {\sc i} clusters \citep[e.g.][]{Gerhard05,Ventimiglia11}. This technique combines a mask of parallel multiple slits with a narrow-band filter, centered around the [O{\sc iii}] $\lambda$5007\AA{} line, at the redshift of the object. In the case of the Coma cluster, \cite{Gerhard05} were able to detect faint objects with fluxes as low as 10$^{-18}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$. An attempt to use the same technique to detect star-forming regions in the outskirts of galaxies was made by UV14, for the case of NGC~2865. Given the higher contrast between emission and background, this technique has great potential to detect faint emission-line objects where regular narrow-band imaging would fail. However, given that the spectral range observed with such a technique is limited, the spectra shown in this paper were a necessary follow-up to validate the efficiency of the MSIS method. In the present paper we find that all of the objects detected with the MSIS technique (UV14) for which further spectroscopic follow-up was done were confirmed to be star-forming regions. This demonstrates that the MSIS technique is very efficient in finding emission-line objects. This type of technique will become increasingly useful as the new generation of extremely large telescopes comes on-line and blind searches for emission-line objects can be done efficiently. \section{Discussion} In the previous sections, we described the main properties of the six intergalactic H{\sc ii} regions found in the southeast ring of H{\sc i} gas within a radius of $\sim$ 50 kpc of the elliptical galaxy NGC~2865. The study is based on six emission lines: H$\beta$, [O{\sc iii}]$\lambda$ 5007\AA{}, H$\alpha$, [N{\sc ii}], and [S{\sc ii}]$\lambda\lambda$ 6717,6731\AA{}.
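The oxygen abundances in Table \ref{table:result} follow from the N$_2$ and O$_3$N$_2$ indices via strong-line calibrations. A minimal sketch, assuming the widely used linear calibrations 12+log(O/H) $= 8.90 + 0.57\,$N$_2$ and 12+log(O/H) $= 8.73 - 0.32\,$O$_3$N$_2$ (an assumption on our part, chosen because they reproduce the tabulated abundances to within rounding):

```python
# Hedged sketch: strong-line oxygen abundances from the N2 and O3N2 indices.
# The linear calibrations below are assumed (they match the tabulated values
# to ~0.01 dex); the quoted uncertainties in the table, 0.18 dex (N2) and
# 0.14 dex (O3N2), are the usual calibration scatters.

def oh_from_n2(n2):
    """12 + log(O/H) from N2 = log([NII]6584 / Halpha)."""
    return 8.90 + 0.57 * n2

def oh_from_o3n2(o3n2):
    """12 + log(O/H) from O3N2 = log(([OIII]5007/Hbeta) / ([NII]6584/Halpha))."""
    return 8.73 - 0.32 * o3n2

# IG_04_main: N2 = -0.39 and O3N2 = 0.74 give abundances consistent with the
# tabulated 8.67 +/- 0.18 and 8.49 +/- 0.10.
print(round(oh_from_n2(-0.39), 2), round(oh_from_o3n2(0.74), 2))
```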
The six regions were previously detected using the MSIS technique by UV14, but only one line (H$\alpha$) was observed then. The detection of at least three emission lines in each of the six regions, shown in this work, indicates that the sources have a velocity difference of $\sim$ 50 - 300 km s$^{-1}$ with respect to the systemic velocity of NGC~2865 and that they have close to solar oxygen abundance \citep[12 + log (O/H) $\approx$ 8.6,][]{Asplund09}. \subsection{Star formation outside galaxies} The six intergalactic H{\sc ii} regions are located 16-40 kpc from the center of NGC~2865 and show strong emission in H$\alpha$ and low or absent continuum. The indication that they are young comes from the ages derived in UV14 from ultraviolet fluxes, but our new observations constrain the ages further: strong emission in H$\alpha$ is a direct indicator of recent star formation, showing that the last starburst occurred less than 10 Myr ago, assuming single stellar populations \citep{Leitherer99}. The indication that they formed in situ comes from the prohibitively large velocities derived for these regions under the assumption that they traveled from inside the parent galaxy to their present location. Approximate velocities can be derived in back-of-the-envelope calculations, taking as input the projected distance of the objects (with respect to the parent galaxy) and the derived ages. If the intergalactic H{\sc ii} regions were thrown out of the disk of the main galaxy during an interaction, the typical velocity they would need is over 16 kpc / 10 Myr $\approx$ 1600 km s$^{-1}$, which is not expected given the low velocity dispersions present in the field or in groups of galaxies (typically 200 km s$^{-1}$), arguing in favor of in-situ formation. In addition, their metallicities were found to be close to solar. These high metallicities could be explained if the regions were formed out of enriched gas which was removed from the galaxy during the merger event.
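The back-of-the-envelope velocity argument can be reproduced directly; a minimal sketch (the 16 kpc projected distance and 10 Myr age come from the text, the unit conversions are standard):

```python
# Lower limit on the ejection velocity the HII regions would need if they
# had traveled from the parent galaxy: projected distance / maximum age.
KPC_IN_KM = 3.086e16  # kilometers per kiloparsec
MYR_IN_S = 3.156e13   # seconds per megayear

def ejection_velocity_kms(distance_kpc, age_myr):
    """Projected distance over age, in km/s (projection makes this a lower limit)."""
    return distance_kpc * KPC_IN_KM / (age_myr * MYR_IN_S)

# Innermost region (16 kpc) with the oldest allowed age (10 Myr):
v = ejection_velocity_kms(16, 10)
print(round(v))  # ~1600 km/s, far above typical group dispersions of ~200 km/s
```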
Thus, these regions are similar to extragalactic H{\sc ii} regions found in recent or ongoing mergers \citep[e.g.][]{Mendes01,deMello12,Olave15}. \cite{Torres12} found similar objects around the system NGC~2782, an ongoing merger which shows optical and H{\sc i} tidal tails containing several H{\sc ii} regions. These objects have ages of $\sim$1 to 11 Myr, masses ranging from 0.8 to 4 $\times$ 10$^{4}$ M$_\odot$ and solar metallicities, properties similar to those of the regions found in the present work. Another example of star formation outside galaxies is provided by the intra-cluster compact H{\sc ii} regions detected in the Virgo core, close to NGC 4388 \citep{Gerhard02}. These objects have ages of $\sim$3 Myr, masses of the order of 4$\times$10$^2$ M$_\odot$ and metallicities of 0.25 solar. What makes NGC~2865 a special target for searches of newly-formed star-forming objects is its evolutionary stage. This galaxy is a merger relic, an elliptical galaxy with an r$^{1/4}$ profile and low surface brightness tidal tails. Its last major merger happened $\sim$ 4 Gyr ago \citep{Schiminovich95}. Nevertheless, NGC~2865, in such an advanced stage of evolution, proved to contain star-forming regions with properties very similar to those found in less evolved systems. \subsection{The faint-continuum sources in low N$_{HI}$ regions} The regions measured in this work have weak emission in the far- and near-ultraviolet (log(FUV) $\sim$ 37 and log(NUV) $\sim$ 36, in erg s$^{-1}$ \AA$^{-1}$) and masses of log(M$_\star$/M$_\odot$) $\sim$ 6. Five of the six regions have low fluxes in the g'-band, M$_{g'}$ $\geq$ $-$15.76, and one of them is not seen at all in the Gemini/GMOS images (which sets a lower limit of 21 mag in g' for its magnitude). Regions with such faint optical counterparts can only be found in blind searches like those done using the MSIS technique.
\cite{Maybhate07} and \cite{Mullan13} reported that the threshold $N_{HI}$ value to form stars in tidal debris of interacting galaxies is log($N_{HI}$) $\approx$ 20.6 cm$^{-2}$, which is considered a necessary but not a sufficient condition to generate clusters. In the case of NGC~2865, the H{\sc i} column density, $\sim$ 3.8 $\times$ 10$^{19}$ cm$^{-2}$, is one order of magnitude lower than this threshold, although it is possible that the real H{\sc i} column densities are underestimated due to the large size of the beam \citep{Schiminovich95}. The star formation rate in the southeast H{\sc i} cloud where the regions of NGC~2865 are located is 2.6 $\times$ 10$^{-3}$ M$_\odot$ yr$^{-1}$ (UV14), which is similar to values observed in other systems such as the Leo ring and NGC~4262 \citep{Thilker09,Bettoni10}. The mechanisms that trigger and quench star formation in such low gas-density environments are still a mystery. However, what can be said is that star formation does happen outside galaxies, in low H{\sc i} column density regions, in different systems, including around at least one merger relic, NGC~2865. \subsection{Are H{\sc i} tidal debris and tails nurseries of future globular clusters?} NGC~2865 is a shell galaxy \citep{Malin83} with a peculiar morphology, embedded in a ring-shaped H{\sc i} tidal tail \citep[see Fig. \ref{fig:image} and also][]{Schiminovich95}. These features all point to a wet merger formation scenario for the galaxy. Using stellar spectroscopy and {\it UBV} images, \cite{Schiminovich95} proposed that a possible major merger event happened between 1 and 4 Gyr ago. \cite{Rampazzo07} found a young stellar sub-population in this galaxy with an age of 1.8 $\pm$ 0.5 Gyr. This finding is in agreement with the results obtained by \cite{Hau99}, who derived burst ages from population synthesis of 0.4 - 1.7 Gyr, indicating the presence of a younger stellar population in the core of NGC~2865.
These authors also derived a much younger age for the shells of this galaxy. \cite{Salinas15} and \cite{Sikkema06} found a very blue population of globular clusters (GCs) with a color distribution peaking at ({\it V - I}) = 0.7. This suggests the presence of stellar populations with ages ranging from 0.5 to 1 Gyr, similar to those found by \cite{Hau99} in the central regions of NGC~2865. Moreover, \cite{Trancho14} found a sub-population of young metal-rich GCs of age $\sim$ 1.8 $\pm$ 0.8 Gyr. In this paper we found a much younger sub-population, formed by emission-line regions, with an age $<$ 10 Myr (as indicated by the H$\alpha$-line emission). All these young regions are located in the H{\sc i} tidal tails and they could be the result of a more recent burst of star formation. In UV14, we estimated masses for these regions between 4 $\times$ 10$^4$ and 5 $\times$ 10$^6$ M$_\odot$. Similar values were found by \cite{Maraston04} for the mass of a young cluster in the merger remnant NGC~7252 and by \cite{Goudfrooij01} for the masses of clusters around the peculiar central galaxy of the Fornax cluster, NGC~1316, which is also considered a merger remnant. In conclusion, NGC~2865 contains at least three generations of star clusters; the youngest generation (embedded in the H{\sc ii} regions) is the object of study of this paper. We do not exclude the possibility that these young H{\sc ii} regions could be the progenitors of the intermediate-age or the red globular clusters that will be part of the halo of the future relaxed elliptical galaxy. \section{Conclusions} We have confirmed the tidal origin of six H{\sc ii} regions in the immediate vicinity of NGC~2865, previously detected using the MSIS technique. Our results show that although NGC~2865 underwent a major merger event $\sim$ 4 Gyr ago, stars are still being formed in its H{\sc i} tidal tail, outside the main body of the galaxy.
The H{\sc ii} regions around NGC~2865 display high oxygen abundances of 12+log(O/H)$\sim$8.6. This suggests that they were formed from pre-processed, enriched material from the parent galaxy which was stripped into the intergalactic medium during a merger event. The objects coincide with a gaseous tail with a projected H{\sc i} density of N$_{HI}$ of a few $\times$ 10$^{19}$ cm$^{-2}$, which is one order of magnitude lower than the critical H{\sc i} density for star formation given by previous studies. The young regions ($<$ 10 Myr) found in this work constitute a third, youngest population, in addition to the two populations of star clusters already identified in NGC~2865: i) a much older generation of globular clusters, and ii) a secondary population with ages ranging from 0.5 to 1.8 Gyr, as found by \cite{Sikkema06} and \cite{Trancho14}. The fate of these young sources is unclear: they may be re-accreted onto the parent galaxy, dissolve, or merge to form massive (globular) star clusters. Indeed, given the masses and the location of the H{\sc ii} regions, we cannot exclude that these young star-forming regions are potential precursors of globular clusters that will be part of the halo of NGC~2865. The latter scenario is consistent with the results obtained by \cite{Bournaud08}, who used high-resolution simulations of galaxy mergers. These authors found that super star clusters formed in mergers are likely the progenitors of globular clusters. In this sense, our observations are in agreement with the predictions derived from simulations. Finally, in this paper we verify that the MSIS technique is a powerful tool to detect faint emission-line objects which are not always detected in broad-band imaging. This technique may become especially useful in blind searches for emission-line objects to be done with the next generation of extremely large telescopes.
\section*{Acknowledgments} The authors would like to thank the anonymous referee for the thoughtful comments which improved the clarity of this paper. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina), and Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o (Brazil). F.U.-V. acknowledges the financial support of the Chilean agency Conicyt $+$ PAI/Concurso nacional apoyo al retorno de investigadores/as desde el extranjero, convocatoria 2014, under contract 82140065. ST-F acknowledges the financial support of Direcci\'on de Investigaci\'on y Desarrollo de la ULS, through a project DIULS Regular, under contract PR16143. CMdO acknowledges support from FAPESP (grants 2009/54202-8, 2016/17119-9) and CNPq (grant 312333/2014-5). STF and CMdO kindly thank CONICYT for funding provided through the PAI MEC program for a visit of CMdO to the U. of La Serena.
\section{Introduction}\label{sec:intro} Inspired by biological neural networks, the theory of artificial neural networks has largely focused on pointwise (or ``local'') nonlinear layers \citep{rosenblatt1958perceptron, cybenko1989approximation}, in which the same function $\sigma \colon \mathbb{R} \to \mathbb{R}$ is applied to each coordinate independently: \begin{equation}\label{eq:ptwise-act} \mathbb{R}^n \to \mathbb{R}^n, \qquad v = (v_1 \ , \ \dots \ , \ v_n) \ \mapsto \ (\sigma(v_1) \ , \ \sigma(v_2) \ , \ \dots \ , \ \sigma(v_n)). \end{equation} In networks with pointwise nonlinearities, the standard basis vectors in $\mathbb{R}^n$ can be interpreted as ``neurons'' and the nonlinearity as a ``neuron activation.'' Research has generally focused on finding functions $\sigma$ which lead to more stable training, have less sensitivity to initialization, or are better adapted to certain applications \citep{ramachandran2017searching, misra2019mish, milletari2018mean, clevert2015fast, klambauer2017self}. Many $\sigma$ have been considered, including sigmoid, ReLU, arctangent, ELU, Swish, and others. However, by setting aside the biological metaphor, it is possible to consider a much broader class of nonlinearities, which are not necessarily pointwise, but instead depend simultaneously on many coordinates. Neural networks using such nonlinearities may yield expressive function classes with different advantages. One example is radial basis networks \citep{broomhead1988radial}, which contain nonlinearities of the form $\| v - c \|$, which depend on all coordinates of $v$. However, each coordinate output is still independent. In this paper, we introduce \emph{radial} neural networks which employ non-pointwise nonlinearities called \emph{radial rescaling} activations. Freedom from the pointwise assumption allows us to design activation functions that maximize symmetry in the parameter space of the neural network. 
Such networks enjoy several provable properties including high model compressibility, symmetry in optimization, and universal approximation. These activations are defined by rescaling each vector by a scalar that depends only on the norm of the vector: \begin{equation}\label{eq:radial-intro} \rho: \mathbb{R}^n \to \mathbb{R}^n, \qquad v \ \mapsto \ \actone(|v|) v, \end{equation} where $\actone$ is a scalar-valued function of the norm. Whereas in the pointwise setting, only the linear layers mix information between different components of the latent features, for radial rescaling, all coordinates of the activation output vector are affected by all coordinates of the activation input vector. The inherent geometric symmetry of radial rescalings makes them particularly useful for designing equivariant neural networks \citep{weiler_general_2019, sabour2017dynamic, weiler20183d, weiler2018learning}. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figs/pointwise-radial.pdf} \caption{(Left) Pointwise activations distinguish a specific basis of each hidden layer and treat each coordinate independently, see \eqref{eq:ptwise-act}. (Right) Radial rescaling activations rescale each feature vector by a function of the norm, see \eqref{eq:radial-intro}. } \label{fig:ptwise-vs-radial} \end{figure} In our first set of main results, we prove that radial neural networks are in fact {\it universal approximators}. Specifically, we demonstrate that any asymptotically affine function can be approximated with a radial neural network, suggesting potentially good extrapolation behavior. Moreover, this approximation can be done with bounded width. Our approach to proving these results differs significantly from standard techniques used in the case of pointwise activations due to the fact that coordinates cannot be treated independently when dealing with non-pointwise activations. 
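Concretely, the radial rescaling map of Equation \ref{eq:radial-intro} can be implemented in a few lines; a minimal NumPy sketch (the names are ours), using the squashing profile $h(r) = r^2/(r^2+1)$ as the rescaling function:

```python
import numpy as np

def radial_rescale(v, h):
    """Radial rescaling activation: v -> (h(|v|)/|v|) * v, with 0 -> 0.

    Unlike a pointwise activation, every output coordinate depends on all
    input coordinates through the norm |v|.
    """
    r = np.linalg.norm(v)
    if r == 0.0:
        return np.zeros_like(v)
    return (h(r) / r) * v

# Squashing profile h(r) = r^2 / (r^2 + 1).
squash = lambda r: r**2 / (r**2 + 1)

v = np.array([3.0, 4.0])         # |v| = 5
out = radial_rescale(v, squash)  # same direction as v, norm h(5) = 25/26
```

Note that the output preserves the direction of the input, which is the symmetry property exploited throughout the paper.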
In our second set of main results, we exploit parameter space symmetries of radial neural networks to achieve {\it model compression}. Using the fact that radial rescaling activations commute with orthogonal transformations, we develop a practical algorithm to systematically factor out orthogonal symmetries via iterated QR decompositions. This leads to another radial neural network with fewer neurons in each hidden layer. The resulting model compression algorithm is {\it lossless}: the compressed network and the original network both have the same value of the loss function on any batch of training data. Furthermore, we prove that the loss of the compressed model after one step of gradient descent is equal to the loss of the original model after one step of \emph{projected gradient descent}. As explained below, projected gradient descent involves zeroing out certain parameter values after each step of gradient descent. Although training the original network may result in a lower loss function after fewer epochs, in many cases the compressed network takes less time per epoch to train and is faster in reaching a local minimum. To summarize, our main contributions are: \begin{itemize} \item A formalization of radial neural networks, a new class of neural networks; \item Two universal approximations results for radial neural networks: a) approximation of asymptotically affine functions, and b) bounded width approximation; \item An implementation of a lossless model compression algorithm for radial neural networks; \item A theorem providing the precise relationship between gradient descent optimization of the original and compressed networks. \end{itemize} \section{Related work}\label{sec:related-work} {\bf Radial rescaling activations.} Radial rescaling functions have the symmetry property of preserving vector directions, and hence exhibit rotation equivariance. 
Consequently, examples of such functions, such as the squashing nonlinearity and Norm-ReLU, feature in the study of rotationally equivariant neural networks \citep{weiler_general_2019, sabour2017dynamic, weiler20183d, weiler2018learning, jeffreys_kahler_2021}. However, previous works apply the activation only along the channel dimension, and consider the orthogonal group $O(n)$ only for $n=2,3$. In contrast, we consider a radial rescaling activation across the entire hidden layer, and $O(n)$-equivariance where $n$ is the hidden layer dimension. Our constructions echo the vector neurons formalism \citep{deng_vector_2021}, in which the output of a nonlinearity is a vector rather than a scalar. For radial basis networks, each hidden neuron is a radial nonlinear function of the shifted input vector, but the outputs are independent, whereas for radial rescaling functions, the outputs are also tied together \citep{broomhead1988radial}. {\bf Universal approximation.} Neural networks of arbitrary width and sigmoid activations have long been known to be universal approximators \citep{cybenko1989approximation}. Universal approximation can also be achieved by bounded width networks with arbitrary depth \citep{lu2017expressive}. Additional work has generalized to other activation functions and neural architectures \citep{hornik1991approximation, yarotsky2022universal, ravanbakhsh2020universal, sonoda2017neural}. While most work has focused on compact domains, some recent work also considers non-compact domains \citep{kidger2020universal, wang2022approximation}. The techniques used for pointwise activation functions generalize to radial basis networks since the outputs of each RBF are independent \citep{park1991universal}, but do not easily generalize to the radial rescaling activations considered here, because all coordinates of the activation output vector are affected by all coordinates of the activation input vector. 
As a consequence, individual radial neural network approximators of two different functions cannot be easily combined to give an approximator of the sum of the functions. {\bf Groups and symmetry.} Appearances of symmetry in machine learning have generally focused on symmetric input and output spaces. Most prominently, equivariant neural networks incorporate symmetry as an inductive bias and feature weight-sharing constraints based on equivariance with respect to various symmetry groups. Examples of equivariant architectures include $G$-convolution, steerable CNN, and Clebsch-Gordan networks \citep{cohen2019gauge, weiler_general_2019, cohen2016group, chidester2018rotation, kondor_generalization_2018, bao2019equivariant, worrall2017harmonic, cohen2016steerable, weiler2018learning, dieleman2016cyclic, lang2020wigner, ravanbakhsh2017equivariance}. By contrast, our approach to radial neural networks does not depend on symmetries of the input domain, output space, or feedforward mapping. Instead, we exploit parameter space symmetries and thus obtain more general results that apply to domains with no apparent symmetry. As such, our use of the orthogonal invariance of radial neural networks is parallel to the ``non-negative homogeneity'' (or ``positive scaling invariance'') of the pointwise ReLU activation function \citep{armenta_neural_2021, dinh_sharp_2017, meng_g-sgd_2019, neyshabur_path-sgd_2015}. {\bf Model compression.} A major goal in machine learning is to find methods to reduce the number of trainable parameters, decrease memory usage, or accelerate inference and training \citep{cheng2017survey, zhang2018systematic}. Our approach toward this goal differs significantly from most existing methods in that it is based on the inherent symmetry of network parameter spaces. One prior method is \emph{weight pruning}, which removes redundant or small weights from a network with little loss in accuracy \citep{han2015deep, blalock2020state, karnin1990simple}.
Pruning can be done during training \citep{frankle2018lottery} or at initialization \citep{ lee2019signal, wang2020picking}. \emph{Gradient-based pruning} identifies low saliency weights by estimating the increase in loss resulting from their removal \citep{ lecun1990optimal, hassibi1993second, dong2017learning, molchanov2016pruning}. A complementary approach is \emph{quantization}, which decreases the bit depth of weights \citep{wu2016quantized, howard2017mobilenets, gong2014compressing}. \textit{Knowledge distillation} identifies a small model mimicking the performance of a larger model or ensemble of models \citep{bucilua2006model,hinton2015distilling, ba2013deep}. \textit{Matrix Factorization} methods replace fully connected layers with lower rank or sparse factored tensors \citep{cheng2015fast, cheng2015exploration, tai2015convolutional, lebedev2014speeding, rigamonti2013learning, lu2017fully} and can often be applied before training. Our method involves a type of matrix factorization based on the QR decomposition; however, rather than aim for a rank reduction of linear layers, we leverage this decomposition to reduce hidden widths via change-of-basis operations on the hidden representations. Close to our method are lossless compression methods which remove stable neurons in ReLU networks \citep{ serra2021scaling, serra2020lossless} or exploit permutation parameter space symmetry to remove redundant neurons \citep{sourek2020lossless}; our compression instead follows from the symmetries of the radial rescaling activation. Finally, the model compression results of \citep{jeffreys_kahler_2021}, while conceptually similar to ours, are weaker, as (1) the unitary group action is on disjoint layers instead of iteratively moving through all layers, and (2) the results are only stated for a version of the squashing nonlinearity. \section{Radial neural networks}\label{sec:rad-nns} In this section, we define radial rescaling functions and radial neural networks. 
Let $h : \mathbb{R} \to \mathbb{R}$ be a function. For any $n \geq 1$, set: \[ h^{(n)} : \mathbb{R}^n \to \mathbb{R}^n \qquad \qquad h^{(n)}(v) = h(\lvert v \rvert) \frac{v}{|v|} \] for $v \neq 0$, and $h^{(n)}(0) = 0$. A function $\rho : \mathbb{R}^n \to \mathbb{R}^n$ is called a {\it radial rescaling} function if $\rho = h^{(n)}$ for some piecewise differentiable $h : \mathbb{R} \to \mathbb{R}$. Hence, $\rho$ sends each input vector to a scalar multiple of itself, and that scalar depends only on the norm of the vector\footnote{A function $ \mathbb{R}^n \to \mathbb{R}$ that depends only on the norm of a vector is known as a {\it radial} function. Radial rescaling functions rescale each vector according to the radial function $v \mapsto \lambda(|v|) = \frac{h(|v|)}{|v|}$. This explains the connection to Equation \ref{eq:radial-intro}.}. It is easy to show that radial rescaling functions commute with orthogonal transformations. \begin{example}\label{ex:radialact} (1) Step-ReLU, where $h(r) = r$ if $r \geq 1$ and $0$ otherwise. In this case, the radial rescaling function is given by \begin{align}\label{eqn:step-relu} \rho: \mathbb{R}^n & \rightarrow \mathbb{R}^n, \qquad v \mapsto \begin{cases} v \ \text{\rm if $|v| \geq 1$} \\ 0 \ \text{\rm if $|v| <1$} \end{cases} \end{align} (2) The squashing function, where $h(r) = r^2/(r^2 + 1)$. (3) Shifted ReLU, where $ h(r) = \max(0, r - b )$ for $r >0$ and $b$ is a real number. See Figure \ref{fig:radial_act}. We refer to \citep{weiler_general_2019} and the references therein for more examples and discussion of radial functions. 
\end{example} \begin{figure} \centering \input{figs/radial_activation.tex} \caption{{Examples of different radial rescaling functions in $\mathbb{R}^1$, see \autoref{ex:radialact}.}} \label{fig:radial_act} \end{figure} A {\it radial neural network} with $L$ layers consists of a positive integer $n_i$ indicating the width of each layer $i = 0, 1, \dots, L$; the trainable parameters, comprising a matrix $W_i \in \mathbb{R}^{n_i \times n_{i-1}}$ of weights and a bias vector $b_i \in \mathbb{R}^{n_i}$ for each $i = 1, \dots, L$; and a radial rescaling function $\rho_i :\mathbb{R}^{n_i} \to \mathbb{R}^{n_i}$ for each $i = 1, \dots, L$. We refer to the tuple $\mathbf{n} = (n_0, n_1, \dots, n_L)$ as the {\it widths vector} of the neural network. The hidden widths vector is $\mathbf{n}^\text{\rm hid} = (n_1, n_2, \dots, n_{L-1})$. The feedforward function $F : \mathbb{R}^{n_0} \to \mathbb{R}^{n_L}$ of a radial neural network is defined in the usual way as an iterated composition of affine maps and activations. Explicitly, set $F_0 = \text{\rm id}_{\mathbb{R}^{n_0}}$ and recursively define the partial feedforward functions: \[ F_i : \mathbb{R}^{n_0} \to \mathbb{R}^{n_i}, \qquad x\mapsto \rho_i\left( W_i \circ F_{i-1}(x) + b_i\right) \] for $i = 1, \dots, L$. Then the feedforward function is $F = F_L$. \begin{rmk} If $b_i = 0$ for all $i$, then the feedforward function takes the form $F(x) = W \left( \mu(x) x\right)$ where $\mu : \mathbb{R}^n \to \mathbb{R}$ is a scalar-valued function and $W = W_L W_{L-1} \cdots W_1 \in \mathbb{R}^{n_L \times n_0}$ is the product of the weight matrices. If any of the biases are non-zero, then the feedforward function lacks such a simple form. \end{rmk} \section{Universal Approximation}\label{sec:UA} In this section, we consider two universal approximation results. The first approximates asymptotically affine functions with a network of unbounded width. The second generalizes to bounded width networks.
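Before turning to the approximation results, note that the recursive definition of the feedforward function translates directly into code; a minimal NumPy sketch (names are ours), using Step-ReLU (Equation \ref{eqn:step-relu}) as each $\rho_i$:

```python
import numpy as np

def step_relu(v):
    """Step-ReLU radial rescaling: identity if |v| >= 1, zero otherwise."""
    return v if np.linalg.norm(v) >= 1.0 else np.zeros_like(v)

def feedforward(x, weights, biases):
    """F_L(x), where F_i(x) = rho_i(W_i F_{i-1}(x) + b_i) and F_0 = id."""
    z = x
    for W, b in zip(weights, biases):
        z = step_relu(W @ z + b)
    return z

# A toy radial network with widths vector n = (2, 3, 1).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
biases = [rng.standard_normal(3), rng.standard_normal(1)]
y = feedforward(np.array([1.0, -0.5]), weights, biases)  # y has shape (1,)
```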
Proofs appear in Appendix~\ref{app:UA}. Throughout, $B_r(c) = \{ x \in \mathbb{R}^n : |x-c| <r \}$ denotes the $r$-ball around a point $c$, and an affine map $\mathbb{R}^n \to \mathbb{R}^m$ is one of the form $L(x)= Ax + b$ for a matrix $A \in \mathbb{R}^{m \times n}$ and $b\in \mathbb{R}^m$. \subsection{Approximation of asymptotically affine functions}\label{subsec:asymp-lin-approx} A continuous function $f : \mathbb{R}^n \to \mathbb{R}^m$ is said to be {\it asymptotically affine} if there exists an affine map $L : \mathbb{R}^n \to \mathbb{R}^m$ such that, for every $\epsilon >0$, there is a compact subset $K$ of $\mathbb{R}^n$ such that $ |L(x) - f(x)| < \epsilon$ for all $x \in \mathbb{R}^n \setminus K$. In particular, continuous functions with compact support are asymptotically affine. The continuity of $f$ and compactness of $K$ imply that, for any $\epsilon >0$, there exist $c_1, \dots, c_N \in K$ and $r_1, \dots, r_N \in (0,1)$ such that, first, the union of the balls $B_{r_i}(c_i)$ covers $K$ and, second, for all $i$, we have $f\left(B_{r_i} (c_i) \cap K \right) \subseteq B_{\epsilon}(f(c_i))$. Let $N(f,K, \epsilon)$ be the minimal choice of $N$. In many cases, the constant $N(f,K,\epsilon)$ can be bounded explicitly\footnote{For example, if $K$ is the unit cube in $\mathbb{R}^n$ and $f$ is Lipschitz continuous with Lipschitz constant $R$, then $N(f,K,\epsilon) \leq \Bigl\lceil\frac{R \sqrt{n}}{2\epsilon}\Bigr\rceil^n$.}. \begin{restatable}[Universal approximation]{theorem}{thmUAasymplin} \label{thm:UA-asymp-lin} Let $f : \mathbb{R}^n \to \mathbb{R}^m$ be an asymptotically affine function. For any $\epsilon >0$, there exists a compact set $K \subset \mathbb{R}^n$ and a function $F : \mathbb{R}^n \to \mathbb{R}^m$ such that:\vspace{-6pt} \begin{enumerate}[itemsep=-4pt] \item $F$ is the feedforward function of a radial neural network with $N = N(f,K, \epsilon)$ layers whose hidden widths are $(n+1, n+2, \dots, n+ N)$.
\item For any $x \in \mathbb{R}^n$, we have $|F(x) - f(x)| < \epsilon$. \end{enumerate} \end{restatable} We note that the approximation in Theorem~\ref{thm:UA-asymp-lin} is valid on all of $\mathbb{R}^n$. To give an idea of the proof, first fix $c_1, \dots, c_N \in K$ and $r_1, \dots, r_N \in (0,1)$ as above. Let $e_1, \dots, e_N$ be orthonormal basis vectors extending $\mathbb{R}^n$ to $\mathbb{R}^{n+N}$. For $i = 1, \dots, N$ define the following affine maps: \begin{align*} T_i : \mathbb{R}^{n+i-1} &\to \mathbb{R}^{n + i} \qquad & S_i : \mathbb{R}^{n+i} & \to \mathbb{R}^{n + i} \\ z &\mapsto z - c_i + h_i e_i \qquad & z &\mapsto z - (1 + h_i^{-1})\langle e_i, z \rangle e_i + c_i + e_i \end{align*} where $h_i = \sqrt{1 - r_i^2}$ and $\langle e_i, z \rangle$ is the coefficient of $e_i$ in $z$. Setting $\rho_i$ to be Step-ReLU (Equation \ref{eqn:step-relu}) on $\mathbb{R}^{n +i}$, these maps are chosen so that the composition $S_i \circ \rho_i \circ T_i$ maps the points in $B_{r_i}(c_i)$ to $c_i + e_i$, while keeping points outside this ball the same. We now describe a radial neural network with widths $(n, n+1, \dots, n+N, m)$ whose feedforward function approximates $f$. For $i = 1, \dots, N$ the affine map from layer $i-1$ to layer $i$ is given by $z \mapsto T_i \circ S_{i-1} (z)$, with $S_0 = \text{\rm id}_{\mathbb{R}^n}$. The activation at each hidden layer is {Step-ReLU}. Let $L$ be the affine map such that $|L-f| <\epsilon$ on $\mathbb{R}^n \setminus K$. The affine map from layer $N$ to the output layer is $\Phi \circ S_N$ where $\Phi : \mathbb{R}^{n+N} \to \mathbb{R}^m$ is the unique affine map determined by $x \mapsto L(x)$ if $x \in \mathbb{R}^n$, and $e_i \mapsto f(c_i)- L(c_i)$. This construction is illustrated in Figure \ref{fig:UA-asymp-lin}. \begin{figure} \centering \input{figs/proof-idea2.tex} \caption{ Two layers of the radial neural network used in the proof of \autoref{thm:UA-asymp-lin}.
(Left) The compact set $K$ is covered with open balls. (Middle) Points close to $c_2$ (green ball) are mapped to $c_2 + e_2$, all other points are kept the same. (Right) In the final layer, $c_2+e_2$ is mapped to $f(c_2)$. } \label{fig:UA-asymp-lin} \end{figure} \subsection{Bounded width approximation}\label{subsec:bdd-width-approx} We now turn our attention to a bounded width universal approximation result. \begin{restatable}{theorem}{thmUAnplusmplusone} \label{thm:UA-n+m+1} Let $f : \mathbb{R}^n \to \mathbb{R}^m$ be an asymptotically affine function. For any $\epsilon >0$, there exists a compact set $K \subset \mathbb{R}^n$ and a function $F : \mathbb{R}^n \to \mathbb{R}^m$ such that:\vspace{-6pt} \begin{enumerate}[itemsep=-4pt] \item $F$ is the feedforward function of a radial neural network with $N = N(f,K, \epsilon)$ hidden layers whose widths are all $n+m+1$. \item For any $x \in \mathbb{R}^n$, we have $|F(x) - f(x) | < \epsilon$. \vspace{-7pt} \end{enumerate} \end{restatable} The proof, which is more involved than that of Theorem \ref{thm:UA-asymp-lin}, relies on using orthogonal dimensions to represent the domain and the range of $f$, together with an indicator dimension to distinguish the two. We regard points in $\mathbb{R}^{n+m+1}$ as triples $(x,y,\theta)$ where $x \in \mathbb{R}^n$, $y \in \mathbb{R}^m$ and $\theta \in \mathbb{R}$. The proof of Theorem \ref{thm:UA-n+m+1} parallels that of Theorem~\ref{thm:UA-asymp-lin}, but instead of mapping points in $B_{r_i}(c_i)$ to $c_i + e_i$, we map the points in $B_{r_i}((c_i,0,0))$ to $(0,\frac{f(c_i)- L(0)}{s},1)$, where $s$ is chosen such that different balls do not interfere. The final layer then uses an affine map $(x,y,\theta) \mapsto L(x) + sy$, which takes $(x,0,0)$ to $L(x)$, and $(0,\frac{f(c_i)-L(0)}{s},1)$ to $f(c_i)$. We remark on several additional results; see Appendix~\ref{app:UA} for full statements and proofs. 
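Both constructions hinge on the single-layer mechanism $S_i \circ \rho_i \circ T_i$ that collapses one covering ball onto a marker point. The following numpy sketch (ours, for illustration only) checks this mechanically; it assumes Step-ReLU acts on $\mathbb{R}^{n+1}$ by $z \mapsto z$ if $|z| \geq 1$ and $z \mapsto 0$ otherwise, which is the behavior the construction requires.

```python
import numpy as np

def ball_collapse_layer(c, r):
    """The composition S o rho o T from the proof sketch: points of the open
    ball B_r(c) in R^n are sent to (c, 1) = c + e in R^{n+1}; all other
    points z are fixed (embedded as (z, 0))."""
    n = len(c)
    h = np.sqrt(1.0 - r ** 2)              # h_i = sqrt(1 - r_i^2), r in (0, 1)
    e = np.zeros(n + 1)
    e[-1] = 1.0                            # the new orthonormal direction e_i

    def T(z):                              # z |-> z - c + h e
        return np.append(z - c, h)

    def step_relu(w):                      # assumed form of Step-ReLU
        return w if np.linalg.norm(w) >= 1.0 else np.zeros_like(w)

    def S(w):                              # w |-> w - (1 + 1/h)<e, w> e + c + e
        return w - (1.0 + 1.0 / h) * w[-1] * e + np.append(c, 0.0) + e

    return lambda z: S(step_relu(T(z)))

c = np.array([0.3, -0.2])
layer = ball_collapse_layer(c, r=0.5)
inside = np.array([0.45, -0.1])            # |inside - c| < 0.5
outside = np.array([1.0, 0.4])             # |outside - c| > 0.5
```

A point of $B_r(c)$ lands inside the unit ball after $T$ (since $|z-c|^2 + h^2 < 1$), is annihilated by Step-ReLU, and is sent to $c + e$ by $S$; a point outside survives Step-ReLU and $S$ undoes $T$ exactly.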
The bound of Theorem \ref{thm:UA-n+m+1} can be strengthened to $\max(n,m) + 1$ in the case of functions $f : K \to \mathbb{R}^m$ defined on a compact domain $K \subset \mathbb{R}^n$ (i.e., ignoring asymptotic behavior). Furthermore, with more layers, it is possible to reduce that bound to $\max(n,m)$. \section{Model compression}\label{sec:mod-comp} In this section, we prove a model compression result. Specifically, we provide an algorithm which, given any radial neural network, computes a different radial neural network with smaller widths. The resulting compressed network has the same feedforward function as the original network, and hence the same value of the loss function on any batch of training data. In other words, our model compression procedure is {\it lossless}. Although our algorithm is practical and explicit, it reflects more conceptual phenomena, namely, a change-of-basis action on network parameter spaces (Section \ref{subsec:paramspace}). \subsection{The parameter space}\label{subsec:paramspace} Suppose a fully connected network has $L$ layers and widths given by the tuple $\mathbf{n} = (n_0, n_1, n_2, \dots, n_{L-1}, n_L)$. In other words, the $i$-th layer has input width $n_{i-1}$ and output width $n_i$. The parameter space is defined as the vector space of all possible choices of parameter values. Hence, it is given by the following product of vector spaces: \[ \mathsf{Param}(\mathbf{n}) = \left( \mathbb{R}^{n_1 \times n_0} \times \mathbb{R}^{n_2 \times n_1} \times \cdots \times \mathbb{R}^{n_L \times n_{L-1}} \right) \times \left(\mathbb{R}^{n_1} \times \mathbb{R}^{n_2} \times \cdots \times \mathbb{R}^{n_L} \right) \] We denote an element therein as a pair of tuples $(\mathbf{W}, \mathbf{b})$ where $\mathbf{W} = (W_i \in \mathbb{R}^{n_i \times n_{i-1}})_{i=1}^L$ are the weights and $\mathbf{b} = (b_i \in \mathbb{R}^{n_i})_{i=1}^L$ are the biases.
To describe certain symmetries of the parameter space, consider the following product of orthogonal groups, with sizes corresponding to the widths of the hidden layers: \[ O(\mathbf{n}^\text{hid}) = O(n_1) \times O(n_2) \times \cdots \times O(n_{L-1}) \] There is a change-of-basis action of $O(\mathbf{n}^\text{\rm hid})$ on the parameter space $\mathsf{Param}(\mathbf{n})$. Explicitly, the tuple of orthogonal matrices $\mathbf{Q} = (Q_i)_{i =1}^{L-1} \in O(\mathbf{n}^\text{\rm hid})$ transforms the parameter values $(\mathbf{W}, \mathbf{b}) $ to $ \mathbf{Q} \cdot \mathbf{W} := \left( Q_i W_i Q_{i-1}^{-1} \right)_{i=1}^L$ and $ \mathbf{Q} \cdot \mathbf{b} := \left( Q_i b_i\right)_{i=1}^L$, where $Q_0 = \text{\rm id}_{n_0}$ and $Q_L = \text{\rm id}_{n_L}$. We write $\mathbf{Q} \cdot (\mathbf{W}, \mathbf{b})$ for $(\mathbf{Q} \cdot \mathbf{W}, \mathbf{Q} \cdot \mathbf{b})$. \begin{comment} \begin{equation}\label{eq:GL-hidden-action-neur} \mathbf{W} \quad \mapsto \quad \mathbf{Q} \cdot \mathbf{W} := \left( Q_i W_i Q_{i-1}^{-1} \right)_{i=1}^L, \qquad \quad \mathbf{b} \quad \mapsto \quad \mathbf{Q} \cdot \mathbf{b} := \left( Q_i b_i\right)_{i=1}^L, \end{equation} \end{comment} \subsection{Model compression}\label{subsec:mod-comp} In order to state the compression result, we first define the reduced widths. Namely, the {reduction} $\mathbf{n}^{\text{\rm red}} = (n^{\rm red}_0, n^{\rm red}_1, \dots, n^{\rm red}_L)$ of a widths vector $\mathbf n$ is defined recursively by setting $n^{\rm red}_0 = n_0 $, then $n^{\rm red}_{i} = \min( n_i , n^{\rm red}_{i-1} + 1 )$ for $i = 1, \dots, L-1$, and finally $n^{\rm red}_L = n_L$. For a tuple $\boldsymbol \rho = \left(\rho_i : \mathbb{R}^{n_i} \to \mathbb{R}^{n_i} \right)_{i=1}^L$ of radial rescaling functions, we write $\boldrho^{\rm red} = \left(\rho^{\rm red}_i : \mathbb{R}^{n^{\rm red}_i} \to \mathbb{R}^{n^{\rm red}_i}\right)$ for the corresponding tuple of restrictions, which are all radial rescaling functions. 
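Since a radial rescaling function commutes with every orthogonal matrix, the change-of-basis action leaves the feedforward function unchanged. The following numpy sketch (ours) illustrates this invariance; the specific form $\rho(z) = h(|z|)\,z$ with the radial sigmoid $h(r) = 1/(1+e^{-r})$ of Section \ref{sec:exp} is an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(z):
    # radial rescaling: rho(z) = h(|z|) z with the radial sigmoid h
    # (this specific form is an assumption of the sketch)
    return z / (1.0 + np.exp(-np.linalg.norm(z)))

def feedforward(Ws, bs, x):
    for W, b in zip(Ws[:-1], bs[:-1]):
        x = rho(W @ x + b)
    return Ws[-1] @ x + bs[-1]

def random_orthogonal(k):
    return np.linalg.qr(rng.normal(size=(k, k)))[0]

widths = [2, 5, 4, 3]                       # (n_0, n_1, n_2, n_3)
Ws = [rng.normal(size=(m, k)) for k, m in zip(widths, widths[1:])]
bs = [rng.normal(size=m) for m in widths[1:]]

# Q = (Q_1, ..., Q_{L-1}) acts on hidden layers; Q_0 and Q_L are identities
Qf = [np.eye(widths[0])] + [random_orthogonal(k) for k in widths[1:-1]] \
     + [np.eye(widths[-1])]
Ws_Q = [Qf[i + 1] @ W @ Qf[i].T for i, W in enumerate(Ws)]   # Q_i W_i Q_{i-1}^{-1}
bs_Q = [Qf[i + 1] @ b for i, b in enumerate(bs)]             # Q_i b_i

x = rng.normal(size=widths[0])
```

Running `feedforward` on $(\mathbf{W}, \mathbf{b})$ and on $\mathbf{Q}\cdot(\mathbf{W}, \mathbf{b})$ returns the same output up to machine precision, since $\rho(Qz) = Q\rho(z)$ for orthogonal $Q$.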
The following result relies on Algorithm \ref{alg:QR-mod-comp} below. \begin{restatable}{theorem}{thmModelCompression} \label{thm:mod-comp} Let $(\mathbf{W}, \mathbf{b}, \boldsymbol \rho )$ be a radial neural network with widths $\mathbf{n}$. Let $\mathbf{W}^\text{\rm red}$ and $\mathbf{b}^\text{\rm red}$ be the weights and biases of the compressed network produced by Algorithm \ref{alg:QR-mod-comp}. The feedforward function of the original network $(\mathbf{W}, \mathbf{b}, \boldsymbol \rho )$ coincides with that of the compressed network $(\mathbf{W}^\text{\rm red}, \mathbf{b}^\text{\rm red}, \boldrho^{\rm red} )$. \end{restatable} \begin{algorithm}[H]\label{alg:QR-mod-comp} \SetKwFunction{QRdecompCom}{QR-decomp} \SetKwFunction{QRdecompRed}{QR-decomp} \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \SetKwInOut{Initialize}{initialize} \DontPrintSemicolon \Input{$\mathbf{W}, \mathbf{b} \in \mathsf{Param}(\mathbf{n})$} \Output{$\mathbf{Q} \in O(\mathbf{n}^\text{\rm hid})$ and $\mathbf{W}^\text{\rm red}, \mathbf{b}^\text{\rm red} \in \mathsf{Param}(\mathbf{n}^{\text{\rm red}})$} \BlankLine $\mathbf{Q}, \mathbf{W}^\text{\rm red}, \mathbf{b}^\text{\rm red} \gets [\ ], [\ ], [\ ]$ \tcp*[r]{initialize output lists} $A_1 \gets \begin{bmatrix} b_1 & W_1 \end{bmatrix}$\: \tcp*[r]{matrix of size $n_1 \times (n_0 + 1)$} \For(\tcp*[r]{iterate through layers \vspace{-\baselineskip}}) {$ i \leftarrow 1$ \KwTo $L-1$ }{ $Q_i, R_i \gets $ \QRdecompCom{$A_i$ \ , \ \text{\tt mode = `complete'}} \tcp*[r]{$A_i = Q_i \text{\rm Inc}_i R_i$} Append $Q_i$ to $\mathbf{Q}$\; Append first column of $R_i$ to $\mathbf{b}^\text{\rm red}$ \tcp*[r]{reduced bias for layer $i$} Append remainder of $R_i$ to $\mathbf{W}^\text{\rm red}$ \tcp*[r]{reduced weights for layer $i$} Set $A_{i+1} \gets \begin{bmatrix} b_{i+1} & W_{i+1} Q_i \text{\rm Inc}_i \end{bmatrix}$ \tcp*[r]{matrix of size {\footnotesize $n_{i+1} \times (n^{\rm red}_{i} + 1)$} } } Append the first column of $A_L$ to 
$\mathbf{b}^\text{\rm red}$ \tcp*[r]{reduced bias for last layer} Append the remainder of $A_L$ to $\mathbf{W}^\text{\rm red}$ \tcp*[r]{reduced weights for last layer} \BlankLine \KwRet $\mathbf{Q}$, $\mathbf{W}^\text{\rm red}$, $\mathbf{b}^\text{\rm red}$ \caption{QR Model Compression (\texttt{QR-compress})} \end{algorithm} We explain the notation of the algorithm. The inclusion matrix $\text{\rm Inc}_i \in \mathbb{R}^{n_i \times n^{\rm red}_i}$ has ones along the main diagonal and zeros elsewhere. The method \texttt{QR-decomp} with \texttt{mode = `complete'} computes the complete QR decomposition of the $n_i \times (1 + n^{\rm red}_{i-1})$ matrix $A_i$ as $Q_i \text{\rm Inc}_i R_i$ where $Q_i \in O(n_i)$ and $R_i$ is upper-triangular of size $n^{\rm red}_ i \times (1+n^{\rm red}_{i-1})$. The definition of $n^{\rm red}_i$ implies that either $n^{\rm red}_i = n^{\rm red}_{i-1} + 1$ or $n^{\rm red}_i = n_i$. The matrix $R_i$ is of size $n^{\rm red}_ i \times n^{\rm red}_{i}$ in the former case and of size $n_i \times (1 + n^{\rm red}_{i-1})$ in the latter case. \begin{figure}[b] \centering% \input{figs/NNquiver1}% \hfill% \input{figs/NNquiver2}% \hfill% \input{figs/NNquiver3}% \caption{Model compression in 3 steps. Layer widths can be iteratively reduced to 1 greater than the previous. The number of trainable parameters reduces from 33 to 17.} \label{fig:dim_red} \end{figure} \begin{example} Suppose the widths of a radial neural network are $ (1, 8, 16, 8, 1 )$. Then it has $\sum_{i=1}^4 (n_{i-1} +1)n_i = 305$ trainable parameters. The reduced network has widths $(1, 2, 3, 4, 1)$ and $\sum_{i=1}^4 (n^{\rm red}_{i-1} + 1)(n^{\rm red}_i) = 34$ trainable parameters. Another example appears in Figure \ref{fig:dim_red}. \end{example} We note that the tuple of matrices $\mathbf{Q}$ produced by Algorithm \ref{alg:QR-mod-comp} does not feature in the statement of Theorem \ref{thm:mod-comp}, but is important in the proof (which appears in Appendix \ref{app:mod-comp}). 
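For reference, Algorithm \ref{alg:QR-mod-comp} can be implemented directly with numpy's complete QR decomposition. The sketch below is our code, not part of the formal development; the radial rescaling form $\rho(z) = h(|z|)\,z$ with the radial sigmoid is an assumption used only to check the conclusion of Theorem \ref{thm:mod-comp} numerically.

```python
import numpy as np

def rho(z):                                 # assumed radial rescaling form
    return z / (1.0 + np.exp(-np.linalg.norm(z)))

def feedforward(Ws, bs, x):
    for W, b in zip(Ws[:-1], bs[:-1]):
        x = rho(W @ x + b)
    return Ws[-1] @ x + bs[-1]

def reduced_widths(n):
    # n^red_0 = n_0, n^red_i = min(n_i, n^red_{i-1} + 1), n^red_L = n_L
    red = [n[0]]
    for ni in n[1:-1]:
        red.append(min(ni, red[-1] + 1))
    red.append(n[-1])
    return red

def qr_compress(Ws, bs):
    """Sketch of Algorithm 1 (QR-compress)."""
    L = len(Ws)
    n = [Ws[0].shape[1]] + [W.shape[0] for W in Ws]
    red = reduced_widths(n)
    Qs, Ws_red, bs_red = [], [], []
    A = np.column_stack([bs[0], Ws[0]])             # A_1 = [b_1 | W_1]
    for i in range(L - 1):                          # layers 1, ..., L-1
        Q, R = np.linalg.qr(A, mode='complete')     # A_i = Q_i Inc_i R_i
        R_i = R[:red[i + 1], :]                     # nonzero (top) rows of R
        Qs.append(Q)
        bs_red.append(R_i[:, 0])                    # reduced bias for layer i
        Ws_red.append(R_i[:, 1:])                   # reduced weights for layer i
        # A_{i+1} = [b_{i+1} | W_{i+1} Q_i Inc_i]; Q_i Inc_i = first columns of Q_i
        A = np.column_stack([bs[i + 1], Ws[i + 1] @ Q[:, :red[i + 1]]])
    bs_red.append(A[:, 0])                          # last layer is not reduced
    Ws_red.append(A[:, 1:])
    return Qs, Ws_red, bs_red

rng = np.random.default_rng(0)
n = [1, 6, 7, 1]
Ws = [rng.normal(size=(m, k)) for k, m in zip(n, n[1:])]
bs = [rng.normal(size=m) for m in n[1:]]
Qs, Ws_red, bs_red = qr_compress(Ws, bs)
```

On widths $(1,6,7,1)$ this reproduces the reduction to $(1,2,3,1)$, and the feedforward functions of the original and compressed networks agree up to machine precision.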
Namely, an induction argument shows that the $i$-th partial feedforward function of the original and reduced models are related via the matrices $Q_i$ and $\text{\rm Inc}_i$. A crucial ingredient in the proof is that radial rescaling activations commute with orthogonal transformations. \section{Projected gradient descent}\label{subsec:proj-gd} The typical use case for model compression algorithms is to produce a smaller version of the fully trained model which can be deployed to make inference more efficient. It is also worth considering whether compression can be used to accelerate training. For example, for some compression algorithms, the compressed and full models have the same feedforward function after a step of gradient descent is applied to each, and so one can compress before training and still reach the same minimum. Unfortunately, in the context of radial neural networks, compression using Algorithm \ref{alg:QR-mod-comp} and then training does not necessarily give the same result as training and then compression (see Appendix \ref{appsubsec:ex-131} for a counterexample). However, \texttt{QR-compress} does lead to a precise mathematical relationship between optimization of the two models: the loss of the compressed model after one step of gradient descent is equivalent to the loss of (a transformed version of) the original model after one step of {projected gradient descent}. Proofs appear in Appendix \ref{app:proj-gd}. To state our results, fix a tuple of widths $\mathbf n$ and a tuple $\boldsymbol \rho = \left(\rho_i : \mathbb{R}^{n_i} \to \mathbb{R}^{n_i} \right)_{i=1}^L$ of radial rescaling functions. 
The loss function $\mathcal L : \mathsf{Param}(\mathbf{n})\rightarrow \mathbb{R}$ associated to a batch of training data $\{ (x_j, y_j)\} \subseteq \mathbb{R}^{n_0} \times \mathbb{R}^{n_L}$ is defined as taking parameter values $ (\mathbf{W} , \mathbf{b} ) $ to the sum $\sum_j \mathcal C(F ( x_j), y_j)$ where $\mathcal C : \mathbb{R}^{n_L }\times \mathbb{R}^{n_L} \to \mathbb{R}$ is a cost function on the output space, and $F = F_{(\mathbf{W}, \mathbf{b}, \boldsymbol \rho)}$ is the feedforward function of the radial neural network with parameters $(\mathbf{W}, \mathbf{b})$ and activations $\boldsymbol \rho$. Similarly, we have a loss function $\mathcal L_\text{\rm red}$ on the parameter space $\mathsf{Param}(\mathbf{n}^{\text{\rm red}})$ with reduced widths vector. For any learning rate $\eta >0$, we obtain gradient descent maps: \begin{align*} \gamma : \mathsf{Param}(\mathbf{n}) & \to \mathsf{Param}(\mathbf{n}) \qquad & \gamma_{\text{\rm red}} : \mathsf{Param}(\mathbf{n}^{\text{\rm red}}) &\to \mathsf{Param}(\mathbf{n}^{\text{\rm red}}) \\ (\mathbf{W}, \mathbf{b}) &\mapsto (\mathbf{W}, \mathbf{b}) - \eta \nabla_{(\mathbf{W}, \mathbf{b}) } \mathcal L \qquad & (\mathbf{V}, \mathbf{c} )&\mapsto (\mathbf{V}, \mathbf{c} ) - \eta \nabla_{(\mathbf{V}, \mathbf{c} )} \mathcal L_\text{\rm red} \end{align*} We will also consider, for $k \geq 0$, the $k$-fold composition $\gamma^k = \gamma \circ \gamma \circ \cdots \circ \gamma$ and similarly for $\gamma_{\text{\rm red}}$.
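The map $\gamma$ is equivariant for the change-of-basis action of Section \ref{subsec:paramspace}: the action is orthogonal on parameter space and leaves $\mathcal L$ invariant, so $\gamma(\mathbf{Q}^{-1}\cdot(\mathbf{W},\mathbf{b})) = \mathbf{Q}^{-1}\cdot\gamma(\mathbf{W},\mathbf{b})$, as used below. This can be checked with finite differences; in the sketch (our code) the form $\rho(z)=h(|z|)\,z$ with the radial sigmoid is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def rho(z):                                  # assumed radial rescaling form
    return z / (1.0 + np.exp(-np.linalg.norm(z)))

def loss(Ws, bs, data):                      # squared-error cost C
    total = 0.0
    for x, y in data:
        for W, b in zip(Ws[:-1], bs[:-1]):
            x = rho(W @ x + b)
        total += np.sum((Ws[-1] @ x + bs[-1] - y) ** 2)
    return total

def num_grad(Ws, bs, data, eps=1e-6):        # central finite differences
    grads = []
    for a in [*Ws, *bs]:
        g = np.zeros_like(a)
        it = np.nditer(a, flags=['multi_index'])
        for _ in it:
            idx = it.multi_index
            old = a[idx]
            a[idx] = old + eps
            fp = loss(Ws, bs, data)
            a[idx] = old - eps
            fm = loss(Ws, bs, data)
            a[idx] = old
            g[idx] = (fp - fm) / (2 * eps)
        grads.append(g)
    return grads[:len(Ws)], grads[len(Ws):]

def act(Qs, Ws, bs):                         # action of (Q_1, ..., Q_{L-1})
    Qf = [np.eye(Ws[0].shape[1]), *Qs, np.eye(Ws[-1].shape[0])]
    return ([Qf[i + 1] @ W @ Qf[i].T for i, W in enumerate(Ws)],
            [Qf[i + 1] @ b for i, b in enumerate(bs)])

n = [1, 3, 2, 1]
Ws = [rng.normal(size=(m, k)) for k, m in zip(n, n[1:])]
bs = [rng.normal(size=m) for m in n[1:]]
data = [(rng.normal(size=1), rng.normal(size=1)) for _ in range(3)]
Qs = [np.linalg.qr(rng.normal(size=(k, k)))[0] for k in n[1:-1]]
QsT = [Q.T for Q in Qs]                      # the action of Q^{-1}
```

Up to finite-difference error, $\nabla\mathcal L(\mathbf{Q}^{-1}\cdot(\mathbf{W},\mathbf{b})) = \mathbf{Q}^{-1}\cdot\nabla\mathcal L(\mathbf{W},\mathbf{b})$, hence $\gamma$ commutes with the action.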
The {\it projected gradient descent} map on $\mathsf{Param}(\mathbf{n})$ is given by: \[ \gamma_{\text{\rm proj}} : \mathsf{Param}(\mathbf{n}) \to \mathsf{Param}(\mathbf{n}) ,\qquad (\mathbf{W}, \mathbf{b}) \mapsto \mathrm{Proj}\left( \gamma(\mathbf{W}, \mathbf{b}) \right) \] where the map $\mathrm{Proj}$ zeroes out all entries in the bottom left $(n_{i} - n^{\rm red}_{i}) \times n^{\rm red}_{i-1}$ submatrix of $W_i - \nabla_{W_i}\mathcal L$, and the bottom $(n_i - n^{\rm red}_i)$ entries in $b_i - \nabla_{b_i} \mathcal L$, for each $i$. Schematically: \[ \small {W_i} - \nabla_{W_i} \mathcal L= \begin{bmatrix} * & * \\ * & * \end{bmatrix} \mapsto \begin{bmatrix} * & * \\ 0 & * \end{bmatrix} , \qquad {b_i} - \nabla_{b_i} \mathcal L= \begin{bmatrix} * \\ * \end{bmatrix} \mapsto \begin{bmatrix} * \\ 0 \end{bmatrix} \] To state the following theorem, let $\mathbf{W}^\text{\rm red}, \mathbf{b}^\text{\rm red}, \mathbf{Q} = \text{\tt{QR-compress}}(\mathbf{W}, \mathbf{b})$ be the outputs of Algorithm \ref{alg:QR-mod-comp} applied to $(\mathbf{W}, \mathbf{b}) \in \mathsf{Param}(\mathbf{n})$. Hence $(\mathbf{W}^\text{\rm red}, \mathbf{b}^\text{\rm red} )\in \mathsf{Param}(\mathbf{n}^{\text{\rm red}})$ are the parameters of the compressed model, and $\mathbf{Q}\in O(\mathbf{n}^\text{\rm hid})$ is an orthogonal parameter symmetry. We also consider the action (Section \ref{subsec:paramspace}) of $\mathbf{Q}^{-1}$ applied to $(\mathbf{W}, \mathbf{b})$. \begin{theorem}\label{thm:proj-gd} Let $\mathbf{W}^\text{\rm red}, \mathbf{b}^\text{\rm red}, \mathbf{Q} = \text{\tt{QR-compress}}(\mathbf{W}, \mathbf{b})$ be the outputs of Algorithm \ref{alg:QR-mod-comp} applied to $(\mathbf{W}, \mathbf{b}) \in \mathsf{Param}(\mathbf{n})$. Set $\mathbf{U} = \mathbf{Q}^{-1} \cdot \left( \mathbf{W} , \mathbf{b} \right) - ( \mathbf{W}^\text{\rm red}, \mathbf{b}^\text{\rm red})$. 
For any $k \geq 0$, we have: \[ \gamma^k( \mathbf{W}, \mathbf{b}) = \mathbf{Q} \cdot \gamma^k( \mathbf{Q}^{-1} \cdot \left( \mathbf{W} , \mathbf{b} \right)) \qquad \qquad \gamma_\text{\rm proj}^k (\mathbf{Q}^{-1} \cdot \left( \mathbf{W} , \mathbf{b} \right)) = \gamma_{\text{\rm red}}^k (\mathbf{W}^\text{\rm red}, \mathbf{b}^\text{\rm red}) + \mathbf{U}. \] \end{theorem} We conclude that gradient descent with initial values $(\mathbf{W}, \mathbf{b})$ is equivalent to gradient descent with initial values $ \mathbf{Q}^{-1} \cdot \left( \mathbf{W} , \mathbf{b} \right)$ since at any stage we can apply the orthogonal symmetry $\mathbf{Q}^{\pm 1}$ to move from one to the other. Furthermore, projected gradient descent with initial values $ \mathbf{Q}^{-1} \cdot \left( \mathbf{W} , \mathbf{b} \right)$ is equivalent to gradient descent on $\mathsf{Param}(\mathbf{n}^{\text{\rm red}})$ with initial values $(\mathbf{W}^\text{\rm red}, \mathbf{b}^\text{\rm red})$ since at any stage we can move from one to the other by $\pm\mathbf{U}$. \section{Experiments}\label{sec:exp} In addition to the theoretical results in this work, we provide an implementation of Algorithm \ref{alg:QR-mod-comp}, in order to validate the claims of Theorems \ref{thm:mod-comp} and \ref{thm:proj-gd} empirically, as well as to quantify real-world performance. Full experimental details are in Appendix \ref{app:exp}. {\bf(1) Empirical verification of Theorem \ref{thm:mod-comp}.} We learn the function $f(x) = e^{-x^2}$ from samples using a radial neural network with widths $\mathbf{n} = (1,6,7,1)$ and activation the radial shifted sigmoid $h(x) = 1/(1+e^{-x + s})$. Applying \texttt{QR-compress} gives a compressed radial neural network with widths $\mathbf{n}^{\mathrm{red}} = (1,2,3,1)$. Theorem \ref{thm:mod-comp} implies that the respective neural functions $F$ and $F_\text{\rm red}$ are equal. 
Over 10 random initializations, the mean absolute error is negligible up to machine precision: $(1/N) \sum_{j} |F(x_j) - F_\text{\rm red}(x_j)| = 1.31 \cdot 10^{-8} \pm 4.45 \cdot 10^{-9}$. {\bf (2) Empirical verification of Theorem \ref{thm:proj-gd}.} The claim is that training the transformed model with parameters $\mathbf{Q}^{-1} \cdot (\mathbf{W}, \mathbf{b})$ and objective $\mathcal{L}$ by projected gradient descent coincides with training the reduced model with parameters $(\mathbf{W}^\text{\rm red}, \mathbf{b}^\text{\rm red})$ and objective $\mathcal{L}_{\mathrm{red}}$ by usual gradient descent. We verified this on synthetic data as above. Over 10 random initializations, the loss functions after training match: $|\mathcal{L}-\mathcal{L}_{\mathrm{red}}| = 4.02 \cdot 10^{-9} \pm 7.01 \cdot 10^{-9}$. {\bf (3) The compressed model trains faster.} Our compression method may be applied before training to produce a smaller model class which \emph{trains} faster without sacrificing accuracy. We demonstrate this in learning the function $f : \mathbb{R}^2 \to \mathbb{R}^2$ sending $(t_1, t_2)$ to $(e^{-t_1^2}, e^{-t_2^2})$ using a radial neural network with widths $\mathbf{n} = (2,16,64, 128, 16, 2)$ and activation the radial sigmoid $h(r) = 1/(1+e^{-r})$. Applying \texttt{QR-compress} gives a compressed network with widths $\mathbf{n}^{\mathrm{red}} = (2,3,4,5,6,2)$. We trained both models until the training loss was $\leq0.01$. Over 10 random initializations on our system, the reduced network trained in $15.32 \pm 2.53$ seconds and the original network trained in $31.24 \pm 4.55$ seconds. \section{Conclusions and Discussion}\label{sec:conclusions} This paper demonstrates that radial neural networks are universal approximators and that their parameter spaces exhibit a rich symmetry group, leading to a model compression algorithm. 
The results of this work combine to build a theoretical foundation for the use of radial neural networks, and suggest that radial neural networks hold promise for wider practical applicability. Furthermore, this work makes an argument for considering the advantages of non-pointwise nonlinearities in neural networks. There are two main limitations of our results, each providing an opportunity for future work. First, our universal approximation constructions currently work only for the Step-ReLU radial rescaling activation; it would be desirable to generalize to other activations. Additionally, Theorem \ref{thm:mod-comp} achieves compression only for networks whose widths satisfy $n_i > n_{i-1}+1$ for some $i$. Neural networks which do not have increasing widths anywhere in their architecture, such as encoders, would not be compressible. Further extensions of this work include: First, little is currently known about the stability properties of radial neural networks during training, as well as their sensitivity to initialization. Second, radial rescaling activations provide an extreme case of symmetry; there may be benefits to combining radial and pointwise activations within a single network, for example, through `block' radial rescaling functions. Third, the parameter space symmetries may provide a key ingredient in analyzing the gradient flow dynamics of radial neural networks and computation of conserved quantities. Fourth, radial rescaling activations can be used within convolutional or group-equivariant NNs. Finally, based on the theoretical advantages laid out in this paper, future work will empirically explore applications in which we expect radial networks to outperform alternative methods. Such potential applications include data spaces with circular or distance-based class boundaries.
\section*{Acknowledgements} We would like to thank Avraham Aizenbud, Marco Antonio Armenta, Alex Kolmus, Niklas Smedemark-Margulies, Jan-Willem van de Meent, and Rose Yu for insightful discussions, comments, and questions. This work was (partially) funded by the NWO under the CORTEX project (NWA.1160.18.316) and NSF grant \#2134178. Robin Walters is supported by the Roux Institute and the Harold Alfond Foundation. \bibliographystyle{plainnat}
\section{Introduction}\label{section1} \setcounter{equation}{0} In 1957 R\'enyi published his paper \cite{R} about representations for real numbers by $f$-expansions, called hereafter $\varphi$-expansions, which had a tremendous impact on Dynamical Systems Theory. The ideas of R\'enyi were further developed by Parry in \cite{P1} and \cite{P2}. See also the book of Schweiger \cite{Sch}. The first part of the paper, section \ref{section2}, is an exposition of the theory of $\varphi$-expansions in the setting of piecewise monotone dynamical systems. Although many of the results of section \ref{section2} are known, for example see \cite{Bo} chapter 9 for Theorem \ref{thm2.5}, we state necessary and sufficient conditions for the validity of the $\varphi$-expansion (Theorems \ref{thm2.1bis} and \ref{thm2.1ter}), which are different from those in Parry's paper \cite{P2}. We then use $\varphi$-expansions to study two interesting and related problems in sections \ref{section3} and \ref{section4}. When one applies the method of section \ref{section2} to the dynamical system $\beta x+\alpha\mod1$, one obtains a symbolic shift which is entirely described by two strings $\ud{u}^{\alpha,\beta}$ and $\ud{v}^{\alpha,\beta}$ of symbols in a finite alphabet ${\tt A}=\{0,\ldots,k-1\}$. The shift space is given by \begin{equation}\label{1.1} \mathbf\Sigma(\ud{u}^{\alpha,\beta},\ud{v}^{\alpha,\beta})=\big\{\ud{x}\in{\tt A}^{\Z_+}\,{:}\; \ud{u}^{\alpha,\beta}\preceq\sigma^n\ud{x}\preceq\ud{v}^{\alpha,\beta}\;\,\forall n\geq 0 \big\}\,, \end{equation} where $\preceq$ is the lexicographic order and $\sigma$ the shift map. The particular case $\alpha=0$ has been much studied from many different viewpoints ($\beta$-shifts). For $\alpha\not=0$ the structure of the shift space is richer.
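Concretely, the boundary strings can be computed numerically. The sketch below (ours) assumes that $\ud{u}^{\alpha,\beta}$ and $\ud{v}^{\alpha,\beta}$ arise as the itineraries of the endpoints $0$ and $1$ under $x\mapsto \beta x+\alpha\mod 1$, a characterization made precise in sections \ref{section2} and \ref{section4}; the floor-digit convention is an assumption valid for generic parameters.

```python
import numpy as np

def itinerary(x, alpha, beta, n):
    """Coding of x under T(x) = beta*x + alpha mod 1, with digit
    floor(beta*x + alpha). For generic (alpha, beta) this floor convention
    realizes the endpoint strings from the orbits of 0 and 1."""
    digits = []
    for _ in range(n):
        d = int(np.floor(beta * x + alpha))
        digits.append(d)
        x = beta * x + alpha - d
    return digits

alpha, beta, n = 0.3, 1.8, 15
u = itinerary(0.0, alpha, beta, n)   # candidate prefix of u^{alpha,beta}
v = itinerary(1.0, alpha, beta, n)   # candidate prefix of v^{alpha,beta}
w = itinerary(0.5, alpha, beta, n)   # coding of a typical point

# every shifted tail of w sits lexicographically between u and v,
# as in the definition (1.1) of the shift space
for k in range(8):
    tail = w[k:]
    assert u[:len(tail)] <= tail <= v[:len(tail)]
```

Here Python's list comparison implements the lexicographic order on equal-length prefixes, which never produces a false violation of $\preceq$ when the strings are truncated.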
A natural problem is to study all shift spaces $\Sigma(\ud{u},\ud{v})$ of the form \eqref{1.1} when we replace $\ud{u}^{\alpha,\beta}$ and $\ud{v}^{\alpha,\beta}$ by a pair of strings $\ud{u}$ and $\ud{v}$. In section \ref{section3} we give an algorithm, Theorem \ref{thm3.1}, based on the $\varphi$-expansion, which allows one to compute the topological entropy of shift spaces $\Sigma(\ud{u},\ud{v})$. One of the essential tools is the follower-set graph associated to the shift space. This graph is presented in detail in subsection \ref{subsectionfollower}. The algorithm is given in subsection \ref{subsectionalgo} and the computations of the topological entropy in subsection \ref{topological}. The basic idea of the algorithm is to compute two real numbers $\bar{\alpha}$ and $\bar{\beta}$, given the strings $\ud{u}$ and $\ud{v}$, and to show that the shift space $\mathbf\Sigma(\ud{u},\ud{v})$ is a modification of the shift space $\Sigma(\ud{u}^{\bar{\alpha},\bar{\beta}},\ud{v}^{\bar{\alpha},\bar{\beta}})$ obtained from the dynamical system $\bar{\beta}x+\bar{\alpha}\mod1$, and that the topological entropies of the two shift spaces are the same. In the last section we consider the following inverse problem for the dynamical systems $\beta x+\alpha \mod1$: given $\ud{u}$ and $\ud{v}$, find $\alpha$ and $\beta$ so that $$ \ud{u}=\ud{u}^{\alpha,\beta}\quad\text{and}\quad\ud{v}=\ud{v}^{\alpha,\beta}\,. $$ The solution of this problem is given in Theorems \ref{thm4.1} and \ref{thm4.2} for all $\beta>1$. \section{$\varphi$-expansion for piecewise monotone dynamical \\ systems}\label{section2} \setcounter{equation}{0} \subsection{Piecewise monotone dynamical systems}\label{subsection2.1} Let $X:=[0,1]$ (with the Euclidean distance). We consider the case of piecewise monotone dynamical systems of the following type. Let $0=a_0<a_1<\cdots<a_k=1$ and $I_j:=(a_j,a_{j+1})$, $j\in{\tt A}$.
We set ${\tt A}:=\{0,\ldots,k-1\}$, $k\geq 2$, and $$ S_0:=X\backslash \bigcup_{j\in{\tt A}}I_j\,. $$ For each $j\in{\tt A}$ let $$ f_j:I_j\mapsto J_j:=f_j(I_j)\subset [0,1] $$ be a strictly monotone continuous map. When necessary we also denote by $f_j$ the continuous extension of the map on the closure $\overline{I}_j$ of $I_j$. We define a map $T$ on $X\backslash S_0$ by setting $$ T(x):=f_j(x)\quad \text{if $x\in I_j$}\,. $$ The map $T$ is left undefined on $S_0$. We also assume that \begin{equation}\label{2.1} \big(\bigcup_{i\in{\tt A}}J_i\big)\cap I_j=I_j\quad\forall j\,. \end{equation} \noindent We introduce sets $X_j$, $S_j$, and $S$ by setting for $j\geq 1$ $$ X_0:=[0,1]\,,\quad X_j:=X_{j-1}\backslash S_{j-1}\,,\quad S_j:=\{x\in X_j\,{:}\; T(x)\in S_{j-1}\}\,,\quad S:=\bigcup_{j\geq 0}S_j\,. $$ \begin{lem}\label{lem2.1} Under the condition \eqref{2.1}, $T^n(X_{n+1})= X_1$ and $T(X\backslash S)=X\backslash S$. $X\backslash S$ is dense in $X$. \end{lem} \medbreak\noindent{\bf Proof}:\enspace Condition \eqref{2.1} is equivalent to $T(X_1)\supset X_1$. Since $X_2=X_1\backslash S_1$ and $S_1=\{x\in X_1\,{:}\; T(x)\not\in X_1\}$, we have $T(X_2)=X_1$. Suppose that $T^n(X_{n+1})=X_1$; we prove that $T^{n+1}(X_{n+2})=X_{1}$. One has $X_{n+1}=X_{n+2}\cup S_{n+1}$ and $$ X_1=T^n(X_{n+1})=T^n(X_{n+2})\cup T^n(S_{n+1})\,. $$ Applying once more $T$, $$ X_1\subset T(X_1)=T^{n+1}(X_{n+2})\cup T^{n+1}(S_{n+1})\,. $$ $T^{n+1}$ is defined on $X_{n+1}$ and $S_{n+1}\subset X_{n+1}$. $$ T^{n+1}S_{n+1}=\{x\in X_{n+1}\,{:}\; T^{n+1}(x)\in S_0\}= \{x\in X_{n+1}\,{:}\; T^{n+1}(x)\not\in X_1\}\,. $$ Hence $T^{n+1}(X_{n+2})=X_{1}$. Clearly $T(X\backslash S)\subset X\backslash S$ and $T(S\backslash S_0)\subset S$. Since $X_1$ is the disjoint union of $X\backslash S$ and $S\backslash S_0$, and $TX_1\supset X_1$, we have $T(X\backslash S)=X\backslash S$. The sets $X\backslash S_k$ are open and dense in $X$. By Baire's Theorem $X\backslash S=\bigcap_{k}(X\backslash S_k)$ is dense. 
\qed Let $\Z_+:=\{0,1,2,\ldots\}$ and ${\tt A}^{\Z_+}$ be equipped with the product topology. Elements of ${\tt A}^{\Z_+}$ are called {\sf strings} and denoted by $\ud{x}=(x_0,x_1,\ldots)$. A finite string $\ud{w}=(w_0,\cdots,w_{n-1})$, $w_j\in{\tt A}$, is a {\sf word}; we also use the notation $\ud{w}=w_0\cdots w_{n-1}$. The {\sf length of $\ud{w}$} is $|\ud{w}|=n$. A {\sf $n$-word} is a word of length $n$. There is a single word of length $0$, the {\sf empty-word} $\epsilon$. The set of all words is ${\tt A}^*$. The shift-map $\sigma\,{:}\; {\tt A}^{\Z_+}\rightarrow{\tt A}^{\Z_+}$ is defined by $$ \sigma(\ud{x}):=(x_1,x_2,\ldots)\,. $$ We define two operations ${\tt p}$ and ${\tt s}$ on ${\tt A}^*\backslash\{\epsilon\}$, \begin{align} {\tt p}{\ud{w}}&:=\begin{cases} w_0\cdots w_{n-2}& \text{if $\ud{w}=w_0\cdots w_{n-1}$ and $n\geq 2$}\label{opp}\\ \epsilon & \text{if $\ud{w}=w_0$} \end{cases}\\ {\tt s}{\ud{w}}&:=\begin{cases} w_1\cdots w_{n-1}& \text{if $\ud{w}=w_0\cdots w_{n-1}$ and $n\geq 2$}\label{ops}\\ \epsilon & \text{if $\ud{w}=w_0$.} \end{cases} \end{align} On ${\tt A}^{\Z_+}$ we define a total order, denoted by $\prec$. We set $$ \delta(j):=\begin{cases} +1 &\text{if $f_j$ is increasing}\\ -1 & \text{if $f_j$ is decreasing,} \end{cases} $$ and for a word $\ud{w}$, $$ \delta(\ud{w}):=\begin{cases} \delta(w_0)\cdots \delta(w_{n-1}) &\text{if $\ud{w}=w_0\cdots w_{n-1}$}\\ 1& \text{if $\ud{w}=\epsilon$.} \end{cases} $$ Let $\ud{x}^\prime\not =\ud{x}^{\prime\prime}$ belong to ${\tt A}^{\Z_+}$; define $j$ as the smallest integer with $x^\prime_j\not=x^{\prime\prime}_j$. By definition $$ \ud{x}^\prime\prec\ud{x}^{\prime\prime}\iff \begin{cases} x^\prime_j<x^{\prime\prime}_j &\text{if $\delta(x^\prime_0\cdots x^\prime_{j-1})=1$}\\ x^\prime_j>x^{\prime\prime}_j &\text{if $\delta(x^\prime_0\cdots x^\prime_{j-1})=-1$}\,. 
\end{cases} $$ As usual $\ud{x}^\prime\preceq \ud{x}^{\prime\prime}$ if and only if $\ud{x}^\prime\prec\ud{x}^{\prime\prime}$ or $\ud{x}^\prime=\ud{x}^{\prime\prime}$. When all maps $f_j$ are increasing this order is the lexicographic order. \subsection{$\varphi$-expansion}\label{subsection2.2} We give an alternative description of a piecewise monotone dynamical system as in Parry's paper \cite{P2}. In this description, when all maps $f_j$ are increasing, one could use instead of the intervals $I_j$ the intervals $I^\prime_j:=[a_j,a_{j+1})$, $j\in{\tt A}$. In that case $S_0=\{a_k\}$ and $S_j=\emptyset$ for all $j\geq 1$. This would correspond to the setting of Parry's paper \cite{P2}. We define a map $\varphi$ on the disjoint union $$ {\rm dom}\varphi:=\bigcup_{j=0}^{k-1}j+J_j\subset \R\,, $$ by setting \begin{equation}\label{2.3} \varphi(x):= f^{-1}_j(t) \quad \text{if $x=j+t$ and $t\in J_j$}\,. \end{equation} The map $\varphi$ is continuous, injective with range $X_1$. On $X_1$ the inverse map is $$ \varphi^{-1}(x)=j+Tx\quad\text{if $x\in I_j$}\,. $$ For each $j$, such that $f_j$ is increasing, we define $\overline{\varphi}^j$ on $j+[0,1]$ (using the extension of $f_j$ to $[a_j,a_{j+1}]$) by \begin{equation}\label{2.4} \overline{\varphi}^j(x):=\begin{cases} a_j & \text{if $x=j+t$ and $t\leq f_j(a_j)$}\\ f^{-1}_j(t) & \text{if $x=j+t$ and $t\in J_j$}\\ a_{j+1} & \text{if $x=j+t$ and $f_j(a_{j+1})\leq t$.} \end{cases} \end{equation} For each $j$, such that $f_j$ is decreasing, we define $\overline{\varphi}^j$ on $j+[0,1]$ by \begin{equation}\label{2.5} \overline{\varphi}^j(x):=\begin{cases} a_{j+1} & \text{if $x=j+t$ and $t\leq f_j(a_{j+1})$}\\ f^{-1}_j(t) & \text{if $x=j+t$ and $t\in J_j$}\\ a_{j} & \text{if $x=j+t$ and $f_j(a_{j})\leq t$.} \end{cases} \end{equation} \noindent It is convenient below to consider the family of maps $\overline{\varphi}^j$ as a single map defined on $[0,k]$, which is denoted by $\overline{\varphi}$. 
In order to avoid ambiguities at integers, where the map may be multi-valued, we always write a point of $[j,j+1]$ as $x=j+t$, $t\in[0,1]$, so that $$ \overline{\varphi}(j+t)\equiv\overline{\varphi}(x):=\overline{\varphi}^j(t)\,. $$ We define the {\sf coding map} ${\tt i}:X\backslash S\rightarrow {\tt A}^{\Z_+}$ by $$ {\tt i}(x):=({\tt i}_0(x),{\tt i}_1(x),\ldots)\quad\text{with ${\tt i}_n(x):=j$ if $T^nx\in I_j$}\,. $$ The {\sf $\varphi$-code} of $x\in X\backslash S$ is the string ${\tt i}(x)$, and we set $$ \Sigma=\{\ud{x}\in{\tt A}^{\Z_+}\,{:}\; \text{$\ud{x}={\tt i}(x)$ for some $x\in X\backslash S$}\}\,. $$ For $x\in X\backslash S$ and any $n\geq 0$, \begin{equation}\label{2.7} \varphi^{-1}(T^nx)={\tt i}_n(x)+T^{n+1}x\quad\text{and}\quad {\tt i}(T^nx)=\sigma^n{\tt i}(x) \,. \end{equation} Let $z_j\in{\tt A}$, $1\leq j \leq n$, and $t\in [0,1]$; we set $$ \overline{\varphi}_1(z_1+t):=\overline{\varphi}(z_1+t) $$ and \begin{equation}\label{formule} \overline{\varphi}_n(z_1,\ldots,z_n+t):= \overline{\varphi}_{n-1}(z_1,\ldots,z_{n-1}+\overline{\varphi}(z_n+t))\,. \end{equation} \noindent For $n\geq 1$ and $m\geq 1$ we have \begin{equation}\label{formulegenerale} \overline{\varphi}_{n+m}(z_1,\ldots,z_{n+m}+t) = \overline{\varphi}_n(z_1,\ldots,z_{n}+ \overline{\varphi}_m(z_{n+1},\ldots,z_{n+m}+t))\,. \end{equation} \noindent The map $t\mapsto \overline{\varphi}_n(x_0,\ldots,x_{n-1}+t)$ is increasing if $\delta(x_0\cdots x_{n-1})=1$ and decreasing if $\delta(x_0\cdots x_{n-1})=-1$. We also write $\overline{\varphi}_n(\ud{x})$ for $\overline{\varphi}_n(x_0,\ldots,x_{n-1})$. \begin{defn}\label{defn2.2} The real number $s$ has a {\sf $\varphi$-expansion} $\ud{x}\in{\tt A}^{\Z_+}$ if the following limit exists, $$ s=\lim_{n\rightarrow\infty}\overline{\varphi}_n(\ud{x})\equiv \overline{\varphi}\big(x_0+\overline{\varphi}(x_1+\ldots)\big)\equiv \overline{\varphi}_\infty(\ud{x})\,. 
$$ The {\sf $\varphi$-expansion is well-defined} if for all $\ud{x}\in{\tt A}^{\Z_+}$, $\lim_{n\rightarrow\infty}\overline{\varphi}_n(\ud{x})= \overline{\varphi}_\infty(\ud{x})$ exists.\\ The {\sf $\varphi$-expansion is valid} if for all $x\in X\backslash S$ the $\varphi$-code ${\tt i}(x)$ of $x$ is a {\sf $\varphi$-expansion} of $x$. \end{defn} If the $\varphi$-expansion is valid, then for $x\in X\backslash S$, using \eqref{formulegenerale}, \eqref{2.7} and the continuity of the maps $\overline{\varphi}^j$, \begin{align}\label{2.8} x&=\lim_{n\rightarrow\infty} \overline{\varphi}_n\big({\tt i}_0(x),\ldots,{\tt i}_{n-1}(x)\big)\nonumber\\ &= \lim_{m\rightarrow\infty} \overline{\varphi}_{n}\big({\tt i}_0(x),\ldots,{\tt i}_{n-1}(x)+ \overline{\varphi}_{m}\big({\tt i}_n(x),\ldots,{\tt i}_{n+m-1}(x)\big)\big) \\ &= \overline{\varphi}_{n}\big({\tt i}_0(x),\ldots,{\tt i}_{n-1}(x)+ \overline{\varphi}_\infty({\tt i}(T^nx)\big)\,.\nonumber \end{align} The basic and elementary fact of the $\varphi$-expansion is \begin{equation}\label{2.9} \text{$a,b\in[0,1]$ and $x_0<x_0^\prime$} \implies \overline{\varphi}(x_0+a)\leq \overline{\varphi}(x_0^\prime+b)\,. \end{equation} We begin with two lemmas on the $\varphi$-code (for Lemma \ref{lem2.4} see e.g. \cite{CoE}). \begin{lem}\label{lem2.4} The $\varphi$-code ${\tt i}$ is $\preceq$-order-preserving on $X\backslash S$: $x\leq y$ implies ${\tt i}(x)\preceq{\tt i}(y)$. \end{lem} \medbreak\noindent{\bf Proof}:\enspace Let $x<y$. Either ${\tt i}_0(x)<{\tt i}_0(y)$, or ${\tt i}_0(x)={\tt i}_0(y)$; in the latter case, the strict monotonicity of $f_{{\tt i}_0(x)}$ implies \begin{align*} \varphi^{-1}(x)&={\tt i}_0(x) +T(x)< \varphi^{-1}(y)={\tt i}_0(x) +T(y)\quad\text{if $\delta({\tt i}_0(x))=+1$}\\ \varphi^{-1}(x)&={\tt i}_0(x) +T(x)> \varphi^{-1}(y)={\tt i}_0(x) +T(y)\quad\text{if $\delta({\tt i}_0(x))=-1$.} \end{align*} Repeating this argument we get ${\tt i}(x)\preceq{\tt i}(y)$. 
\qed \begin{lem}\label{lem2.4bis} The $\varphi$-code ${\tt i}$ is continuous\footnote{\label{f2.3} If we use the intervals $I^\prime_j=[a_j,a_{j+1})$, then we have only right-continuity} on $X\backslash S$. \end{lem} \medbreak\noindent{\bf Proof}:\enspace Let $x\in X\backslash S$ and $\{x^n\}\subset X\backslash S$, $\lim_n x^n=x$. Let $x\in I_{j_0}$. For $n$ large enough $x^n\in I_{j_0}$ and ${\tt i}_0(x^n)={\tt i}_0(x)=j_0$. Let $j_1:={\tt i}_1(x)$; we can choose $n_1$ so large that $Tx^{n}\in I_{j_1}$ for all $n\geq n_1$. Hence ${\tt i}_0(x^n)=j_0$ and ${\tt i}_1(x^n)=j_1$ for all $n\geq n_1$. By induction we can find an increasing sequence $\{n_m\}$ such that $n\geq n_m$ implies ${\tt i}_j(x)={\tt i}_j(x^{n})$ for all $j=0,\ldots,m$. \qed The next lemmas give the essential properties of the map $\overline{\varphi}_\infty$. \begin{lem}\label{lem2.2} Let $\ud{x}\in{\tt A}^{\Z_+}$. Then there exist $y_\uparrow(\ud{x})$ and $y_\downarrow(\ud{x})$ in $[0,1]$, such that $y_\uparrow(\ud{x})\leq y_\downarrow(\ud{x})$; $y_\uparrow(\ud{x})$ and $y_\downarrow(\ud{x})$ are the only possible cluster points of the sequence $\{\overline{\varphi}_n(\ud{x})\}_n$.\\ Let $x\in X\backslash S$ and set $\ud{x}:={\tt i}(x)$. Then $$ a_j\leq y_\uparrow(\ud{x})\leq x\leq y_\downarrow(\ud{x})\leq a_{j+1}\quad \text{if $x_0=j$}\,. $$ If the $\varphi$-expansion is valid, then each $y\in X\backslash S$ has a unique $\varphi$-expansion\footnote{\label{f2.4} If we use the intervals $I^\prime_j=[a_j,a_{j+1})$, this statement is not correct.}, $$ y=\overline{\varphi}_\infty(\ud{x})\in X\backslash S\iff \ud{x}={\tt i}(y)\,. $$ \end{lem} \medbreak\noindent{\bf Proof}:\enspace Consider the map $$ t\mapsto \overline{\varphi}_n(x_0,\ldots,x_{n-1}+t)\,. $$ Suppose that $\delta(x_0\cdots x_{n-1})=-1$.
Then it is decreasing, and for any $m$ \begin{align*} \overline{\varphi}_{n+m}(x_0,\ldots,x_{n+m-1}) &= \overline{\varphi}_n(x_0,\ldots,x_{n-1}+ \overline{\varphi}_m(x_n,\ldots,x_{n+m-1}))\\ &\leq \overline{\varphi}_n(x_0,\ldots,x_{n-1})\,. \end{align*} In particular the subsequence $\{\overline{\varphi}_n(\ud{x})\}_n$ of all $n$ such that $\delta(x_0\cdots x_{n-1})=-1$ is decreasing with limit\footnote{\label{f2.limit} If the subsequence is finite, then $y_\downarrow(\ud{x})$ is the last point of the subsequence.} $y_\downarrow(\ud{x})$. When there is no $n$ such that $\delta(x_0\cdots x_{n-1})=-1$, we set $y_\downarrow(\ud{x}):=a_{x_0+1}$. Similarly, the subsequence $\{\overline{\varphi}_n(\ud{x})\}_n$ of all $n$ such that $\delta(x_0\cdots x_{n-1})=1$ is increasing with limit $y_\uparrow(\ud{x})\leq y_\downarrow(\ud{x})$. When there is no $n$ such that $\delta(x_0\cdots x_{n-1})=1$, we set $y_\uparrow(\ud{x}):=a_{x_0}$. Since any $\overline{\varphi}_n(\ud{x})$ appears in one of these sequences, there are at most two cluster points for $\{\overline{\varphi}_n(\ud{x})\}_n$. Let $x\in X\backslash S$; $x=\varphi(\varphi^{-1}(x))$ and by \eqref{2.7} \begin{align}\label{identity} x&=\varphi({\tt i}_0(x)+Tx)= \varphi({\tt i}_0(x)+\varphi(\varphi^{-1}(Tx)))= \varphi({\tt i}_0(x)+\varphi({\tt i}_1(x)+T^2x))=\cdots \nonumber\\ &=\varphi\big({\tt i}_0(x)+\varphi({\tt i}_1(x)+\ldots +\varphi({\tt i}_{n-1}(x)+T^{n}x))\big)\,. \end{align} By monotonicity \begin{equation}\label{id1} \big(\text{$x\in X\backslash S$ and $\delta({\tt i}_0(x)\cdots{\tt i}_{n-1}(x))=-1$}\big)\implies \overline{\varphi}_n({\tt i}_0(x),\cdots,{\tt i}_{n-1}(x))\geq x\,, \end{equation} and \begin{equation}\label{id2} \big(\text{$x\in X\backslash S$ and $\delta({\tt i}_0(x)\cdots{\tt i}_{n-1}(x))=1$}\big)\implies \overline{\varphi}_n({\tt i}_0(x),\cdots,{\tt i}_{n-1}(x))\leq x\,. 
\end{equation} The inequalities of Lemma \ref{lem2.2} follow from \eqref{id1}, \eqref{id2} and $\overline{\varphi}({\tt i}_0(x)+t)\in [a_{x_0}, a_{x_0+1}]$. Suppose that the $\varphi$-expansion is valid and that $\overline{\varphi}_\infty(\ud{x})=y\in X\backslash S$. We prove that $\ud{x}={\tt i}(y)$. By hypothesis $y\in I_{x_0}$; using \eqref{2.8} and the fact that $I_{x_0}$ is open, we can write $$ y=\overline{\varphi}\big(x_0+\overline{\varphi}(x_1 +\overline{\varphi}(x_{2}+\ldots))\big) =\varphi\big(x_0+\overline{\varphi}(x_1 +\overline{\varphi}(x_{2}+\ldots))\big)\,. $$ This implies that $$ \varphi^{-1}(y)={\tt i}_0(y)+Ty= x_0+ \overline{\varphi}(x_1 +\overline{\varphi}(x_{2}+\ldots))\,. $$ Since $Ty \in X\backslash S$, we can iterate this argument. \qed \begin{lem}\label{lem2.3} Let $\ud{x},\ud{x}^\prime\in{\tt A}^{\Z_+}$ and $\ud{x}\preceq\ud{x}^\prime$. Then any cluster point of $\{\overline{\varphi}_n(\ud{x})\}_n$ is smaller than any cluster point of $\{\overline{\varphi}_n(\ud{x}^\prime)\}_n$. In particular, if $\overline{\varphi}_\infty$ is well-defined on ${\tt A}^{\Z_+}$, then $\overline{\varphi}_\infty$ is order-preserving. \end{lem} \medbreak\noindent{\bf Proof}:\enspace Let $\ud{x}\prec\ud{x}^\prime$ with $x_k=x^\prime_k$, $k=0,\ldots,m-1$ and $x_m\not=x^\prime_m$. We have $$ \overline{\varphi}_{m+n}(\ud{x})= \overline{\varphi}_m(x_0,\ldots,x_{m-1}+ \overline{\varphi}_n(\sigma^m\ud{x}))\,.
$$ By \eqref{2.9}, if $\delta(x_0\cdots x_{m-1})=1$, then $x_m<x^\prime_m$ and for any $n\geq 1$, $\ell\geq 1$, $$ \overline{\varphi}_n(\sigma^m\ud{x})=\overline{\varphi}_1(x_m+ \overline{\varphi}_{n-1}(\sigma^{m+1}\ud{x}))\leq \overline{\varphi}_\ell(\sigma^m\ud{x}^\prime)= \overline{\varphi}_1(x^\prime_m+ \overline{\varphi}_{\ell-1}(\sigma^{m+1}\ud{x}^\prime))\,; $$ if $\delta(x_0\cdots x_{m-1})=-1$, then $x_m>x^\prime_m$ and $$ \overline{\varphi}_n(\sigma^m\ud{x})=\overline{\varphi}_1(x_m+ \overline{\varphi}_{n-1}(\sigma^{m+1}\ud{x}))\geq \overline{\varphi}_\ell(\sigma^m\ud{x}^\prime)= \overline{\varphi}_1(x^\prime_m+ \overline{\varphi}_{\ell-1}(\sigma^{m+1}\ud{x}^\prime))\,. $$ Therefore, in both cases, for any $n\geq 1$, $\ell\geq 1$, $$ \overline{\varphi}_{m+n}(\ud{x})\leq \overline{\varphi}_{m+\ell}(\ud{x}^\prime)\,. $$ \qed \begin{lem}\label{lem2.4ter} Let $\ud{x}\in{\tt A}^{\Z_+}$ and $x_0=j$.\\ 1) Let $\delta(j)=1$ and $y_\uparrow(\ud{x})\in \overline{I}_j$ be a cluster point of $\{\overline{\varphi}_n(\ud{x})\}$. Then $f_j\big(y_\uparrow(\ud{x})\big)\geq y_\uparrow(\sigma\ud{x})$ if $y_\uparrow(\ud{x})=a_j$, $f_j\big(y_\uparrow(\ud{x})\big)\leq y_\uparrow(\sigma\ud{x})$ if $y_\uparrow(\ud{x})=a_{j+1}$ and $f_j\big(y_\uparrow(\ud{x})\big)=y_\uparrow(\sigma\ud{x})$ otherwise. The same conclusions hold when $y_\downarrow(\ud{x})$ is a cluster point of $\{\overline{\varphi}_n(\ud{x})\}$.\\ 2) Let $\delta(j)=-1$ and $y_\uparrow(\ud{x})\in \overline{I}_j$ be a cluster point of $\{\overline{\varphi}_n(\ud{x})\}$. Then $f_j\big(y_\uparrow(\ud{x})\big)\leq y_\downarrow(\sigma\ud{x})$ if $y_\uparrow(\ud{x})=a_j$, $f_j\big(y_\uparrow(\ud{x})\big)\geq y_\downarrow(\sigma\ud{x})$ if $y_\uparrow(\ud{x})=a_{j+1}$ and $f_j\big(y_\uparrow(\ud{x})\big)=y_\downarrow(\sigma\ud{x})$ otherwise. The same conclusions hold when $y_\downarrow(\ud{x})$ is a cluster point of $\{\overline{\varphi}_n(\ud{x})\}$. 
\end{lem} \medbreak\noindent{\bf Proof}:\enspace Set $f_j(\overline{I}_j):=[\alpha_j,\beta_j]$. Suppose for example that $\delta(j)=-1$ and that $n_k$ is the subsequence of all $m$ such that $\delta(x_0,\ldots,x_m)=1$. Since $\delta(j)=-1$ the sequence $\{\overline{\varphi}_{n_k-1}(\sigma\ud{x})\}_k$ is decreasing. Hence by continuity \begin{equation}\label{mon} y_\uparrow(\ud{x})=\lim_k\overline{\varphi}_{n_k}(\ud{x})= \overline{\varphi}(j+\lim_k\overline{\varphi}_{n_k-1}(\sigma\ud{x}))= \overline{\varphi}(j+y_\downarrow(\sigma\ud{x}))\,. \end{equation} If $y_\uparrow(\ud{x})=a_j$, then $f_j(a_j)=\beta_j\leq y_\downarrow(\sigma\ud{x})$; if $y_\uparrow(\ud{x})=a_{j+1}$, then $f_j(a_{j+1})=\alpha_j\geq y_\downarrow(\sigma\ud{x})$; if $a_j<y_\uparrow(\ud{x})<a_{j+1}$, then $$ j+f_j\big(y_\uparrow(\ud{x})\big)= \varphi^{-1}\big(\varphi(j+\lim_k\overline{\varphi}_{n_k-1}(\sigma\ud{x}))\big)= j+y_\downarrow(\sigma\ud{x})\,. $$ Similar proofs for the other cases. \qed \begin{lem}\label{lem2.4quatro} Let $\ud{x}\in{\tt A}^{\Z_+}$.\\ 1) If $\{\overline{\varphi}_n(\ud{x})\}$ has two cluster points, and if $y\in \big(y_\uparrow(\ud{x}), y_\downarrow(\ud{x})\big)$, then $y\in X\backslash S$, ${\tt i}(y)=\ud{x}$ and $y$ has no $\varphi$-expansion.\\ Let $x\in X\backslash S$ and set $\ud{x}:={\tt i}(x)$.\\ 2) If $\lim_n\overline{\varphi}_n(\ud{x})=y_\uparrow(\ud{x})$ and if $y\in (y_\uparrow(\ud{x}), x)$, then $y\in X\backslash S$, ${\tt i}(y)=\ud{x}$ and $y$ has no $\varphi$-expansion.\\ 3) If $\lim_n\overline{\varphi}_n(\ud{x})=y_\downarrow(\ud{x})$ and if $y\in(x,y_\downarrow(\ud{x}))$, then $y\in X\backslash S$, ${\tt i}(y)=\ud{x}$ and $y$ has no $\varphi$-expansion. \end{lem} \medbreak\noindent{\bf Proof}:\enspace Suppose that $y_\uparrow(\ud{x})<y< y_\downarrow(\ud{x})$. Then $y\in I_{x_0}$ and ${\tt i}_0(y)=x_0$. 
From Lemma \ref{lem2.4ter} $$ y_\uparrow(\sigma\ud{x})<Ty<y_\downarrow(\sigma\ud{x})\quad \text{if $\delta(x_0)=1$}\,, $$ and $$ y_\downarrow(\sigma\ud{x})>Ty>y_\uparrow(\sigma\ud{x})\quad \text{if $\delta(x_0)=-1$}\,. $$ Iterating this argument we prove that $T^ny\in I_{x_n}$ and ${\tt i}_n(y)=x_n$ for all $n\geq 1$. Suppose that $y$ has a $\varphi$-expansion, $y=\overline{\varphi}_\infty(\ud{x}^\prime)$. If $\ud{x}^\prime\prec\ud{x}$, then by Lemma \ref{lem2.3} $\overline{\varphi}_\infty(\ud{x}^\prime)\leq y_\uparrow(\ud{x})$ and if $\ud{x}\prec \ud{x}^\prime$, then by Lemma \ref{lem2.3} $y_\downarrow(\ud{x})\leq \overline{\varphi}_\infty(\ud{x}^\prime)$, which leads to a contradiction. Similar proofs in cases 2 and 3. \qed \begin{lem}\label{lem2.4quinto} Let $\ud{x}^\prime\in{\tt A}^{\Z_+}$ and $x\in X\backslash S$. Then $$ \text{$y_\downarrow(\ud{x}^\prime)<x$ $\implies$ $\ud{x}^\prime\preceq {\tt i}(x)$} \quad\text{and}\quad\text{$x<y_\uparrow(\ud{x}^\prime)$ $\implies$ ${\tt i}(x)\preceq \ud{x}^\prime$.} $$ \end{lem} \medbreak\noindent{\bf Proof}:\enspace Suppose that $y_\downarrow(\ud{x}^\prime)<x$ and $y_\downarrow(\ud{x}^\prime)$ is a cluster point. Either $x^\prime_0<{\tt i}_0(x)$ or $x^\prime_0={\tt i}_0(x)$ and by Lemma \ref{lem2.4ter} $$ y_\downarrow(\sigma\ud{x}^\prime) <Tx\quad\text{if $\delta(x_0^\prime)=1$,} $$ or $$ y_\uparrow(\sigma\ud{x}^\prime) >Tx\quad\text{if $\delta(x_0^\prime)=-1$.} $$ Since $y_\downarrow(\sigma\ud{x}^\prime)$ or $y_\uparrow(\sigma\ud{x}^\prime)$ is a cluster point we can repeat the argument and conclude that $\ud{x}^\prime\preceq{\tt i}(x)$. If $y_\downarrow(\ud{x}^\prime)$ is not a cluster point, then we use the cluster point $y_\uparrow(\ud{x}^\prime)<y_\downarrow(\ud{x}^\prime)$ for the argument. \qed \begin{thm}\label{thm2.1}{\rm \cite{P2}} A $\varphi$-expansion is valid if and only if the $\varphi$-code ${\tt i}$ is injective on $X\backslash S$. 
\end{thm} \medbreak\noindent{\bf Proof}:\enspace Suppose that the $\varphi$-expansion is valid. If $x\not=z$, then $$ x=\overline{\varphi}\big({\tt i}_0(x)+ \overline{\varphi}({\tt i}_1(x)+\ldots)\big)\not= \overline{\varphi}\big({\tt i}_0(z)+ \overline{\varphi}({\tt i}_1(z)+\ldots)\big)=z\,, $$ and therefore ${\tt i}(x)\not={\tt i}(z)$. Conversely, assume that $x\not=z$ implies ${\tt i}(x)\not={\tt i}(z)$. Let $x\in X\backslash S$, $\ud{x}={\tt i}(x)$, and suppose for example that $y_\uparrow(\ud{x})< y_\downarrow(\ud{x})$ are two cluster points. Then by Lemma \ref{lem2.4quatro} any $y$ such that $y_\uparrow(\ud{x})<y< y_\downarrow(\ud{x})$ is in $X\backslash S$ and ${\tt i}(y)=\ud{x}$, contradicting the hypothesis. Therefore $z:=\lim_n\overline{\varphi}_n(\ud{x})$ exists. If $z\not=x$, then we get again a contradiction using Lemma \ref{lem2.4quatro}. \qed Theorem \ref{thm2.1} states that the validity of the $\varphi$-expansion is equivalent to the injectivity of the map ${\tt i}$ defined on $X\backslash S$. One can also state that the validity of the $\varphi$-expansion is equivalent to the surjectivity of the map $\overline{\varphi}_\infty$. \begin{thm}\label{thm2.1bis} A $\varphi$-expansion is valid if and only if $\overline{\varphi}_\infty\,{:}\; {\tt A}^{\Z_+}\rightarrow [0,1]$ is well-defined on ${\tt A}^{\Z_+}$ and surjective. \end{thm} \medbreak\noindent{\bf Proof}:\enspace Suppose that the $\varphi$-expansion is valid. Let $\ud{x}\in{\tt A}^{\Z_+}$ and suppose that $\{\overline{\varphi}_n(\ud{x})\}_n$ has two different accumulation points $y_\uparrow<y_\downarrow$. By Lemma \ref{lem2.4quatro} we get a contradiction. Thus $\overline{\varphi}_\infty(\ud{x})$ is well-defined for any $\ud{x}\in{\tt A}^{\Z_+}$. To prove the surjectivity of $\overline{\varphi}_\infty$ it is sufficient to consider $s\in S$. The argument is a variant of the proof of Lemma \ref{lem2.4quatro}. 
Let $\ud{x}^\prime$ be a string such that for any $n\geq 1$ $$ f_{x^\prime_{n-1}}{\scriptstyle\circ} \cdots {\scriptstyle\circ} f_{x^\prime_0}(s)\in\overline{I}_{x^\prime_n}\,. $$ We use here the extension of $f_j$ to $\overline{I}_j$; we have a choice for $x^\prime_n$ whenever $f_{x^\prime_{n-1}}{\scriptstyle\circ} \cdots {\scriptstyle\circ} f_{x^\prime_0}(s)\in S_0$. Suppose that $\overline{\varphi}_\infty(\ud{x}^\prime)<s$ and that $\overline{\varphi}_\infty(\ud{x}^\prime)<z<s$. Since $s, \overline{\varphi}_\infty(\ud{x}^\prime)\in \overline{I}_{x_0^\prime}$, we have $z\in I_{x_0^\prime}$ and therefore ${\tt i}_0(z)=x_0^\prime$. Moreover, $$ \overline{\varphi}_\infty(\sigma\ud{x}^\prime)<Tz<f_{x^\prime_0}(s)\quad\text{if $\delta(x_0^\prime)=1$} $$ or $$ f_{x^\prime_0}(s)<Tz<\overline{\varphi}_\infty(\sigma\ud{x}^\prime)\quad\text{if $\delta(x_0^\prime)=-1$}\,. $$ Iterating the argument we get $z\in X\backslash S$ and ${\tt i}(z)=\ud{x}^\prime$, contradicting the validity of the $\varphi$-expansion. Similarly we exclude the possibility that $\overline{\varphi}_\infty(\ud{x}^\prime)>s$, thus proving the surjectivity of the map $\overline{\varphi}_\infty$. Suppose that $\overline{\varphi}_\infty\,{:}\; {\tt A}^{\Z_+}\rightarrow [0,1]$ is well-defined and surjective. Let $x\in X\backslash S$ and $\ud{x}={\tt i}(x)$. Suppose that $x< \overline{\varphi}_\infty(\ud{x})$. By Lemma \ref{lem2.4quatro} any $z$, such that $x<z<\overline{\varphi}_\infty(\ud{x})$, does not have a $\varphi$-expansion. This contradicts the hypothesis that $\overline{\varphi}_\infty$ is surjective. Similarly we exclude the possibility that $x> \overline{\varphi}_\infty(\ud{x})$. \qed \begin{thm}\label{thm2.1ter} A $\varphi$-expansion is valid if and only if $\overline{\varphi}_\infty: {\tt A}^{\Z_+}\rightarrow [0,1]$ is well-defined, continuous and there exist $\ud{x}^+$ with $\overline{\varphi}_\infty(\ud{x}^+)=1$ and $\ud{x}^-$ with $\overline{\varphi}_\infty (\ud{x}^-)=0$.
\end{thm} \medbreak\noindent{\bf Proof}:\enspace Suppose that the $\varphi$-expansion is valid. By Theorem \ref{thm2.1bis} $\overline{\varphi}_\infty$ is well-defined and surjective so that there exist $\ud{x}^+$ and $\ud{x}^-$ with $\overline{\varphi}_\infty(\ud{x}^+)=1$ and $\overline{\varphi}_\infty(\ud{x}^-)=0$. Suppose that $\ud{x}^n\downarrow\ud{x}$ and set $y:=\overline{\varphi}_\infty(\ud{x})$, $x_n:=\overline{\varphi}_\infty(\ud{x}^n)$. By Lemma \ref{lem2.3} the sequence $\{x_n\}$ is monotone decreasing; let $x:=\lim_nx_n$. Suppose that $y<x$ and $y<z<x$. Since $y<z<x_n$ for any $n\geq 1$ and $\lim_n\ud{x}^n=\ud{x}$, we prove, as in the beginning of the proof of Lemma \ref{lem2.4quatro}, that $z\in X\backslash S$. The validity of the $\varphi$-expansion implies that $z=\overline{\varphi}_\infty({\tt i}(z))$. By Lemma \ref{lem2.4quinto} $$ \ud{x}\preceq{\tt i}(z)\preceq\ud{x}^n\,. $$ Since these inequalities are valid for any $z$, with $y<z<x$, the validity of $\varphi$-expansion implies that we have strict inequalities, $\ud{x}\prec{\tt i}(z)\prec \ud{x}^n$. This contradicts the hypothesis that $\lim_{n\rightarrow\infty}\ud{x}^n=\ud{x}$. A similar argument holds in the case $\ud{x}^n\uparrow\ud{x}$. Hence $$ \lim_{n\rightarrow\infty}\ud{x}^n=\ud{x}\implies \lim_{n\rightarrow\infty} \overline{\varphi}_\infty(\ud{x}^n)=\overline{\varphi}_\infty(\ud{x})\,. $$ Conversely, suppose that $\overline{\varphi}_\infty \,{:}\; {\tt A}^{\Z_+}\rightarrow [0,1]$ is well-defined and continuous. Then, given $\delta>0$ and $\ud{x}\in{\tt A}^{\Z_+}$, $\exists n$ so that $$ 0\leq\sup\{\overline{\varphi}_\infty(\ud{x}^\prime)\,{:}\; x^\prime_j=x_j\;j=0,\ldots,n-1\}- \inf\{\overline{\varphi}_\infty(\ud{x}^\prime)\,{:}\; x^\prime_j=x_j\;j=0,\ldots,n-1\}\leq \delta\,. $$ We set $$ \ud{x}^{n,-}:=x_0\cdots x_{n-1}\ud{x}^-\quad\text{and}\quad \ud{x}^{n,+}:=x_0\cdots x_{n-1}\ud{x}^+\,. 
$$ For any $x\in X\backslash S$ we have the identity \eqref{identity}, $$ x=\varphi\big({\tt i}_0(x)+\varphi({\tt i}_1(x)+\ldots +\varphi({\tt i}_{n-1}(x)+T^{n}x))\big)= \overline{\varphi}_{n}({\tt i}_0(x),\ldots,{\tt i}_{n-1}(x)+T^{n}x)\,. $$ If $\delta({\tt i}_0(x)\cdots{\tt i}_{n-1}(x))=1$, then \begin{align*} \overline{\varphi}_\infty(\ud{x}^{n,-}):&= \overline{\varphi}_n( {\tt i}_0(x),\ldots,{\tt i}_{n-1}(x)+\overline{\varphi}_\infty(\ud{x}^-))\\ &=\overline{\varphi}_n( {\tt i}_0(x),\ldots,{\tt i}_{n-1}(x))\\ &\leq \overline{\varphi}_{n}({\tt i}_0(x),\ldots,{\tt i}_{n-1}(x)+T^{n}x)\\ &\leq \overline{\varphi}_{n}( {\tt i}_0(x),\ldots,{\tt i}_{n-1}(x)+1)\\ &=\overline{\varphi}_{n}( {\tt i}_0(x),\ldots,{\tt i}_{n-1}(x)+\overline{\varphi}_\infty(\ud{x}^+))=: \overline{\varphi}_\infty(\ud{x}^{n,+})\,. \end{align*} If $\delta({\tt i}_0(x)\cdots{\tt i}_{n-1}(x))=-1$, then the inequalities are reversed. Letting $n$ go to infinity, we get $\overline{\varphi}_\infty({\tt i}(x))=x$. \qed \begin{rem}\label{rem2.4} When the maps $f_0$ and $f_{k-1}$ are increasing, we can take $$ \ud{x}^+=(k-1,k-1,\ldots)\quad\text{and}\quad \ud{x}^{-}=(0,0,\ldots)\,. $$ \end{rem} \begin{thm}\label{thm2.2}{\rm \cite{P2}} A necessary and sufficient condition for a $\varphi$-expansion to be valid is that $S$ is dense in $[0,1]$. A sufficient condition is $\sup_t|\varphi^\prime(t)|<1$. \end{thm} \bigskip For each $j\in{\tt A}$ we define (the limits are taken with $x\in X\backslash S$) \begin{equation}\label{2.12} \ud{u}^j:=\lim_{x\downarrow a_{j}}{\tt i}(x)\quad\text{and}\quad \ud{v}^j :=\lim_{x\uparrow a_{j+1}}{\tt i}(x)\,. \end{equation} The strings $\ud{u}^j$ and $\ud{v}^j$ are called {\sf virtual itineraries}. Notice that $\underline{v}^j \prec\underline{u}^{j+1}$ since $v^j_0<u_0^{j+1}$.
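As a concrete illustration, consider the binary case $k=2$, $a_1=1/2$, $f_j(x)=2x-j$. Then $\ud{u}^0=(0)^\infty$ and $\ud{v}^1=(1)^\infty$, while $\ud{v}^0=0(1)^\infty$ and $\ud{u}^1=1(0)^\infty$: as $x\uparrow 1/2$ one has $Tx\uparrow 1$, so that all subsequent symbols are $1$, and as $x\downarrow 1/2$ one has $Tx\downarrow 0$. In particular $\ud{v}^0\prec\ud{u}^1$, as it must be.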
\begin{equation}\label{2.12b} \sigma^k\ud{u}^j=\sigma^k(\lim_{x\downarrow a_{j}}{\tt i}(x))= \lim_{x\downarrow a_{j}}\sigma^k{\tt i}(x)=\lim_{x\downarrow a_{j}}{\tt i}(T^kx)\qquad(x\in X\backslash S)\,. \end{equation} \begin{pro}\label{prof2.2} Suppose that $\ud{x}^\prime\in{\tt A}^{\Z_+}$ verifies $\ud{u}^{x_n^\prime}\prec \sigma^n\ud{x}^\prime\prec \ud{v}^{x_n^\prime}$ for all $n\geq 0$. Then there exists $x\in X\backslash S$ such that ${\tt i}(x)=\ud{x}^\prime$. \end{pro} \noindent Notice that we do not assume that the $\varphi$-expansion is valid or that the map $\overline{\varphi}_\infty$ is well-defined. For unimodal maps see e.g. Theorem II.3.8 in \cite{CoE}. Our proof is different. \medbreak\noindent{\bf Proof}:\enspace If $y_\uparrow(\ud{x}^\prime)<y_\downarrow(\ud{x}^\prime)$ are two cluster points, then this follows from Lemma \ref{lem2.4quatro}. Therefore, assume that $\lim_{n}\overline{\varphi}_n(\ud{x}^\prime)$ exists. Either there exists $m>1$ so that $y_\uparrow(\sigma^m\ud{x}^\prime)<y_\downarrow(\sigma^m\ud{x}^\prime)$ are two cluster points, or $\lim_{n}\overline{\varphi}_n(\sigma^m\ud{x}^\prime)$ exists for all $m\geq 1$. In the first case, there exists $z_m\in X\backslash S$, $$ y_\uparrow(\sigma^m\ud{x}^\prime)<z_m<y_\downarrow(\sigma^m\ud{x}^\prime) \quad\text{and}\quad {\tt i}(z_m)=\sigma^m\ud{x}^\prime\,. $$ Let $$ z_{m-1}:=\overline{\varphi}(x^\prime_{m-1}+z_m)\,. $$ We show that $a_{x^\prime_{m-1}}< z_{m-1}<a_{x^\prime_{m-1}+1}$. This implies that $x^\prime_{m-1}+z_m\in {\rm int(dom} \varphi)$ so that $$ \varphi^{-1}(z_{m-1})=x^\prime_{m-1}+Tz_{m-1}=x^\prime_{m-1}+z_m\,. $$ Suppose that $\delta(x^\prime_{m-1})=1$ and $a_{x^\prime_{m-1}}=z_{m-1}$. Then for any $y\in X\backslash S$, $y>a_{x^\prime_{m-1}}$, we have $Ty>z_{m}$.
Therefore, by Lemma \ref{lem2.4}, ${\tt i}(Ty)\succeq {\tt i}(z_m)=\sigma^m\ud{x}^\prime$; ${\tt i}_0(y)=x^\prime_{m-1}$ when $y$ is close to $a_{x^\prime_{m-1}}$, so that $$ \lim_{y\downarrow a_{x^\prime_{m-1}}}{\tt i}(y)=\ud{u}^{x^\prime_{m-1}}\succeq \sigma^{m-1}\ud{x}^\prime\,, $$ which is a contradiction. Similarly we exclude the cases $\delta(x^\prime_{m-1})=1$ and $a_{x^\prime_{m-1}+1}=z_{m-1}$, $\delta(x^\prime_{m-1})=-1$ and $a_{x^\prime_{m-1}}=z_{m-1}$, $\delta(x^\prime_{m-1})=-1$ and $a_{x^\prime_{m-1}+1}=z_{m-1}$. Iterating this argument we get the existence of $z_0\in X\backslash S$ with ${\tt i}(z_0)=\ud{x}^\prime$. In the second case, $\lim_{n}\overline{\varphi}_n(\sigma^m\ud{x}^\prime)$ exists for all $m\geq 1$. Let $x:=\lim_{n}\overline{\varphi}_n(\ud{x}^\prime)$. Suppose that $x^\prime_0=j$, so that $\ud{u}^{j}\prec\ud{x}^\prime\prec \ud{v}^{j}$. By Lemma \ref{lem2.4} and definition of $\ud{u}^{j}$ and $\ud{v}^{j}$ there exist $z_1,z_2\in I_j$ such that $$ z_1<x<z_2\quad\text{and}\quad\ud{u}^{j}\preceq{\tt i}(z_1)\prec \ud{x}^\prime\prec{\tt i}(z_2)\preceq \ud{v}^{j}\,. $$ Therefore $a_j< x<a_{j+1}$, ${\tt i}_0(x)=x_0^\prime$ and $Tx= \overline{\varphi}_\infty(\sigma\ud{x}^\prime)$ (Lemma \ref{lem2.4ter}). Iterating this argument we get $\ud{x}^\prime={\tt i}(x)$. \qed \begin{thm}\label{thm2.5} Suppose that the $\varphi$-expansion is valid. Then \begin{enumerate} \item $\Sigma:=\{{\tt i}(x)\in{\tt A}^{\Z_+}\,{:}\; x\in X\backslash S\}=\{\ud{x}\in{\tt A}^{\Z_+}\,{:}\; \ud{u}^{x_n}\prec \sigma^n\ud{x}\prec \ud{v}^{x_n}\quad\forall\,n\geq 0\}$. \item The map ${\tt i}\,{:}\; X\backslash S\rightarrow \Sigma$ is bijective, $\overline{\varphi}_\infty{\scriptstyle\circ}{\tt i}={\rm id}$ and ${\tt i}{\scriptstyle\circ}\overline{\varphi}_\infty={\rm id}$.\\ Both maps ${\tt i}$ and $\overline{\varphi}_\infty$ are order-preserving. 
\item $\sigma(\Sigma)=\Sigma$ and $\overline{\varphi}_\infty(\sigma\ud{x})=T\overline{\varphi}_\infty(\ud{x})$ if $\ud{x}\in\Sigma$. \item If $\ud{x}\in{\tt A}^{\Z_+}\backslash\Sigma$, then there exist $m\in\Z_+$ and $j\in{\tt A}$ such that $\overline{\varphi}_\infty(\sigma^m\ud{x})=a_j$. \item $\forall n\geq 0\,,\,\forall j\in{\tt A}\,{:}\; \quad\ud{u}^{u^j_n}\preceq \sigma^n\ud{u}^j\prec \ud{v}^{u^j_n}$ if $\delta(\ud{u}^j_0\cdots\ud{u}^j_{n-1})=1$ and $\ud{u}^{u^j_n}\prec \sigma^n\ud{u}^j\preceq \ud{v}^{u^j_n}$ if $\delta(\ud{u}^j_0\cdots\ud{u}^j_{n-1})=-1$. \item $\forall n\geq 0\,,\,\forall j\in{\tt A}\,{:}\; \quad\ud{u}^{v^j_n}\preceq \sigma^n\ud{v}^j\prec \ud{v}^{v^j_n}$ if $\delta(\ud{v}^j_0\cdots\ud{v}^j_{n-1})=-1$ and $\ud{u}^{v^j_n}\prec \sigma^n\ud{v}^j\preceq \ud{v}^{v^j_n}$ if $\delta(\ud{v}^j_0\cdots\ud{v}^j_{n-1})=1$. \end{enumerate} \end{thm} \medbreak\noindent{\bf Proof}:\enspace Let $x\in X\backslash S$. Clearly, by monotonicity, $$ \ud{u}^{{\tt i}_k(x)}\preceq \sigma^k{\tt i}(x)\preceq \ud{v}^{{\tt i}_k(x)}\quad \forall\;k\in\Z_+\,. $$ Suppose that there exist $x\in X\backslash S$ and $k$ such that $\sigma^k{\tt i}(x)= \ud{v}^{{\tt i}_k(x)}$. Since $(\sigma^k{\tt i}(x))_0={\tt i}_0(T^kx)$, we can assume, without loss of generality, that $k=0$ and ${\tt i}_0(x)=j$. Therefore $x\in (a_{j}, a_{j+1})$, and for all $y\in X\backslash S$, such that $x\leq y<a_{j+1}$, we have by Lemma \ref{lem2.4} that ${\tt i}(y)={\tt i}(x)=\ud{v}^{j}$. By Theorem \ref{thm2.1} this contradicts the hypothesis that the $\varphi$-expansion is valid. The other case, $\sigma^k{\tt i}(x)= \ud{u}^{{\tt i}_k(x)}$, is treated similarly. This proves half of the first statement. The second half is a consequence of Proposition \ref{prof2.2}. The second statement also follows, as well as the third, since $T(X\backslash S)=X\backslash S$ (we assume that \eqref{2.1} holds).
Let $\ud{x}\in{\tt A}^{\Z_+}\backslash\Sigma$ and $m\in\Z_+$ be the smallest integer such that one of the conditions defining $\Sigma$ is not verified. Then either $\sigma^m\ud{x}\preceq\underline{u}^{x_m}$, or $\sigma^m\ud{x}\succeq \underline{v}^{x_m}$. The map $\overline{\varphi}_\infty$ is continuous (Theorem \ref{thm2.1ter}). Hence, for any $j\in{\tt A}$, $$ \overline{\varphi}_\infty(\ud{u}^j)=a_{j}\quad\text{and}\quad \overline{\varphi}_\infty(\ud{v}^j)=a_{j+1}\,. $$ Let $\sigma^m\ud{x}\preceq\underline{u}^{x_m}$. Since $\ud{v}^{x_m-1}\prec\sigma^m\ud{x}$, $$ a_{x_m}=\overline{\varphi}_\infty(\ud{v}^{x_m-1}) \leq\overline{\varphi}_\infty(\sigma^m\ud{x})\leq \overline{\varphi}_\infty(\ud{u}^{x_m})=a_{x_m}\,. $$ The other case is treated in the same way. From definition \eqref{2.12} $\ud{u}^{u^j_n}\preceq \sigma^n\ud{u}^j\preceq \ud{v}^{u^j_n}$. Suppose that $\delta(\ud{u}^j_0\cdots\ud{u}^j_{n-1})=1$ and $\sigma^n\ud{u}^j=\ud{v}^{u^j_n}$. By continuity of the $\varphi$-code there exists $x\in X\backslash S$ such that $x>a_j$ and ${\tt i}_k(x)=\ud{u}^j_k$, $k=0,\ldots,n$. Let $a_j<y<x$. Since $\delta(\ud{u}^j_0\cdots\ud{u}^j_{n-1})=1$, $T^ny<T^nx$ and consequently $$ \lim_{y\downarrow a_j}{\tt i}(T^ny)=\sigma^n\ud{u}^j\preceq{\tt i}(T^nx)\preceq \ud{v}^{u^j_n}\,. $$ Hence $\sigma^n{\tt i}(x)= \ud{v}^{x_n}$, which is a contradiction. The other cases are treated similarly. \qed \subsection{Dynamical system $\beta x+\alpha\mod 1$}\label{subsection2.3} We consider the family of dynamical systems $\beta x+\alpha\mod 1$ with $\beta >1$ and $0\leq \alpha<1$. For given $\alpha$ and $\beta$, the dynamical system is described by $k=\lceil \alpha+\beta\rceil$ intervals $I_j$ and maps $f_j$, $$ I_0=\Big(0,\frac{1-\alpha}{\beta}\Big)\,,\,I_j=\Big(\frac{j-\alpha}{\beta}, \frac{j+1-\alpha}{\beta}\Big) \,,\,j=1,\ldots,k-2\,,\,I_{k-1}=\Big(\frac{k-1-\alpha}{\beta},1\Big) $$ and $$ f_j(x)=\beta x+\alpha-j\,,\,j=0,\ldots k-1\,. 
$$ The maps $T_{\alpha,\beta}$, $\varphi^{\alpha,\beta}$ and $\overline{\varphi}^{\alpha,\beta}$ are defined as in subsection \ref{subsection2.1}. $$ \overline{\varphi}^{\alpha,\beta}(x)=\begin{cases} 0 & \text{if $0\leq x\leq\alpha$}\\ \beta^{-1}(x-\alpha) & \text{if $\alpha\leq x\leq \alpha+\beta$} \\ 1 & \text{if $\alpha+\beta\leq x\leq \lceil \alpha+\beta\rceil$} \end{cases} $$ and \begin{equation}\label{eqso} S_0=\{a_j\,{:}\; j=1,\ldots,k-1\}\cup\{0,1\}\quad\text{with}\quad a_j:=\beta^{-1}(j-\alpha)\,. \end{equation} Since all maps are increasing the total order on ${\tt A}^{\Z_+}$ is the lexicographic order. We have $2k$ virtual orbits, but only two of them are important. Indeed, if we set $$ \ud{u}^{\alpha,\beta}:=\ud{u}^0\quad\text{and} \quad\ud{v}^{\alpha,\beta}:=\ud{v}^{k-1}\,, $$ then $$ \ud{u}^j=j\ud{u}^{\alpha,\beta}\,,\,j=1,\ldots k-1 $$ and $$ \ud{v}^j=j\ud{v}^{\alpha,\beta}\,,\,j=0,\ldots,k-2\,. $$ \begin{pro}\label{pro3.1} Let $\beta >1$ and $0\leq \alpha<1$. The $\varphi$-expansion for the dynamical system $\beta x+\alpha\mod1$ is valid. $$ \Sigma^{\alpha,\beta}:=\big\{{\tt i}(x)\in{\tt A}^{\Z_+}\,{:}\; x\in X\backslash S\big\}=\big\{\ud{x}\in{\tt A}^{\Z_+}\,{:}\; \ud{u}^{\alpha,\beta}\prec\sigma^n\ud{x}\prec\ud{v}^{\alpha,\beta}\quad\forall n\geq 0\big\}\,. $$ Moreover $$ \ud{u}^{\alpha,\beta}\preceq\sigma^n\ud{u}^{\alpha,\beta}\prec\ud{v}^{\alpha,\beta} \quad\text{and}\quad \ud{u}^{\alpha,\beta}\prec\sigma^n\ud{v}^{\alpha,\beta}\preceq\ud{v}^{\alpha,\beta} \quad\forall n\geq0\,. $$ \end{pro} \noindent The closure of $\Sigma^{\alpha,\beta}$ is the shift space \begin{equation}\label{3.2} \mathbf\Sigma(\ud{u}^{\alpha,\beta},\ud{v}^{\alpha,\beta}):=\big\{\ud{x}\in{\tt A}^{\Z_+}\,{:}\; \ud{u}^{\alpha,\beta}\preceq\sigma^n\ud{x}\preceq\ud{v}^{\alpha,\beta}\quad\forall n\geq 0\big\}\,. \end{equation} We define the {\sf orbits of $0$, resp. 
$1$} as, (the limits are taken with $x\in X\backslash S$) $$ T^k_{\alpha,\beta}(0):=\lim_{x\downarrow 0}T_{\alpha,\beta}^k(x)\,,\,k\geq 0\quad\text{resp.}\quad T^k_{\alpha,\beta}(1):=\lim_{x\uparrow 1}T_{\alpha,\beta}^k(x)\,,\,k\geq 0\,. $$ From \eqref{2.12} and \eqref{2.12b} the coding of these orbits is $\ud{u}^{\alpha,\beta}$, resp. $\ud{v}^{\alpha,\beta}$, \begin{equation}\label{virtual} \sigma^k\ud{u}^{\alpha,\beta}=\lim_{x\downarrow 0}{\tt i}(T_{\alpha,\beta}^k(x))\quad\text{and}\quad \sigma^k\ud{v}^{\alpha,\beta}=\lim_{x\uparrow 1}{\tt i}(T_{\alpha,\beta}^k(x))\,. \end{equation} Notice that $T^k_{\alpha,\beta}(0)<1$ and $T^k_{\alpha,\beta}(1)>0$ for all $k\geq 0$. The virtual itineraries $\ud{u}\equiv\ud{u}^{\alpha,\beta}$ and $\ud{v}\equiv\ud{v}^{\alpha,\beta}$ of the dynamical system $\beta x+\alpha \mod 1$ verify the conditions \begin{equation}\label{condition} \ud{u}\preceq\sigma^n\ud{u}\preceq \ud{v}\quad \forall\,n\geq 0\quad\text{and}\quad \ud{u}\preceq \sigma^n\ud{v}\preceq \ud{v}\quad \forall\,n\geq 0\,. \end{equation} By Theorem \ref{thm2.1ter}, \eqref{virtual} and Theorem \ref{thm2.5} we have $(x\in X\backslash S)$ \begin{align} \overline{\varphi}_\infty^{\alpha,\beta}(\sigma^k\ud{u})&=\lim_{x\downarrow 0}\overline{\varphi}_\infty^{\alpha,\beta}({\tt i}(T^k_{\alpha,\beta}(x)))=\lim_{x\downarrow 0}T^k_{\alpha,\beta}(x)\equiv T^k_{\alpha,\beta}(0) \label{validity1}\\ \overline{\varphi}_\infty^{\alpha,\beta}(\sigma^k\ud{v})&=\lim_{x\uparrow 1}\overline{\varphi}_\infty^{\alpha,\beta}({\tt i}(T^k_{\alpha,\beta}(x)))=\lim_{x\uparrow 1}T^k_{\alpha,\beta}(x)\equiv T^k_{\alpha,\beta}(1)\,. \label{validity2} \end{align} Hence $\ud{u}$ and $\ud{v}$ verify the equations\footnote{If the $\varphi$-expansion is not valid, which happens when $\beta=1$ and $\alpha\in\Q$, then \eqref{validity1} and \eqref{validity2} are not necessarily true, as simple examples show.
Hence $\ud{u}^{\alpha,\beta}$ and $\ud{v}^{\alpha,\beta}$ do not necessarily verify \eqref{equation}.} \begin{equation}\label{equation} \overline{\varphi}_\infty^{\alpha,\beta}(\ud{u})=0\,,\; \overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{u})=\alpha\quad\text{and}\quad \overline{\varphi}_\infty^{\alpha,\beta}(\ud{v})=1\,,\; \overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{v})=\gamma\,, \end{equation} with \begin{equation}\label{gamma} \gamma:=\alpha+\beta-k+1\in(0,1]\,. \end{equation} The strings $\ud{u}^{\alpha,\beta}$ and $\ud{v}^{\alpha,\beta}$ are $\varphi$-expansions of $0$ and $1$. Because of the presence of discontinuities of the transformation $T_{\alpha,\beta}$ at $a_1,\ldots,a_{k-1}$, there are other strings $\ud{u}$, $\ud{v}$ which verify \eqref{condition} and \eqref{equation}, and which are also $\varphi$-expansions of $0$ and $1$. For later purposes we need to describe these strings; this is the content of Proposition \ref{pro3.4}, Proposition \ref{pro3.4bis} and Proposition \ref{pro3.4ter}. We also take into consideration the borderline cases $\alpha=1$ and $\gamma=0$. When $\alpha=1$ or $\gamma=0$ the dynamical system $T_{\alpha,\beta}$ is defined using formula \eqref{2.7}. The orbits of $0$ and $1$ are defined as before. For example, if $\alpha=1$ it is the same dynamical system as $T_{0,\beta}$, but with different symbols for the coding of the orbits. The orbit of $0$ is coded by $\ud{u}^{1,\beta}=(1)^\infty$, that is $\ud{u}^{1,\beta}_j=1$ for all $j\geq 0$. Similarly, if $\gamma=0$ the orbit of $1$ is coded by $\ud{v}^{\alpha,\beta}=(k-2)^\infty$. {\em We always assume that $\alpha\in[0,1]$, $\gamma\in[0,1]$ and $\beta\geq 1$}. \begin{lem}\label{lemelementary} The equation $$ y=\overline{\varphi}^{\alpha,\beta}(x_k+t)\,, \quad y\in[0,1] $$ can be solved uniquely if $y\not\in S_0$, and its solution is $x_k={\tt i}_0(y)$ and $t=T_{\alpha,\beta}(y)\in(0,1)$.
\\ If $y<y^\prime$, then the solutions of the equations $$ y=\overline{\varphi}^{\alpha,\beta}(x_k+t)\quad\text{and}\quad y^\prime=\overline{\varphi}^{\alpha,\beta}(x^\prime_k+t^\prime) $$ are such that either $x_k=x^\prime_k$ and $T_{\alpha,\beta}(y^\prime)-T_{\alpha,\beta}(y)=\beta(y^\prime-y)$, or $x_k<x^\prime_k$. \end{lem} \medbreak\noindent{\bf Proof}:\enspace The proof is elementary. It suffices to notice that $$ y\not\in S_0\implies y= \varphi^{\alpha,\beta}(x_k+t)\,. $$ The second statement follows by monotonicity. \qed \begin{pro}\label{pro3.4} Let $0\leq\alpha< 1$ and assume that the $\varphi$-expansion is valid. The following assertions are equivalent.\\ 1)\, There is a unique solution ($\ud{u}=\ud{u}^{\alpha,\beta}$) of the equations \begin{equation}\label{eqalpha} \overline{\varphi}_\infty^{\alpha,\beta}(\ud{u})=0\quad\text{and}\quad\overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{u})=\alpha\,. \end{equation} 2)\, The orbit of $0$ is not periodic or $x=0$ is a fixed point of $T_{\alpha,\beta}$.\\ 3)\, $\ud{u}^{\alpha,\beta}$ is not periodic or $\ud{u}^{\alpha,\beta}=\ud{0}$, where $\ud{0}$ is the string $\ud{x}$ with $x_j=0$ $\forall j\geq 0$. \end{pro} \begin{pro}\label{pro3.4bis} Let $0<\gamma\leq 1$ and assume that the $\varphi$-expansion is valid. The following assertions are equivalent.\\ 1)\, There is a unique solution ($\ud{v}=\ud{v}^{\alpha,\beta}$) of the equations \begin{equation}\label{eqbeta} \overline{\varphi}_\infty^{\alpha,\beta}(\ud{v})=1\quad\text{and}\quad\overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{v})=\gamma\,. \end{equation} 2)\, The orbit of $1$ is not periodic or $x=1$ is a fixed point of $T_{\alpha,\beta}$.\\ 3)\, $\ud{v}^{\alpha,\beta}$ is not periodic or $\ud{v}^{\alpha,\beta}=(k-1)^\infty$. \end{pro} \medbreak\noindent{\bf Proof}:\enspace We prove Proposition \ref{pro3.4}. Assume 1. The validity of the $\varphi$-expansion implies that $\ud{u}^{\alpha,\beta}$ is a solution of \eqref{eqalpha}. 
If $\alpha=0$, then $\ud{u}^{0,\beta}=\ud{0}$ is the only solution of \eqref{eqalpha} since $\ud{x}\not=\ud{0}$ implies $\overline{\varphi}_\infty^{0,\beta}(\ud{x})>0$ and $x=0$ is a fixed point of $T_{0,\beta}$. Let $0<\alpha<1$. Using Lemma \ref{lemelementary} we deduce that $u_0=0$ and $$ \alpha=T_{\alpha,\beta}(0)=\overline{\varphi}^{\alpha,\beta}(u_1+\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^2\ud{u}))\,. $$ If $\alpha=a_j$, $j=1,\ldots,k-1$ (see \eqref{eqso}), then \eqref{eqalpha} has at least two solutions, which are $0j(\sigma^2\ud{u}^{\alpha,\beta})$ with $\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^2\ud{u}^{\alpha,\beta})=T^2(0)=0$ (see \eqref{validity1}), and $0(j-1)\ud{v}^{\alpha,\beta}$ with $\overline{\varphi}_\infty^{\alpha,\beta}(\ud{v}^{\alpha,\beta})=1$. Therefore, by our hypothesis we have $\alpha\not\in \{a_1,\ldots,a_{k-1}\}$, $u_1=u_1^{\alpha,\beta}$ and $\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^2\ud{u}^{\alpha,\beta})=T^2(0)\in (0,1)$. Iterating this argument we conclude that $1\implies 2$.\\ Assume 2. If $x=0$ is a fixed point, then $\alpha=0$ and $\ud{u}^{0,\beta}=\ud{0}$. If the orbit of $0$ is not periodic, \eqref{virtual} and the validity of the $\varphi$-expansion imply $$ \sigma^k\ud{u}^{\alpha,\beta}=\lim_{x\downarrow 0}{\tt i}(T^k_{\alpha,\beta}(x))\succ\lim_{x\downarrow 0}{\tt i}(x)=\ud{u}^{\alpha,\beta}\,. $$ Assume 3. From \eqref{validity1} and the validity of the $\varphi$-expansion we get $$ \overline{\varphi}_\infty^{\alpha,\beta}(\sigma^k\ud{u}^{\alpha,\beta})=T^k_{\alpha,\beta}(0)> \overline{\varphi}_\infty^{\alpha,\beta}(\ud{u}^{\alpha,\beta})=0\,, $$ so that the orbit of $0$ is not periodic. The orbit of $0$ is not periodic if and only if $T^k_{{\alpha,\beta}}(0)\not\in\{a_1,\ldots,a_{k-1}\}$ for all $k\geq 1$. Using Lemma \ref{lemelementary} we conclude that \eqref{eqalpha} has a unique solution. 
\qed Propositions \ref{pro3.4} and \ref{pro3.4bis} give necessary and sufficient conditions for the existence and uniqueness of the solution of equations \eqref{equation}. In the following discussion we consider the case when there are several solutions. The main results are summarized in Proposition \ref{pro3.4ter}. We assume the validity of the $\varphi$-expansion. Suppose first that the orbit of $1$ is not periodic and that the orbit of $0$ is periodic, with minimal period $p:=\min\{k\,{:}\; T^k(0)=0\}>1$. Hence $0<\gamma<1$ and $0<\alpha<1$. Let $\ud{u}$ be a solution of equations \eqref{eqalpha} and suppose furthermore that $\ud{w}$ is a $\varphi$-expansion of $1$ such that $$ \forall n\,{:}\;\; \ud{u}\preceq\sigma^n\ud{u}\preceq \ud{w}\quad\text{with}\quad \overline{\varphi}_\infty^{\alpha,\beta}(\ud{w})=1\;,\; \overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{w})\leq\gamma\,. $$ By Lemma \ref{lemelementary} we conclude that $$ u_j=u_j^{\alpha,\beta}\quad\text{and}\quad T_{\alpha,\beta}^{j+1}(0)=\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^{j+1}\ud{u})\,,\quad j=1,\ldots,p-2\,. $$ Since $T^p(0)=0$, we have $T^{p-1}(0)\in \{a_1,\ldots,a_{k-1}\}$, and the equation $$ T_{\alpha,\beta}^{p-1}(0)=\overline{\varphi}_\infty^{\alpha,\beta}\big(u_{p-1}+\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^{p}\ud{u})\big) $$ has two solutions. Either $u_{p-1}=u_{p-1}^{\alpha,\beta}$ and $\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^{p}\ud{u})=0$ or $u_{p-1}=u_{p-1}^{\alpha,\beta}-1$ and $\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^{p}\ud{u})=1$. Let $\ud{a}$ be the prefix of $\ud{u}^{\alpha,\beta}$ of length $p$ and $\ud{a}^\prime$ the word of length $p$ obtained by changing the last letter of $\ud{a}$ into\footnote{$u_{p-1}^{\alpha,\beta}\geq 1$. $u_{p-1}^{\alpha,\beta}=0$ if and only if $p=1$ and $\alpha=0$.} $u_{p-1}^{\alpha,\beta}-1$. We have $\ud{a}^\prime<\ud{a}$. If $u_{p-1}=u_{p-1}^{\alpha,\beta}$, then we can again determine uniquely the next $p-1$ letters $u_i$.
The condition $\ud{u}\preceq\sigma^k\ud{u}$ for $k=p$ implies that we have $u_{2p-1}=u_{p-1}^{\alpha,\beta}$ so that, by iteration, we get the solution $\ud{u}=\ud{u}^{\alpha,\beta}$ for the equations \eqref{eqalpha}. If $u_{p-1}=u_{p-1}^{\alpha,\beta}-1$, then $$ 1=\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^{p}\ud{u})=\overline{\varphi}_\infty^{\alpha,\beta}\big(u_p+\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^{p+1}\ud{u})\big)\,. $$ When $\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^{p}\ud{u})=1$, by our hypothesis on $\ud{u}$ we also have $\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^{p+1}\ud{u})=\gamma$. By Proposition \ref{pro3.4bis} the equations $$ \overline{\varphi}_\infty^{\alpha,\beta}(\sigma^{p}\ud{u})=1\quad\text{and}\quad \overline{\varphi}_\infty^{\alpha,\beta}(\sigma^{p+1}\ud{u})=\gamma $$ have a unique solution, since we assume that the orbit of $1$ is not periodic. The solution is $\sigma^{p}\ud{u}=\ud{v}^{\alpha,\beta}$, so that $\ud{u}=\ud{a}^\prime\ud{v}^{\alpha,\beta}\prec\ud{u}^{\alpha,\beta}$ is also a solution of \eqref{eqalpha}. In that case there is no other solution for \eqref{eqalpha}. The borderline case $\alpha=1$ corresponds to the periodic orbit of the fixed point $0$, $\ud{u}^{1,\beta}=(1)^\infty$. Notice that $\overline{\varphi}_\infty^{1,\beta}(\sigma\ud{u}^{1,\beta})\not=1$. We can also consider $\overline{\varphi}_\infty^{1,\beta}$-expansions of $0$ with $u_0=0$ and $\overline{\varphi}_\infty^{1,\beta}(\sigma\ud{u})=1$. Our hypothesis on $\ud{u}$ implies that $\overline{\varphi}_\infty^{1,\beta}(\sigma^2\ud{u})=\gamma$. Hence, $\ud{u}=0\ud{v}^{1,\beta}=\ud{a}^\prime\ud{v}^{\alpha,\beta}\prec\ud{u}^{\alpha,\beta}$ is a solution of \eqref{eqalpha} and a $\overline{\varphi}_\infty^{1,\beta}$-expansion of $0$. We can treat similarly the case when $\ud{u}^{\alpha,\beta}$ is not periodic, but $\ud{v}^{\alpha,\beta}$ is periodic.
When both $\ud{u}^{\alpha,\beta}$ and $\ud{v}^{\alpha,\beta}$ are periodic we have more solutions, but the discussion is similar. Assume that $\ud{u}^{\alpha,\beta}$ has (minimal) period $p>1$ and $\ud{v}^{\alpha,\beta}$ has (minimal) period $q>1$. Define $\ud{a}$, $\ud{a}^\prime$ as before, $\ud{b}$ as the prefix of length $q$ of $\ud{v}^{\alpha,\beta}$, and $\ud{b}^\prime$ as the word of length $q$ obtained by changing the last letter of $\ud{b}$ into $v_{q-1}^{\alpha,\beta}+1$. When $0<\alpha<1$ and $0<\gamma<1$, one shows as above that the elements $\ud{u}\not=\ud{u}^{\alpha,\beta}$ and $\ud{v}\not=\ud{v}^{\alpha,\beta}$ which are $\overline{\varphi}^{\alpha,\beta}$-expansions of $0$ and $1$ are of the form $$ \ud{u}=\ud{a}^\prime\ud{b}^{n_1}\ud{b}^\prime\ud{a}^{n_2}\cdots\,,\,n_i\geq 0 \quad\text{and}\quad \ud{v}=\ud{b}^\prime\ud{a}^{m_1}\ud{a}^\prime\ud{b}^{m_2}\cdots\,,\,m_i\geq 0 \,. $$ The integers $n_i$ and $m_i$ must be such that \eqref{condition} is verified. The largest solution of \eqref{eqalpha} is $\ud{u}^{\alpha,\beta}$ and the smallest one is $\ud{a}^\prime\ud{v}^{\alpha,\beta}$. \begin{pro}\label{pro3.4ter} Assume that the $\varphi$-expansion is valid.\\ 1)\, Let $\ud{u}$ be a solution of \eqref{eqalpha}, such that $\ud{u}\preceq\sigma^n\ud{u}$ for all $n\geq 1$, and let $\ud{v}$ be a solution of \eqref{eqbeta}, such that $\sigma^n\ud{v}\preceq\ud{v}$ for all $n\geq 1$. Then $$ \ud{u}\preceq\ud{u}^{\alpha,\beta}\quad\text{and}\quad \ud{v}^{\alpha,\beta}\preceq\ud{v}\,. $$ 2)\, Let $\ud{u}$ be a solution of \eqref{eqalpha}, and let $\ud{u}^{\alpha,\beta}=(\ud{a})^\infty$ be periodic with minimal period $p>1$, and suppose that there exists $\ud{w}$ such that $$ \forall n\,{:}\;\; \ud{u}\preceq\sigma^n\ud{u}\preceq \ud{w}\quad\text{with}\quad \overline{\varphi}_\infty^{\alpha,\beta}(\ud{w})=1\;,\; \overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{w})\leq\gamma\,. 
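The virtual itineraries $\ud{u}^{\alpha,\beta}$ and $\ud{v}^{\alpha,\beta}$ defined through the one-sided limits \eqref{virtual} can be approximated numerically. The sketch below (Python, illustrative only, not part of the formal development) assumes the standard digit map ${\tt i}_0(x)=\lfloor \beta x+\alpha\rfloor$ for $T_{\alpha,\beta}(x)=\beta x+\alpha \bmod 1$ with $k=\lceil\alpha+\beta\rceil$ (so $\gamma>0$), and emulates the limits $x\downarrow 0$ and $x\uparrow 1$ with a small $\varepsilon>0$; the digits are reliable only while $\varepsilon\beta^n$ stays small.

```python
import math

def virtual_itineraries(alpha, beta, n, eps=1e-9):
    """Approximate the first n digits of u^{alpha,beta} (coding of the
    orbit of 0 from the right) and v^{alpha,beta} (coding of the orbit
    of 1 from the left) for T(x) = beta*x + alpha mod 1.
    Assumed digit map: floor(beta*x + alpha), capped at k-1 with
    k = ceil(alpha + beta)."""
    k = int(math.ceil(alpha + beta))
    def coding(x):
        digits = []
        for _ in range(n):
            y = beta * x + alpha
            d = min(int(math.floor(y)), k - 1)  # cap guards the endpoint y = k
            digits.append(d)
            x = y - d                           # this is T_{alpha,beta}(x)
        return digits
    return coding(eps), coding(1.0 - eps)

# Example: alpha = 0, beta = golden ratio.  Then u = (0)^infty, and v is
# the classical quasi-greedy expansion of 1, namely (10)^infty.
u, v = virtual_itineraries(0.0, (1 + 5 ** 0.5) / 2, 8)
```

For $\alpha=0$ and $\beta$ the golden ratio this reproduces $\ud{u}=(0)^\infty$ and $\ud{v}=(10)^\infty$, in agreement with \eqref{condition}.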
$$ Then \begin{equation}\label{3.3.3} \ud{u}^{{\alpha,\beta}}_*\preceq\ud{u}\preceq\ud{u}^{\alpha,\beta}\quad\text{where}\quad \ud{u}^{{\alpha,\beta}}_*:=\ud{a}^\prime\ud{v}^{\alpha,\beta}\;\text{and}\; \ud{a}^\prime:=({\tt p}\ud{a})(a_{p-1}-1)\,. \end{equation} Moreover, $\ud{u}=\ud{u}^{\alpha,\beta}$ $\iff$ $\ud{a}$ is a prefix of $\ud{u}$ $\iff$ $\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^p\ud{u})<1$. \\ 3)\, Let $\ud{v}$ be a solution of \eqref{eqbeta}, and let $\ud{v}^{\alpha,\beta}=(\ud{b})^\infty$ be periodic with minimal period $q>1$, and suppose that there exists $\ud{w}$ such that $$ \forall n\,{:}\;\; \ud{w}\preceq\sigma^n\ud{v}\preceq \ud{v}\quad\text{with}\quad \overline{\varphi}_\infty^{\alpha,\beta}(\ud{w})=0\;,\; \overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{w})\geq\alpha\,. $$ Then \begin{equation}\label{3.3.4} \ud{v}^{\alpha,\beta}\preceq\ud{v}\preceq \ud{v}^{{\alpha,\beta}}_*\quad\text{where}\quad \ud{v}^{{\alpha,\beta}}_*:=\ud{b}^\prime\ud{u}^{\alpha,\beta}\;\text{and}\; \ud{b}^\prime:=({\tt p}\ud{b})(b_{q-1}+1)\,. \end{equation} Moreover, $\ud{v}=\ud{v}^{\alpha,\beta}$ $\iff$ $\ud{b}$ is a prefix of $\ud{v}$ $\iff$ $\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^q\ud{v})>0$. \end{pro} \section{Shift space ${\mathbf\Sigma}(\ud{u},\ud{v})$}\label{section3} \setcounter{equation}{0} Let $\ud{u}\in{\tt A}^{\Z_+}$ and $\ud{v}\in{\tt A}^{\Z_+}$, such that $u_0=0$, $v_0=k-1$ ($k\geq 2$) and \eqref{condition} holds. These assumptions are valid for the whole section, except subsection \ref{subsectionalgo}. We study the shift-space \begin{equation}\label{3.2.2} \mathbf\Sigma(\ud{u},\ud{v}):=\{\ud{x}\in{\tt A}^{\Z_+}\,{:}\; \ud{u}\preceq \sigma^n\ud{x}\preceq \ud{v}\quad \forall\,n\geq 0\}\,. \end{equation} It is useful to extend the relation $\prec$ to words or to words and strings. We do it only in the following case. Let $\ud{a}$ and $\ud{b}$ be words (or strings). 
Then $$ \ud{a}\prec\ud{b}\quad\text{iff $\exists$ $\ud{c}\in{\tt A}^*$, $\exists$ $k\geq 0$ such that $\ud{a}=\ud{c}a_k\cdots$, $\ud{b}=\ud{c}b_k\cdots$ and $a_k<b_k$.} $$ If $\ud{a}\prec\ud{b}$, then $\ud{a}$ is not a prefix of $\ud{b}$ and $\ud{b}$ is not a prefix of $\ud{a}$. In subsection \ref{subsectionfollower} we introduce one of the main tools for studying the shift-space $\mathbf\Sigma(\ud{u},\ud{v})$, the follower-set graph. In subsection \ref{subsectionalgo} we give an algorithm which assigns to a pair of strings $(\ud{u},\ud{v})$ a pair of real numbers $(\bar{\alpha},\bar{\beta})\in [0,1]\times[1,\infty)$. Finally in subsection \ref{topological} we compute the topological entropy of the shift space $\mathbf\Sigma(\ud{u},\ud{v})$. \subsection{Follower-set graph $\cG(\ud{u},\ud{v})$}\label{subsectionfollower} We associate to $\mathbf\Sigma(\ud{u},\ud{v})$ a graph $\cG(\ud{u},\ud{v})$, called the {\sf follower-set graph} (see \cite{LiM}), as well as an equivalent graph $\overline{\cG}(\ud{u},\ud{v})$. The graph $\overline{\cG}(\ud{u},\ud{v})$ has been systematically studied by Hofbauer in his works about piecewise monotone one-dimensional dynamical systems; see \cite{Ho1}, \cite{Ho2} and \cite{Ho3} in the context of this paper, as well as \cite{Ke} and \cite{BrBr}. Our presentation differs from that of Hofbauer, but several proofs are directly inspired by \cite{Ho2} and \cite{Ho3}. We denote by $\cL(\ud{u},\ud{v})$ the {\sf language of} $\mathbf\Sigma(\ud{u},\ud{v})$, that is the set of words which are factors of $\ud{x}\in \mathbf\Sigma(\ud{u},\ud{v})$ (including the empty word $\epsilon$). Since $\sigma\mathbf\Sigma(\ud{u},\ud{v})\subset \mathbf\Sigma(\ud{u},\ud{v})$, the language is also the set of prefixes of the strings $\ud{x}\in \mathbf\Sigma(\ud{u},\ud{v})$. To simplify the notation we set in this subsection $\mathbf\Sigma:=\mathbf\Sigma(\ud{u},\ud{v})$, $\cL:=\cL(\ud{u},\ud{v})$, $\cG:=\cG(\ud{u},\ud{v})$.
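Membership in $\mathbf\Sigma(\ud{u},\ud{v})$ can be tested on finite prefixes, since $\preceq$ restricted to blocks of equal length is the usual lexicographic order. The following sketch (Python, ours, with hypothetical names) checks the defining condition \eqref{3.2.2} up to a finite comparison depth, so a positive answer is only a necessary condition for membership, not a proof of it.

```python
def in_sigma(x, u, v, depth):
    """Finite-depth test of u <= sigma^n x <= v for all shifts n, where
    x, u, v are lists of digits (finite prefixes of the strings) and <=
    is lexicographic on blocks of length `depth`.  Comparisons that are
    undecided within `depth` letters are counted as satisfied."""
    lo, hi = u[:depth], v[:depth]
    for n in range(len(x) - depth + 1):
        w = x[n:n + depth]
        if w < lo or w > hi:   # Python compares lists lexicographically
            return False
    return True

# Golden-mean example: u = (0)^infty, v = (10)^infty, so Sigma(u,v) is
# the set of 0-1 strings with no factor 11.
ok = in_sigma([0, 1, 0, 0, 1, 0, 1, 0], [0] * 8, [1, 0] * 4, 2)
bad = in_sigma([0, 1, 1, 0, 0, 0, 0, 0], [0] * 8, [1, 0] * 4, 2)
```

Here `ok` is `True` while `bad` is `False`, since the second string contains the forbidden factor $11$, i.e. a shift exceeding $\ud{v}=(10)^\infty$.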
Let $\cC_u$ be the set of words $\ud{w}\in\cL$ such that $$ \ud{w}=\begin{cases} \text{$\ud{w}^\prime$\,{:}\; $\ud{w}^\prime\not=\epsilon$, $\ud{w}^\prime$ is a prefix of $\ud{u}$}\\ \text{$w_0\ud{w}^\prime$\,{:}\; $w_0\not=u_0$, $\ud{w}^\prime$ is a prefix of $\ud{u}$, possibly $\epsilon$.} \end{cases} $$ Similarly we introduce $\cC_v$ as the set of words $\ud{w}\in\cL$ such that $$ \ud{w}=\begin{cases} \text{$\ud{w}^\prime$\,{:}\; $\ud{w}^\prime\not=\epsilon$, $\ud{w}^\prime$ is a prefix of $\ud{v}$}\\ \text{$w_0\ud{w}^\prime$\,{:}\; $w_0\not=v_0$, $\ud{w}^\prime$ is a prefix of $\ud{v}$, possibly $\epsilon$.} \end{cases} $$ \begin{defn}\label{defn3.1} Let $\ud{w}\in\cL$. The longest suffix of $\ud{w}$, which is a prefix of $\ud{v}$, is denoted by $v(\ud{w})$. The longest suffix of $\ud{w}$, which is a prefix of $\ud{u}$, is denoted by $u(\ud{w})$. The {\sf $u$-parsing of $\ud{w}$} is the following decomposition of $\ud{w}$ into $\ud{w}=\ud{a}^1\cdots\ud{a}^k$ with $\ud{a}^j\in\cC_u$. The first word $\ud{a}^1$ is the longest prefix of $\ud{w}$ belonging to $\cC_u$. If $\ud{w}=\ud{a}^1\ud{w}^\prime$ and $\ud{w}^\prime\not=\epsilon$, then the next word $\ud{a}^2$ is the longest prefix of $\ud{w}^\prime$ belonging to $\cC_u$ and so on. The {\sf $v$-parsing of $\ud{w}$} is the analogous decomposition of $\ud{w}$ into $\ud{w}=\ud{b}^1\cdots\ud{b}^\ell$ with $\ud{b}^j\in\cC_v$. \end{defn} \begin{lem}\label{lem3.2.0} Let $\ud{w}\ud{c}$ and $\ud{c}\ud{w}^\prime$ be prefixes of $\ud{u}$ (respectively of $\ud{v}$). If $\ud{w}\ud{c}\ud{w}^\prime\in\cL$, then $\ud{w}\ud{c}\ud{w}^\prime$ is a prefix of $\ud{u}$ (respectively of $\ud{v}$). Let $\ud{w}\in\cL$. If $\ud{a}^1\cdots\ud{a}^k$ is the $u$-parsing of $\ud{w}$, then only the first word $\ud{a}^1$ can be a prefix of $\ud{u}$, otherwise $u(\ud{a}^j)={\tt s}\ud{a}^j$. Moreover $u(\ud{a}^k)=u(\ud{w})$. Analogous properties hold for the $v$-parsing of $\ud{w}$. 
\end{lem} \medbreak\noindent{\bf Proof}:\enspace Suppose that $\ud{w}\ud{c}$ and $\ud{c}\ud{w}^\prime$ are prefixes of $\ud{u}$. Then $\ud{w}$ is a prefix of $\ud{u}$. Assume that $\ud{w}\ud{c}\ud{w}^\prime\in\cL$ is not a prefix of $\ud{u}$. Then $\ud{u}\prec\ud{w}\ud{c}\ud{w}^\prime$. Since $\ud{w}$ is a prefix of $\ud{u}$, $\sigma^{|\ud{w}|}\ud{u}\prec\ud{c}\ud{w}^\prime$. This contradicts the fact that $\ud{c}\ud{w}^\prime$ is a prefix of $\ud{u}$. By applying this result with $\ud{c}=\epsilon$ we see that only the first word in the $u$-parsing of $\ud{w}$ can be a prefix of $\ud{u}$. Suppose that the $u$-parsing of $\ud{w}$ is $\ud{a}^1\cdots\ud{a}^k$. Let $k\geq 2$ and assume that $u(\ud{w})$ is not a suffix of $\ud{a}^k$ (the case $k=1$ is obvious). Since $\ud{a}^k$ is not a prefix of $\ud{u}$, $u(\ud{w})$ has $\ud{a}^k$ as a proper suffix. By the first part of the lemma this contradicts the maximality property of the words in the $u$-parsing. \qed \begin{lem}\label{lem3.2.1} Let $\ud{w}\in\cL$. Let $p=|u(\ud{w})|$ and $q=|v(\ud{w})|$. Then $$ \big\{\ud{x}\in\mathbf\Sigma\,{:}\; \text{$\ud{w}$ is a prefix of $\ud{x}$}\big\}= \big\{\ud{x}\in {\tt A}^{\Z_+}\,{:}\; \ud{x}=\ud{w}\ud{y}\,,\;\ud{y}\in\mathbf\Sigma\,,\; \sigma^p\ud{u}\preceq\ud{y}\preceq\sigma^q\ud{v}\big\}\,. $$ Moreover, $$ \big\{\ud{y}\in\mathbf\Sigma\,{:}\; \ud{w}\ud{y}\in\mathbf\Sigma\big\}=\big\{\ud{y}\in\mathbf\Sigma\,{:}\; u(\ud{w})\ud{y}\in\mathbf\Sigma\big\}\quad\text{if $p>q$} $$ $$ \;\big\{\ud{y}\in\mathbf\Sigma\,{:}\; \ud{w}\ud{y}\in\mathbf\Sigma\big\}=\big\{\ud{y}\in\mathbf\Sigma\,{:}\; v(\ud{w})\ud{y}\in\mathbf\Sigma\big\}\quad\text{if $q>p$.} $$ \end{lem} \medbreak\noindent{\bf Proof}:\enspace Suppose that $\ud{x}\in\mathbf\Sigma$ and $\ud{w}$, $|\ud{w}|=n$, is a prefix of $\ud{x}$. Let $n\geq 1$ (the case $n=0$ is trivial). We can write $\ud{x}=\ud{w}\ud{y}$.
Since $\ud{x}\in\mathbf\Sigma$, $$ \ud{u}\preceq \sigma^{\ell+n}\ud{x}\preceq\ud{v}\quad\forall\,\ell\geq 0\,, $$ so that $\ud{y}\in\mathbf\Sigma$. We have $$ \ud{u}\preceq \sigma^{n-p}\ud{x}= u(\ud{w})\ud{y}\,. $$ Since $u(\ud{w})$ is a prefix of $\ud{u}$ of length $p$, we get $\sigma^p\ud{u}\preceq \ud{y}$. Similarly we prove that $\ud{y}\preceq\sigma^q\ud{v}$. Suppose that $\ud{x}=\ud{w}\ud{y}$, $\ud{y}\in\mathbf\Sigma$ and $\sigma^p\ud{u}\preceq\ud{y}\preceq\sigma^q\ud{v}$. To prove that $\ud{x}\in\mathbf\Sigma$, it is sufficient to prove that $\ud{u}\preceq\sigma^m\ud{x}\preceq \ud{v}$ for $m=0,\ldots, n-1$. We prove $\ud{u}\preceq\sigma^m\ud{x}$ for $m=0,\ldots, n-1$. The other case is similar. Let $\ud{w}=\ud{a}^1\cdots\ud{a}^\ell$ be the $u$-parsing of $\ud{w}$, $|\ud{w}|=n$ and $p=|u(\ud{w})|$. We have $$ \sigma^p\ud{u}\preceq\ud{y}\implies \ud{u}\preceq \sigma^j\ud{u}\preceq \sigma^ju(\ud{w})\ud{y}\quad\forall \,j=0,\ldots,p\,. $$ If $\ud{a}^\ell$ is not a prefix of $\ud{u}$, then $p=n-1$ and we also have $\ud{u}\preceq \ud{a}^\ell\ud{y}$. If $\ud{a}^\ell$ is a prefix of $\ud{u}$, then $p=n$ (and $\ell=1$). This proves the result for $\ell=1$. Let $\ell\geq 2$. Then $\ud{a}^\ell$ is not a prefix of $\ud{u}$ and $\ud{a}^{\ell-1}\ud{a}^\ell\in\cL$. Suppose that $\ud{a}^{\ell-1}$ is not a prefix of $\ud{u}$. In that case $\ud{u}\preceq\ud{a}^{\ell-1}\ud{a}^\ell\ud{y}$ and we want to prove that $\ud{u}\preceq\sigma^{j}\ud{a}^{\ell-1}\ud{a}^\ell\ud{y}$ for $j=1,\ldots, |\ud{a}^{\ell-1}|$. We know that $\sigma\ud{a}^{\ell-1}$ is a prefix of $\ud{u}$, and by maximality of the words in the $u$-parsing and Lemma \ref{lem3.2.0} $\ud{u}\prec\sigma\ud{a}^{\ell-1}\ud{a}^\ell$; hence $\ud{u}\prec\sigma\ud{a}^{\ell-1}\ud{a}^\ell\ud{y}$. Therefore $$ \ud{u}\preceq \sigma^j\ud{u}\preceq \sigma^j\ud{a}^{\ell-1}\ud{a}^\ell\ud{y}\quad\forall \,j=0,\ldots,|\ud{a}^{\ell-1}|\,. $$ The proof is similar if $\ell=2$ and $\ud{a}^{\ell-1}$ is a prefix of $\ud{u}$.
Iterating this argument we prove that $\ud{u}\preceq\sigma^m\ud{x}$ for $m=0,\ldots,n-1$. Suppose that $|u(\ud{w})|>|v(\ud{w})|$ and set $\ud{a}=u(\ud{w})$. We prove that $v(\ud{a})=v(\ud{w})$. By definition $v(\ud{w})$ is the longest suffix of $\ud{w}$ which is a prefix of $\ud{v}$; it is also a suffix of $\ud{a}$, whence it is also the longest suffix of $\ud{a}$ which is a prefix of $\ud{v}$. Therefore, from the first part of the lemma we get $$ \big\{\ud{y}\in\mathbf\Sigma\,{:}\; \ud{w}\ud{y}\in\mathbf\Sigma\big\}=\big\{\ud{y}\in\mathbf\Sigma\,{:}\; u(\ud{w})\ud{y}\in\mathbf\Sigma\big\}\,. $$ \qed \begin{defn}\label{defn3.2.1} Let $\ud{w}\in\cL$. The {\sf follower-set}\footnote{Usually the follower-set is defined as ${\mathcal F}_{\ud{w}}=\big\{\ud{y}\in\cL\,{:}\; \ud{w}\ud{y}\in\cL\big\}$. Since $\cL$ is a dynamical language, i.e. for each $\ud{w}\in\cL$ there exists a letter $e\in{\tt A}$ such that $\ud{w}e\in\cL$, the two definitions agree.} of $\ud{w}$ is the set $$ {\mathcal F}_{\ud{w}}:=\big\{\ud{y}\in\mathbf\Sigma\,{:}\; \ud{w}\ud{y}\in\mathbf\Sigma\big\}\,. $$ \end{defn} Lemma \ref{lem3.2.1} gives the important result that ${\mathcal F}_{\ud{w}}={\mathcal F}_{u(\ud{w})}$ if $|u(\ud{w})|>|v(\ud{w})|$, and ${\mathcal F}_{\ud{w}}={\mathcal F}_{v(\ud{w})}$ if $|v(\ud{w})|>|u(\ud{w})|$.
Moreover, \begin{equation}\label{3.2.3} {\mathcal F}_\ud{w}=\big\{\ud{y}\in\mathbf\Sigma\,{:}\; \sigma^p\ud{u}\preceq\ud{y}\preceq\sigma^q\ud{v}\big\}\quad\text{where $p=|u(\ud{w})|$ and $q=|v(\ud{w})|$.} \end{equation} We can define an equivalence relation between words of $\cL$, $$ \ud{w}\sim \ud{w}^\prime \iff {\mathcal F}_{\ud{w}}={\mathcal F}_{\ud{w}^\prime}\,. $$ The collection of follower-sets is entirely determined by the strings $\ud{u}$ and $\ud{v}$. Moreover, the strings $\ud{u}$ and $\ud{v}$ are eventually periodic if and only if this collection is finite. Notice that $\mathbf\Sigma={\mathcal F}_\epsilon={\mathcal F}_{\ud{w}}$ when $p=q=0$. \begin{defn}\label{defn3.2.2} The {\sf follower-set graph} $\cG$ is the labeled graph whose set of vertices is the collection of all follower-sets. Let $\cC$ and $\cC^\prime$ be two vertices. There is an edge, labeled by $a\in{\tt A}$, from $\cC$ to $\cC^\prime$ if and only if there exists $\ud{w}\in\cL$ so that $\ud{w}a\in\cL$, $\cC={\mathcal F}_{\ud{w}}$ and $\cC^\prime={\mathcal F}_{\ud{w}a}$. ${\mathcal F}_\epsilon$ is called the {\sf root} of $\cG$. \end{defn} The following properties of $\cG$ are immediate. From any vertex there is at least one out-going edge and at most $|{\tt A}|$. If ${\tt A}=\{0,1,\ldots,k-1\}$ and $k\geq 3$, then for each $j\in \{1,\ldots,k-2\}$ there is an edge labeled by $j$ from ${\mathcal F}_\epsilon$ to ${\mathcal F}_\epsilon$.
The out-going edges from ${\mathcal F}_{\ud{w}}$ are labeled by the first letters of the strings $\ud{y}\in {\mathcal F}_{\ud{w}}$. The follower-set graph $\cG$ is right-resolving. Given $\ud{w}\in\cL$, there is a unique path labeled by $\ud{w}$ from ${\mathcal F}_\epsilon$ to ${\mathcal F}_{\ud{w}}$. \begin{lem}\label{lem3.2.2} Let $\ud{a}$ be a $u$-prefix and suppose that $\ud{b}=v(\ud{a})$. Let $p=|\ud{a}|$ and $q=|\ud{b}|$ so that ${\mathcal F}_\ud{a}=\big\{\ud{y}\in\mathbf\Sigma\,{:}\; \sigma^p\ud{u}\preceq\ud{y}\preceq\sigma^q\ud{v}\big\}$. Then there is more than one out-going edge from ${\mathcal F}_{\ud{a}}$ if and only if $u_p<v_q$. Assume that $u_p<v_q$. Then there is an edge labeled by $v_q$ from ${\mathcal F}_{\ud{a}}$ to ${\mathcal F}_{\ud{b}v_q}$, an edge labeled by $u_p$ from ${\mathcal F}_{\ud{a}}$ to ${\mathcal F}_{\ud{a}u_p}$ and $v(\ud{a}u_p\ud{c})=v(\ud{c})$. If there exists $u_p<\ell<v_q$, there is an edge labeled by $\ell$ from ${\mathcal F}_{\ud{a}}$ to ${\mathcal F}_\epsilon$. Moreover, there are at least two out-going edges from ${\mathcal F}_{\ud{b}}$, one labeled by $v_q$ to ${\mathcal F}_{\ud{b}v_q}$ and one labeled by $\ell^\prime=u_{|u(\ud{b})|+1}<v_q$ to ${\mathcal F}_{u(\ud{b})\ell^{\prime}}$. Furthermore $u(\ud{b}v_q\ud{c})=u(\ud{c})$.
\end{lem} \medbreak\noindent{\bf Proof}:\enspace The first part of the lemma is immediate. Suppose that there is only one out-going edge from ${\mathcal F}_{\ud{b}}$, that is from ${\mathcal F}_{\ud{b}}$ to ${\mathcal F}_{\ud{b}v_q}$. This happens if and only if $u(\ud{b})v_q$ is a prefix of $\ud{u}$. By Lemma \ref{lem3.2.0} we conclude that $\ud{a}v_q$ is a prefix of $\ud{u}$, which is a contradiction. Therefore $u(\ud{b}v_q)=\epsilon$; hence $u(\ud{b}v_q\ud{c})=u(\ud{c})$. \qed \begin{lem}\label{lem3.2.3} Let $\ud{b}$ be a $v$-prefix and suppose that $\ud{a}=u(\ud{b})$. Let $p=|\ud{a}|$ and $q=|\ud{b}|$ so that ${\mathcal F}_\ud{b}=\big\{\ud{y}\in\mathbf\Sigma\,{:}\; \sigma^p\ud{u}\preceq\ud{y}\preceq\sigma^q\ud{v}\big\}$. Then there is more than one out-going edge from ${\mathcal F}_{\ud{b}}$ if and only if $u_p<v_q$. Assume that $u_p<v_q$. Then there is an edge labeled by $u_p$ from ${\mathcal F}_{\ud{b}}$ to ${\mathcal F}_{\ud{a}u_p}$, an edge labeled by $v_q$ from ${\mathcal F}_{\ud{b}}$ to ${\mathcal F}_{\ud{b}v_q}$ and $u(\ud{b}v_q\ud{c})=u(\ud{c})$. If there exists $u_p<\ell<v_q$, there is an edge labeled by $\ell$ from ${\mathcal F}_{\ud{b}}$ to ${\mathcal F}_\epsilon$.
Moreover, there are at least two out-going edges from ${\mathcal F}_{\ud{a}}$, one labeled by $u_p$ to ${\mathcal F}_{\ud{a}u_p}$ and one labeled by $\ell^\prime=v_{|v(\ud{a})|+1}>u_p$ to ${\mathcal F}_{v(\ud{a})\ell^{\prime}}$. Furthermore $v(\ud{a}u_p\ud{c})=v(\ud{c})$. \end{lem} \begin{sch}\label{sch} The picture below illustrates the main properties of the graph $\cG$. The vertices of the graph are labeled by prefixes of $\ud{u}$ and $\ud{v}$. The upper line represents a prefix of the string $\ud{u}$ which is written $\ud{w}\,e^{\prime\p}$, and the bottom line a prefix of the string $\ud{v}$, which is written $\ud{b}\,e^\prime$. Here $u(\ud{b})=\ud{a}$, $v(\ud{w})=\ud{b}$ and we assume that $e^{\prime\p}\prec e^\prime$. Therefore, there is an edge labeled by $e^\prime$ from ${\mathcal F}_{\ud{w}}$ to ${\mathcal F}_{\ud{b}e^\prime}$ and there is an edge labeled by $e$ from ${\mathcal F}_{\ud{b}}$ to ${\mathcal F}_{\ud{a}e}$ with $e\prec e^\prime$. Moreover, we also have $e\preceq e^{\prime\p}$. Only these two labeled edges are drawn in the picture.
\setlength{\unitlength}{1mm} \begin{picture}(160,35) \put(15,25){\framebox(10,3)[b]{$\ud{a}$}} \put(40,25){\makebox(40,3){$\cdots\cdots\cdots$}} \put(80,25){\dashbox{2}(45,3){$\ud{b}$}}\put(115,25){\framebox(10,3){$\ud{a}$}} \put(15,10){\dashbox{2}(45,3){$\ud{b}$}}\put(50,10){\framebox(10,3)[b]{$\ud{a}$}} \put(60,11){\vector(-2,1){30}} \put(125,26){\vector(-4,-1){60}} \put(25,25){\makebox[5mm]{$e$}} \put(60,10){\makebox[5mm]{$e^\prime$}} \put(125,25){\makebox[5mm]{$e^{\prime\p}$}} \put(48,18){\makebox[5mm]{$e$}} \put(88,18){\makebox[5mm]{$e^\prime$}} \end{picture} \end{sch} We introduce a variant of the follower-set graph denoted below by $\overline{\cG}(\ud{u},\ud{v})$ or simply by $\overline{\cG}$. We introduce a vertex for each (nontrivial) prefix $\ud{a}$ of $\ud{u}$ and for each (nontrivial) prefix $\ud{b}$ of $\ud{v}$. We add the vertex ${\mathcal F}_\epsilon$. Here we do not use the equivalence relation $\sim$. The root is denoted by $[0,0]$; let $\ud{a}$ be a prefix of $\ud{u}$ and let $p=|\ud{a}|$, $q=|v(\ud{a})|$. Then this vertex is denoted by $[p,q]$. Notice that $p>q$. Similarly, let $\ud{b}$ be a prefix of $\ud{v}$ and let $p=|u(\ud{b})|$, $q=|\ud{b}|$. Then the corresponding vertex is denoted by $[p,q]$. Notice that $p<q$. The {\sf upper branch} of $\overline{\cG}$ is the set of all vertices $[p,q]$ with $p>q$ and the {\sf lower branch} of $\overline{\cG}$ is the set of all vertices $[p,q]$ with $p<q$. There is a single out-going edge from $[p,q]$ if and only if $u_p=v_q$. In that case the edge is labeled by $u_p$ (or $v_q$) and goes from $[p,q]$ to $[p+1,q+1]$. Otherwise there are several out-going edges. If $u_p<v_q$ there is an edge labeled by $u_p$ from $[p,q]$ to $[p+1,0]$, an edge labeled by $v_q$ from $[p,q]$ to $[0,q+1]$, and if $u_p<j<v_q$ then there is an edge labeled by $j$ from $[p,q]$ to $[0,0]$. We define the {\sf level of the vertex} $[p,q]$ of $\overline{\cG}$ as $\ell([p,q]):=\max\{p,q\}$.
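The transition rules for $\overline{\cG}$ just described are easy to implement. The following sketch (Python, ours, not part of the paper) builds the truncated graph $\overline{\cG}_K$ from finite prefixes of $\ud{u}$ and $\ud{v}$ and estimates the exponential growth rate $\frac1n\log_2$ of the number of $n$-paths from the root, which the entropy discussion below identifies with $h(\overline{\cG}_K)$; finite $K$ and $n$ give only an approximation.

```python
from math import log2

def entropy_truncated(u, v, K, n):
    """Build \\bar G_K from digit prefixes u, v (length >= K+1, with
    u[0] = 0 and v[0] = k-1) via the stated transition rules, assuming
    u[p] <= v[q] at every reachable vertex [p,q], and return
    (1/n) * log2 of the number of n-paths starting at the root [0,0]."""
    def edges(p, q):
        if u[p] == v[q]:
            out = [(p + 1, q + 1)]
        else:  # u[p] < v[q]
            out = [(p + 1, 0), (0, q + 1)]
            out += [(0, 0)] * (v[q] - u[p] - 1)  # one edge per middle letter j
        # truncation: keep only vertices of level max(p,q) <= K
        return [(a, b) for (a, b) in out if max(a, b) <= K]

    counts = {(0, 0): 1}  # number of paths of current length ending at each vertex
    for _ in range(n):
        new = {}
        for (p, q), c in counts.items():
            for w in edges(p, q):
                new[w] = new.get(w, 0) + c
        counts = new
    return log2(sum(counts.values())) / n

# Golden-mean shift (u = (0)^infty, v = (10)^infty):
# h = log2((1 + sqrt(5))/2), approximately 0.6942.
h = entropy_truncated([0] * 21, ([1, 0] * 11)[:21], 20, 200)
```

With $K=20$ and $n=200$ the estimate is already close to $\log_2\frac{1+\sqrt5}{2}$, in line with the approximation result quoted below.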
\begin{defn}\label{defn3.2.3} Let $\mathbf\Sigma$ be a shift-space and $\cL$ its language. We denote by $\cL_n$ the set of all words of $\cL$ of length $n$. The {\sf entropy} of $\mathbf\Sigma$ is $$ h(\mathbf\Sigma):=\lim_{n\rightarrow\infty}\frac{1}{n}\log_2 {\rm card}(\cL_n)\,. $$ \end{defn} The number $h(\mathbf\Sigma)$ is also equal to the topological entropy of the dynamical system $(\mathbf\Sigma,\sigma)$ \cite{LiM}. In our case we can give an equivalent definition using the graph $\cG$ or the graph $\overline{\cG}$. We set $$ \ell(n):={\rm card}\{\text{$n$-paths in $\overline{\cG}$ starting at the root $[0,0]$}\}\,. $$ Since the graph is right-resolving, for any $\ud{w}\in\cL_n$ there is a unique path labeled by $\ud{w}$ starting at the root $[0,0]$, so that $h(\mathbf\Sigma)=h(\overline{\cG})$, where $$ h(\overline{\cG})=\lim_{n\rightarrow\infty}\frac{1}{n}\log_2 \ell(n)\,. $$ Let $K\in{\bf N}$ and $\overline{\cG}_K$ be the sub-graph of $\overline{\cG}$ whose set of vertices is the set of all vertices of $\overline{\cG}$ of level smaller than or equal to $K$. The following result is Proposition 9.3.15 in \cite{BrBr}. \begin{pro}\label{proBB} Given $\varepsilon>0$ there exists a $K(\varepsilon)<\infty$ such that for any $K\geq K(\varepsilon)$, $$ h(\overline{\cG}_K)\leq h(\overline{\cG})\leq h(\overline{\cG}_K)+\varepsilon\,. $$ \end{pro} \begin{cor}\label{corBB} Let $(\ud{u},\ud{v})$ be a pair of strings of ${\tt A}^{\Z_+}$ verifying \eqref{condition}.
Given $\varepsilon>0$ there exists $N(\varepsilon)$ such that if $(\ud{u}^\prime,\ud{v}^\prime)$ is a pair of strings verifying \eqref{condition}, $\ud{u}$ and $\ud{u}^\prime$ have a common prefix of length larger than $N(\varepsilon)$ and $\ud{v}$ and $\ud{v}^\prime$ have a common prefix of length larger than $N(\varepsilon)$, then $$ \big|h(\mathbf\Sigma(\ud{u}^\prime,\ud{v}^\prime))- h(\mathbf\Sigma(\ud{u},\ud{v}))\big|\leq\varepsilon\,. $$ \end{cor} \subsection{The algorithm for finding $(\bar{\alpha},\bar{\beta})$}\label{subsectionalgo} We describe an algorithm which assigns to a pair of strings $(\ud{u},\ud{v})$, such that $u_0=0$ and $v_0=k-1$, a pair of real numbers $(\bar{\alpha},\bar{\beta})\in [0,1]\times[1,\infty)$. We assume tacitly that for the pair $({\alpha,\beta})$ one has $\alpha\in[0,1]$, $\beta\leq k$, and that the map $\overline{\varphi}^{\alpha,\beta}$ verifies $$ 0<\overline{\varphi}^{\alpha,\beta}(t)<1\quad\forall t\in(1,k-1)\,. $$ In particular $\beta\geq k-2$. When $k=2$ we assume that $\beta\geq 1$. Recall that $$ \gamma=\alpha+\beta-k+1\,, $$ and notice that our assumptions imply that $0\leq\gamma\leq 1$. \begin{defn}\label{defndominate} The map $\overline{\varphi}^{\alpha,\beta}$ {\sf dominates} the map $\overline{\varphi}^{\alpha^\prime,\beta^\prime}$ if and only if $\overline{\varphi}^{\alpha,\beta}(t)\geq \overline{\varphi}^{\alpha^\prime,\beta^\prime}(t)$ for all $t\in [0,k]$ and there exists $s\in [0,k]$ such that $\overline{\varphi}^{\alpha,\beta}(s)>\overline{\varphi}^{\alpha^\prime,\beta^\prime}(s)$.
\end{defn} \setlength{\unitlength}{1mm} \begin{picture}(160,50) \put(25,15){\framebox(30,30)} \put(55,15){\framebox(30,30)} \put(85,15){\framebox(30,30)} \put(35,15){\line(5,2){75}} \put(25,23){\dashbox{2}(30,0)} \put(25,35){\dashbox{2}(60,0)} \put(20,22){\makebox[5mm]{$a_1$}} \put(20,34){\makebox[5mm]{$a_2$}} \put(32,10){\makebox[5mm]{$\alpha$}} \put(108,10){\makebox[5mm]{$2+\gamma=\alpha+\beta$}} \put(34,14){$\bullet$} \put(109,14){$\bullet$} \put(55,5){\makebox[30mm]{\it{The graph of $\overline{\varphi}^{\alpha,\beta}$} ($k=3$)}} \end{picture} \begin{lem}\label{lem3.3.1} If $\overline{\varphi}^{\alpha,\beta}$ dominates $\overline{\varphi}^{\alpha^\prime,\beta^\prime}$, then, for all $\ud{x}\in{\tt A}^{\Z_+}$, $\overline{\varphi}_\infty^{\alpha,\beta}(\ud{x})\geq \overline{\varphi}_\infty^{\alpha^\prime,\beta^\prime}(\ud{x})$. If $$ 0<\overline{\varphi}_\infty^{\alpha,\beta}(\ud{x})<1\quad\text{or}\quad 0<\overline{\varphi}_\infty^{\alpha^\prime,\beta^\prime}(\ud{x})<1\,, $$ then the inequality is strict. \end{lem} \medbreak\noindent{\bf Proof}:\enspace If $\overline{\varphi}^{\alpha,\beta}$ dominates $\overline{\varphi}^{\alpha^\prime,\beta^\prime}$, then by our implicit assumptions we get by inspection of the graphs that $$ \forall t\geq t^\prime \,{:}\;\;\overline{\varphi}^{\alpha,\beta}(t)>\overline{\varphi}^{\alpha^\prime,\beta^\prime}(t^\prime)\quad \text{if}\quad t,t^\prime\in (\alpha,\alpha^\prime+\beta^\prime)= (\alpha,\alpha+\beta)\cup(\alpha^\prime,\alpha^\prime+\beta^\prime)\,, $$ otherwise $\overline{\varphi}^{\alpha,\beta}(t)\geq\overline{\varphi}^{\alpha^\prime,\beta^\prime}(t^\prime)$. Therefore, for all $n\geq 1$, $$ \overline{\varphi}_n^{\alpha,\beta}(\ud{x})\geq\overline{\varphi}_n^{\alpha^\prime,\beta^\prime}(\ud{x})\,. $$ Suppose that $0<\overline{\varphi}_\infty^{\alpha,\beta}(\ud{x})<1$. 
Then $x_0+\overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{x})\in(\alpha,\alpha+\beta)$ and $$ \overline{\varphi}_\infty^{\alpha,\beta}(\ud{x})=\overline{\varphi}^{\alpha,\beta}(x_0+\overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{x}))> \overline{\varphi}^{\alpha^\prime,\beta^\prime}(x_0+\overline{\varphi}_\infty^{\alpha^\prime,\beta^\prime}(\sigma\ud{x}))=\overline{\varphi}_\infty^{\alpha^\prime,\beta^\prime}(\ud{x})\,. $$ A similar proof applies when $0<\overline{\varphi}_\infty^{\alpha^\prime,\beta^\prime}(\ud{x})<1$. \qed \begin{lem}\label{lem3.3.2} Let $\alpha=\alpha^\prime\in [0,1]$ and $1\leq \beta<\beta^\prime$. Then, for $\ud{x}\in{\tt A}^{\Z_+}$, $$ 0\leq \overline{\varphi}_\infty^{\alpha,\beta}(\ud{x})-\overline{\varphi}_\infty^{\alpha,\beta^\prime}(\ud{x})\leq \frac{|\beta-\beta^\prime|}{\beta^\prime-1}\,. $$ Let $\gamma=\gamma^\prime\in [0,1]$, $0\leq\alpha^\prime<\alpha\leq 1$ and $\beta^\prime>1$. Then, for $\ud{x}\in{\tt A}^{\Z_+}$, $$ 0\leq \overline{\varphi}_\infty^{{\alpha^\prime,\beta^\prime}}(\ud{x})-\overline{\varphi}_\infty^{\alpha,\beta}(\ud{x})\leq \frac{|\alpha-\alpha^\prime|}{\beta^\prime-1}\,. $$ The map $\beta\mapsto \overline{\varphi}_\infty^{\alpha,\beta}(\ud{x})$ is continuous at $\beta=1$. \end{lem} \medbreak\noindent{\bf Proof}:\enspace Let $\alpha=\alpha^\prime\in [0,1]$ and $1\leq \beta<\beta^\prime$. For $t,t^\prime\in [0,k]$, $$ |\overline{\varphi}^{\alpha,\beta^\prime}(t^\prime)-\overline{\varphi}^{\alpha,\beta}(t)|\leq |\overline{\varphi}^{\alpha,\beta^\prime}(t^\prime)-\overline{\varphi}^{\alpha,\beta^\prime}(t)|+ |\overline{\varphi}^{\alpha,\beta^\prime}(t)-\overline{\varphi}^{\alpha,\beta}(t)|\leq \frac{|t-t^\prime|}{\beta^\prime}+\frac{|\beta-\beta^\prime|}{\beta^\prime}\,. $$ (The maximum of $|\overline{\varphi}^{\alpha,\beta^\prime}(t)-\overline{\varphi}^{\alpha,\beta}(t)|$ is attained at $\alpha+\beta$). 
By induction $$ |\overline{\varphi}_n^{\alpha,\beta^\prime}(x_0,\ldots,x_{n-1})- \overline{\varphi}_n^{\alpha,\beta}(x_0,\ldots,x_{n-1})|\leq |\beta-\beta^\prime|\sum_{j=1}^n(\beta^\prime)^{-j}\,. $$ Since $\beta^\prime>1$ the sum is convergent. This proves the first statement. The second statement is proved similarly using $$ |\overline{\varphi}^{\alpha^\prime,\beta^\prime}(t^\prime)-\overline{\varphi}^{\alpha,\beta}(t)|\leq |\overline{\varphi}^{\alpha^\prime,\beta^\prime}(t^\prime)-\overline{\varphi}^{\alpha^\prime,\beta^\prime}(t)|+|\overline{\varphi}^{\alpha^\prime,\beta^\prime}(t)-\overline{\varphi}^{\alpha,\beta}(t)|\leq \frac{|t-t^\prime|}{\beta^\prime}+\frac{|\alpha-\alpha^\prime|}{\beta^\prime} $$ which is valid for $\gamma=\gamma^\prime\in [0,1]$ and $0\leq\alpha^\prime<\alpha\leq 1$. We prove the last statement. Given $\varepsilon>0$ there exists $n^*$ such that $$ \overline{\varphi}_{n^*}^{\alpha,1}(\ud{x})\geq \overline{\varphi}_\infty^{\alpha,1}(\ud{x})-\varepsilon\,. $$ Since $\beta\mapsto \overline{\varphi}_{n^*}^{\alpha,\beta}(\ud{x})$ is continuous, there exists $\beta^\prime$ so that for $1\leq\beta\leq\beta^\prime$, $$ \overline{\varphi}_{n}^{\alpha,\beta}(\ud{x})\geq \overline{\varphi}_{n^*}^{\alpha,\beta^\prime}(\ud{x})\geq \overline{\varphi}_{n^*}^{\alpha,1}(\ud{x})-\varepsilon\quad\forall n\geq n^*\,. $$ Hence $$ \overline{\varphi}_\infty^{\alpha,1}(\ud{x})-2\varepsilon\leq \overline{\varphi}_\infty^{\alpha,\beta}(\ud{x})\leq \overline{\varphi}_\infty^{\alpha,1}(\ud{x})\,. $$ \qed \begin{cor}\label{cor3.2} Given $\ud{x}$ and $0\leq\alpha^*\leq 1$, let $$ g_{\alpha^*}(\gamma):=\overline{\varphi}_\infty^{\alpha^*,\beta(\gamma)}(\ud{x})\quad\text{with}\quad \beta(\gamma):=\gamma-\alpha^*+k-1\,. $$ For $k\geq 3$ the map $g_{\alpha^*}$ is continuous and non-increasing on $[0,1]$. If $0<g_{\alpha^*}(\gamma_0)<1$, then the map is strictly decreasing in a neighborhood of $\gamma_0$. If $k=2$ then the same statements hold on $[\alpha^*,1]$. 
\end{cor} \begin{cor}\label{cor3.3} Given $\ud{x}$ and $0<\gamma^*\leq 1$, let $$ h_{\gamma^*}(\alpha):=\overline{\varphi}_\infty^{\alpha,\beta(\alpha)}(\ud{x})\quad\text{with}\quad \beta(\alpha):=\gamma^*-\alpha+k-1\,. $$ For $k\geq 3$ the map $h_{\gamma^*}$ is continuous and non-increasing on $[0,1]$. If $0<h_{\gamma^*}(\alpha_0)<1$, then the map is strictly decreasing in a neighborhood of $\alpha_0$. If $k=2$ then the same statements hold on $[0,\gamma^*)$. \end{cor} \begin{pro}\label{pro3.3} Let $k\geq 2$, $\ud{u}, \ud{v}\in{\tt A}^{\Z_+}$ verifying $u_0=0$ and $v_0=k-1$ and $$ \sigma\ud{u}\preceq\ud{v}\quad\text{and}\quad \ud{u}\preceq\sigma\ud{v}\,. $$ If $k=2$ we also assume that $\sigma\ud{u}\preceq\sigma\ud{v}$. Then there exist $\bar{\alpha}\in[0,1]$ and $\bar{\beta}\in[1,\infty)$ so that $\bar{\gamma}\in[0,1]$. If $\bar{\beta}>1$, then $$ \overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\sigma\ud{u})=\bar{\alpha}\quad\text{and}\quad \overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\sigma\ud{v})=\bar{\gamma}\,. $$ \end{pro} \medbreak\noindent{\bf Proof}:\enspace We consider separately the cases $\sigma\ud{v}=\ud{0}$ and $\sigma\ud{u}=(k-1)^\infty$ (i.e. $u_j=k-1$ for all $j\geq 1$). If $\sigma\ud{v}=\ud{0}$, then $\ud{u}=\ud{0}$ and $\ud{v}=(k-1)\ud{0}$; we set $\bar{\alpha}:=0$ and $\bar{\beta}:=k-1$ ($\bar{\gamma}=0$). If $\sigma\ud{u}=(k-1)^\infty$, then $\ud{v}=(k-1)^\infty$ and $\ud{u}=0(k-1)^\infty$; we set $\bar{\alpha}:=1$ and $\bar{\beta}:=k$. From now on we assume that $\ud{0}\prec\sigma\ud{v}$ and $\sigma\ud{u}\prec (k-1)^\infty$. Set $\alpha_0:=0$ and $\beta_0:=k$. We consider in detail the case $k=2$, so that we also assume that $\sigma\ud{u}\preceq\sigma\ud{v}$. \noindent {\sf Step $1$.\,} Set $\alpha_1:=\alpha_0$ and solve the equation $$ \overline{\varphi}_\infty^{\alpha_1,\beta}(\sigma\ud{v})=\beta+\alpha_1-k+1\,. $$ There exists a unique solution, $\beta_1$, such that $k-1<\beta_1\leq k$. 
Indeed, the map $$ G_{\alpha_1}(\gamma):=g_{\alpha_1}(\gamma)-\gamma\quad\text{with}\quad g_{\alpha_1}(\gamma):=\overline{\varphi}_\infty^{\alpha_1,\beta(\gamma)}(\sigma\ud{v}) \;\,\text{and}\;\,\beta(\gamma):=\gamma-\alpha_1+k-1 $$ is continuous and strictly decreasing on $[\alpha_1,1]$ (see Corollary \ref{cor3.2}). If $\sigma\ud{v}=(k-1)^\infty$, then $G_{\alpha_1}(1)=0$ and we set $\beta_1:=k$ and we have $\gamma_1=1$. If $\sigma\ud{v}\not=(k-1)^\infty$, then there exists a smallest $j\geq 1$ so that $v_j\leq (k-2)$. Therefore $\overline{\varphi}_\infty^{\alpha_1,k}(\sigma^j\ud{v})<1$ and $$ \overline{\varphi}_\infty^{\alpha_1,k}(\sigma\ud{v})=\overline{\varphi}^{\alpha_1,k}_{j-1}\big(v_1,\ldots,v_{j-1}+ \overline{\varphi}_\infty^{\alpha_1,k}(\sigma^j\ud{v})\big)<1\,, $$ so that $G_{\alpha_1}(1)<0$. On the other hand, since $\sigma\ud{v}\not=\ud{0}$, $\overline{\varphi}_\infty^{\alpha_1,k-1}(\sigma\ud{v})>0$, so that $G_{\alpha_1}(0)>0$. There exists a unique $\gamma_1\in(0,1)$ with $G_{\alpha_1}(\gamma_1)=0$. Define $\beta_1:=\beta(\gamma_1)=\gamma_1-\alpha_1+k-1$. \noindent {\sf Step $2$.\,} Solve in $[0,\gamma_1)$ the equation $$ \overline{\varphi}_\infty^{\alpha,\beta(\alpha)}(\sigma\ud{u})=\alpha\quad\text{with}\quad\beta(\alpha):= \gamma_1-\alpha+k-1=\beta_1+\alpha_1-\alpha\,. $$ If $\sigma\ud{u}=\ud{0}$, then set $\bar{\alpha}:=0$ and $\bar{\beta}:=\beta_1$. Let $\sigma\ud{u}\not=\ud{0}$. There exists a smallest $j\geq 1$ such that $u_j\geq 1$. This implies that $\overline{\varphi}_\infty^{\alpha_1,\beta_1}(\sigma^j\ud{u})>0$ and consequently $$ \overline{\varphi}_\infty^{\alpha_1,\beta(\alpha_1)}(\sigma\ud{u})= \overline{\varphi}^{\alpha_1,\beta_1}_{j-1}\big(u_1,\ldots,u_{j-1}+ \overline{\varphi}_\infty^{\alpha_1,\beta_1}(\sigma^j\ud{u})\big)>0\,. $$ Since $\sigma\ud{u}\preceq\sigma\ud{v}$, $$ 0<\overline{\varphi}_\infty^{\alpha_1,\beta_1}(\sigma\ud{u}) \leq\overline{\varphi}_\infty^{\alpha_1,\beta_1}(\sigma\ud{v})=\gamma_1\,. 
$$ We have $\gamma_1=1$ only in the case $\sigma\ud{v}=(k-1)^\infty$; in that case we also have $\overline{\varphi}_\infty^{\alpha_1,\beta_1}(\sigma\ud{u})<1$. By Corollary \ref{cor3.3}, for any $\alpha>\alpha_1$ we have $\overline{\varphi}_\infty^{\alpha_1,\beta_1}(\sigma\ud{u})>\overline{\varphi}_\infty^{\alpha,\beta(\alpha)}(\sigma\ud{u})$. Therefore, the map $$ H_{\gamma_1}(\alpha):=h_{\gamma_1}(\alpha)-\alpha\quad\text{with}\quad h_{\gamma_1}(\alpha):=\overline{\varphi}_\infty^{\alpha,\beta(\alpha)}(\sigma\ud{u}) $$ is continuous and strictly decreasing on $[0,\gamma_1)$, $H_{\gamma_1}(\alpha_1)>0$ and $\lim_{\alpha\uparrow\gamma_1}H_{\gamma_1}(\alpha)<0$. There exists a unique $\alpha_2\in (\alpha_1,\gamma_1)$ such that $H_{\gamma_1}(\alpha_2)=0$. Set $\beta_2:=\gamma_1-\alpha_2+k-1=\alpha_1+\beta_1-\alpha_2$ and $\gamma_2:=\alpha_2+\beta_2-k+1=\gamma_1$. Since $\alpha_2\in [0,\gamma_1)$, we have $\beta_2>1$. Hence \begin{equation}\label{step2} \alpha_1<\alpha_2<\gamma_1\quad\text{and}\quad 1<\beta_2<\beta_1\quad\text{and}\quad \gamma_2=\gamma_1\,. \end{equation} If $\sigma\ud{v}=(k-1)^\infty$, $\gamma_2=1$ and we set $\bar{\alpha}:=\alpha_2$ and $\bar{\beta}:=\beta_2$. \noindent {\sf Step $3$.\,} From now on $\sigma\ud{u}\not=\ud{0}$ and $\sigma\ud{v}\not=(k-1)^\infty$. Set $\alpha_3:=\alpha_2$ and solve in $[\alpha_3,1]$ the equation $$ \overline{\varphi}_\infty^{\alpha_3,\beta(\gamma)}(\sigma\ud{v})=\gamma\quad\text{with}\quad \beta(\gamma):=\gamma-\alpha_3+k-1\,. $$ By Lemma \ref{lem3.3.1} ($k=2$), $$ \overline{\varphi}_\infty^{\alpha_3,\beta(\alpha_3)}(\sigma\ud{v})= \overline{\varphi}_\infty^{\alpha_2,1}(\sigma\ud{v})\geq \overline{\varphi}_\infty^{\alpha_2,1}(\sigma\ud{u})>\overline{\varphi}_\infty^{\alpha_2,\beta_2}(\sigma\ud{u})=\alpha_2\,, $$ since $0<\alpha_2<1$. 
On the other hand by Corollary \ref{cor3.3}, \begin{equation}\label{esti3} \overline{\varphi}_\infty^{\alpha_3,\beta(\gamma_1)}(\sigma\ud{v})= \overline{\varphi}_\infty^{\alpha_3,1+\gamma_1-\alpha_3}(\sigma\ud{v})< \overline{\varphi}_\infty^{\alpha_1,1+\gamma_1-\alpha_1}(\sigma\ud{v})=\overline{\varphi}_\infty^{\alpha_1,\beta_1}(\sigma\ud{v}) =\gamma_1\,, \end{equation} since $0<\gamma_1<1$. Therefore, the map $G_{\alpha_3}$ is continuous and strictly decreasing on $[\alpha_3,1]$, $G_{\alpha_3}(\alpha_3)>0$ and $G_{\alpha_3}(\gamma_1)<0$. There exists a unique $\gamma_3\in (\alpha_3,\gamma_1)$ such that $G_{\alpha_3}(\gamma_3)=0$. Set $\beta_3:=\gamma_3-\alpha_3+k-1$, so that $\beta_3<\gamma_1-\alpha_2+k-1=\beta_2$. Hence \begin{equation}\label{step3} \alpha_3=\alpha_2\quad\text{and}\quad 1<\beta_3<\beta_2\quad\text{and}\quad 0<\gamma_3<\gamma_2<1\,. \end{equation} \noindent {\sf Step $4$.\,} Solve in $[0,\gamma_3)$ the equation $$ \overline{\varphi}_\infty^{\alpha,\beta(\alpha)}(\sigma\ud{u})=\alpha\quad\text{with}\quad\beta(\alpha):= \gamma_3-\alpha+k-1=\beta_3+\alpha_3-\alpha\,. $$ By Lemma \ref{lem3.3.1} \begin{equation}\label{esti4} \overline{\varphi}_\infty^{\alpha_3,\beta(\alpha_3)}(\sigma\ud{u})=\overline{\varphi}_\infty^{\alpha_3,\beta_3}(\sigma\ud{u}) >\overline{\varphi}_\infty^{\alpha_3,\beta_2}(\sigma\ud{u})=\overline{\varphi}_\infty^{\alpha_2,\beta_2}(\sigma\ud{u})=\alpha_2\,, \end{equation} since $0<\alpha_2<1$. On the other hand, $$ 0<\overline{\varphi}_\infty^{\alpha_3,\beta(\alpha_3)}(\sigma\ud{u})= \overline{\varphi}_\infty^{\alpha_3,\beta_3}(\sigma\ud{u})\leq\overline{\varphi}_\infty^{\alpha_3,\beta_3}(\sigma\ud{v}) =\gamma_3<1\,. $$ By Corollary \ref{cor3.3} $$ \overline{\varphi}_\infty^{\alpha,\beta(\alpha)}(\sigma\ud{u})<\overline{\varphi}_\infty^{\alpha_3,\beta(\alpha_3)}(\sigma\ud{u}) \quad\forall \alpha\in (\alpha_3,\gamma_3)\,. 
$$ Therefore, the map $$ H_{\gamma_3}(\alpha):=h_{\gamma_3}(\alpha)-\alpha\quad\text{with}\quad h_{\gamma_3}(\alpha):=\overline{\varphi}_\infty^{\alpha,\beta(\alpha)}(\sigma\ud{u}) $$ is continuous and strictly decreasing on $[\alpha_3,\gamma_3)$, $H_{\gamma_3}(\alpha_3)>0$ and $\lim_{\alpha\uparrow\gamma_3}H_{\gamma_3}(\alpha) <0$. There exists a unique $\alpha_4\in (\alpha_3,\gamma_3)$ such that $H_{\gamma_3}(\alpha_4)=0$. Set $\beta_4:=\gamma_3-\alpha_4+k-1=\alpha_3+\beta_3-\alpha_4$ and $\gamma_4:=\alpha_4+\beta_4-k+1=\gamma_3$. Hence \begin{equation}\label{step4} \alpha_3<\alpha_4<\gamma_3\quad\text{and}\quad 1<\beta_4<\beta_3\quad\text{and}\quad \gamma_4=\gamma_3\,. \end{equation} Repeating steps 3 and 4 we get two monotone sequences $\{\alpha_n\}$ and $\{\beta_n\}$. We set $\bar{\alpha}:=\lim_{n\rightarrow\infty}\alpha_n$ and $\bar{\beta}:=\lim_{n\rightarrow\infty}\beta_n$. We consider briefly the changes which occur when $k\geq 3$. Step $1$ remains the same. In step $2$ we solve the equation $H_{\gamma_1}(\alpha)=0$ on $[0,1)$ instead of $[0,\gamma_1)$. The proof that $H_{\gamma_1}(\alpha_1)>0$ remains the same. We prove that $\lim_{\alpha\uparrow 1}H_{\gamma_1}(\alpha)<0$. Corollary \ref{cor3.3} implies that $$ \gamma_1=\overline{\varphi}_\infty^{\alpha_1,\beta_1}(\sigma\ud{v})= \overline{\varphi}_\infty^{\alpha_1,\beta(\alpha_1)}(\sigma\ud{v})> \overline{\varphi}_\infty^{\alpha,\beta(\alpha)}(\sigma\ud{v})\quad \forall\alpha>\alpha_1\,. $$ Since $\sigma\ud{u}\preceq \ud{v}$ and $\beta(\alpha_1)=\beta_1$, $$ \overline{\varphi}_\infty^{\alpha,\beta(\alpha)}(\sigma\ud{u})\leq \overline{\varphi}^{\alpha,\beta(\alpha)}\big(v_0+\overline{\varphi}_\infty^{\alpha,\beta(\alpha)}(\sigma\ud{v})\big) \leq \overline{\varphi}^{\alpha_1,\beta(\alpha_1)} \big(v_0+\overline{\varphi}_\infty^{\alpha,\beta(\alpha)}(\sigma\ud{v})\big)<1\,. $$ Instead of \eqref{step2} we have $$ \alpha_1<\alpha_2<1\quad\text{and}\quad 1<\beta_2<\beta_1\quad\text{and}\quad \gamma_2=\gamma_1\,. 
$$ Estimate \eqref{esti3} is still valid in step 3 with $k\geq 3$. Hence $G_{\alpha_3}(\gamma_1)<0$. We solve the equation $G_{\alpha_3}(\gamma)=0$ on $[0,\gamma_1]$. We have $$ \overline{\varphi}_\infty^{\alpha_3,\beta(\gamma_1)}(\sigma\ud{u})= \overline{\varphi}_\infty^{\alpha_2,\beta_2}(\sigma\ud{u})=\alpha_2\,. $$ By Corollary \ref{cor3.2} we get $$ \overline{\varphi}_\infty^{\alpha_3,\beta(\gamma)}(\sigma\ud{u})> \overline{\varphi}_\infty^{\alpha_2,\beta_2}(\sigma\ud{u})=\alpha_2 \quad \forall\gamma<\gamma_1\,. $$ Since $\ud{u}\preceq\sigma\ud{v}$, $$ \overline{\varphi}_\infty^{\alpha_3,\beta(0)}(\sigma\ud{v})\geq \overline{\varphi}^{\alpha_3,\beta(0)}\big(u_0+\overline{\varphi}_\infty^{\alpha_3,\beta(0)}(\sigma\ud{u})\big) \geq \overline{\varphi}^{\alpha_2,\beta(\gamma_1)}\big(u_0+\overline{\varphi}_\infty^{\alpha_3,\beta(0)}(\sigma\ud{u})\big)>0\,. $$ Estimate \eqref{esti4} is still valid in step 4 so that $H_{\gamma_3}(\alpha_3)>0$. Corollary \ref{cor3.3} implies that $$ \gamma_3=\overline{\varphi}_\infty^{\alpha_3,\beta_3}(\sigma\ud{v})= \overline{\varphi}_\infty^{\alpha_3,\beta(\alpha_3)}(\sigma\ud{v})> \overline{\varphi}_\infty^{\alpha,\beta(\alpha)}(\sigma\ud{v})\quad \forall\alpha>\alpha_3\,. $$ Therefore $$ \overline{\varphi}_\infty^{\alpha,\beta(\alpha)}(\sigma\ud{u})\leq \overline{\varphi}^{\alpha,\beta(\alpha)}\big(v_0+\overline{\varphi}_\infty^{\alpha,\beta(\alpha)}(\sigma\ud{v})\big) \leq \overline{\varphi}^{\alpha_3,\beta(\alpha_3)} \big(v_0+\overline{\varphi}_\infty^{\alpha,\beta(\alpha)}(\sigma\ud{v})\big)<1\,. $$ Instead of \eqref{step4} we have $$ \alpha_3<\alpha_4<1\quad\text{and}\quad 1<\beta_4<\beta_3\quad\text{and}\quad \gamma_4=\gamma_3\,. $$ Assume that $\bar{\beta}>1$. Then $1<\bar{\beta}\leq\beta_n$ for all $n$. We have $$ \overline{\varphi}_\infty^{\alpha_n,\beta_n}(\sigma\ud{v})=\gamma_n\,,\quad\text{$n$ odd} $$ and $$ \overline{\varphi}_\infty^{\alpha_n,\beta_n}(\sigma\ud{u})=\alpha_n\,,\quad\text{$n$ even}\,. 
$$ Let $\bar{\gamma}=\bar{\alpha}+\bar{\beta}-k+1$. For $n$ odd, let $\beta_n^*:=\bar{\gamma}-\alpha_n+k-1$; using Lemma \ref{lem3.3.2} we get \begin{align*} |\overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\sigma\ud{v})-\bar{\gamma}|&\leq |\overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\sigma\ud{v})- \overline{\varphi}_\infty^{\alpha_n,\beta_n^*}(\sigma\ud{v})|+ |\overline{\varphi}_\infty^{\alpha_n,\beta_n^*}(\sigma\ud{v})- \overline{\varphi}_\infty^{\alpha_n,\beta_n}(\sigma\ud{v})|+|\gamma_n-\bar{\gamma}|\\ &\leq\frac{1}{\bar{\beta}-1}(2|\bar{\alpha}-\alpha_n|+ |\bar{\beta}-\beta_n|)+|\gamma_n-\bar{\gamma}|\,, \end{align*} since $\beta_n^*=\bar{\beta}+\bar{\alpha}-\alpha_n$. Letting $n$ go to infinity we get $\overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\sigma\ud{v})=\bar{\gamma}$. Similarly we prove $\overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\sigma\ud{u})=\bar{\alpha}$. \qed \begin{cor}\label{cor3.4} Suppose that $(\ud{u},\ud{v})$, respectively $(\ud{u}^\prime,\ud{v}^\prime)$, verify the hypothesis of Proposition \ref{pro3.3} with $k\geq 2$, respectively with $k^\prime\geq 2$. If $k\geq k^\prime$, $\ud{u}\preceq\ud{u}^\prime$ and $\ud{v}^\prime\preceq\ud{v}$, then $\bar{\beta}^\prime\leq\bar{\beta}$ and $\bar{\alpha}^\prime\geq \bar{\alpha}$. \end{cor} \medbreak\noindent{\bf Proof}:\enspace We consider the case $k=k^\prime$, whence $\sigma\ud{v}^\prime\preceq\sigma\ud{v}$. From the proof of Proposition \ref{pro3.3} we get $\gamma^\prime_1\leq\gamma_1$ and $\alpha_1^\prime\geq\alpha_1$. Suppose that $\gamma^\prime_j\leq\gamma_j$ and $\alpha_j^\prime\geq\alpha_j$ for $j=1,\ldots,n$. If $n$ is even, then $\alpha_{n+1}^\prime=\alpha_{n}^\prime$ and $\alpha_{n+1}=\alpha_{n}$. We prove that $\gamma_{n+1}^\prime\leq\gamma_{n+1}$. 
We have $$ \gamma_{n+1}^\prime=\overline{\varphi}_\infty^{\alpha_{n+1}^\prime,\beta(\gamma_{n+1}^\prime)}(\sigma\ud{v}^\prime) \leq \overline{\varphi}_\infty^{\alpha_{n+1}^\prime,\beta(\gamma_{n+1}^\prime)}(\sigma\ud{v})\leq \overline{\varphi}_\infty^{\alpha_{n+1},\beta(\gamma_{n+1}^\prime)}(\sigma\ud{v})\implies \gamma_{n+1}\geq\gamma_{n+1}^\prime\,. $$ If $n$ is odd, then $\gamma_{n+1}^\prime=\gamma_{n}^\prime$ and $\gamma_{n+1}=\gamma_{n}$. We prove that $\alpha_{n+1}^\prime\geq\alpha_{n+1}$. We have \begin{align*} \alpha_{n+1}&=\overline{\varphi}_\infty^{\alpha_{n+1},\beta(\alpha_{n+1})}(\sigma\ud{u}) \leq \overline{\varphi}_\infty^{\alpha_{n+1},\beta(\alpha_{n+1})}(\sigma\ud{u}^\prime)= \overline{\varphi}_\infty^{\alpha_{n+1},\gamma_{n+1}-\alpha_{n+1}+k-1}(\sigma\ud{u}^\prime)\\ & \leq \overline{\varphi}_\infty^{\alpha_{n+1},\gamma_{n+1}^\prime-\alpha_{n+1}+k-1}(\sigma\ud{u}^\prime) \implies \alpha_{n+1}^\prime\geq\alpha_{n+1}\,. \end{align*} \qed We state a uniqueness result. The proof uses Theorem \ref{thm3.1}. \begin{pro}\label{prounicity} Let $k\geq 2$, $\ud{u}, \ud{v}\in{\tt A}^{\Z_+}$, $u_0=0$ and $v_0=k-1$, and assume that \eqref{condition} holds. Then there is at most one solution $(\alpha,\beta)\in[0,1]\times[1,\infty)$ for the equations $$ \overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{u})=\alpha\quad\text{and}\quad \overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{v})=\gamma\,. $$ \end{pro} \medbreak\noindent{\bf Proof}:\enspace Assume that there are two solutions $(\alpha_1,\beta_1)$ and $(\alpha_2,\beta_2)$ with $\beta_1\leq \beta_2$. If $\alpha_2>\alpha_1$, then $$ \alpha_2-\alpha_1=\overline{\varphi}_\infty^{\alpha_2,\beta_2}(\sigma\ud{u})- \overline{\varphi}_\infty^{\alpha_1,\beta_1}(\sigma\ud{u})\leq 0\,, $$ which is impossible. Therefore $\alpha_2\leq\alpha_1$. 
If $\beta_1=\beta_2$, then $$ 0\geq\alpha_2-\alpha_1=\overline{\varphi}_\infty^{\alpha_2,\beta_2}(\sigma\ud{u})- \overline{\varphi}_\infty^{\alpha_1,\beta_1}(\sigma\ud{u})\geq 0\,, $$ which implies $\alpha_2=\alpha_1$. Therefore we assume that $\alpha_2\leq\alpha_1$ and $\beta_1<\beta_2$. However, Theorem \ref{thm3.1} implies that $$ \log_2\beta_1=h(\mathbf\Sigma(\ud{u},\ud{v}))=\log_2\beta_2\,, $$ which is impossible. \qed \subsection{Computation of the topological entropy of $\mathbf\Sigma(\ud{u},\ud{v})$}\label{topological} We compute the entropy of the shift space $\mathbf\Sigma(\ud{u},\ud{v})$ where $(\ud{u},\ud{v})$ is a pair of strings verifying $u_0=0$, $v_0=k-1$ and \eqref{condition}. The main result is Theorem \ref{thm3.1}. The idea for computing the topological entropy is to compute $\bar{\alpha}$ and $\bar{\beta}$ by the algorithm of section \ref{subsectionalgo} and to use the fact that $h\big(\mathbf\Sigma(\ud{u}^{\bar{\alpha},\bar{\beta}},\ud{v}^{\bar{\alpha},\bar{\beta}})\big)=\log_2\bar{\beta}$ (see e.g. \cite{Ho1}). The most difficult case is when $\ud{u}$ and $\ud{v}$ are both periodic. Assume that the string $\ud{u}:=\ud{a}^\infty$ has minimal period $p$, $|\ud{a}|=p$, and that the string $\ud{v}:=\ud{b}^\infty$ has minimal period $q$, $|\ud{b}|=q$. If $a_0=a_{p-1}=0$, then $\ud{u}=\ud{0}$ and $p=1$. Indeed, if $a_0=a_{p-1}=0$, then $\ud{a}\ud{a}=({\tt p}\ud{a})00({\tt s}\ud{a})$; the result follows from \eqref{condition}. Similarly, if $b_0=b_{q-1}=k-1$, then $\ud{v}=(k-1)^\infty$ and $q=1$. These cases are similar to the case when only one of the strings $\ud{u}$ and $\ud{v}$ is periodic and are simpler than the generic case of two periodic strings, which we treat in detail. The setting for subsection \ref{topological} is the following one. The string $\ud{u}:=\ud{a}^\infty$ has minimal period $p\geq 2$ with $u_0=0$, or $\ud{u}=(1)^\infty$. 
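As a purely numerical aside, the fixed-point relation $\overline{\varphi}_\infty(\ud{x})=\overline{\varphi}(x_0+\overline{\varphi}_\infty(\sigma\ud{x}))$, which drives the algorithm of section \ref{subsectionalgo}, is easy to approximate for periodic strings. The Python sketch below is an illustration, not part of the formal development: it assumes the truncated affine form $\overline{\varphi}^{\alpha,\beta}(t)=\min(1,\max(0,(t-\alpha)/\beta))$ suggested by the graph of $\overline{\varphi}^{\alpha,\beta}$ drawn above, and the function names are ours.

```python
def phi(t, alpha, beta):
    # Assumed truncated affine form of the map \overline{\varphi}^{alpha,beta}:
    # 0 for t <= alpha, slope 1/beta on [alpha, alpha+beta], 1 above.
    return min(1.0, max(0.0, (t - alpha) / beta))

def phi_inf(word, alpha, beta, reps=200):
    # Approximate phi_infty on the periodic string word^infty by iterating
    # phi_infty(x) = phi(x_0 + phi_infty(sigma x)) from the back of the word.
    # For beta > 1 the map phi is a contraction, so the iteration converges.
    val = 0.0
    for _ in range(reps):
        for letter in reversed(word):
            val = phi(letter + val, alpha, beta)
    return val

# Example (k = 2): the periodic string (10)^infty with alpha = 0.3 fixed.
# Lemma 3.3.2 predicts, for 1 <= beta < beta',
#   0 <= phi_inf(beta) - phi_inf(beta') <= (beta' - beta) / (beta' - 1).
gap = phi_inf([1, 0], 0.3, 1.4) - phi_inf([1, 0], 0.3, 1.8)
bound = (1.8 - 1.4) / (1.8 - 1.0)
```

For these sample parameters the iteration converges to $17/24\approx 0.708$ for $\beta=1.4$ and $3/7\approx 0.429$ for $\beta^\prime=1.8$, and the gap $\approx 0.280$ indeed lies below the Lipschitz bound $0.5$ of Lemma \ref{lem3.3.2}.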
The string $\ud{v}:=\ud{b}^\infty$ has minimal period $q\geq 2$ with $v_0=k-1$, or $\ud{v}=(k-2)^\infty$. We also consider the strings $\ud{u}^*=\ud{a}^\prime\ud{b}^\infty$ and $\ud{v}^*=\ud{b}^\prime\ud{a}^\infty$ with $\ud{a}^\prime={\tt p}\ud{a}(u_{p-1}-1)$ and $\ud{b}^\prime={\tt p}\ud{b}(v_{q-1}+1)$. We write $\mathbf\Sigma\equiv\mathbf\Sigma(\ud{u},\ud{v})$, $\mathbf\Sigma^*\equiv \mathbf\Sigma(\ud{u}^*,\ud{v}^*)$, $\cG\equiv \cG(\ud{u},\ud{v})$ and $\cG^*\equiv \cG(\ud{u}^*,\ud{v}^*)$. The main point is to prove that $h(\cG)=h(\cG^*)$ by comparing the follower-set graphs $\cG$ and $\cG^*$. \begin{lem}\label{lemgraph} 1) In the above setting the vertices of the graph $\cG$ are ${\mathcal F}_\epsilon$, ${\mathcal F}_{\ud{w}}$ with $\ud{w}$ a prefix of ${\tt p}\ud{a}$ or of ${\tt p}\ud{b}$, ${\tt p}\ud{a}$ and ${\tt p}\ud{b}$ included. \\ 2) Let $r:=|v({\tt p}\ud{a})|$. If $u_{p-1}\not=v_r$, then ${\mathcal F}_{\ud{a}}={\mathcal F}_\epsilon$ and there is an edge labeled by $u_{p-1}$ from ${\mathcal F}_{{\tt p}\ud{a}}$ to ${\mathcal F}_\epsilon$. If $u_{p-1}=v_r$, then ${\mathcal F}_{\ud{a}}={\mathcal F}_{v({\tt p}\ud{a})v_r}$ and there is a single edge, labeled by $u_{p-1}=v_r$, from ${\mathcal F}_{{\tt p}\ud{a}}$ to ${\mathcal F}_{v({\tt p}\ud{a})v_r}$. If $k=2$ the first possibility is excluded.\\ 3) Let $s:=|u({\tt p}\ud{b})|$. 
If $v_{q-1}\not=u_s$, then ${\mathcal F}_{\ud{b}}={\mathcal F}_\epsilon$ and there is an edge labeled by $v_{q-1}$ from ${\mathcal F}_{{\tt p}\ud{b}}$ to ${\mathcal F}_\epsilon$. If $v_{q-1}=u_s$, then ${\mathcal F}_{\ud{b}}={\mathcal F}_{u({\tt p}\ud{b})u_s}$ and there is a single edge, labeled by $v_{q-1}=u_s$, from ${\mathcal F}_{{\tt p}\ud{b}}$ to ${\mathcal F}_{u({\tt p}\ud{b})u_s}$. If $k=2$ the first possibility is excluded. \end{lem} \medbreak\noindent{\bf Proof}:\enspace Suppose that $\ud{w}$ and $\ud{w}\ud{w}^\prime$ are two prefixes of ${\tt p}\ud{a}$. We show that ${\mathcal F}_{\ud{w}}\not={\mathcal F}_{\ud{w}\ud{w}^\prime}$. Write $\ud{u}=\ud{w}\ud{x}$ and $\ud{u}=\ud{w}\ud{w}^\prime\ud{y}$ and suppose that ${\mathcal F}_{\ud{w}}={\mathcal F}_{\ud{w}\ud{w}^\prime}$. Then (see \eqref{3.2.3}) $\ud{x}=\ud{y}=\sigma^p\ud{u}$, so that $\ud{u}=(\ud{w}^\prime)^\infty$, contradicting the minimality of the period $p$. Consider the vertex ${\mathcal F}_{{\tt p}\ud{a}}$ of $\cG$. We have $$ {\mathcal F}_{{\tt p}\ud{a}}=\{\ud{x}\in\mathbf\Sigma\,{:}\; \sigma^{p-1}\ud{u}\preceq\ud{x}\preceq \sigma^{r}\ud{v}\}\quad\text{where $r=|v({\tt p}\ud{a})|$}\,. $$ Let $\ud{d}$ be the prefix of $\ud{v}$ of length $r+1$, so that ${\tt p}\ud{d}=v({\tt p}\ud{a})$. 
If $u_{p-1}\not=v_{r}$, then there is an edge labeled by $u_{p-1}$ from ${\mathcal F}_{{\tt p}\ud{a}}$ to ${\mathcal F}_{\ud{a}}={\mathcal F}_{\epsilon}$ (since $\sigma^p\ud{u}=\ud{u}$) and an edge labeled by $v_{r}$ from ${\mathcal F}_{{\tt p}\ud{a}}$ to ${\mathcal F}_{\ud{d}}$. There may be other labeled edges from ${\mathcal F}_{{\tt p}\ud{a}}$ to ${\mathcal F}_\epsilon$ (see Lemma \ref{lem3.2.2}). If $u_{p-1}=v_{r}$, then there is a single out-going edge labeled by $u_{p-1}$ from ${\mathcal F}_{{\tt p}\ud{a}}$ to ${\mathcal F}_{\ud{a}}$ and $v(\ud{a})=\ud{d}$. We prove that ${\mathcal F}_{\ud{a}}={\mathcal F}_{\ud{d}}$. If $u(\ud{d})=\epsilon$, the result is true, since in that case $$ {\mathcal F}_{\ud{d}}=\{\ud{x}\in\mathbf\Sigma\,{:}\; \ud{u}\preceq\ud{x}\preceq \sigma^{r+1}\ud{v}\}=\{\ud{x}\in\mathbf\Sigma\,{:}\; \sigma^p\ud{u}\preceq\ud{x}\preceq \sigma^{r+1}\ud{v}\}={\mathcal F}_{\ud{a}}\,. $$ We exclude the possibility $u(\ud{d})\not=\epsilon$. Suppose that $\ud{w}:=u(\ud{d})$ is non-trivial ($|u(\ud{d})|<p$). We can write $\ud{a}=\ud{a}^{\prime\prime}\ud{w}$ and $\ud{a}=\ud{w}\widehat{\ud{a}}$ since $\ud{w}$ is a prefix of $\ud{u}$, and consequently $\ud{a}\ud{a}=\ud{a}^{\prime\prime}\ud{w}\ud{w}\widehat{\ud{a}}$. 
From Lemma \ref{lem3.2.0} we conclude that $\ud{w}\ud{w}$ is a prefix of $\ud{u}$, so that $\ud{a}\ud{u}=\ud{a}^{\prime\prime}\ud{w}\ud{w}\ud{w}\cdots$, proving that $\ud{u}$ has period $|\ud{w}|$, contradicting the hypothesis that $p$ is the minimal period of $\ud{u}$. If $k=2$ the first possibility is excluded because $u_{p-1}\not=0$ and we have $u_{p-1}\preceq v_r$ by $\sigma^{p-r}\ud{u}\preceq\ud{v}$. The discussion concerning the vertex ${\mathcal F}_{{\tt p}\ud{b}}$ is similar. \qed \begin{pro}\label{pro3.6} Consider the above setting. If $h(\mathbf\Sigma)>0$, then $h(\mathbf\Sigma)=h(\mathbf\Sigma^*)$. \end{pro} \medbreak\noindent{\bf Proof}:\enspace Consider the vertex ${\mathcal F}_{{\tt p}\ud{a}}$ of $\cG^*$. In that case $(u_{p-1}-1)\not=v_{r}$ so that we have an additional edge labeled by $u_{p-1}-1$ from ${\mathcal F}_{{\tt p}\ud{a}}$ to ${\mathcal F}_{\ud{a}^\prime}$ (see proof of Lemma \ref{lemgraph}), otherwise all out-going edges from ${\mathcal F}_{{\tt p}\ud{a}}$, which are present in the graph $\cG$, are also present in $\cG^*$. Let $v^*(\ud{w})$ be the longest suffix of $\ud{w}$, which is a prefix of $\ud{v}^*$. Then $$ {\mathcal F}_{\ud{a}^\prime}=\{\ud{x}\in\mathbf\Sigma^*\,{:}\; \sigma^{p}\ud{u}^*\preceq\ud{x}\preceq \ud{v}^*\}= \{\ud{x}\in\mathbf\Sigma^*\,{:}\; \ud{v}\preceq\ud{x}\preceq \ud{v}^*\}\,. $$ Similarly, there is an additional edge labeled by $v_{q-1}+1$ from ${\mathcal F}_{{\tt p}\ud{b}}$ to ${\mathcal F}_{\ud{b}^\prime}$. Let $u^*(\ud{w})$ be the longest suffix of $\ud{w}$, which is a prefix of $\ud{u}^*$. 
Then $$ {\mathcal F}_{\ud{b}^\prime}=\{\ud{x}\in\mathbf\Sigma^*\,{:}\; \ud{u}^*\preceq\ud{x}\preceq \sigma^q\ud{v}^*\}= \{\ud{x}\in\mathbf\Sigma^*\,{:}\; \ud{u}^*\preceq\ud{x}\preceq \ud{u}\}\,. $$ The structure of the graph $\cG^*$ is very simple from the vertices ${\mathcal F}_{\ud{a}^\prime}$ and ${\mathcal F}_{\ud{b}^\prime}$. There is a single out-going edge from ${\mathcal F}_{\ud{a}^\prime}$ to ${\mathcal F}_{\ud{a}^{\prime}v_0}$, from ${\mathcal F}_{\ud{a}^{\prime}v_0}$ to ${\mathcal F}_{\ud{a}^{\prime}v_0v_1}$ and so on, until we reach the vertex ${\mathcal F}_{\ud{a}^{\prime}{\tt p}\ud{b}}$. From that vertex there are an out-going edge labeled by $v_{q-1}$ to ${\mathcal F}_{\ud{a}^\prime}$ and an out-going edge labeled by $v_{q-1}+1$ to ${\mathcal F}_{\ud{b}^\prime}$. Similarly, there is a single out-going edge from ${\mathcal F}_{\ud{b}^\prime}$ to ${\mathcal F}_{\ud{b}^{\prime}u_0}$, from ${\mathcal F}_{\ud{b}^{\prime}u_0}$ to ${\mathcal F}_{\ud{b}^{\prime}u_0u_1}$ and so on, until we reach the vertex ${\mathcal F}_{\ud{b}^{\prime}{\tt p}\ud{a}}$. From that vertex there are an out-going edge labeled by $u_{p-1}$ to ${\mathcal F}_{\ud{b}^\prime}$ and an out-going edge labeled by $u_{p-1}-1$ to ${\mathcal F}_{\ud{a}^\prime}$. 
Let us denote that part of $\cG^*$ by $\cG^*\backslash\cG$. This subgraph is strongly connected. The graph $\cG^*$ consists of the union of $\cG$ and $\cG^*\backslash\cG$, with the addition of the two edges from ${\mathcal F}_{{\tt p}\ud{a}}$ to ${\mathcal F}_{\ud{a}^\prime}$ and from ${\mathcal F}_{{\tt p}\ud{b}}$ to ${\mathcal F}_{\ud{b}^\prime}$. Using Theorem 1.7 of \cite{BGMY} it is easy to compute the entropy of the subgraph $\cG^*\backslash\cG$ (use as rome $\{{\mathcal F}_{\ud{a}^\prime}, {\mathcal F}_{\ud{b}^\prime}\}$). It is $\lambda^*:=\log_2\lambda_0$, where $\lambda_0$ is the largest root of the equation $$ \lambda^{-q}+\lambda^{-p}-1=0\,. $$ Hence $\lambda^*$ is equal to the entropy of a graph with two cycles of periods $p$ and $q$, rooted at a common point. To prove Proposition \ref{pro3.6} it is sufficient to exhibit a subgraph of $\cG$ whose entropy is larger than or equal to that of $\cG^*\backslash\cG$. If $k\geq 4$, then there is a subgraph with two cycles of length $1$ rooted at ${\mathcal F}_\epsilon$. Hence $h(\cG)\geq \log_22>\lambda^*$. If ${\mathcal F}_{\ud{a}}={\mathcal F}_\epsilon$ or ${\mathcal F}_{\ud{b}}={\mathcal F}_\epsilon$, which could happen only for $k\geq 3$ (see Lemma \ref{lemgraph}), then there is a subgraph of $\cG$ consisting of two cycles rooted at ${\mathcal F}_\epsilon$, one of length $p$ or of length $q$ and another one of length $1$. This also implies that $h(\cG)\geq\lambda^*$.
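As a side illustration (our own addition, not part of the proof), the largest root of the equation $\lambda^{-q}+\lambda^{-p}-1=0$ is easy to compute numerically: the left-hand side is strictly decreasing in $\lambda$ on $(1,\infty)$ and the root lies in $(1,2]$, so bisection applies. The function name `rose_entropy` below is ours.

```python
from math import log2

def rose_entropy(p, q, tol=1e-12):
    """Largest root of lam**(-p) + lam**(-q) = 1, found by bisection.

    This is the growth rate of a graph made of two cycles, of lengths
    p and q, attached at a single common vertex; the topological
    entropy of that graph is log2 of the returned value.
    """
    f = lambda lam: lam ** (-p) + lam ** (-q) - 1.0
    lo, hi = 1.0 + 1e-9, 2.0  # f(lo) > 0 >= f(2): the root is in (1, 2]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:   # mid is still left of the root
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Two loops of length 1 give the full 2-shift, entropy log2(2) = 1:
print(log2(rose_entropy(1, 1)))
# Cycle lengths 1 and 2 give the golden mean shift, entropy ~0.694:
print(log2(rose_entropy(1, 2)))
```

For instance, cycle lengths $p=2$, $q=3$ give the real root of $\lambda^3-\lambda-1=0$ (approximately $1.3247$), the same number that appears as $\beta$ in the example of Section \ref{section4}.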
Since the minimal periods of $\ud{u}$ and $\ud{v}$ are $p$ and $q$, it is impossible that ${\mathcal F}_{\ud{w}}={\mathcal F}_\epsilon$ for $\ud{w}$ a non-trivial prefix of ${\tt p}\ud{a}$ or ${\tt p}\ud{b}$. Therefore we assume that $k\leq 3$, ${\mathcal F}_{\ud{a}}\not={\mathcal F}_\epsilon$ and ${\mathcal F}_{\ud{b}}\not={\mathcal F}_{\epsilon}$. Let ${\mathcal H}$ be a strongly connected component of $\cG$ which has strictly positive entropy. If ${\mathcal F}_\epsilon$ is a vertex of ${\mathcal H}$, which happens only if $k=3$, then we conclude as above that $h(\cG)\geq\lambda^*$. Hence, we assume that ${\mathcal F}_\epsilon$ is not a vertex of ${\mathcal H}$. The vertices of ${\mathcal H}$ are indexed by prefixes of ${\tt p}\ud{a}$ and ${\tt p}\ud{b}$. Let ${\mathcal F}_{\ud{c}}$ be the vertex of ${\mathcal H}$ with $\ud{c}$ a prefix of $\ud{u}$ and $|\ud{c}|$ minimal; similarly, let ${\mathcal F}_{\ud{d}}$ be the vertex of ${\mathcal H}$ with $\ud{d}$ a prefix of $\ud{v}$ and $|\ud{d}|$ minimal. By our assumptions $r:=|\ud{c}|\geq 1$ and $s:=|\ud{d}|\geq 1$. The following argument is a simplified adaptation of the proof of Lemma 3 in \cite{Ho3}. The core of the argument is the content of Scholium \ref{sch}.
Consider the $v$-parsing of $\ud{a}$ from the prefix ${\tt p}\ud{c}$, and the $u$-parsing of $\ud{b}$ from the prefix ${\tt p}\ud{d}$, $$ \ud{a}=({\tt p}\ud{c})\ud{a}^1\cdots\ud{a}^k\quad\text{and}\quad \ud{b}=({\tt p}\ud{d})\ud{b}^1\cdots\ud{b}^\ell\,. $$ (From ${\tt p}\ud{c}$ onward the $v$-parsing of $\ud{a}$ does not depend on ${\tt p}\ud{c}$, since there is an in-going edge at ${\mathcal F}_{\ud{c}}$.) We claim that there exist an edge from ${\mathcal F}_{({\tt p}\ud{c})\ud{a}^1}$ to ${\mathcal F}_{\ud{d}}$ and an edge from ${\mathcal F}_{({\tt p}\ud{d})\ud{b}^1}$ to ${\mathcal F}_{\ud{c}}$. Suppose that this is not the case; for example, there is an edge from ${\mathcal F}_{({\tt p}\ud{c})\ud{a}^1\cdots \ud{a}^j}$ to ${\mathcal F}_{\ud{d}}$, but no edge from ${\mathcal F}_{({\tt p}\ud{c})\ud{a}^1\cdots \ud{a}^i}$ to ${\mathcal F}_{\ud{d}}$, $1\leq i<j$. This implies that $v(\ud{a}^j)={\tt s}\ud{a}^j={\tt p}(\ud{d})$ and $({\tt p}\ud{c})\ud{a}^1\cdots \ud{a}^jf^\prime$ is a prefix of $\ud{u}$ with $f^\prime\prec f$ and $f$ defined by $\ud{d}=({\tt p}\ud{d})f$. On the other hand there exists an edge from ${\mathcal F}_{({\tt p}\ud{c})\ud{a}^1\cdots \ud{a}^{j-1}}$ to ${\mathcal F}_{v(({\tt p}\ud{c})\ud{a}^1\cdots \ud{a}^{j-1})*}= {\mathcal F}_{v(\ud{a}^{j-1})*}$, with $*$ some letter of ${\tt A}$ and $v(\ud{a}^{j-1})*\not=\ud{d}$ by hypothesis. Let $e$ be the first letter of $\ud{a}^j$.
Then $*=(e+1)$, since we assume that ${\mathcal F}_\epsilon$ is not a vertex of ${\mathcal H}$ and consequently there are only two out-going edges from ${\mathcal F}_{({\tt p}\ud{c})\ud{a}^1\cdots \ud{a}^{j-1}}$. There exists an edge from ${\mathcal F}_{v(\ud{a}^{j-1})}$ to ${\mathcal F}_{u(v(\ud{a}^{j-1}))*}$, where $*$ is some letter of ${\tt A}$ (see Scholium \ref{sch}). Again, since ${\mathcal F}_\epsilon$ is not a vertex of ${\mathcal H}$, we must have $*=e$. Either $u(v(\ud{a}^{j-1}))e=\ud{c}$ or $u(v(\ud{a}^{j-1}))e\not=\ud{c}$. In the latter case, by the same reasoning, there exists an edge from ${\mathcal F}_{u(v(\ud{a}^{j-1}))}$ to ${\mathcal F}_{v(u(v(\ud{a}^{j-1})))(e+1)}$ and $v(u(v(\ud{a}^{j-1})))(e+1)\not=\ud{d}$ by hypothesis; there exists also an edge from ${\mathcal F}_{v(u(v(\ud{a}^{j-1})))}$ to ${\mathcal F}_{u(v(u(v(\ud{a}^{j-1}))))e}$. After a finite number of steps we get $$ u(\cdots v(u(v(\ud{a}^{j-1}))))e=\ud{c}\,. $$ This implies that ${\tt p}\ud{c}$ is a suffix of $\ud{a}^{j-1}$, and the last letter of $\ud{c}$ (or the first letter of $\ud{a}^1$) is $e$. Hence $\ud{a}^1=e\ud{d}\cdots$. If we write $\ud{a}^{j-1}=\ud{g}({\tt p}\ud{c})$ we have $$ ({\tt p}\ud{c})\ud{a}^1=\ud{c}\ud{d}\cdots=\ud{c}({\tt p}\ud{d})f\cdots \quad\text{and}\quad\ud{a}^{j-1}\ud{a}^j f^\prime=\ud{g}({\tt p}\ud{c})e({\tt p}\ud{d})f^\prime=\ud{g}\ud{c}({\tt p}\ud{d})f^\prime\,. $$ We get a contradiction with \eqref{condition} since $\ud{c}({\tt p}\ud{d})f^\prime\prec\ud{c}({\tt p}\ud{d})f$.
Consider the smallest strongly connected subgraph ${\mathcal H}^\prime$ of ${\mathcal H}$ which contains the vertices ${\mathcal F}_{\ud{c}}$, ${\mathcal F}_{({\tt p}\ud{c})\ud{a}^1}$, ${\mathcal F}_{\ud{d}}$ and ${\mathcal F}_{({\tt p}\ud{d})\ud{b}^1}$. Since ${\mathcal H}$ has strictly positive entropy, there exists at least one edge from some other vertex $A$ of ${\mathcal H}$ to ${\mathcal F}_{\ud{c}}$ or ${\mathcal F}_{\ud{d}}$, say ${\mathcal F}_{\ud{c}}$. Define $\cG^\prime$ as the smallest strongly connected subgraph of ${\mathcal H}$, which contains ${\mathcal H}^\prime$ and $A$. This graph has two cycles: one passing through the vertices ${\mathcal F}_{\ud{c}}$, ${\mathcal F}_{({\tt p}\ud{c})\ud{a}^1}$, ${\mathcal F}_{\ud{d}}$, ${\mathcal F}_{({\tt p}\ud{d})\ud{b}^1}$ and ${\mathcal F}_{\ud{c}}$, the other one passing through the vertices ${\mathcal F}_{\ud{c}}$, ${\mathcal F}_{({\tt p}\ud{c})\ud{a}^1}$, ${\mathcal F}_{\ud{d}}$, ${\mathcal F}_{({\tt p}\ud{d})\ud{b}^1}$, $A$ and ${\mathcal F}_{\ud{c}}$.
The first cycle has length $|\ud{a}^1|+|\ud{b}^1|$, and the second cycle has length $|\ud{a}^1|+|\ud{b}^1|+\cdots+|\ud{b}^j|$ if $A={\mathcal F}_{({\tt p}\ud{d})\ud{b}^1\cdots\ud{b}^j}$. We also have $$ |\ud{c}|=|\ud{b}^1|=|\ud{b}^j|\quad\text{and}\quad |\ud{a}^1|=|\ud{d}|\,. $$ Therefore one cycle has period $$ |\ud{a}^1|+|\ud{b}^1|\leq |\ud{a}^1|+|\ud{c}|\leq p\,, $$ and the other one has period $$ |\ud{d}|+|\ud{b}^1|+\cdots+|\ud{b}^j|\leq q\,. $$ \qed \begin{thm}\label{thm3.1} Let $k\geq 2$ and let $\ud{u}\in{\tt A}^{\Z_+}$ and $\ud{v}\in{\tt A}^{\Z_+}$ be such that $u_0=0$, $v_0=k-1$ and $$ \ud{u}\preceq\sigma^n\ud{u}\preceq \ud{v}\quad \forall\,n\geq 0\quad\text{and}\quad \ud{u}\preceq \sigma^n\ud{v}\preceq \ud{v}\quad \forall\,n\geq 0\,. $$ If $k=2$ we also assume that $\sigma\ud{u}\preceq\sigma\ud{v}$. Let $\bar{\alpha}$ and $\bar{\beta}$ be the two real numbers defined by the algorithm of Proposition \ref{pro3.3}. Then $$ h(\mathbf\Sigma(\ud{u},\ud{v}))=\log_2\bar{\beta}\,. $$ If $k=2$ and $\sigma\ud{v}\prec \sigma\ud{u}$, then $h(\mathbf\Sigma(\ud{u},\ud{v}))=0$. \end{thm} \medbreak\noindent{\bf Proof}:\enspace Suppose first that $\bar{\beta}>1$. By Propositions \ref{pro3.3} and \ref{pro3.4ter} we have $$ \mathbf\Sigma(\ud{u}^{\bar{\alpha},\bar{\beta}},\ud{v}^{\bar{\alpha},\bar{\beta}})\subset \mathbf\Sigma(\ud{u},\ud{v})\subset\mathbf\Sigma(\ud{u}^{{\bar{\alpha},\bar{\beta}}}_*,\ud{v}^{{\bar{\alpha},\bar{\beta}}}_*) \,. $$ From Proposition \ref{pro3.6} we get $$ h(\mathbf\Sigma(\ud{u}^{\bar{\alpha},\bar{\beta}},\ud{v}^{\bar{\alpha},\bar{\beta}}))= h(\mathbf\Sigma(\ud{u}^{{\bar{\alpha},\bar{\beta}}}_*,\ud{v}^{{\bar{\alpha},\bar{\beta}}}_*))=\log_2\bar{\beta}\,. $$ Suppose now that $\bar{\beta}=1$, with $\lim_n\alpha_n=\bar{\alpha}$ and $\lim_n\beta_n=\bar{\beta}=1$. We have $\alpha_n<1$ and $\beta_n>1$ (see proof of Proposition \ref{pro3.3}). Let $$ \ud{u}^n:=\ud{u}^{\alpha_n,\beta_n}_*\quad\text{and}\quad \ud{v}^n:=\ud{v}^{\alpha_n,\beta_n}_*\,.
$$ By Proposition \ref{pro3.4ter} point 3, $$ \ud{v}^{\alpha_1,\beta_1}\preceq \ud{v}\preceq \ud{v}^1\,. $$ By monotonicity, $$ \overline{\varphi}_\infty^{\alpha_2,\beta_2}(\sigma\ud{v}^1)\leq \overline{\varphi}_\infty^{\alpha_1,\beta_1}(\sigma\ud{v}^1)=\gamma_1=\gamma_2= \overline{\varphi}_\infty^{\alpha_2,\beta_2}(\sigma\ud{v}^2)\,. $$ Therefore $\ud{v}^1\preceq\ud{v}^2$ ($v^1_0=v^2_0$) and by Proposition \ref{pro3.4ter} point 2, $$ \ud{u}^2\preceq \ud{u}\preceq \ud{u}^{\alpha_2,\beta_2}\quad\text{and}\quad \ud{v}\preceq\ud{v}^2\,. $$ By monotonicity, $$ \overline{\varphi}_\infty^{\alpha_3,\beta_3}(\sigma\ud{u}^3)=\alpha_3=\alpha_2= \overline{\varphi}_\infty^{\alpha_2,\beta_2}(\sigma\ud{u}^2)\leq \overline{\varphi}_\infty^{\alpha_3,\beta_3}(\sigma\ud{u}^2)\,. $$ Therefore $\ud{u}^3\preceq\ud{u}^2$ and $$ \ud{u}^3\preceq \ud{u}\quad\text{and}\quad \ud{v}^{\alpha_3,\beta_3}\preceq\ud{v}\preceq\ud{v}^3\,. $$ Iterating this argument we conclude that $$ \ud{u}^n\preceq\ud{u}\quad\text{and}\quad \ud{v}\preceq\ud{v}^n\,. $$ These inequalities imply $$ h(\mathbf\Sigma(\ud{u},\ud{v}))\leq h(\mathbf\Sigma(\ud{u}^n,\ud{v}^n))=\log_2\beta_n\rightarrow 0\quad\text{for $n\rightarrow\infty$}\,. $$ Finally let $k=2$ and $\sigma\ud{v}\prec\sigma\ud{u}$. If $\sigma\ud{u}=(1)^\infty$, then $v_j=0$ for a single value of $j$, so that $h(\mathbf\Sigma(\ud{u},\ud{v}))=0$. Suppose that $\sigma\ud{u}\not=(1)^\infty$ and fix any $\beta>1$. The function $\alpha\mapsto\overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{u})$ is continuous and decreasing since $\overline{\varphi}^{\alpha,\beta}$ dominates $\overline{\varphi}^{\alpha^\prime,\beta}$ if $\alpha<\alpha^\prime$. There exists $\alpha\in(0,1)$ such that $\overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{u})=\alpha$.
If $v_0< v^{\alpha,\beta}_0$, then $\ud{v}\prec \ud{v}^{\alpha,\beta}$ and $\mathbf\Sigma(\ud{u},\ud{v})\subset\mathbf\Sigma(\ud{u},\ud{v}^{\alpha,\beta})$, whence $h(\mathbf\Sigma(\ud{u},\ud{v}))\leq\log_2\beta$. If $v_0= v^{\alpha,\beta}_0=1$, then $$ \overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{v})\leq \overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{u})=\alpha<\gamma= \overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{v}^{\alpha,\beta})\,. $$ The map $\overline{\varphi}_\infty^{\alpha,\beta}$ is continuous and non-decreasing on ${\tt A}^{\Z_+}$ so that $\sigma\ud{v}\prec \sigma\ud{v}^{\alpha,\beta}$, whence $\ud{v}\prec\ud{v}^{\alpha,\beta}$ and $h(\mathbf\Sigma(\ud{u},\ud{v}))\leq\log_2\beta$. Since $\beta>1$ is arbitrary, $h(\mathbf\Sigma(\ud{u},\ud{v}))=0$. \qed \section{Inverse problem for $\beta x+\alpha \mod 1$}\label{section4} \setcounter{equation}{0} In this section we solve the inverse problem for $\beta x+\alpha\mod 1$, namely the question: {\it given two strings $\ud{u}$ and $\ud{v}$ verifying \begin{equation}\label{4.1} \ud{u}\preceq\sigma^n\ud{u}\prec\ud{v} \quad\text{and}\quad \ud{u}\prec\sigma^n\ud{v}\preceq\ud{v} \quad\forall n\geq0\,, \end{equation} can we find $\alpha\in [0,1)$ and $\beta\in(1,\infty)$ so that $\ud{u}=\ud{u}^{\alpha,\beta}$ and $\ud{v}=\ud{v}^{\alpha,\beta}$? } \begin{pro}\label{pro3.5} Let the $\varphi$-expansion be valid. Let $\ud{u}$ be a solution of \eqref{eqalpha} and $\ud{v}$ a solution of \eqref{eqbeta}. If \eqref{4.1} holds, then $$ \ud{u}^{\alpha,\beta}=\ud{u}\iff \forall n\geq 0\,{:}\;\;\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^n\ud{u})<1 \,\iff\, \forall n\geq 0\,{:}\;\;\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^n\ud{v})>0\iff \ud{v}^{\alpha,\beta}=\ud{v}\,.
$$ \end{pro} \medbreak\noindent{\bf Proof}:\enspace The $\varphi$-expansion is valid, so that \eqref{validity1} is true, $$ \forall n\geq 0\,{:}\;\;\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^n\ud{u}^{\alpha,\beta})=T^n_{\alpha,\beta}(0)<1\,. $$ Proposition \ref{pro3.4} and Proposition \ref{pro3.4ter} point 2 imply $$ \ud{u}=\ud{u}^{\alpha,\beta}\,\iff\,\forall n\geq 0\,{:}\;\;\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^n\ud{u})<1\,. $$ Similarly $$ \ud{v}=\ud{v}^{\alpha,\beta}\,\iff\,\forall n\geq 0\,{:}\;\;\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^n\ud{v})>0\,. $$ Let $\ud{x}\prec\ud{x}^\prime$, $\ud{x}, \ud{x}^\prime\in \mathbf\Sigma(\ud{u},\ud{v})$. Let $\ell:=\min\{m\geq 0\,{:}\; x_m\not=x_m^\prime\}$. Then $$ \overline{\varphi}_\infty^{\alpha,\beta}(\ud{x})=\overline{\varphi}_\infty^{\alpha,\beta}(\ud{x}^\prime)\implies \overline{\varphi}_\infty^{\alpha,\beta}(\sigma^{\ell+1}\ud{x})=1\quad\text{and}\quad \overline{\varphi}_\infty^{\alpha,\beta}(\sigma^{\ell+1}\ud{x}^\prime)=0\,. $$ Indeed, $$ \overline{\varphi}_{\ell+1}^{\alpha,\beta}\big(x_0,\ldots,x_{\ell-1}, x_\ell+\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^{\ell+1}\ud{x})\big)= \overline{\varphi}_{\ell+1}^{\alpha,\beta}\big(x_0,\ldots,x_{\ell-1}, x_\ell^\prime+\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^{\ell+1}\ud{x}^\prime)\big)\,. $$ Therefore $x^\prime_\ell=x_\ell+1$, $\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^{\ell+1}\ud{x})=1$ and $\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^{\ell+1}\ud{x}^\prime)=0$. Suppose that $\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^k\ud{u})=1$, and apply the above result to $\sigma^k\ud{u}$ and $\ud{v}$ to get the existence of $m$ with $\overline{\varphi}_\infty^{\alpha,\beta}(\sigma^m\ud{v})=0$. \qed Let $\ud{u}\in{\tt A}^{\Z_+}$ with $u_0=0$ and $\ud{u}\preceq\sigma^n\ud{u}$ for all $n\geq 0$. We introduce the quantity $$ \widehat{\ud{u}}:=\sup\{\sigma^n\ud{u}\,{:}\; n\geq 0\}\,.
$$ We have $$ \sigma^n\widehat{\ud{u}}\preceq\widehat{\ud{u}}\quad\forall n\geq 0\,. $$ Indeed, if $\widehat{\ud{u}}$ is periodic, then this is immediate. Otherwise there exists $n_j$, with $n_j\uparrow\infty$ as $j\rightarrow\infty$, so that $\widehat{\ud{u}}=\lim_j\sigma^{n_j}\ud{u}$. By continuity $$ \sigma^n\widehat{\ud{u}}= \lim_{j\rightarrow\infty}\sigma^{n+n_j}\ud{u}\preceq\widehat{\ud{u}}\,. $$ \noindent {\bf Example.\,} We consider the strings $\ud{u}^\prime=(01)^\infty$ and $\ud{v}^\prime=(110)^\infty$. One can prove that $\ud{u}^\prime=\ud{u}^{\alpha,\beta}$ and $\ud{v}^\prime=\ud{v}^{\alpha,\beta}$ where $\beta$ is the largest solution of $$ \beta^6-\beta^5-\beta=\beta(\beta^2-\beta+1)(\beta^3-\beta-1)=0 $$ and $\alpha=(1+\beta)^{-1}$. With the notations of Proposition \ref{pro3.4ter} we have $$ \ud{a}=01\quad\ud{a}^\prime=00\quad\ud{b}=110\quad\ud{b}^\prime=111\,. $$ Let $$ \ud{u}:=(00110111)^\infty=(\ud{a}^\prime\ud{b}\ud{b}^\prime)^\infty\,. $$ We have $$ \widehat{\ud{u}}=(11100110)^\infty=(\ud{b}^\prime\ud{a}^\prime\ud{b})^\infty\,. $$ By definition $\overline{\varphi}_\infty^{\alpha,\beta}(\sigma\ud{u})=\alpha$. We have $$ (\ud{b})^\infty\preceq \widehat{\ud{u}}\preceq \ud{b}^\prime(\ud{a})^\infty\,. $$ From Proposition \ref{pro3.4ter} point 3 and Proposition \ref{pro3.6} we conclude that $\log_2\beta=h(\mathbf\Sigma(\ud{u},\widehat{\ud{u}}))$. \begin{thm}\label{thm4.1} Let $k\geq 2$ and let $\ud{u}\in{\tt A}^{\Z_+}$ and $\ud{v}\in{\tt A}^{\Z_+}$ be such that $u_0=0$, $v_0=k-1$ and \eqref{4.1} holds. If $k=2$ we also assume that $\sigma\ud{u}\preceq\sigma\ud{v}$. Set $\log_2\widehat{\beta}:=h(\mathbf\Sigma(\ud{u},\widehat{\ud{u}}))$. Let $\bar{\alpha}$ and $\bar{\beta}$ be defined by the algorithm of Proposition \ref{pro3.3}.
Then\\ 1)\, If $\widehat{\beta}<\bar{\beta}$, then $\ud{u}=\ud{u}^{\bar{\alpha},\bar{\beta}}$ and $\ud{v}=\ud{v}^{\bar{\alpha},\bar{\beta}} $.\\ 2)\, If $\widehat{\beta}=\bar{\beta}>1$ and $\ud{u}^{{\bar{\alpha},\bar{\beta}}}$ and $\ud{v}^{{\bar{\alpha},\bar{\beta}}}$ are not both periodic, then $\ud{u}=\ud{u}^{{\bar{\alpha},\bar{\beta}}}$ and $\ud{v}=\ud{v}^{{\bar{\alpha},\bar{\beta}}}$.\\ 3)\, If $\widehat{\beta}=\bar{\beta}>1$ and $\ud{u}^{{\bar{\alpha},\bar{\beta}}}$ and $\ud{v}^{{\bar{\alpha},\bar{\beta}}}$ are both periodic, then $\ud{u}\not=\ud{u}^{{\bar{\alpha},\bar{\beta}}}$ and $\ud{v}\not=\ud{v}^{{\bar{\alpha},\bar{\beta}}}$. \end{thm} \medbreak\noindent{\bf Proof}:\enspace Let $\widehat{\beta}<\bar{\beta}$. Suppose that $\ud{u}\not=\ud{u}^{\bar{\alpha},\bar{\beta}}$ or $\ud{v}\not=\ud{v}^{\bar{\alpha},\bar{\beta}} $. By Proposition \ref{pro3.5} $\ud{u}\not=\ud{u}^{\bar{\alpha},\bar{\beta}}$ and $\ud{v}\not=\ud{v}^{\bar{\alpha},\bar{\beta}} $, and there exists $n$ such that $\overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\sigma^n\ud{u})=1$. Hence $\overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\widehat{\ud{u}})=1$. If $\bar{\gamma}>0$, then $\widehat{\ud{u}}_0=v_0=k-1$ whence $\sigma\widehat{\ud{u}}\preceq\sigma\ud{v}$, so that $\overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\sigma\widehat{\ud{u}})=\bar{\gamma}$. By Propositions \ref{pro3.4ter} and \ref{pro3.6} we deduce that $$ \log_2\widehat{\beta}=h(\mathbf\Sigma(\ud{u},\widehat{\ud{u}})) =h(\mathbf\Sigma(\ud{u},\ud{v}))=\log_2\bar{\beta}\,, $$ a contradiction. If $\bar{\gamma}=0$, either $\widehat{\ud{u}}_0=k-1$ and $\overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\sigma\widehat{\ud{u}})=\bar{\gamma}$, and we get a contradiction as above, or $\widehat{\ud{u}}_0=k-2$ and $\overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\sigma\widehat{\ud{u}})=1$. 
In the latter case, since $\sigma\widehat{\ud{u}}\preceq \widehat{\ud{u}}$, we conclude that $\widehat{\ud{u}}_1=k-2$ and $\overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\sigma^2\widehat{\ud{u}})=1$. Using $\sigma^n\widehat{\ud{u}}\preceq \widehat{\ud{u}}$ we get $\widehat{\ud{u}}=(k-2)^\infty=\ud{v}^{{\bar{\alpha},\bar{\beta}}}$, so that $h(\mathbf\Sigma(\ud{u},\widehat{\ud{u}})) =h(\mathbf\Sigma(\ud{u},\ud{v}))$, a contradiction.\\ We prove 2. Suppose for example that $\ud{u}^{\bar{\alpha},\bar{\beta}}$ is not periodic. This implies that $\bar{\alpha}<1$, so that Proposition \ref{pro3.4} implies that $\ud{u}=\ud{u}^{\bar{\alpha},\bar{\beta}}$. We conclude using Proposition \ref{pro3.5}. The proof is similar if $\ud{v}^{\bar{\alpha},\bar{\beta}}$ is not periodic.\\ We prove 3. By Proposition \ref{pro3.5}, $\ud{u}=\ud{u}^{{\bar{\alpha},\bar{\beta}}}$ or $\ud{v}=\ud{v}^{{\bar{\alpha},\bar{\beta}}}$ if and only if $\ud{u}=\ud{u}^{{\bar{\alpha},\bar{\beta}}}$ and $\ud{v}=\ud{v}^{{\bar{\alpha},\bar{\beta}}}$. Suppose $\ud{u}=\ud{u}^{{\bar{\alpha},\bar{\beta}}}$; then $\ud{u}$ is periodic, so that $\widehat{\ud{u}}=\sigma^p\ud{u}$ for some $p$. This implies that $$ \overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\sigma\widehat{\ud{u}})\leq\overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\widehat{\ud{u}}) =\overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\sigma^p\ud{u})<1 $$ by Proposition \ref{pro3.5}. Let $\widehat{\ud{u}}_0\equiv \widehat{k}-1$. We can apply the algorithm of Proposition \ref{pro3.3} to the pair $(\ud{u},\widehat{\ud{u}})$ and get two real numbers $\widetilde{\alpha}$ and $\widetilde{\beta}$ (if $\widehat{k}=2$, using $\widehat{\beta}>1$ and Theorem \ref{thm3.1}, we have $\sigma\ud{u}\preceq\sigma\widehat{\ud{u}}$). Theorem \ref{thm3.1} implies $\widehat{\beta}=\widetilde{\beta}$, whence $\widetilde{\beta}=\bar{\beta}$.
The map $\alpha\mapsto\overline{\varphi}_\infty^{\alpha,\bar{\beta}}(\sigma\ud{u})$ is continuous and decreasing, so that $\alpha\mapsto\overline{\varphi}_\infty^{\alpha,\bar{\beta}}(\sigma\ud{u})-\alpha$ is strictly decreasing, whence there exists a unique solution to the equation $\overline{\varphi}_\infty^{\alpha,\bar{\beta}}(\sigma\ud{u})-\alpha=0$, which is $\bar{\alpha}=\widetilde{\alpha}$. Therefore $\overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\sigma\widehat{\ud{u}})<1$ and we must have $\widehat{k}=k$, whence $$ \overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\sigma\widehat{\ud{u}})=\bar{\alpha}+\bar{\beta}-k+1= \overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\sigma\ud{v})\,. $$ But this implies $\overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\widehat{\ud{u}})=1$, a contradiction. \qed \begin{thm}\label{thm4.2} Let $k\geq 2$ and let $\ud{u}\in{\tt A}^{\Z_+}$ and $\ud{v}\in{\tt A}^{\Z_+}$ be such that $u_0=0$, $v_0=k-1$ and \eqref{4.1} holds. If $k=2$ we also assume that $\sigma\ud{u}\preceq\sigma\ud{v}$. Let $\bar{\alpha}$ and $\bar{\beta}$ be defined by the algorithm of Proposition \ref{pro3.3}. If $h(\mathbf\Sigma(\ud{u},\widehat{\ud{u}}))>0$, then there exists $\ud{u}_*\succeq\widehat{\ud{u}}$ such that \begin{align*} \ud{u}_*\prec\ud{v}&\implies \text{$\ud{u}=\ud{u}^{\bar{\alpha},\bar{\beta}}$ and $\ud{v}=\ud{v}^{\bar{\alpha},\bar{\beta}}$}\\ \ud{u}_*\succ\ud{v}&\implies \text{$\ud{u}\not=\ud{u}^{\bar{\alpha},\bar{\beta}}$ and $\ud{v}\not=\ud{v}^{\bar{\alpha},\bar{\beta}}$} \end{align*} \end{thm} \medbreak\noindent{\bf Proof}:\enspace As in the proof of Theorem \ref{thm4.1} we define $\widetilde{k}$ and, by the algorithm of Proposition \ref{pro3.3} applied to the pair $(\ud{u},\widehat{\ud{u}})$, two real numbers $\widetilde{\alpha}$ and $\widetilde{\beta}$. By Theorem \ref{thm3.1}, $\log_2\widetilde{\beta}=h(\mathbf\Sigma(\ud{u},\widehat{\ud{u}}))$.
We set $$ \ud{u}_*:= \begin{cases}\ud{v}^{\widetilde{\alpha},\widetilde{\beta}}_*& \text{if $\ud{v}^{\widetilde{\alpha},\widetilde{\beta}}$ is periodic}\\ \ud{v}^{\widetilde{\alpha},\widetilde{\beta}}&\text{if $\ud{v}^{\widetilde{\alpha},\widetilde{\beta}}$ is not periodic}\,. \end{cases} $$ It is sufficient to show that $\ud{u}_*\prec\ud{v}$ implies $\bar{\beta}>\widetilde{\beta}$ (see Theorem \ref{thm4.1} point 1). Suppose, on the contrary, that $\bar{\beta}=\widetilde{\beta}$. Then $$ 1=\overline{\varphi}_\infty^{\widetilde{\alpha},\bar{\beta}}(\widehat{\ud{u}})\leq \overline{\varphi}_\infty^{\widetilde{\alpha},\bar{\beta}}(\ud{v})\,. $$ We have $\overline{\varphi}_\infty^{\bar{\alpha},\bar{\beta}}(\ud{v})=1$ and for $\alpha>\bar{\alpha}$, $\overline{\varphi}_\infty^{\alpha,\bar{\beta}}(\ud{v})<1$ (see Lemma \ref{lem3.3.1}). Therefore $\widetilde{\alpha}\leq\bar{\alpha}$. On the other hand, applying Corollary \ref{cor3.4} we get $\widetilde{\alpha}\geq\bar{\alpha}$, so that $\widetilde{\alpha}=\bar{\alpha}$ and $\widetilde{k}=k$. From Propositions \ref{pro3.4bis} or \ref{pro3.4ter} we get $\ud{v}\preceq \ud{u}_*$, a contradiction. Suppose that $\ud{u}_*\succ\ud{v}$. We have $\widehat{\ud{u}}\preceq\ud{v}\prec \ud{u}_*$, whence $h(\mathbf\Sigma(\ud{u},\widehat{\ud{u}}))=h(\mathbf\Sigma(\ud{u},\ud{u}_*))$ and therefore $\bar{\beta}=\widetilde{\beta}$. As above we show that $\bar{\alpha}=\widetilde{\alpha}$. Notice that if $\ud{u}^{\widetilde{\alpha},\widetilde{\beta}}$ is not periodic, then by Proposition \ref{pro3.4} $\ud{u}^{\widetilde{\alpha},\widetilde{\beta}}=\ud{u}$. If $\ud{v}^{\widetilde{\alpha},\widetilde{\beta}}$ is not periodic, then by Proposition \ref{pro3.4bis} $\ud{v}^{\widetilde{\alpha},\widetilde{\beta}}=\ud{v}$. If $\ud{v}^{\widetilde{\alpha},\widetilde{\beta}}$ is periodic, then inequalities \eqref{4.1} imply that we must have $\ud{v}^{\widetilde{\alpha},\widetilde{\beta}}_*\prec \ud{v}$.
Therefore we may have $\ud{u}_*\succ\ud{v}$ and inequalities \eqref{4.1} only if $\ud{u}^{\widetilde{\alpha},\widetilde{\beta}}$ and $\ud{v}^{\widetilde{\alpha},\widetilde{\beta}}$ are periodic. Suppose that it is the case. If $\ud{u}$ is not periodic, then using Proposition \ref{pro3.5} the second statement is true. If $\ud{u}$ is periodic, then $\widehat{\ud{u}}=\sigma^p\ud{u}$ for some $p$, whence $\overline{\varphi}_\infty^{\widetilde{\alpha},\widetilde{\beta}}(\sigma^p\ud{u})=1$; by Proposition \ref{pro3.5} $\ud{u}\not=\ud{u}^{\widetilde{\alpha},\widetilde{\beta}}$. \qed \newpage
\section{Introduction} Stellar evolution theory underpins much of observational astrophysics, yet significant uncertainties remain at low masses ($M\lesssim 0.8$ $M_{\odot}$) and young ages ($t \lesssim 1$ Gyr). Unfortunately, this mass and age range is also where observational constraints are scarce. The fundamental goal of stellar evolution theory is to accurately predict the observables (radius, temperature and luminosity) for a star of given mass, age and metallicity. The evolutionary pathway of a star is governed primarily by its mass, which is accessible only through study of gravitational interactions such as in binary or higher order multiple star systems. For eclipsing binaries (EBs) the ratio of the radii of the stars is attainable. Eclipsing binaries are particularly important objects if they are also detected as double-lined systems in spectra, as the individual masses and radii of both stars can be extracted from the combined light curves and radial velocity curves of the system. Radii can also be measured directly using interferometric techniques, but only for the brightest of nearby stars. When the inferred mass and radius values reach a precision of a few percent or less, they provide one of the strongest observational tests of stellar evolution theory available \citep[e.g.][]{Torres10,Stassun14}. Open clusters are fruitful astrophysical laboratories given that their members share broad coevality, composition and distance. The detection of multiple EBs in a given cluster, with each member of each pair sharing the same age and metallicity but spanning a range of masses, offers a particularly strong test of stellar evolution theory. The pursuit of EB parameters, among other science goals, has motivated numerous programmes to target open clusters via time-series photometry, e.g. 
the ground-based Monitor, PTF Orion and YETI projects \citep{Aigrain07,vanEyken11,Neuhauser11}, and space-based observations with CoRoT and \emph{Spitzer}\ \citep{Gillen14,Morales-Calderon12}. Furthermore, since March 2014, the re-purposed Kepler mission, \emph{K2}\ \citep{Howell14}, has targeted a number of star forming regions and young (sub-Gyr) open clusters across the ecliptic for $\sim$80 days each. To date, the nearby $\rho$ Ophiuchi star forming region and Upper Scorpius young OB association ($\sim$1 and 5--10 Myr, respectively) were observed in Campaign 2, as were the Pleiades and Hyades open clusters ($\sim$125 and 600--800 Myr, respectively) in Campaign 4, and Praesepe (600--800 Myr) in Campaign 5. The Praesepe\ open cluster, also known as the Beehive cluster or M44, was targeted by \emph{K2}\ in Campaign 5 (April--July 2015). Praesepe\ is a relatively nearby, metal-rich, several hundred Myr cluster hosting $>$1000 high probability members ($>$80\%) and more than 100 candidate members ($>$50\% probability) \citep{Kraus07,Rebull17}. Given its richness and proximity, Praesepe is a well-studied benchmark cluster. The parallaxes of bright Praesepe\ members in Gaia DR1 suggest a distance of $182.8\pm1.7\pm14$ pc, where the first error represents the uncertainty on the cluster center determination and the second reflects the observed radial spread of high probability members on the sky \citep{vanLeeuwen17}. This is in agreement with the commonly quoted Hipparcos distance to the cluster, $181.5\pm6.0$ pc \citep{vanLeeuwen09}. The cluster has a low reddening along the line of sight of $E(B-V) = 0.027\pm0.004$ \citep{Taylor06}. Metallicity estimates typically fall within the range [Fe/H] $\sim$ 0.12--0.16 \citep[e.g.][]{Boesgaard13,Yang15,Netopil16} but can be as high as [Fe/H] = $0.27\pm0.10$ \citep[e.g.][]{Pace08}.
The age of Praesepe\ is estimated in the range $\sim$600--900 Myr \citep[e.g.][]{Adams02} with traditional estimates typically falling at the lower end, often through association with the Hyades \citep[e.g.][]{Salaris04}. More recently, however, \citet{Brandt15} included stellar rotation to conclude that the upper main sequences of both Praesepe and the Hyades were consistently well-fit at an age of $\sim$750--800 Myr. The age of Praesepe is further discussed in section \ref{age_discussion}. The binary fraction within the cluster has been extensively studied. \citet{Pinfield03} noted that binaries in Praesepe appear to favor similar-mass systems. \citet{Boudreault12} focused on the low-mass population, finding binary frequencies of $25.6\pm3.0$\% between 0.2\,$<$\,M\,$<$\,0.45 $M_{\odot}$, $19.6\pm3.0$\% between 0.1\,$<$\,M\,$<$\,0.2 $M_{\odot}$ and $23.2\pm5.6$\% between 0.07\,$<$\,M\,$<$\,0.1 $M_{\odot}$. \citet{Wang14} analyzed the full Praesepe membership to find a binary occurrence rate of 20--40\%. Furthermore, a significant population of binaries and higher order systems was identified by \citet{Khalaj13}, who propose a binary fraction of $35\pm5$\% in the mass range 0.6--2.2 $M_{\odot}$, assuming mass-dependent pairing of primary stars following the results of recent star formation simulations \citep[e.g.][]{Bate09}. This paper presents the characterization of four high-probability, low-mass eclipsing binary members of Praesepe. \S \ref{objects} describes the sources and previous literature characterization. In \S \ref{observations} we detail the photometric and spectroscopic observations. In \S \ref{analysis} we present a modified eclipsing binary model for detached systems, GP--EBOP, and describe the light curve and radial velocity analyses. We then present the results for each system in \S \ref{results}.
In \S \ref{discussion} we present an updated method to simultaneously determine the effective temperatures of both stars as well as the distance to an EB system, before discussing these new EBs in the context of calibrating stellar evolution models, and informing tidal evolution theory in close binaries. Finally, we conclude in \S \ref{conclusions}.

\section{New Eclipsing Binaries Among Praesepe Members}
\label{objects}

Half a dozen deep proper motion surveys of Praesepe have been published since 2000 \citep{Adams02,Kraus07,Baker10,Boudreault12,Khalaj13,Wang14}. Three of our four EBs are considered Praesepe members in at least four of those six studies (AD 3814, 2615 and 3116). Our fourth EB (AD 1508) is identified as a Praesepe member in only two of those studies. In the top panel of Figure~\ref{cmd}, we show where these four objects fall in a V vs. V-K$_s$\ color-magnitude diagram, where we have derived V-K$_s$\ estimates based on a conversion \citep{Rebull17} from G-K$_s$, where G is the star's magnitude in the Gaia DR1 catalog. All four stars have photometry consistent with Praesepe membership. AD 1508 is the earliest type (brightest) of the four; it is located well above the single star main-sequence locus, suggesting that it is a nearly equal mass binary. AD 3116 and 3814 are located nearly on the single star main-sequence locus, and so their binary companions are presumably very low mass. AD 2615 is displaced about 0.4 mag above the single star locus, and so is likely to have an intermediate mass binary companion. Three of the four stars have published spectral types: AD 3814 - M5 \citep{West11}; AD 2615 - M4.0 \citep{Adams02}, M5 \citep{West11}; and AD 3116 - M4.5 \citep{Adams02}, M3.9 \citep{Kafka06}. These spectral types are broadly consistent with their V-K$_s$\ colors. All four systems have spectral types estimated from photometry \citep{Kraus07}: AD 3814 - M$3.4\pm0.1$; AD 2615 - M$4.0\pm0.1$; AD 3116 - M$3.9\pm0.1$; and AD 1508 - M$0.1\pm0.1$.
As these form a homogeneous set for our EBs, we adopt these spectral types here. For each system, properties extracted from the literature are reported in Table \ref{info_tab}. In the bottom panel of Figure \ref{cmd} we show the Praesepe V-K$_s$\ color vs. rotation period diagram and indicate our four systems in red. Given the wide spread in rotation periods for mid--M dwarfs, ADs 3814, 2615 and 3116 all lie along the single star trend, but the early--M dwarf AD 1508 lies far below the single star trend with a short rotation period.

\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{./praesepe_EBs_vvmk_cmd.pdf}
\includegraphics[width=0.9\linewidth]{./praesepe_EBs_logPvmk.pdf}
\caption{Color -- magnitude diagram (\emph{top}) and color -- rotation period diagram (\emph{bottom}) illustrating the location of the four new eclipsing systems relative to the sequence of Praesepe members. \emph{Top}: From brightest to faintest are: AD 1508, AD 3814, AD 2615, and AD 3116 with elevation above the color-magnitude sequence a rough indicator of the mass ratio of a binary system (an equal mass ratio produces a 0.75 mag excess). \emph{Bottom}: from slowest to fastest rotators are: AD 2615, AD 3814, AD 3116 and AD 1508. }
\label{cmd}
\end{figure}

\section{Observations}
\label{observations}

\subsection{Photometry}

We proposed targets for the \emph{K2}\ Campaign 5 observations, which included Praesepe, as part of the \emph{K2}\ Young Suns Survey (PI Stauffer). Targets were collated through merging various proper motion surveys \citep{Klein-Wassink27,Jones83,Jones91,Kraus07,Wang14} with published BVRI photometry \citep[][and references therein]{Mermilliod90,Stauffer82}. The K2FOV tool was used to select targets falling `on silicon' and we further limited our proposal to stars with spectral type later than F0 (i.e. possessing outer convective envelopes) and brighter than $R = 17$. This gave 477 high-probability Praesepe\ targets in total.
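For illustration, the target-selection cuts described above can be sketched schematically as follows. The catalog field names (\texttt{spt}, \texttt{Rmag}, \texttt{on\_silicon}) and the example entries are hypothetical placeholders, not the actual merged-catalog columns.

```python
# Schematic of the Campaign 5 proposal cuts: spectral type later than F0
# (i.e. F0 or cooler, retaining an outer convective envelope), brighter
# than R = 17, and falling on silicon. All field names are hypothetical.
candidates = [
    {"name": "star_a", "spt": "F5", "Rmag": 12.3, "on_silicon": True},
    {"name": "star_b", "spt": "A0", "Rmag": 10.1, "on_silicon": True},
    {"name": "star_c", "spt": "M4", "Rmag": 16.8, "on_silicon": True},
    {"name": "star_d", "spt": "K2", "Rmag": 17.9, "on_silicon": True},
    {"name": "star_e", "spt": "G0", "Rmag": 13.5, "on_silicon": False},
]

# Spectral sequence ordered from early (hot) to late (cool).
order = "OBAFGKM"

def later_than_f0(spt):
    """True if the spectral type is F0 or later (cooler)."""
    cls, sub = spt[0], float(spt[1:])
    return (order.index(cls), sub) >= (order.index("F"), 0.0)

targets = [c for c in candidates
           if c["on_silicon"] and later_than_f0(c["spt"]) and c["Rmag"] < 17.0]
print([c["name"] for c in targets])  # -> ['star_a', 'star_c']
```

In practice these cuts were applied to the merged proper-motion and BVRI catalogs via the K2FOV tool, not to an in-memory list as in this sketch.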
In addition to our proposed systems we also investigated light curves of Praesepe candidates from other \emph{K2}\ programs. The \emph{K2}\ observations of Praesepe\ spanned 27 April -- 10 July 2015 and the FoV centered on 08:40:38 +16:49:47. Given the typical 30-minute cadence of \emph{Kepler}\ observations, this resulted in $\sim$3300 data points for each target. Short cadence (1 min) observations are also possible for a small number of targets but all systems presented here were observed in standard long cadence mode. We discuss our method to reduce the \emph{K2}\ photometry in \S\ref{k2sc}. For objects showing the signatures of eclipses in the \emph{K2}\ time series photometry, we cross-referenced the EPIC identifiers with literature information in order to determine basic system properties (see \S\ref{par_est}) and to identify which systems to pursue with high dispersion spectroscopy (see \S\ref{spectroscopy}).

\subsubsection{\emph{K2}\ data detrending and eclipse detection}
\label{k2sc}

We started from the Simple Aperture Photometry (SAP) light curves, which were made available at the Mikulski Archive for Space Telescopes (MAST) as part of \emph{K2}\ Data Release 7\footnote{See {\tt https://keplerscience.arc.nasa.gov/\\k2-data-release-notes.html\#k2-campaign-5} for details.}. We used the {\sc k2sc}\ pipeline \citep{Aigrain16} to correct the light curves for systematics caused by the quasi-periodic rolling motion of the spacecraft, while preserving the intrinsic variability of the target stars. {\sc k2sc}\ works by modeling the SAP flux as the sum of two smooth, random functions: one depending on the star's position on the detector, and one depending on time, plus white noise.
The position component represents instrumental systematics associated with the satellite's pointing variations (mainly intra- and inter-pixel sensitivity variations), while the time component represents the star's intrinsic variability, plus any long-term instrumental effects not accounted for by the position component. Both components are modeled using Gaussian Process (GP) regression (see \S\ref{gpe_model_sec} for further details and references on GPs). While both components are initially treated as aperiodic, a quasi-periodic GP is automatically used for the time component if the light curve shows any evidence of periodic behavior after a first pass treatment with default parameters. A careful treatment of outliers ensures that {\sc k2sc}\ mostly preserves short-duration events such as planetary transits or stellar eclipses. However, once the eclipses were identified (by visual examination) in the four systems discussed in the present paper, their light curves were re-processed using {\sc k2sc}'s periodic mask option. This option enables the user to supply the period, epoch and duration of the eclipses, and any in-eclipse points are then ignored when training the GP model. In effect, we are using the {\sc k2sc}\ GP model to interpolate in both flux and position space to the times affected by the eclipses, thereby providing a model prediction for the total system flux across each eclipse. In our analysis we use the {\sc k2sc}\ light curve that has been detrended for instrument systematics but which retains the stellar variability component. This allows us to simultaneously model both the stellar variability and eclipses (see \S\ref{analysis}).

\subsubsection{Estimation of primary star properties from broadband colors}
\label{par_est}

We estimated primary effective temperatures from broadband color relations and primary masses from absolute magnitudes, using the photometry presented in Table \ref{info_tab}.
Effective temperatures ($T_{\rm eff}$) were estimated using the empirical color-$T_{\rm eff}$\ relations presented in \citet{Mann16} (their eq. 6) and \citet{David16a} (their eq. 1, which is derived from fitting polynomials to the color and temperature data presented in \citet{Pecaut13} for dwarf stars, and is valid for $0.3 < V-K_{s} < 7.0$). These predict primary effective temperatures of $\sim$3250, 3190, 3240 and 3750 K for ADs 3814, 2615, 3116 and 1508, respectively. In \S\ref{SED_sec} we directly determine the effective temperatures of both stars in each EB through modeling their spectral energy distributions (SEDs) and compare our $T_{\rm eff}$\ values to these empirical predictions in Table \ref{Td_comp_tab}. We estimated primary masses from absolute K band magnitudes using the semi-empirical relation of \citet{Mann16} (their eq. 10) and the empirical relation of \citet{Benedict16} (their eq. 11). For this, we converted apparent to absolute magnitudes assuming a cluster distance of $182.8\pm14$ pc \citep{vanLeeuwen17} and a reddening along the line of sight of $E(B-V) = 0.027\pm0.004$ \citep{Taylor06}. These two relations predict primary masses of: $\sim$0.43, 0.34, 0.28 and 0.72 $M_{\odot}$ for ADs 3814, 2615, 3116 and 1508, respectively. For AD 1508 we used only the \citet{Mann16} mass prediction as this system lies outside the validity range ($0.1 \lesssim M \lesssim 0.6$ $M_{\odot}$) of the Benedict relation. We note that these predictions are for single stars and hence are not appropriate for binary systems unless the system magnitudes are dominated by the primary component. Furthermore, these empirical relations are approximations only and are calibrated on samples that are typically less metal-rich than Praesepe ([Fe/H] $\sim$ 0.1--0.27). Nonetheless, they serve to highlight the expected temperature and mass regimes of the systems to be analyzed.
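As an illustration of the apparent-to-absolute magnitude conversion used above, the sketch below computes $M_{K_s}$ for the four primaries from the 2MASS $K_s$ magnitudes in Table \ref{info_tab} and the adopted cluster distance; the K-band extinction coefficient ($A_{K_s} \approx 0.36\,E(B-V)$) is an assumed value, not taken from the text, and is negligible here in any case.

```python
import math

def abs_mag(m_app, d_pc, a_lambda=0.0):
    """Apparent -> absolute magnitude via the distance modulus,
    M = m - 5 log10(d / 10 pc) - A_lambda."""
    return m_app - 5.0 * math.log10(d_pc / 10.0) - a_lambda

# 2MASS Ks magnitudes from Table 1 and the adopted cluster distance.
ks = {"AD 3814": 12.651, "AD 2615": 13.136, "AD 3116": 13.499, "AD 1508": 10.767}
d_pc = 182.8              # van Leeuwen (2017) Gaia DR1 cluster distance
a_ks = 0.36 * 0.027       # assumed A_Ks ~ 0.36 E(B-V); ~0.01 mag, negligible

m_ks = {name: abs_mag(m, d_pc, a_ks) for name, m in ks.items()}
for name, m in m_ks.items():
    print(f"{name}: M_Ks = {m:.2f}")
```

These $M_{K_s}$ values are what feed the \citet{Mann16} and \citet{Benedict16} mass relations quoted in the text; the relation coefficients themselves are not reproduced here.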
\begin{table*} \centering \caption{Names, coordinates, properties and membership information for the four newly identified EBs.} \label{info_tab} \resizebox{\textwidth}{!}{% \begin{tabular}{llccccr} \hline \hline \noalign{\smallskip} Property & Units & AD 3814 & AD 2615 & AD 3116 & AD 1508 & Refs. \\ \noalign{\smallskip} \hline \noalign{\smallskip} EPIC & & 211972086 & 212002525 & 211946007 & 212009427 & \\ [-0.5ex] 2MASS & & J08504984+1948364 & J08394203+2017450 & J08423943+1924520 & J08312987+2024374 & \\ [-0.5ex] Other names & & ... & ... & HSHJ 430 & ... & 1 \\ [-0.5ex] RA & J2000.0 & 08:50:49.84 & 08:39:42.03 & 08:42:39.43 & 08:31:29.87 & \\ [-0.5ex] Dec & J2000.0 & +19:48:36.4 & +20:17:45.0 & +19:24:51.9 & +20:24:37.5 & \\ [-0.5ex] \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} $u$ & AB & $21.009\pm0.093$ & $21.747\pm0.185$ & $22.290\pm0.190$ & $18.102\pm0.014$ & 2 \\ [-0.5ex] $g$ & AB & $18.769\pm0.008$ & $19.416\pm0.012$ & $19.646\pm0.014$ & $15.540\pm0.004$ & 2 \\ [-0.5ex] $r$ & AB & $17.299\pm0.006$ & $17.905\pm0.007$ & $18.206\pm0.007$ & $14.151\pm0.004$ & 2 \\ [-0.5ex] $i$ & AB & $15.803\pm0.005$ & $16.324\pm0.004$ & $16.675\pm0.005$ & $13.700\pm0.001$ & 2 \\ [-0.5ex] $z$ & AB & $14.999\pm0.005$ & $15.456\pm0.006$ & $15.845\pm0.006$ & $12.905\pm0.004$ & 2 \\ [-0.5ex] V & Vega & 17.80 & 18.46 & 18.73 & 14.79 & 3 \\ [-0.5ex] $J$ & Vega & $13.529\pm0.026$ & $14.027\pm0.021$ & $14.348\pm0.032$ & $11.674\pm0.022$ & 4 \\ [-0.5ex] $H$ & Vega & $12.911\pm0.024$ & $13.456\pm0.026$ & $13.769\pm0.037$ & $10.949\pm0.023$ & 4 \\ [-0.5ex] $K_{s}$ & Vega & $12.651\pm0.022$ & $13.136\pm0.034$ & $13.499\pm0.043$ & $10.767\pm0.020$ & 4 \\ [-0.5ex] WISE 1 & Vega & $12.478\pm0.024$ & $12.938\pm0.024$ & $13.299\pm0.029$ & $10.677\pm0.023$ & 4 \\ [-0.5ex] WISE 2 & Vega & $12.291\pm0.026$ & $12.773\pm0.031$ & $13.096\pm0.039$ & $10.638\pm0.021$ & 4 \\ [-0.5ex] \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} Spectral type & M sub-type & 
$3.4\pm0.1$ & $4.0\pm0.1$ & $3.9\pm0.3$ & $0.1\pm0.1$ & 5 \\ [-0.5ex] H$\alpha$ emission & \AA\ & 2.4--3.5, ... & 3.0--4.3, 10.7 & 3.1--5.2, 4.6 & 2.0--2.1, ... & 6,7 \\ [-0.5ex] RA proper motion, $\mu_{\alpha}$ & mas yr$^{-1}$ & -37.5 & -39.3 & -37.5 & -37.3 & 5 \\ [-0.5ex] Dec proper motion, $\mu_{\delta}$ & mas yr$^{-1}$ & -14.1 & -11.6 & -8.2 & -16.7 & 5 \\ [-0.5ex] Membership probability & \% & 97.9 & 99.7 & 99.1 & 98.3 & 5 \\ \noalign{\smallskip} \hline \end{tabular} } \begin{list}{}{} \item[\textbf{Notes.}]{The quoted photometric uncertainties are formal measurement errors and hence do not capture the intrinsic variability of these systems. } \item[\textbf{References.}] 1. \citet{Hambly95}; 2. Sloan Digital Sky Survey Data Release 13; 3. \citet{Rebull17}; 4. NASA/IPAC Infrared Science Archive; 5. \citet{Kraus07}; 6. This work, with quoted range as measured over the epochs listed in Table~\ref{RVs_tab}; 7. \citet{Adams02}. \end{list} \end{table*} \subsection{Spectroscopy} \label{spectroscopy} We obtained high resolution spectra for each of the identified eclipsing binary systems using the Keck HIRES spectrograph \citep{Vogt94}. The observations were taken between 2015 December and 2017 January, with the exact epochs along with estimated signal-to-noise ratios and measured radial velocities given in Table \ref{RVs_tab}. The spectra cover the wavelength range $\approx$4800--9200 \AA\ at a spectral resolution of $R > 36,000$, and were reduced using the $makee$ software written by Tom Barlow. We measured radial velocities using the cross correlation techniques within the $fxcor$ task in $IRAF$, with absolute reference to between 3 and 5 (depending on the night) late type radial velocity standards. 
The standards and their approximate spectral types include: GJ 514 (M0.5), HD 95650 (M1), LHS 3433B (M2), Gl 821 (M2), GJ 408 (M2.5), GJ 176 (M2.5), GJ 109 (M3.5), GJ 402 (M4), Gl 876 (M4), GJ 105B (M4.5), GJ 388 (M4.5), GJ 411 (M4.5), GJ 406 (M6.5), with the reference velocities generally taken from \citet{Nidever02}. Telluric-free spectral regions were selected over between 6 and 19 orders (depending on the signal-to-noise of the target spectrum) for cross correlation function fitting. Depending on the velocity separation of the peaks, they were fit either singly or simultaneously, and depending on the signal-to-noise of the spectrum, the fitting function was either Gaussian or parabolic. Errors in the quoted radial velocities were determined from the empirical scatter among the measured orders and reference stars for each observation, with some hand editing to remove extreme outliers deriving from particularly poor measurements. In general, the scatter among the measurements, which is quoted as the radial velocity error, is smaller than or comparable to the mean of the errors in the individual measurements over the orders and reference stars included in the quoted radial velocity value. This gives us some confidence that we are accurately representing the random errors in our methods. AD 3814, AD 2615, and AD 1508 are detected as double-lined systems, with measurable radial velocities for each component at nearly all epochs. AD 3116, however, presented only a single line set, which we attribute to the primary. In the double-lined systems, the CCF peak height ratios were used to approximate the light ratio between the two components, which was then applied as a prior in the light curve modeling (see \S\ref{analysis}). In addition to the radial velocities, H$\alpha$ equivalent width measurements were made for each EB using the $splot$ task in $IRAF$.
The values quoted in Table \ref{info_tab} represent the combined system and the range records the variability over the various epochs of observation in Table \ref{RVs_tab}. \begin{table*} \centering \caption{Radial velocities derived from Keck/HIRES spectra for ADs 3814, 2615, 3116 and 1508 (\emph{top}\ to \emph{bottom}). } \label{RVs_tab} \begin{tabular}{c c c c r r } \noalign{\smallskip} \noalign{\smallskip} \hline \hline \noalign{\smallskip} \multicolumn{3}{c}{Epoch} & S/N & \multicolumn{2}{c}{RV ~ (km\,s$^{-1}$)} \\ UT date & BJD & Phase\,* & 7500 \AA\ & Primary~~~~ & Secondary~~ \\ \noalign{\smallskip} \hline \noalign{\smallskip} \noalign{\smallskip} \multicolumn{6}{c}{ ....................................................... ~ AD 3814 ~ ....................................................... } \\ 2015\,12\,24 & 2457381.15090 & 0.607 &16& $54.08\pm0.77$ & $-6.83\pm0.93$ \\ [-0.5ex] 2015\,12\,29 & 2457386.14539 & 0.437 &16& $21.42\pm0.76$ & $58.70\pm1.10$ \\ [-0.5ex] 2016\,02\,02 & 2457420.89940 & 0.214 &16& $0.10\pm0.75$ & $95.82\pm1.13$ \\ [-0.5ex] 2016\,02\,03 & 2457421.92652 & 0.385 &15& $12.91\pm0.76$ & $77.18\pm0.93$ \\ [-0.5ex] 2016\,05\,17 & 2457525.80479 & 0.652 &14& $60.96\pm0.37$ & $-19.29\pm1.15$ \\ [-0.5ex] 2016\,12\,22 & 2457744.96970 & 0.084 &15& $15.72\pm0.35$ & $65.38\pm0.56$ \\ [-0.5ex] 2016\,12\,26 & 2457748.97551 & 0.750 &13& $68.56\pm1.04$ & $-28.91\pm1.18$ \\ [-0.5ex] 2017\,01\,13 & 2457766.85771 & 0.723 &12& $67.17\pm0.40$ & $-29.35\pm0.56$ \\ [-0.5ex] \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} \multicolumn{6}{c}{ ....................................................... ~ AD 2615 ~ ....................................................... 
} \\ 2015\,12\,29 & 2457386.16741 & 0.039 &13& $26.03\pm0.86$ & $43.50\pm0.77$ \\ [-0.5ex] 2016\,05\,17 & 2457525.78402 & 0.059 &13& $20.45\pm0.76$ & $45.69\pm0.80$ \\ [-0.5ex] 2016\,05\,20 & 2457528.78074 & 0.317 &14& $-1.28\pm0.60$ & $65.74\pm0.60$ \\ [-0.5ex] 2016\,10\,14 & 2457676.07100 & 0.997 &10& \multicolumn{2}{c}{$35.22\pm0.29$} \\ [-0.5ex] 2016\,12\,22 & 2457745.03382 & 0.935 &13& $49.63\pm0.48$ & $20.66\pm0.41$ \\ [-0.5ex] 2017\,01\,13 & 2457766.90914 & 0.818 &5& $72.09\pm0.53$ & $5.63\pm0.60$ \\ [-0.5ex] \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} \multicolumn{6}{c}{ ....................................................... ~ AD 3116 ~ ....................................................... } \\ 2016\,02\,02 & 2457420.92116 & 0.102 &12& $26.28\pm0.82$ & --- ~~~~ \\ [-0.5ex] 2016\,02\,03 & 2457421.90606 & 0.599 &12& $40.47\pm0.83$ & --- ~~~~ \\ [-0.5ex] 2016\,05\,17 & 2457525.76250 & 0.978 &12& $39.75\pm0.59$ & --- ~~~~ \\ [-0.5ex] 2016\,05\,20 & 2457528.75747 & 0.488 &13& $27.46\pm0.54$ & --- ~~~~ \\ [-0.5ex] 2016\,10\,14 & 2457676.09435 & 0.796 &13& $55.96\pm0.30$ & --- ~~~~ \\ [-0.5ex] 2016\,12\,22 & 2457744.98886 & 0.542 &12& $31.91\pm0.44$ & --- ~~~~ \\ [-0.5ex] 2017\,01\,13 & 2457766.87986 & 0.583 &6& $37.23\pm0.77$ & --- ~~~~ \\ [-0.5ex] \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} \multicolumn{6}{c}{ ....................................................... ~ AD 1508 ~ ....................................................... } \\ 2016\,12\,22 & 2457745.047527495 & 0.971 &40& $50.62\pm1.40$ & $16.36\pm1.57$ \\ [-0.5ex] 2016\,12\,26 & 2457748.953315984 & 0.479 &30& $21.66\pm2.79$ & $42.37\pm3.23$ \\ [-0.5ex] 2017\,01\,13 & 2457766.842635326 & 0.970 &40& $52.56\pm2.19$ & $18.69\pm1.82$ \\ [-0.5ex] \noalign{\smallskip} \noalign{\smallskip}\noalign{\smallskip} \hline \end{tabular} \begin{list}{}{} \item[*] Phase is defined relative to primary eclipse. 
\end{list}
\end{table*}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{{./EB_211972086_K2_rel_flux_pdc_LC_4paper}.pdf}
\includegraphics[width=\linewidth]{{./EB_212002525_K2_rel_flux_pdc_LC_4paper}.pdf}
\includegraphics[width=\linewidth]{{./EB_211946007_K2_rel_flux_pdc_LC_4paper}.pdf}
\includegraphics[width=\linewidth]{{./EB_212009427_K2_rel_flux_pdc_LC_4paper}.pdf}
\caption{Systematics-detrended {\sc k2sc}\ PDC light curves of ADs 3814, 2615, 3116 and 1508 (\emph{top}\ to \emph{bottom}). Each system shows out-of-eclipse variations arising from evolving starspot modulation upon which the stellar eclipses are superposed. Numerous stellar flares are visible throughout the observations, most notably on ADs 3814 and 2615, including one in each system reaching a relative flux $\gtrsim$1.8. Missing eclipses, as seen in AD 3116, are an artifact present in the PDC light curves. }
\label{raw_LCs}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{{./AD3814_photometric_variabillty_evolution}.pdf}
\includegraphics[width=\linewidth]{{./AD2615_photometric_variabillty_evolution}.pdf}
\includegraphics[width=\linewidth]{{./AD3116_photometric_variabillty_evolution}.pdf}
\includegraphics[width=\linewidth]{{./AD1508_photometric_variabillty_evolution}.pdf}
\caption{Evolution of the photometric modulation observed in ADs 3814, 2615, 3116 and 1508 (\emph{top}\ to \emph{bottom}). The systematics-detrended {\sc k2sc}\ light curves shown in Figure \ref{raw_LCs} have been folded on the period of the observed variability. The rainbow color scheme highlights the evolution through the 75 day campaign (\emph{beginning} to \emph{end}, violet to red). For ADs 3814, 2615, 3116 and 1508, the numbers of periods folded upon are 11, 7, 34 and 49, respectively, which simply reflect the orbital periods of the systems.
}
\label{phot_var}
\end{figure}

\section{Analysis with the GP--EBOP\ model}
\label{analysis}

Both young and low-mass stars typically display photometric and spectroscopic modulation arising from the longitudinal inhomogeneity of active regions on the stellar surface, with activity timescales a strong function of stellar mass. In close binaries ($P \lesssim 15$ days), activity levels are generally observed to be higher than in their single star counterparts. It is important to account properly for this variability when analyzing the observed stellar eclipses, since it can subtly modify the detailed shape of individual eclipses. Ideally, therefore, we would model the stellar variability at the same time as fitting for the eclipses and, in doing so, propagate any uncertainties in the variability modeling through into the posterior distributions for the EB parameters. This approach motivated the development of a new eclipsing binary model, GP--EBOP, which we use here to characterize the new Praesepe\ EBs by simultaneously modeling the \emph{K2}\ light curves and Keck/HIRES radial velocity measurements, accounting for activity-induced effects. The method is distinct from those that account for stellar variability by detrending first and then modeling eclipses second.

\subsection{GP--EBOP}
\label{gpe_model_sec}

GP--EBOP\ comprises a central eclipsing binary (EBOP) model coupled with a Gaussian process (GP) model, which has an MCMC (Markov chain Monte Carlo) wrapper. It can be used to model both eclipsing binary systems and transiting planets: we use it here in its first capacity but note its tested ability to model planet transits \citep[e.g.][]{Pepper17}. Below we briefly describe the main components of the model:
\begin{itemize}
\item \emph{EB component}. The EB model is a modified version of the (JKT)EBOP family of models, which was first presented in \citet{Irwin11}.
Each star is modeled as a sphere when computing light curves from the eclipses and as a biaxial spheroid for the calculation of reflection and ellipsoidal effects. This model is able to compute light ratios and radial velocities, and can correct for the ``classical'' light travel time across the system. Differing from previous EBOP-based models, this implementation uses the analytic method of \citet{Mandel02} for the quadratic limb darkening law. GP--EBOP\ utilizes the LDtk toolkit \citep{Parviainen15}, which allows uncertainties in the stellar parameters (effective temperature, surface gravity and metallicity) to be propagated through the PHOENIX stellar atmosphere models \citep{Husser13} and into priors on the limb darkening coefficients. Limb darkening parameterization within the fitting process follows the triangular sampling method of \citet{Kipping13}.
\vspace{1.5mm}
\item \emph{GP component}. The GP model utilizes the {\tt george} package\footnote{\href{http://dan.iel.fm/george}{http://dan.iel.fm/george}} \citep{Ambikasaran14} and is used to model the out-of-eclipse (OOE) photometric data. A detailed description of Gaussian process regression is beyond the scope of this paper but the interested reader is referred to \citet{Roberts12} for a gentle introduction, \citet{Rasmussen06} for a more detailed entry, \citet{Aigrain12} for application to stellar light curves and \citet{Gillen14} for application to eclipsing binary light curves and cross-correlation functions. A simple way to view GPs is to think of them as a means of modeling a light curve by parameterizing the covariance between pairs of flux measurements, rather than explicitly specifying a parametric functional form to fit to the data. In this Gaussian process model, the joint distribution of the observed flux measurements is taken to be a multivariate Gaussian, whose covariance matrix is populated through a covariance function that depends on the observation times.
As such, a GP is a distribution over functions. When the parameters of a GP (called hyperparameters) are varied, we step through function space rather than the more familiar parameter space of conventional methods. Crucially for our application, the power of GP regression is that we obtain an uncertainty on the prediction for the OOE variability across each eclipse, which we can then propagate through into our posterior distributions for the EB parameters.
\vspace{1.5mm}
\item \emph{MCMC wrapper}. GP--EBOP\ explores the posterior parameter space using the Affine Invariant MCMC method, as implemented in {\tt emcee} \citep{Foreman-Mackey13}.
\end{itemize}

\subsection{Light curves}
\label{sec_light_curves}

The \emph{K2}\ light curves are a time series of flux measurements. GP--EBOP\ models the light curves by assuming the joint distribution over the flux measurements ${\bf F}$ is given by a multivariate Gaussian whose mean function $\mu$ is an eclipse model and whose covariance matrix ${\bf K}$ is described by a Gaussian process:
\begin{equation}
{\bf F} \sim \mathcal{N} (\mu, {\bf K}).
\end{equation}
The elements of the covariance matrix ${\bf K}$ are given by:
\begin{equation}
{\bf K}_{ij} = k(t_{i},t_{j}) + k_{w}(i,j)
\end{equation}
where the first term represents the specific kernel chosen to describe the out-of-eclipse variations and the second term describes the white noise component. Figure \ref{raw_LCs} shows the raw light curves of the four new EBs and Figure \ref{phot_var} shows these phase-folded on the photometric variability period. The OOE light curves of all four systems presented here display evolving starspot modulation with characteristic amplitudes, periods and evolutionary timescales. To model these smoothly evolving data, therefore, we chose a GP with a quasi-periodic Exponential Sine Squared kernel (hereafter QPESS). This is a periodic kernel that is allowed to evolve over time, i.e. mimicking evolving starspot modulation.
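For concreteness, a minimal numpy sketch of such a quasi-periodic kernel (written out formally in the equation that follows), combined with the white-noise diagonal term, is:

```python
import numpy as np

def qpess_kernel(t, A, gamma, P, l, sigma):
    """Covariance matrix K_ij = A^2 exp[-Gamma sin^2(pi|ti - tj| / P)
    - (ti - tj)^2 / (2 l^2)] + sigma^2 delta_ij for a time array t."""
    dt = t[:, None] - t[None, :]
    periodic = np.exp(-gamma * np.sin(np.pi * np.abs(dt) / P) ** 2)
    evolving = np.exp(-dt ** 2 / (2.0 * l ** 2))
    return A ** 2 * periodic * evolving + sigma ** 2 * np.eye(len(t))

# Example: ~30-minute cadence over a few days, with arbitrary illustrative
# hyperparameters (amplitude, correlation scale, period, evolution timescale).
t = np.arange(0.0, 5.0, 0.5 / 24.0)
K = qpess_kernel(t, A=0.01, gamma=2.0, P=1.2, l=3.0, sigma=1e-4)

# A valid GP covariance must be symmetric and positive definite.
assert np.allclose(K, K.T)
np.linalg.cholesky(K)
```

In GP--EBOP\ this kernel is evaluated through {\tt george} rather than constructed directly as above; the explicit matrix construction is for illustration only.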
The QPESS kernel has the required flexibility to explain the large-scale flux variations in the OOE light curves. It is given by:
\begin{equation}
k_{\scriptscriptstyle \rm QPESS} (t_{i}, t_{j}) = A^{2} \exp \left[ -\Gamma \sin^2 \left( \frac{\pi \left| {t_{i}-t_{j}} \right|}{P} \right) - \frac{ \left( {t_{i}-t_{j}} \right)^{2} }{2l^{2}} \right] .
\end{equation}
The first exponential describes the periodic component and the second the evolution of the periodic signal. $A$ is the characteristic amplitude of the variations, $\Gamma$ is the scale of the correlations, $P$ the period of the oscillations and $l$ the evolutionary timescale. $t_{i}$ and $t_{j}$ represent the times of two flux measurements within the time series. The resulting periods (Table~\ref{lc_model_tab}) differ from those reported by \citet{Rebull17} (based on Lomb-Scargle techniques) at the $\sim$1\% level. The white noise term is given by:
\begin{equation}
k_{w}(i,j) = \sigma^{2} \delta_{ij}
\end{equation}
where $\sigma$ is the standard deviation and $\delta_{ij}$ is the Kronecker delta function. Within GP--EBOP\ the white noise term is incorporated via a multiplicative scale factor on the observational uncertainties, as {\tt george} adds these scaled uncertainties in quadrature to the diagonal of the covariance matrix.

We model the {\sc k2sc}\ light curves that have been detrended for instrument systematics but which still contain stellar activity variations. After visual inspection of the SAP and PDC {\sc k2sc}\ light curves we opted to work with the PDC versions as these display lower point-to-point scatter. As can be seen in Figure \ref{raw_LCs}, numerous stellar flares are present throughout the light curves. Flares were treated in two ways depending on whether or not they affected the stellar eclipses.
Those that did not were automatically removed using the following method: the light curve was smoothed using a running median filter, which was followed by running sigma cuts to identify flares. The data before and after the flare peak were removed until the light curve returned to the smoothed light curve value. Flares affecting the stellar eclipses were treated more carefully: as even a detailed modeling would not correct the photometry to the precision required for inclusion in our eclipse modeling, we opted to conservatively mask out the affected data via visual inspection. The resulting light curves, which were modeled in our analysis, can be seen in Figures \ref{3814_LC}, \ref{2615_LC} and \ref{3116_LC} for ADs 3814, 2615 and 3116, respectively. The light curve of AD 1508 was treated slightly differently as only a preliminary solution is presented here (see \S\ref{1508_results} for details). The full light curves (eclipses and out-of-eclipse variability) and radial velocity variations were simultaneously modeled by GP--EBOP, stepping through the parameter space 50,000 times with each of 144 `walkers'. The first 25,000 steps were discarded as burn-in and the remainder of each chain was thinned following inspection of the autocorrelation lengths for each parameter. To account for the $\sim$30 minute cadence of \emph{K2}\ observations, GP--EBOP\ was supersampled at 1 minute cadence and numerically integrated to the \emph{K2}\ sampling for model evaluation. The uncertainties on the limb darkening coefficients were inflated by a factor of 30, above the uncertainties derived from the PHOENIX models. This inflation factor was determined by comparing quadratic limb darkening coefficients of LDtk, \citet{Claret12} and \citet{Sing10} for common $T_{\rm eff}$, $\log g$\ and metallicity values in a representative range for our EBs across the \emph{Kepler}\ bandpass.
We used the spread in their predictions, and applied a further increase to account for systematic uncertainties in M-dwarf model atmospheres, to determine our inflation factor. Reflection effects and gravity brightening were not included in the modeling. The former is accounted for by the GP model and the latter makes no significant difference to the model posterior distributions, which we tested by performing model runs with different gravity brightening exponent ($\beta$) values. We note that \citet{Alencar97} found that $\beta$ ranges between 0.2 and 0.4 for stars with temperatures $3700 \leqslant T \leqslant 7000$ K and that the typical \citet{Lucy67} value of $\beta=0.32$ best describes stars with $T = 6500$ K.

\subsection{Radial velocities}

The Keck/HIRES RVs were modeled using Keplerian orbits simultaneously with the \emph{K2}\ light curves. Spectroscopic light ratios (available for three of the four systems presented here) were estimated from cross-correlation peak heights and applied as priors on the light curve model component. This can help break the well-known degeneracy between the radius and surface brightness ratios, which can often be a limiting factor in the individual radius estimates for near equal-mass EBs. An RV jitter term, incorporated in GP--EBOP, was used to allow the uncertainties on the Keck/HIRES RV measurements to be scaled, if necessary. This helps account for additional variations arising from e.g. stellar activity and instrument systematics. This jitter term is added in quadrature to the observational uncertainties. When RVs from multiple instruments are obtained, GP--EBOP\ can scale the uncertainties for each instrument individually and account for offsets between different instrument RV zero points.

\begin{table*}
\centering
\caption{Spectroscopic light ratios and quadratic limb darkening priors applied in the GP--EBOP\ modeling for ADs 3814, 2615, 3116 and 1508.
} \label{priors_tab} \begin{tabular}{c @{\hskip 10mm} c c @{\hskip 10mm} c @{\hskip 8mm} c c @{\hskip 8mm} c l } \noalign{\smallskip} \noalign{\smallskip} \hline \hline \noalign{\smallskip} System & \multicolumn{2}{l}{Spectroscopic light ratio ~~~~~~~~ } & \multicolumn{5}{c}{Limb darkening coefficients and assumed model atmosphere parameters\,*} \\ & BJD & $l_{\rm sec}/l_{\rm pri}$ & Component & $\mu$ & $\mu'$ & $T_{\rm eff}$\ (K) & $\log g$\ (cgs) \\ \noalign{\smallskip} \hline \noalign{\smallskip} \noalign{\smallskip} \multirow{2}{*}{AD 3814} & \multirow{2}{*}{245\,7766.9} & \multirow{2}{*}{$0.41\,^{+0.25}_{-0.19}$} & Pri & $0.46 \pm 0.13$ & $0.21 \pm 0.46$ & $3200 \pm 200$ & $4.9 \pm 0.1$ \\ & & & Sec & $0.49 \pm 0.24$ & $0.23 \pm 0.76$ & $3100 \pm 200$ & $5.0 \pm 0.1$ \\ \noalign{\smallskip} \noalign{\smallskip} AD 2615 & 245\,7766.9 & $1.13\,^{+0.24}_{-0.20}$ & Pri \& Sec & $0.46 \pm 0.13$ & $0.21 \pm 0.46$ & $3200 \pm 200$ & $4.9 \pm 0.1$ \\ \noalign{\smallskip} \noalign{\smallskip} \multirow{2}{*}{AD 3116} & \multirow{2}{*}{---} & \multirow{2}{*}{---} & Pri & $0.46 \pm 0.13$ & $0.21 \pm 0.46$ & $3200 \pm 200$ & $4.9 \pm 0.1$ \\ & & & Sec & $0.68 \pm 0.17$ & $0.17 \pm 0.46$ & $2500 \pm 200$ & $5.0 \pm 0.1$ \\ \noalign{\smallskip} \noalign{\smallskip} AD 1508 & 245\,7745.0 & $0.63\,^{+0.41}_{-0.26}$ & Pri \& Sec & $0.47 \pm 0.14$ & $0.20 \pm 0.31$ & $3700 \pm 200$ & $4.8 \pm 0.1$ \\ \noalign{\smallskip} \noalign{\smallskip}\noalign{\smallskip} \hline \end{tabular} \begin{list}{}{} \item[*] $\mu$ and $\mu'$ are the coefficients for the linear and quadratic terms, respectively, of the quadratic limb darkening law. All limb darkening coefficients were computed assuming $Z = 0.14 \pm 0.05$. \end{list} \end{table*} \section{Results} \label{results} The K2 light curves and Keck/HIRES radial velocity measurements of the four new EBs (ADs 3814, 2615, 3116 and 1508) were modeled with GP--EBOP; the results for each system are discussed in turn below. 
Throughout our analysis we define the primary as the star that, when occulted, gives the deepest eclipse, and the secondary as the occulting star. We note that these adjectives do not necessarily imply that the primary star is the more massive or brighter star, as we find to be the case with AD 2615. \subsection{AD 3814} \label{3814_results} AD 3814 has been extensively studied in the literature. The M3.4 spectral type, broadband photometric magnitudes and colors, and proper motion give AD 3814 a high probability of cluster membership. Figure \ref{3814_LC} shows the \emph{K2}\ light curve used in the modeling after flares were removed. Three eclipses were masked in the flare removal process (see section \ref{sec_light_curves}): two secondary eclipses at rBJD\footnote{rBJD = BJD $-$ 2454833.}$\sim$2315 and 2361, and one primary eclipse at rBJD$\sim$2364. The red line and pink shaded region indicate the mean and 2$\sigma$ uncertainty of the posterior GP--EBOP\ eclipse model, which is able to reproduce both the eclipses and the slowly evolving starspot modulation. Detrending with respect to the GP component and phase-folding on the binary orbital period allows us to inspect the shape of the eclipses in detail. These are shown in Figure \ref{3814_eclipses}, where the top panel displays the full phase-folded light curve and the bottom panels show zooms around primary and secondary eclipses (\emph{left}\ and \emph{right}, respectively). There is clear evidence of increased scatter in the residuals across each eclipse, which is presumably due to uncorrected differential starspot effects. Starspots on the background star will have a differential effect on the eclipse shape, with the eclipse being shallower if starspots on the background star are preferentially occulted by the foreground star and deeper if the unspotted photosphere is preferentially occulted. 
As the timescale of such differential effects is much shorter than that of the typical starspot modulation observed out of eclipse, the QPESS kernel will struggle to account for this effect given its covariance properties, which constrain it to smooth variations. Instead, the GP will opt to inflate its uncertainty due to the increased scatter. One could theoretically include an additional kernel within the GP model to try to account for such differential effects across eclipses, but this is beyond the scope of the current work. The 8 Keck/HIRES RVs were modeled simultaneously with the \emph{K2}\ light curve. The resulting phase-folded RV orbit is shown in Figure \ref{3814_RV} (primary in red and secondary in blue). The colored lines and shaded regions indicate the median and 2$\sigma$ uncertainties on the posterior orbits of the two stars, which fit the observed RVs well. The systemic velocity is $V_{\rm sys} = 33.60 \pm 0.24$ km\,s$^{-1}$ (dashed gray line), which is consistent with the recessional velocity of the cluster $V_{\rm rec}$ $\sim$33--35 km\,s$^{-1}$ \citep[e.g.][]{vanLeeuwen09,Quinn12,Yang15} and hence provides further evidence of cluster membership. We note that the residuals in the phase-folded RV plot display an interesting structure. Inspection of the RV residuals in time, however, does not suggest any long-term trend indicative of a tertiary companion, which is consistent with the lack of a detectable tertiary peak in the cross-correlation function. Possible explanations for the residuals are issues with the absolute radial velocity calibration, the RV stability of the reference standards, or the precise placement of the target star in the center of the slit. GP--EBOP\ attempts to account for this unknown noise component by including an additional jitter term that acts to scale the observational uncertainties. 
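As a concrete illustration of this jitter treatment, the scaled Gaussian log-likelihood can be written in a few lines (a minimal stand-alone sketch, not the actual GP--EBOP\ implementation; the function and variable names are ours):

```python
import numpy as np

def rv_loglike(residuals, sigma_obs, jitter):
    """Gaussian log-likelihood for RV residuals with a jitter term.

    The jitter is added in quadrature to the formal measurement
    uncertainties, inflating them to absorb unmodeled noise
    (e.g. stellar activity or instrument systematics)."""
    var = sigma_obs**2 + jitter**2
    return -0.5 * np.sum(residuals**2 / var + np.log(2.0 * np.pi * var))
```

When RVs from several instruments are combined, one jitter term (and one zero-point offset) per instrument would be fit in the same way, as described in the previous section.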
We note that if the origin of this noise component were known, it might be possible to model it directly within the fit, but this is beyond the scope of the present analysis. Figure \ref{3814_geom} depicts the system, to scale, at both primary and secondary eclipse, indicating the geometry responsible for the observed eclipses and RV variations. The model parameters and 1$\sigma$ uncertainties for AD 3814 are presented in the first results column of Table \ref{lc_model_tab}. The light curve and RV modeling with GP--EBOP\ yields masses and radii for each star: the primary and secondary masses are $0.3813 \pm 0.0074$ and $0.2022 \pm 0.0045$ M$_{\odot}$ with corresponding radii of $0.3610 \pm 0.0033$ and $0.2256\,^{+0.0063}_{-0.0049}$ R$_{\odot}$. The masses of both components are constrained to 2\% and the primary and secondary radii to 1\% and 3\%, respectively. The fundamental parameters are compatible with the estimated M3.4$\pm$0.1 spectral type and the primary mass estimate from section \ref{par_est}. The masses, radii and effective temperatures (derived in \S \ref{SED_sec}) of AD 3814 are compared to the current suite of stellar evolution models in section \ref{model_comp}. We applied a prior on the system light ratio and priors on the quadratic limb darkening coefficients (see Table \ref{priors_tab}). The light ratio was determined from the cross-correlation peak height ratio in a HIRES spectrum taken close to quadrature, which is acceptable as the HIRES spectral range is a reasonable match to the \emph{K2}\ bandpass. We note that the degeneracy between the surface brightness and radius ratios is not apparent in our posteriors, although it is not expected to be significant in this system given the mass and brightness ratios. 
We conclude by noting that this system would benefit from a more detailed modeling of the individual eclipses, incorporating a full starspot model, to assess whether the large-scale underlying starspot distribution can be reconstructed from the eclipses, which track different longitudes on the stellar surfaces over the \emph{K2}\ run. \subsection{AD 2615} \label{2615_results} AD 2615 is an M4.0 high probability member of Praesepe. The analysis presented here is consistent with the photometric, spectroscopic and membership information from previous studies. The light curve of AD 2615 that was used in the modeling is shown in Figure \ref{2615_LC}. One secondary eclipse, at rBJD$\sim$2367, was masked following the flare removal process (see section \ref{sec_light_curves}). The red line and pink shaded region represent the mean and 2$\sigma$ uncertainty of the posterior GP--EBOP\ eclipse model. As with AD 3814, the model is able to capture both the stellar eclipses and the evolving starspot modulation. The model's predictive power can be seen before the start and after the end of the \emph{K2}\ light curve, where it is able to predict the form of the evolving modulation pattern, given the covariance properties of the data; this is also what drives the informed prediction and uncertainty across each eclipse. Figure \ref{2615_eclipses} shows the phase-folded light curve, which has been detrended with respect to the GP component. The eclipse model is an acceptable fit to the data. There is no clear evidence for increased scatter in the residuals, which suggests that the geometry of the eclipses does not preferentially track bright or dark regions on the stellar surfaces, perhaps because the underlying starspot distribution in AD 2615 is more homogeneous than in AD 3814. Figure \ref{2615_RV} shows the phase-folded RV orbit (red for primary and blue for secondary). The 5 HIRES RVs of both stars are well-fit by the Keplerian model. 
The 2$\sigma$ uncertainties on the orbits (red and blue shaded regions) increase around quadrature, as expected. The systemic velocity of $V_{\rm sys} = 34.91 \pm 0.39$ km\,s$^{-1}$ (dashed gray line) is compatible with the cluster's recessional velocity, providing further kinematic evidence of cluster membership. We note that a sixth RV observation was conducted but lay too close to primary eclipse to disentangle the two stellar components and hence was not used in the fit. In principle, we could determine an upper limit on the separation of the two stars and use this as an additional constraint in the modeling. However, at phase = 0.997, the solution is already tightly constrained and hence this upper limit would not place useful constraints on our existing solution. We further note that spectral disentangling may offer an interesting alternative route of RV determination for this system, which could utilize this sixth observation. While traditional spectral disentangling techniques require many high SNR spectra, powerful new techniques are emerging designed for fewer and lower SNR spectra \citep[e.g.][]{Czekala17}. It would be interesting to compare the standard CCF-based RV determination with these new spectral disentangling techniques, but this is beyond the scope of the present paper. Figure \ref{2615_geom} depicts the system, to scale, at primary and secondary eclipse, showing the configuration responsible for the observed eclipses. The medians and 1$\sigma$ uncertainties of the GP--EBOP\ model posteriors are reported in Table \ref{lc_model_tab} (second results column). The primary and secondary masses are $0.212 \pm 0.012$ and $0.255 \pm 0.013$ M$_{\odot}$, with corresponding radii of $0.233 \pm 0.013$ and $0.267 \pm 0.014$ R$_{\odot}$. 
We remind the reader that we define the primary star as the star which, when occulted, gives the deeper eclipse, but that this does not necessarily mean it is the more massive or brighter of the two components, as indeed is the case in this system. The masses and radii are constrained to 6\% for the primary and 5\% for the secondary. This system would benefit from additional RVs around quadrature to increase the precision of the mass determination. The fundamental parameters are compatible with the estimated M4.0 spectral type but the mass of either component is lower than the estimate from the system's absolute K-band magnitude (Section \ref{par_est}), presumably because this system is a near equal mass binary and so both stars contribute significantly to the K-band flux, resulting in an overestimated single-star mass. The masses, radii, and effective temperatures (c.f. \S \ref{SED_sec}) are compared to stellar evolution models in section \ref{model_comp}. We applied priors on the system light ratio and stellar limb darkening coefficients (see Table \ref{priors_tab}). Even though the system is near-equal mass and brightness, our spectroscopic light ratio was able to break the degeneracy between the surface brightness and radius ratios, which can be a limiting factor in determining radii in such systems. \subsection{AD 3116} \label{3116_results} AD 3116 is an M3.9 high probability member of Praesepe. The system sits at the bottom of the cluster sequence (see Figure \ref{cmd}) suggesting the secondary component contributes little optical light to the total system flux and hence is comparatively low-mass. Analysis of the \emph{K2}\ light curve and 7 HIRES spectra reveals the system to be single-lined with eclipses visible only on the primary component, consistent with its position in color-magnitude space. 
Secondary spectroscopic lines could not be detected, even after dividing the two spectra and looking for similar but weaker patterns in the CCF, which suggests the secondary contributes very little ($<$20--35\%) to the system's optical light. Given the lack of a detectable secondary eclipse and secondary RV orbit, the data alone are not able to constrain the solution precisely. There exist two families of solutions: one consisting of a small secondary that fully transits and the other a larger secondary on a grazing trajectory. The primary RV orbit requires the secondary to be eclipsed, and hence both models find a negligible surface brightness ratio in the \emph{Kepler}\ band to remain consistent with the lack of a detectable secondary eclipse. For the solution comprising a large ($R_{\rm sec}/R_{\rm pri} \gtrsim 1$), grazing secondary, this would require an unusual object possessing a very low temperature given its radius. Inspection of the system mass function revealed that the secondary lay in the brown dwarf regime ($M_{\rm sec} \sim$55 $M_{\rm Jup}$), which further supported the solution comprising a small, fully-transiting secondary. We tested the reliability of the primary RV solution to see if individual RVs close to the systemic velocity (which could be biased by low-level secondary light) might be affecting the eccentricity of the RV orbit and hence the system parameters. We removed all but the three RVs closest to quadrature and, as expected, the model converged again on a solution requiring the secondary to be eclipsed. This, combined with the small primary RV semi-amplitude and lack of secondary spectroscopic lines, rules out a scenario where the secondary is of comparable size and brightness to the primary but there is no secondary eclipse due to the eccentricity of the orbit. All available information and tests pointed towards a very low-mass, small and cool secondary component. 
We therefore chose to place loose uniform priors on the radius ratio and surface brightness ratio to encourage the solution towards a physically sensible secondary component. These priors were $0.0 \leqslant R_{\rm sec}/R_{\rm pri} \leqslant 0.46$ and $0.0 \leqslant J_{\rm K2} \leqslant 0.25$ which, given the expected primary star properties (cf. \S\ref{par_est}) and secondary star mass estimate, act simply to exclude physically implausible solutions and do not act to constrain the remaining physically plausible solutions. We performed further tests allowing $R_{\rm sec}/R_{\rm pri}$ and $J_{\rm K2}$ to extend up to 0.75 and 0.35, respectively, and found consistent posterior values. The model fit is shown in Figures \ref{3116_LC}--\ref{3116_geom}, whose descriptions are the same as for ADs 3814 and 2615 in sections \ref{3814_results} and \ref{2615_results} above. The model is a good fit to the primary eclipse and large-scale evolving starspot structure in the \emph{K2}\ light curve (Figure \ref{3116_LC}). We note that two primary eclipses, at rBJD$\sim$2319 and 2350, were masked following the flare removal process (see section \ref{sec_light_curves}). Figure \ref{3116_eclipses} shows the phase-folded and GP-detrended light curve: the primary eclipse is well-fit, although there is a modest increase in the residual scatter, which is larger than in AD 2615 but smaller than in AD 3814. The RV data suggest a moderately eccentric orbit ($e \sim 0.15$; see Figure \ref{3116_RV}) with a systemic velocity of $V_{\rm sys} = 34.93\,^{+0.61}_{-0.53}$ km\,s$^{-1}$ (dashed gray line). This is consistent with the cluster's recessional velocity, providing additional kinematic evidence of cluster membership. 
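The loose uniform priors placed on $R_{\rm sec}/R_{\rm pri}$ and $J_{\rm K2}$ above amount to a simple box constraint in the log-prior; a minimal illustrative sketch with the bounds quoted in the text (the function name is ours):

```python
import numpy as np

# Upper bounds quoted in the text for AD 3116.
RATIO_MAX = 0.46  # radius ratio, R_sec / R_pri
J_MAX = 0.25      # K2-band surface brightness ratio, J_K2

def log_prior(radius_ratio, surf_bright_ratio):
    """Uniform (top-hat) log-prior: 0 inside the allowed box, -inf
    outside, so implausible solutions are excluded without shaping
    the posterior within the allowed region."""
    if 0.0 <= radius_ratio <= RATIO_MAX and 0.0 <= surf_bright_ratio <= J_MAX:
        return 0.0
    return -np.inf
```

In an MCMC sampler this value is simply added to the log-likelihood, so proposals outside the box are rejected outright.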
Using the empirical relations of \citet{Benedict16}, and assuming the \citet{vanLeeuwen09} cluster distance of $181.5\pm6.0$ pc, the $K_{\rm s}$ magnitude of AD 3116 implies a primary mass of $M_{\rm pri} = 0.28 \pm 0.02$ $M_{\odot}$, where the uncertainty arises equally from the empirical relation scatter and our assumed 0.1 mag uncertainty on the quoted $K_{\rm s}$ value. We checked this value using the empirical relations of \citet{Mann16} and find $M_{\rm pri} \sim 0.29$ $M_{\odot}$, consistent with the Benedict et al. value. Taking the Benedict value, the mass function from our final solution then yields $M_{\rm sec} = 54.2\pm4.3$ $M_{\rm Jup}$. This is one of only $\sim$20 known transiting brown dwarfs \citep[e.g.][]{Csizmadia16,Nowak16,Bayliss16} and the primary component is one of only three M-dwarfs known to host a transiting brown dwarf. Furthermore, this is only the second known transiting brown dwarf in an open cluster (i.e. where the age is well-constrained), and the first younger than a Gyr. Figure \ref{3116_geom} shows the system geometry at primary and secondary eclipses. That the brown dwarf is fully occulted yet shows no detected signature in the \emph{K2}\ band theoretically allows us to place an upper limit on the optical reflected light and hence albedo of the object. Using the \citet{Mann16} empirical relations to estimate the primary radius, and hence secondary radius and semi-major axis from our light curve modeling, we can estimate the system scale. This then allows us to compute the angle on the sky that the brown dwarf subtends as seen from the primary. With $R_{\rm sec} \sim 0.11$ $R_{\odot}$ and $a \sim 4.7$ $R_{\odot}$, the brown dwarf intercepts $\sim$0.007\% of the visible light from the primary star. Therefore, even if the brown dwarf reflected all incident flux (i.e. an albedo of 1), we would not detect a drop in flux in the \emph{K2}\ light curve when the brown dwarf is occulted. 
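The secondary mass quoted above follows from the binary mass function, $f(M) = M_{\rm sec}^{3}\sin^{3}i/(M_{\rm pri}+M_{\rm sec})^{2} = P K_{\rm pri}^{3}(1-e^{2})^{3/2}/(2\pi G)$, given the assumed primary mass. A minimal numerical solver for this step (illustrative only; any orbital values used in example calls are placeholders, not the fitted parameters of AD 3116):

```python
import numpy as np
from scipy.optimize import brentq

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
MSUN = 1.989e30    # solar mass [kg]
MJUP = 1.898e27    # Jupiter mass [kg]

def mass_function(P, K1, e):
    """Binary mass function f(M) [kg] from the period P [s], primary
    RV semi-amplitude K1 [m/s] and eccentricity e."""
    return P * K1**3 * (1.0 - e**2)**1.5 / (2.0 * np.pi * G)

def secondary_mass(f, m_pri, inc):
    """Solve f = (m_sec sin i)^3 / (m_pri + m_sec)^2 for m_sec [kg],
    given the primary mass [kg] and inclination [rad]."""
    g = lambda m2: (m2 * np.sin(inc))**3 / (m_pri + m2)**2 - f
    return brentq(g, 1e-10 * m_pri, 10.0 * m_pri)
```

A quick self-consistency check is to forward-compute $K_{\rm pri}$ for a chosen pair of masses and verify that inverting the mass function recovers the input secondary mass.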
We applied priors on the limb darkening coefficients (see Table \ref{priors_tab}). The secondary temperature was set to be as low as the PHOENIX models allow but is likely still too high (see Table \ref{Td_comp_tab}). However, as the secondary gives no detectable eclipse it makes no significant difference to the presented solution. Given the system is single-lined we did not place a prior on the system light ratio in the \emph{K2}\ band. \subsection{AD 1508} \label{1508_results} AD 1508 is a high probability M0.1 member of Praesepe, which sits high above the cluster sequence (see Figure \ref{cmd}), suggesting a near-equal mass system. The preliminary analysis presented here is consistent with this picture. The \emph{K2}\ light curve of AD 1508 (see Figures \ref{raw_LCs} and \ref{phot_var}; bottom panels) is dominated by evolving starspot modulation at the few percent level. Very shallow grazing eclipses are also present with a depth of less than 1\%. We obtained only three RVs for this system, which unfortunately fall close to primary and secondary eclipses (see Table \ref{RVs_tab}). Given this, and the shallow eclipses, a precise solution is not possible. Instead, we provide our initial analysis and offer the system to the community for further pursuit. The \emph{K2}\ light curve and three Keck/HIRES RVs were simultaneously modeled with GP--EBOP. However, given the preliminary nature of the modeling, and unlike the other three systems, we opted to simplify the light curve analysis by performing an initial detrending of the starspot modulation and then modeled the residuals with GP--EBOP\ to analyze the stellar eclipses. To do this, the out-of-eclipse light curve was flattened through two iterations of a cubic basis spline with knots every 2 hours and rejection of 0.5-$\sigma$ outliers. Figure \ref{1508_LC} shows the resulting detrended light curve that was modeled with GP--EBOP. 
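The iterative spline flattening described above can be sketched as follows (an illustrative reimplementation of the approach rather than the exact pipeline; the knot spacing, iteration count and clip level are the values quoted in the text, with the sigma rejection applied symmetrically here, and in practice such aggressive clipping mainly excludes in-eclipse points from the trend fit):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def flatten(time, flux, knot_spacing=2.0 / 24.0, n_iter=2, clip=0.5):
    """Flatten out-of-eclipse variability with an iterated cubic
    B-spline: knots every `knot_spacing` days, `n_iter` iterations
    with `clip`-sigma rejection of outliers from the fit.
    Returns the flux divided by the spline trend."""
    mask = np.ones(flux.size, dtype=bool)
    for _ in range(n_iter):
        # interior knots must lie strictly inside the fitted time range
        knots = np.arange(time[mask][0] + knot_spacing,
                          time[mask][-1] - knot_spacing, knot_spacing)
        spline = LSQUnivariateSpline(time[mask], flux[mask], knots, k=3)
        resid = flux - spline(time)
        sigma = np.std(resid[mask])
        mask = np.abs(resid) < clip * sigma
    return flux / spline(time)
```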
Low-level (likely systematic) residual variations are present, which show a relatively rough behavior. Accordingly, we chose a Matern-3/2 kernel for the GP component, which is given by: \begin{equation} k_{\rm M32} (t_{i}, t_{j}) = A^{2} \left(1+\frac{\sqrt{3} \left| {t_{i}-t_{j}} \right| }{l}\right) \exp\left(-\frac{\sqrt{3} \left| {t_{i}-t_{j}} \right| }{l}\right) \end{equation} where $A$ is the amplitude and $l$ the characteristic timescale of the variations. Detrending with respect to the GP component and phase-folding on the orbital period, as shown in Figure \ref{1508_eclipses}, we see that the eclipses are well-fit by the model. There is no significant evidence of increased scatter across the eclipses. We note that the light curve of AD 1508 appears noisy in comparison to the other systems discussed here, even though it is significantly brighter. This is simply because the plot scales in Figures \ref{1508_LC} and \ref{1508_eclipses} are small as the eclipses are shallow and the starspot modulation has already been removed. It is not a reflection of the true noise level in this system: the point-to-point scatter of all systems discussed here decreases with increasing system brightness, as expected. The phase-folded RV orbit is shown in Figure \ref{1508_RV} which, given only three RVs at non-optimal phases, is not well-constrained. This is reflected in the large 2$\sigma$ uncertainties on the posterior orbits (red and blue for the primary and secondary stars, respectively). Nonetheless, the systemic velocity is relatively well-constrained at $V_{\rm sys} = 33.1\pm1.7$ km\,s$^{-1}$, which is consistent with the cluster recessional velocity and hence provides further kinematic evidence of Praesepe\ membership. Figure \ref{1508_geom} shows the system, to scale, at primary and secondary eclipse. The shallow eclipses simply result from the very grazing trajectory of the stellar orbits, as viewed from Earth. 
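The kernel above translates directly into code; a minimal numpy implementation of the Matern-3/2 covariance matrix (illustrative, in place of the GP library used internally by GP--EBOP):

```python
import numpy as np

def matern32(t1, t2, amp, length):
    """Matern-3/2 covariance between two sets of times, with
    amplitude `amp` (A in the text) and characteristic timescale
    `length` (l).  Returns the full covariance matrix."""
    r = np.sqrt(3.0) * np.abs(t1[:, None] - t2[None, :]) / length
    return amp**2 * (1.0 + r) * np.exp(-r)
```

This matrix, plus a white-noise term on the diagonal, defines the GP likelihood; Matern-3/2 sample paths are less smooth than those of a squared-exponential kernel, which suits the relatively rough residual variations noted above.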
The median and 1$\sigma$ uncertainties resulting from our preliminary analysis are reported in Table \ref{lc_model_tab} (fourth results column). Given the available data, significant uncertainties exist in the derived masses and radii. The primary and secondary masses are $0.45\,^{+0.19}_{-0.14}$ and $0.53\,^{+0.22}_{-0.16}$ M$_{\odot}$ with corresponding radii of $0.549\,^{+0.099}_{-0.082}$ and $0.454\,^{+0.094}_{-0.101}$ R$_{\odot}$. The solution is currently limited by the lack of RV constraints and future analysis would benefit from additional RV measurements, especially around quadrature. Nonetheless, the fundamental parameters are compatible with the estimated M0.1$\pm$0.1 spectral type and the primary mass estimate from section \ref{par_est}. Given the existing uncertainties we do not compare this system to stellar evolution models in section \ref{model_comp}. We applied priors on the system light ratio and limb darkening coefficients (see Table \ref{priors_tab}). Although large uncertainties remain, the spectroscopic light ratio was able to break the degeneracy between the surface brightness and radius ratios, which can be a limiting factor in determining individual radii in near-equal mass and brightness systems. \section{Discussion} \label{discussion} The direct determination of fundamental stellar parameters offers an opportunity to test stellar evolution models. The fundamental predictions of these models are the radius and $T_{\rm eff}$\ for a star of given mass and metallicity as a function of age. Ideally, therefore, we would be able to determine the mass, radius and $T_{\rm eff}$\ of both stars as, together, these offer a particularly strong test of stellar evolution theory. However, while the masses and radii of stars in EBs naturally fall out of the joint light curve and radial velocity modeling, estimating effective temperatures is more challenging. 
In \S\ref{SED_sec} we present a method of simultaneously estimating the effective temperature of both stars, and the distance to the system in a manner that makes full and correct use of the light and radial velocity constraints. We then compare our $T_{\rm eff}$'s and distances to empirical $T_{\rm eff}$\ relations and to previous distance estimates to Praesepe. In \S\ref{model_comp} we compare our masses, radii and $T_{\rm eff}$'s to the predictions of stellar evolution models for individual systems and also place the newly characterized EBs in the context of other known low mass EBs and briefly discuss the constraints that can be placed on the age of Praesepe. Through this model comparison, and in \S\ref{sync} where we comment on the synchronization of the new EBs, we discuss several astrophysical implications of our findings. \subsection{Simultaneous determination of effective temperatures and distance from the spectral energy distribution} \label{SED_sec} \begin{figure*} \centering \includegraphics[width=0.43\linewidth]{./AD3814_20170309_11_14_nsteps_50000_nwalkers_196_nburn_25000_PHOENIX_binary_Z_0_00_fit4err_SED.pdf} \includegraphics[width=0.43\linewidth]{./AD2615_20170309_12_31_nsteps_50000_nwalkers_196_nburn_25000_PHOENIX_binary_Z_0_00_fit4err_SED.pdf} \includegraphics[width=0.43\linewidth]{./AD3116_20170402_16_54_nsteps_50000_nwalkers_196_nburn_25000_BTSETTL_binary_Z_0_00_fit4err_SED.pdf} \includegraphics[width=0.43\linewidth]{./AD1508_20170401_19_27_nsteps_50000_nwalkers_196_nburn_25000_PHOENIX_binary_Z_0_00_fit4err_SED.pdf} \caption{Spectral energy distributions (SEDs) of the four new EBs. Clockwise from \emph{top left}: ADs 3814, 2615, 1508 and 3116. Cyan points represent the observed SED, which has been constructed from the broadband magnitudes reported in Table \ref{info_tab} (horizontal error bars indicate each band's spectral coverage). The primary and secondary star spectra are shown in red and blue, respectively. 
Their combined spectrum is shown in black and the hollow magenta triangles show the combined model convolved with the $ugriz$, $V$, $K_{p}$, $JHK$ and WISE 1\,\&\,2 bands. The models shown for ADs 3814, 2615 and 1508 are the PHOENIX v2 models, as these produce a better fit to the data than the BT-SETTL models. However, we show the BT-SETTL models for AD 3116 as the PHOENIX models do not extend to low enough temperatures to explain the secondary component.} \label{SED} \end{figure*} The standard method of estimating $T_{\rm eff}$\ is the following: 1. estimate the primary star $T_{\rm eff}$\ either from system colors, adopting empirical single-star relations, or from (typically) low-resolution spectra, by inferring a combined spectral type (SpT) and converting this into a primary star $T_{\rm eff}$. 2. estimate the secondary star $T_{\rm eff}$\ from the primary $T_{\rm eff}$\ and the light curve surface brightness (and hence temperature) ratio. There are a number of issues with this approach: empirical color-$T_{\rm eff}$\ and SpT-$T_{\rm eff}$\ relations for single stars are not necessarily applicable for all binary systems and the temperature ratio estimated from the light curve is specific to that band, i.e. $(T_{\rm sec}/T_{\rm pri})_{\rm \,band}$; it is not a $T_{\rm eff}$\ ratio. A more direct approach would be to model the system's spectra, but to do so would require high signal-to-noise ratio (SNR) data, which would normally require the co-adding of spectra. While feasible for single-star systems, this is not possible for binaries as there are two varying components. One approach would be to disentangle the spectra into their individual components and model these directly to estimate $T_{\rm eff}$\ of each star \citep[e.g.][]{Czekala15,Czekala17}. However, while powerful, this approach is both time-consuming and computationally intensive, and the distance to the system remains unknown (unless the spectra are also flux calibrated). 
A method of simultaneously determining $T_{\rm eff}$\ of both stars, and the distance to the system, is to model the system's spectral energy distribution (SED). This approach is not computationally intensive, does not rely on empirical single-star relations and readily incorporates priors from the joint light curve and RV modeling. Importantly, with respect to the last point, it correctly interprets the band-specific surface brightness ratio from the light curve modeling. Therefore, we simultaneously estimate $T_{\rm eff}$'s and the distance to ADs 3814, 2615, 3116 and 1508 using the following method: \begin{enumerate} \item SEDs were constructed using broadband magnitudes readily available in the literature. We obtained SDSS $ugriz$ magnitudes from the Sloan Digital Sky Survey Data Release 13, and 2MASS JHK$_{\rm s}$ and WISE data from the NASA/IPAC Infrared Science Archive. These are reported in Table \ref{info_tab} along with their formal measurement uncertainties. \item Model grids of both BT-SETTL \citep{Allard12} and PHOENIX v2 model spectra \citep{Husser13} were convolved with commonly available bandpasses ($ugriz$, UBVRI, 2MASS JHK$_{s}$, \emph{Spitzer}/IRAC, WISE and Kepler) to create a model grid of bandpass fluxes. \item Each SED was modeled by interpolating the model grids in $T_{\rm eff}$--$\log g$ space. We opted to fix the metallicity at $Z=0.0$ given the cluster [Fe/H] value but note that it could also be included in the interpolation. \item The parameters of the fit were the $T_{\rm eff}$, radius and $\log g$ of each star, the distance to the system, the interstellar extinction and the uncertainty scale factor ($T_{\rm pri}$, $T_{\rm sec}$, $R_{\rm pri}$, $R_{\rm sec}$, $\log g_{\rm pri}$, $\log g_{\rm sec}$, $d_{\rm sys}$, $A_{\rm v}$ and $\sigma_{s}$). 
The radii and $\log g$'s have priors from the joint light curve and RV solution, $A_{\rm v}$ had a prior determined for the cluster \citep{Taylor06}, and the temperatures, distance and uncertainty scale factor had uninformative priors. The uncertainties on the magnitudes were initially set by adding the observed variability level to the formal measurement errors in quadrature and a further inflation term ($\sigma_{s}$) was then fit for. \item The posterior parameter space was explored using {\tt emcee} with 50,000 steps and 196 `walkers'. Convergence was assessed using the Gelman–Rubin diagnostic plus examination of individual sections of the chains. A conservative burn-in was estimated comprising the first 25,000 steps for all systems and parameter distributions were derived from the remainder after thinning each chain based on the autocorrelation lengths of each parameter. \item This method also gives the option of placing additional priors in the modeling. For example, one can place a prior, from the light curve modeling, on the surface brightness ratio between the two stars \emph{in the band observed}, rather than incorrectly placing a $T_{\rm eff}$\ ratio constraint. In the case of single-lined systems, radius ratio constraints and surface brightness upper limits can also be placed. \end{enumerate} Both BT-SETTL and PHOENIX v2 model spectra are able to reproduce the broadband magnitudes of ADs 3814, 2615, 3116 and 1508. We note, however, that the BT-SETTL models consistently underpredict the optical $r$ band fluxes, whereas the PHOENIX v2 models predict higher red-optical fluxes in agreement with the data for all sources. Accordingly, in Figure \ref{SED} we show the PHOENIX v2 model fits to the observed broadband magnitudes of ADs 3814, 2615 and 1508 reported in Table \ref{info_tab} (for AD 3116 we show the BT-SETTL fit as the PHOENIX models do not extend to low enough temperatures to explain the secondary brown dwarf component). 
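For reference, the Gelman-Rubin convergence diagnostic used above can be implemented in a few lines (an illustrative numpy sketch; since {\tt emcee} walkers are not fully independent chains, $\hat{R}$ serves as a heuristic check here rather than a formal guarantee):

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin potential scale reduction factor R-hat for one
    parameter, given `chains` of shape (n_chains, n_steps).  Values
    close to 1 indicate convergence."""
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)  # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)
```

Chains sampling the same stationary distribution give $\hat{R} \approx 1$, while chains stuck in different regions of parameter space inflate the between-chain variance and push $\hat{R}$ well above unity.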
The $T_{\rm eff}$\ and distance values derived from our SED fitting procedure with both the BT-SETTL and PHOENIX v2 models are reported in Table \ref{Td_comp_tab} along with the empirical relation predictions of \citet{Mann16} and \citet{David16}. We discuss the effective temperature and distance estimates in the following two sections. \begin{table} \centering \caption{Effective temperatures and distance values for each EB estimated from SED modeling and the empirical relations of \citet{Mann16} and \citet{David16}. } \label{Td_comp_tab} \begin{tabular}{l l c c c } \noalign{\smallskip} \noalign{\smallskip} \hline \hline \noalign{\smallskip} Method\,$^{*}$ & Model\,$^{\dag}$ & \multicolumn{2}{c}{$T_{\rm eff}$\,\,$^{\ddag}$} & Distance \\ & & Primary & Secondary & \\ & & (K) & (K) & ~(pc) \\ \noalign{\smallskip} \hline \noalign{\smallskip} \noalign{\smallskip} \multicolumn{5}{c}{............................................ AD 3814 ......................................} \\ \noalign{\smallskip} SED & PHOENIX & $3193 \pm 17$ & $3085 \pm 21$ & $168.8\,^{+6.1}_{-7.3}$ \\ [-0.3ex] SED & BT-SETTL & $3230 \pm 36$ & $3121 \pm 35$ & $172.1 \pm 9.3$ \\ [-0.3ex] ER & M15 & $3241 \pm 76$ & $3013 \pm 79$ & $172 \pm 12$ \\ [-0.3ex] ER & D16 & 3251 & 3023 & \\ [-0.3ex] \noalign{\smallskip} SED & Combined & $3211\,^{+54}_{-36}$ & $3103\,^{+53}_{-39}$ & $170.4\,^{+11.0}_{-8.9}$ \\ \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} \multicolumn{5}{c}{............................................ 
AD 2615 ......................................} \\ \noalign{\smallskip} SED & PHOENIX & $3132 \pm 21$ & $3112 \pm 20$ & $177.2 \pm 7.9$ \\ [-0.3ex] SED & BT-SETTL & $3172 \pm 37$ & $3150 \pm 37$ & $181 \pm 11$ \\ [-0.3ex] ER & M15 & $3187 \pm 75$ & $3145 \pm 90$ & $177 \pm 15$ \\ [-0.3ex] ER & D16 & 3197 & 3156 & \\ [-0.3ex] \noalign{\smallskip} SED & Combined & $3152\,^{+57}_{-40}$ & $3131\,^{+56}_{-38}$ & $179\,^{+13}_{-10}$ \\ \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} \multicolumn{5}{c}{............................................ AD 3116 ......................................} \\ \noalign{\smallskip} SED & BT-SETTL & $3184 \pm 29$ & $1639 \pm 248$ & --- \\ [-0.3ex] ER & M15 & $3237 \pm 74$ & $880 \pm 217$ & $183 \pm 14$ \\ [-0.3ex] ER & D16 & 3236 & 880 & \\ \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} \multicolumn{5}{c}{............................................ AD 1508 ......................................} \\ \noalign{\smallskip} SED & PHOENIX & $3754 \pm 78$ & $3679 \pm 121$ & $164 \pm 22$ \\ [-0.3ex] SED & BT-SETTL & $3779 \pm 87$ & $3706 \pm 117$ & $167 \pm 23$ \\ [-0.3ex] ER & M15 & $3738 \pm 76$ & $3639 \pm 284$ & $156 \pm 28$ \\ [-0.3ex] ER & D16 & 3746 & 3649 & \\ [-0.3ex] \noalign{\smallskip} SED & Combined & $3767\,^{+99}_{-85}$ & $3693\,^{+122}_{-135}$ & $166\,^{+25}_{-23}$ \\ \noalign{\smallskip} \noalign{\smallskip} \hline \end{tabular} \begin{list}{}{} \item[$^{*}$] SED = spectral energy distribution and ER = empirical relations. \item[$^{\dag}$] M15 = empirical relations from \citet{Mann16}; D16 = \citet{David16} polynomial fit to the color and temperature data presented in \citet{Pecaut13}. \item[$^{\ddag}$] For the two sets of empirical relations, the secondary $T_{\rm eff}$\ is estimated using the GP--EBOP\ temperature ratio in the \emph{K2}\ band as a proxy for the $T_{\rm eff}$\ ratio. 
\end{list} \end{table} \subsubsection{Effective temperatures} We find that the BT-SETTL model temperatures are typically $\sim$40 K hotter than the PHOENIX v2 values, although both sets of temperatures agree to within 1$\sigma$. They are also both in agreement with the temperatures predicted by empirical relations. We note, however, that both sets of empirical relations used the BT-SETTL models to calibrate their temperature scales; caution should therefore be applied when interpreting the fact that the empirical relations agree slightly more closely with the BT-SETTL SED temperatures than with the PHOENIX v2 values. Given the slight offset between the BT-SETTL and PHOENIX temperatures, we opted to combine the two predictions for each star as our final $T_{\rm eff}$\ values. These are reported in Table \ref{Td_comp_tab} as the ``combined'' model and are: $T_{\rm pri} = 3211\,^{+54}_{-36}$ K and $T_{\rm sec} = 3103\,^{+53}_{-39}$ K for AD 3814; $T_{\rm pri} = 3152\,^{+57}_{-40}$ K and $T_{\rm sec} = 3131\,^{+56}_{-38}$ K for AD 2615; and $T_{\rm pri} = 3767\,^{+99}_{-85}$ K and $T_{\rm sec} = 3693\,^{+122}_{-135}$ K for AD 1508. For AD 3116 we used only the BT-SETTL models given the expected temperature of the brown dwarf secondary. While both SED modeling and empirical relations yield consistent results, the SED modeling constraints are significantly tighter (even combining both sets of results), which is perhaps unsurprising given that they are system-specific and capitalize on the joint light curve and RV modeling constraints. Furthermore, interpreting the temperature ratio from the light curve modeling as a genuine $T_{\rm eff}$\ ratio is incorrect in all cases where the observed bandpass does not cover the majority of the integrated spectra of both EB components and the system is not equal-mass. 
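The distinction between a band-specific surface brightness ratio and a bolometric ($T^4$) ratio can be illustrated with blackbodies. In the sketch below, the top-hat 430--900 nm ``bandpass'' is a crude stand-in for the \emph{Kepler}/\emph{K2}\ response (not the true transmission curve), and the temperatures are illustrative mid-M values:

```python
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck(lam, temp):
    """Planck spectral radiance B_lambda(T) in W m^-3 sr^-1."""
    return (2.0 * H * C ** 2 / lam ** 5) / np.expm1(H * C / (lam * KB * temp))

def band_ratio(t_sec, t_pri, lam_lo, lam_hi, n=4096):
    """Secondary/primary surface brightness ratio over a top-hat bandpass."""
    lam = np.linspace(lam_lo, lam_hi, n)
    f_sec, f_pri = planck(lam, t_sec), planck(lam, t_pri)
    # explicit trapezoidal integration (NumPy-version safe)
    integ = lambda f: np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(lam))
    return integ(f_sec) / integ(f_pri)

t_pri, t_sec = 3200.0, 3100.0                 # illustrative mid-M Teffs
j_band = band_ratio(t_sec, t_pri, 430e-9, 900e-9)
j_bol = (t_sec / t_pri) ** 4                  # bolometric (T^4) ratio
# In the optical j_band < j_bol, so reading the band ratio as a T^4
# ratio yields a secondary Teff that is too cool relative to the primary.
t_sec_naive = t_pri * j_band ** 0.25
```

For cool stars observed in the optical, the band ratio is steeper than the bolometric ratio, which is why treating the \emph{K2}-band temperature ratio as a $T_{\rm eff}$\ ratio makes the secondary appear cooler than it really is.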
For both ADs 3814 and 2615, using the \emph{Kepler}\ bandpass temperature ratio as a $T_{\rm eff}$\ ratio (as required when using empirical relations) results in a steeper temperature scale than the light curve modeling results actually imply, i.e. the secondary is predicted to be cooler than expected relative to the primary temperature. This effect is most noticeable in AD 3814 given the more unequal masses in this system. \subsubsection{Distance to Praesepe} Literature distance estimates to Praesepe\ range from $\sim$160--190 pc with the more recent determinations clustering around $\sim$175--185 pc \citep{Mermilliod90,Reglero91,Gatewood94,Percival03,An07,vanLeeuwen09,vanLeeuwen17}. Gaia DR1 parallaxes imply a distance of $182.8 \pm 1.7 \pm 14$ pc \citep[the two uncertainties are the error on the cluster center determination and the observed spread of cluster members on the sky;][]{vanLeeuwen17}. Our distance estimates for ADs 3814, 2615 and 1508 are $170.4\,^{+11.0}_{-8.9}$, $179\,^{+13}_{-10}$ and $166\,^{+25}_{-23}$ pc, respectively, which are all in agreement with the Gaia parallax distance. As AD 3116 is single-lined, we do not have precise radii and surface gravities, so we placed a prior on the distance to the system of $d_{\rm sys} = 182.8\pm14$ pc, and hence do not quote a distance for this system as we essentially recover our prior. Empirical bolometric corrections (BCs) are available for M-dwarfs \citep[e.g.][]{Mann16}. Combining our calculated radii and effective temperatures gives the system bolometric luminosity, which can be converted to absolute bandpass magnitudes using the derived BCs and compared to apparent magnitudes to estimate the distance via the distance modulus (see M15 distances in Table \ref{Td_comp_tab}). We note that these are also in agreement with both our distances and the Gaia cluster value. 
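This bolometric-correction route to the distance can be sketched as follows for a single star (the radius, $T_{\rm eff}$, BC and magnitude values below are illustrative placeholders, not the measured ones; BC is defined here as $M_{\rm bol} - M_{\rm band}$, and for a binary the two stars' band fluxes are summed before forming the apparent magnitude):

```python
import math

SIGMA_SB = 5.670374419e-8        # Stefan-Boltzmann constant [W m^-2 K^-4]
L_SUN, R_SUN = 3.828e26, 6.957e8 # nominal solar luminosity [W] and radius [m]
MBOL_SUN = 4.74                  # IAU bolometric magnitude zero point

def distance_pc(radius_rsun, teff, bc_band, m_band):
    """Distance via L = 4*pi*R^2*sigma*Teff^4 and the distance modulus."""
    lum = 4.0 * math.pi * (radius_rsun * R_SUN) ** 2 * SIGMA_SB * teff ** 4
    m_bol = MBOL_SUN - 2.5 * math.log10(lum / L_SUN)
    m_abs = m_bol - bc_band       # absolute magnitude in the chosen band
    return 10.0 ** ((m_band - m_abs) / 5.0 + 1.0)
```

As a sanity check, a solar twin ($R = 1\,R_{\odot}$, $T_{\rm eff} = 5772$ K) with apparent bolometric magnitude 4.74 is recovered at $\sim$10 pc.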
\subsection{Comparison with stellar evolution models} \label{model_comp} \subsubsection{The newly characterized EBs} \label{model_comp_prae} \begin{figure*} \centering \includegraphics[width=0.32\linewidth]{{./M_R_relation_3814_2615_PARSEC_BHAC15_Z_0_0152}.pdf} \includegraphics[width=0.32\linewidth]{{./T_logg_relation_3814_2615_PARSEC_BHAC15_Z_0_0152}.pdf} \includegraphics[width=0.32\linewidth]{{./T_L_relation_3814_2615_PARSEC_BHAC15_Z_0_0152}.pdf} \includegraphics[width=0.32\linewidth]{{./M_R_relation_3814_2615_PARSEC_Z_0_0174}.pdf} \includegraphics[width=0.32\linewidth]{{./T_logg_relation_3814_2615_PARSEC_Z_0_0174}.pdf} \includegraphics[width=0.32\linewidth]{{./T_L_relation_3814_2615_PARSEC_Z_0_0174}.pdf} \caption{Comparison of the fundamental parameters of ADs 3814 and 2615 (green and blue, respectively) to the PARSEC v1.2 and BHAC15 model isochrones (solid and dashed lines, respectively). The \emph{top row} shows the PARSEC (solid) and BHAC15 (dashed) models in the mass--radius, $T_{\rm eff}$--$\log g$\ and $T_{\rm eff}$--luminosity planes (\emph{left}\ to \emph{right}) at solar metallicity. The \emph{bottom row} shows the same planes but for the PARSEC models at the metallicity of Praesepe\ ($Z=0.0174$). The model isochrones shown are common in all plots and range from 200 Myr (lightest) to 1 Gyr (darkest).} \label{MRT_comp} \end{figure*} With precise masses, radii and effective temperatures for both stars in ADs 3814 and 2615 we can test the predictions of stellar evolution theory for low-mass stars at the beginning of the main sequence phase of evolution. Figure \ref{MRT_comp} compares the fundamental parameters of ADs 3814 and 2615 to the PARSEC v1.2 \citep{Bressan12,Chen14} and BHAC15 \citep{Baraffe15} models. Praesepe\ is slightly metal-rich ([Fe/H]$\approx$0.14) but the closest BHAC15 models in metallicity are of solar composition. 
Therefore, we compare our results with both the solar metallicity PARSEC and BHAC15 models (Figure \ref{MRT_comp}, \emph{top row}) and also compare to the PARSEC models at Praesepe\ metallicity (Figure \ref{MRT_comp}, \emph{bottom row}). In the mass-radius plane (\emph{left panels}) the PARSEC models (solid lines) predict slightly larger radii than BHAC15 (dashed lines) for a given mass, but both models are able to explain the two components of each system with a single isochrone at the 1$\sigma$ level (for PARSEC this is true for both solar and Praesepe\ metallicities). This agreement is encouraging as the masses of AD 3814 are constrained to 2\% for both components and the primary and secondary radii to 1\% and 3\%, respectively. The uncertainties on the masses and radii of AD 2615 are slightly larger, given the system is fainter, and there are fewer eclipses and RVs, but the masses and radii are still both constrained to 6\% for the primary and 5\% for the secondary. We note that both systems are young (sub-Gyr) and display modest H$\alpha$ emission. Therefore, compared to old M dwarfs these Praesepe stars are expected to have relatively strong magnetic fields and high spot coverage. Higher activity levels are thought to result in stars with lower effective temperatures and inflated radii \citep[e.g.][]{Chabrier07,Macdonald14}, and this is often seen in observations \citep[e.g.][]{Feiden12}. Stars in EBs with longer orbital periods appear to show better agreement with the models, but those that do show disagreement tend to be fully convective. This might suggest that for stars with radiative cores and convective outer envelopes, disagreements with models are driven by rotation and magnetic activity but comparisons for fully convective stars are subject to other errors \citep{Feiden15}. 
That these two fully (or almost fully) convective EB systems are active and have relatively short (6--12 day) periods yet agree well with the radius predictions of non-magnetic models presents a further challenge to stellar evolution theory. While the masses and radii appear to be in agreement, including $T_{\rm eff}$\ complicates the picture. We next compare our results in the $T_{\rm eff}$--$\log g$\ plane. The surface gravity, $\log g$, combines the mass and radius information, which agree well for both models, and hence this parameter should also be well explained. In the middle column of Figure \ref{MRT_comp} we see significant discrepancies between the data and models, which points towards problems in the model $T_{\rm eff}$\ scales. The models substantially diverge in their $T_{\rm eff}$\ predictions, with the BHAC15 models being hotter by $\sim$200--250 K across the mid M-dwarf range, and the PARSEC models being perhaps 10--25 K cooler than the data. We note that this is also seen in the mass--$T_{\rm eff}$\ and radius--$T_{\rm eff}$\ planes (not shown here). Both sets of models essentially predict the same $T_{\rm eff}$\ independent of age for $t \gtrsim$400 Myr out to 10 Gyr. Our SED analysis yields $T_{\rm eff}$\ values that are in closer agreement with the PARSEC models than BHAC15, but both models predict a steeper $T_{\rm eff}$\ scale than the data suggest (note that a steeper model $T_{\rm eff}$\ scale manifests as a shallower gradient in $T_{\rm eff}$--$\log g$\ space, as observed). One option is that the model $T_{\rm eff}$\ scales are too steep for mid M-dwarfs, but it could also be that additional phenomena, not included in the models, are responsible for the observed slope difference. Both ADs 3814 and 2615 display starspot modulation in the \emph{K2}\ light curves. 
As neither PARSEC nor BHAC15 include the effect of magnetic fields and starspots, it could be that some of the discrepancy arises from these phenomena rather than the model $T_{\rm eff}$\ scale being too steep per se. Although the primary component of AD 3814 agrees with the PARSEC Praesepe\ metallicity models, the secondary lies above the relation. We can take the primary star as an example to explore the required spot coverage and contrast ratio needed to bring its computed $T_{\rm eff}$\ onto the same expected isochrone as the secondary component. We note that this scenario would require the PARSEC $T_{\rm eff}$\ scale to be underpredicting the true unspotted $T_{\rm eff}$, but this is plausible, so we continue with the exercise nonetheless. Assuming a spot-to-unspotted photospheric temperature ratio of 0.8 \citep[e.g.][]{Grankin98} would require $\sim$25\% spot coverage. To bring the primary and secondary components within 1$\sigma$ would only require a 10\% spot coverage on the primary. We note, however, that the radius posterior medians sit just below the zero-age main sequence predicted by the PARSEC models, and invoking starspots to redress the $T_{\rm eff}$\ slope differences would imply a corresponding decrease in the radii for these stars without spots. To bring the primary and secondary components of AD 3814 into agreement with the BHAC15 models would require spot coverages of 30--40\% on each star. While high, this is consistent with observations of active late-type stars, especially those in close binaries \citep[e.g.][]{ONeal04}. We note that the BHAC15 models track a steeper path in $\log g$--$T_{\rm eff}$\ space beyond 3400 K (corresponding to a shallower $T_{\rm eff}$\ scale). Simply shifting the BHAC15 models cooler by 250 K would bring them into agreement with all four stars. 
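The spot-coverage estimates above follow from requiring the disk-averaged flux of a spotted photosphere to reproduce the measured $T_{\rm eff}$. A minimal version (the temperatures passed in are whatever unspotted and observed values one assumes):

```python
def spot_filling_factor(teff_obs, teff_phot, contrast=0.8):
    """Spot filling factor f such that
    teff_obs^4 = (1 - f) * teff_phot^4 + f * (contrast * teff_phot)^4,
    i.e. the fraction of the visible disk covered by spots at a
    spot-to-photosphere temperature ratio `contrast`."""
    return (1.0 - (teff_obs / teff_phot) ** 4) / (1.0 - contrast ** 4)
```

With a contrast of 0.8, depressing a mid-M $T_{\rm eff}$\ by $\sim$100 K returns $f \approx 0.2$, of the same order as the coverages discussed above. As noted above, simply shifting the BHAC15 scale $\sim$250 K cooler would achieve the same agreement without invoking spots.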
Such a uniform shift is not possible with the PARSEC models, so it remains a valid option that the PARSEC model temperature scale is too steep over the mass range probed ($\sim$0.2--0.4 $M_{\odot}$). However, more precisely characterized M-dwarf binaries are required to confirm this tentative statement. The radii and effective temperatures combine to determine the luminosity of a star. Stellar evolution models are typically found to underpredict the radii and overpredict the effective temperatures of active low-mass stars; however, these combine to essentially recover the correct luminosity. The \emph{right} column of Figure \ref{MRT_comp} shows the $T_{\rm eff}$--luminosity plane. As expected, the BHAC15 models appear to underpredict the luminosity because the model $T_{\rm eff}$\ is too high. The PARSEC models are in better agreement: they are able to follow the general trend of the data and explain the primary component of AD 3814 and the secondary of AD 2615, but the other components are slightly discrepant at the $\sim$1.5$\sigma$ level. \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{./MR_diagram_BHAC15_MS-grey_Orion-blue_UpSco-black_Pleiades-magenta_NGC2264-cyan_Praesepe-green_Hyades-orange_1039_3814_2615_1508_inset_linear.pdf} \caption{Mass--radius relation for detached double-lined eclipsing binaries (EBs) below 1.5 $M_{\odot}$. Data compiled from Table \ref{pms_ebs} and DEBCat (\href{http://www.astro.keele.ac.uk/~jkt/debdata/debs.html}{http://www.astro.keele.ac.uk/$\sim$jkt/debdata/debs.html}). EBs that are members of open clusters are colored while field EBs are shown in gray. The clusters containing well-characterized EBs are Orion (blue), Upper Scorpius (black), NGC\,2264 (cyan), Pleiades (magenta), Hyades (orange), NGC\,1647 (pink), Per OB2 (gold), and the new Praesepe\ EBs (green) presented here. The colored lines represent solar metallicity isochrones of \citet{Baraffe15} from 1 Myr to 1 Gyr (\emph{top}\ to \emph{bottom}). 
Inset (\emph{top}\ \emph{left}) is a zoom on the region containing ADs 3814 and 2615 to allow a closer comparison between the models and current observational constraints for low-mass stars. Here we also include the compilation of known low-mass EBs presented in \citet{Dittmann17}. } \label{MR_plot_full} \end{figure*} \begin{table*} \centering \caption{Published double-lined eclipsing binary systems in sub-Gyr open clusters where both components are below 1.5 M$_{\odot}$, ordered by ascending primary mass.} \label{pms_ebs} \resizebox{\textwidth}{!}{% \begin{tabular}{lccccllll} \hline \hline \noalign{\smallskip} Name & $M_{\rm{pri}}$ & $M_{\rm{sec}}$ & $R_{\rm{pri}}$ & $R_{\rm{sec}}$ & Cluster\,$^{a}$ & Age & Year & Refs. \\ & ($M_\odot$) & ($M_\odot$) & ($R_\odot$) & ($R_\odot$) & & (Myr) & \\ \noalign{\smallskip} \hline \noalign{\smallskip} EPIC 203868608 & $0.02216\pm0.00045$ & $0.02462\pm0.00055$ & $0.2823\pm0.0051$ & $0.2551\pm0.0036$ & Upper Sco & 5--10 & 2016 & 1 \\ 2MJ0535-05 & $0.0572\pm0.0033$ & $0.0366\pm0.0022$ & $0.690\pm0.011$ & $0.540\pm0.009$ & ONC & 1--2 & 2006 & 2,3 \\ EPIC 203710387 & $0.1183\pm0.0028$ & $0.1076\pm0.0031$ & $0.417\pm0.010$ & $0.450\pm0.012$ & Upper Sco & 5--10 & 2015 & 4,1 \\ JW\,380 & $0.262\pm0.025$ & $0.151\pm0.013$ & $1.189\pm0.175$ & $0.897\pm0.170$ & ONC & 1--2 & 2007 & 5 \\ HCG 76 & $0.2768\pm0.0072$ & $0.3020\pm0.0073$ & $0.319\pm0.036$ & $0.34\pm0.11$ & Pleiades & 125 & 2016 & 6 \\ UScoCTIO 5 & $0.3336\pm0.0022$ & $0.3200\pm0.0022$ & $0.862\pm0.012$ & $0.852\pm0.013$ & Upper Sco & 5--10 & 2015 & 7,1 \\ Par\,1802 & $0.391\pm0.032$ & $0.385\pm0.032$ & $1.73\pm0.015$ & $1.62\pm0.015$ & ONC & 1--2 & 2008 & 8,9 \\ MHO 9 & $0.41\pm0.18$ & $0.172\pm0.069$ & $0.46\pm0.11$ & $0.321\pm0.060$ & Pleiades & 125 & 2016 & 6 \\ 2MJ0446+19 & $0.47\pm0.05$ & $0.19\pm0.02$ & $0.56\pm0.02$ & $0.21\pm0.01$ & NGC 1647 & 150 & 2006 & 10 \\ CoRoT\,223992193 & $0.668\pm0.012$ & $0.4953\pm0.0073$ & $1.295\pm0.040$ & $1.107\pm0.044$ & NGC 2264 & 
3--6 & 2014 & 11 \\ MML\,53 & $0.994\pm0.030$ & $0.857\pm0.026$ & \multicolumn{2}{c}{$2.201\pm0.071$\,$^b$} & UCL & 15 & 2010 & 12,13 \\ HD144548 & $0.984\pm0.007$ & $0.944\pm0.017$ & $1.319\pm0.010$ & $1.330\pm0.010$ & Upper Sco & 5--10 & 2015 & 14 \\ & \multicolumn{2}{c}{$1.44\pm0.04$\,$^c$} & \multicolumn{2}{c}{$2.41\pm0.03$\,$^c$} & & & & \\ V1174\,Ori & $1.006\pm0.013$ & $0.7271\pm0.0096$ & $1.338\pm0.011$ & $1.063\pm0.011$ & Ori OB 1c & 5--10 & 2004 & 15 \\ V818 Tau & $1.06\pm0.01$ & $0.90\pm0.02$ & $0.76\pm0.01$ & $0.77\pm0.01$ & Hyades & 600--800 & 2002 & 16 \\ RXJ\,0529.4$+$0041A & $1.27\pm0.01$ & $0.93\pm0.01$ & $1.44\pm0.10$ & $1.35\pm0.10$ & Ori OB 1a & 7--13 & 2000 & 17,18,19 \\ NP Per & $1.3207\pm0.0087$ & $1.0456\pm0.0046$ & $1.372\pm0.013$ & $1.229\pm0.013$ & Per OB 2 & 6--15 & 2016 & 20 \\ ASAS\,J0528$+$03 & $1.375\pm0.028$ & $1.329\pm0.020$ & $1.83\pm0.07$ & $1.73\pm0.07$ & Ori OB 1a & 7--13 & 2008 & 21 \\ \noalign{\smallskip} \noalign{\smallskip} \multicolumn{9}{c}{\emph{Praesepe systems published in this paper}} \\ \noalign{\smallskip} AD 3814 & $0.3813\pm0.0074$ & $0.2022\pm0.0045$ & $0.3610\pm0.0033$ & $0.2256\pm0.0063$ & Praesepe & 600--800 & 2017 & this work \\ AD 2615 & $0.212\pm0.012$ & $0.255\pm0.013$ & $0.233\pm0.013$ & $0.267\pm0.014$ & Praesepe & 600--800 & 2017 & this work \\ AD 1508 & $0.45\pm0.19$ & $0.53\pm0.21$ & $0.548\pm0.099$ & $0.45\pm0.10$ & Praesepe & 600--800 & 2017 & this work \\ \noalign{\smallskip} \hline \end{tabular} } \begin{list}{}{} \item[\textbf{Notes.}]Where asymmetric error bars were reported in the original papers we quote the larger of the two here. \item[$^a$]ONC = Orion Nebula Cluster; UCL = Upper Centaurus Lupus; Upper Sco = Upper Scorpius. \item[$^b$]Radius sum (individual radii have not been determined). \item[$^c$]Tertiary component that is also eclipsed. \item[\textbf{References.}] 1. \citet{David16}; 2. \citet{Stassun06}; 3. \citet{Stassun07}; 4. \citet{Lodieu15}; 5. \citet{Irwin07}; 6. 
\citet{David16a}; 7. \citet{Kraus15}; 8. \citet{Cargile08}; 9. \citet{Stassun08}; 10. \citet{Hebb06}; 11. \citet{Gillen14}; 12. \citet{Hebb10}; 13. \citet{Hebb11}; 14. \citet{Alonso15}; 15. \citet{Stassun04}; 16. \citet{Torres02}; 17. \citet{Covino00}; 18. \citet{Covino01}; 19. \citet{Covino04}; 20. \citet{Lacy16}; 21. \citet{Stempels08}. \end{list} \end{table*} \subsubsection{Updated mass--radius relation for low-mass EBs} Figure \ref{MR_plot_full} shows the mass--radius relation for detached double-lined eclipsing binaries with $M < 1.5$ $M_{\odot}$. Field EBs are shown in gray while members of young open clusters -- including our newly discovered systems reported here -- are colored by cluster (see figure caption for color scheme). The fundamental parameters of the known cluster EBs with both components below 1.5 $M_{\odot}$ are reported in Table \ref{pms_ebs}. The three double-lined Praesepe\ systems reported here make a significant contribution to known cluster EBs, increasing the total number below 1.5 $M_{\odot}$ by almost 20\% (and increasing the known double-lined M-dwarf EB population by 30\%). Furthermore, ADs 3814 and 2615 add precise constraints for stellar evolution models at the zero-to-early age main sequence for low-mass stars. \subsubsection{Age of Praesepe} \label{age_discussion} As briefly discussed in the introduction, the age of Praesepe\ has been debated in recent years. It has typically been estimated at $\sim$600--650 Myr by isochrone fitting, often through association with the Hyades \citep[e.g.][]{Perryman98,Salaris04,Fossati08}. However, \citet{Brandt15} found that including rotation in stellar models implied an age of $790\pm60$ Myr (2$\sigma$ uncertainty) for Praesepe, which is in agreement with their Hyades age of $\sim$750--800 Myr \citep{Brandt15a}. This older age estimate arises from the fact that rotation results in longer main sequence lifetimes and hence older ages for post-turnoff populations. 
This result was corroborated by \citet{David15}, who also included the effect of stellar rotation in their comparison between stellar atmospheric parameters (derived from Str\"{o}mgren photometry) and theoretical isochrones. Somewhat orthogonal to the ages inferred from radiative properties such as $L$ and $T$, the ages of EB systems can be determined through comparison of their masses and radii with stellar evolution models (see section \ref{model_comp_prae}). Unfortunately, over the mass range probed by our EBs, Praesepe, at several hundred Myr, sits roughly at the zero-age main sequence. As M-dwarfs evolve slowly, their increase in radius as they evolve through their first several Gyr on the main sequence is correspondingly small. Therefore, using our masses and radii to independently estimate the age of Praesepe\ would carry significant uncertainty and would not provide useful input to the current 600 vs. 800 Myr age discussion. \subsection{Circularization and synchronization} \label{sync} \subsubsection{Tidal circularization} In this section we compare our findings for the new EB systems to the expectations for tidal circularization and spin-orbit synchronization at the age of Praesepe. The binaries presented here are particularly valuable benchmarks for studies of tidal dissipation timescales in close binaries, as they are at or near the beginning of their main sequence evolution. \citet{Zahn89} posited that essentially all tidal circularization should occur during the PMS phase, when stars are larger and have deeper convective envelopes. If this theory were correct, all late-type main sequence binaries with periods less than $\sim$8 days should be circularized. Binaries with longer orbital periods would retain their primordial eccentricities and experience negligible tidal circularization after the PMS phase. 
However, \citet{Meibom05} used observations of binaries in coeval stellar populations to clearly show that tidal dissipation proceeds to circularize orbits well after the PMS stage (see their Fig. 9). While standard equilibrium tide theory \citep{Zahn89a, ClaretCunha97} and dynamical tide theory \citep{Witte02} do predict exactly this trend, binaries are generally observed to circularize more quickly than theory predicts (i.e. tidal dissipation is a more efficient process than expected). The binary population of Praesepe and the Hyades is a conspicuous outlier to this trend, indicating agreement with theory but significant tension with observations of all other well-characterized clusters. However, \citet{Zahn89} cautioned that two short-period eccentric binaries in Praesepe and the Hyades (KW 181 and VB 121) are single-lined systems, in which the secondaries could possibly be white dwarfs, meaning that the standard theory of tidal dissipation would not apply. Ignoring these two systems, those authors estimated that binaries with periods below 8.5--11.9 days should be circularized by the age of the Hyades, and by extension Praesepe. Our findings for AD 3814 and AD 2615 corroborate the notion that the circularization period for Praesepe is larger than previously measured, and to our knowledge AD 2615 is the longest-period circular binary in either Praesepe or the Hyades. Revisiting the analysis of \citet{Meibom05} including these two systems would bring the observations for Praesepe into better agreement with those of other clusters, in the sense that binaries of a given age are observed to be circular out to longer periods than theory predicts. As for AD 3116, tidal dissipation proceeds differently for extreme mass ratio systems \citep{Ogilvie14}, and so we caution against drawing conclusions based on its relatively high eccentricity ($e=0.15$) given its short orbital period of $<2$ days. 
In fact, the recently discovered transiting brown dwarf in the significantly older Ruprecht 147 cluster similarly exhibits a relatively high eccentricity and short orbital period \citep{Nowak16}. Finally, we note that the transition between circular and eccentric binaries in a coeval stellar population (as demarcated by either the ``cutoff period'', i.e. the longest period circular binary, or preferably by the ``tidal circularization period'') can in principle be used to estimate the age of the stellar population \citep{MathieuMazeh88}. Given sufficient data and a well-calibrated relation amongst clusters, the method could also be extended to close binaries in the field to provide an upper limit in age if the binary is eccentric, or a lower limit if it is circular. \subsubsection{Spin-orbit synchronization} The theoretical outcome of tidal evolution within a binary system is a circular orbit and a state of double synchronous rotation with spin axes aligned to the orbital angular momentum vector. However, as noted by \citet{Ogilvie14}, this theoretical prediction has never been observationally verified for a binary star system. This is in part due to the difficulty of measuring stellar rotation, particularly for both components of a binary, and the need for an eclipsing system to precisely measure obliquities. Binaries for which the rotation period of one or more component can be measured, particularly within coeval stellar populations, are thus critical benchmarks for tidal synchronization studies. For the four binaries discussed here, one appears to be nearly synchronized (AD 1508) while the other three appear to be rotating subsynchronously (i.e. at a frequency lower than the orbital frequency). This observation is based on the measured $P_\mathrm{spot}/P_\mathrm{orb}$ ratios of 1.25, 1.08, and 1.14 for ADs 3814, 2615, and 3116, respectively. 
On the surface, this is surprising given that 1) the expected synchronization timescales are much smaller than the cluster age, and 2) tidal synchronization is expected to occur more quickly than circularization in close binaries \citep{Zahn77, Hut81}, yet two of the subsynchronous binaries are on nearly circular orbits (ADs 3814 and 2615). It is important to note that photometric variations indicate a star's surface rotation rate; the spin of the interior layers is not measured and is known only to the extent that the interior can be assumed to be coupled to the surface. For the binaries with mass ratios near unity, both stars are contributing to the observed brightness modulations, but in the absence of multiple distinct peaks in a periodogram we infer the modulation period to indicate the rotation of the primary. Notably, surface differential rotation can lead to configurations in which the spin of equatorial regions is synchronized with the orbit while higher latitudes may be rotating more slowly. Such a scenario was suggested to explain observations of the late-type EB HII 2407 in the Pleiades \citep{David15b}. Indeed, there is observational evidence \citep{Barnes15} and theoretical motivation \citep{Schuessler92, Granzer00} for polar spots on rapidly rotating, fully convective stars. However, unlike the Pleiades EB, the binaries presented here exhibit much larger discrepancies between the rotation and orbital periods. The measured rates of differential rotation have been observed to decrease strongly with decreasing stellar temperature \citep{Barnes05,CollierCameron07}. Using the empirical formula of \citet{CollierCameron07}, the expected rates of differential rotation for the stars considered here are all below $10^{-3}$ rad~d$^{-1}$. 
If we assume the orbits are synchronized at the equator and that polar spots are responsible for the measured rotation periods, then the implied rates of differential rotation for ADs 3814, 2615, and 3116 would be 0.21, 0.04, and 0.38 rad~d$^{-1}$, respectively. These values are significantly higher than the differential rotation rates measured for fully convective stars \citep{Morin08,Reinhold13,Davenport15}. Our observations therefore indicate either: 1) tidal synchronization proceeds more slowly in fully convective stars than the theory of equilibrium tides predicts, 2) magnetic braking is currently playing a more important role in the spin evolution of these binaries than tidal forces, or 3) differential rotation in fully convective stars can be much more important than previously appreciated. We consider the last explanation to be the least plausible. Subsynchronous rotation has previously been observed for short-period binaries in the younger M35 and M34 clusters, aged $\sim$150 Myr and $\sim$250 Myr, respectively \citep{Meibom06}. As those authors noted, this result is in direct contradiction with the expectations of tidal evolution on the main sequence, which predict that binaries with periods near or below the circularization period (as AD 3814 and AD 2615 apparently are) should rotate pseudosynchronously (synchronized with the instantaneous orbital angular velocity at periastron) or slightly \textit{supersynchronously}. We conclude by noting that current theories of tidal evolution carry significant and under-explored uncertainties. In particular, theory for solar-type and early-type stars is more developed than that for fully convective stars. Tidal dissipation is expected to be more efficient, and thus circularization more rapid, in stars with convective outer layers \citep{Zahn75}, which is supported observationally \citep{vanEylen16}. 
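The implied shear values quoted in this section follow directly from $\Delta\Omega = \Omega_{\rm eq} - \Omega_{\rm pole} = 2\pi/P_{\rm orb} - 2\pi/P_{\rm spot}$:

```python
import math

def implied_shear(p_orb, p_spot):
    """Equator-pole angular velocity difference [rad/day], assuming the
    equator is tidally synchronized (Omega_eq = 2*pi/P_orb) while polar
    spots rotate at the photometric period P_spot."""
    return 2.0 * math.pi * (1.0 / p_orb - 1.0 / p_spot)

# AD 3814: P_orb = 6.0157 d, P_spot/P_orb = 1.25  ->  ~0.21 rad/day
shear_3814 = implied_shear(6.0157, 1.25 * 6.0157)
# AD 2615: P_orb = 11.6 d,  P_spot/P_orb = 1.08   ->  ~0.04 rad/day
shear_2615 = implied_shear(11.6, 1.08 * 11.6)
```

These reproduce the 0.21 and 0.04 rad~d$^{-1}$ figures above, which sit orders of magnitude above the $\lesssim 10^{-3}$ rad~d$^{-1}$ shear expected for such cool stars from the \citet{CollierCameron07} scaling.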
\section{Conclusions} \label{conclusions} We have presented photometric time-series data from \emph{Kepler}/\emph{K2}\ and follow-up high-dispersion spectroscopy from Keck/HIRES in order to characterize four new EB systems in the sub-Gyr-old Praesepe\ cluster. These new discoveries increase the number of characterized EBs below 1.5 $M_{\odot}$ in sub-Gyr open clusters by 20\%, and increase the known population of cluster EBs with masses $M \lesssim 0.6 \, M_\odot$ by 40\%. We analyze these low-mass EBs with GP--EBOP, a new multi-purpose Gaussian process eclipsing binary and transiting exoplanet model, to determine model-independent stellar masses and radii. We present an updated method of simultaneously determining the effective temperatures of both stars as well as the distance to an EB by modeling the system's spectral energy distribution. This approach capitalizes on the posterior constraints from the joint light curve and RV modeling to break existing degeneracies and also correctly interprets the light curve model's band-specific surface brightness ratio, rather than using it to approximate an effective temperature ratio. We determine the masses of AD 3814 to 2\% precision and the primary and secondary radii to 1\% and 3\%, respectively. The masses and radii of AD 2615 are both determined to 6\% precision for the primary and to 5\% for the secondary. Together with effective temperatures determined to a typical precision of $\pm$50 K, we test the PARSEC v1.2 and BHAC15 stellar evolution models. Overall, the EB parameters are most consistent with the PARSEC models, primarily because the BHAC15 temperature scale is too hot over the mass--age range probed. Both the PARSEC and BHAC15 models are able to explain the masses and radii of ADs 3814 and 2615 with a single isochrone in the range $\sim$400--1000 Myr, but predicting $T_{\rm eff}$\ proves more challenging. 
Our SED-derived $T_{\rm eff}$\ values, which are consistent with those derived from empirical M-dwarf relations, are better matched to the PARSEC models. We find that the BHAC15 models predict temperatures $T_{\rm eff}$\ $\sim$100--300 K hotter than our data, whereas the PARSEC models lie in the correct $T_{\rm eff}$\ range. However, both models predict a steeper $T_{\rm eff}$\ track over the mass range $M \sim 0.2-0.4$ $M_{\odot}$ than our data suggest. More M-dwarf EBs with precise $T_{\rm eff}$\ values on the main sequence are required to confirm this tentative statement. Our luminosities are in agreement with the PARSEC model predictions but we find that the BHAC15 models overpredict this parameter primarily due to their high $T_{\rm eff}$\ values. While both ADs 3814 and 2615 possess precise solutions, we note that AD 3814 would benefit from a more detailed modeling of the individual eclipses (especially incorporating a full starspot model), and AD 2615 would benefit from additional RVs to tighten the existing solution. We present a preliminary solution for a third detached double-lined system, AD 1508. The \emph{K2}\ light curve displays clear, but shallow, eclipses on both stars and the three Keck/HIRES RVs we obtained show the two stars not to be rapid rotators. This system is therefore amenable to precise characterization but would require further RV measurements throughout the orbital phase and may also benefit from targeted eclipse monitoring with moderate-aperture ground-based telescopes. The final system, AD 3116, comprises a mid M-dwarf primary star with a transiting brown dwarf companion ($M$$\sim$54 $M_{\rm Jup}$). There are only $\sim$20 transiting brown dwarf systems known: AD 3116 is one of only three systems where the primary is an M-dwarf, and is only the second transiting brown dwarf system discovered in an open cluster (and the first younger than a Gyr). It will therefore be a favorable target for future transiting brown dwarf studies. 
Finally, we find that ADs 3814 and 2615, which have orbital periods of 6.0 and 11.6 days, are circularized but not synchronized, with at least one component rotating sub-synchronously. This contradicts the expectations of tidal evolution, which would predict synchronization to proceed faster than circularization in these systems and for it to have been achieved by the age of Praesepe. Our observations therefore suggest that either tidal synchronization proceeds more slowly in fully convective stars than the theory of equilibrium tides predicts, or magnetic braking is currently playing a more important role in the spin evolution of these binaries than tidal forces. \begin{table*} \centering \footnotesize \caption[Model parameters]{Fitted and derived parameters of the models applied to AD~3814, AD~2615, AD~3116 and AD~1508.} \label{lc_model_tab} \resizebox{\textwidth}{!}{% \begin{tabular}{l l l c c c c } \noalign{\smallskip} \noalign{\smallskip} \hline \hline \noalign{\smallskip} Parameter & Symbol & Unit & \multicolumn{4}{c}{Value} \\ & & & \,\,\,\,\,AD~3814 & \,\,\,\,\,AD~2615 & \,\,\,\,\,AD~3116 & \,\,\,\,\,AD~1508 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \noalign{\smallskip} \multicolumn{7}{c}{\emph{Eclipse parameters}} \\ \noalign{\smallskip} \noalign{\smallskip} Sum of radii & $(R_{\rm{pri}} + R_{\rm{sec}})/ a$ & & $\,\,\,\,\,0.05044\,^{+0.00069}_{-0.00055}$ & $\,\,\,\,\,0.02979 \pm 0.00034$ & $\,\,\,\,\,0.0845\,^{+0.0066}_{-0.0053}$ & $\,\,\,\,\,0.1774\,^{+0.0066}_{-0.0076}$ \\ [0.9 ex] Radius ratio & $R_{\rm{sec}} / R_{\rm{pri}}$ & & $\,\,\,\,\,0.624\,^{+0.017}_{-0.010}$ & $\,\,\,\,\,1.15 \pm 0.11$ & $\,\,\,\,\,0.3599\,^{+0.0094}_{-0.0128}$ & $\,\,\,\,\,0.83 \pm 0.24$ \\ [0.9 ex] Orbital inclination & $i$ & $^{\circ}$ & $\,\,\,\,\,89.177\,^{+0.051}_{-0.064}$ & $\,\,\,\,\,88.996 \pm 0.013$ & $\,\,\,\,\,88.41\,^{+0.49}_{-0.42}$ & $\,\,\,\,\,80.54\,^{+0.46}_{-0.39}$ \\ [0.9 ex] Orbital period & $P$ & days & $\,\,\,\,\,6.015717 \pm 0.000013$ & 
$\,\,\,\,\,11.615254 \pm 0.000073$ & $\,\,\,\,\,1.9827960 \pm 0.0000060$ & $\,\,\,\,\,1.5568370\,^{+0.0000100}_{-0.0000090}$ \\ [0.9 ex] Time of eclipse center & $T_{\rm{prim}}$ & BJD & $\,\,\,\,\,2457178.982842 \pm 0.000059$ & $\,\,\,\,\,2457176.63998 \pm 0.00019$ & $\,\,\,\,\,2457178.817792 \pm 0.000080$ & $\,\,\,\,\,2457147.26784 \pm 0.00026$ \\ [0.9 ex] & $\sqrt{e} \cos \omega$ & & $-0.0301\,^{+0.0103}_{-0.0057}$ & $\,\,\,\,\,0.0337\,^{+0.0067}_{-0.0128}$ & $\,\,\,\,\,0.364\,^{+0.016}_{-0.026}$ & $-0.0081\,^{+0.0069}_{-0.0094}$ \\ [0.9 ex] & $\sqrt{e} \sin \omega$ & & $\,\,\,\,\,0.031 \pm 0.034$ & $\,\,\,\,\,0.020 \pm 0.052$ & $\,\,\,\,\,0.04 \pm 0.14$ & $\,\,\,\,\,0.010\,^{+0.049}_{-0.041}$ \\ [0.9 ex] Central surface brightness ratio & $J_{\rm{K2}}$ & & $\,\,\,\,\,0.748 \pm 0.034$ & $\,\,\,\,\,0.950 \pm 0.060$ & $\,\,\,\,\,0.0051\,^{+0.0049}_{-0.0036}$ & $\,\,\,\,\,0.90\,^{+0.30}_{-0.21}$ \\ [0.9 ex] \noalign{\smallskip} \noalign{\smallskip} Primary linear LDC\,* & $u_{\rm{pri~K2}}$ & & $\,\,\,\,\,0.54 \pm 0.20$ & $\,\,\,\,\,0.51 \pm 0.13$ & $\,\,\,\,\,0.66 \pm 0.16$ & $\,\,\,\,\,0.48 \pm 0.14$ \\ Primary non-linear LDC\,* & $u'_{\rm{pri~K2}}$ & & $\,\,\,\,\,0.24 \pm 0.29$ & $\,\,\,\,\,0.31 \pm 0.22$ & $\,\,\,\,\,0.03 \pm 0.23$ & $\,\,\,\,\,0.22 \pm 0.24$ \\ Secondary linear LDC\,* & $u_{\rm{sec~K2}}$ & & $\,\,\,\,\,0.39 \pm 0.11$ & $\,\,\,\,\,0.41 \pm 0.13$ & $\,\,\,\,\,0.43 \pm 0.13$ & $\,\,\,\,\,0.44 \pm 0.14$ \\ Secondary non-linear LDC\,* & $u'_{\rm{sec~K2}}$ & & $\,\,\,\,\,0.12 \pm 0.21$ & $\,\,\,\,\,0.04\,^{+0.25}_{-0.18}$ & $\,\,\,\,\,0.12 \pm 0.26$ & $\,\,\,\,\,0.12 \pm 0.23$ \\ \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} % \multicolumn{7}{c}{\emph{Out-of-eclipse variability parameters}} \\ \noalign{\smallskip} \noalign{\smallskip} Amplitude & $A_{\rm{K2}}$ & \% & $\,\,\,\,\,0.00785\,^{+0.00077}_{-0.00057}$ & $\,\,\,\,\,0.0179 \pm 0.0023$ & $\,\,\,\,\,0.0103\,^{+0.0033}_{-0.0020}$ & 
$\,\,\,\,\,0.136\,^{+0.085}_{-0.136}$ \\ [0.9 ex] Timescale of SqExp term & $l_{\rm{SE ~ K2}}$ & days & $\,\,\,\,\,8.55\,^{+0.43}_{-0.36}$ & $\,\,\,\,\,16.97 \pm 0.97$ & $\,\,\,\,\,7.32\,^{+0.90}_{-0.67}$ & --- \\ [0.9 ex] Scale factor of ExpSine2 term & $\Gamma_{\rm{ESS ~ K2}}$ & days & $\,\,\,\,\,11.5 \pm 4.0$ & $\,\,\,\,\,9.56\,^{+0.96}_{-0.82}$ & $\,\,\,\,\,0.55^{+0.27}_{-0.20}$ & --- \\ [0.9 ex] Period of ExpSine2 term & $P_{\rm{ESS ~ K2}}$ & days & $\,\,\,\,\,7.375\,^{+0.059}_{-0.069}$ & $\,\,\,\,\,12.150\,^{+0.074}_{-0.062}$ & $\,\,\,\,\,2.252 \pm 0.020$ & --- \\ [0.9 ex] Timescale of Matern32 term & $l_{\rm{M32 ~ K2}}$ & days & --- & --- & --- & $\,\,\,\,\,223.6 \pm 2.3$ \\ [0.9 ex] White noise scale factor & $\sigma_{\rm{K2}}$ & & $\,\,\,\,\,1.901 \pm 0.039$ & $\,\,\,\,\,1.448 \pm 0.02$ & $\,\,\,\,\,1.363 \pm 0.019$ & $\,\,\,\,\,1.329\,^{+0.031}_{-0.023}$ \\ [0.9 ex] \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} % \multicolumn{7}{c}{\emph{Radial velocity parameters}} \\ \noalign{\smallskip} \noalign{\smallskip} Systemic velocity & $V_{\rm{sys}}$ & km\,s$^{-1}$ & $\,\,\,\,\,33.60 \pm 0.24$ & $\,\,\,\,\,34.91 \pm 0.39$ & $\,\,\,\,\,34.93\,^{+0.61}_{-0.53}$ & $\,\,\,\,\,33.1 \pm 1.7$ \\ [0.9 ex] Primary RV semi-amplitude & $K_{\rm{pri}}$ & km\,s$^{-1}$ & $\,\,\,\,\,33.90 \pm 0.39$ & $\,\,\,\,\,39.86\,^{+0.80}_{-0.88}$ & $\,\,\,\,\,18.66\,^{+0.95}_{-1.00}$ & $\,\,\,\,\,98 \pm 15$ \\ [0.9 ex] Secondary RV semi-amplitude & $K_{\rm{sec}}$ & km\,s$^{-1}$ & $\,\,\,\,\,63.93 \pm 0.49$ & $\,\,\,\,\,33.12\,^{+0.83}_{-0.89}$ & --- & $\,\,\,\,\,84 \pm 14$ \\ [0.9 ex] HIRES jitter term & $\sigma_{\rm{HIRES}}$ & km\,s$^{-1}$ & $\,\,\,\,\,0.50\,^{+0.37}_{-0.30}$ & $\,\,\,\,\,0.95\,^{+0.48}_{-0.35}$ & $\,\,\,\,\,0.93\,^{+1.18}_{-0.62}$ & $\,\,\,\,\,1.6\,^{+2.5}_{-1.2}$ \\ [0.9 ex] \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} % 
\multicolumn{7}{c}{\emph{Fundamental parameters}} \\ \noalign{\smallskip} \noalign{\smallskip} Primary mass & $M_{\rm pri}$ & M$_{\odot}$ & $\,\,\,\,\,0.3813 \pm 0.0074$ & $\,\,\,\,\,0.212 \pm 0.012$ & $\,\,\,\,\,0.276\pm0.020$\,$^{a}$ & $\,\,\,\,\,0.45^{+0.19}_{-0.14}$ \\ [0.9 ex] Secondary mass & $M_{\rm sec}$ & M$_{\odot}$ & $\,\,\,\,\,0.2022 \pm 0.0045$ & $\,\,\,\,\,0.255 \pm 0.013$ & $\,\,\,\,\,0.0517\pm0.0041 ~~ (54.2\pm4.3)\,^{b,c}$ & $\,\,\,\,\,0.53^{+0.22}_{-0.16}$ \\ [0.9 ex] Primary radius & $R_{\rm pri}$ & R$_{\odot}$ & $\,\,\,\,\,0.3610 \pm 0.0033$ & $\,\,\,\,\,0.233 \pm 0.013$ & $0.29 \pm 0.08$\,$^{d}$ & $\,\,\,\,\,0.549\,^{+0.099}_{-0.082}$ \\ [0.9 ex] Secondary radius & $R_{\rm sec}$ & R$_{\odot}$ & $\,\,\,\,\,0.2256\,^{+0.0063}_{-0.0049}$ & $\,\,\,\,\,0.267 \pm 0.014$ & $0.10\pm0.03 ~~ (1.02\pm0.28)\,^{c,e}$ & $\,\,\,\,\,0.454\,^{+0.094}_{-0.101}$ \\ [0.9 ex] Primary effective temperature & $T_{\rm pri}$ & K & $\,\,\,\,\,3211\,^{+54}_{-36}$ & $\,\,\,\,\, 3152\,^{+57}_{-40}$ & $3191 \pm 27$ & $3767\,^{+99}_{-85}$ \\ [0.9 ex] Secondary effective temperature & $T_{\rm sec}$ & K & $\,\,\,\,\,3103\,^{+53}_{-39}$ & $\,\,\,\,\,3131\,^{+56}_{-38}$ & $1669\,^{+244}_{-258}$ & $3693\,^{+122}_{-135}$ \\ [0.9 ex] Mass sum & $M_{\rm pri} + M_{\rm sec}$ & M$_{\odot}$ & $\,\,\,\,\,0.583 \pm 0.011$ & $\,\,\,\,\,0.468 \pm 0.023$ & --- & $\,\,\,\,\,0.98\,^{+0.38}_{-0.29}$ \\ [0.9 ex] Radius sum & $R_{\rm pri} + R_{\rm sec}$ & R$_{\odot}$ & $\,\,\,\,\,0.5868\,^{+0.0084}_{-0.0073}$ & $\,\,\,\,\,0.4991\,^{+0.0096}_{-0.0102}$ & --- & $\,\,\,\,\,1.00 \pm 0.13$ \\ [0.9 ex] Semi-major axis & $a$ & R$_{\odot}$ & $\,\,\,\,\,11.630 \pm 0.073$ & $\,\,\,\,\,16.75 \pm 0.28$ & --- & $\,\,\,\,\,5.67 \pm 0.65$ \\ [0.9 ex] Eccentricity & $e$ & & $\,\,\,\,\,0.00194\,^{+0.00253}_{-0.00057}$ & $\,\,\,\,\,0.00254\,^{+0.00406}_{-0.00078}$ & $\,\,\,\,\,0.146\,^{+0.024}_{-0.016}$ & $\,\,\,\,\,0.00108\,^{+0.00347}_{-0.00078}$ \\ [0.9 ex] Longitude of periastron & $\omega$ & $^{\circ}$ & 
$\,\,\,\,\,116.0 \pm 39.0$ & $\,\,\,\,\,27.0 \pm 69.0$ & $\,\,\,\,\,5 \pm 20$ & $\,\,\,\,\,91.0 \pm 29.0$ \\ [0.9 ex] Primary surface gravity & $\log g_{\rm pri}$ & (cm\,s$^{-2}$) & $\,\,\,\,\,4.9040\,^{+0.0073}_{-0.0064}$ & $\,\,\,\,\,5.031 \pm 0.048$ & --- & $\,\,\,\,\,4.61 \pm 0.13$ \\ [0.9 ex] Secondary surface gravity & $\log g_{\rm sec}$ & (cm\,s$^{-2}$) & $\,\,\,\,\,5.037\,^{+0.019}_{-0.026}$ & $\,\,\,\,\,4.993\,^{+0.042}_{-0.035}$ & --- & $\,\,\,\,\,4.84 \pm 0.20$ \\ [0.9 ex] Primary synchronized velocity & $V_{\rm pri ~ sync}$ & km\,s$^{-1}$ & $\,\,\,\,\,3.036 \pm 0.028$ & $\,\,\,\,\,1.014 \pm 0.058$ & --- & $\,\,\,\,\,17.8 \pm 3.2$ \\ [0.9 ex] Secondary synchronized velocity & $V_{\rm sec ~ sync}$ & km\,s$^{-1}$ & $\,\,\,\,\,1.898\,^{+0.053}_{-0.041}$ & $\,\,\,\,\,1.162\,^{+0.050}_{-0.059}$ & --- & $\,\,\,\,\,14.7 \pm 3.3$ \\ [0.9 ex] Synchronization timescale & $t_{\rm sync}$ & Myr & $\,\,\,\,\,27.29 \pm 0.49$ & $\,\,\,\,\,152.6 \pm 4.7$ & --- & $\,\,\,\,\,0.0510\,^{+0.0120}_{-0.0090}$ \\ [0.9 ex] Circularization timescale & $t_{\rm circ}$ & Gyr & $\,\,\,\,\,17.30 \pm 0.10$ & $\,\,\,\,\,467.6 \pm 1.5$ & --- & $\,\,\,\,\,0.01040\,^{+0.00033}_{-0.00013}$ \\ [0.9 ex] \noalign{\smallskip} \noalign{\smallskip} \noalign{\smallskip} \hline \end{tabular}% } \begin{list}{}{} \item[* LDC = limb darkening coefficient] \item[$^{a}$ Derived from the empirical relations of \citet{Benedict16}.] \item[$^{b}$ Derived from the system mass function.] \item[$^{c}$ Units in brackets are relative to Jupiter.] \item[$^{d}$ Derived from the empirical relations of \citet{Mann16}.] \item[$^{e}$ Derived from the light curve radius ratio and the empirically determined primary radius.] 
\end{list} \end{table*} \begin{figure*} \centering \includegraphics[width=0.84\linewidth]{{./EB_211972086_20170203_14_55_K2_LC_mean_with_resid_GP_QPExpSine2_add_EBGP_nsteps_50000_nwalkers_144_nburn_25000_ext-constrs_lrat}.pdf} \caption{Systematics-corrected \emph{K2}\ light curve of AD 3814 (black points) with the GP--EBOP\ model in red. The red line and pink shaded region represent the mean and 2$\sigma$ uncertainty of the model's predictive posterior distribution. } \label{3814_LC} \end{figure*} \begin{figure} \centering \vspace{-0.0cm} \includegraphics[width=0.8\linewidth]{{./EB_211972086_20170203_14_55_K2_LC_phase_GP_QPExpSine2_add_EBGP_nsteps_50000_nwalkers_144_nburn_25000_ext-constrs_lrat_EB_median_of_draws_with_residuals}.pdf} \vspace{-0.1cm} \caption{\emph{Top}\ panels: phase-folded \emph{K2}\ light curve of AD 3814 (black), which has been detrended with respect to the Gaussian process model. The red line indicates the median EB model derived from the posterior distribution, i.e. individual draws are calculated across phase space and the median of their paths plotted. Phase zero marks the center of the primary eclipse. Immediately below are the residuals of the fit. \emph{Bottom}\ panels: zooms on primary and secondary eclipses (\emph{left}\ and \emph{right}\ respectively) with the median model and 2$\sigma$ uncertainties shown (red line and pink shaded region, respectively). Residuals are shown immediately below. } \label{3814_eclipses} \end{figure} \begin{figure} \centering \vspace{-0.0cm} \includegraphics[width=0.8\linewidth]{{./EB_211972086_20170203_14_55_K2sc_old_RV_RV_orbit_GP_K2sc_old-QPExpSine2_add_EBGP_nsteps_50000_nwalkers_144_nburn_25000_ext-constrs_lrat}.pdf} \vspace{-0.2cm} \caption{\emph{Top}: phase-folded RV orbit of AD 3814 with Keck/HIRES RVs for the primary and secondary stars (red and blue, respectively). The lines and shaded regions indicate the median and 2$\sigma$ uncertainties on the posterior distributions of the RV orbits. 
The gray horizontal dotted line shows the systemic velocity. \emph{Bottom}: Residuals of the fit. } \label{3814_RV} \end{figure} \begin{figure} \centering \vspace{-0.05cm} \includegraphics[width=0.7\linewidth]{{./EB3814_eclipse_geometry_20170203_1455_4paper}.pdf} \vspace{-0.1cm} \caption{Geometry of AD 3814, to scale, as observed at primary and secondary eclipse. The primary star is shown in red and the secondary in blue. } \label{3814_geom} \end{figure} \begin{figure*} \centering \includegraphics[width=0.84\linewidth]{{./EB_212002525_20170203_02_06_K2_LC_mean_with_resid_GP_QPExpSine2_add_EBGP_nsteps_50000_nwalkers_144_nburn_25000_ext-constrs_lrat}.pdf} \caption{Systematics-corrected \emph{K2}\ light curve of AD 2615 (black points) with the GP--EBOP\ model in red. The red line and pink shaded region represent the mean and 2$\sigma$ uncertainty of the model's predictive posterior distribution. } \label{2615_LC} \end{figure*} \begin{figure} \centering \vspace{0.0cm} \includegraphics[width=0.8\linewidth]{{./EB_212002525_20170203_02_06_K2_LC_phase_GP_QPExpSine2_add_EBGP_nsteps_50000_nwalkers_144_nburn_25000_ext-constrs_lrat_EB_median_of_draws_with_residuals}.pdf} \vspace{-0.1cm} \caption{\emph{Top}\ panels: phase-folded \emph{K2}\ light curve of AD 2615 (black), which has been detrended with respect to the Gaussian process model. The red line indicates the median EB model derived from the posterior distribution, i.e. individual draws are calculated across phase space and the median of their paths plotted. Phase zero marks the center of the primary eclipse. Immediately below are the residuals of the fit. \emph{Bottom}\ panels: zooms on primary and secondary eclipses (\emph{left}\ and \emph{right}\ respectively) with the median model and 2$\sigma$ uncertainties shown (red line and pink shaded region, respectively). Residuals are shown immediately below. 
} \label{2615_eclipses} \end{figure} \begin{figure} \centering \vspace{0.0cm} \includegraphics[width=0.8\linewidth]{{./EB_212002525_20170203_02_06_K2sc_old_RV_RV_orbit_GP_K2sc_old-QPExpSine2_add_EBGP_nsteps_50000_nwalkers_144_nburn_25000_ext-constrs_lrat}.pdf} \vspace{-0.2cm} \caption{\emph{Top}: phase-folded RV orbit of AD 2615 with Keck/HIRES RVs for the primary and secondary stars (red and blue, respectively). The lines and shaded regions indicate the median and 2$\sigma$ uncertainty on the posterior distribution of the RV orbits. The gray horizontal dotted line shows the systemic velocity. \emph{Bottom}: Residuals of the fit. } \label{2615_RV} \end{figure} \begin{figure} \centering \vspace{-0.05cm} \includegraphics[width=0.7\linewidth]{{./EB2615_eclipse_geometry_20170203_0206_4paper}.pdf} \vspace{-0.1cm} \caption{Geometry of AD 2615, to scale, as observed at primary and secondary eclipse. The primary star is shown in red and the secondary in blue. } \label{2615_geom} \end{figure} \begin{figure*} \centering \includegraphics[width=0.84\linewidth]{{./EB_211946007_20170423_22_31_K2_LC_mean_with_resid_GP_QPExpSine2_add_EBGP_nsteps_50000_nwalkers_144_nburn_25000_ext-constrs_ulrat_RRJ}.pdf} \caption{Systematics-corrected \emph{K2}\ light curve of AD 3116 (black points) with the GP--EBOP\ model in red. The red line and pink shaded region represent the mean and 2$\sigma$ uncertainty of the model's predictive posterior distribution. } \label{3116_LC} \end{figure*} \begin{figure} \centering \vspace{0.0cm} \includegraphics[width=0.8\linewidth]{{./EB_211946007_20170423_22_31_K2_LC_phase_GP_QPExpSine2_add_EBGP_nsteps_50000_nwalkers_144_nburn_25000_ext-constrs_ulrat_RRJ_EB_median_of_draws_with_residuals}.pdf} \vspace{-0.1cm} \caption{\emph{Top}\ panels: phase-folded \emph{K2}\ light curve of AD 3116 (black), which has been detrended with respect to the Gaussian process model. The red line indicates the median EB model derived from the posterior distribution, i.e. 
individual draws are calculated across phase space and the median of their paths plotted. Phase zero marks the center of the primary eclipse. Immediately below are the residuals of the fit. \emph{Bottom}\ panels: zooms on primary and secondary eclipses (\emph{left}\ and \emph{right}\ respectively) with the median model and 2$\sigma$ uncertainties shown (red line and pink shaded region, respectively). Residuals are shown immediately below. } \label{3116_eclipses} \end{figure} \begin{figure} \centering \vspace{0.0cm} \includegraphics[width=0.8\linewidth]{{./EB_211946007_20170423_22_31_K2sc_old_RV_RV_orbit_GP_K2sc_old-QPExpSine2_add_EBGP_nsteps_50000_nwalkers_144_nburn_25000_ext-constrs_ulrat_RRJ}.pdf} \vspace{-0.2cm} \caption{\emph{Top}: phase-folded RV orbit of AD 3116 with Keck/HIRES RVs for the primary and secondary stars (red and blue, respectively). The line and shaded regions indicate the median and 1 and 2$\sigma$ uncertainties on the posterior distribution of the primary RV orbit. The gray horizontal dotted line shows the systemic velocity. \emph{Bottom}: Residuals of the fit. } \label{3116_RV} \end{figure} \begin{figure} \centering \vspace{-0.05cm} \includegraphics[width=0.7\linewidth]{{./EB3116_eclipse_geometry_20170423_2231_4paper}.pdf} \vspace{-0.1cm} \caption{Geometry of AD 3116, to scale, as observed at primary and secondary eclipse. The primary star is shown in red and the secondary in blue. } \label{3116_geom} \end{figure} \begin{figure*} \centering \includegraphics[width=0.84\linewidth]{{./EB_212009427_20170328_12_19_K2_LC_mean_with_resid_GP_Matern32_add_EBGP_nsteps_50000_nwalkers_192_nburn_25000_ext-constrs_lrat}.pdf} \caption{Systematics-corrected \emph{K2}\ light curve of AD 1508 (black points) with the GP--EBOP\ model in red. The red line and pink shaded region represent the mean and 2$\sigma$ uncertainty of the model's predictive posterior distribution. 
} \label{1508_LC} \end{figure*} \begin{figure} \centering \vspace{0.0cm} \includegraphics[width=0.8\linewidth]{{./EB_212009427_20170328_12_19_K2_LC_phase_GP_Matern32_add_EBGP_nsteps_50000_nwalkers_192_nburn_25000_ext-constrs_lrat_EB_median_of_draws_with_residuals}.pdf} \vspace{-0.1cm} \caption{\emph{Top}\ panels: phase-folded \emph{K2}\ light curve of AD 1508 (black), which has been detrended with respect to the Gaussian process model. The red line indicates the median EB model derived from the posterior distribution, i.e. individual draws are calculated across phase space and the median of their paths plotted. Phase zero marks the center of the primary eclipse. Immediately below are the residuals of the fit. \emph{Bottom}\ panels: zooms on primary and secondary eclipses (\emph{left}\ and \emph{right}\ respectively) with the median model and 2$\sigma$ uncertainties shown (red line and pink shaded region, respectively). Residuals are shown immediately below. } \label{1508_eclipses} \end{figure} \begin{figure} \centering \vspace{0.0cm} \includegraphics[width=0.8\linewidth]{{./EB_212009427_20170328_12_19_K2EverestTrevor_RV_RV_orbit_GP_K2EverestTrevor-Matern32_add_EBGP_nsteps_50000_nwalkers_192_nburn_25000_ext-constrs_lrat}.pdf} \vspace{-0.2cm} \caption{\emph{Top}: phase-folded RV orbit of AD 1508 with Keck/HIRES RVs for the primary and secondary stars (red and blue, respectively). The lines and shaded regions indicate the median and 2$\sigma$ uncertainty on the posterior distribution of the RV orbits. The gray horizontal dotted line shows the systemic velocity. \emph{Bottom}: Residuals of the fit. } \label{1508_RV} \end{figure} \begin{figure} \centering \vspace{-0.05cm} \includegraphics[width=0.7\linewidth]{{./EB1508_eclipse_geometry_20170328_1219_4paper}.pdf} \vspace{-0.1cm} \caption{Geometry of AD 1508, to scale, as observed at primary and secondary eclipse. The primary star is shown in red and the secondary in blue. 
} \label{1508_geom} \end{figure} \acknowledgments We thank Pierre Maxted for interesting discussions and John Southworth for help compiling Table \ref{pms_ebs}. This paper includes data collected by the Kepler/K2 mission. Funding for the K2 mission of Kepler is provided by the NASA Science Mission directorate. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST) under support by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts. Some of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. Finally, we would like to thank the anonymous referee for their careful reading of the manuscript and helpful suggestions for improvement. \vspace{5mm} \facilities{\emph{Kepler}/\emph{K2}, Keck/HIRES, SDSS, 2MASS, WISE.} \software{astropy \citep{astropy13}, {\tt emcee} \citep{Foreman-Mackey13}, {\tt george} \citep{Ambikasaran14}. } \bibliographystyle{aasjournal}
\section{Introduction} \label{sec:intro} \IEEEPARstart{M}{assive} multiple-input multiple-output (MIMO) has attracted extensive research interest for its ability to enable interference-free transmission in an asymptotic sense, thereby improving throughput and energy efficiency significantly. However, the gain of the massive MIMO technique can be limited in practice because of the channel estimation error caused by pilot contamination. This letter aims to demonstrate the advantage of using nonorthogonal pilots together with power control for minimizing the pilot contamination effect. Pilot design and power control are two central questions in this area (although the recent work \cite{Bjornson_TWC2018_Unlimited} shows that pilot contamination does not limit the capacity in some cases). Differing from most existing works, which use orthogonal pilots, the present work considers power control under a more general setup with nonorthogonal pilots. As shown in the recent works \cite{Eldar_arxiv18_CoordinatedPilot,Kai_arxiv19_Rx_Nonorthogonal}, using nonorthogonal pilots can already effectively improve upon the conventional orthogonal pilot scheme in terms of channel estimation. This letter further shows that the nonorthogonal pilot scheme leads to considerable throughput gain when coupled with power control. This work formulates a multicell uplink power control problem for the massive MIMO system in recognition of the fact that power control typically depends on the slowly varying path-loss and thus takes place on a much larger time scale than the small-scale fading. Given the fixed path-loss, we aim to choose a set of transmit powers that maximize the ergodic rates taken over a large number of small-scale time intervals. This power control problem for the uplink massive MIMO system amounts to a stochastic optimization problem that is difficult to solve directly. Instead, this letter proposes a way of approximating the stochastic optimization problem in a deterministic form. 
The first part of the letter proposes a deterministic approximation to the stochastic optimization and accordingly devises an efficient iterative algorithm; the second part of the letter further gives an off-line stochastic optimization approach as a benchmark to quantify the performance of the deterministic algorithm. Power control for massive MIMO is traditionally designed for a single-cell system without taking the pilot contamination effect into account \cite{Victor_2017tsp_powercontrol,Bjornson_tsp2013_ORA,Yang_tcom2017_pc}. Regarding the multi-cell scenario in the presence of pilot contamination, the earlier works \cite{Guo_ICC14,Chien_twc2018_pc,Chien_tcom2019_LSFD} consider power control by assuming that the orthogonal pilot scheme has been used for channel estimation. In contrast, the main contribution of this letter is to explore the potential of using power control to further enhance the advantage of the nonorthogonal pilots over the traditional orthogonal pilots. As a further contribution, this letter justifies the proposed power control scheme by using a stochastic optimization based benchmark. Simulations show that the nonorthogonal pilot scheme followed by the proposed power control is more effective in mitigating pilot contamination than existing methods. \emph{Notation:} We use $(\cdot)^*$ to denote the conjugate transpose, $\mathrm{vec}(\cdot)$ the vectorization, $\otimes$ the Kronecker product, and $\mathcal{CN}$ the complex Gaussian distribution. We use bold letters to denote collections of variables, e.g., $\bm p=[p_1,p_2,\ldots,p_n]$. 
\setcounter{equation}{5} \begin{figure*} \begin{equation}\label{topE} \gamma_{ik}\left(\boldsymbol{p},\boldsymbol{h}\right)=\frac{\|\hat{\bm{h}}_{i,ik}\|^{4}p_{ik}}{\sum_{(j,l)\neq(i,k)}\big|\hat{\bm{h}}_{i,ik}^{\ast}\bm{h}_{i,jl}\big|^{2}p_{jl}+\sigma^{2}\|\hat{\bm{h}}_{i,ik}\|^{2}+\big|\hat{\bm{h}}_{i,ik}^{\ast}\big(\bm{h}_{i,ik}-\hat{\bm{h}}_{i,ik}\big)\big|^{2}p_{ik}} \end{equation} \hrulefill \end{figure*} \setcounter{equation}{0} \section{Problem Formulation} \label{sec:setup} Consider an uplink massive MIMO system with $I$ cells, where $M$-antenna base-station (BS) $i$ serves $K_i$ single-antenna user terminals in cell $i$. We use $i\in\left\{ 1,\dots,I\right\}$ to denote the index of each cell or the corresponding BS, and $(i,k)$ the index of the $k$th user in cell $i$. Let $p_{ik}$ be the transmit power level of user $(i,k)$ under the power constraint $0\le p_{ik}\le P_{\max}$. Hence, for the power variable $\bm{p}=[p_{ik},\forall (i,k)]$, its feasible region is $\mathcal P=[0,P_{\max}]\times[0,P_{\max}]\times\ldots\times[0,P_{\max}]$. Following the previous works \cite{Eldar_arxiv18_CoordinatedPilot,Kaiming_Icassp2019_multicellPilot}, we adopt the flat-fading channel model \begin{equation} \label{chn} \mathbf{H}_{ji}=\mathbf{G}_{ji}\mathbf{V}^{\frac{1}{2}}_{ji}, \end{equation} where $\mathbf{H}_{ji}=[\bm{h}_{j,i1},\cdots,\bm{h}_{j,iK_i}]\in\mathbb{C}^{M\times K_i}$ is the channel matrix with $\bm{h}_{j,ik}\in\mathbb{C}^{M}$ denoting the channel from user $(i,k)$ to BS $j$, $\mathbf{G}_{ji}\in\mathbb{C}^{M\times K_i}$ is the small-scale fading coefficient matrix with i.i.d. entries distributed as $\mathcal{CN}(0,1)$, and $\mathbf{V}_{ji}=\mathrm{diag}[v_{j,i1},\cdots,v_{j,iK_i}]\in\mathbb{C}^{K_i\times K_i}$ is the large-scale fading coefficient matrix. We assume that the set of large-scale fading coefficients $\{v_{j,ik}\}$ is known \emph{a priori} at each BS via the channel statistics. 
In the pilot phase, each user $(i,k)$ transmits a pilot sequence $\bm{\phi}_{ik}\in\mathbb{C}^{L}$ of length $L$. This letter assumes that the nonorthogonal pilots developed in \cite{Kai_arxiv19_Rx_Nonorthogonal} are transmitted in the pilot phase. Each BS $i$ aims to estimate its channels $\mathbf{H}_{ii}$ based on the received signal \begin{equation} \mathbf{Y}_{i}=\mathbf{H}_{ii}\mathbf{\Phi}_{i}^{T}+\sum_{j=1,j\neq i}^{I}\mathbf{H}_{ij}\mathbf{\Phi}_{j}^{T}+\mathbf{Z}_{i},\label{eq:received-signal} \end{equation} where $\mathbf{\Phi}_{i}=[\bm{\phi}_{i1},\cdots,\bm{\phi}_{iK_i}]\in\mathbb{C}^{L\times K_i}$ is the composite pilot matrix in cell $i$, and $\mathbf{Z}_{i}$ is the background noise matrix with each i.i.d. entry distributed as $\mathcal{CN}(0,\sigma^{2})$. To make the problem tractable, we further assume that the minimum mean-square error (MMSE) estimator is used for channel estimation at each BS. The resulting channel estimate is \begin{equation} \mathrm{vec}(\hat{\mathbf{H}}_{ii})=\left(\mathbf{V}_{ii}\mathbf{\Phi}_{i}^{\ast}\otimes\mathbf{I}_{M}\right)\left(\mathbf{U}_{i}\otimes\mathbf{I}_{M}\right)^{-1}\mathrm{vec}\left(\mathbf{Y}_{i}\right), \end{equation} where $\mathbf{U}_{i}=\sigma^{2}\mathbf{I}_{L}+\sum_{j=1}^{I}\mathbf{\Phi}_{j}\mathbf{V}_{ij}\mathbf{\Phi}_{j}^{\ast}.$ Subsequently, in the data transmission phase, each BS $i$ receives a superposition of data signals as \begin{align} \bm{y}_{i} & =\sum_{(j,l)}\sqrt{p_{jl}}\bm{h}_{i,jl}s_{jl}+\tilde{\bm z}_{i}, \end{align} where $s_{ik}\sim\mathcal{CN}(0,1)$ is the data symbol of user $(i,k)$ and $\tilde{\bm{z}}_{i}\sim\mathcal{CN}(\bm{0},\sigma^{2}\mathbf{I}_{M})$ is the background noise. 
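As a sanity check on the estimator above, note that $(\mathbf{A}\otimes\mathbf{I}_{M})(\mathbf{B}\otimes\mathbf{I}_{M})^{-1}=(\mathbf{A}\mathbf{B}^{-1})\otimes\mathbf{I}_{M}$, so the MMSE estimate can be computed without ever forming the $ML\times ML$ Kronecker matrices: $\hat{\mathbf{H}}_{ii}=\mathbf{Y}_{i}\,(\mathbf{V}_{ii}\mathbf{\Phi}_{i}^{\ast}\mathbf{U}_{i}^{-1})^{T}$. A minimal NumPy sketch of this computation follows; all dimensions and parameter values are illustrative assumptions rather than quantities from this letter.

```python
import numpy as np

def mmse_channel_estimate(Y, Phi_list, v_list, i, sigma2):
    """MMSE estimate of the in-cell channels H_ii at BS i, per eq. (3).

    Y        : (M, L) pilot-phase observation Y_i
    Phi_list : pilot matrices Phi_j of shape (L, K_j), one per cell
    v_list   : large-scale fading vectors (v_{i,j1}, ..., v_{i,jK_j}), one per cell
    Using (A x I)(B x I)^{-1} = (A B^{-1}) x I, eq. (3) collapses to
    H_hat_ii = Y_i (V_ii Phi_i^H U_i^{-1})^T, avoiding Kronecker products.
    """
    L = Y.shape[1]
    U = sigma2 * np.eye(L, dtype=complex)
    for Phi_j, v_j in zip(Phi_list, v_list):
        U += Phi_j @ np.diag(v_j) @ Phi_j.conj().T      # U_i of eq. (3)
    C = np.diag(v_list[i]) @ Phi_list[i].conj().T @ np.linalg.inv(U)
    return Y @ C.T                                      # (M, K_i)
```

As a correctness check, in the single-cell case with orthonormal pilots and unit large-scale fading, the estimator reduces to the shrinkage $\hat{\mathbf{H}}=\mathbf{H}/(1+\sigma^{2})$ on noise-free observations.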
Given the current channel realization $\bm h$, BS $i$ can use maximum ratio combining (MRC) to obtain the following instantaneous data rate for its user $(i,k)$: \begin{equation} \label{rate} R_{ik}\left(\boldsymbol{p},\boldsymbol{h}\right) = \log\left(1+\gamma_{ik}\left(\boldsymbol{p},\boldsymbol{h}\right)\right), \end{equation} where the signal-to-interference-plus-noise ratio (SINR) $\gamma_{ik}\left(\boldsymbol{p},\boldsymbol{h}\right)$ is defined in \eqref{topE} as displayed at the top of this page; more details can be found in \cite{Kai_arxiv19_Rx_Nonorthogonal}. To account for the small-scale fading, we take the expectation of $R_{ik}$ over $\bm{h}$. Strictly speaking, $R_{ik}$ may not be achievable in practice because it requires the receiver to acquire the value of $\sum_{(j,l)\ne(i,k)}|\hat{\bm{h}}_{i,ik}^{\ast}\bm{h}_{i,jl}|^{2}p_{jl}$; rather, it provides an upper bound on the achievable rate. More formally, the optimization problem over the power variable $\bm{p}$ (as a function of the large-scale fading) is that of maximizing a network utility function of the \emph{ergodic rate} (i.e., expected rate) across all the users. Assuming a weighted sum rate maximization formulation, the optimization problem becomes: \setcounter{equation}{6} \begin{equation} \max_{\bm{p}\in\mathcal{P}}\:\sum_{(i,k)}w_{ik}\mathbb E_{\bm h}\big[R_{ik}\big],\label{eq:mainP} \end{equation} where $w_{ik}$ is a nonnegative rate weight reflecting the priority of user $(i,k)$. Note that the expectation is taken over the random, time-varying $\bm h$ on the small time scale, so the overall problem is a stochastic optimization problem. Differing from the case with orthogonal pilots, the ergodic rate with nonorthogonal pilots does not have a closed-form expression and is more difficult to optimize due to the last term in the denominator of \eqref{topE}. 
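Since the expectation in \eqref{eq:mainP} has no closed form under nonorthogonal pilots, it can be approximated numerically by averaging the instantaneous rate \eqref{rate} over many small-scale fading draws. The following sketch does this for an assumed two-cell toy example with one single-antenna user per cell; all numerical values are illustrative assumptions, not quantities from this letter.

```python
import numpy as np

def draw_realization(rng, M, L, sigma2, v, pilots):
    """One small-scale fading draw: the true channels toward BS 0 and the MMSE
    estimate of the in-cell channel (two cells, one single-antenna user each)."""
    h = [np.sqrt(v[j] / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
         for j in range(2)]
    Z = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, L))
                               + 1j * rng.standard_normal((M, L)))
    Y = np.outer(h[0], pilots[0]) + np.outer(h[1], pilots[1]) + Z   # eq. (2)
    U = sigma2 * np.eye(L) + sum(v[j] * np.outer(pilots[j], pilots[j].conj())
                                 for j in range(2))
    h_hat = Y @ (v[0] * pilots[0].conj() @ np.linalg.inv(U))        # eq. (3)
    return h, h_hat

def instantaneous_rate(h, h_hat, p, sigma2):
    """log2(1 + SINR) with the MRC SINR of eq. (6), for user (0,0)."""
    g2 = np.linalg.norm(h_hat) ** 2
    interference = abs(h_hat.conj() @ h[1]) ** 2 * p[1]
    est_error = abs(h_hat.conj() @ (h[0] - h_hat)) ** 2 * p[0]
    return np.log2(1 + g2 ** 2 * p[0] / (interference + sigma2 * g2 + est_error))

rng = np.random.default_rng(1)
M, L, sigma2 = 64, 4, 1.0
v = [1.0, 0.5]                               # assumed large-scale fading toward BS 0
raw = [rng.standard_normal(L) + 1j * rng.standard_normal(L) for _ in range(2)]
pilots = [np.sqrt(L) * f / np.linalg.norm(f) for f in raw]  # nonorthogonal pilots
draws = [draw_realization(rng, M, L, sigma2, v, pilots) for _ in range(500)]

def ergodic_rate(p):
    # Monte Carlo approximation of E_h[R_00(p, h)] over the fixed set of draws
    return float(np.mean([instantaneous_rate(h, hh, p, sigma2) for h, hh in draws]))
```

Because the channel estimate does not depend on the data powers, the per-draw SINR in \eqref{topE} is strictly decreasing in the interferer's power, so the Monte Carlo ergodic rate inherits the same monotonicity.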
This paper proposes to approximate problem \eqref{eq:mainP} by a deterministic problem, and then to solve the resulting problem using an efficient iterative algorithm. Furthermore, we propose a stochastic optimization based benchmark to quantify the performance of the proposed algorithm. \section{Deterministic Optimization Algorithm} In this section, we propose to solve (\ref{eq:mainP}) by approximating it in a deterministic form. The key enabler of this new method is a recent result from \cite{Kai_arxiv19_Rx_Nonorthogonal} that approximates the ergodic rate $\mathbb E_{\bm h}[R_{ik}(\bm p,\bm h)]$ in a deterministic form \begin{equation} \hat{R}_{ik}(\bm{p})=\log_{2}\bigg(1+\frac{a_{ik}p_{ik}}{\sum_{(j,l)}b_{ik,jl}p_{jl}+M\rho_{ik}\sigma^{2}-a_{ik}p_{ik}}\bigg),\label{eq:deterministic} \end{equation} where $\rho_{ik}=v_{i,ik}^{2}\bm{\phi}_{ik}^{\ast}\mathbf{U}_{i}^{-1}\bm{\phi}_{ik}$, $a_{ik}=M^{2}\rho_{ik}^{2}$, and $b_{ik,jl}=M\rho_{ik}v_{i,jl}+M^{2}v_{i,ik}^{2}v_{i,jl}^{2}\bm{\phi}_{ik}^{\ast}\mathbf{U}_{i}^{-1}\bm{\phi}_{jl}\bm{\phi}^{\ast}_{jl}\mathbf{U}_{i}^{-1}\bm{\phi}_{ik}$. As shown in \cite{Kai_arxiv19_Rx_Nonorthogonal}, this approximate data rate $\hat{R}_{ik}(\bm{p})$ can be obtained from the so-called use-and-then-forget bound \cite{MassiveMIMO_book}. This approximate rate is always achievable but can be strictly lower than the ergodic rate $\mathbb E_{\bm h}[R_{ik}(\bm p,\bm h)]$. Observe that $\hat{R}_{ik}(\bm{p})$ depends only on the large-scale fading $\{v_{j,ik}\}$, so it allows us to bypass the expectation over $\bm h$ and to devise a power control strategy based on the large-scale fading alone. The resulting approximation of the weighted sum-rate maximization problem (\ref{eq:mainP}) is \begin{equation} \label{eq:WSRMP} \max_{\bm{p}\in\mathcal{P}}\:\sum_{(i,k)}w_{ik}\hat{R}_{ik}(\bm{p}).
\end{equation} Treating the fractional term $\frac{a_{ik}p_{ik}}{\sum_{(j,l)}b_{ik,jl}p_{jl}+M\rho_{ik}\sigma^{2}-a_{ik}p_{ik}}$ as a virtual SINR, we can recognize (\ref{eq:WSRMP}) as a deterministic power control problem of the type extensively studied in the existing literature. If we apply the idea of the weighted minimum mean square error (WMMSE) algorithm \cite{luo_wmmse}, problem (\ref{eq:WSRMP}) can be reformulated as \begin{equation} \min_{\bm{p}\in\mathcal{P},\,\bm{\mu}\succeq\mathbf0}\:\sum_{(i,k)}w_{ik}\left(\mu_{ik}\eta_{ik}-\log_{2}\mu_{ik}\right),\label{eq:WMMSE} \end{equation} where $\mu_{ik}>0$ is an auxiliary variable introduced for each user $(i,k)$ and another new variable $\eta_{ik}$ is computed as \begin{equation} \label{eta} \eta_{ik}=\sum_{(j,l)}b_{ik,jl}p_{jl}+M\rho_{ik}\sigma^{2}+1-2\sqrt{a_{ik}p_{ik}}. \end{equation} We propose an alternating optimization between $\bm p$ and $\bm \mu$ in (\ref{eq:WMMSE}), with $\bm\eta$ updated by (\ref{eta}) in each iteration. When $\bm p$ is fixed, $\bm\mu$ can be optimally determined from the first-order condition, i.e., \begin{equation} \label{mu} \mu_{ik}^\star=\eta^{-1}_{ik}. \end{equation} Likewise, when $\bm \mu$ is held fixed, the optimal $\bm p$ is \begin{equation} \label{wmmse:p} p^\star_{ik}=\Bigg[\frac{w^2_{ik}\mu^2_{ik}a_{ik}}{\big(\sum_{(j,l)}w_{jl}\mu_{jl}b_{jl,ik}\big)^{2}}\Bigg]_{0}^{P_{\mathrm{max}}}, \end{equation} where $[\cdot]^{P_{\max}}_0$ denotes $\max\{0,\min\{\cdot,P_{\max}\}\}$.
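A minimal numerical sketch of the alternating updates \eqref{eta}--\eqref{wmmse:p} is given below. The coefficients $a_{ik}$, $b_{ik,jl}$, and $\rho_{ik}$ are placeholders (in the letter they are computed from the pilots and large-scale fading), and natural logarithms are used so that $\mu^\star = 1/\eta$ is exactly the minimizer of the reformulated objective; the per-iteration objective values should then be nonincreasing, since each block update is an exact minimization.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, sigma2, Pmax = 5, 16, 0.1, 1.0   # users (flattened (i,k)), antennas, noise, budget
w = np.ones(N)                          # rate weights w_{ik}
rho = rng.uniform(0.1, 0.5, N)          # placeholder rho_{ik}
a = (M * rho) ** 2                      # a_{ik} = M^2 rho_{ik}^2
b = rng.uniform(0.02, 0.10, (N, N))     # placeholder b_{ik,jl} (row index = (i,k))
np.fill_diagonal(b, a + 0.1)            # keeps the virtual-SINR denominator positive

def eta_of(p):                          # update of eta
    return b @ p + M * rho * sigma2 + 1 - 2 * np.sqrt(a * p)

def F(p, mu):                           # reformulated objective (natural log)
    return np.sum(w * (mu * eta_of(p) - np.log(mu)))

p = np.full(N, Pmax / 2)
vals = []
for _ in range(30):
    mu = 1.0 / eta_of(p)                # optimal mu for fixed p
    denom = (w * mu) @ b                # sum_{(j,l)} w_jl mu_jl b_{jl,ik}
    p = np.clip((w * mu) ** 2 * a / denom ** 2, 0, Pmax)   # optimal p for fixed mu
    vals.append(F(p, mu))
```

Each update minimizes the objective exactly in its own block of variables, so the recorded objective values decrease monotonically, mirroring the monotonicity claim of the theorem below.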
\begin{algorithm}[t] \textbf{Input:} Pilots and large-scale fading\; \Repeat{the increment on the value of the objective function in \eqref{eq:WMMSE} is less than some threshold $\epsilon_1>0$}{ Update $\bm\mu$ according to (\ref{mu})\; Update $\bm p$ according to (\ref{wmmse:p})\; Update $\bm \eta$ according to (\ref{eta}); } \caption{Deterministic Power Control} \label{alg:deterministic} \end{algorithm} We remark that Algorithm \ref{alg:deterministic} differs from the original WMMSE algorithm \cite{luo_wmmse} in that its ``SINR'' term has the desired power variable $p_{ik}$ appearing in both the numerator and the denominator. { \begin{Thm} \label{thm:deterministic} Algorithm \ref{alg:deterministic} is guaranteed to converge to a stationary point of problem (\ref{eq:WSRMP}), with the weighted sum rate in (\ref{eq:WSRMP}) nondecreasing after each iteration. \end{Thm} } The proof of this theorem is relegated to Appendix A. {Algorithm 1 is similar to the conventional power control for massive MIMO systems, except that nonorthogonal pilots are used here. The main contribution of this letter is in justifying this deterministic approximation using a stochastic optimization framework.} \section{Stochastic Optimization Benchmark} In order to investigate the performance of the deterministic approach (i.e., Algorithm 1), this section proposes a stochastic optimization formulation for solving problem (\ref{eq:mainP}), assuming that successive realizations of $\bm{h}$ are observed over time. Although this algorithm is not applicable in practice, because $\bm{h}$ is never known exactly, it can still be used as a benchmark for justifying the deterministic approximation in Algorithm \ref{alg:deterministic}. We optimize the power variable $\bm p$ in an iterative fashion, now as a function of the instantaneous channel realization. Here, superscript $t$ is used to denote variables associated with the $t$th iteration.
In each iteration, one realization of the channel $\bm{h}^t$ is observed. Then, instead of directly maximizing the average weighted sum rate objective, we construct a surrogate function of the objective based on the observed channel $\bm{h}^t$ to enable a successive convex approximation (SCA) \cite{Yang_TSP2016_SSCA} of the original nonconvex problem as \begin{multline} \label{eq:SurrogateFunction} \hat{g}^{t}(\bm{p}^t) \!=\!\!\sum_{(i,k)}\!\Big( \alpha^{t}w_{ik}R_{ik}(\bm{p}^{t-1})+p^t_{ik}\xi_{ik}^{t}\,-\\ \frac{\tau_{ik}}{2}\big(p^t_{ik}-p_{ik}^{t-1}\big)^{2}\Big), \end{multline} where $\alpha^t$ is the first trade-off sequence to be properly chosen, and $\tau_{ik}>0$ is an arbitrary positive constant. This function $\hat g^t(\bm{p}^t)$ is meant to approximate the original objective in (\ref{eq:mainP}); it contains an auxiliary variable iteratively updated as \begin{equation}\label{recursiveXi} \xi_{ik}^{t} =\alpha^{t}\sum_{(j,l)}w_{jl}\cdot\frac{\partial R_{jl}(\bm p^{t},\bm h^t)}{\partial p^t_{ik}}+\left(1-\alpha^{t}\right)\xi_{ik}^{t-1}, \end{equation} with $\xi_{ik}^{0}=0$. The key observation is that the new objective function $\hat g^t(\bm{p}^t)$ can be decoupled on a per-user basis, i.e., $\hat g^t(\bm{p}^t) = \sum_{(i,k)} q^{t}_{ik}(p^t_{ik}),$ where \begin{multline} \label{func_q} q^t_{ik}(p^t_{ik}) = \alpha^{t}w_{ik}R_{ik}(\bm{p}^{t-1})+p^t_{ik}\xi_{ik}^{t}\,-\\ \frac{\tau_{ik}}{2}\big(p^t_{ik}-p_{ik}^{t-1}\big)^{2}. \end{multline} Thus, finding the optimal $\bm p^\star$ that maximizes the new objective $\hat g^t(\bm {p}^t)$ amounts to solving a set of separate subproblems: \begin{equation} \label{eq:powerP} \max_{0\le p^t_{ik}\le P_{\max}}\:q_{ik}^{t}(p^t_{ik}).
\end{equation} The objective function in \eqref{eq:powerP} can be recognized as \begin{equation} q^t_{ik}(p^t_{ik})=A (p^t_{ik})^2+Bp^t_{ik}+C \end{equation} for some constants $A<0$, $B$, and $C$, so it is a concave quadratic function of the variable $p^t_{ik}$. As such, we can apply the first-order optimality condition to obtain the solution $\bm p^\star$, and update $p^t_{ik}$ as \begin{equation} \bm p^{t}=\left(1-\beta^{t}\right)\bm p^{t-1}+\beta^{t}\bm p^\star,\label{eq:updateP} \end{equation} where $\beta^{t}$ is a second trade-off factor in addition to the previous $\alpha^t$; the choice of these two factors is specified later in Theorem \ref{prop:convergence}. The above steps are performed iteratively, as summarized in Algorithm \ref{alg:stochastic}. \begin{algorithm}[t] \textbf{Input:} Random channel samples $\bm h^t$, $t=1,2,\ldots$\; \textbf{Initialization:} $\bm p^0\in\mathcal P$, $\xi^0_{ik}=0$, $t=0$\; \Repeat{the increment on the value of the objective function in \eqref{eq:SurrogateFunction} is less than some threshold $\epsilon_2>0$}{ $t\leftarrow t+1$\; Construct the function $q^t_{ik}(p_{ik})$ according to (\ref{func_q})\; Obtain $\bm p^\star$ by solving the convex problem (\ref{eq:powerP})\; Compute $\bm p^{t}$ according to (\ref{eq:updateP}); } \caption{Stochastic Optimization Benchmark} \label{alg:stochastic} \end{algorithm} \begin{figure*}[t] \begin{minipage}[b]{0.34\linewidth} \centerline{\includegraphics[width=6.4cm]{convergence_subfig}} \caption{Convergence of the proposed algorithms.} \label{fig:Convergence} \end{minipage} \begin{minipage}[b]{0.34\linewidth} \centering \centerline{\includegraphics[width=6.5cm]{Final_CDFr}} \caption{Cumulative distribution function of sum rate.} \label{fig:Cumulative-distributon-function} \end{minipage} \begin{minipage}[b]{0.34\linewidth} \centering \centerline{\includegraphics[width=6.5cm]{Final_Mr}} \caption{Sum rate vs.
number of antennas at each BS.} \label{fig:numAtn} \end{minipage} \end{figure*} Algorithm \ref{alg:stochastic} can be viewed as a training process in which the power control strategy is adapted to a sequence of channel samples $\bm h^t$ generated according to (\ref{chn}). Because of this training process, Algorithm \ref{alg:stochastic} is more complex than Algorithm \ref{alg:deterministic}. However, a key advantage of Algorithm \ref{alg:stochastic} is that it guarantees convergence to a stationary point of problem (\ref{eq:mainP}) provided that the parameters $\{\alpha^t,\beta^t\}$ are chosen properly, as stated below. Thus, Algorithm \ref{alg:stochastic} can be used as a benchmark to quantify the performance of Algorithm \ref{alg:deterministic}. \begin{Thm} \label{prop:convergence} If the trade-off factors $\{\alpha^{t}\}$ and $\{\beta^{t}\}$ satisfy the following four conditions \cite{Yang_TSP2016_SSCA}: \begin{enumerate} \item $\alpha^{t}\rightarrow0$, $\frac{1}{\alpha^{t}}\leq O\left(t^{\kappa}\right)$ for some $\kappa\in\left(0,1\right)$, $\sum_{t}\left(\alpha^{t}\right)^{2}<\infty,$ \item $\beta^{t}\rightarrow0$, $\sum_{t}\beta^{t}=\infty$, $\sum_{t}\big(\beta^{t}\big)^{2}<\infty$, \item $\lim_{t\rightarrow\infty}\beta^{t}/\alpha^{t}=0$, \item $\limsup_{t\rightarrow\infty} \alpha^{t}\big(\sum_{(i,k)}L^t_{ik}\big)=0$ almost surely, where $L^t_{ik}$ denotes the Lipschitz constant of the gradient of $\mathbb E_{\bm h}[R_{ik}(\bm p,\bm h)]$ with respect to $\bm p$ in the $t$-th iteration, \end{enumerate} then the sequence $\{\bm{p}^{t}\}$ produced by Algorithm \ref{alg:stochastic} converges to a stationary point of problem (\ref{eq:mainP}) \emph{almost surely}. \end{Thm} The proof of convergence is similar to that in \cite{Yang_TSP2016_SSCA}. The motivation for some key conditions on the parameters $\{\alpha^t,\beta^t\}$ is explained below.
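To illustrate the machinery, the toy snippet below runs the SCA recursion \eqref{recursiveXi}, the per-user quadratic subproblem, and the smoothing step \eqref{eq:updateP} on a hypothetical single-user link with rate $R(p,h)=\ln(1+hp/\sigma^2)$ under Rayleigh fading (a stand-in for \eqref{topE}); the step sizes $\alpha^t=t^{-0.6}$ and $\beta^t=t^{-0.7}$ satisfy conditions 1)--3) above.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2, Pmax, tau = 1.0, 1.0, 1.0
p, xi = 0.1, 0.0
for t in range(1, 3001):
    alpha, beta = t ** -0.6, t ** -0.7      # diminishing step-size sequences
    h = rng.exponential(1.0)                # channel gain sample |h^t|^2
    grad = h / (sigma2 + h * p)             # dR/dp at the current power
    xi = alpha * grad + (1 - alpha) * xi    # recursive gradient averaging
    p_star = np.clip(p + xi / tau, 0, Pmax) # maximizer of the quadratic surrogate
    p = (1 - beta) * p + beta * p_star      # smoothing update of the power
```

Since the toy rate is increasing in $p$, the averaged gradient stays positive and the iterates climb toward the stationary point $p = P_{\max}$, as the convergence theory predicts.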
According to \eqref{eq:SurrogateFunction}, the surrogate function $\hat{g}^t(\bm{p}^{t})$ is recursively updated by averaging the instantaneous rates over a time window of size $\frac{1}{\alpha^t}$. Since $\bm{p}^t$ changes over time $t$, the surrogate function $\hat{g}^t(\bm{p}^t)$ may not converge to $\sum_{(i,k)}w_{ik}\mathbb E_{\bm h}\big[R_{ik}\big]$ in general. However, if $\lim_{t\rightarrow\infty}\beta^{t}/\alpha^{t}=0$, it follows from \eqref{eq:updateP} that $\bm{p}^t$ is almost unchanged within the time window $\frac{1}{\alpha^t}$ for sufficiently large $t$. In other words, $\hat{g}^t(\bm{p}^{t})$ converges to $\sum_{(i,k)}w_{ik}\mathbb E_{\bm h}\big[R_{ik}\big]$ as $t\rightarrow\infty$, which is crucial for guaranteeing the convergence of Algorithm 2 to a stationary point of problem (\ref{eq:mainP}). {Comparing Algorithm 2 with Algorithm 1 in terms of the channel state information (CSI) cost: Algorithm 1 requires each BS $j$ to know only the large-scale fading related to either itself or its user terminals, namely $\{\mathbf V_{ji}\;\text{and}\;\mathbf V_{ij},\forall i\}$. In contrast, Algorithm 2 additionally depends on the small-scale fading at each time instance, thus requiring the CSI $\{\mathbf H_{ji}\;\text{and}\;\mathbf H_{ij},\forall i\}$ at each BS $j$. The main point of this letter is that, despite requiring significantly less CSI, the deterministic power control in Algorithm 1 already performs close to the stochastic optimization benchmark Algorithm 2, as will be verified by simulation in Section \ref{sec:sim}.} We further analyze the computational complexity. Let $I$ be the total number of BSs deployed throughout the network. Assuming that each cell has the same number of users $K$, both the deterministic algorithm and the stochastic algorithm have a computational complexity of $O(K^3I^3M^2)$ per iteration.
However, since the stochastic algorithm requires many more iterations to converge than the deterministic one, its overall computational complexity is significantly higher, as will be verified by simulations in Section \ref{sec:sim}. \section{Simulation Results} \label{sec:sim} Consider a $7$-cell wrapped-around cellular topology. The cell radius is 500 meters. A total of 9 users are uniformly distributed in each cell. Assume that each BS has 96 antennas. The power constraint is $P_{\max}=10$ dBm. The pilot sequence has 16 symbols. The spectrum bandwidth is 1 MHz. Following the setup in {\cite{Emil_EUSP2015}}, we assume that the background noise is $-169$ dBm/Hz, and that the path-loss between user $(i,k)$ and BS $j$ is modeled as $\gamma_{j,ik}=\zeta_{j,ik}/d_{j,ik}^{3}$, where $\zeta_{j,ik}$ is an i.i.d. log-normal shadowing variable with standard deviation $\sigma_{\zeta}=8$ dB, and $d_{j,ik}$ is the distance between user $(i,k)$ and BS $j$. The parameters $(\alpha^t,\beta^t)$ follow the diminishing stepsize rule suggested in \cite{Yang_TSP2016_SSCA}. We simulate the following power control methods: (i) Deterministic, namely Algorithm \ref{alg:deterministic}, denoted as ``D''; (ii) Stochastic, namely Algorithm \ref{alg:stochastic}, denoted as ``S''; (iii) Equal allocation, denoted as ``E''. We also consider three different pilot designs: (i) Nonorthogonal pilots as designed in \cite{Kai_arxiv19_Rx_Nonorthogonal}, denoted as ``N''; (ii) Orthogonal, denoted as ``O''; (iii) Random (according to the Gaussian distribution), denoted as ``R''. In terms of the receiver, we consider either the MRC or the MMSE receiver. A total of eight different algorithms with different receivers, power control methods, and pilot designs are investigated. This work advocates the deterministic power control coupled with the nonorthogonal pilots.
Fig.~\ref{fig:Convergence}(a) and (b) plot the objective functions of Algorithm 1 and Algorithm 2, respectively, as functions of the iteration number. It shows that Algorithm 1 requires only around 10 iterations to converge, while Algorithm 2 requires many more iterations. Note that Algorithm 2 is used as a benchmark for Algorithm 1. Fig.~\ref{fig:Cumulative-distributon-function} compares the cumulative distribution of user rates using the optimized power obtained from the different algorithms. As shown in the figure, the proposed deterministic algorithm achieves almost the same performance as the stochastic optimization benchmark, with only a slight rate loss in the high-rate regime. Combined with Theorem \ref{prop:convergence}, this implies that the proposed deterministic algorithm comes close to a stationary point of the ergodic rate maximization problem (\ref{eq:mainP}). The two algorithms with nonorthogonal pilots are superior to all the methods with orthogonal pilots. For instance, they improve upon Stochastic-Orthogonal, which is the best among the orthogonal schemes, by around 16\% at the 50th percentile point. {Observe that all the methods with random pilots perform even worse than those with orthogonal pilots under power control (i.e., except E-O-MMSE).} The figure also shows that E-O-MMSE is inferior to D-N-MRC in terms of the data rates, which demonstrates the importance of power control to the multi-cell massive MIMO network. Thus, with the aid of power control, a simple MRC receiver can even outperform the much more complex MMSE receiver. Fig.~\ref{fig:numAtn} shows the sum rate performance versus the number of antennas at each BS. It shows that the performance is monotonically increasing with the number of antennas; the growth rate tapers off as the number of antennas increases. Again, the proposed deterministic and stochastic algorithms have similar performance.
Moreover, it is seen that the two algorithms outperform all the other benchmarks for all $M$. \section{Conclusion} This letter explores the potential of using power control to mitigate pilot contamination for uplink massive MIMO under nonorthogonal pilots. The main contribution of this letter is in showing that performing power control by maximizing a deterministic approximation of the ergodic rates already comes close to a stochastic optimization benchmark in which power control can be hypothetically performed over the instantaneous channel realizations. The proposed power control method provides significant throughput improvements upon the classic massive MIMO system with orthogonal pilots due to its ability to better mitigate pilot contamination. \appendices \section{Proof of Theorem \ref{thm:deterministic}}\label{appendixA} We first show the existence of at least one limit point. The feasible set of the variables $(\bm{\mu},\bm{p},\bm{\eta})$ is convex and compact. It can be shown that the objective of problem (10) is bounded over the feasible set. Thus, the sequence of iterates produced by Algorithm 1 is bounded. Since any bounded sequence must have at least one limit point, the existence of a limit point of Algorithm 1 is guaranteed. We next use the equivalence between (9) and (10). We let $f_o(\bm{p})=\sum_{(i,k)}w_{ik}\hat{R}_{ik}(\bm{p})$, and let $f_r(\bm{p},\bm{\eta},\bm{\mu})= -\sum_{(i,k)} w_{ik}(\mu_{ik}\eta_{ik}-\log_2\mu_{ik})$. Moreover, we use the superscript $i$ to index the iteration in Algorithm 1. Thus, $\bm{\eta}^i$ is updated by (11) with $(\bm{p}^i,\bm{\mu}^i)$, and $\bm{\mu}^i$ is updated by (12) with $(\bm{p}^i,\bm{\eta}^i)$.
It turns out that \begin{align} f_o(\bm{p}^{i+1}) &\overset{(a)}{=} f_r(\bm{p}^{i+1},\bm{\eta}^{i+1},\bm{\mu}^{i+1})\overset{(b)}{\geq} f_r(\bm{p}^{i},\bm{\eta}^{i+1},\bm{\mu}^{i+1}) \nonumber\\ &\overset{(c)}{\geq} f_r(\bm{p}^{i},\bm{\eta}^{i},\bm{\mu}^{i+1})\overset{(d)}{\geq} f_r(\bm{p}^{i},\bm{\eta}^{i},\bm{\mu}^{i})\overset{(e)}{=} f_o(\bm{p}^{i}),\nonumber \end{align} where (a) and (e) both follow from the equivalence between (9) and (10); (b) follows since the update of $\bm{p}$ in (13) maximizes $f_r$ when the other variables are fixed; (c) follows since the update of $\bm{\eta}$ in (11) maximizes $f_r$ when the other variables are fixed; and (d) follows since the update of $\bm{\mu}$ in (12) maximizes $f_r$ when the other variables are fixed. Hence, the objective value of the deterministic power control problem (9) is nondecreasing after each iteration. Furthermore, the convergence to a stationary point can be established by using a block coordinate descent (BCD) argument as in \cite{luo_wmmse}. The proof is thus completed. \bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:intro} Hyperspectral images (HSIs) are high-dimensional images---often remotely sensed by airborne or orbital spectrometers---that encode rich spectral and spatial structure~\cite{bioucas2013hyperspectral}, which has enabled the detection of material structure in a scene using machine learning algorithms~\cite{ADVIS, DVIS}. However, due to the large volume of HSI data continuously generated by remote sensors, expert annotations (often required for supervised algorithms) are usually difficult to obtain. Moreover, there is an inherent trade-off between spatial and spectral resolution in HSIs~\cite{bioucas2013hyperspectral, DVIS}. HSIs are often created at a coarse spatial resolution due to this trade-off, meaning that some pixels in an HSI correspond to spatial regions in the scene containing many different materials~\cite{bioucas2013hyperspectral, DVIS}. Thus, it is crucial to develop unsupervised approaches that capture the underlying geometric structure of an HSI while incorporating spectral mixing. This work introduces the Spatial-Spectral Image Reconstruction and Clustering with Diffusion Geometry (DSIRC) algorithm for unsupervised material discrimination using HSIs. DSIRC is a variant of the unsupervised Diffusion and Volume maximization-based Image Clustering (D-VIC) algorithm~\cite{DVIS}. DSIRC improves upon D-VIC by incorporating spatial information through Shape-adaptive Reconstruction (SaR), which smooths the spectral signatures of pixels and thus promotes local spatial regularity of HSIs before cluster analysis~\cite{LiSar2022}. Since HSI pixels that are spatially close tend to come from the same cluster, DSIRC substantially outperforms D-VIC (which is agnostic to spatial information) in extensive numerical experiments on real-world HSI data. This article is organized as follows. Section \ref{sec: background} contains background on HSI analysis (e.g., clustering, reconstruction, and spectral unmixing), diffusion geometry, and D-VIC.
Section \ref{sec: DSIRC} introduces DSIRC. Section \ref{sec: numerics} contains numerical comparisons of DSIRC against classical and state-of-the-art algorithms. Section \ref{sec: conclusions} concludes and discusses future work. \section{Related Works} \label{sec: background} \subsection{Hyperspectral Image Clustering} Algorithms for HSI clustering segment HSI pixels $X=\{x_i\}_{i=1}^N \subset\mathbb{R}^B$ (interpreted as a point cloud of $N$ pixels' spectral signatures, where $B$ is the number of spectral bands) into \emph{clusters} of pixels $\{X_k\}_{k=1}^K$~\cite{friedman2001elements}. Ideally, any two pixels from the same cluster should share key commonalities (e.g., common materials~\cite{DVIS}). HSI clustering algorithms are \emph{unsupervised}; i.e., the partition $\{X_k\}_{k=1}^K$ is recovered without the aid of ground truth (GT) labels~\cite{DVIS}. \subsection{Hyperspectral Image Reconstruction} Spatially close HSI pixels tend to come from the same cluster, but intra-cluster spectral reflectances may vary substantially due to the inherently coarse spatial resolution of HSIs. This section reviews \emph{HSI reconstruction}, which efficiently denoises hyperspectral data by reconstructing HSI pixels using the spectra of spatial nearest neighbors. HSI reconstruction has been successfully used as a preprocessing step for semi-supervised learning~\cite{LiSar2022, LiMDPI} and is expected to be useful for unsupervised learning~\cite{DVIS}. HSI reconstruction algorithms denoise an image by reconstructing the spectral signature of each pixel $x\in X$ using a linear combination of spatial neighbors' spectral signatures. A pixel is considered a spatial neighbor of $x$ if it is contained in a spatial window centered at $x$ in the original image.
While simple spatial squares have been successfully used as spatial windows in unsupervised and semi-supervised algorithms, the spatial radius generally requires tuning in practice~\cite{LiSar2022, LiMDPI, murphy2018unsupervised, murphy2019spectral, sam2021multi}. In contrast, shape-adaptive (SA) regions may be used for parameter-free HSI reconstruction~\cite{LiSar2022}. \subsection{Shape-adaptive Reconstruction}\label{sec: SaR} This section introduces the SaR algorithm for HSI reconstruction. Denote the spatial coordinate of an HSI pixel $x\in X$ by $\xi_x=(\xi_1,\xi_2)^T$. We model the first principal component (PC) score of $x$~\cite{friedman2001elements}, denoted $\mathbf{Z}(\xi_x)$, as $\mathbf{Z}(\xi_x) = \mathbf{I}(\xi_x) + \varepsilon_x$, where $\mathbf{I}(\xi_x)$ and $\varepsilon_x$ model the ideal signal and noise associated with the spectral signature $x$. To learn the SA region for $x$, SaR first estimates the signal $\mathbf{I}(\xi_x)$ using local polynomial approximation (LPA) filtering. Mathematically, for each direction $\theta_m$ ($m=1,2,\dots,8$) and length candidate $l \in L_{sar}$ ($L_{sar}$ is the set of all possible length candidates), LPA estimates the ideal signal associated with $x$ using $\hat{\mathbf{I}}_{l,\theta_m}(\xi_x)=\sum_s{g_{l,\theta_m}(u_s)\mathbf{Z}(\mathbf{R}_{\theta_m}\xi_s)},$ where $g_{l,\theta_m}(u_s)$ is the directional LPA kernel~\cite{LiSar2022} for direction $\theta_m$ and length $l$, $u_s=\mathbf{R}_{\theta_m}(\xi_x-\xi_s)$ is the rotated coordinate difference between $\xi_x$ and $\xi_s$ (each $\xi_s$ depends on $l$~\cite{katkovnik_2006}), and $\mathbf{R}_{\theta_m}={\begin{bmatrix} \cos(\theta_m) & \sin(\theta_m) \\ -\sin(\theta_m) & \cos(\theta_m) \\ \end{bmatrix}}$. As such, LPA estimates the signal associated with the pixel $x$ for each direction $\theta_m$ and length candidate $l$.
To select the optimal length $l_{\theta_m}^*\in L_{sar}$ for each direction $\theta_m$, SaR relies on the intersection of confidence intervals (ICI) rule, implemented on the first PC of $X$. Mathematically, for each direction $\theta_m$ and length candidate $l$, ICI estimates a confidence interval for the signal associated with the pixel $x$: $\text{CI}(\hat{\mathbf{I}}_{l,\theta_m}(\xi_x))=[\alpha_{l,\theta_m}(\xi_x),\beta_{l,\theta_m}(\xi_x)]$, with bounds $\alpha_{l,\theta_m}(\xi_x) = \hat{\mathbf{I}}_{l,\theta_m}(\xi_x)-\tau \sigma\sum_s{g_{l,\theta_m}(u_s)^2}$ and $\beta_{l,\theta_m}(\xi_x)=\hat{\mathbf{I}}_{l,\theta_m}(\xi_x)+\tau \sigma\sum_s{g_{l,\theta_m}(u_s)^2}$, where $\tau$ is a threshold that may be tuned via cross validation~\cite{katkovnik_2006}. The length $l_{\theta_m}^*$ is then selected as the largest candidate $l_j$ such that $\max\limits_{i=1,\dots,j}{\alpha_{l_i,\theta_m}}\leq\min\limits_{i=1,\dots,j}{\beta_{l_i,\theta_m}}$~\cite{foi2007pointwise}. In particular, for each direction $\theta_m$, ICI selects the largest length $l_{\theta_m}^*$ such that the intersection of confidence intervals of LPA-estimated signal values $\bigcap_{l_i\leq l}{\rm CI}(\hat{\mathbf{I}}_{l_i,\theta_m}(\xi_x))$ is nonempty~\cite{foi2007pointwise, fu2015hyperspectral}. SaR uses the SA region learned through the procedure outlined above to reconstruct the spectral signature of $x$ as $\tilde{x}=\frac{\sum_{y\in Z(x)}w(x,y)y}{\sum_{y\in Z(x)}w(x,y)}$, where $Z(x)$ is the set of pixels contained in the SA region associated with $x$ and $w(x,y)$ is the Pearson correlation coefficient between $x$ and $y$~\cite{LiSar2022}. SaR (provided in Algorithm \ref{alg: SaR}) has been successfully used to aid in semi-supervised segmentation of HSIs~\cite{LiSar2022} and is expected to prove useful for unsupervised HSI clustering.
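A minimal sketch of the two SaR ingredients (the ICI length selection and the Pearson-weighted reconstruction) follows; the confidence bounds are taken as given rather than computed from LPA kernels, and the correlation weights are assumed positive, as is typical within a homogeneous SA region.

```python
import numpy as np

def ici_select(alpha, beta):
    """ICI rule: index of the largest length candidate whose prefix of
    confidence intervals [alpha_i, beta_i], i = 1..j, still intersects.
    Because the running max of lower bounds is nondecreasing and the
    running min of upper bounds is nonincreasing, the valid indices
    form a prefix."""
    lo = np.maximum.accumulate(alpha)
    hi = np.minimum.accumulate(beta)
    ok = np.nonzero(lo <= hi)[0]
    return ok[-1] if ok.size else 0

def sar_reconstruct(x, region):
    """Pearson-correlation-weighted average of the pixels in the SA region."""
    w = np.array([np.corrcoef(x, y)[0, 1] for y in region])
    return (w[:, None] * region).sum(axis=0) / w.sum()

# The third interval [2.5, 4] no longer meets [1, 2], so ICI stops at index 1.
j = ici_select(np.array([0.0, 1.0, 2.5]), np.array([2.0, 3.0, 4.0]))

# Perfectly correlated neighbors receive equal weight, so the
# reconstruction reduces to their plain average.
x = np.array([1.0, 2.0, 3.0, 4.0])
region = np.stack([x, 2 * x])
x_tilde = sar_reconstruct(x, region)
```

In the full algorithm, the confidence bounds come from the LPA estimates of the preceding paragraph, one ICI selection per direction.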
\begin{algorithm}[t] \SetAlgoLined \KwIn{ $X$ (dataset), $L_{sar}$ (length candidates)} \KwOut{$\tilde{X}$ (reconstructed dataset)} Project pixel spectra onto their first PC~\cite{friedman2001elements}\; \For{$x\in X$}{ Estimate the signal associated with $x$ using LPA for each length $l\in L_{sar}$ and direction $\theta_m$ ($m=1,2,\dots 8$)~\cite{LiSar2022}\; For each direction $\theta_m$ ($m=1,2,\dots, 8$), compute the optimal length $l_{\theta_m}^*$ for that direction using ICI~\cite{foi2007pointwise}. Denote the pixels in the resulting SA region as $Z(x)$\; Reconstruct $x$ as $\tilde{x} = \frac{\sum_{y\in Z(x)}w(x,y)y}{\sum_{y\in Z(x)}w(x,y)}$\; } \caption{Shape-adaptive Reconstruction (SaR)}\label{alg: SaR} \end{algorithm} \subsection{Blind Spectral Unmixing} Due to an inherent tension between spatial and spectral resolution, HSIs are often generated at a coarse spatial resolution~\cite{bioucas2013hyperspectral}. As such, a single pixel may correspond to a spatial region containing multiple materials~\cite{DVIS,bioucas2008HySime}. Linear spectral unmixing algorithms recover latent material structure in HSIs by decomposing pixel spectra into a linear combination of endmembers: spectral signatures intrinsic to the materials in the scene. Mathematically, if $p$ is the number of materials in the scene, a blind spectral unmixing algorithm locates a matrix $\mathbf{E}\in\mathbb{R}^{p\times B}$ (with rows containing endmembers) and abundance vectors $a_i\in\mathbb{R}^p$ such that $x_i \approx a_i^{\top}\mathbf{E}$~\cite{bioucas2013hyperspectral,cui2021unsupervised}. The \emph{purity} of a pixel $x_i$---defined by $\eta(x_i)=\max_{1\leq j\leq p} (a_i)_j$---therefore quantifies the level of mixture in the pixel $x_i$. Indeed, $\eta(x)$ will be large only if $x$ corresponds to a spatial region containing predominantly just one material~\cite{ADVIS, DVIS,cui2021unsupervised}.
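The abundance and purity computation can be illustrated on a toy two-endmember scene; the snippet below uses a simple projected-gradient nonnegative least-squares solver as a stand-in for the dedicated solver of \cite{bro1997fast}, and the endmember matrix is made up for illustration rather than produced by an unmixing pipeline.

```python
import numpy as np

def nnls_pg(E, x, iters=500):
    """Nonnegative least squares min_{a >= 0} ||E^T a - x||^2 via
    projected gradient descent (a stand-in for a dedicated NNLS solver).
    The gradient of (1/2)||E^T a - x||^2 is E E^T a - E x."""
    G = E @ E.T
    step = 1.0 / np.linalg.eigvalsh(G).max()
    a = np.zeros(E.shape[0])
    for _ in range(iters):
        a = np.maximum(0.0, a - step * (G @ a - E @ x))
    return a

# Toy scene: two endmembers over B = 4 bands; the pixel is a 70/30 mixture.
E = np.array([[1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0]])   # rows = endmember spectra
x = 0.7 * E[0] + 0.3 * E[1]
a = nnls_pg(E, x)
purity = a.max()                        # eta(x) = max_j (a)_j; here a sums to one
```

For this noiseless mixture the solver recovers the abundances exactly, and the purity of 0.7 reflects a pixel dominated by, but not exclusively composed of, the first material.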
\subsection{Diffusion Geometry} Graph-based clustering methods efficiently extract latent nonlinear structure in HSIs by interpreting pixels as nodes in an undirected, weighted graph~\cite{coifman2006diffusion}. Edges between nodes are encoded in an adjacency matrix $\mathbf{W}\in\mathbb{R}^{N\times N}$; $\mathbf{W}_{ij}=1$ if $x_j$ is one of the $k_{n}$ $\ell^2$-nearest neighbors of $x_i$ in $X$, and $\mathbf{W}_{ij}=0$ otherwise. Let $\mathbf{D}$ be the $N\times N$ diagonal degree matrix with $\mathbf{D}_{ii} = \sum_{j=1}^N \mathbf{W}_{ij}$. Then, $\mathbf{P} = \mathbf{D}^{-1}\mathbf{W}$ may be interpreted as the transition matrix for a Markov diffusion process on HSI pixels. If the graph underlying $\mathbf{P}$ is irreducible and aperiodic, then $\mathbf{P}$ has a unique stationary distribution $\pi\in\mathbb{R}^{1\times N}$ satisfying $\pi \mathbf{P}=\pi$. \emph{Diffusion distances} enable comparisons between pixels in the context of the diffusion process encoded in $\mathbf{P}$. Define the diffusion distance at time $t\geq 0$ between $x_i,x_j\in X$ by $D_t(x_i, x_j) = \sqrt{\sum_{k=1}^N [(\mathbf{P}^t)_{ik}-(\mathbf{P}^t)_{jk}]^2/\pi_k }$~\cite{coifman2006diffusion}. The diffusion time parameter $t$ controls the scale of structure considered by diffusion distances; smaller $t$ corresponds to the recovery of local structure and larger $t$ corresponds to the recovery of global structure~\cite{sam2021multi,murphy2021multiscale}. Diffusion distances may be efficiently computed using the eigendecomposition of $\mathbf{P}$. Indeed, if $\{(\lambda_k, \psi_k)\}_{k=1}^N$ are the (right) eigenvalue-eigenvector pairs of the transition matrix $\mathbf{P}$, then $D_t(x_i,x_j) = \sqrt{\sum_{k=1}^N|\lambda_k|^{2t}[(\psi_k)_i -(\psi_k)_j]^2}$ for any $t\geq 0$ and $x_i,x_j\in X$. Importantly, for $t$ sufficiently large, diffusion distances can therefore be accurately approximated using only the eigenvectors $\psi_k$ with sufficiently large $|\lambda_k|$.
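The identity between the definition of $D_t$ and its spectral expression can be checked numerically. The sketch below uses a dense, symmetric weight matrix for illustration (a symmetrized kNN graph would be used in practice), with the right eigenvectors of $\mathbf P$ rescaled to be orthonormal in the $\pi$-weighted inner product, which is the normalization under which the spectral formula holds.

```python
import numpy as np

rng = np.random.default_rng(0)
N, t = 8, 3
A = rng.random((N, N))
W = A + A.T + np.eye(N)          # symmetric, strictly positive weights
d = W.sum(axis=1)
P = W / d[:, None]               # Markov transition matrix D^{-1} W
pi = d / d.sum()                 # stationary distribution

# Direct definition: D_t(i,j)^2 = sum_k (P^t_{ik} - P^t_{jk})^2 / pi_k.
Pt = np.linalg.matrix_power(P, t)
diff = Pt[:, None, :] - Pt[None, :, :]
D2_direct = (diff ** 2 / pi).sum(axis=2)

# Spectral expression: eigendecompose S = D^{-1/2} W D^{-1/2}; the
# vectors psi_k = D^{-1/2} phi_k (rescaled so sum_k pi_k psi_k^2 = 1)
# are right eigenvectors of P.
lam, Phi = np.linalg.eigh(W / np.sqrt(np.outer(d, d)))
psi = Phi / np.sqrt(d)[:, None] * np.sqrt(d.sum())
D2_spectral = sum(
    lam[m] ** (2 * t) * (psi[:, None, m] - psi[None, :, m]) ** 2
    for m in range(N)
)
```

Truncating the sum over $m$ to the eigenvalues of largest modulus gives the fast approximation mentioned above.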
\subsection{Diffusion and Volume Maximization-based Image Clustering} \label{sec: D-VIC} D-VIC is a highly accurate diffusion-based HSI clustering algorithm that directly incorporates spectral mixing into its labeling procedure~\cite{DVIS}. D-VIC first estimates $\eta(x)$ through a standard spectral unmixing step: using Hyperspectral Subspace Identification by Minimum Error to estimate $p$, Alternating Volume Maximization to estimate $\mathbf{E}$, and a nonnegative least squares solver to estimate abundances~\cite{bioucas2008HySime,chan2011simplex, bro1997fast}. The empirical density of pixels is estimated using $f(x) = \sum_{y\in NN_{k_n}(x)}\exp(-\|x-y\|_2^2/\sigma_0^2)$, where $NN_{k_n}(x)$ is the set of $k_n$ $\ell^2$-nearest neighbors of $x$ in $X$ and $\sigma_0>0$ is a scaling factor controlling the interaction radius between pixels in density calculations. Denoting by $\zeta(x)$ the harmonic mean of the normalized purity $\hat{\eta}(x)=\frac{\eta(x)}{\max_{1\leq i\leq N}\eta(x_i)}$ and density $\hat{f}(x)=\frac{f(x)}{\max_{1\leq i\leq N}f(x_i)}$, the following function is constructed to incorporate diffusion geometry into mode selection: \begin{align*} d_t(x) = \begin{cases} \max\limits_{y\in X}D_t(x,y) & x = \argmax\limits_{y\in X}\zeta(y),\\ \min\limits_{y\in X}\{D_t(x,y)| \zeta(y)\geq \zeta(x)\} & \text{otherwise.} \end{cases} \end{align*} By definition, the $K$ maximizers of $\mathcal{D}_t(x)=\zeta(x)d_t(x)$ are pixels high in density and purity but far in diffusion distance from other pixels high in density and purity. These pixels are selected as class modes and given unique labels. Non-modal pixels are then, in order of non-increasing $\zeta(x)$, assigned the label of their $D_t$-nearest neighbor of higher $\zeta$-value that is already labeled.
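The mode selection and labeling procedure can be sketched as follows; Euclidean distances on a synthetic 1-D point set stand in for diffusion distances, the $\zeta$ values are made up for illustration, and the pixel itself is excluded from the minimum in $d_t$.

```python
import numpy as np

def dvic_label(D, zeta, K):
    """D: pairwise (diffusion) distances; zeta: purity/density scores.
    Returns cluster labels following the D-VIC mode selection and
    non-modal labeling procedure."""
    N = len(zeta)
    d = np.empty(N)
    for i in range(N):
        higher = np.flatnonzero(zeta >= zeta[i])
        higher = higher[higher != i]            # exclude the pixel itself
        d[i] = D[i].max() if higher.size == 0 else D[i, higher].min()
    modes = np.argsort(-zeta * d)[:K]           # K maximizers of zeta * d_t
    labels = np.full(N, -1)
    labels[modes] = np.arange(K)
    for i in np.argsort(-zeta):                 # non-increasing zeta
        if labels[i] >= 0:
            continue
        cand = np.flatnonzero((labels >= 0) & (zeta >= zeta[i]))
        labels[i] = labels[cand[np.argmin(D[i, cand])]]
    return labels

# Two well-separated 1-D groups; zeta peaks at each group's "center".
pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
D = np.abs(pts[:, None] - pts[None, :])
zeta = np.array([0.9, 0.8, 0.7, 0.95, 0.85, 0.75])
labels = dvic_label(D, zeta, K=2)
```

On this toy input the two $\zeta$ peaks are selected as modes and each group inherits its peak's label.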
\section{Spatial-Spectral Image Reconstruction and Clustering with Diffusion Geometry} \label{sec: DSIRC} Real-world HSIs often encode strong spatial regularity; i.e., pixels that are spatially close are more likely to contain the same materials. As such, incorporating the rich spatial structure contained in HSIs often leads to substantially higher performance among algorithms for HSI clustering and segmentation~\cite{LiSar2022,LiMDPI,murphy2018unsupervised,murphy2019spectral,sam2021multi}. \begin{algorithm}[tb] \SetAlgoLined \KwIn{ $X$ (dataset), $k_n$ (\# nearest neighbors), \\ $\sigma_0$ (KDE bandwidth), $t$ (diffusion time parameter),\\ $K$ (\# classes), $L_{sar}$ (length candidates)} \KwOut{$\hat{\mathcal{C}}$ (HSI clustering)} For each $x\in X$, compute $\eta(x)$ and $f(x)$\; For each $x\in X$, compute $\zeta(x) = \frac{2\hat{f}(x) \hat{\eta}(x)}{\hat{f}(x)+ \hat{\eta}(x)}$\; Compute the reconstructed data $\tilde{X}={\rm SaR}(X)$\; Label $\hat{\mathcal{C}}(\tilde{x}_{m_k}) = k$ for $1\leq k \leq K$, where $\{\tilde{x}_{m_k}\}_{k=1}^K$ are the $K$ maximizers of $\mathcal{D}_t(\tilde{x}) = \zeta(x) d_t(\tilde{x})$\; In order of non-increasing $\zeta(x)$, assign each non-modal $\tilde{x}\in \tilde{X}$ the label $\hat{\mathcal{C}}(\tilde{x}) = \hat{\mathcal{C}}(\tilde{x}^*)$, where $\tilde{x}^* = \argmin\limits_{y\in X}\{D_t(\tilde{x},y)|\zeta(y)\geq \zeta(\tilde{x}) \ \land \ \hat{\mathcal{C}}(y)>0\}$\; \caption{Spatial-Spectral Image Reconstruction and Clustering with Diffusion Geometry}\label{alg: DSIRC} \end{algorithm} This section introduces the DSIRC clustering algorithm, which explicitly incorporates HSI reconstruction into D-VIC. In its first stage, DSIRC computes $\zeta(x)$ using the original pixel spectra (as is described in Section \ref{sec: D-VIC})~\cite{DVIS}. Then, the SaR algorithm (Section \ref{sec: SaR}) is implemented on $X$: a denoising step that incorporates spatial information into its HSI reconstruction~\cite{LiSar2022}. 
Diffusion distances are calculated using the SaR-reconstructed image. The $K$ maximizers of $\mathcal{D}_t(\tilde{x})=\zeta(x)d_t(\tilde{x})$ are considered cluster modes and assigned unique labels. Non-modal pixels are (in order of non-increasing $\zeta(x)$) assigned the label of their $D_t$-nearest neighbor of higher $\zeta$-value that is already labeled. As such, the sole difference between DSIRC and D-VIC is that DSIRC incorporates spatial information through its SaR-based HSI reconstruction step, while D-VIC is agnostic to spatial information~\cite{DVIS,LiSar2022}. \begin{figure*}[ht] \centering \begin{subfigure}[t]{0.19\textwidth} \centering \includegraphics[width = \textwidth]{Figures/IP/GT.eps} \hspace{0.02in} \vspace{-0.5cm} \caption{Ground Truth} \end{subfigure} % \begin{subfigure}[t]{0.19\textwidth} \centering \includegraphics[width = \textwidth]{Figures/IP/PC1.eps} \hspace{0.02in} \vspace{-0.5cm} \caption{First PC} \end{subfigure} % \begin{subfigure}[t]{0.19\textwidth} \centering \includegraphics[width = \textwidth]{Figures/Comparisons/kmeans.eps} \hspace{0.02in} \vspace{-0.5cm} \caption{$K$-Means} \end{subfigure} % \begin{subfigure}[t]{0.19\textwidth} \centering \includegraphics[width = \textwidth]{Figures/Comparisons/kmeansPca.eps} \hspace{0.02in} \vspace{-0.5cm} \caption{$K$-Means PCA} \end{subfigure} % \begin{subfigure}[t]{0.19\textwidth} \centering \includegraphics[width = \textwidth]{Figures/Comparisons/gmmpca.eps} \vspace{-0.5cm} \caption{GMM PCA} \end{subfigure} \begin{subfigure}[t]{0.19\textwidth} \centering \includegraphics[width = \textwidth]{Figures/Comparisons/sc.eps} \hspace{0.02in} \vspace{-0.5cm} \caption{SC} \end{subfigure} % \begin{subfigure}[t]{0.19\textwidth} \centering \includegraphics[width = \textwidth]{Figures/Comparisons/dvis.eps} \hspace{0.02in} \vspace{-0.5cm} \caption{D-VIC} \end{subfigure} % \begin{subfigure}[t]{0.19\textwidth} \centering \includegraphics[width = \textwidth]{Figures/Comparisons/sci.eps} \hspace{0.02in}
\vspace{-0.5cm} \caption{SC-I} \end{subfigure} % \begin{subfigure}[t]{0.19\textwidth} \centering \includegraphics[width = \textwidth]{Figures/Comparisons/dlss.eps} \hspace{0.02in} \vspace{-0.5cm} \caption{DLSS} \end{subfigure} % \begin{subfigure}[t]{0.19\textwidth} \centering \includegraphics[width = \textwidth]{Figures/Comparisons/dsirc.eps} \vspace{-0.5cm} \caption{DSIRC} \end{subfigure} \caption{Comparison of clustering results of classical algorithms (Panels (c)-(e)), state-of-the-art algorithms (Panels (f)-(i)), and DSIRC (Panel (j)) on the Indian Pines HSI (Panels (a)-(b)). DSIRC---the only algorithm evaluated that incorporates spatial information into a diffusion-based clustering scheme that accounts for spectral mixing---produces substantially better HSI segmentations than the comparison methods evaluated. } \label{fig:results} \end{figure*} \section{Experimental Results}\label{sec: numerics} This section contains comparisons of DSIRC against related HSI clustering algorithms on the Indian Pines HSI. Indian Pines---collected by the NASA AVIRIS sensor in northwest Indiana, USA---encodes $B=200$ bands of reflectance across $145\times 145$ pixels. The Indian Pines scene consists of $K=16$ GT classes, which are visualized in Fig.~\ref{fig:results}(a). Fig.~\ref{fig:results}(b) visualizes the first PC of Indian Pines. Clusterings were evaluated using overall accuracy (OA)---the fraction of correctly labeled pixels---and Cohen's $\kappa$-coefficient $\kappa=\frac{{\rm OA}-p_e}{1-p_e}$, where $p_e$ is the probability of agreement between the two labelings by chance. The classical algorithms we compared DSIRC against were $K$-Means and the Gaussian Mixture Model (GMM)~\cite{friedman2001elements}. $K$-Means locates the clustering that minimizes within-cluster Euclidean distances to cluster centroids. GMM fits a mixture of Gaussian distributions to the dataset using the expectation-maximization algorithm.
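One common way to compute OA and $\kappa$ for a clustering is sketched below: since cluster labels are arbitrary, they are first aligned to the ground-truth classes by a Hungarian matching over the contingency table. This is an illustrative convention, not necessarily the exact evaluation code used in the experiments.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_scores(y_true, y_pred):
    """Overall accuracy (OA) under the best one-to-one matching of
    cluster labels to ground-truth classes, and Cohen's kappa
    kappa = (OA - p_e) / (1 - p_e), with p_e the chance-agreement
    probability of the aligned labelings. Minimal sketch."""
    classes, clusters = np.unique(y_true), np.unique(y_pred)
    C = np.zeros((len(classes), len(clusters)), dtype=int)  # contingency table
    for i, c in enumerate(classes):
        for j, k in enumerate(clusters):
            C[i, j] = np.sum((y_true == c) & (y_pred == k))
    # Hungarian matching maximizes the total diagonal agreement.
    row, col = linear_sum_assignment(-C)
    n = len(y_true)
    oa = C[row, col].sum() / n
    # Chance agreement from the matched marginals.
    p_e = sum(C[r].sum() * C[:, c].sum() for r, c in zip(row, col)) / n**2
    return oa, (oa - p_e) / (1 - p_e)
```

With a perfect (but relabeled) clustering this returns $\text{OA}=1$ and $\kappa=1$; $\kappa$ discounts the agreement expected by chance.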
As HSIs have hundreds of spectral bands, PCA dimensionality reduction is often implemented before cluster analysis using $K$-Means and GMM~\cite{friedman2001elements}. We also compared against several state-of-the-art graph-based clustering algorithms. To emphasize the improvement associated with incorporating spatial information, we evaluated two graph-based algorithms that are agnostic to spatial information: spectral clustering (SC)~\cite{ng2001spectral} and D-VIC (see Section \ref{sec: D-VIC})~\cite{DVIS}. SC implements $K$-Means after the nonlinear mapping $x_i\mapsto[(\psi_1)_i\;(\psi_2)_i\dots (\psi_K)_i]$~\cite{ng2001spectral}. We also compared DSIRC against graph-based clustering algorithms that use spatial information. First considered was improved spectral clustering (SC-I), which modifies the graph underlying $\mathbf{P}$ in SC to incorporate spatial information into edge weights~\cite{zhao2019fast}. Additionally, we evaluated spectral-spatial diffusion learning (DLSS), which incorporates spatial information into a graph-based clustering framework~\cite{maggioni2019learning} by restricting edges between pixels to spatial nearest neighbors in a $(2R+1)\times (2R+1)$ spatial square centered at those pixels, where $R$ is a tunable spatial window input parameter~\cite{murphy2018unsupervised}. We optimized for OA across the same hyperparameter grid for all graph-based algorithms. For both DLSS and DSIRC, length candidates were drawn from the same set: $L_{sar} = \{1,2,3,5,7,9\}$. Table \ref{tab:results} compares the performance of DSIRC against the methods described above, and Figure \ref{fig:results} visualizes Indian Pines and the optimal clusterings obtained by DSIRC and comparison methods. DSIRC outperformed DLSS (its closest competitor) by 0.13 in OA and 0.20 in $\kappa$.
The main difference between these two algorithms is that DSIRC incorporates pixel purity into its mode selection and utilizes the spatial regularity of the HSI before its unsupervised diffusion-based labeling process. In contrast, DLSS incorporates spatial information through a spatially-regularized graph but does not directly rely on pixel purity to label the HSI~\cite{murphy2018unsupervised}. Furthermore, DSIRC relies on a spatially-adaptive window with automatically-determined shape, whereas DLSS requires the user to input the spatial window size $R$~\cite{murphy2018unsupervised}. Image reconstruction in DSIRC appears to efficiently remove ``spatial noise'' observed in the D-VIC clustering, as is visualized in Figure \ref{fig:results}. Thus, enforcing spatial regularity appears to improve the quality of a diffusion-based clustering quite substantially. \begin{table}[tb] \centering \resizebox{0.6\textwidth}{!}{% \begin{tabular}{|c|cc|c|cc|} \hline & \textbf{OA} & \boldmath{$\kappa$} & & \textbf{OA} & \boldmath{$\kappa$} \\ \hline \textbf{GMM PCA} & 0.3581 & 0.2821 & \textbf{SC-I} & 0.4696 & 0.3493 \\ \textbf{SC} & 0.3784 & 0.3029 & \textbf{D-VIC} & 0.4756 & 0.3848 \\ \boldmath{$K$}\textbf{-Means} & 0.3817 & 0.3080 & \textbf{DLSS} & {\ul 0.4886} & {\ul 0.4074} \\ \boldmath{$K$}\textbf{-Means PCA} & 0.3837 & 0.3085 & \textbf{DSIRC} & \textbf{0.6195} & \textbf{0.6123} \\ \hline \end{tabular}% } \caption{Performances of DSIRC and comparison methods on the Indian Pines dataset. Bold and underlined values indicate highest and second-highest performances, respectively. Performances of $K$-Means, $K$-Means PCA, GMM PCA, SC, D-VIC, and DSIRC were averaged across ten trials.
DSIRC achieved substantially higher performance than any comparison method.} \label{tab:results} \end{table} \section{Conclusions and Future Work}\label{sec: conclusions} We conclude that incorporating spatial information through image reconstruction appears to substantially improve the performance of pixel-wise HSI clustering algorithms that exploit known HSI structure such as spectral mixing. Thus, incorporating a shape-adaptive reconstruction akin to that used in DSIRC may be useful before the labeling of HSI pixels. Future work includes extending DSIRC to the active learning domain, wherein the labels of a few carefully-selected pixels are queried and propagated to the rest of the image~\cite{ADVIS,murphy2018unsupervised}. We also expect that DSIRC may be extended to the unsupervised multiscale clustering setting~\cite{sam2021multi,murphy2021multiscale}. The resulting unsupervised and active learning algorithms are likely to be successful in a number of applications, e.g., identifying changes in mining ponds in multispectral images over time, possibly reflecting the occurrence of artisanal and small-scale gold mining activities~\cite{SedaChange2022}. \printbibliography \end{document}
\section{INTRODUCTION} X-ray photoemission spectroscopy (XPS) is the most powerful technique to track the changes in the chemical environment of an atom through its core-level shifts. Core-level peaks are readily detected and identified, and core-level shifts can be determined, nowadays, with the highest precision. In reality, two phenomena contribute to the energy shift of a core-level during a chemical process. First, there is the change in the number of valence electrons or charge transfer into the atom or molecule, which is the quantity we wish to determine. Second, and sometimes overlooked, is the change in the way all energy levels in each atomic species, including core-electron levels, are screened by the external environment. This may include screening from the surrounding atoms, molecules or the substrate. As these two contributions are often of the same order of magnitude, and typically differ in sign, even a substantial charge transfer may result in quite a small core-level shift \cite{Ortega09FrozenLayer,Ortega09Assemblies}. Experimentally, core-level shifts are known to be affected by the so-called photoemission final-state effects, i.e., the changes in the screening of the photoemission core-hole. Moreover, core-hole screening can vary so much, e.g., during the oxidation of a metal \cite{Ortega94}, as to basically dictate the core-level shift. At this point, only photoemission theory can be used to account for both the (initial state) core-electron and the (final state) core-hole screening, in order to assess the charge transfer from the (experimental) XPS core-level shift. However, this requires expensive calculations of the photoemission excited state \cite{1cls,clsInteraction,Cole97,triguero98,Faulkner98CLS,Cole02CLSXPS,Stierle03CLSNiAl,Alagia05,Olovsson06CLSRev,Olovsson10CLS,Li10CLSmolads,Schmidt10} even for the simplest system, and hence it becomes unaffordable for molecular complexes with large numbers of atoms. 
\renewcommand{\thefootnote}{} \footnotetext{\hspace{-0.5cm}This document is the unedited Author's version of a Submitted Work that was subsequently accepted for publication in The Journal of Physical Chemistry C, copyright \copyright American Chemical Society after peer review. To access the final edited and published work see \href{http://dx.doi.org/10.1021/jp3004213}{http://dx.doi.org/10.1021/jp3004213}.} Systems where the changes in core-hole screening are small are better suited for combined theory/experiment studies, since final state effects can be neglected. This is the case, for example, when moving from a pure to a mixed donor/acceptor molecular monolayer on the same metal substrate. These systems have stimulated much interest for both experimentalists and theorists, particularly for olefins \cite{Dai05OlefinsAg111}, pentacene \cite{Louie02PENGrowth,Endres04PEN,HighPentaceneLDA,Cantrell08PENDiff,Koch08PENAg111,Toyoda10PENCuAgAu111,Mete10PENAg111,Oteyza10Fluorination}, perfluorinated pentacene \cite{Sakurai09PFPBi,Duhm2010PFPAg111,Witte11PFPAg111}, their 1:1 mixture \cite{Tokito04PFPPEN,Hinderhofer07PENandPFP,Koch08PENPFP,Toyoda11PENPFPCu111}, copper phthalocyanine \cite{Gerlach05CuPcAg,Wee09CuPcAg111,Wee09FCuPcAgAu111,Kumpf10CuPcAg111,Moller96CuPcAg111,TMPcAg100}, and fluorinated phthalocyanine \cite{Wee10FCuPcPEN,PEN+FCuPcAu111,FCuPcPENAu111}, on the (111) facet of the coinage metals. It has been repeatedly reported that such two-dimensional blending of donors and acceptors gives rise to core-level shifts in all atomic components. Here we show that the corresponding transfer of charge can be estimated from the core-level shift if changes in the external environment during the molecular blending process are properly accounted for. In fact, we demonstrate that, in the absence of major chemical disruptions, an effective potential approach can be utilized for a semi-quantitative evaluation of changes in core-electron screening.
This effective potential approach is computationally cheap, thereby allowing a fast and accurate determination of molecular charge transfer. The present work combines density functional theory (DFT) calculations with the XPS study of perfluorinated pentacene (PFP), copper phthalocyanine (CuPc) and their 1:1 mixture (PFP+CuPc), on the (111) surface facet of Ag. In Section~\ref{sec:methodology} we provide details of the computational and experimental methods employed, a derivation of the effective potential model, and a discussion of the initial state method used to calculate core-level shifts. We also test the reliability of these DFT calculations by a direct comparison of scanning tunneling microscopy (STM) measurements and simulations. The results are discussed in Section~\ref{sec:resultsanddiscussion}, beginning with the charging of PFP on Ag(111). The initial state method for calculating core-level shifts is then compared with XPS measurements for multilayers and monolayers of PFP on Ag(111). The dependence of the calculated core-level shifts on both charge transfer and the external potential is then demonstrated for monolayers of pure PFP, CuPc and their 1:1 mixture PFP+CuPc, and compared with experiment. These results are then used to compare the calculated charge transfer from DFT with that obtained from a model based on the core-level shifts and change in external potential. The model is then applied to the XPS core-level shifts, to estimate the experimental charge of PFP on Ag(111). This is followed by concluding remarks in Section~\ref{sec:conclusions}. In Appendix~\plainref{sec:stm} we provide further details of the STM simulation method employed. We then compare results from three different final state methods, described in Appendix~\plainref{sec:finalstatemethods}, with XPS measurements and initial state method calculations for monolayers of PFP on Ag(111) in Appendix~\plainref{sec:comparison}.
\section{METHODOLOGY}\label{sec:methodology} \subsection{Computational Methods} DFT calculations have been performed using the real-space projector augmented wave (PAW) \cite{blo} code \textsc{gpaw} \cite{gpaw1,gpaw2}, within the local density approximation (LDA) for the exchange-correlation (xc)-functional \cite{LDA}, using a grid spacing of 0.2~\AA. An electronic temperature of $k_B T \approx 0.1$~eV was employed to obtain the occupation of the Kohn-Sham orbitals, with all energies extrapolated to $T = 0$~K. Monolayers of PFP, CuPc, and their 1:1 mixture PFP+CuPc have been structurally optimized until a maximum force below 0.05~eV/\AA~was obtained in vacuum and adsorbed on the Ag(111) surface, while keeping the coordinates of the metal slab fixed. The lattice parameters, shown in Table~\ref{para}, are those commensurate with the experimental bulk lattice constant for Ag of 4.09~\AA \cite{Ashcroft}, which are nearest to the periodicity of the monolayer on the surface as observed by STM \cite{goiri}. In Figure~\ref{fgr:STM} (a) and (b) we compare the measured and calculated STM images, respectively, for the mixed 1:1 PFP+CuPc monolayer on Ag(111). The images agree quite well, justifying our approach. Further details concerning the STM simulation procedure are provided in Appendix~\plainref{sec:stm}. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{Figure1} \caption{STM images of a mixed 1:1 monolayer of CuPc and PFP adsorbed on Ag(111) from (a) experiment \cite{goiri} and (b) a Tersoff-Hamann \cite{stm} calculation.
The unit cell of the calculation is also shown, with $a$, $b$, and $\alpha$ provided in Table~\ref{para}.} \label{fgr:STM} {\color{JPCCBlue}{\rule{\columnwidth}{1pt}}} \end{figure} \begin{table}[!t] \caption{\textrm{ Lattice parameters \textit{a}, \textit{b}, and {$\bm{\alpha}$}, for the PFP and CuPc pure and 1:1 mixed monolayers on Ag(111) as obtained from STM (upper values)\cite{goiri}, and those commensurate with the experimental bulk lattice constant for Ag of 4.09~\AA \cite{Ashcroft}, used in the calculations (lower values)}}\label{para} \begin{tabular}{lr@{.}lr@{.}ll} \multicolumn{6}{>{\columncolor[gray]{0.9}}c}{ }\\[-3mm] \rowcolor[gray]{0.9}monolayer& \multicolumn{2}{>{\columncolor[gray]{.9}}c}{$a$ (\AA)} & \multicolumn{2}{>{\columncolor[gray]{.9}}c}{$b$ (\AA)} & \multicolumn{1}{>{\columncolor[gray]{.9}}c}{$\alpha$}\\[1mm] \multirow{2}{*}{PFP} & 17&$0\pm 1.0$ & 8&$8\pm 0.9 $ & $62^\circ\pm 2^\circ$ \\ & 17&352 & 8&676 & ${60}^\circ$ \\[1mm] \multirow{2}{*}{CuPc} & 14&$1\pm 0.8$ & 13&$9\pm 0.7 $ & $88^\circ\pm 4^\circ$ \\ & 14&460 & 15&028 & ${90}^\circ$ \\[1mm] \multirow{2}{*}{PFP+CuPc} & 29&$3\pm 0.6$ & 22&$0\pm 2.0 $ & $89^\circ\pm 6^\circ$ \\ & 30&055 & 23&137 & ${90}^\circ$ \\ \end{tabular} {\color{JPCCBlue}{\rule{\columnwidth}{1pt}}} \end{table} The Ag(111) surface has been modeled using $N=1,2,3,\ldots,6$ layers with the slabs separated from their periodic images by more than 12~\AA\ of vacuum. For such large unit cells as those used herein, \emph{cf}.~Table~\ref{para}, we found a $\Gamma$ point calculation was sufficient to converge the electronic density for the mixed PFP+CuPc monolayer, while for pure PFP and CuPc we employed a ($1\times3\times1$) and ($3\times3\times1$) $\textbf{k}$-point sampling, respectively. However, for calculating core-level shifts from an initial or final state method, a $\Gamma$-point calculation based on the optimized geometry was found to be sufficient to converge the core 1s levels. 
For each monolayer, both spin polarized and spin paired calculations were performed in vacuum. For CuPc we find the molecule has a magnetic moment of $\mu = 1\mu_0$ in vacuum. However, in the 1:1 mixture consisting of two CuPc and two PFP molecules, shown in Figure~\ref{fgr:STM}, CuPc is paramagnetic with no net magnetic moment, as was also the case for PFP. For this reason spin paired calculations were sufficient to describe the adsorption of the two monolayers on the Ag(111) surface. To model the effective potential for the semi-infinite Ag(111) surface $V$ in the experiment, we have used a fully-relaxed 13 layer Ag(111) slab. Such a thick slab is required to completely converge the band structure of the Ag(111) surface, and remove surface--surface interactions from the calculation. In this case, we have employed the generalized gradient approximation (GGA) as implemented by Perdew, Burke, and Ernzerhof (PBE) for the xc-functional\cite{PBE} with an optimized bulk lattice constant of 4.166~\AA\ in the surface plane, and a ($11\times11\times1$) $\textbf{k}$-point sampling. The GGA-PBE xc-functional is expected to provide a more accurate description of the experimentally-observed effective potential, by removing the spurious long range over-binding found in LDA calculations. \subsection{Experimental Setup} The Ag crystal was cleaned by cycles of Ar$^{\mathrm{+}}$ ion sputtering followed by annealing to about 400$^\circ$C. Molecule coverage was calibrated using a quartz crystal microbalance. Measurements took place in UHV conditions, with base pressures in the 10$^{-10}$ mbar range. STM measurements were performed with a commercial Omicron VT-STM in constant current mode with electrochemically etched W tips. The XPS experiments were performed at the ALOISA beamline of the Elettra Synchrotron in Trieste, Italy. A photon energy of 500~eV was used for the C1s and N1s core-levels, and 810~eV was used for the F1s core-level.
Cleanliness of the surface was checked by measuring the C1s and O1s spectrum. At the same time, coverage in pure and mixed layers was verified through analysis of the N1s and F1s core-level intensities, with the Ag3d level as common reference. \subsection{Theoretical Model}\label{sec:theoreticalmodel} \begin{figure*}[!t] \centering \includegraphics[width=0.9\textwidth]{Figure2} \caption{Schematic depicting the initial state method for calculating core-level energy shifts as applied to a PFP monolayer in vacuum, charged, and adsorbed on a Ag(111) three layer surface. The calculated density of states (DOS) for the initial state ({\Large{\textbf{------}}}), charge of the molecule $Q$, external potential $V$ ({\color{red}{{---------}}}), and C1s energy shifts are shown. Occupation of the DOS is denoted by grey regions, with charge added to the molecule in vacuum marked in red. Depictions of the neutral, charged ($Q \approx -0.75$~$e$), and adsorbed PFP on Ag(111) are also shown below, with the change in charge density depicted by isosurfaces of $-0.1$~$e/a_0^3$. C and F atoms are depicted by gray and green balls, respectively. Note the DOS for the PFP monolayer in vacuum was increased by a factor of five for clarity.} \label{fgr:initialstatemethod} {\color{JPCCBlue}{\rule{\textwidth}{1pt}}} \end{figure*} Once the equilibrium structures were obtained from standard DFT, calculations of the core-level shifts were then performed. The core-level shift, \(\Delta E\), is defined as the difference in binding energy of an electron in a core state between two environments \(E_{1}\) and \(E_{2}\), so that \begin{equation} \Delta E \equiv E_{2} - E_1 \end{equation} As a particular example, we may consider \(\Delta E\) the difference in binding energy of the C1s level of PFP between a monolayer in vacuum and a mixed 1:1 monolayer of PFP+CuPc adsorbed on the Ag(111) surface. 
The core-level shift is determined, to a large extent, by two factors: (1) the charge transfer into the atom $Q_a$, and (2) the effect of screening of the atom by the external environment. In the linear regime, where major chemical interactions are absent, the change in screening should be related to the change in the effective potential coming from the atom/molecule's external environment \(V\), at the atom's position \(\mathbf{r}_a\). Under these conditions, the core-level shift should be an additively separable function of the charge transfer and change in external potential, i.e. \begin{equation} \Delta E(Q_a, V) \approx f_{\textit{mol}}(Q_a) + g(V)\label{eqn:ECLS} \end{equation} Here \(g(V)\) describes the change in screening of the atom between the two environments. If we assume the molecule does not undergo significant alteration between the two environments, then \(g\) should only weakly depend on the molecule. In this weak interaction limit, we may further approximate the screening from the environment simply by \begin{equation} g(V) \approx - V(\mathbf{r}_a)\label{eqn:gscr} \end{equation} In this way, the core-levels of an atom should shift to \emph{stronger} binding energies when the molecule enters a binding external potential, e.g.\ due to a screening by a surface. Although somewhat drastic, we shall show that this crude approximation captures the physics of the core-electron screening for such systems. The dependence of the core-level shift on the charge transfer \(f_{\textit{mol}}(Q_a)\) should depend only on the local chemical environment of the molecule, and may be assumed independent of the external environment. In this way, for typical donor--acceptor charge transfers, \(f_{\textit{mol}}(Q_a) \approx \kappa_a Q_a \approx \kappa_X Q_X\), where $X$ is one of the symmetrically inequivalent chemical environments on the molecule, i.e.\ atomic species, and \(\kappa_{a/X}\) are constants. 
Further simplification is possible by reformulating \(f_{\textit{mol}}\) in terms of the total charge transfer into the molecule \(Q\). This is done by assuming that the fraction of the total charge which is given to atomic species $X$, \(\frac{Q_X}{Q}\), is a linear function of the total charge. More precisely, \begin{equation} f_{\textit{mol}}(Q_X) \approx \kappa_X \frac{Q_X}{Q} Q\approx \left(\xi + \zeta Q\right)Q = \xi Q + \zeta Q^2\label{eqn:fmol} \end{equation} where \(\xi > 0\). This implies that the core-levels should shift to \emph{weaker} binding energies when charge is transferred into the molecule. Further, if $X$ is less electronegative than the other atomic species in the molecule, i.e.\ C relative to N or F, then \(\zeta > 0\). On the other hand, the opposite would be true for more electronegative atomic species, such as N or F relative to C. Substituting \ref{eqn:gscr} and \ref{eqn:fmol} into \ref{eqn:ECLS}, we obtain a simple expression for the core-level shift, \begin{equation} \Delta E \approx \xi Q + \zeta Q^2 - V\label{eqn:DEModel} \end{equation} in terms of the molecule's charge $Q$, the effective potential from the external environment $V$, and two molecule dependent parameters $\xi$ and $\zeta$. These two parameters may be obtained by performing core-level shift calculations for a charged pure monolayer, as depicted schematically in Figure~\ref{fgr:initialstatemethod}. In this case, the screening from the external environment does not play a role (\(g(V) \approx 0\)), so that the core-level shifts are only dependent on the local chemical environment of the molecule, i.e.\ $\xi$, $\zeta$, and $Q$. Since the charge of the molecule is specified within the core-level calculation, by performing a few such calculations for various chargings $Q$, one quickly obtains a good estimate for $\xi$ and $\zeta$.
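The extraction of $\xi$ and $\zeta$ from a handful of charged-monolayer calculations amounts to a small least-squares problem. The sketch below uses synthetic numbers in place of actual DFT shifts, purely for illustration.

```python
import numpy as np

def fit_xi_zeta(Q, dE):
    """Least-squares fit of core-level shifts dE computed for a pure
    monolayer in vacuum at several imposed charges Q to the model
    dE = xi*Q + zeta*Q**2 (no constant term, since dE(0) = 0 by
    construction). Returns (xi, zeta)."""
    A = np.column_stack([Q, Q**2])
    coef, *_ = np.linalg.lstsq(A, dE, rcond=None)
    return float(coef[0]), float(coef[1])
```

Two charged calculations already determine the fit; additional chargings overdetermine it and provide a consistency check on the quadratic model.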
Once one has obtained $\xi$ and $\zeta$, since $f_{\textit{mol}}$ is assumed to be dependent only on the charge of the molecule, one simply needs to calculate the molecule's charge via a Bader analysis in the external environment. On the other hand, for an estimate of the external potential $V$, one should separately calculate the effective potential from the external environment, e.g.\ a clean metal substrate or only the surrounding molecules in a mixture. This is depicted schematically in Figure~\ref{fgr:initialstatemethod} for a pure PFP monolayer on three layer Ag(111). In this case, $V$ is the effective potential from a clean three layer Ag(111) slab at the height $h$ of the PFP monolayer above the surface. Altogether, this allows one to model the core-level shifts relative to the neutral molecule using \ref{eqn:DEModel}. Likewise, the total charge transfer \(Q\) into the molecule may be modeled using the core-level shift relative to the neutral molecule in vacuum, \(E - E_{Q=0}\), by \begin{equation} Q \approx -\frac{\xi}{2\zeta} + \sqrt{\frac{\xi^2}{4\zeta^2} + \frac{E - E_{Q=0} + V}{\zeta}}\label{eqn:Qmodel} \end{equation} This result allows us to formulate the following effective potential approach for describing charge transfer in donor--acceptor/metal systems based on core-level shifts. \begin{enumerate} \item Perform a DFT structural relaxation of the neutral pure monolayer in vacuum. \item Calculate core-level binding energies for the neutral pure monolayer in vacuum $E_{Q=0}$. \item Calculate core-level shifts for pure monolayers in vacuum with various chargings $Q$, $\Delta E(Q)$. \item Obtain $\xi$ and $\zeta$ from a quadratic fit to $\Delta E(Q)$. \item Measure or calculate heights $h$ or positions $\mathbf{r}_a$ of the monolayer on the metal substrate or in a mixed monolayer. \item Calculate the external potential $V$ from the metal substrate or mixed monolayer at the heights or positions of the atomic species. 
\item Measure or calculate core-level binding energies \(E\) for the atomic species on the metal substrate or in the mixed monolayer. \item Using $\xi$, $\zeta$, $V$, $E$, and $E_{Q=0}$ in \ref{eqn:Qmodel}, estimate the charge of the molecule $Q$. \end{enumerate} Thus, in order to determine the total charge of a molecule $Q$, environment 1 should refer to the neutral molecule, e.g.\ a pure monolayer in vacuum, or a multilayer crystal on a surface. For this reason, we first consider the core-level shifts between a monolayer of PFP in vacuum and adsorbed on the three layer Ag(111) surface. In this case, though, noticeable changes in core-hole screening are expected. To assess how much the core-hole screening may contribute to the core-level shifts, we have used both the initial state method, where core-hole screening is neglected, and three types of final state methods, where the core-hole is directly modeled. Specifically, we modeled the final state within the full core-hole, half core-hole, and screened core-hole approximations, which are described in Appendix~\plainref{sec:finalstatemethods}. Although each method has its advantages and disadvantages, overall, we find the initial state method provides the best balance of accuracy with computational costs, for the systems we consider herein. \subsection{Initial State Method}\label{sec:initialstatemethod} In the initial state method, depicted schematically in Figure~\ref{fgr:initialstatemethod}, the binding energy is described using the Kohn-Sham eigenenergies for the given core-level relative to the vacuum energy. This requires an additional DFT calculation using the relaxed geometry, which includes the core 1s levels in the valence for all relevant atoms, i.e.\ C, F, and N. In this way, an all-electron calculation is performed for the entire molecule, within the PAW method, without requiring a finer grid spacing, e.g.\ \(\sim 0.05\)~\AA. 
The binding energy of atomic species $X$'s 1s level \(E\) is then modeled by \begin{equation} E \approx - \varepsilon_{X\mathrm{1s}} + E_{\textit{vac}} \end{equation} where \(\varepsilon_{X\mathrm{1s}}\) is the energy of a local maximum in the total density of states due to atomic species $X$'s 1s levels, and \(E_{\textit{vac}}\) is the vacuum energy. This is given by the maximum in the surface averaged effective potential, \begin{equation} E_{\textit{vac}} = \max_z \iint_{\mathcal{A}} \frac{dx\, dy}{\mathcal{A}} V(x,y,z) \approx \lim_{h\rightarrow\infty} V(h) \end{equation} where \(\mathcal{A}\) is the area of the monolayer in the unit cell, $h$ is the height above the surface, and $V$ is the effective Kohn-Sham potential. To summarize, an initial state calculation of the binding energy $E$ for the 1s core level of atomic species $X$ involves the following procedure: \begin{enumerate} \item Perform a DFT structural relaxation of the molecular system. \item Recalculate using all-electron PAW pseudopotentials for the molecule. \item Use the surface averaged effective potential to calculate the vacuum level $E_{\textit{vac}}$. \item Obtain the local maximum in the DOS for atomic species $X$'s core level $\varepsilon_{X\mathrm{1s}}$. \item Calculate the initial state binding energies for atomic species $X$ using $E = -\varepsilon_{X\mathrm{1s}} + E_{\textit{vac}}$. \end{enumerate} Although the Kohn-Sham eigenenergies underestimate the experimental binding energies by \(\sim 10\%\), the \emph{shifts} in the binding energies are quite accurately described due to error cancellation. This method also has the advantage of calculating the core-level shifts for all atoms in the molecule simultaneously. For complex systems such as PFP+CuPc with \(\sim 100\) C atoms, this results in a computational advantage of two orders of magnitude over final state methods, where separate calculations are required for each chemical environment, i.e.\ atomic species.
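Taken together, the effective potential model of Section~\ref{sec:theoreticalmodel} can be inverted numerically. The sketch below solves \ref{eqn:Qmodel} for $Q$ given hypothetical values of $\xi$, $\zeta$, the measured shift, and the external potential; these numbers are illustrative and not the fitted parameters of this work.

```python
import numpy as np

def charge_from_shift(xi, zeta, dE, V):
    """Estimate the molecular charge Q (in units of e) from a measured
    core-level shift dE = E - E_{Q=0} (eV) relative to the neutral
    molecule and the external potential V (eV), by inverting
    dE = xi*Q + zeta*Q**2 - V.  The root chosen is the one continuous
    with Q = 0 as dE + V -> 0."""
    return -xi / (2 * zeta) + np.sqrt(xi**2 / (4 * zeta**2) + (dE + V) / zeta)
```

Round-tripping a known charge through the forward model and back recovers it, which is a useful sanity check when applying the procedure to measured shifts.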
\section{RESULTS AND DISCUSSION}\label{sec:resultsanddiscussion} \subsection{Charge of PFP on Ag(111)} In order to estimate the charge transfer to the molecules we have used the Bader partitioning scheme~\cite{bader}. This method only requires the DFT all-electron density, with the partitioning of the density determined according to its zero-flux surfaces. \begin{figure}[!ht] \includegraphics[width=\columnwidth]{Figure3} \caption{Charge $Q$ in $e$ of PFP in a pure monolayer ({$\medbullet$}) and a 1:1 mixed PFP+CuPc monolayer ({\color{red}{$\blacksquare$}}) as a function of the number of Ag(111) atomic layers $N$, calculated via a Bader analysis.} \label{fgr:charge_vs_layers} {\color{JPCCBlue}{\rule{\columnwidth}{1pt}}} \end{figure} Figure~\ref{fgr:charge_vs_layers} shows the calculated charge $Q$ of PFP in a pure monolayer as a function of the number of layers $N$ of the Ag(111) substrate. We find that for $N = 3$ the charge transfer to the pure PFP layer is already converged to the limit of $Q \approx -0.75$~$e$. This suggests a three layer Ag(111) slab should provide a good description of charge transfer from the infinite slab, at a reasonable computational cost. Since we are mostly interested in an accurate description of charge transfer within our donor--acceptor/metal systems, we may employ a three layer Ag(111) slab model for describing the pure CuPc and the mixed 1:1 PFP+CuPc monolayers. For the mixed 1:1 PFP+CuPc monolayer, the calculated charge of PFP increases monotonically with the number of layers $N$ of the Ag(111) substrate, as shown in Figure~\ref{fgr:charge_vs_layers}. However, the charge transfer between CuPc and PFP remains quite small, as is seen from comparing the charge of PFP in the pure monolayer and 1:1 mixture with CuPc. This suggests that the effect of the Ag substrate on charge transfer between PFP and CuPc in their mixtures is quite small, and most probably within the accuracy of the calculation. 
For this reason, calculations of PFP and CuPc pure and mixed monolayers in vacuum may suffice to describe XPS measurements on the Ag(111) surface. \subsection{Pure PFP and CuPc Monolayers} As discussed in Section~\ref{sec:theoreticalmodel}, to determine a molecule's charge $Q$ from a core-level shift requires that the initial state be neutral. For this reason we have calculated the core-level shifts between a PFP monolayer in vacuum and adsorbed on a three layer Ag(111) surface with the initial state method, as depicted in Figure~\ref{fgr:initialstatemethod}. To assess the reliability of these results we have also compared them with the XPS core-level shifts between multilayer PFP and a monolayer of PFP on Ag(111). This is quite reasonable, since the neutral multilayer of PFP on Ag(111) should have quite similar C1s binding energies to the neutral PFP monolayer in vacuum. In Figure~\ref{fgr:MultiMono} we plot the measured XPS spectra for PFP in a multilayer ($N \gtrsim 4$) and monolayer on Ag(111). The three different chemical environments in PFP, namely C$_{\mathrm{C}}$, C$_{\mathrm{F}}$, and F, are also depicted schematically in Figure~\ref{fgr:MultiMono}. Specifically, for C$_{\mathrm{C}}$ we measure a core-level shift of \(\Delta E \approx -0.24\)~eV. Small shifts to weaker binding energy such as these are often found when moving from a multilayer to a monolayer on a metal substrate, and are typically attributed to the stronger core-hole screening by the surface. However, we will show that there is also significant charge transfer and screening of the initial state by the metal substrate in such systems. Figure~\ref{fgr:initialstatemethod} compares the initial state binding energies, density of states, and charge transfer for a PFP monolayer in vacuum, charged, and adsorbed on a three layer Ag(111) slab. As seen in Figure~\ref{fgr:charge_vs_layers}, the Ag slab donates a significant amount of charge to the PFP (\(\sim -0.75\)~$e$) upon adsorption. 
Charging PFP in vacuum by the same amount yields a significant C1s core-level shift to \emph{weaker} binding energies (\(\sim +1.5\)~eV) as expected. On the other hand, at the height $h$ of PFP above the clean Ag(111) surface, the external potential shown in Figure~\ref{fgr:initialstatemethod} is strongly binding (\(\sim -1.8\)~eV), shifting the C1s core-level to \emph{stronger} binding energies. These two competing effects cancel, yielding a small overall core-level shift of \(\Delta E \approx 0.25\)~eV. Although this overestimates the XPS core-level shift by about 0.5~eV, this may be attributed to the substantial difference in core-hole screening between the PFP multilayer and monolayer on Ag(111), which is not accounted for in the initial state method. On the other hand, for core-level shifts between pure and mixed monolayers, the differences in core-hole screening should be quite small, and may be neglected. \begin{figure*} \includegraphics[width=1.5\columnwidth]{Figure4} \caption{XPS spectra for multi-layer PFP ({\footnotesize{$\square$}}) and a monolayer of PFP ({\color{red}{{\footnotesize{$\Diamondblack$}}}}) on Ag(111) of the F (left), C$_{\mathrm{F}}$ and C$_{\mathrm{C}}$ (right) atomic species, as shown schematically. C and F atoms are depicted by gray and green balls, respectively. }\label{fgr:MultiMono} \end{figure*} \begin{figure*}[!t] \centering \includegraphics[width=0.65\textwidth]{Figure5} \caption{Energy $E$ in eV versus charge $Q$ per molecule in $e$ for (a) C$_{\mathrm{C}}$ and (b) C$_{\mathrm{F}}$ atomic species in PFP and (c) C$_{\mathrm{H}}$ and (d) C$_{\mathrm{N}}$ atomic species in CuPc of the C1s level \(-\varepsilon_\mathrm{C1s}\) in vacuum ($\medcirc$), on an $N$ layer Ag(111) surface ({\color{red}{$\medbullet$}}) and after subtracting the change in external potential due to the Ag(111) surface \(-\varepsilon_\mathrm{C1s} + V\) ({\color{blue}{$\Diamondblack$}}). 
All energies are taken relative to the binding energy of the neutral molecule $E_{Q=0}$. A quadratic fit to the pure monolayer C1s binding energies in vacuum (\textbf{------}) is also given. } \label{fgr:C1sEnergiesPure} {\color{JPCCBlue}{\rule{\textwidth}{1pt}}} \end{figure*} \begin{table}[!t] \caption{\textrm{ Fitting parameters to the C$_{\textrm{C}}$, C$_{\textrm{F}}$, C$_{\textrm{H}}$, and C$_{\textrm{N}}$ 1s binding energies $\bm{-\varepsilon_\textrm{C1s}-E_{\bm{Q=0}} \approx f_{\textit{mol}}(Q) \approx \xi Q + \zeta Q^2}$ in eV for PFP and CuPc pure monolayers in vacuum, where $Q$ is the charge per molecule in \textit{e}}}\label{tbl:ModelParameters} \begin{tabular}{ccc} \multicolumn{3}{>{\columncolor[gray]{0.9}}c}{}\\[-3mm] \rowcolor[gray]{0.9}level & $\xi$ (eV/$e$) & $\zeta$ (eV/$e^2$) \\[1mm] C$_{\textrm{C}}$1s & 2.328 & 0.285\\ C$_{\textrm{F}}$1s & 2.382 & 0.321\\ C$_{\textrm{H}}$1s & 1.341 & 0.125\\ C$_{\textrm{N}}$1s & 1.803 & 0.118\\ \end{tabular} \end{table} \begin{table}[!t] \caption{\textrm{ Average effective potential \textit{V} relative to the vacuum level \textit{E}$_{\textit{vac}}$ in eV of a clean Ag(111) \textit{N} layer surface at the height of adsorbed PFP and CuPc pure and 1:1 mixed PFP+CuPc monolayers}}\label{tbl:VextPure} \begin{tabular}{cccc} \multicolumn{4}{>{\columncolor[gray]{0.9}}c}{ }\\[-3mm] \rowcolor[gray]{0.9}Ag(111)&\multicolumn{3}{>{\columncolor[gray]{.9}}c}{$V$ (eV)}\\ \rowcolor[gray]{0.9}$N$ & \multicolumn{1}{>{\columncolor[gray]{.9}}c}{PFP} & \multicolumn{1}{>{\columncolor[gray]{.9}}c}{CuPc} & \multicolumn{1}{>{\columncolor[gray]{.9}}c}{PFP+CuPc}\\[1mm] 1 & -1.787 & -1.646 & -1.773\\ 2 & -2.033 & -2.033 & -1.966\\ 3 & -1.874 & -1.959 & -1.844\\ 4 & -1.993 & --- & ---\\ 5 & -1.890 & --- & ---\\ 6 & -1.905 & --- & ---\\ \end{tabular} {\color{JPCCBlue}{\rule{\columnwidth}{1pt}}} \end{table} To test the reliability of the effective potential model for core-level shifts given in \ref{eqn:DEModel}, we must first obtain a fit for
$f_{\textit{mol}}(Q)$ while keeping the external environment, i.e.~the effective potential, constant. This is accomplished by calculating core-level shifts for the monolayer in vacuum when applying an external charge $Q$ through an appropriate shift of the Fermi level. Figure~\ref{fgr:C1sEnergiesPure} shows the calculated C1s core-level energies for a PFP monolayer in vacuum as a function of the applied charge $Q$. We find separate local maxima in the DOS \(\varepsilon_{\textrm{C1s}}\), related to the different C bonding environments or atomic species in the system, namely C$_{\textrm{C}}$ and C$_{\textrm{F}}$ as depicted schematically in Figure~\ref{fgr:MultiMono} for PFP, and C$_{\textrm{H}}$ and C$_{\textrm{N}}$ for CuPc. For each atomic species, we find the core-level shifts are described quantitatively by \ref{eqn:fmol}, with fitting parameters $\xi$ and $\zeta$ given in Table~\ref{tbl:ModelParameters}. Taken together, these results show that the core-level shifts are indeed linearly dependent on the charge transfer into an atom, as assumed in Section~\ref{sec:theoreticalmodel}. \begin{figure*}[!th] \centering \includegraphics[width=0.65\textwidth]{Figure6} \caption{Energy $E$ in eV versus charge $Q$ of PFP in $e$ for (a) C$_{\textrm{C}}$ and (b) C$_{\textrm{F}}$ atomic species of the C1s level \(-\varepsilon_\textrm{C1s}\) in a 1:1 mixture with CuPc in vacuum ({\color{red}{$\square$}}), on an $N$ layer Ag(111) surface ({\color{red}{$\blacksquare$}}), and after subtracting the change in external potential \(-\varepsilon_\textrm{C1s} + V\) due to the other molecules in vacuum ({\color{blue}{$\triangle$}}) and due to the $N$ layer Ag(111) surface ({\color{Green}{$\blacktriangle$}}). The binding energies of the pure PFP monolayer adsorbed on an $N$ layer Ag(111) surface ({\color{red}{$\medbullet$}}) are provided for comparison. All energies are taken relative to the binding energy of the neutral molecule $E_{Q=0}$. 
A quadratic fit to the pure monolayer C1s binding energies in vacuum (\textbf{------}) is also given. The mixed 1:1 CuPc+* structure, where the average external potential ${V} \approx -0.307$~eV is calculated at the positions of the C atoms in PFP, is depicted schematically above. C, N, H, and Cu atoms are depicted by gray, blue, white and orange balls, respectively.} \label{fgr:C1sEnergiesMix} {\color{JPCCBlue}{\rule{\textwidth}{1pt}}} \end{figure*} \begin{figure*}[!th] \includegraphics[width=1.5\columnwidth]{Figure7} \caption{The calculated density of states (DOS) (lines) and measured XPS spectra (symbols) for the F1s, N1s, and C1s levels versus binding energy for pure monolayers of PFP ({\color{red}{------}},{\color{red}{$\Diamondblack$}}), CuPc ({\color{blue}{------}},{\color{blue}{$\blacksquare$}}), and a 1:1 mixture of PFP+CuPc (------,$\medbullet$). The PFP and CuPc structures, along with the four different C atomic species, C$_{\textrm{C}}$, C$_{\textrm{F}}$, C$_{\textrm{H}}$, and C$_{\textrm{N}}$, consisting of six and four symmetrically inequivalent C atoms in PFP and CuPc, respectively, are shown above. C, F, N, H, and Cu atoms are depicted by gray, green, blue, white and orange balls, respectively.} \label{fgr:spectraVac} {\color{JPCCBlue}{\rule{\textwidth}{1pt}}} \end{figure*} Using our calculated fit to $f_{\textit{mol}}$, we may now test how well the change in screening of the core-level $g$ may be approximated by the change in the effective potential, as given in \ref{eqn:gscr} and \ref{eqn:DEModel}. This may be accomplished by using a change in the molecule's environment, e.g.\ adsorption on an $N$ layer Ag(111) surface, to charge the molecule. In Figure~\ref{fgr:C1sEnergiesPure} (a) and (b) we compare the core-level energies for a pure PFP monolayer adsorbed on $N$ layer Ag(111) surfaces, where \(N=1,2,3,\ldots,6\). 
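The fit of $f_{\textit{mol}}(Q) \approx \xi Q + \zeta Q^2$ to the vacuum binding-energy shifts is an ordinary linear least-squares problem in the two coefficients, since the shift vanishes for the neutral molecule by construction. A hedged sketch (our own illustration, not the production analysis) using synthetic data generated from the C$_{\textrm{C}}$1s parameters of Table~\ref{tbl:ModelParameters}:

```python
import numpy as np

def fit_fmol(Q, dE):
    """Least-squares fit of dE ~ xi*Q + zeta*Q**2 with no constant
    term (dE = 0 at Q = 0 for the neutral molecule by construction)."""
    A = np.column_stack([Q, Q**2])
    (xi, zeta), *_ = np.linalg.lstsq(A, dE, rcond=None)
    return xi, zeta

# Synthetic shifts generated from the fitted C_C 1s parameters.
Q = np.linspace(-1.0, 1.0, 9)
dE = 2.328 * Q + 0.285 * Q**2
xi, zeta = fit_fmol(Q, dE)
```

In practice, \texttt{Q} would be the applied charges per molecule and \texttt{dE} the corresponding calculated shifts $-\varepsilon_{\textrm{C1s}} - E_{Q=0}$ for the monolayer in vacuum.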
The variation of the charge transfer from the surface to the molecule with number of layers, as shown in Figure~\ref{fgr:charge_vs_layers}, means that these calculations provide a further test of the reliability of the model for \(\Delta E\) given in \ref{eqn:DEModel}. Little correlation is initially obvious between the C1s core-levels \(\varepsilon_{\textrm{C1s}}\) and the charge transfer \(Q\) for the pure layers on Ag(111). However, upon removing the effect of screening, i.e.\ plotting \(-\varepsilon_{\textrm{C1s}} + V \approx f_{\textit{mol}}(Q)\), we recover the charge transfer dependence previously observed for the pure PFP layer in vacuum. On the other hand, Figure~\ref{fgr:C1sEnergiesPure} (c) and (d) show weaker agreement when the same procedure is applied to CuPc on $N$ layer Ag(111), although the correlation with the charge transfer dependence \(f_{\textit{mol}}\) is still obtained up to a constant shift. This suggests other contributions are present in the core-electron screening for CuPc. We attribute this to the greater screening inside the CuPc molecule and stronger interaction with the surface, due to metallic Cu---Ag chemical bonds. Taken together, these results validate three major assumptions made in Section~\ref{sec:theoreticalmodel}. Namely, that (1) $f_{\textit{mol}}$ is linearly dependent on the charge of an atom, (2) $f_{\textit{mol}}$ is independent of the external environment, and (3) $g$ may be reasonably approximated by the change in effective potential of the external environment for PFP, while for CuPc screening within the molecule and chemical interaction with the substrate are also important. It should also be noted that the charge of the molecules $Q$ is directly specified for calculations of the monolayer in vacuum, while on the Ag(111) surface $Q$ is obtained from a Bader analysis. This agreement suggests that a Bader analysis provides an excellent description of the charge transfer for these systems. 
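In practice, extracting the charge from a binding-energy shift via \ref{eqn:Qmodel} amounts to inverting the quadratic $f_{\textit{mol}}$, keeping the root that vanishes with the shift. A minimal sketch (assuming $\zeta > 0$ and a non-negative discriminant, with the C$_{\textrm{C}}$1s parameters of Table~\ref{tbl:ModelParameters}):

```python
import numpy as np

def charge_from_shift(shift, xi, zeta):
    """Solve zeta*Q**2 + xi*Q = shift for Q, taking the root with
    Q -> 0 as shift -> 0 (assumes zeta > 0, discriminant >= 0)."""
    return -xi / (2 * zeta) + np.sqrt(xi**2 / (4 * zeta**2) + shift / zeta)

# Round trip: applying f_mol and then inverting recovers the charge.
xi, zeta = 2.328, 0.285
Q = -0.75
shift = xi * Q + zeta * Q**2
```

Here \texttt{shift} stands for the screening-corrected quantity $-\varepsilon_{\textrm{C1s}} - E_{Q=0} + V$.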
However, as discussed in Appendix~\plainref{sec:comparison}, there is a significant difference between the core-hole screening of the PFP monolayer in vacuum and on the Ag(111) surface. This suggests that the calculated initial state core-level shifts should be shifted by \(\sim -0.4\)~eV to describe the XPS measurements. To avoid such a discrepancy, and provide a better comparison between the calculated initial state core-level shifts and XPS measurements, we shall next compare pure and mixed monolayers of PFP and CuPc on Ag(111) in Section~\ref{sec:mixed}. \subsection{Mixed 1:1 PFP+CuPc Monolayers}\label{sec:mixed} As a further test of the effective potential model, we next calculate core-level shifts upon charging a 1:1 mixture of PFP and CuPc. As shown in Figure~\ref{fgr:C1sEnergiesMix}, the mixed 1:1 PFP+CuPc monolayer in vacuum follows the $f_{\textit{mol}}(Q)$ relation up to a constant shift. Overall, for PFP the core-level is shifted to higher binding energies ($\sim 0.3$ eV), while for CuPc it is shifted to lower binding energies ($\sim -0.2$ eV) when the two layers are mixed. This is in near quantitative agreement with the experimental results on Ag(111), as shown in Figure~\ref{fgr:spectraVac}. To estimate the change in external potential between the pure and mixed PFP+CuPc monolayers, we have performed separate calculations of the relaxed mixed layer geometry in vacuum with PFP removed (CuPc+*) and with CuPc removed (PFP+*). The average effective potential at the coordinates of the C atoms in the empty sites is then calculated relative to the vacuum energy, \(V = V(\mathbf{r}_a) - E_{\textit{vac}}\), as depicted schematically in Figure~\ref{fgr:C1sEnergiesMix}. For the CuPc+* layer we obtain a change in external potential of $\sim-0.307$~eV, which brings the core-level shifts for PFP onto the pure layer values, as seen in Figure~\ref{fgr:C1sEnergiesMix}. This suggests that for PFP both \ref{eqn:gscr} for the screening and \ref{eqn:fmol} are valid. 
Further, it shows that the charge transfer dependent portion of $\Delta E$, i.e.\ $f_{\textit{mol}}(Q)$, is independent of the external environment, and defined by the molecular environment alone. On the other hand, for the PFP+* layer we obtain a negligible external potential shift, so that the core-level shift is overestimated by the model of \ref{eqn:DEModel}. However, this discrepancy may again be explained by greater screening in the CuPc molecule due to the Cu metal atom. For the mixed 1:1 PFP+CuPc monolayer on Ag(111), we have assumed the external potentials from CuPc+* and Ag(111) are additively separable, so that \(V \approx V^{\textrm{Ag(111)}} + V^{\textrm{CuPc+*}}\). Based on the semi-quantitative agreement shown in Figure~\ref{fgr:C1sEnergiesMix} between $f_{\textit{mol}}(Q)$ and $-\varepsilon_{\textrm{C1s}} + V$, this does appear to be the case. Figure~\ref{fgr:C1sEnergiesMix} also shows the core-level shifts between the pure PFP monolayer and the 1:1 mixture with CuPc on an $N=1,2,3$ layer Ag(111) slab. By comparing with the charge transfer for the same systems, shown in Figure~\ref{fgr:charge_vs_layers}, we find overall the core-levels shift to weaker or stronger binding energy when charge is transferred out of or into PFP, respectively. This means the core-level shifts are strongly dependent on the number of layers in the Ag(111) surface. Finally, in Figure~\ref{fgr:spectraVac} we directly compare the experimental XPS spectra with the total DOS for monolayers of PFP, CuPc and the mixed 1:1 PFP+CuPc monolayers in vacuum. We find that by shifting the C$_{\textrm{F}}$1s and C$_{\textrm{H}}$1s peaks to match the pure monolayer experimental peaks for PFP and CuPc, respectively, we describe the experimental core-level shifts and relative binding energies for the pure and mixed monolayers near-quantitatively. 
This suggests that the inclusion of the surface, although providing a significant charge transfer, amounts to a nearly constant shift, so that calculations for the monolayers in vacuum remain an effective means of describing core-level shifts. On the other hand, the requirement of separate shifts for the CuPc and PFP monolayers suggests that the details of the PFP---CuPc interactions in the mixed layer are not completely captured at the LDA level. Further calculations including long range van der Waals type interactions may be necessary to describe both the PFP and CuPc binding energies with a single energy shift. However, determining the charge transfer into the molecules based on the XPS core-level shifts only requires an accurate description of the effective potential, as we will show in Section~\ref{sec:chargemodel}. \subsection{Charge Transfer Model}\label{sec:chargemodel} \begin{figure}[!t] \centering \includegraphics[width=0.8\columnwidth]{Figure8} \caption{Comparison of charge $Q$ in $e$ from an effective potential model \(Q^{\textrm{Model}}\) and from DFT calculations \(Q^\textrm{DFT}\) for PFP in vacuum ($\medcirc$), on an $N$ layer Ag(111) surface ({\color{blue}{$\Diamondblack$}}), in a 1:1 mixture with CuPc in vacuum ({\color{red}{$\triangle$}}), and on an $N$ layer Ag(111) surface ({\color{Green}{$\blacktriangle$}}). The standard deviations for PFP and PFP+CuPc on Ag(111), $\sigma \approx \pm 0.09$~$e$, are shown as regions of gray. } \label{fgr:QModel} {\color{JPCCBlue}{\rule{\columnwidth}{1pt}}} \end{figure} To determine the reliability of the effective potential model for describing charge transfer based on C1s binding energies, we next compare the calculated charge transfer $Q$ with that obtained from \ref{eqn:Qmodel}. In Figure~\ref{fgr:QModel} we plot the calculated and model charge transfer, $Q^{\textrm{DFT}}$ and $Q^{\textrm{Model}}$ respectively, for a pure PFP monolayer and a 1:1 mixed PFP+CuPc monolayer, both in vacuum and on Ag(111).
From \ref{eqn:Qmodel}, we find for the initial state model that \begin{equation} Q^{\textrm{Model}} \equiv -\frac{\xi}{2\zeta} + \sqrt{\frac{\xi^2}{4\zeta^2} + \frac{-\varepsilon_{\textrm{C1s}} - E_{Q=0} + V}{\zeta}} \end{equation} where the model parameters \(\xi\), \(\zeta\), and \(V\) are provided in Tables~\ref{tbl:ModelParameters}, \ref{tbl:VextPure}, and Figure \ref{fgr:C1sEnergiesMix}. We find that for PFP the charge transfer into the molecule is near-quantitatively described by the model. Specifically, for pure PFP and its 1:1 mixture with CuPc on Ag(111), the standard deviation between $Q^{\textrm{Model}}$ and $Q^{\textrm{DFT}}$ is $\sigma \approx \pm 0.09$~$e$, as shown in Figure~\ref{fgr:QModel}. These results strongly support the potential use of the core-level shift relative to a molecule in vacuum to describe the charge transfer upon adsorption and molecular mixing on a metal surface. \begin{table}[!t] \caption{\textrm{ Charge \textit{Q}$^{\textrm{Model}}$ in \textit{e} of PFP in a pure and mixed 1:1 PFP+CuPc monolayer on Ag(111) from an effective potential model using XSW heights \textit{h} in \AA$^{\textit{a}}$ to calculate the effective potential for Ag(111) \textit{V} relative to the vacuum level \textit{E}$_{\textit{vac}}$ in eV, combined with the XPS C1s core-level shifts $\bm{\Delta E}$ in eV$^{\textit{b}}$}}\label{tbl:Qexp} \begin{tabular}{c@{}cr@{.}lr@{.}lr@{.}lr@{.}l} \multicolumn{10}{>{\columncolor[gray]{0.9}}c}{ }\\[-3mm] \rowcolor[gray]{0.9}&&\multicolumn{4}{>{\columncolor[gray]{0.9}}c}{PFP} & \multicolumn{4}{>{\columncolor[gray]{0.9}}c}{PFP+CuPc}\\ \rowcolor[gray]{0.9}&&\multicolumn{2}{>{\columncolor[gray]{.9}}c}{C$_{\textrm{C}}$1s}&\multicolumn{2}{>{\columncolor[gray]{.9}}c}{C$_{\textrm{F}}$1s}&\multicolumn{2}{>{\columncolor[gray]{.9}}c}{C$_{\textrm{C}}$1s}&\multicolumn{2}{>{\columncolor[gray]{.9}}c}{C$_{\textrm{F}}$1s}\\[1mm] $h$ &(\AA) & 3&16 & 3&16 & 3&28 & 3&51\\ $V$ &(eV) & -1&62 & -1&62 & -1&51 & -1&30\\ $\Delta E$ &(eV) & -0&26 & 
-0&24 & 0&00 & 0&07\\ $Q^{\textrm{Model}}$ &($e$) & -0&91 & -0&89 & -0&87 & -0&71\\ \end{tabular} \begin{flushleft} $^a$XSW heights for C$_{\textrm{C}}$ and C$_{\textrm{F}}$ in PFP and PFP+CuPc on Ag(111) taken from refs.~\citenum{Duhm2010PFPAg111}~and~\citenum{XWSGoiri}, respectively. $^b$XPS core-level shifts taken relative to multilayer PFP. \end{flushleft} {\color{JPCCBlue}{\rule{\columnwidth}{1pt}}} \end{table} In Table~\ref{tbl:Qexp} we show the results of applying the effective potential model to estimate the charge transfer to PFP based on the experimental core-level shifts. Here we have used experimental x-ray standing wave (XSW) measurements to determine the heights $h$ for C$_{\textrm{C}}$ and C$_{\textrm{F}}$ atomic species in pure PFP \cite{Duhm2010PFPAg111} and mixed PFP+CuPc \cite{XWSGoiri} monolayers on Ag(111). Based on this data, we then use a DFT calculation for a 13 layer Ag(111) slab to determine the effective potential at a height $h$ above the surface $V(h)$. Combining this with the XPS core-level shifts, effective potential for CuPc+* of $-0.307$~eV, and the fitting parameters for $f_{\textit{mol}}$ provided in Table~\ref{tbl:ModelParameters}, we obtain from \ref{eqn:Qmodel} the charge of PFP $Q^{\textrm{Model}}$, given in Table~\ref{tbl:Qexp}. We find a charge of about \(-0.9\)~$e$ is donated to PFP by the Ag(111) surface, in both the pure and mixed monolayers. This suggests there is very little net charge transfer to PFP when going from the pure monolayer to a 1:1 mixture with CuPc in the experiment. This explains why the calculations for the monolayers in vacuum describe the XPS core-level shifts so well in Figure~\ref{fgr:spectraVac}. It should be noted these results most probably overestimate the charge transfer when going from the multilayer to the monolayer of PFP, as the XPS core-level shifts also include differences in the strength of the core-hole screening. 
As this was found to be about 0.4~eV, as discussed in Appendix \plainref{sec:comparison}, we may expect the actual charge transfer to be closer to $-0.7$~$e$, in agreement with the DFT results shown in Figure~\ref{fgr:charge_vs_layers}. In any case, by combining the results of XPS and XSW measurements with DFT calculations, we estimate that there is a significant charge transfer to PFP upon adsorption on a Ag(111) surface, which is basically unchanged by mixing with a CuPc donor molecule. \begin{figure}[!t] \includegraphics[width=0.75\columnwidth]{Figure9} \caption{(a) Energy $E$ of the C$_{\textrm{C}}$1s ({\color{red}{$\medbullet$}}) and C$_{\textrm{F}}$1s ({\color{black}{$\blacksquare$}}) levels in eV and (b) charge $Q$ in $e$ ($\medbullet$) of a pure PFP monolayer versus height $h$ in \AA\ above a three layer Ag(111) surface. Energies are taken relative to the binding energy of the neutral molecule $E_{Q=0}$ in vacuum. Exponential fits are provided as guides to the eye.} \label{fgr:EC1sQvsh} \end{figure} \begin{table}[!h] \caption{\textrm{ Calculated heights \textit{h} in \AA\ of each type of atomic species in the PFP and CuPc pure and 1:1 mixed PFP+CuPc monolayers above Ag(111).}}\label{tbl:heights} \begin{tabular}{cccc} \multicolumn{4}{>{\columncolor[gray]{0.9}}c}{ }\\[-3mm] \rowcolor[gray]{0.9} atomic & \multicolumn{3}{>{\columncolor[gray]{.9}}c}{$h$ (\AA)}\\ \rowcolor[gray]{0.9} species & \multicolumn{1}{>{\columncolor[gray]{.9}}c}{PFP} & \multicolumn{1}{>{\columncolor[gray]{.9}}c}{CuPc} & \multicolumn{1}{>{\columncolor[gray]{.9}}c}{PFP+CuPc}\\[1mm] C$_{\textrm{C}}$ & 2.82 & --- & 2.86 \\ C$_{\textrm{F}}$ & 2.80 & --- & 2.82 \\ F & 2.73 & --- & 2.72 \\ C$_{\textrm{H}}$ & --- & 2.71 & 2.94 \\ C$_{\textrm{N}}$ & --- & 2.80 & 2.98 \\ H & --- & 2.71 & 2.90 \\ N & --- & 2.82 & 3.01 \\ Cu & --- & 2.73 & 2.93 \\ \end{tabular} {\color{JPCCBlue}{\rule{\columnwidth}{1pt}}} \end{table} It should be noted, however, that LDA calculations typically underestimate heights 
of weakly adsorbed molecular monolayers on metal surfaces. This is clearly seen by comparing the heights for PFP and CuPc pure and 1:1 mixed monolayers on Ag(111) from XSW measurements \cite{Duhm2010PFPAg111,XWSGoiri} with LDA results, as shown in Tables~\ref{tbl:Qexp} and \ref{tbl:heights}, respectively. We find LDA calculations consistently yield heights for C in PFP at \(\sim 2.8\)~\AA\ above the Ag(111) surface. This is in contrast to XSW measurements, which find both C$_{\textrm{C}}$ and C$_{\textrm{F}}$ atomic species at \(h\approx 3.16\)~\AA\ in the pure PFP monolayer, and much higher at $h\approx 3.28$ and 3.51~\AA, respectively, in the 1:1 mixture with CuPc, \emph{cf}.\ Table~\ref{tbl:Qexp}. To understand how these discrepancies may affect the reliability of the effective potential model, we have calculated the dependence of the core-level shifts $\Delta E$ and the charge $Q$ on the height $h$ of a pure PFP monolayer on three layer Ag(111). This is accomplished by performing separate initial state core-level calculations and Bader analyses after rigidly shifting the PFP monolayer to a height $h$ above the Ag(111) surface. Figure~\ref{fgr:EC1sQvsh} (a) shows that as the PFP monolayer is raised off the surface, the C$_{\textrm{C}}$1s and C$_{\textrm{F}}$1s binding energies decrease monotonically to the binding energy for the neutral monolayer in vacuum, $E_{Q=0}$. Further, the dependence of the core-level shifts on the height of the molecule is rather weak, changing by less than 0.2 eV between the calculated and measured PFP heights of 2.82 and 3.16 \AA, respectively. On the other hand, as shown in Figure~\ref{fgr:EC1sQvsh} (b), the charge $Q$ of PFP has a stronger dependence on the height $h$, with $Q \sim -0.8$ and $-0.4$~$e$ at the calculated and measured PFP heights, respectively. As expected, we find the charge of PFP decays monotonically to zero as the monolayer is raised off the Ag(111) surface.
Taken together, these results suggest that although the charge transfer to the PFP monolayer $Q$ decreases in magnitude with increasing height $h$, this is countered by a decrease in magnitude of the external potential $V$ from the Ag(111) substrate. In effect, changes in $f_{\textit{mol}}(Q)$ and $V$ with $h$ balance, so that the core-level shifts $\Delta E$ change rather little. Overall, this suggests LDA initial state calculations of core-level shifts should provide a reliable description of XPS measurements, and an effective potential model based on LDA parameters may be applied to estimate charge transfer based on XPS core-level shifts and XSW heights. \section{CONCLUSIONS}\label{sec:conclusions} In summary, we have derived and applied an effective potential approach to describe charge transfer within a reticular donor-acceptor/metal complex based on core-level shifts. To do so we have performed DFT calculations and XPS measurements of core-level shifts for PFP, CuPc, and mixed 1:1 PFP+CuPc layers adsorbed on Ag(111). We find that the calculated core-level shifts are described near-quantitatively in terms of the charge transfer into the molecule, and the change in external potential from the environment, which captures the effect of screening for the weakly interacting PFP molecule. Using this model, we were able to estimate the charge transfer into a molecule using the experimental core-level shift relative to the pure multilayer crystal, and the calculated change in effective potential due to the other molecules and the metallic substrate. This provides a novel method for the direct assessment of charge transfer in weakly interacting molecule--substrate systems via XPS measurements and routine DFT calculations. However, further study is needed for other donor--acceptor/metal systems, e.g.\ PEN or FCuPc on Cu or Au, to fully assess the applicability of the effective potential approach.
\section{Introduction} A number of astrophysical source classes including supernova remnants (SNRs), pulsar wind nebulae (PWNe), molecular clouds, normal galaxies, and galaxy clusters are expected to be spatially resolvable by the Large Area Telescope (LAT), the primary instrument on the {\em \textit{Fermi}\xspace Gamma-ray Space Telescope} (\textit{Fermi}\xspace). Additionally, dark matter satellites are also hypothesized to be spatially extended. See \cite{atwood_fermi} for pre-launch predictions. The LAT has detected seven SNRs which are significantly extended at \text{GeV}\xspace energies: W51C, W30, IC~443, W28, W44, RX\,J1713.7$-$3946, and the Cygnus Loop \citep{w51c,w30_lat,ic443,w28,w44,rx_j1713_lat,cygnus_loop_lat}. In addition, three extended PWN have been detected by the LAT: MSH\,15$-$52, Vela~X, and HESS\,J1825$-$137 \citep{msh1552,velax,fermi_hess_j1825}. Two nearby galaxies, the Large and Small Magellanic Clouds, and the lobes of one radio galaxy, Centaurus A, were spatially resolved at \text{GeV}\xspace energies \citep{lmc,smc,cen_a_lat}. A number of additional sources detected at \text{GeV}\xspace energies are positionally coincident with sources that exhibit large enough extension at other wavelengths to be spatially resolvable by the LAT at \text{GeV}\xspace energies. In particular, there are 59 \text{GeV}\xspace sources in the second Fermi Source Catalog (2FGL) that might be associated with extended SNRs \citep[2FGL,][]{second_cat}. Previous analyses of extended LAT sources were performed as dedicated studies of individual sources so we expect that a systematic scan of all LAT-detected sources could uncover additional spatially extended sources. The current generation of air Cherenkov detectors have made it apparent that many sources can be spatially resolved at even higher energies. 
Most prominent was a survey of the Galactic plane using the High Energy Stereoscopic System (H.E.S.S.), which reported 14 spatially extended sources with extensions varying from $\sim0\fdg1$ to $\sim0\fdg25$ \citep{hess_plane_survey}. Within our Galaxy, very few sources detected at \text{TeV}\xspace energies, most notably the $\gamma$-ray binaries LS\,5039 \citep{HESSLS5039}, LS I+61$-$303 \citep{MAGICLSI, VERITASLSI}, HESS\,J0632+057 \citep{HESS0632}, and the Crab nebula \citep{crab_weekes}, have no detectable extension. High-energy $\gamma$-rays from \text{TeV}\xspace sources are produced by the decay of $\pi^0$s created in hadronic interactions with interstellar matter, and by relativistic electrons through Inverse Compton (IC) scattering and bremsstrahlung radiation. It is plausible that the \text{GeV}\xspace and \text{TeV}\xspace emission from these sources originates from the same population of high-energy particles, and so at least some of these sources should be detectable at \text{GeV}\xspace energies. Studying these \text{TeV}\xspace sources at \text{GeV}\xspace energies would help to determine the emission mechanisms producing these high energy photons. The LAT is a pair conversion telescope that has been surveying the $\gamma$-ray sky since 2008 August. The LAT has broad energy coverage (20 \text{MeV}\xspace to $>300$ \text{GeV}\xspace), wide field of view ($\sim 2.4$ sr), and large effective area ($\sim 8000\ \text{cm}\xspace^2$ at $>1$ \text{GeV}\xspace). Additional information about the performance of the LAT can be found in \cite{atwood_LAT_mission}. Using 2 years of all-sky survey data, the LAT Collaboration published 2FGL \citep[2FGL,][]{second_cat}. The possible counterparts of many of these sources can be spatially resolved when observed at other frequencies.
But detecting the spatial extension of these sources at \text{GeV}\xspace energies is difficult because the size of the point-spread function (PSF) of the LAT is comparable to the typical size of many of these sources. The capability to spatially resolve \text{GeV}\xspace $\gamma$-ray sources is important for several reasons. Finding a coherent source extension across different energy bands can help to associate a LAT source with an otherwise confused counterpart. Furthermore, $\gamma$-ray emission from dark matter annihilation has been predicted to be detectable by the LAT. Some of the dark matter substructure in our Galaxy could be spatially resolvable by the LAT \citep{pre_luanch_dark_matter_fermi}. Characterization of spatial extension could help to identify this substructure. Also, due to the strong energy dependence of the LAT PSF, the spatial and spectral characterization of a source cannot be decoupled. An inaccurate spatial model will bias the spectral model of the source and vice versa. Specifically, modeling a spatially extended source as point-like will systematically soften the measured spectrum. Furthermore, correctly modeling source extension is important for understanding an entire region of the sky. For example, an imperfect model of the spatially extended LMC introduced significant residuals in the surrounding region \citep{first_cat,second_cat}. Such residuals can bias the significance and measured spectra of neighboring sources in the densely populated Galactic plane. For these reasons, in Section~\ref{analysis_methods_section} we present a new systematic method for analyzing spatially extended LAT sources. In Section~\ref{validate_ts}, we demonstrate that this method can be used to test the statistical significance of the extension of a LAT source, and we assess the expected level of bias introduced by assuming an incorrect spatial model. 
In Section~\ref{extension_sensitivity}, we calculate the LAT detection threshold to resolve the extension of a source. In Section~\ref{dual_localization_method}, we study the ability of the LAT to distinguish between a single extended source and unresolved closely-spaced point-like sources. In Section~\ref{test_2lac_sources}, we further demonstrate that our detection method does not misidentify point-like sources as being extended by testing the extension of active galactic nuclei (AGN) believed to be unresolvable. In Section~\ref{validate_known}, we systematically reanalyze the twelve extended sources included in the 2FGL catalog, and in Section~\ref{systematic_errors_on_extension} we describe a way to estimate systematic errors on the measured extension of a source. In Section~\ref{extended_source_search_method}, we describe a search for new spatially extended LAT sources. Finally, in Section~\ref{new_ext_srcs_section} we present the detection of the extension of nine spatially extended sources that were reported in the 2FGL catalog but treated as point-like in that analysis. Two of these sources have been previously analyzed in dedicated publications. \section{Analysis Methods} \label{analysis_methods_section} Morphological studies of sources using the LAT are challenging because of the strongly energy-dependent PSF, which is comparable in size to the extension of many sources expected to be detected at \text{GeV}\xspace energies. Additional complications arise for sources along the Galactic plane due to systematic uncertainties in the model for Galactic diffuse emission. For energies below $\sim$300~\text{MeV}\xspace, the angular resolution is limited by multiple scattering in the silicon strip tracking section of the detector and is several degrees at 100 \text{MeV}\xspace. 
The PSF improves with energy, approaching a 68\% containment radius of $\sim0\fdg2$ at the highest energies (when averaged over the acceptance of the LAT), where it is limited by the ratio of the strip pitch to the height of the tracker \citep{atwood_LAT_mission,on_orbit_calibration,lat_on_orbit_psf}.\footnote{More information about the performance of the LAT can be found at the \textit{Fermi}\xspace Science Support Center (FSSC, \url{http://fermi.gsfc.nasa.gov}).} However, since most high-energy astrophysical sources have spectra that decrease rapidly with increasing energy, there are typically fewer higher-energy photons with improved angular resolution. Therefore sophisticated analysis techniques are required to maximize the sensitivity of the LAT to extended sources. \subsection{Modeling Extended Sources in the \ensuremath{\mathtt{pointlike}}\xspace Package} A new maximum-likelihood analysis tool has been developed to address the unique requirements for studying spatially extended sources with the LAT. It works by maximizing the Poisson likelihood of obtaining the observed distribution of $\gamma$-rays (referred to as counts) given a parametrized spatial and spectral model of the sky. The data are binned spatially, using a HEALPix pixelization \citep{healpix}, and spectrally, and the likelihood is maximized over all bins in a region. The extension of a source can be modeled by a geometric shape (e.g. a disk or a two-dimensional Gaussian) and the position, extension, and spectrum of the source can be simultaneously fit. This type of analysis is unwieldy using the standard LAT likelihood analysis tool \ensuremath{\mathtt{gtlike}}\xspace\footnote{\ensuremath{\mathtt{gtlike}}\xspace is distributed publicly by the FSSC.} because it can only fit the spectral parameters of the model unless a more sophisticated iterative procedure is used. 
We note that \ensuremath{\mathtt{gtlike}}\xspace has been used in the past in several studies of source extension in the LAT Collaboration \citep{lmc,smc,w28,w51c}. In these studies, a set of \ensuremath{\mathtt{gtlike}}\xspace maximum likelihood fits at fixed extensions was used to build a profile of the likelihood as a function of extension. The \ensuremath{\mathtt{gtlike}}\xspace likelihood profile approach has been shown to correctly reproduce the extension of simulated extended sources assuming that the true position is known \citep{francesco_2011}. But it is not optimal because the position, extension, and spectrum of the source must be simultaneously fit to find the best fit parameters and to maximize the statistical significance of the detection. Furthermore, because the \ensuremath{\mathtt{gtlike}}\xspace approach is computationally intensive, no large-scale Monte Carlo simulations have been run to calculate its false detection rate. The approach presented here is based on a second maximum likelihood fitting package developed in the LAT Collaboration called \ensuremath{\mathtt{pointlike}}\xspace \citep{first_cat,matthew_kerr_thesis}. The choice to base the spatial extension fitting on \ensuremath{\mathtt{pointlike}}\xspace rather than \ensuremath{\mathtt{gtlike}}\xspace was made due to considerations of computing time. The \ensuremath{\mathtt{pointlike}}\xspace algorithm was optimized for speed to handle larger numbers of sources efficiently, which is important for our catalog scan and for being able to perform large-scale Monte Carlo simulations to validate the analysis. Details on the \ensuremath{\mathtt{pointlike}}\xspace package can be found in \cite{matthew_kerr_thesis}. We extended the code to allow a simultaneous fit of the source extension together with the position and the spectral parameters. 
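The core of such a binned analysis, maximizing the Poisson likelihood of the observed counts given a parametrized model, can be sketched in a few lines. The toy below is not the \ensuremath{\mathtt{pointlike}}\xspace implementation: the one-parameter model, Gaussian-shaped binned template, and flat background are all illustrative assumptions chosen only to show the structure of the fit.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, counts, template, background):
    """Binned Poisson negative log-likelihood with one free source
    normalization (toy stand-in for a full spatial/spectral model)."""
    model = params[0] * template + background  # expected counts per bin
    # drop the log(counts!) term, which is constant during the fit
    return np.sum(model - counts * np.log(model))

rng = np.random.default_rng(0)
template = np.exp(-0.5 * (np.arange(100) / 10.0) ** 2)  # toy binned PSF shape
background = np.full(100, 5.0)                          # flat toy background
counts = rng.poisson(20.0 * template + background)      # simulate norm = 20

fit = minimize(neg_log_likelihood, x0=[1.0],
               args=(counts, template, background), bounds=[(0.0, None)])
```

The fitted normalization recovers the simulated value to within its statistical uncertainty; in the real analysis the position, extension, and spectral parameters are varied simultaneously in the same way.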
\subsection{Extension Fitting} \label{extension_fitting} In \ensuremath{\mathtt{pointlike}}\xspace, one can fit the position and extension of a source under the assumption that the source model can be factorized: $M(x,y,E)=S(x,y)\times X(E)$, where $S(x,y)$ is the spatial distribution and $X(E)$ is the spectral distribution. To fit an extended source, \ensuremath{\mathtt{pointlike}}\xspace convolves the extended source shape with the PSF (as a function of energy) and uses the \ensuremath{\mathtt{minuit}}\xspace library \citep{minuit_documentation} to maximize the likelihood by simultaneously varying the position, extension, and spectrum of the source. As will be described in Section~\ref{monte_carlo_validation}, simultaneously fitting the position, extension, and spectrum is important to maximize the statistical significance of the detection of the extension of a source. To avoid projection effects, the longitude and latitude of the source are not fit directly; instead, we fit the displacement of the source in a reference frame centered on the source. The significance of the extension of a source can be calculated from the likelihood-ratio test. The likelihood ratio defines the test statistic (TS) by comparing the likelihood of a simpler hypothesis to a more complicated one: \begin{equation} \text{TS}\xspace=2\log(\ensuremath{\mathcal{L}}\xspace(H_1)/\ensuremath{\mathcal{L}}\xspace(H_0)), \end{equation} where $H_1$ is the more complicated hypothesis and $H_0$ the simpler one. For the case of the extension test, we compare the likelihoods obtained when modeling the source as either point-like or spatially extended: \begin{equation} {\ensuremath{\text{TS}_{\text{ext}}}}\xspace=2\log(\ensuremath{\mathcal{L}}\xspace_\text{ext}/\ensuremath{\mathcal{L}}\xspace_\text{ps}). \end{equation} \ensuremath{\mathtt{pointlike}}\xspace calculates {\ensuremath{\text{TS}_{\text{ext}}}}\xspace by fitting a source first with a spatially extended model and then as a point-like source. 
The interpretation of {\ensuremath{\text{TS}_{\text{ext}}}}\xspace in terms of a statistical significance is discussed in Section~\ref{monte_carlo_validation}. For extended sources with an assumed radially-symmetric shape, we optimized the calculation by performing one of the integrals analytically. The expected photon distribution can be written as \begin{equation} \text{PDF}(\vec r) = \int \text{PSF}(|\vec r - \vec r'|)I_\text{src}(\vec r') r' dr' d\phi' \end{equation} where $\vec r$ represents the position in the sky and $I_\text{src}(\vec r)$ is the spatial distribution of the source. The PSF of the LAT is currently parameterized in the Pass~7\_V6 (P7\_V6) Source Instrument Response Functions \citep[IRFs,][]{lat_on_orbit_psf} by a King function \citep{king_function}: \begin{equation} \text{PSF}(r) = \frac{1}{2\pi\sigma^2} \left(1-\frac{1}{\gamma}\right) \left(1+\frac{u}{\gamma}\right)^{-\gamma}, \end{equation} where $u=(r/\sigma)^2/2$ and $\sigma$ and $\gamma$ are free parameters \citep{matthew_kerr_thesis}. For radially-symmetric extended sources, the angular part of the integral can be evaluated analytically: \begin{align} \text{PDF}(u) & = \int_0^\infty r' dr' I_\text{src}(v) \int_0^{2\pi} d\phi' \text{PSF}(\sqrt{2\sigma^2(u+v-2\sqrt{uv}\cos(\phi-\phi'))}) \\ & = \int_0^\infty dv I_\text{src}(v) \left(\frac{\gamma-1}{\gamma}\right) \left( \frac{\gamma}{\gamma + u + v}\right)^\gamma \times {}_2F_1 \left(\gamma/2,\frac{1+\gamma}{2},1,\frac{4uv}{(\gamma+u+v)^2}\right), \end{align} where $v=(r'/\sigma)^2/2$ and ${}_2F_1$ is the Gaussian hypergeometric function. This convolution formula reduces the expected photon distribution to a single numerical integral. There will always be a small numerical discrepancy between the expected photon distribution derived from a true point-like source and that of a very small extended source due to numerical error in the convolution. In most situations, this error is insignificant. 
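This single-integral form is straightforward to evaluate numerically. The sketch below (using SciPy's Gaussian hypergeometric function) computes the expected photon density in the $u=(r/\sigma)^2/2$ variable for a uniform disk; the disk radius and King-function parameter in the test are arbitrary example values, not fitted LAT quantities.

```python
import numpy as np
from scipy.special import hyp2f1
from scipy.integrate import quad

def disk_pdf_u(u, v_max, gamma):
    """Expected photon density in u = (r/sigma)^2/2 for a uniform disk
    extending to v_max, via the analytic angular integral (2F1)."""
    def kernel(v):
        z = 4.0 * u * v / (gamma + u + v) ** 2
        return ((gamma - 1.0) / gamma) * (gamma / (gamma + u + v)) ** gamma \
            * hyp2f1(0.5 * gamma, 0.5 * (1.0 + gamma), 1.0, z)
    # uniform disk: I_src(v) = 1/v_max for v <= v_max, zero beyond
    val, _ = quad(kernel, 0.0, v_max)
    return val / v_max
```

In the limit of a vanishing disk the expression reduces to the King profile itself, which provides a convenient cross-check of the implementation.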
But for very bright sources in particular, this numerical error has the potential to bias the TS for the extension test. Therefore, when calculating {\ensuremath{\text{TS}_{\text{ext}}}}\xspace, we compare the likelihood obtained when fitting the source with an extended spatial model to the likelihood obtained when the extension is fixed to a very small value ($10^{-10}$ degrees in radius for a uniform disk model). We estimate the error on the extension of a source by fixing the position of the source and varying the extension until the log of the likelihood has decreased by 1/2, corresponding to a $1\sigma$ error \citep{Statistical_methods_book}. Figure~\ref{four_plots_ic443} demonstrates this method by showing the change in the log of the likelihood when varying the modeled extension of the SNR IC~443. The localization error is calculated by fixing the extension and spectrum of the source to their best-fit values and then fitting the log of the likelihood as a function of position with a 2D Gaussian. This localization error algorithm is further described in \cite{second_cat}. \subsection{\ensuremath{\mathtt{gtlike}}\xspace Analysis Validation} \label{gtlike_crosscheck} \ensuremath{\mathtt{pointlike}}\xspace is important for analyses of LAT data that require many iterations, such as source localization and extension fitting. On the other hand, because \ensuremath{\mathtt{gtlike}}\xspace makes fewer approximations in calculating the likelihood, we expect the spectral parameters found with \ensuremath{\mathtt{gtlike}}\xspace to be slightly more accurate. Furthermore, because \ensuremath{\mathtt{gtlike}}\xspace is the standard likelihood analysis package for LAT data, it has been more extensively validated for spectral analysis. For these reasons, in the following analysis we used \ensuremath{\mathtt{pointlike}}\xspace to determine the position and extension of a source and subsequently derived the spectrum using \ensuremath{\mathtt{gtlike}}\xspace. 
Both \ensuremath{\mathtt{gtlike}}\xspace and \ensuremath{\mathtt{pointlike}}\xspace can be used to estimate the statistical significance of the extension of a source and we required that both methods agree for a source to be considered extended. There was good agreement between the two methods. Unless explicitly mentioned, all \text{TS}\xspace, {\ensuremath{\text{TS}_{\text{ext}}}}\xspace, and spectral parameters were calculated using \ensuremath{\mathtt{gtlike}}\xspace with the best-fit positions and extension found by \ensuremath{\mathtt{pointlike}}\xspace. \subsection{Comparing Source Sizes} \label{compare_source_size} We considered two models for the surface brightness profile for extended sources: a 2D Gaussian model \begin{equation}\label{gauss_pdf} I_\text{Gaussian}(x,y)=\tfrac{1}{2\pi\sigma^2}\exp\left(-(x^2+y^2)/2\sigma^2\right) \end{equation} or a uniform disk model \begin{equation}\label{disk_pdf} I_\text{disk}(x,y)= \begin{cases} \frac{1}{\pi\sigma^2} & x^2+y^2\le\sigma^2 \\ 0 & x^2+y^2>\sigma^2. \end{cases} \end{equation} Although these shapes are significantly different, Figure~\ref{compare_disk_gauss} shows that, after convolution with the LAT PSF, their PDFs are similar for a source that has a 0\fdg5 radius typical of LAT-detected extended sources. To allow a valid comparison between the Gaussian and the uniform disk models, we define the source size as the radius containing 68\% of the intensity (${\ensuremath{\text{r}_{68}}}\xspace$). By direct integration, we find \begin{align} {\ensuremath{\text{r}_{68}}}\xspace_\text{,Gaussian}=&1.51\sigma, \\ {\ensuremath{\text{r}_{68}}}\xspace_\text{,disk}=&0.82\sigma, \end{align} where $\sigma$ is defined in Equation~\ref{gauss_pdf} and Equation~\ref{disk_pdf} respectively. For the example above, ${\ensuremath{\text{r}_{68}}}\xspace=0\fdg5$ so $\sigma_\text{disk}=0.61\ensuremath{^\circ}\xspace$ and $\sigma_\text{Gaussian}=0.33\ensuremath{^\circ}\xspace$. 
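The conversion factors quoted above follow directly from inverting the enclosed-intensity fractions of Equations~\ref{gauss_pdf} and \ref{disk_pdf}; a quick numerical check:

```python
import numpy as np

# Gaussian: 1 - exp(-r^2 / 2 sigma^2) = 0.68  ->  r68 = 1.51 sigma
r68_gaussian = np.sqrt(-2.0 * np.log(1.0 - 0.68))
# uniform disk: r^2 / sigma^2 = 0.68          ->  r68 = 0.82 sigma
r68_disk = np.sqrt(0.68)

# sigma values giving r68 = 0.5 deg, as in the example above
sigma_gaussian = 0.5 / r68_gaussian
sigma_disk = 0.5 / r68_disk
```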
For sources that are comparable in size to the PSF, the differences in the PDF for different spatial models are lost in the noise and the LAT is not sensitive to the detailed spatial structure of these sources. In Section~\ref{bias_wrong_spatial_model}, we perform a dedicated Monte Carlo simulation that shows there is little bias due to incorrectly modeling the spatial structure of an extended source. Therefore, in our search for extended sources we use only a radially-symmetric uniform disk spatial model. Unless otherwise noted, we quote the radius to the edge ($\sigma$) as the size of the source. \section{Validation of the \text{TS}\xspace Distribution} \label{validate_ts} \subsection{Point-like Source Simulations Over a Uniform Background} \label{monte_carlo_validation} We tested the theoretical distribution for {\ensuremath{\text{TS}_{\text{ext}}}}\xspace to evaluate the false detection probability for measuring source extension. To do so, we tested simulated point-like sources for extension. \cite{mattox_egret} discuss that the \text{TS}\xspace distribution for a likelihood-ratio test on the existence of a source at a given position is \begin{equation}\label{ts_ext_distribution} P(\text{TS}\xspace)=\onehalf(\chi^2_1(\text{TS}\xspace)+\delta(\text{TS}\xspace)), \end{equation} where $P(\text{TS}\xspace)$ is the probability density to get a particular value of TS, $\chi^2_1$ is the chi-squared distribution with one degree of freedom, and $\delta$ is the Dirac delta function. The particular form of Equation~\ref{ts_ext_distribution} is due to the null hypothesis (source flux $\Phi=0$) residing on the edge of parameter space and the model hypothesis adding a single degree of freedom (the source flux). It leads to the often-quoted result $\sqrt{\text{TS}\xspace}=\sigma$, where $\sigma$ here refers to the significance of the detection. 
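The boundary effect behind Equation~\ref{ts_ext_distribution} can be illustrated with a one-dimensional toy problem (an illustrative sketch, not the LAT simulation described below): for a single observation $x \sim N(\mu,1)$ with the physical constraint $\mu \ge 0$, the maximum-likelihood estimate is $\hat\mu = \max(x,0)$, so $\text{TS}\xspace = \max(x,0)^2$, which under the null follows exactly the mixture above.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(42)
x = rng.standard_normal(200_000)   # null hypothesis: mu = 0
ts = np.maximum(x, 0.0) ** 2       # TS for the boundary-constrained fit

frac_zero = np.mean(ts == 0.0)     # delta-function component: expect 1/2
tail_obs = np.mean(ts > 1.0)       # compare with half the chi2_1 tail
tail_pred = 0.5 * chi2.sf(1.0, df=1)
```

Half of the trials land exactly on the boundary (the $\delta$ term), while the positive values follow a $\chi^2_1$ distribution with half weight.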
It is plausible to expect a similar distribution for the TS in the test for source extension since the same conditions apply (with the source flux $\Phi$ replaced by the source radius $r$ and $r<0$ being unphysical). To verify Equation~\ref{ts_ext_distribution}, we evaluated the empirical distribution function of {\ensuremath{\text{TS}_{\text{ext}}}}\xspace computed from simulated sources. We simulated point-like sources with various spectral forms using the LAT on-orbit simulation tool \ensuremath{\mathtt{gtobssim}}\xspace\footnote{\ensuremath{\mathtt{gtobssim}}\xspace is distributed publicly by the FSSC.} and fit the sources with \ensuremath{\mathtt{pointlike}}\xspace using both point-like and extended source hypotheses. These point-like sources were simulated with a power-law spectral model with integrated fluxes above 100 \text{MeV}\xspace ranging from $3\times10^{-9}$ to $1\times10^{-5}$ \ensuremath{\text{ph}\,\text{cm}^{-2}\,\text{s}^{-1}}\xspace and spectral indices ranging from 1.5 to 3. These values were picked to represent typical parameters of LAT-detected sources. The point-like sources were simulated on top of an isotropic background with a power-law spectral model with integrated flux above 100 \text{MeV}\xspace of $1.5\times10^{-5}$ \ensuremath{\text{ph}\,\text{cm}^{-2}\,\text{s}^{-1}}\xspace sr$^{-1}$ and spectral index 2.1. This was taken to be the same as the isotropic spectrum measured by EGRET \citep{sreekumar_isotropic}. This spectrum is comparable to the high-latitude background intensity seen by the LAT. The Monte Carlo simulation was performed over a one-year observation period using a representative spacecraft orbit and livetime. The reconstruction was performed using the P7\_V6 Source class event selection and IRFs \citep{lat_on_orbit_psf}. 
For each significantly detected point-like source ($\text{TS}\xspace\ge25$), we used \ensuremath{\mathtt{pointlike}}\xspace to fit the source as an extended source and calculate {\ensuremath{\text{TS}_{\text{ext}}}}\xspace. This entire procedure was performed twice, once fitting in the 1 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range and once fitting in the 10 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range. For each set of spectral parameters, $\sim20,000$ statistically independent simulations were performed. For lower-flux spectral models, the source was not significantly detected ($\text{TS}\xspace<25$) in many of the simulations; these simulations were discarded. Table~\ref{ts_ext_num_sims} shows the different spectral models used in our study as well as the number of simulations and the average point-like source significance. The cumulative density of {\ensuremath{\text{TS}_{\text{ext}}}}\xspace is plotted in Figures~\ref{ts_ext_mc_1000} and \ref{ts_ext_mc_10000} and compared to the $\chi^2_1/2$ distribution of Equation~\ref{ts_ext_distribution}. Our study shows broad agreement between the simulations and Equation~\ref{ts_ext_distribution}. To the extent that there is a discrepancy, the simulations tended to produce smaller-than-expected values of {\ensuremath{\text{TS}_{\text{ext}}}}\xspace, which would make the formal significance conservative. Considering the distributions in Figures~\ref{ts_ext_mc_1000} and \ref{ts_ext_mc_10000}, the choice of a {\ensuremath{\text{TS}_{\text{ext}}}}\xspace threshold of 16 (corresponding to a formal $4\sigma$ significance) is reasonable. \subsection{Point-like Source Simulations Over a Structured Background} \label{validation_over_plane} We performed a second set of simulations to show that the theoretical distribution for {\ensuremath{\text{TS}_{\text{ext}}}}\xspace is still preserved when the point-like sources are present over a highly-structured diffuse background. 
Our simulation setup was the same as above except that the sources were simulated on top of and analyzed assuming the presence of the standard Galactic diffuse and isotropic background models used in 2FGL. In our simulations, we selected our sources to have random positions on the sky such that they were within 5\ensuremath{^\circ}\xspace of the Galactic plane. This probes the brightest and most strongly contrasting areas of the Galactic background. To limit the number of tests, we selected only one flux level for each of the four spectral indices and we performed this test only in the 1 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range. As described below, the fluxes were selected so that $\text{TS}\xspace\sim50$. We do not expect to be able to spatially resolve sources at lower fluxes than these, and the results for much brighter sources are less likely to be affected by the structured background. Because the Galactic diffuse emission is highly structured with strong gradients, the point-source sensitivity can vary significantly across the Galactic plane. To account for this, we scaled the flux (for a given spectral index) so that the source always has approximately the same signal-to-noise ratio: \begin{equation} \label{scale_flux_by_background} F(\vec{x}) = F(\text{GC}) \times \left( \frac{B(\vec{x})}{B(\text{GC})}\right)^{1/2}. \end{equation} Here, $\vec{x}$ is the position of the simulated source, $F$ is the integral flux of the source from 100 \text{MeV}\xspace to 100 \text{GeV}\xspace, $F(\text{GC})$ is the same quantity if the source was at the Galactic center, $B$ is the integral of the Galactic diffuse and isotropic emission from 1 \text{GeV}\xspace to 100 \text{GeV}\xspace at the position of the source, and $B(\text{GC})$ is the same quantity if the source was at the Galactic center. For the four spectral models, Table~\ref{ts_ext_num_sims} lists $F(\text{GC})$ and the average value of \text{TS}\xspace. 
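Equation~\ref{scale_flux_by_background} amounts to a one-line rescaling; a sketch (the numbers are hypothetical, chosen only to show that a source over four times the Galactic-center background is assigned twice the flux):

```python
def scaled_flux(f_gc, b_x, b_gc):
    """Scale the simulated flux with the square root of the local diffuse
    background so the signal-to-noise ratio stays roughly constant."""
    return f_gc * (b_x / b_gc) ** 0.5

# a source over 4x the Galactic-center background gets 2x the flux
flux = scaled_flux(1.0e-8, 4.0, 1.0)
```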
For each spectrum, we performed $\sim90,000$ simulations. Figure~\ref{tsext_plane_plot} shows the cumulative density of {\ensuremath{\text{TS}_{\text{ext}}}}\xspace for each spectrum. For small values of {\ensuremath{\text{TS}_{\text{ext}}}}\xspace, there is good agreement between the simulations and theory. For the highest values of {\ensuremath{\text{TS}_{\text{ext}}}}\xspace, there is possibly a small discrepancy, but the discrepancy is not statistically significant. Therefore, we are confident we can use {\ensuremath{\text{TS}_{\text{ext}}}}\xspace as a robust measure of statistical significance when testing LAT-detected sources for extension. \subsection{Extended Source Simulations Over a Structured Background} \label{bias_wrong_spatial_model} We also performed a Monte Carlo study to show that incorrectly modeling the spatial extension of an extended source does not substantially bias the spectral fit of the source, although it does alter the value of the \text{TS}\xspace. To assess this, we simulated the spatially extended ring-type SNR W44. We selected W44 because it is the most significant extended source detected by the LAT that has a non-radially symmetric photon distribution \citep{w44}. W44 was simulated with a power-law spectral model with an integral flux of $7.12\times10^{-8}$ \ensuremath{\text{ph}\,\text{cm}^{-2}\,\text{s}^{-1}}\xspace in the energy range from 1 \text{GeV}\xspace to 100 \text{GeV}\xspace and a spectral index of 2.66 (see Section~\ref{validate_known}). W44 was simulated with the elliptical ring spatial model described in \cite{w44}. For reference, the ellipse has a semi-major axis of 0\fdg3, a semi-minor axis of 0\fdg19, a position angle of $147\ensuremath{^\circ}\xspace$ measured East of celestial North, and the ring's inner radius is 75\% of the outer radius. We used a simulation setup similar to that described in Section~\ref{validation_over_plane}, but the simulations were over the 2-year interval of the 2FGL catalog. 
To isolate any effects due to changing the assumed spatial model, we did not include the finite energy resolution of the LAT in the simulations. The fitting code we use also ignores energy dispersion; the potential bias introduced by this will be discussed in an upcoming paper by the LAT Collaboration \citep{lat_on_orbit_psf}. In total, we performed 985 independent simulations. The simulated sources were fit using a point-like spatial model, a radially-symmetric Gaussian spatial model, a uniform disk spatial model, an elliptical disk spatial model, and finally with an elliptical ring spatial model. We obtained the best-fit spatial parameters using \ensuremath{\mathtt{pointlike}}\xspace and, with these parameters, obtained the best-fit spectral parameters using \ensuremath{\mathtt{gtlike}}\xspace. Figure~\ref{ts_comparison_w44sim}a shows that the significance of W44 in the simulations is very large ($\text{TS}\xspace\sim3500$) under the point-like source hypothesis. Figure~\ref{ts_comparison_w44sim}b shows that the significance of the spatial extension is also large (${\ensuremath{\text{TS}_{\text{ext}}}}\xspace\sim250$). On average, {\ensuremath{\text{TS}_{\text{ext}}}}\xspace is somewhat larger when fitting the sources with more accurate spatial models. This shows that assuming an incorrect spatial model will cause the significance of the source extension to be underestimated. Figure~\ref{ts_comparison_w44sim}c shows that the sources were fit better when assuming an elliptical disk spatial model compared to a uniform disk spatial model ($\text{TS}\xspace_\text{elliptical\ disk}-\text{TS}\xspace_\text{disk}\sim30$). Finally, Figure~\ref{ts_comparison_w44sim}d shows that the sources were fit somewhat better assuming an elliptical ring spatial model compared to an elliptical disk spatial model ($\text{TS}\xspace_\text{elliptical\ ring}-\text{TS}\xspace_\text{elliptical\ disk}\sim9$). 
This shows that the LAT has some additional power to resolve substructure in bright extended sources. Figure~\ref{bias_w44sim}a and Figure~\ref{bias_w44sim}b clearly show that no significant bias is introduced by modeling the source as extended but with an inaccurate spatial model, while modeling the source as point-like results in $\sim10\%$ and $\sim0.125$ biases in the fit flux and index, respectively. Furthermore, Figure~\ref{bias_w44sim}c shows that the {\ensuremath{\text{r}_{68}}}\xspace estimate of the extension size is only mildly biased ($\sim10\%$) toward higher values when inaccurate spatial models are used, and thus represents a reasonable measurement of the true 68\% containment radius for the source. For the elliptical spatial models, {\ensuremath{\text{r}_{68}}}\xspace is computed by numeric integration. \section{Extended Source Detection Threshold} \label{extension_sensitivity} We calculated the LAT flux threshold to detect spatial extent. We define the detection threshold as the flux at which the value of ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace$ averaged over many statistical realizations is $\langle{\ensuremath{\text{TS}_{\text{ext}}}}\xspace\rangle=16$ (corresponding to a formal $4\sigma$ significance) for a source of a given extension. We used a simulation setup similar to that described in Section~\ref{monte_carlo_validation}, but instead of point-like sources we simulated extended sources with radially-symmetric uniform disk spatial models. Additionally, we simulated our sources over the two-year time range included in the 2FGL catalog. For each extension and spectral index, we selected a flux range which bracketed ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace=16$ and, for each of ten fluxes in this range, performed the extension test on $>100$ independent realizations. We then determined the threshold flux at which $\langle{\ensuremath{\text{TS}_{\text{ext}}}}\xspace\rangle=16$ by fitting a line to the flux and ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace$ values in this narrow range. 
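Extracting the threshold flux from the narrow bracketing scan reduces to a linear fit; a sketch with hypothetical (flux, $\langle{\ensuremath{\text{TS}_{\text{ext}}}}\xspace\rangle$) pairs chosen purely for illustration:

```python
import numpy as np

# hypothetical averaged extension TS values bracketing TS_ext = 16
fluxes = np.array([2.0, 3.0, 4.0, 5.0, 6.0]) * 1e-9  # ph cm^-2 s^-1
mean_ts_ext = np.array([8.0, 12.1, 15.9, 20.2, 23.8])

slope, intercept = np.polyfit(fluxes, mean_ts_ext, 1)
flux_threshold = (16.0 - intercept) / slope  # flux where <TS_ext> = 16
```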
Figure~\ref{index_sensitivity} shows the threshold for sources of four spectral indices from 1.5 to 3 and extensions varying from $\sigma=0\fdg1$ to $2\fdg0$. The threshold is high for small extensions when the source is small compared to the size of the PSF. It drops quickly with increasing source size and reaches a minimum around 0\fdg5. The threshold increases for large extended sources because the source becomes increasingly diluted by the background. Figure~\ref{index_sensitivity} shows the threshold using photons with energies between 100 \text{MeV}\xspace and 100 \text{GeV}\xspace and also using only photons with energies between 1 \text{GeV}\xspace and 100 \text{GeV}\xspace. Except for very large or very soft sources, the threshold is not substantially improved by including photons with energies between 100 \text{MeV}\xspace and 1 \text{GeV}\xspace. This is also demonstrated in Figure~\ref{four_plots_ic443} which shows {\ensuremath{\text{TS}_{\text{ext}}}}\xspace for the SNR IC~443 computed independently in twelve energy bins between 100 \text{MeV}\xspace and 100 \text{GeV}\xspace. For IC~443, which has a spectral index $\sim2.4$ and an extension $\sim0\fdg35$, almost the entire increase in likelihood from optimizing the source extent in the model comes from energies above 1 \text{GeV}\xspace. Furthermore, other systematic errors become increasingly large at low energy. For our extension search (Section~\ref{extended_source_search_method}), we therefore used only photons with energies above 1 \text{GeV}\xspace. Figure~\ref{all_sensitivity} shows the flux threshold as a function of source extension for different background levels ($1\times$, $10\times$, and $100\times$ the nominal background), different spectral indices, and two different energy ranges (1 \text{GeV}\xspace to 100 \text{GeV}\xspace and 10 \text{GeV}\xspace to 100 \text{GeV}\xspace). The detection threshold is higher for sources in regions of higher background. 
When studying sources only at energies above 1 \text{GeV}\xspace, the LAT detection threshold (defined as the 1 \text{GeV}\xspace to 100 \text{GeV}\xspace flux at which $\langle{\ensuremath{\text{TS}_{\text{ext}}}}\xspace\rangle=16$) depends less strongly on the spectral index of the source. The index dependence of the detection threshold is even weaker when considering only photons with energies above 10 \text{GeV}\xspace because the PSF changes little from 10 \text{GeV}\xspace to 100 \text{GeV}\xspace. Overlaid on Figure~\ref{all_sensitivity} are the LAT-detected extended sources that will be discussed in Sections~\ref{validate_known} and \ref{new_ext_srcs_section}. The extension thresholds are tabulated in Table~\ref{all_sensitivity_table}. Finally, Figure~\ref{time_sensitivity} shows the projected detection threshold of the LAT to extension with a 10 year exposure against 10 times the isotropic background measured by EGRET. This background is representative of the background near the Galactic plane. For small extended sources, the threshold improves by a factor larger than the square root of the ratio of exposures because the LAT is signal-limited at high energies, where the present analysis is most sensitive. For large extended sources, the relevant background is integrated over a larger spatial region, and so the improvement is closer to the square root of the ratio of exposures, as expected when the sensitivity is limited by Poisson fluctuations in the background. \section{Testing Against Source Confusion} \label{dual_localization_method} Using only LAT data, it is impossible to discriminate between a spatially extended source and multiple point-like sources separated by angular distances comparable to or smaller than the size of the LAT PSF. To assess the plausibility of source confusion for sources with ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace\ge16$, we developed an algorithm to test whether a region contains two point-like sources. 
The algorithm works by simultaneously fitting in \ensuremath{\mathtt{pointlike}}\xspace the positions and spectra of the two point-like sources. To help with convergence, it begins by dividing the source into two spatially coincident point-like sources and then fitting the sum and difference of the positions of the two sources without any limitations on the fit parameters. After simultaneously fitting the two positions and two spectra, we define \ensuremath{\text{TS}_{\text{2pts}}}\xspace as twice the increase in the log of the likelihood fitting the region with two point-like sources compared to fitting the region with one point-like source: \begin{equation} \ensuremath{\text{TS}_{\text{2pts}}}\xspace=2\log(\ensuremath{\mathcal{L}}\xspace_\text{2pts}/\ensuremath{\mathcal{L}}\xspace_\text{ps}). \end{equation} For the following analysis of LAT data, \ensuremath{\text{TS}_{\text{2pts}}}\xspace was computed by fitting the spectra of the two point-like sources in \ensuremath{\mathtt{gtlike}}\xspace using the best fit positions of the sources found by \ensuremath{\mathtt{pointlike}}\xspace. \ensuremath{\text{TS}_{\text{2pts}}}\xspace cannot be quantitatively compared to {\ensuremath{\text{TS}_{\text{ext}}}}\xspace using a simple likelihood-ratio test to evaluate which model is significantly better because the models are not nested \citep{statistics_with_care}. Even though the comparison of {\ensuremath{\text{TS}_{\text{ext}}}}\xspace with \ensuremath{\text{TS}_{\text{2pts}}}\xspace is not a calibrated test, ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace>\ensuremath{\text{TS}_{\text{2pts}}}\xspace$ indicates that the likelihood for the extended source hypothesis is higher than for two point-like sources and we only consider a source to be extended if ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace>\ensuremath{\text{TS}_{\text{2pts}}}\xspace$. 
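Both test statistics reduce to differences of maximum log-likelihoods between fits to the same data, so once the fits have converged they are trivial to evaluate. A minimal sketch (the log-likelihood values in the example are arbitrary placeholders, not fit results):

```python
def ts_2pts(logL_2pts, logL_ps):
    """TS_2pts = 2 log(L_2pts / L_ps) from maximum log-likelihoods."""
    return 2.0 * (logL_2pts - logL_ps)

def ts_ext(logL_ext, logL_ps):
    """TS_ext = 2 log(L_ext / L_ps), the analogous extension test."""
    return 2.0 * (logL_ext - logL_ps)

# Placeholder values: here the extended fit beats the two-point-source fit.
print(ts_ext(-480.0, -500.0) > ts_2pts(-482.0, -500.0))  # True
```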
We considered using the Bayesian information criterion \citep[BIC,][]{BIC_statistical_test} as an alternative Bayesian formulation for this test, but it is difficult to apply to LAT data because it contains a term including the number of data points. For studying $\gamma$-ray sources in LAT data, we analyze relatively large regions of the sky to better define the contributions from diffuse backgrounds and nearby point sources. This is important for accurately evaluating source locations and fluxes but the fraction of data directly relevant to the evaluation of the parameters for the source of interest is relatively small. As an alternative, we considered the Akaike information criterion test \citep[\text{AIC}\xspace,][]{AIC_statistical_test}. The \text{AIC}\xspace is defined as $\text{AIC}\xspace=2k-2\log\ensuremath{\mathcal{L}}\xspace$, where $k$ is the number of parameters in the model. In this formulation, the best hypothesis is considered to be the one that minimizes the \text{AIC}\xspace. The first term penalizes models with additional parameters. The two point-like sources hypothesis has three more parameters than the single extended source hypothesis (two more spatial parameters and two more spectral parameters compared to one extension parameter), so the comparison $\text{AIC}\xspace_\text{ext} < \text{AIC}\xspace_\text{2pts}$ is formally equivalent to ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace + 6 > \ensuremath{\text{TS}_{\text{2pts}}}\xspace$. Our criterion for accepting extension (${\ensuremath{\text{TS}_{\text{ext}}}}\xspace > \ensuremath{\text{TS}_{\text{2pts}}}\xspace$) is thus equivalent to requesting that the AIC-based empirical support for the two point-like sources model be ``considerably less'' than for the extended source model, following the classification by \cite{aic_stats_book}. We assessed the power of the ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace>\ensuremath{\text{TS}_{\text{2pts}}}\xspace$ test with a Monte Carlo study. 
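The stated equivalence follows directly from the definitions: with $\text{AIC}\xspace=2k-2\log\ensuremath{\mathcal{L}}\xspace$ and three extra parameters in the two-point-source model, $\text{AIC}\xspace_\text{ext} < \text{AIC}\xspace_\text{2pts}$ rearranges to $2\log\ensuremath{\mathcal{L}}\xspace_\text{2pts}-2\log\ensuremath{\mathcal{L}}\xspace_\text{ext} < 6$, i.e. ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace + 6 > \ensuremath{\text{TS}_{\text{2pts}}}\xspace$. A numerical check of the equivalence, with arbitrary placeholder log-likelihoods:

```python
def aic(k, logL):
    """Akaike information criterion, AIC = 2k - 2 log L."""
    return 2.0 * k - 2.0 * logL

# Arbitrary placeholder log-likelihoods for the three hypotheses.
logL_ps, logL_ext, logL_2pts = -500.0, -480.0, -478.0
k_ext, k_2pts = 5, 8  # the two-point model carries 3 extra parameters

ts_ext = 2.0 * (logL_ext - logL_ps)    # 40
ts_2pts = 2.0 * (logL_2pts - logL_ps)  # 44

# AIC preference for extension is the same condition as TS_ext + 6 > TS_2pts.
assert (aic(k_ext, logL_ext) < aic(k_2pts, logL_2pts)) == (ts_ext + 6 > ts_2pts)
```

The parameter counts ($k_\text{ext}=5$, $k_\text{2pts}=8$) reflect the bookkeeping in the text: two spatial plus two spectral extra parameters for the second point source, minus the one extension parameter of the disk model.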
We simulated one spatially extended source and fit it as both an extended source and as two point-like sources using \ensuremath{\mathtt{pointlike}}\xspace. We then simulated two point-like sources and fit them with the same two hypotheses. By comparing the distribution of \ensuremath{\text{TS}_{\text{2pts}}}\xspace and {\ensuremath{\text{TS}_{\text{ext}}}}\xspace computed by \ensuremath{\mathtt{pointlike}}\xspace for the two cases, we evaluated how effective the ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace>\ensuremath{\text{TS}_{\text{2pts}}}\xspace$ test is at rejecting cases of source confusion as well as how likely it is to incorrectly reject that an extended source is spatially extended. All sources were simulated using the same time range as in Section~\ref{extension_sensitivity} against a background 10 times the isotropic background measured by EGRET, representative of the background near the Galactic plane. We did this study first in the energy range from 1 \text{GeV}\xspace to 100 \text{GeV}\xspace by simulating extended sources of flux $4\times10^{-9}$ \ensuremath{\text{ph}\,\text{cm}^{-2}\,\text{s}^{-1}}\xspace integrated from 1 \text{GeV}\xspace to 100 \text{GeV}\xspace and a power-law spectral model with spectral index 2. This spectrum was picked to be representative of the new extended sources that were discovered in the following analysis when looking in the 1 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range (see Section~\ref{new_ext_srcs_section}). We simulated these sources using uniform disk spatial models with extensions varying up to $1\ensuremath{^\circ}\xspace$. 
Figure~\ref{confusion_extended_plot}a shows the distribution of {\ensuremath{\text{TS}_{\text{ext}}}}\xspace and \ensuremath{\text{TS}_{\text{2pts}}}\xspace and Figure~\ref{confusion_extended_plot}c shows the distribution of ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace-\ensuremath{\text{TS}_{\text{2pts}}}\xspace$ as a function of the simulated extension of the source for 200 statistically independent simulations. Figure~\ref{confusion_2pts_plot}a shows the same plot but when fitting two simulated point-like sources each with half of the flux of the spatially extended source and with the same spectral index as the extended source. Finally, Figure~\ref{confusion_2pts_plot}c shows the same plot with each point-like source having the same flux but different spectral indices. One point-like source had a spectral index of 1.5 and the other an index of 2.5. These indices are representative of the range of indices of LAT-detected sources. The same four plots are shown in Figure~\ref{confusion_extended_plot}b, \ref{confusion_extended_plot}d, \ref{confusion_2pts_plot}b, and \ref{confusion_2pts_plot}d but this time when analyzing a source of flux $10^{-9}$ \ensuremath{\text{ph}\,\text{cm}^{-2}\,\text{s}^{-1}}\xspace (integrated from 10 \text{GeV}\xspace to 100 \text{GeV}\xspace) only in the 10 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range. This flux is typical of the new extended sources discovered using only photons with energies between 10 \text{GeV}\xspace and 100 \text{GeV}\xspace (see Section~\ref{new_ext_srcs_section}). Several interesting conclusions can be made from this study. As one would expect, ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace-\ensuremath{\text{TS}_{\text{2pts}}}\xspace$ is mostly positive when fitting the simulated extended sources. 
In the 1 \text{GeV}\xspace to 100 \text{GeV}\xspace analysis, only 11 of the 200 simulated extended sources had ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace>16$ but were incorrectly rejected due to \ensuremath{\text{TS}_{\text{2pts}}}\xspace being greater than {\ensuremath{\text{TS}_{\text{ext}}}}\xspace. In the 10 \text{GeV}\xspace to 100 \text{GeV}\xspace analysis, only 7 of the 200 sources were incorrectly rejected. From this, we conclude that this test is unlikely to incorrectly reject truly spatially extended sources. On the other hand, it is often the case that ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace>16$ when testing the two simulated point-like sources for extension. This is especially the case when the two sources had the same spectral index. Forty out of 200 sources in the 1 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range and 43 out of 200 sources in the 10 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range had ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace>16$. But in these cases, we always found the single extended source fit to be worse than the two point-like source fit. From this, we conclude that the ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace>\ensuremath{\text{TS}_{\text{2pts}}}\xspace$ test is powerful at discarding cases in which the true emission comes from two point-like sources. The other interesting feature in Figure~\ref{confusion_extended_plot}a and \ref{confusion_extended_plot}b is that for simulated extended sources with typical sizes ($\sigma\sim0\fdg5$), one can often obtain almost as large an increase in likelihood fitting the source as two point-like sources ($\ensuremath{\text{TS}_{\text{2pts}}}\xspace\sim{\ensuremath{\text{TS}_{\text{ext}}}}\xspace$). 
This is because although the two point-like sources represent an incorrect spatial model, the second source has four additional degrees of freedom (two spatial and two spectral parameters) and can therefore easily model much of the extended source and statistical fluctuations in the data. This effect is most pronounced when using photons with energies between 1 \text{GeV}\xspace and 100 \text{GeV}\xspace where the PSF is broader. From this Monte Carlo study, we can see the limits of an analysis with LAT data of spatially extended sources. Section~\ref{monte_carlo_validation} showed that we have a statistical test that finds when a LAT source is not well described by the PSF. But this test does not uniquely prove that the emission originates from spatially extended emission instead of from multiple unresolved sources. Demanding that ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace>\ensuremath{\text{TS}_{\text{2pts}}}\xspace$ is a powerful second test to avoid cases of simple confusion of two point-like sources. But it could always be the case that an extended source is actually the superposition of multiple point-like or extended sources that could be resolved with deeper observations of the region. There is nothing about this conclusion unique to analyzing LAT data, but the broad PSF of the LAT and the density of sources expected to be \text{GeV}\xspace emitters in the Galactic plane makes this issue more significant for analyses of LAT data. When possible, multiwavelength information should be used to help select the best model of the sky. \section{Test of 2LAC Sources} \label{test_2lac_sources} For all following analyses of LAT data, we used the same two-year dataset that was used in the 2FGL catalog spanning from 2008 August 4 to 2010 August 1. We applied the same acceptance cuts and we used the same P7\_V6 Source class event selection and IRFs \citep{lat_on_orbit_psf}. 
When analyzing sources in \ensuremath{\mathtt{pointlike}}\xspace, we used a circular $10\ensuremath{^\circ}\xspace$ region of interest (ROI) centered on our source and eight energy bins per logarithmic decade in energy. When refitting the region in \ensuremath{\mathtt{gtlike}}\xspace using the best fit spatial and spectral models from \ensuremath{\mathtt{pointlike}}\xspace, we used the `binned likelihood' mode of \ensuremath{\mathtt{gtlike}}\xspace on a $14\ensuremath{^\circ}\xspace\times14\ensuremath{^\circ}\xspace$ ROI with a pixel size of 0\fdg03. Unless explicitly mentioned, we used the same background model as 2FGL to represent the Galactic diffuse, isotropic, and Earth limb emission. To compensate for possible residuals in the diffuse emission model, the Galactic emission was scaled by a power-law and the normalization of the isotropic component was left free. Unless explicitly mentioned, we used all 2FGL sources within $15\ensuremath{^\circ}\xspace$ of our source as our list of background sources and we refit the spectral parameters of all sources within $2\ensuremath{^\circ}\xspace$ of the source. To validate our method, we tested LAT sources associated with AGN for extension. \text{GeV}\xspace emission from AGN is believed to originate from collimated jets. Therefore AGN are not expected to be spatially resolvable by the LAT and provide a good calibration source to demonstrate that our extension detection method does not misidentify point-like sources as being extended. We note that megaparsec-scale $\gamma$-ray halos around AGNs have been hypothesized to be resolvable by the LAT \citep{pair_halo_paper}. However, no such halo has been discovered in the LAT data so far \citep{neronov_agn_halo}. Following 2FGL, the LAT Collaboration published the Second LAT AGN Catalog (2LAC), a list of high latitude ($|b|>10\ensuremath{^\circ}\xspace$) sources that had a high probability association with AGN \citep{second_agn_cat}. 
2LAC associated 1016 2FGL sources with AGN. To avoid systematic problems with AGN classification, we selected only the 885 AGN that made it into the clean AGN sub-sample defined in the 2LAC paper. An AGN association is considered clean only if it has a high probability of association $P\ge 80\%$, if it is the only AGN associated with the 2FGL source, and if no analysis flags have been set for the source in the 2FGL catalog. These last two conditions are important for our analysis. Source confusion may look like a spatially extended source and flagged 2FGL sources may correlate with unmodeled structure in the diffuse emission. Of the 885 clean AGN, we selected the 733 sources that were significantly detected above 1 \text{GeV}\xspace and fit each of them for extension. The cumulative density of {\ensuremath{\text{TS}_{\text{ext}}}}\xspace for these AGN is compared to the $\chi^2_1/2$ distribution of Equation~\ref{ts_ext_distribution} in Figure~\ref{agn_ts_ext}. The {\ensuremath{\text{TS}_{\text{ext}}}}\xspace distribution for the AGN shows reasonable agreement with the theoretical distribution and no AGN was found to be significantly extended (${\ensuremath{\text{TS}_{\text{ext}}}}\xspace>16$). The observed discrepancy from the theoretical distribution is likely due to small systematics in our model of the LAT PSF and the Galactic diffuse emission (see Section~\ref{systematic_errors_on_extension}). The discrepancy could also in a few cases be due to confusion with a nearby undetected source. We note that the Monte Carlo study of Section~\ref{monte_carlo_validation} effectively used perfect IRFs and a perfect model of the sky. The overall agreement with the expected distribution demonstrates that we can use {\ensuremath{\text{TS}_{\text{ext}}}}\xspace as a measure of the statistical significance of the detection of the extension of a source. 
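Since the null distribution of {\ensuremath{\text{TS}_{\text{ext}}}}\xspace is $\chi^2_1/2$, the chance probability of measuring at least a given {\ensuremath{\text{TS}_{\text{ext}}}}\xspace for a genuinely point-like source is half the $\chi^2_1$ survival function. A short sketch using \texttt{scipy}:

```python
from scipy.stats import chi2

def chance_probability(ts_ext):
    """P(TS_ext >= observed) for a point-like source, assuming the
    chi^2_1/2 null distribution (half of chi^2 with 1 degree of freedom;
    the other half of the probability sits at TS_ext = 0)."""
    return 0.5 * chi2.sf(ts_ext, df=1)

# The TS_ext >= 16 detection criterion corresponds to roughly 4 sigma:
p = chance_probability(16.0)  # roughly 3.2e-5
```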
We note that the LAT PSF used in this study was determined empirically by fitting the distributions of gamma rays around bright AGN (see Section~\ref{systematic_errors_on_extension}). Finding that the AGN we test are not extended is not surprising. This validation analysis is not suitable to reject any hypotheses about the existence of megaparsec-scale halos around AGN. \section{Analysis of Extended Sources Identified in the 2FGL Catalog} \label{validate_known} As further validation of our method for studying spatially extended sources, we reanalyzed the twelve spatially extended sources that were included in the 2FGL catalog \citep{second_cat}. Even though these sources had all been the subjects of dedicated analyses and separate publications, and had been fit with a variety of spatial models, it is valuable to show that these sources are significantly extended using our systematic method assuming radially-symmetric uniform disk spatial models. On the other hand, for some of these sources a uniform disk spatial model does not well describe the observed extended emission and so the dedicated publications by the LAT collaboration provide better models of these sources. Six extended SNRs were included in the 2FGL catalog: W51C, IC~443, W28, W44, the Cygnus Loop, and W30 \citep{w51c,ic443,w28,w44,cygnus_loop_lat,w30_lat}. Using photons with energies between 1 \text{GeV}\xspace and 100 \text{GeV}\xspace, our analysis significantly detected that these six SNRs are spatially extended. Two nearby satellite galaxies of the Milky Way, the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC), were included in the 2FGL catalog as spatially extended sources \citep{lmc,smc}. Their extensions were significantly detected using photons with energies between 1 \text{GeV}\xspace and 100 \text{GeV}\xspace. 
Our fit extensions are comparable to the published results, but we note that the previous LAT Collaboration publication on the LMC used a more complicated surface brightness profile of two 2D Gaussians when fitting it \citep{lmc}. Three PWNe, MSH\,15$-$52, Vela X, and HESS\,J1825$-$137, were fit as extended sources in the 2FGL analysis \citep{msh1552,velax,fermi_hess_j1825}. In the present analysis, HESS\,J1825$-$137 was significantly detected using photons with energies between 10 \text{GeV}\xspace and 100 \text{GeV}\xspace. To avoid confusion with the nearby bright pulsar PSR\,J1509$-$5850, MSH\,15$-$52 had to be analyzed at high energies. Using photons with energies above 10 \text{GeV}\xspace, we fit the extension of MSH\,15$-$52 to be consistent with the published size but with {\ensuremath{\text{TS}_{\text{ext}}}}\xspace=6.6. Our analysis was unable to resolve Vela X, which would have required first removing the pulsed photons from the Vela pulsar, a task beyond the scope of this paper. Our analysis also failed to detect a significant extension for the Centaurus A Lobes because the shape of the source is significantly different from a uniform radially-symmetric disk \citep{cen_a_lat}. Our analysis of these sources is summarized in Table~\ref{known_extended_sources}. This table includes the best fit positions and extensions of these sources when fitting them with a radially-symmetric uniform disk model. It also includes the best fit spectral parameters for each source. The positions and extensions of Vela X and the Centaurus A Lobes were taken from \cite{velax,cen_a_lat} and are included in this table for completeness. \section{Systematic Errors on Extension} \label{systematic_errors_on_extension} We developed two criteria for estimating systematic errors on the extensions of the sources. First, we estimated a systematic error due to uncertainty in our knowledge of the LAT PSF. 
Before launch, the LAT PSF was determined by detector simulations which were verified in accelerator beam tests \citep{atwood_LAT_mission}. However, in-flight data revealed a discrepancy above 3 \text{GeV}\xspace in the PSF compared to the angular distribution of photons from bright AGN \citep{lat_on_orbit_psf}. Subsequently, the PSF was fit empirically to bright AGN and this empirical parameterization is used in the P7\_V6 IRFs. To account for the uncertainty in our knowledge of the PSF, we refit our extended source candidates using the pre-flight Monte Carlo representation of the PSF and consider the difference in extension found using the two PSFs as a systematic error on the extension of a source. The same approach was used in \cite{ic443}. We believe that our parameterization of the PSF from bright AGN is substantially better than the Monte Carlo representation of the PSF so this systematic error is conservative. We estimated a second systematic error on the extension of a source due to uncertainty in our model of the Galactic diffuse emission by using an alternative approach to modeling the diffuse emission which takes as input templates calculated by GALPROP\footnote{GALPROP is a software package for calculating the Galactic $\gamma$-ray emission based on a model of cosmic-ray propagation in the Galaxy and maps of the distributions of the components of the interstellar medium \citep{galprop1998,galprop2011}. See also \url{http://galprop.stanford.edu/} for details.} but then fits each template locally in the surrounding region. The particular GALPROP model that was used as input is described in the analysis of the isotropic diffuse emission with LAT data \citep{isotropic_lat}. The intensities of various components of the Galactic diffuse emission were fitted individually using a spatial distribution predicted by the model. 
We considered separate contributions from cosmic-ray interactions with molecular hydrogen, with atomic and ionized hydrogen, with residual gas traced by dust \citep{isabelle_dark_gass}, and with the interstellar radiation field. We further split the contributions from interactions with molecular and atomic hydrogen to the Galactic diffuse emission according to the distance from the Galactic center at which they are produced. Hence, we replaced the standard diffuse emission model by 18 individually fitted templates to describe individual components of the diffuse emission. A similar crosscheck was used in an analysis of RX\,J1713.7$-$3946 by the LAT Collaboration \citep{rx_j1713_lat}. It is not expected that this diffuse model is superior to the standard LAT model obtained through an all-sky fit. However, adding degrees of freedom to the background model can remove likely spurious sources that correlate with features in the Galactic diffuse emission. Therefore, this tests systematics that may be due to imperfect modeling of the diffuse emission in the region. Nevertheless, this alternative approach to modeling the diffuse emission does not test all systematics related to the diffuse emission model. In particular, because the alternative approach uses the same underlying gas maps, it cannot be used to assess systematics due to insufficient resolution of the underlying maps. Structure in the diffuse emission that is not correlated with these maps will also not be assessed by this test. We do not expect the systematic error due to uncertainties in the PSF to be correlated with the systematic error due to uncertainty in the Galactic diffuse emission. Therefore, the total systematic error on the extension of a source was obtained by adding the two errors in quadrature. There is another systematic error on the size of a source due to issues modeling nearby sources in crowded regions of the sky. It is beyond the scope of this paper to address this systematic error. 
Therefore, for sources in crowded regions the systematic errors quoted in this paper may not represent the full set of systematic errors associated with this analysis. \section{Extended Source Search Method} \label{extended_source_search_method} Having demonstrated that we understand the statistical issues associated with analyzing spatially extended sources (Sections~\ref{monte_carlo_validation} and~\ref{test_2lac_sources}) and that our method can correctly analyze the extended sources included in 2FGL (Section~\ref{validate_known}), we applied this method to search for new spatially extended \text{GeV}\xspace sources. The data and general analysis settings are as described in Section~\ref{test_2lac_sources}. Ideally, we would apply a completely blind and uniform search that tests the extension of each 2FGL source in the presence of all other 2FGL sources to find a complete list of all spatially extended sources. As our test of AGN in Section~\ref{test_2lac_sources} showed, at high Galactic latitude where the source density is not as large and the diffuse emission is less structured, this method works well. But this is infeasible in the Galactic plane where we are most likely to discover new spatially extended sources. In the Galactic plane, this analysis is challenged by our imperfect model of the diffuse emission and by an imperfect model of nearby sources. The Monte Carlo study in Section~\ref{dual_localization_method} showed that the overall likelihood would greatly increase by fitting a spatially extended source as two point-like sources, so we expect that spatially extended sources would be modeled in the 2FGL catalog as multiple point-like sources. Furthermore, the positions of other nearby sources in the region close to an extended source could be biased by not correctly modeling the extension of the source. 
The 2FGL catalog contains a list of sources significant at energies above 100 \text{MeV}\xspace whereas we are most sensitive to spatial extension at higher energies. We therefore expect that at higher energies our analysis would be complicated by 2FGL sources no longer significant and by 2FGL sources whose positions were biased by diffuse emission at lower energies. To account for these issues, we first produced a large list of possibly extended sources employing very liberal search criteria and then refined the analysis of the promising candidates on a case by case basis. Our strategy was to test all point-like 2FGL sources for extension assuming they had a uniform radially-symmetric disk spatial model and a power-law spectral model. Although not all extended sources are expected to have a shape very similar to a uniform disk, Section~\ref{compare_source_size} showed that for many spatially extended sources the wide PSF of the LAT and limited statistics makes this a reasonable approximation. On the other hand, choosing this spatial model biases us against finding extended sources that are not well described by a uniform disk model such as shell-type SNRs. Before testing for extension, we automatically removed from the background model all other 2FGL sources within 0\fdg5 of the source. This distance is somewhat arbitrary, but was picked in hopes of finding extended sources with sizes on the order of $\sim1\ensuremath{^\circ}\xspace$ or smaller. On the other hand, by removing these nearby background sources we expect to also incorrectly add to our list of extended source candidates point-like sources that are confused with nearby sources. To screen out obvious cases of source confusion, we performed the dual localization procedure described in Section~\ref{dual_localization_method} to compare the extended source hypothesis to the hypothesis of two independent point-like sources. 
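The first screening step above, removing all other 2FGL sources within 0\fdg5 of the tested source, is a simple angular-separation cut. A minimal sketch, using hypothetical \texttt{(RA, Dec)} tuples in degrees to represent catalog positions:

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    # Clamp against floating-point excursions outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

def sources_to_remove(candidate, catalog, radius=0.5):
    """Catalog entries within `radius` degrees of the tested candidate,
    to be dropped from the background model before the extension fit."""
    ra0, dec0 = candidate
    return [src for src in catalog
            if angular_separation(ra0, dec0, src[0], src[1]) < radius]
```

For separations this small a flat-sky approximation would also suffice, but the exact great-circle formula costs nothing extra.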
As was shown in Section~\ref{extension_sensitivity}, little sensitivity is gained by using photons with energies below 1 \text{GeV}\xspace. In addition, the broad PSF at low energy makes the analysis more susceptible to systematic errors arising from source confusion due to nearby soft point-like sources and from uncertainties in our modeling of the Galactic diffuse emission. For these reasons, we performed our search using only photons with energies between 1 \text{GeV}\xspace and 100 \text{GeV}\xspace. We also performed a second search for extended sources using only photons with energies between 10 \text{GeV}\xspace and 100 \text{GeV}\xspace. Although this approach tests the same sources, it is complementary because the Galactic diffuse emission is even less dominant above 10 \text{GeV}\xspace and because source confusion is less of an issue. A similar procedure was used to detect the spatial extensions of MSH\,15$-$52 and HESS\,J1825$-$137 with the LAT \citep{msh1552,fermi_hess_j1825}. When we applied this test to the 1861 point-like sources in the 2FGL catalog, our search found 117 extended source candidates in the 1 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range and 11 extended source candidates in the 10 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range. Most of the extended sources found above 10 \text{GeV}\xspace were also found above 1 \text{GeV}\xspace and in many cases multiple nearby point-like sources were found to be extended even though they fit the same emission region. For example, the sources 2FGL\,J1630.2$-$4752, 2FGL\,J1632.4$-$4753c, 2FGL\,J1634.4$-$4743c, and 2FGL\,J1636.3$-$4740c were all found to be spatially extended in the 10 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range even though they all fit to similar positions and sizes. For these situations, we manually discarded all but one of the 2FGL sources. 
Similarly, many of these sources were confused with nearby point-like sources or influenced by large-scale residuals in the diffuse emission. To help determine which of these fits found truly extended sources and when the extension was influenced by source confusion and diffuse emission, we generated a series of diagnostic plots. For each candidate, we generated a map of the residual TS by adding a new point-like source of spectral index 2 into the region at each position and finding the increase in likelihood when fitting its flux. Figure~\ref{res_tsmaps} shows this map around the most significantly extended source IC~443 when it is modeled both as a point-like source and as an extended source. The residual TS map indicates that the spatially extended model for IC~443 is a significantly better description of the observed photons and that there is no $\text{TS}\xspace>25$ residual in the region after modeling the source as being spatially extended. We also generated plots of the sum of all counts within a given distance of the source and compared them to the model predictions assuming the emission originated from a point-like source. An example radial integral plot is shown for the extended source IC~443 in Figure~\ref{four_plots_ic443}. For each source, we also made diffuse-emission-subtracted smoothed counts maps (shown for IC~443 in Figure~\ref{four_plots_ic443}). We found by visual inspection that in many cases our results were strongly influenced by large-scale residuals in the diffuse emission and hence the extension measure was unreliable. This was especially true in our analysis of sources in the 1 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range. An example of such a case is 2FGL\,J1856.2+0450c analyzed in the 1 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range. Figure~\ref{example_bad_fit} shows a diffuse-emission-subtracted smoothed counts map for this source with the best fit extension of the source overlaid. 
There appear to be large-scale residuals in the diffuse emission in this region along the Galactic plane. As a result, 2FGL\,J1856.2+0450c is fit to an extension of $\sim2\ensuremath{^\circ}\xspace$ and the result is statistically significant with {\ensuremath{\text{TS}_{\text{ext}}}}\xspace=45.4. However, by looking at the residuals it is clear that this complicated region is not well modeled. We manually discarded sources like this. We only selected extended source candidates in regions that did not appear dominated by these issues and where there was a multiwavelength counterpart. Because of these systematic issues, this search cannot be expected to be complete and it is likely that there are other spatially extended sources that this method missed. For each candidate that was not biased by neighboring point-like sources or by large-scale residuals in the diffuse emission model, we improved the model of the region by deciding on a case by case basis which background point-like sources should be kept. We kept in our model the sources that we believed represented physically distinct sources and we removed sources that we believed were included in the 2FGL catalog to compensate for residuals induced by not modeling the extension of the source. Soft nearby point-like 2FGL sources that were not significant at higher energies were frozen to the spectra predicted by 2FGL. When deciding which background sources to keep and which to remove, we used multiwavelength information about possibly extended source counterparts to help guide our choice. For each extended source presented in Section~\ref{new_ext_srcs_section}, we describe any modifications from 2FGL of the background model that were performed. In Table~\ref{fake_2fgl_sources}, we summarize the sources in the 2FGL catalog that we have concluded here correspond to residuals induced by not modeling the extensions of nearby extended sources. 
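The residual TS maps used for this screening can be sketched with a simple binned Poisson likelihood: at each test position a point-like source is added, only its flux is fit, and TS is twice the resulting log-likelihood gain. The toy implementation below uses a symmetric Gaussian as a stand-in for the PSF and a flat background; the grid size, PSF width, and background level are illustrative assumptions, not the values used in our analysis:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def poisson_logl(counts, model):
    """Binned Poisson log-likelihood, dropping the constant log(n!) term."""
    return float(np.sum(counts * np.log(model) - model))

def residual_ts_map(counts, background, psf_sigma=1.0):
    """TS for one additional point-like source at each pixel, with the
    background held fixed and only the new source's flux fit."""
    ny, nx = counts.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    logl_null = poisson_logl(counts, background)
    ts = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            # Symmetric Gaussian stand-in for the PSF at the test pixel.
            psf = np.exp(-((xx - i) ** 2 + (yy - j) ** 2) / (2 * psf_sigma ** 2))
            psf /= psf.sum()
            fit = minimize_scalar(
                lambda flux: -poisson_logl(counts, background + flux * psf),
                bounds=(0.0, float(counts.sum()) + 1.0), method="bounded")
            ts[j, i] = max(0.0, 2.0 * (-fit.fun - logl_null))
    return ts
```

A map of this kind makes unmodeled excesses visible at a glance: pixels where counts match the model give TS near zero, while an unmodeled source produces a localized TS peak.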
The best fit positions of nearby point-like sources can be influenced by the extended source and vice versa. Similarly, the best fit positions of nearby point-like sources in the 2FGL catalog can be biased by systematic issues at lower energies. Therefore, after selecting the list of background sources, we iteratively refit the positions and spectra of nearby background sources as well as the positions and extensions of the analyzed spatially extended sources until the overall fit converged globally. For each extended source, we will describe the positions of any relocalized background sources. After obtaining the overall best fit positions and extensions of all of the sources in the region using \ensuremath{\mathtt{pointlike}}\xspace, we refit the spectral parameters of the region using \ensuremath{\mathtt{gtlike}}\xspace. With \ensuremath{\mathtt{gtlike}}\xspace, we obtained a second measure of {\ensuremath{\text{TS}_{\text{ext}}}}\xspace. We only consider a source to be extended when both \ensuremath{\mathtt{pointlike}}\xspace and \ensuremath{\mathtt{gtlike}}\xspace agree that ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace\ge16$. We further required that ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace\ge16$ using the alternative approach to modeling the diffuse emission presented in Section~\ref{systematic_errors_on_extension}. We then replaced the spatially extended source with two point-like sources and refit the positions and spectra of the two point-like sources to calculate \ensuremath{\text{TS}_{\text{2pts}}}\xspace. We only consider a source to be spatially extended, instead of being the result of confusion of two point-like sources, if ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace>\ensuremath{\text{TS}_{\text{2pts}}}\xspace$. As was shown in Section~\ref{dual_localization_method}, this test is fairly powerful at removing situations in which the emission actually originates from two distinct point-like sources instead of one spatially extended source. 
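The acceptance criteria described above can be collected into a single predicate. The sketch below assumes, as one possible convention, that the \ensuremath{\mathtt{gtlike}}\xspace value of {\ensuremath{\text{TS}_{\text{ext}}}}\xspace is the one compared against \ensuremath{\text{TS}_{\text{2pts}}}\xspace:

```python
def accept_as_extended(ts_ext_pointlike, ts_ext_gtlike,
                       ts_ext_alt_diffuse, ts_2pts, threshold=16.0):
    """Accept a candidate as spatially extended only if TS_ext >= 16 in
    pointlike, in gtlike, and with the alternative diffuse model, and the
    extended-source fit beats the two-point-source fit."""
    return (ts_ext_pointlike >= threshold
            and ts_ext_gtlike >= threshold
            and ts_ext_alt_diffuse >= threshold
            and ts_ext_gtlike > ts_2pts)
```

Requiring all four conditions simultaneously makes the selection conservative: a failure in any one fit configuration is enough to reject the candidate.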
On the other hand, it is still possible that longer observations could resolve additional structure or new sources that the analysis cannot currently detect. Considering the very complicated morphologies of extended sources observed at other wavelengths and the high density of possible sources that are expected to emit at \text{GeV}\xspace energies, it is likely that in some of these regions further observations will reveal that the emission is significantly more complicated than the simple radially-symmetric uniform disk model that we assume. \section{New Extended Sources} \label{new_ext_srcs_section} Nine extended sources not included in the 2FGL catalog were found by our extended source search. Two of these have been previously studied in dedicated publications: RX\,J1713.7$-$3946 and Vela Jr. \citep{rx_j1713_lat,vela_jr_lat}. Two of these sources were found when using photons with energies between 1 \text{GeV}\xspace and 100 \text{GeV}\xspace and seven were found when using photons with energies between 10 \text{GeV}\xspace and 100 \text{GeV}\xspace. For the sources found at energies above 10 \text{GeV}\xspace, we restrict our analysis to higher energies because of the issues of source confusion and diffuse emission modeling described in Section~\ref{extended_source_search_method}. The spectral and spatial properties of these nine sources are summarized in Table~\ref{new_ext_srcs_table} and the results of our investigation of systematic errors are presented in Table~\ref{alt_diff_model_results}. Table~\ref{alt_diff_model_results} also compares the likelihood assuming the source is spatially extended to the likelihood assuming that the emission originates from two independent point-like sources. 
For these new extended sources, ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace>\ensuremath{\text{TS}_{\text{2pts}}}\xspace$, so we conclude that the \text{GeV}\xspace emission does not originate from two physically distinct point-like sources (see Section~\ref{dual_localization_method}). Table~\ref{alt_diff_model_results} also includes the results of the extension fits using variations of the PSF and the Galactic diffuse model described in Section~\ref{systematic_errors_on_extension}. There is good agreement between {\ensuremath{\text{TS}_{\text{ext}}}}\xspace and the fit size using the standard analysis, the alternative approach to modeling the diffuse emission, and the alternative PSF. This suggests that the sources are robust against mis-modeled features in the diffuse emission model and uncertainties in the PSF. \subsection{2FGL\,J0823.0$-$4246} \label{section_2FGL_J0823.0-4246} 2FGL\,J0823.0$-$4246 was found by our search to be an extended source candidate in the 1 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range and is spatially coincident with the SNR Puppis A. Figure~\ref{1FGL_J0823.3-4248} shows a counts map of this source. There are two nearby 2FGL sources, 2FGL\,J0823.4$-$4305 and 2FGL\,J0821.0$-$4254, that are also coincident with the SNR but that do not appear to represent physically distinct sources. We concluded that these nearby point-like sources were included in the 2FGL catalog to compensate for residuals induced by not modeling the extension of this source and removed them from our model of the sky. After removing these sources, 2FGL\,J0823.0$-$4246 was found to have an extension $\sigma=0\fdg37\pm0\fdg03_\text{stat}\xspace\pm0\fdg02_\text{sys}\xspace$ with ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace=48.0$. Figure~\ref{snr_seds} shows the spectrum of this source. Puppis A has been studied in detail at radio \citep{puppis_a_vla} and X-ray wavelengths \citep{rosat_puppis_a,suzaku_puppis_a}. 
The fit extension of 2FGL\,J0823.0$-$4246 matches well the size of Puppis A in X-ray. The distance of Puppis A has been estimated to be 2.2 kpc \citep{reynoso_1995,reynoso_2003}, which leads to a 1 \text{GeV}\xspace to 100 \text{GeV}\xspace luminosity of $\sim 3\times 10^{34}$ erg$\,\text{s}\xspace^{-1}$. No molecular clouds have been observed directly adjacent to Puppis A \citep{co_eastern_puppis_a}, similar to the LAT-detected Cygnus Loop SNR \citep{cygnus_loop_lat}. The luminosity of Puppis A is also smaller than that of other SNRs believed to interact with molecular clouds \citep{w51c,ic443,w44,w28,w49b_lat}. \subsection{2FGL\,J0851.7$-$4635} \label{section_2FGL_J0851.7-4635} 2FGL\,J0851.7$-$4635 was found by our search to be an extended source candidate in the 10 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range and is spatially coincident with the SNR Vela Jr. This source was recently studied by the LAT Collaboration in \cite{vela_jr_lat}. Figure~\ref{Vela_Jr} shows a counts map of the source. Overlaid on Figure~\ref{Vela_Jr} are \text{TeV}\xspace contours of Vela Jr. \citep{vela_jr_hess}. There are three point-like 2FGL sources, 2FGL\,J0848.5$-$4535, 2FGL\,J0853.5$-$4711, and 2FGL\,J0855.4$-$4625, which correlate with the multiwavelength emission of this SNR but do not appear to be physically distinct sources. They were most likely included in the 2FGL catalog to compensate for residuals induced by not modeling the extension of Vela Jr. and were removed from our model of the sky. With this model of the background, 2FGL\,J0851.7$-$4635 was found to have an extension of $\sigma=1\fdg15\pm0\fdg08_\text{stat}\xspace\pm0\fdg02_\text{sys}\xspace$ with ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace=86.8$. The LAT size matches well the \text{TeV}\xspace morphology of Vela Jr. 
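The Puppis A luminosity quoted above follows from the standard isotropic relation $L = 4\pi d^2 F$. A minimal sketch (the 1--100 GeV energy flux value below is our own illustrative assumption, chosen only so that the numbers reproduce the quoted $\sim3\times10^{34}$ erg s$^{-1}$ at 2.2 kpc; it is not quoted in the text):

```python
import math

KPC_IN_CM = 3.086e21  # centimeters per kiloparsec

def isotropic_luminosity(energy_flux, distance_kpc):
    """L = 4 * pi * d^2 * F, with F in erg cm^-2 s^-1 and d in kpc."""
    d_cm = distance_kpc * KPC_IN_CM
    return 4.0 * math.pi * d_cm**2 * energy_flux

# Illustrative energy flux (assumed, not from the text):
L = isotropic_luminosity(5e-11, 2.2)  # ~3e34 erg/s at 2.2 kpc
```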
While fitting the extension of 2FGL\,J0851.7$-$4635, we iteratively relocalized the position of the nearby point-like 2FGL source 2FGL\,J0854.7$-$4501 to $(l,b)=(266\fdg24,0\fdg49)$ to better fit its position at high energies. \subsection{2FGL\,J1615.0$-$5051} \label{section_2FGL_J1615.0-5051} 2FGL\,J1615.0$-$5051 and 2FGL\,J1615.2$-$5138 were both found to be extended source candidates in the 10 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range. Because they are less than $1\ensuremath{^\circ}\xspace$ away from each other, they needed to be analyzed simultaneously. 2FGL\,J1615.0$-$5051 is spatially coincident with the extended \text{TeV}\xspace source HESS\,J1616$-$508 and 2FGL\,J1615.2$-$5138 is coincident with the extended \text{TeV}\xspace source HESS\,J1614$-$518. Figure~\ref{1FGL_J1613.6-5100c} shows a counts map of these sources and overlays the \text{TeV}\xspace contours of HESS\,J1616$-$508 and HESS\,J1614$-$518 \citep{hess_plane_survey}. The figure shows that the 2FGL source 2FGL\,J1614.9$-$5212 is very close to 2FGL\,J1615.2$-$5138 and correlates with the same extended \text{TeV}\xspace source as 2FGL\,J1615.2$-$5138. We concluded that this source was included in the 2FGL catalog to compensate for residuals induced by not modeling the extension of 2FGL\,J1615.2$-$5138 and removed it from our model of the sky. With this model of the sky, we iteratively fit the extensions of 2FGL\,J1615.0$-$5051 and 2FGL\,J1615.2$-$5138. 2FGL\,J1615.0$-$5051 was found to have an extension $\sigma=0\fdg32\pm0\fdg04_\text{stat}\xspace\pm0\fdg01_\text{sys}\xspace$ and {\ensuremath{\text{TS}_{\text{ext}}}}\xspace=16.7. The \text{TeV}\xspace counterpart of 2FGL\,J1615.0$-$5051 was fit with a radially-symmetric Gaussian surface brightness profile with $\sigma=0\fdg136\pm0\fdg008$ \citep{hess_plane_survey}. 
This \text{TeV}\xspace size corresponds to a 68\% containment radius of ${\ensuremath{\text{r}_{68}}}\xspace=0\fdg21\pm0\fdg01$, comparable to the LAT size ${\ensuremath{\text{r}_{68}}}\xspace=0\fdg26\pm0\fdg03$. Figure~\ref{hess_seds} shows that the spectrum of 2FGL\,J1615.0$-$5051 at \text{GeV}\xspace energies connects to the spectrum of HESS\,J1616$-$508 at \text{TeV}\xspace energies. HESS\,J1616$-$508 is located in the region of two SNRs, RCW~103 (G332.4$-$0.4) and Kes~32 (G332.4+0.1), but is not spatially coincident with either of them \citep{hess_plane_survey}. HESS\,J1616$-$508 is near three pulsars, PSR\,J1614$-$5048, PSR\,J1616$-$5109, and PSR\,J1617$-$5055 \citep{discovery_of_PSR_J1617-5055,integral_HESS_J1616-508}. Only PSR\,J1617$-$5055 is energetically capable of powering the \text{TeV}\xspace emission and \cite{hess_plane_survey} speculated that HESS\,J1616$-$508 could be a PWN powered by this young pulsar. Because HESS\,J1616$-$508 is $9\arcmin$ away from PSR\,J1617$-$5055, this would require an asymmetric X-ray PWN to power the \text{TeV}\xspace emission. \text{{\em Chandra}}\xspace ACIS observations revealed an underluminous PWN of size $\sim1\arcmin$ around the pulsar that was not oriented towards the \text{TeV}\xspace emission, rendering this association uncertain \citep{discovery_of_pwn_for_PSR_J1617-5055}. No other promising counterparts were observed at X-ray and soft $\gamma$-ray energies by \text{{\em Suzaku}}\xspace \citep{suzakzu_HESS_J1616-508}, \text{{\em Swift}/XRT}\xspace, IBIS/ISGRI, BeppoSAX and \text{{\em XMM-Newton}}\xspace \citep{integral_HESS_J1616-508}. \cite{discovery_of_pwn_for_PSR_J1617-5055} discovered additional diffuse emission towards the center of HESS\,J1616$-$508 using archival radio and infrared observations. Deeper observations will likely be necessary to understand this $\gamma$-ray source. 
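The conversions between fitted sizes and 68\% containment radii used throughout this section follow directly from the assumed surface-brightness profiles: for a radially symmetric 2D Gaussian, $r_{68}=\sigma\sqrt{-2\ln(1-0.68)}\approx1.51\,\sigma$, while for a uniform disk of radius $\sigma$ the enclosed fraction grows as $(r/\sigma)^2$, so $r_{68}=\sigma\sqrt{0.68}\approx0.82\,\sigma$. A sketch of both conversions:

```python
import math

def r68_from_gaussian(sigma_deg):
    """68% containment radius of a radially symmetric 2D Gaussian:
    r68 = sigma * sqrt(-2 * ln(1 - 0.68)) ~= 1.51 * sigma."""
    return sigma_deg * math.sqrt(-2.0 * math.log(1.0 - 0.68))

def r68_from_disk(radius_deg):
    """68% containment radius of a uniform disk: the enclosed fraction
    grows as (r / R)**2, so r68 = R * sqrt(0.68) ~= 0.82 * R."""
    return radius_deg * math.sqrt(0.68)
```

Applied to the H.E.S.S. Gaussian size $\sigma=0\fdg136$ this gives $r_{68}\approx0\fdg21$, and applied to the LAT disk size $\sigma=0\fdg32$ it gives $r_{68}\approx0\fdg26$, matching the values quoted above. For elliptical Gaussians the same conversion is applied to each axis separately.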
\subsection{2FGL\,J1615.2$-$5138} \label{section_2FGL_J1615.2-5138} 2FGL\,J1615.2$-$5138 was found to have an extension $\sigma=0\fdg42\pm0\fdg04_\text{stat}\xspace\pm0\fdg02_\text{sys}\xspace$ with ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace=46.5$. To test for the possibility that 2FGL\,J1615.2$-$5138 is not spatially extended but instead composed of two point-like sources (one of them represented in the 2FGL catalog by 2FGL\,J1614.9$-$5212), we refit 2FGL\,J1615.2$-$5138 as two point-like sources. Because $\ensuremath{\text{TS}_{\text{2pts}}}\xspace=35.1$ is less than ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace=46.5$, we conclude that this emission does not originate from two closely-spaced point-like sources. 2FGL\,J1615.2$-$5138 is spatially coincident with the extended \text{TeV}\xspace source HESS\,J1614$-$518. H.E.S.S. measured a 2D Gaussian extension of $\sigma=0\fdg23\pm0\fdg02$ and $\sigma=0\fdg15\pm0\fdg02$ along the semi-major and semi-minor axes. This corresponds to a 68\% containment size of ${\ensuremath{\text{r}_{68}}}\xspace=0\fdg35\pm0\fdg03$ and $0\fdg23\pm0\fdg03$, consistent with the LAT size ${\ensuremath{\text{r}_{68}}}\xspace=0\fdg34\pm0\fdg03$. Figure~\ref{hess_seds} shows that the spectrum of 2FGL\,J1615.2$-$5138 at \text{GeV}\xspace energies connects to the spectrum of HESS\,J1614$-$518 at \text{TeV}\xspace energies. Further data collected by H.E.S.S. in 2007 resolve a double-peaked structure at \text{TeV}\xspace energies but no spectral variation across this source, suggesting that the emission is not the confusion of physically separate sources \citep{closer_look_hess_j1614-518}. This double-peaked structure is also hinted at in the LAT counts map in Figure~\ref{1FGL_J1613.6-5100c} but is not very significant. The \text{TeV}\xspace source was also detected by CANGAROO-III \citep{cangaroo_j1614-518}. 
There are five nearby pulsars, but none are luminous enough to provide the energy output required to power the $\gamma$-ray emission \citep{closer_look_hess_j1614-518}. HESS\,J1614$-$518 is spatially coincident with the young open cluster Pismis 22 \citep{hess_1614_landi_atel,closer_look_hess_j1614-518}. \text{{\em Suzaku}}\xspace detected two promising X-ray candidates: source A, an extended source consistent with the peak of HESS\,J1614$-$518, and source B, coincident with Pismis 22 and towards the center of HESS\,J1614$-$518 but in a relatively dim region of it \citep{suazku_hess_j1614_518}. Three hypotheses have been presented to explain this emission: source A is an SNR powering the $\gamma$-ray emission; source A is a PWN powered by an undiscovered pulsar in either source A or B; or the emission arises from hadronic acceleration in the stellar winds of Pismis 22 \citep{cangaroo_j1614-518}. \subsection{2FGL\,J1627.0$-$2425c} \label{section_2FGL_J1627.0-2425c} 2FGL\,J1627.0$-$2425c was found by our search to have an extension $\sigma=0\fdg42\pm0\fdg05_\text{stat}\xspace\pm0\fdg16_\text{sys}\xspace$ with ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace=32.4$ using photons with energies between 1 \text{GeV}\xspace and 100 \text{GeV}\xspace. Figure~\ref{1FGL_J1628.6-2419c} shows a counts map of this source. This source is in a region of remarkably complicated diffuse emission. Even though it is $16\ensuremath{^\circ}\xspace$ from the Galactic plane, this source is on top of the core of the Ophiuchus molecular cloud which contains massive star-forming regions that are bright in infrared. The region also has abundant molecular and atomic gas traced by CO and H~I and significant dark gas found only by its association with dust emission \citep{isabelle_dark_gass}. Embedded star-forming regions make it even more challenging to measure the column density of dust. 
Infrared and CO ($J=1\rightarrow 0$) contours are overlaid on Figure~\ref{1FGL_J1628.6-2419c} and show good spatial correlation with the \text{GeV}\xspace emission \citep{iras_rho_ophiuci,co_rho_ophiuci}. This source might represent $\gamma$-ray emission from the interactions of cosmic rays with interstellar gas that has not been accounted for in the LAT diffuse emission model. \subsection{2FGL\,J1632.4$-$4753c} \label{section_2FGL_J1632.4-4753c} 2FGL\,J1632.4$-$4753c was found by our search to be an extended source candidate in the 10 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range but is in a crowded region of the sky. It is spatially coincident with the \text{TeV}\xspace source HESS\,J1632$-$478. Figure~\ref{1FGL_J1632.9-4802c}a shows a counts map of this source and overlays \text{TeV}\xspace contours of HESS\,J1632$-$478 \citep{hess_plane_survey}. There are six nearby point-like 2FGL sources that appear to represent physically distinct sources and were included in our background model: 2FGL\,J1630.2$-$4752, 2FGL\,J1631.7$-$4720c, 2FGL\,J1632.4$-$4820c, 2FGL\,J1635.4$-$4717c, 2FGL\,J1636.3$-$4740c, and 2FGL\,J1638.0$-$4703c. On the other hand, one point-like 2FGL source, 2FGL\,J1634.4$-$4743c, correlates with the extended \text{TeV}\xspace source and at \text{GeV}\xspace energies does not appear physically separate. It is very close to the position of 2FGL\,J1632.4$-$4753c and does not show spatially separated emission in the observed photon distribution. We therefore removed this source from our model of the background. Figure~\ref{1FGL_J1632.9-4802c}b shows the same region with the background sources subtracted. With this model, 2FGL\,J1632.4$-$4753c was found to have an extension $\sigma=0\fdg35\pm0\fdg04_\text{stat}\xspace\pm0\fdg02_\text{sys}\xspace$ with ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace=26.9$. 
While fitting the extension of 2FGL\,J1632.4$-$4753c, we iteratively relocalized 2FGL\,J1635.4$-$4717c to $(l,b)=(337\fdg23,0\fdg35)$ and 2FGL\,J1636.3$-$4740c to $(l,b)=(336\fdg97,-0\fdg07)$. H.E.S.S. measured an extension of $\sigma=0\fdg21\pm0\fdg05$ and $0\fdg06\pm0\fdg04$ along the semi-major and semi-minor axes when fitting HESS\,J1632$-$478 with an elliptical 2D Gaussian surface brightness profile. This corresponds to a 68\% containment size ${\ensuremath{\text{r}_{68}}}\xspace=0\fdg31\pm0\fdg08$ and $0\fdg09\pm0\fdg06$ along the semi-major and semi-minor axes, consistent with the LAT size ${\ensuremath{\text{r}_{68}}}\xspace=0\fdg29\pm0\fdg04$. Figure~\ref{hess_seds} shows that the spectrum of 2FGL\,J1632.4$-$4753c at \text{GeV}\xspace energies connects to the spectrum of HESS\,J1632$-$478 at \text{TeV}\xspace energies. \cite{hess_plane_survey} argued that HESS\,J1632$-$478 is positionally coincident with the hard X-ray source IGR\,J1632$-$4751 observed by \text{{\em ASCA}}\xspace, INTEGRAL, and \text{{\em XMM-Newton}}\xspace \citep{asca_plane_survey,Igr_J16320-4751_circ,xmm_newton_IGR_J16320-4751}, but this source is suspected to be a Galactic X-ray binary, so the $\gamma$-ray extension disfavors the association. Further observations by \text{{\em XMM-Newton}}\xspace discovered point-like emission coincident with the peak of the H.E.S.S. source surrounded by extended emission of size $\sim32\arcsec\times15\arcsec$ \citep{hess_j1632_478_xmm_newton}. They found in archival MGPS-2 data a spatially coincident extended radio source \citep{most_survey_galactic_plane} and argued for a single synchrotron and inverse Compton process producing the radio, X-ray, and \text{TeV}\xspace emission, likely due to a PWN. 
The increased size at \text{TeV}\xspace energies compared to X-ray energies has previously been observed in several aging PWNe including HESS\,J1825$-$137 \citep{hess_j1825_xmm_newton,hess_j1825_hess}, HESS\,J1640$-$465 \citep{hess_plane_survey,xmm_newton_hess_j_1640-466}, and Vela X \citep{vela_x_rosat,vela_x_hess} and can be explained by different synchrotron cooling times for the electrons that produce X-rays and $\gamma$-rays. \subsection{2FGL\,J1712.4$-$3941} \label{section_2FGL_J1712.4-3941} 2FGL\,J1712.4$-$3941 was found by our search to be spatially extended using photons with energies between 1 \text{GeV}\xspace and 100 \text{GeV}\xspace. This source is spatially coincident with the SNR RX\,J1713.7$-$3946 and was recently studied by the LAT Collaboration in \cite{rx_j1713_lat}. To avoid issues related to uncertainties in the nearby Galactic diffuse emission at lower energy, we restricted our analysis only to energies above 10 \text{GeV}\xspace. Figure~\ref{2FGL_J1712.4-3941} shows a smoothed counts map of the source. Above 10 \text{GeV}\xspace, the \text{GeV}\xspace emission nicely correlates with the \text{TeV}\xspace contours of RX\,J1713.7$-$3946 \citep{rx_j1713_hess} and 2FGL\,J1712.4$-$3941 was fit to an extension $\sigma=0\fdg56\pm0\fdg04_\text{stat}\xspace\pm0\fdg02_\text{sys}\xspace$ with ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace=38.5$. \subsection{2FGL\,J1837.3$-$0700c} \label{section_2FGL_J1837.3-0700c} 2FGL\,J1837.3$-$0700c was found by our search to be an extended source candidate in the 10 \text{GeV}\xspace to 100 \text{GeV}\xspace energy range and is spatially coincident with the \text{TeV}\xspace source HESS\,J1837$-$069. This source is in a complicated region. Figure~\ref{1FGL_J1837.5-0659c}a shows a smoothed counts map of the region and overlays the \text{TeV}\xspace contours of HESS\,J1837$-$069 \citep{hess_plane_survey}. 
There are two very nearby point-like 2FGL sources, 2FGL\,J1836.8$-$0623c and 2FGL\,J1839.3$-$0558c, that clearly represent distinct sources. On the other hand, there is another source, 2FGL\,J1835.5$-$0649, located between the three sources that appears to correlate with the \text{TeV}\xspace morphology of HESS\,J1837$-$069 but at \text{GeV}\xspace energies does not appear to represent a physically distinct source. We concluded that this source was included in the 2FGL catalog to compensate for residuals induced by not modeling the extension of this source and removed it from our model of the sky. Figure~\ref{1FGL_J1837.5-0659c}b shows a counts map of this region after subtracting these background sources. After removing 2FGL\,J1835.5$-$0649, we tested for source confusion by fitting 2FGL\,J1837.3$-$0700c instead as two point-like sources. Because $\ensuremath{\text{TS}_{\text{2pts}}}\xspace=10.8$ is less than ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace=18.5$, we conclude that this emission does not originate from two nearby point-like sources. With this model, 2FGL\,J1837.3$-$0700c was found to have an extension $\sigma=0\fdg33\pm0\fdg07_\text{stat}\xspace\pm0\fdg05_\text{sys}\xspace$. While fitting the extension of 2FGL\,J1837.3$-$0700c, we iteratively relocalized the two closest background sources, but their positions did not significantly change: 2FGL\,J1834.7$-$0705c moved to $(l,b)=(24\fdg77,0\fdg50)$ and 2FGL\,J1836.8$-$0623c moved to $(l,b)=(25\fdg57,0\fdg32)$. H.E.S.S. measured an extension of $\sigma=0\fdg12\pm0\fdg02$ and $0\fdg05\pm0\fdg02$ for the coincident \text{TeV}\xspace source HESS\,J1837$-$069 along the semi-major and semi-minor axes when fitting this source with an elliptical 2D Gaussian surface brightness profile. This corresponds to a 68\% containment radius of ${\ensuremath{\text{r}_{68}}}\xspace=0\fdg18\pm0\fdg03$ and $0\fdg08\pm0\fdg03$ along the semi-major and semi-minor axes. 
The size is not significantly different from the LAT 68\% containment radius of ${\ensuremath{\text{r}_{68}}}\xspace=0\fdg27\pm0\fdg07$ (less than $2\sigma$). Figure~\ref{hess_seds} shows that the spectrum of 2FGL\,J1837.3$-$0700c at \text{GeV}\xspace energies connects to the spectrum of HESS\,J1837$-$069 at \text{TeV}\xspace energies. HESS\,J1837$-$069 is coincident with the hard and steady X-ray source AX\,J1838.0$-$0655 \citep{hard_x-ray_asca}. This source was discovered by RXTE to be a pulsar (PSR\,J1838$-$0655) sufficiently luminous to power the \text{TeV}\xspace emission and was resolved by \text{{\em Chandra}}\xspace to be a bright point-like source surrounded by a $\sim2\arcmin$ nebula \citep{pulsations_HESS_J1837-069}. The $\gamma$-ray emission may be powered by this pulsar. However, the hard spectral index and spatial extension of 2FGL\,J1837.3$-$0700c disfavor a magnetospheric origin of the LAT emission and suggest instead that the \text{GeV}\xspace and \text{TeV}\xspace emission both originate from the pulsar's wind. There is another X-ray point-like source AX\,J1837.3$-$0652 near HESS\,J1837$-$069 \citep{hard_x-ray_asca} that was also resolved into a point-like and diffuse component \citep{pulsations_HESS_J1837-069}. Although no pulsations have been detected from it, it could also be a pulsar powering some of the $\gamma$-ray emission. \subsection{2FGL\,J2021.5+4026} \label{section_2FGL J2021.5+4026} The source 2FGL\,J2021.5+4026 is associated with the $\gamma$-Cygni SNR and has been speculated to originate from the interaction of accelerated particles in the SNR with dense molecular clouds \citep{pollock_1985,gaisser_1998}. This association was disfavored when the \text{GeV}\xspace emission from this source was detected to be pulsed \citep[PSR\,J2021+4026,][]{first_lat_pulsar_cat}. This pulsar was also observed by AGILE \citep{gamma_cygni_agile}. 
At energies above 10 \text{GeV}\xspace, the pulsar is no longer significant; instead, our search found an extended source candidate in this region. Figure~\ref{1FGL_J2020.0+4049} shows a counts map of this source and overlays radio contours of $\gamma$-Cygni from the Canadian Galactic Plane Survey \citep{canadian_galactic_plane_survey}. There is good spatial overlap between the SNR and the \text{GeV}\xspace emission. There is a nearby source, 2FGL\,J2019.1+4040, that correlates with the radio emission of $\gamma$-Cygni and at \text{GeV}\xspace energies does not appear to represent a physically distinct source. We concluded that it was included in the 2FGL catalog to compensate for residuals induced by not modeling the extension of $\gamma$-Cygni and removed it from our model of the sky. With this model, 2FGL\,J2021.5+4026 was found to have an extension $\sigma=0\fdg63\pm0\fdg05_\text{stat}\xspace\pm0\fdg04_\text{sys}\xspace$ with ${\ensuremath{\text{TS}_{\text{ext}}}}\xspace=128.9$. Figure~\ref{snr_seds} shows its spectrum. The inferred size of this source at \text{GeV}\xspace energies matches well the radio size of $\gamma$-Cygni. Milagro detected a $4.2\sigma$ excess at energies $\sim 30$ \text{TeV}\xspace from this location \citep{lat_bsl,milagro_bright_source_list}. VERITAS also detected an extended source VER\,J2019+407 coincident with the SNR above 200 \text{GeV}\xspace and suggested that the \text{TeV}\xspace emission could be a shock-cloud interaction in $\gamma$-Cygni \citep{veritas_gamma_cygni}. \section{Discussion} Twelve extended sources were included in the 2FGL catalog and two additional extended sources were studied in dedicated publications. Using 2 years of LAT data and a new analysis method, we presented the detection of seven additional extended sources. We also reanalyzed the spatial extents of the twelve extended sources in the 2FGL catalog and the two additional sources. 
The 21 extended LAT sources are located primarily along the Galactic plane and their locations are shown in Figure~\ref{allsky_extended_sources}. Most of the LAT-detected extended sources are expected to be of Galactic origin, as the distances of extragalactic sources (with the exception of Local Group galaxies) are typically too large to resolve them at $\gamma$-ray energies. For the LAT extended sources also seen at \text{TeV}\xspace energies, Figure~\ref{gev_vs_tev_plot} shows that there is a good correlation between the sizes of the sources at \text{GeV}\xspace and \text{TeV}\xspace energies. Even so, the sizes of PWNe are expected to vary across the \text{GeV}\xspace and \text{TeV}\xspace energy range and the size of HESS\,J1825$-$137 is significantly larger at \text{GeV}\xspace than \text{TeV}\xspace energies \citep{fermi_hess_j1825}. It is interesting to compare the sizes of other PWN candidates at \text{GeV}\xspace and \text{TeV}\xspace energies, but definitively measuring a difference in size would require a more in-depth analysis of the LAT data using the same elliptical Gaussian spatial model. Figure~\ref{gev_vs_tev_histogram} compares the sizes of the 21 extended LAT sources to the 42 extended H.E.S.S. sources.\footnote{The \text{TeV}\xspace extension of the 42 extended H.E.S.S. sources comes from the H.E.S.S. Source Catalog \url{http://www.mpi-hd.mpg.de/hfm/HESS/pages/home/sources/}.} Because of its large field of view and all-sky coverage, the LAT can more easily measure larger sources. On the other hand, the better angular resolution of air Cherenkov detectors allows them to measure a population of extended sources below the resolution limit of the LAT (currently $\sim0\fdg2$). \textit{Fermi}\xspace has a 5-year nominal mission lifetime with a goal of 10 years of operation. 
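Because the LAT is nearly background-free at these energies, its point-source sensitivity improves faster than the square root of the exposure. This behavior can be illustrated with simple Poisson counting statistics (a sketch with an illustrative detection threshold, not the LAT's actual sensitivity calculation): requiring $S/\sqrt{S+B}\ge n$ for a detection, the minimum detectable source rate falls as $1/t$ when the background is negligible, compared to $1/\sqrt{t}$ in the background-dominated regime.

```python
import math

def min_detectable_counts(bkg_counts, n_sigma=5.0):
    """Smallest signal S satisfying S / sqrt(S + B) >= n_sigma,
    i.e. the positive root of S**2 - n**2 * S - n**2 * B = 0."""
    n2 = n_sigma ** 2
    return 0.5 * (n2 + math.sqrt(n2 ** 2 + 4.0 * n2 * bkg_counts))

def min_detectable_rate(exposure, bkg_rate, n_sigma=5.0):
    """Minimum detectable source rate after a given exposure."""
    return min_detectable_counts(bkg_rate * exposure, n_sigma) / exposure

# Quadrupling the exposure improves sensitivity by 4x with no background,
# but only by 2x (the square root) when the background dominates.
gain_no_bkg = min_detectable_rate(1.0, 0.0) / min_detectable_rate(4.0, 0.0)
gain_bkg = min_detectable_rate(1.0, 1e6) / min_detectable_rate(4.0, 1e6)
```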
As Figure~\ref{time_sensitivity} shows, the low background of the LAT at high energies allows its sensitivity to these smaller sources to improve by a factor greater than the square root of the relative exposures. With increasing exposure, the LAT will likely begin to detect and resolve some of these smaller \text{TeV}\xspace sources. Figure~\ref{compare_index_2FGL} compares the spectral indices of LAT detected extended sources and of all sources in the 2FGL catalog. This, and Tables~\ref{known_extended_sources} and~\ref{new_ext_srcs_table}, show that the LAT observes a population of hard extended sources at energies above 10 \text{GeV}\xspace. Figure~\ref{hess_seds} shows that the spectra of four of these sources (2FGL\,J1615.0$-$5051, 2FGL\,J1615.2$-$5138, 2FGL\,J1632.4$-$4753c, and 2FGL\,J1837.3$-$0700c) at \text{GeV}\xspace energies connect to the spectra of their H.E.S.S. counterparts at \text{TeV}\xspace energies. This is also true of Vela Jr., HESS\,J1825$-$137 \citep{fermi_hess_j1825}, and RX\,J1713.7$-$3946 \citep{rx_j1713_lat}. It is likely that the \text{GeV}\xspace and \text{TeV}\xspace emission from these sources originates from the same population of high-energy particles. Many of the \text{TeV}\xspace-detected extended sources now seen at \text{GeV}\xspace energies are currently unidentified and further multiwavelength follow-up observations will be necessary to understand these particle accelerators. Extending the spectra of these \text{TeV}\xspace sources towards lower energies with LAT observations may help to determine the origin and nature of the high-energy emission. The \textit{Fermi} LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis. 
These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat \`a l'Energie Atomique and the Centre National de la Recherche Scientifique / Institut National de Physique Nucl\'eaire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K.~A.~Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden. Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'\'Etudes Spatiales in France. This research has made use of pywcsgrid2, an open-source plotting package for Python\footnote{\url{http://leejjoon.github.com/pywcsgrid2/}}. The authors acknowledge the use of HEALPix\footnote{\url{http://healpix.jpl.nasa.gov/}} \citep{healpix}.
\section{Introduction} Understanding coronal mass ejections (CMEs) is central to better grasp the complexities of the heliosphere, as they represent, together with flares, the most intense phenomena in the Sun-Earth system. At the Sun, the exact cause(s) and trigger(s) of CME initiation are still a matter of debate (see review by \opencite{Chen:2011}), but it is well established that CMEs are one of the main ways for currents and magnetic energy to be released. CMEs typically consist of mostly closed magnetic field lines and carry mass and magnetic flux into the interplanetary (IP) space. Therefore, during times of high solar activity, CMEs highly structure the solar wind plasma and interplanetary magnetic field (IMF) characteristics in the IP space. CMEs play an important role in the heliospheric magnetic flux balance, by dragging magnetic field lines through the Alfv{\'e}n surface \cite{Owens:2006b,Schwadron:2010b}. CME-driven shocks are overwhelmingly thought to be the main accelerator of gradual solar energetic particles (SEPs) \cite{Kahler:1984,Reames:2013}. CMEs are also the primary drivers of intense geomagnetic storms at Earth \cite{Gonzalez:1987,Gosling:1991,Webb:2000,Zhang:2007}, and they are also associated with many of the strongest substorms \cite{Kamide:1998,Tsurutani:2015}, changes in Earth radiation belts \cite{Miyoshi:2005} and geomagnetically-induced currents (GICs) \cite{Huttunen:2008}. A recent review of CME research can be found in \inlinecite{Gopalswamy:2016}. The rate of CMEs during the solar cycle is highly variable, ranging at the Sun from 2--3 CMEs {\it per} week in solar minimum to 5--6 CMEs {\it per} day in solar maximum. 
Some CME properties in the corona are now routinely measured by space-based coronagraphs such as the {\it Large Angle and Spectrometric Coronagraph Experiment} on board the {\it Solar and Heliospheric Observatory} (SOHO/LASCO: \opencite{Domingo:1995}, \opencite{Brueckner:1995}) and the {\it Solar-Terrestrial Relations Observatory} coronagraphs (STEREO/COR: \opencite{Kaiser:2008}). Catalogs such as the Coordinated Data Analysis Workshops (CDAW) CME catalog \cite{Yashiro:2008,Gopalswamy:2009b} report the CME speed, mass, acceleration, and angular width projected onto the plane-of-the-sky of the instruments. New catalogs such as the Heliospheric Cataloguing, Analysis and Techniques Service (HELCATS)\footnote{\url{http://www.helcats-fp7.eu}} based on STEREO/{\it Heliospheric Imager} (HI: \opencite{Eyles:2009}) observations give CME speed and direction in the IP space. CME properties near Earth are directly measured by spacecraft such as {\it Advanced Composition Explorer} (ACE), {\it Wind}, or {\it Deep Space Climate Observatory} (DSCOVR, operational since July 27, 2016). CME properties may be strongly influenced by their interaction with the solar wind and IMF. To first order, this interaction results in a deceleration of fast CMEs and an acceleration of slow CMEs \cite{Gopalswamy:2000,Vrsnak:2001,Cargill:2004,Liu:2013}, changes in the radial expansion rate of the magnetic ejecta \cite{Gulisano:2010,Poomvises:2010} and, sometimes, its deflection \cite{Wang:2014} and rotation \cite{Nieves:2012}. Adding to these broad tendencies, CME properties may change even more drastically when they interact with corotating solar wind structures, such as fast wind streams and corotating interaction regions (CIRs), and with other CMEs. The interaction of a CME with a CIR has been studied both through numerical modeling as well as data analysis \cite{Prise:2015,Winslow:2016}. 
Combining the CME frequency with their typical propagation time (3--4 days from Sun to Earth), there may be as few as two CMEs or as many as 20 in the 4$\pi$ sr between the Sun and the Earth, depending on the phase of the solar cycle. Assuming that a CME and its shock wave can be modeled as a cone of half-angle of 30$^\circ$, a CME occupies approximately $\pi$/4 sr. In solar maximum, interaction between unrelated successive CMEs is bound to happen; however, CME-CME interaction also happens regularly even in quieter phases of the solar cycle. Solar observations often reveal that recurrent CMEs occur from the same active region, often associated with homologous flares \cite{Schmieder:1984,Svestka:1989}. On the other hand, sympathetic flares and CMEs may be an even more frequent cause of successive CMEs in relatively close angular and temporal separation ({\it i.e.} in optimal conditions for, at least, partial interaction). Early work based on coronagraphic observations \cite{Hansen:1974} and simulations \cite{Steinolfson:1982} discussed the possibility and consequences for the corona of successive, quasi-homologous eruptions. During their propagation from Sun to Earth, the interaction of successive CMEs may take a variety of forms: (1) the two CME-driven shock waves may interact without the ejecta interacting, (2) one shock wave may interact with a preceding magnetic ejecta, or (3) the successive magnetic ejecta may interact and/or reconnect. The fact that CMEs can interact on their way to Earth has been known for several decades now. Some of the early articles focused on the series of seven flares in 72 hours in early August 1972, and the associated three or four shock waves measured by {\it Pioneer 9}, {\it Prognoz} and the {\it Interplanetary Monitoring Platform-5} (IMP-5) in the inner heliosphere and one shock wave measured by {\it Pioneer 10} at 2.2~AU \cite{Dryer:1976,Intriligator:1976,Ivanov:1982}.
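The solid-angle estimate above follows from the standard formula for a cone of half-angle $\theta$, $\Omega = 2\pi(1-\cos\theta)$. A minimal numerical check (illustrative only; the 30$^\circ$ half-angle is the value assumed above):

```python
import math

def cone_solid_angle(half_angle_deg):
    """Solid angle (sr) subtended by a cone: Omega = 2*pi*(1 - cos(theta))."""
    theta = math.radians(half_angle_deg)
    return 2.0 * math.pi * (1.0 - math.cos(theta))

# Cone of 30-degree half-angle, as assumed above for a CME and its shock:
omega = cone_solid_angle(30.0)
print(f"Omega = {omega:.2f} sr")                           # ~0.84 sr, close to pi/4
print(f"fraction of 4*pi sr: {omega / (4 * math.pi):.1%}")  # ~6.7% of the full sphere
```

Roughly 15 such cones tile the full sphere, so with up to 20 CMEs in transit at solar maximum, interactions between unrelated CMEs are indeed bound to happen.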
For example, \inlinecite{Ivanov:1982} has a section focusing on ``shock waves from a series of flares'', where complex IP streams originating from compound shock waves and their interaction region are described. \inlinecite{Burlaga:1987} describe a variety of compound streams resulting from the interaction of a transient with another transient or with a solar wind stream. They discussed the interaction of two ejecta, one containing a magnetic cloud and one without, as well as three shock waves, and noted that ``the compression of the magnetic cloud by shock S3 produced magnetic field strength up to 36~nT''. The August 1972 series of eruptions resulted in a series of intense geomagnetic storms with the disturbance storm time (Dst) index peaking at $-154$~nT. \inlinecite{Tsurutani:1988} investigated the interplanetary origin of intense geomagnetic storms in the solar maximum of Solar Cycle 21, including cases related to the passage at Earth of a compound stream composed of multiple high-speed streams. \inlinecite{Burlaga:1987} studied the 3\,--\,4 April 1979 event, in which the interaction of two ejecta was associated with an intense geomagnetic storm (Dst reached $-202$~nT), and discussed the relation between compound streams and large geomagnetic storms, finding that nine out of 17 large geomagnetic storms for which interplanetary data were available were associated with compound streams (this includes CIR-CME as well as CME-CME interaction). To explain this result, they noted that ``magnetic fields in ejecta can be amplified by the interaction with a shock and/or a fast flow and thereby cause a large geomagnetic storm'' and concluded that ``the interaction between two fast flows is in general a nonlinear process, and hence a compound stream is more than a linear superposition of its parts.'' Another multi-spacecraft study of compound streams measured in the late 1970s was performed by \inlinecite{Burlaga:1991}.
Once again, the 2\,--\,7 August 1972 events revealed how series of flares and eruptions can result in extremely high levels of SEPs \cite{Lin:1976}. \inlinecite{Sanderson:1992} discussed {\it Ulysses} measurements of a shock propagating inside a shock-driving magnetic cloud and the low level of energetic particles between the two shocks. This was explained as the magnetic cloud ``acting as a barrier delaying the onset of the high-energy protons from the second flare''. \inlinecite{Kallenrode:1993} discussed super-events associated with series of flares and CMEs. \inlinecite{Vandas:1997} studied the interaction of a shock wave with a magnetic cloud using a 2.5-D magneto-hydrodynamical (MHD) simulation. This study illustrates the power of numerical simulations, as a case with an overtaking shock was compared with an identical case without an overtaking shock. The authors noted that the shock propagation results in a radial compression of the magnetic cloud, a change of its aspect ratio, and acceleration as well as heating of the cloud. In the rest of the article, we focus primarily on developments since 2000 regarding the causes and consequences of series of CMEs. The combination of LASCO imaging and {\it in situ} measurements at L1 from {\it Wind} and/or ACE since 1996 makes it possible to relate coronal observations with their {\it in situ} consequences and geomagnetic effects. The study of CME-CME interaction proliferated following the report of two CMEs interacting within the LASCO/C3 field-of-view and the associated type II event \cite{Gopalswamy:2001} as well as the possible association of interacting CMEs with large SEP events \cite{Gopalswamy:2002}. Statistical surveys of geomagnetic storms and their interplanetary causes have become more routine during Solar Cycles 23 and 24 due to the reliability of L1 measurements; this has revealed how interacting CMEs may cause intense geomagnetic storms.
In Solar Cycle 24, high spatial and temporal resolution observations by the {\it Solar Dynamics Observatory} (SDO: \opencite{Pesnell:2012}) have returned the study of sympathetic eruptions to center stage. The development of heliospheric imaging with the {\it Solar Mass Ejection Imager} (SMEI: \opencite{Eyles:2003}, \opencite{Jackson:2004}) and the HIs onboard STEREO has led to a large increase in the number of published cases of CME-CME interaction being remotely observed. Lastly, the development of large-scale time-dependent numerical simulations in the past 20 years has yielded new insights into the mechanisms resulting in the initiation of series of CMEs as well as the physical processes occurring during their propagation and interaction. This article is organized as follows. In Section~\ref{sec:initiation}, we discuss recent developments regarding the initiation of successive CMEs, including observations and numerical simulations of sympathetic and homologous CME initiation. In Section~\ref{sec:SEP}, we review observational and theoretical works focusing on the association of successive and interacting CMEs with large SEP events and with enhanced and unusual radio emissions. In Section~\ref{sec:helio}, we focus on the physical processes occurring during CME-CME interaction in the inner heliosphere, with insights gained from recent remote observations by STEREO/SECCHI as well as from numerical simulations and the analysis of {\it in situ} measurements. In Section~\ref{sec:geo-effect}, we discuss how the complex ejecta resulting from CME-CME interaction may drive Earth's magnetosphere in unusual ways, often driving large geomagnetic storms, but sometimes resulting in weaker-than-expected storms. In Section~\ref{sec:conclusion}, we discuss what to expect in the upcoming decade with new observations closer to the Sun made possible by {\it Solar Probe+} and {\it Solar Orbiter} and conclude.
\section{Initiation of Successive CMEs}\label{sec:initiation} \subsection{Trigger and Initiation of CMEs} As the largest explosive phenomenon on the Sun, a typical CME carries about $10^{32}$ erg of energy~\cite{Vourlidas_etal_2000,Hudson_etal_2006} and $10^{21}$ Mx of magnetic flux~\cite{Dasso_etal_2005,Qiu_etal_2007,Wang_etal_2015} into IP space, associated with reconfiguration of coronal magnetic fields in the CME source region. To support such large eruptions, the following is needed: (1) sufficient magnetic free energy, and (2) triggers and efficient energy conversion processes to release the free energy on a short timescale. The magnetic free energy as well as helicity can be accumulated gradually in various ways, {\it e.g.}, flux emergence~(see, {\it e.g.} \opencite{Heyvaerts_etal_1977}, \opencite{Chen_Shibata_2000}), shearing/rotational motion~(see, {\it e.g.}, \opencite{Manchester_2003}, \opencite{Brown_etal_2003}, \opencite{Kusano_etal_2004}, \opencite{ZhangY_etal_2008}), {\it etc}. It is often found that the magnetic free energy accumulated in an active region (AR) exceeds the energy required for an eruption. A well-studied case is AR~11158 based on the SDO/HMI vector magnetograms~(see, {\it e.g.}, \opencite{Schrijver_etal_2011}, \opencite{Sun_etal_2012}, \opencite{WangS_etal_2012}, \opencite{Vemareddy_etal_2012a}). With the aid of a non-linear force-free field (NLFFF) extrapolation method~\cite{Wiegelmann_etal_2012}, \inlinecite{Sun_etal_2012} investigated the evolution of the magnetic field and its energy in the AR from 12\,--\,17 February 2011. It was found that the magnetic energy continuously increased, with the free energy well above $10^{32}$ erg. The only X-class flare during the period of interest consumed a small fraction of the accumulated free energy (of the order of 10\,--\,20\%). Thus, a pivotal but still unclear issue is what effectively triggers the release of the free energy.
It is now acknowledged that there are generally two kinds of triggering mechanisms. The first is a non-ideal process, associated with magnetic reconnection. The tether-cutting model~\cite{Moore_etal_2001} and magnetic breakout model~\cite{Antiochos_etal_1999} are both of this type. The other is loss of equilibrium, an ideal process, due to some instabilities, {\it e.g.} the kink instability~(see, {\it e.g.}, \opencite{Hood_Priest_1979}, \citeyear{Hood_Priest_1980}), torus instability~(see, {\it e.g.}, \opencite{Torok_etal_2004}, \opencite{Kliem_Torok_2006}, \opencite{Fan_Gibson_2007}) and catastrophe~\cite{Forbes_Priest_1995,Lin_Forbes_2000,HuY_2001}. CMEs are large-scale structures that may involve multiple magnetic flux systems, but trigger points usually start locally. A question naturally arises: is the occurrence of CMEs random? In other words, can a CME trigger another one and, if so, how? A way to test the degree of inter-dependence of CMEs is through a statistical approach. An early attempt to examine the independence of CMEs was made by \inlinecite{Moon_etal_2003}, who considered 3817 CMEs listed in the LASCO CME catalog~\cite{Yashiro_etal_2004} during 1999--2001. They generated the waiting time distribution of these CMEs in terms of their first appearance in the field of view of LASCO/C2, and found that it is very close to an exponential distribution (Figure~\ref{fg_sec2_waiting_time_distritions}a) and can be well explained by a time-dependent Poisson random process. A similar distribution can also be found for solar flares~\cite{Wheatland_2000}. These results imply that interrelated CMEs constitute, at most, a small fraction of the whole population of CMEs. \begin{figure*}[tb] \centering \includegraphics[width=\hsize]{sec2_waiting_time_distritions} \caption{(a) Adapted from Moon {\it et al.} (2003b), showing the waiting time distribution of all CMEs during October 1998 -- December 2001.
For comparison, a stationary Poisson distribution (dotted line) and two non-stationary Poisson distributions (dashed and solid lines) are plotted. (b) Adapted from Wang {\it et al.} (2013), showing the waiting time distribution of quasi-homologous CMEs originating from all the CME-rich super ARs in Solar Cycle 23. The two panels are reproduced by permission of the American Astronomical Society (AAS).} \label{fg_sec2_waiting_time_distritions} \end{figure*} On the other hand, modern observations have provided ample evidence that some CMEs do not occur independently from each other. Such interrelations can also be found in other explosive phenomena, such as flares, filament eruptions, {\it etc.}, which are generally referred to as ``sympathetic'' eruptions~(see, {\it e.g.}, \opencite{Richardson_1951}, \opencite{Fritzova-Svestkova_etal_1976}, \opencite{Pearce_Harrison_1990}, \opencite{Biesecker_Thompson_2000}, \opencite{WangH_etal_2001}, \opencite{Moon_etal_2002}, \opencite{Schrijver_Title_2011}, \opencite{Jiang_etal_2011}, \opencite{ShenY_etal_2012}, \opencite{Yang_etal_2012}, \opencite{WangR:2016}). In general, sympathetic CMEs refer to those originating from different regions, but almost simultaneously~\cite{Moon_etal_2003}, whereas eruptions occurring successively from the same region in a relatively short interval (several hours), having similar morphology and similar associated phenomena, are referred to as homologous CMEs~\cite{Zhang_Wang_2002} or, more generally, ``quasi-homologous'' CMEs regardless of their morphology and associations~(see, {\it e.g.}, \opencite{Chen_etal_2011}, \opencite{Wang_etal_2013}). The two kinds of interrelated CMEs are potential candidates for CME-CME interactions, and such interactions may begin during the initiation and last all the way to the IP space. Thus, it becomes of particular interest to determine under which circumstances CMEs are triggered successively.
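The stationary Poisson baseline against which such waiting-time distributions are compared can be sketched numerically. The following illustration (not taken from the studies cited above) assumes a constant rate of five CMEs {\it per} day, a solar-maximum-like value quoted in the Introduction; under a Poisson model the waiting times are then exponentially distributed with mean $1/\lambda$:

```python
import math
import random

random.seed(1)

# Assumed illustrative rate: ~5 CMEs per day at solar maximum,
# i.e. a mean waiting time of 1/lambda = 4.8 hours.
rate_per_hour = 5.0 / 24.0

# Under a stationary Poisson model, waiting times between successive
# CMEs are exponentially distributed with mean 1/rate.
waits = [random.expovariate(rate_per_hour) for _ in range(100_000)]

mean_wait = sum(waits) / len(waits)
# Fraction of waiting times below 12 h versus the analytic CDF 1 - exp(-rate*t):
frac_12h = sum(w < 12.0 for w in waits) / len(waits)
cdf_12h = 1.0 - math.exp(-rate_per_hour * 12.0)

print(f"mean waiting time: {mean_wait:.2f} h (expected {1 / rate_per_hour:.2f} h)")
print(f"P(wait < 12 h): simulated {frac_12h:.3f}, analytic {cdf_12h:.3f}")
```

Any excess of short waiting times over this exponential baseline, such as the Gaussian-like component discussed in the next subsection, is the statistical signature of physically related eruptions.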
\subsection{Homologous CMEs}\label{sec:homologous} The possibility that the Sun produces homologous eruptions, based on their similar visual aspects and origins, was raised at the beginning of the era of space-based coronal observations, using ground-based coronagraphs as well as the coronagraph onboard the {\it Orbiting Solar Observatory 7} (OSO-7: \opencite{Hansen:1974}). Although the waiting times of all CMEs are approximately exponentially distributed, a quite different distribution is found if one considers only the waiting times of CMEs originating from the same ARs. Based on the source locations of all the CMEs during 1997--1998~\cite{Wang_etal_2011}, \inlinecite{Chen_etal_2011} investigated 15 CME-rich ARs which produced more than 80 quasi-homologous CMEs, and analyzed the waiting times between CMEs from the same AR. It was found that the distribution has two components, clearly separated at around 15 hours. The component within 15 hours follows a Gaussian-like distribution with a peak at around 8 hours and is thought to represent physically related events. The CMEs in the other component are most likely independent. \inlinecite{Wang_etal_2013} extended the sample to all the CME-rich super ARs in Solar Cycle 23, covering 281 CMEs, and found a similar distribution of the waiting times of the CMEs (Figure~\ref{fg_sec2_waiting_time_distritions}b). The only difference is that the separation time of the two components slightly increases from 15 hours to 18 hours and the peak of the Gaussian-like component decreases to around 7 hours. In this way, we may refine the definition of quasi-homologous CMEs as successive CMEs originating from the same AR with a separation of less than $\sim$ 15--18 hours. This finding raises two subsequent questions: how are quasi-homologous CMEs physically related, and what causes the second CME?
The Gaussian-like component of the waiting time distribution suggests that either (1) the magnetic free energy and/or helicity accumulate and reach a threshold at a pace of about 7 hours on average, or (2) the timescale of the growth of the instability of a loop system triggered by the preceding CME is about 7 hours. The former mechanism is applicable to quasi-homologous CMEs originating from the same polarity inversion lines (PILs), whereas the latter applies to those from different parts of a PIL or from neighboring PILs even though they are in the same AR. This picture is worthy of further validation with observations. One widely studied case is the series of homologous CMEs from AR~9236 on 24\,--\,25 November 2000~(see, {\it e.g.}, \opencite{Nitta_Hudson_2001}, \opencite{Zhang_Wang_2002}, \opencite{Moon_etal_2003a}). In a 60-hour interval, a total of six halo CMEs associated with five X-class and one M-class flares originated from the AR. By combining {\it Yohkoh} X-ray data and SOHO/MDI magnetograms, \inlinecite{Nitta_Hudson_2001} showed that all of the associated flares occurred around the leading spot of the AR. The first four flares successively originated from the western part of the spot with decreasing emission intensity. The last two flares were more intense but originated from the southern part of the spot. The hard X-ray footpoints were located in different regions for the first four flares as compared to the last two flares, suggesting that the two sets of CMEs might originate from different PILs. Since many small polarity pairs emerged into the spot during the period, \inlinecite{Nitta_Hudson_2001} suggested that the continuously emerging magnetic flux was the cause of the successive CMEs and flares. In more detail, \inlinecite{Zhang_Wang_2002} investigated the magnetic flux emergence around the flaring regions for the first three eruptions.
They used time-sequences of the high-resolution MDI magnetograms to follow the evolution of 452 moving magnetic features from birth to death, and found three flux peaks in the temporal evolution, which corresponded well to the occurrence of the eruptions. The calculation of the magnetic helicity based on the MDI magnetograms also showed significant spikes in the helicity change rate during the eruptions~\cite{Moon_etal_2003a}. These results match the first aforementioned scenario, in which the rebuilding of free energy is probably a key mechanism for homologous CMEs. It is noteworthy that the first three CMEs in the series traveled with increasing speeds from about 700 to 1000~km\,s$^{-1}$, and were followed by another extremely fast CME with a speed of $>2000$~km\,s$^{-1}$ originating from a different region~\cite{Nitta_Hudson_2001}. These four successive CMEs interacted in interplanetary space and formed a complex structure at 1 AU~(\opencite{Wang:2002}, see also Section~\ref{sec:helio}.4). A process by which continuously emerging flux causes homologous CMEs was proposed earlier by \inlinecite{Sterling_Moore_2001} based on the ``breakout'' picture~\cite{Antiochos_etal_1999}. They studied two homologous CME-associated flares from AR~8210 on 1\,--\,2 May 1998, and found signatures of reconnection between the closed field of the emerging flux and the open field in a neighboring coronal hole. This leads to a series of CMEs as the whole process repeats (see Figure~\ref{fg_sec2_homologous_sterling_moore}). Nevertheless, two homologous CMEs reported and studied by \inlinecite{Chandra_etal_2011} seemed to have different triggering mechanisms. The two CMEs originated from AR~10501 on 20 November 2003, associated with homologous H$\alpha$ ribbons.
By applying a linear force-free field (LFFF) extrapolation method~\cite{Demoulin_etal_1997}, the authors identified the quasi-separatrix layers in 3D, and compared them with the locations of the flare ribbons. They suggested that the first CME and flare were triggered by the tether-cutting process, which manifested itself as a significant shear motion and reconnection below the core field, and resulted in a destabilized magnetic configuration for the second CME and flare, which were more likely to be initially driven by an instability or a catastrophic process. A similar case was reported by \inlinecite{Cheng:2013}, who studied two successive CMEs originating on 23 January 2012, and found that the first CME partially removed the overlying field and triggered the torus instability for the second CME one and a half hours later. These two eruptions have also been studied in detail by \inlinecite{Li:2013}, \inlinecite{Joshi_etal_2013} and \inlinecite{Sterling:2014}, and their interplanetary consequences by \inlinecite{Liu:2013}. Another example is the two eruptions separated by about 50 minutes on 7 March 2012 from AR~11429 analyzed by \inlinecite{WangR:2014}, who studied the magnetic field restructuring and helicity injection changes before and during these two successive eruptions. Regarding the time delay between (quasi-)homologous CMEs, an extreme case is when two CMEs originate from one AR at almost the same time, {\it i.e.} within minutes, the so-called ``twin-CME'' scenario \cite{Li:2012}. One such case was reported by \inlinecite{Shen_etal_2013}, in which two CMEs were launched from AR~11476 within about two minutes, based on high-resolution and high-cadence observations from SDO. SDO/HMI magnetograms suggest that the CMEs originated from two segments of a bent PIL, above which a mature flux rope and a set of sheared arcades were located, as revealed by an NLFFF extrapolation.
The twin CMEs caused the first ground-level enhancement (GLE) event of Solar Cycle 24 on 17 May 2012, consistent with statistical studies showing that the interaction of two CMEs launched in close temporal succession favors particle acceleration~(see, {\it e.g.}, \opencite{Li:2012}, \opencite{Ding:2013}). Further discussion of the effect of interacting CMEs on particle acceleration is given in Section~\ref{sec:SEP}. \begin{figure*}[tb] \centering \includegraphics[width=\hsize]{sec2_homologous_sterling_moore} \caption{Schematic diagram illustrating the process of continuously emerging flux causing homologous CMEs (directly adapted from Sterling and Moore, 2001). The rectangles indicate the reconnection regions. Reproduced by permission of the AAS.} \label{fg_sec2_homologous_sterling_moore} \end{figure*} Such successive CMEs from the same AR can be studied in numerical simulations either by supplying free energy to the system through flux emergence~\cite{MacTaggart_Hood_2009, Chatterjee:2013} or continuous shear motions~\cite{DeVore_Antiochos_2008,Soenen_etal_2009}, or through the perturbation from neighboring prior eruptions~\cite{Torok_etal_2011,Bemporad_etal_2012}. The latter may be treated as a kind of CME-CME interaction during the initiation phase. The simulation by \inlinecite{Torok_etal_2011} was based on a set of zero-$\beta$ compressible ideal MHD equations~\cite{Torok_Kliem_2003} in which four flux ropes~\cite{Titov_Demoulin_1999} were inserted, two of them under a pseudo-streamer and the other two placed on each side of the pseudo-streamer. After the triggering of the eruption of one flux rope next to the pseudo-streamer, the whole simulated system becomes unstable (Figure~\ref{fg_sec2_sim_torok}). The first erupted flux rope expands as it rises and causes breakout reconnection above one of the flux ropes beneath the pseudo-streamer, which leads to the second eruption.
As a consequence, a vertical current sheet forms beneath the second erupted flux rope, and reconnection occurs, which results in a third eruption. Both the second and third eruptions are due to the weakening of the constraints of the overlying fields, suggesting that the torus instability plays a pivotal role in the successive eruptions. The second and third eruptions come from the same pseudo-streamer, and therefore match the picture of quasi-homologous CMEs from the same AR but different PILs. The typical timescale of the torus instability, which leads to the third eruption, is of interest, as the statistical analysis suggests about 7 hours; however, studies on this point are rare. In addition, the above simulation results might also be applicable to sympathetic CMEs, which are discussed next. \begin{figure*}[tb] \centering \includegraphics[width=\hsize]{sec2_sim_torok} \caption{Numerical simulations showing the trigger and initiation of successive CMEs (adapted from T{\"o}r{\"o}k {\it et al.}, 2011). The flux ropes, original closed field lines, and open field lines are indicated in yellow, green, and purple, respectively. Reproduced by permission of the AAS.} \label{fg_sec2_sim_torok} \end{figure*} \subsection{Sympathetic CMEs}\label{sec:sympathetic} As defined earlier, sympathetic CMEs originate almost simultaneously but from spatially separated regions, with one eruption contributing to the triggering of another. \inlinecite{Lyons:1999} mentioned the possibility for ``one CME [to] activate the onset of another'', whereas \inlinecite{Moon_etal_2003} were the first to use specifically the term ``sympathetic CMEs''. Defining the term ``simultaneously'' quantitatively is a complex problem. In most studies, it refers to a temporal separation between the eruptions of less than several hours. Thus, in this respect, sympathetic CMEs are similar to quasi-homologous CMEs originating from different PILs in the same AR.
The key question for sympathetic CMEs is how distant magnetic systems connect and interact with each other in such a short interval. The study by \inlinecite{Simnett_Hudson_1997} showed that the CME occurring on 23 February 1997 erupted from the north-east limb of the Sun and quickly merged with a previous, much larger event, which was associated with a loop system connecting the northern region to the southern region (another example can be found in Figure~\ref{fg_sec2_sympathetic_schrijver}). Such transequatorial loops are not rare. A statistical study based on {\it Yohkoh} data from October 1991 to December 1998 showed that one third of all ARs present transequatorial loops~\cite{Pevtsov_2000}, suggesting that ARs can be magnetically connected even though they are located on opposite hemispheres of the Sun (see also \opencite{Webb:1997}). \inlinecite{WangH_etal_2001} presented a case of the connection between two M-class sympathetic flares from two different ARs (referred to as inter-AR interaction). The two flares, separated by about 1.5 hours, originated from AR~8869 and AR~8872 on 17 February 2000. Both were associated with a filament. During the first flare, the associated filament disappeared and a loop structure connecting the two flaring regions became visible in H$\alpha$ images. Along the path of the loop, a surge starting from one end of the erupted filament quickly excited a set of disturbances propagating toward the other AR, which was followed by the second flare and the second filament disappearance. The speed of the disturbances was estimated at about 80~km\,s$^{-1}$, close to the local Alfv{\'e}n speed. Another similar interaction between two eruptions was presented by \inlinecite{JiangY_etal_2008}, in which a transequatorial jet disturbed inter-AR loops and led to an eruption of those loops. The jet and the loop eruptions drove two CMEs separated by less than 2 hours.
Combining multi-wavelength observations including higher-resolution data from SDO, \inlinecite{Joshi_etal_2016} recently described sympathetic eruptions in two adjacent ARs on 17 November 2013. A scenario of a series of chain reconnections was proposed for these eruptions with the aid of an NLFFF extrapolation. \begin{figure*}[tb] \centering \includegraphics[width=\hsize]{sec2_sympathetic_schrijver} \caption{(a) Three-color composite EUV image combined from the SDO/AIA 211~\AA, 193~\AA, and 171~\AA\ channels on 1 August 2010. Coronal magnetic field lines extrapolated using a potential field source surface (PFSS) model are superimposed, showing the magnetic connections among different regions. Letters denote the locations of the eruptive events during 1\,--\,2 August 2010. (b) GOES 1\,--\,8~\AA\ light curve with the same letters indicated. Adapted from Schrijver and Title (2011).} \label{fg_sec2_sympathetic_schrijver} \end{figure*} Such connections or interactions are not limited to adjacent ARs. Thanks to the stereoscopic observations provided by the STEREO twin spacecraft as well as SOHO and SDO near the Earth, the global connections among flares and CMEs originating from different regions can be explored. A well-studied series of events is the set of interrelated eruptive events during 1\,--\,2 August 2010~(see, {\it e.g.}, \opencite{Schrijver_Title_2011}, \opencite{Harrison:2012}, \opencite{Liu:2012}). The study by \inlinecite{Schrijver_Title_2011} focused on the near-synchronous long-distance interactions between magnetic domains. They identified more than ten events including flares, filament eruptions and CMEs. With the aid of a magnetic field extrapolation method based on the potential field assumption, they investigated the global topology of the magnetic field and its changes. It was found that all the scattered major events were connected via large-scale separators, separatrices and quasi-separatrix layers.
These results are consistent with the study by \inlinecite{Titov_etal_2012}, who also reconstructed the topology of the coronal magnetic field and investigated the connections between the eruptions and the pseudo-streamers, separatrices and quasi-separatrix layers. They proposed that reconnection along these separators, triggered by the first eruption, probably caused the sequential eruptions. The resulting CMEs interacted with each other during their propagation in interplanetary space. A more complete picture of this series of events is given in \inlinecite{Harrison:2012} and \inlinecite{Liu:2012}. The long-distance coupling was further studied with more events by \inlinecite{Schrijver_etal_2013}. They argued that there are several distinct pathways for sympathetic eruptions, {\it e.g.}, waves or propagating perturbations, distortion of or reconnection with the overlying field by distant eruptions, and other (in)direct magnetic connections. The simulations by \inlinecite{Torok_etal_2011}, mentioned in Section~\ref{sec:homologous}, reproduced the successive CMEs from the regions beneath and beside a pseudo-streamer (Figure~\ref{fg_sec2_sim_torok}), which is applicable not only to the eruption of quasi-homologous CMEs from one AR but also to the possible long-distance coupling between different ARs. In their simulations, the breakout reconnection and the weakening of overlying fields due to the neighboring eruptions are responsible for sequential eruptions. The same process was reproduced in the 2.5D MHD simulations by \inlinecite{Lynch_Edmondson_2013}. With a full 3D MHD code under the Space Weather Modeling Framework~\cite{Toth_etal_2012,Holst_etal_2014}, \inlinecite{Jin_etal_2016} numerically studied the long-distance magnetic impacts of CMEs. The coronal environment of 15 February 2011 was established and a CME was initiated by inserting a flux rope based on the \inlinecite{Gibson_Low_1998} analytical solution into AR~11158.
The impacts of the CME on eight ARs, five filament channels, and two quiet-Sun regions were evaluated through the decay index, defined as $-\frac{d\log B(h)}{d\log h}$, where $B$ is the magnetic field and $h$ is the height above the solar surface, and through other impact factors. They found that the impact gets weaker at longer distances and/or for stronger magnetic structures, and suggested that there were two different types of impact. The first is the direct impact due to the CME expansion and the induced reconnection, which may efficiently weaken the overlying field; it is limited spatially to the CME expansion domain. The second is an indirect impact outside the CME expansion domain, where the impact of the CME is propagated through waves during both the eruption and the post-eruption phases, and the overlying field may be weakened, especially when the global magnetic field relaxes to a steady state during the post-eruption phase. Although the mechanisms of long-distance coupling have been extensively studied and well documented, it is still unclear under which circumstances a CME may successfully take off; that is to say, not all of the regions impacted by a CME launch a subsequent CME. The same issue holds for (quasi-)homologous eruptions. \section{Effects of Successive CMEs on Particle Acceleration}\label{sec:SEP} \subsection{Successive CMEs and Solar Energetic Particle Events} Solar energetic particles (SEPs) are known to be accelerated in association with two main phenomena: solar flares and CMEs. Historically, SEPs have been divided into impulsive events of shorter duration, most often, but not always, associated with solar flares \cite{Kahler:2001c}, and gradual events, most often associated with CME-driven shocks \cite{Cane:1986,Reames:1999,Cliver:2004}.
There are typically significant differences between SEPs accelerated through these two mechanisms, including the duration, elemental abundances, spectra, {\it etc.}\ \cite{Mason:1999,Desai:2003,Desai:2006,Tylka:2005}. In the past two decades, with remote observations of CMEs and {\it in situ} measurements of SEPs, it has become well established that the largest gradual SEP events are associated with fast and wide CMEs \cite{kahler92,Reames:1990b,Zank:2000}. Large CME shock fronts are ideal accelerators for charged particles and, therefore, SEPs can occasionally reach energies up to several GeV. SEPs together with cosmic rays play an important role in Space Weather (see, {\it e.g.}, \opencite{usoskin13}). The magnetic field configuration is crucial in determining whether accelerated particles can be detected or not. For Earth-affecting SEP events, the particles are thought to be injected onto field lines rooted in the western hemisphere of the Sun, which, owing to the Parker spiral shape of the IP magnetic field, form the Sun-Earth magnetic connection (see, {\it e.g.}, \opencite{klein08}, \opencite{schwenn06}). Therefore, fast CMEs originating from the western hemisphere of the Sun are more likely to be magnetically connected to Earth; hence, fast and wide, western-limb CMEs are the most common cause of large gradual SEP events \cite{Cane:1988,Gopalswamy:2004}. There are also large SEP events observed with clear sources in the eastern solar hemisphere. From STEREO observations with widely separated spacecraft, it is now recognized that SEPs are indeed widespread phenomena (see, {\it e.g.}, \opencite{Dresing:2012}). However, a simple look at SEP and CME statistics reveals that not all fast, wide, and western CMEs are associated with large SEP events \cite{Ding:2013}. Different scenarios of acceleration processes for electrons and ions have been discussed (see, {\it e.g.}, \opencite{kliem03}).
Among others, coronal waves, CME lateral expansion, as well as CME-CME interaction are possible candidates. Studies of Moreton and EUV waves remain inconclusive and cannot fully rule out coronal waves as an SEP driving agent (against: \opencite{bothmer97}, \opencite{krucker99}, \opencite{miteva14}; pro: \opencite{malandraki09}, \opencite{rouillard12}). CME-CME interaction itself might play a minor role in the SEP production, but a preceding CME might have a significant effect in terms of preconditioning. This idea originated from a statistical study by \inlinecite{Gopalswamy:2002}, which showed that the presence of a previous CME within 12 hours of a wide and fast CME greatly increases the probability that this second, fast CME is SEP-rich \cite{Gopalswamy:2002,Gopalswamy:2004}. The reverse relation was also found: SEP-rich CMEs are about three times more likely than average to be preceded by another eruption \cite{Kahler:2005}. In another study of 57 large SEP events that had intensities $>$ 10 pfu (particle flux units, 1 pfu = 1 proton~cm$^{-2}$~s$^{-1}$~sr$^{-1}$) at $>$ 10 MeV/nuc, \inlinecite{Gopalswamy:2004} showed that there exists a strong correlation between high particle intensities and the presence of preceding CMEs within 24 hours of the main SEP-accelerating CME. As the acceleration of SEPs is believed to happen within the first 10~$R_\odot$ (and most likely within the first 4\,--\,5~$R_\odot$), this timeline makes it less probable that direct shock-shock interaction is responsible for the observed larger probability of SEP events \cite{Richardson:2003,Kahler:2003}. While important, these studies are not enough to determine the physical causes of these statistical relations. Hence, the role of interacting CMEs and their relation to large SEP events still leave many questions open. The preconditioning of the ambient environment close to the Sun has important effects on SEP production.
1) Preceding CMEs (pre-CMEs) not only provide an enhanced seed population, but also lead to stronger turbulence at the second shock, thereby increasing the maximum energy of the particles (this is referred to as the twin-CME scenario, as proposed by \inlinecite{Li:2005} and further developed in \inlinecite{Li:2012}; see Figure~\ref{fig0} and Section~\ref{sec:homologous}). \inlinecite{Ding:2013} and \inlinecite{Ding:2014b} tested the twin-CME scenario against all large SEP events and fast CMEs with speeds $>$ 900~km\,s$^{-1}$ from the western hemisphere in Solar Cycle 23. They suggested that a reasonable choice of the time threshold for separating a single CME and a twin-CME is 13 hours. Using this time delay, they found that 60\% of twin-CMEs led to large SEP events, while only 21\% of single CMEs did. Furthermore, all large SEP events with a peak intensity larger than 100 pfu at $>$ 10 MeV/nuc recorded by the {\it Geostationary Operational Environmental Satellite} (GOES) are twin-CMEs. Note that twin-CMEs may or may not be associated with direct interaction between the CMEs themselves. 2) The change in the nature of closed and open magnetic field lines in the vicinity of an AR may result in a different shock angle. \inlinecite{Tylka:2005} and \inlinecite{Sokolov:2006}, among others, have shown that shock geometry can have a large influence on the SEP flux and intensity. 3) The presence of closed field lines within a CME might trap particles accelerated by a subsequent CME and, hence, decrease the flux of high-energy particles at Earth \cite{Kahler:2003}, or increase their maximum energy \cite{Gopalswamy:2004}. 4) On longer timescales, the presence of a CME in the heliosphere might dramatically modify the Sun-Earth magnetic connectivity, {\it i.e.}, the length and solar footpoints of the field lines connected to Earth. This is clearly visible when a SEP event occurs while an interplanetary CME (ICME) passes over Earth \cite{Kallenrode:2001b,Ruffolo:2006}.
This type of configuration usually results in delaying SEPs, but it might also significantly change the Sun-Earth connectivity \cite{Richardson:1991}. \inlinecite{Masson:2013} investigated how flare-accelerated SEPs may reach open field lines through magnetic reconnection during a CME-associated flare. Similar processes need to occur during CME-CME interaction for accelerated particles to be measured at Earth. \begin{figure} \centerline{\includegraphics[width=0.95\textwidth,clip=]{SEP1} } \caption{The twin-CME scenario first outlined by Li {\it et al.} (2012) and adapted by Kahler and Vourlidas (2014). Left: the preCME drives a turbulent shock region (blue shaded area). The SEP-producing CME (primary CME) is launched close to the preCME but later in time. The turbulent shock region of the preCME, magnetically accessible through interchange reconnection (marked by orange crosses), acts as an amplifier for particles accelerated by the shock of the primary CME. Right: the more developed phase of the preCME-CME interaction, where the primary CME shock has crossed the reconnection region.} \label{fig0} \end{figure} Although the association of preceding CMEs with enhanced SEP intensity is a robust observation, alternative explanations to the twin-CME scenario exist. In a recent work, \inlinecite{Kahler:2014}, making use of an extensive SEP list from \inlinecite{Kahler:2013}, found a relation between the 2 MeV proton background intensities and both the SEP event intensities and the occurrence rates of preceding CMEs. They suggested that preceding CMEs may be an observational signature of enhanced SEP intensities but are not physically coupled with them. This is in contradiction to the events studied in \inlinecite{Gopalswamy:2004} and \inlinecite{Ding:2013}, for which no association of larger SEP events with $>$ 2 MeV backgrounds was found.
We note that most CME-related studies are based on the LASCO CME catalogue \cite{Yashiro_etal_2004}, which contains measurements of CME kinematics and, hence, energies at heights too far away (beyond 10~$R_\odot$) to be directly compared with particle energies. Therefore, the importance of the background effect remains unclear, as does the question of whether 2~MeV particles are the right energy level at which to study ``seed'' particles for SEPs. \subsection{Radio Signatures of CME-CME Interaction} Closely related to CME and SEP production processes, and most probably more closely related to CME-CME interaction events, is the observation of enhanced radio emission in CME-CME events. \inlinecite{Gopalswamy:2001} first reported radio signatures in the long-wavelength range, occurring as intense continuum-like radio emission following an interplanetary type II burst. They linked the timing of the enhancement in the radio emission to the overtaking of a slow CME by a faster one. As shown in Figure~\ref{fig1}, enhanced radio signatures as a consequence of CME-CME interaction are in fact frequently reported (see, {\it e.g.}, \opencite{Reiner:2003}, \opencite{hillaris11}, \opencite{martinez-oliveros12}, \opencite{Ding:2014}, \opencite{Temmer:2014}). We note that the description of such a scenario is intimately connected to the 3D geometry and propagation directions of the two CMEs. While many of the studies are in agreement that the CME interaction is the cause of the radio enhancement, the interpretation is not straightforward. We review this process step by step. Type II radio bursts give information on the propagation and density behavior of the CME-associated shock component \cite{mann95}. As a consequence of interacting CMEs, a continuum-like enhancement of decametric-to-hectometric (DH) type II radio emission may be interpreted as an observational signature of the transit of the shock front of the fast CME through the core of the slow CME.
This presumes that the upstream compression due to the passage of a CME enhances the particle density and, therefore, decreases the background Alfv{\'e}n velocity, which would result in a stronger shock \cite{Gopalswamy:2004,Kahler:2005,Li:2005}. However, the collision not only increases the electron density due to compression but also the magnetic field. In fact, the Alfv{\'e}n speed is expected to be higher inside a CME, which would actually lead to a reduction of the shock Mach number \cite{Kahler:2003,klein06}. Even with the higher density, this would make the overtaking shock weaker and less likely to occur. Numerical simulations actually show that in CME-CME interaction events, large variations in density, Alfv{\'e}n speed, and magnetic field can be expected within the preceding CME \cite{Lugaz:2005b}. In this respect, we note that a reduced Alfv{\'e}n speed within the structures would reduce the efficiency of reconnection processes, and CME ``cannibalism'' might not work efficiently. This is confirmed by \textit{in situ} measurements of CME-CME interaction events showing rather intact, separate flux ropes (see, {\it e.g.}, \opencite{martinez-oliveros12}). Nevertheless, the merging process does take place, as shown, among others, by \inlinecite{Maricic:2014}, where possible reconnection outflows are observed in \textit{in situ} data at 1~AU for the CME-CME interaction event series of 13\,--\,15 February 2011. However, the time available may be too short for two flux ropes to merge completely, and a fully merged structure might only be observed beyond distances of 1~AU -- more details about CME-CME interaction processes are found in Section~\ref{sec:helio}.
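The competing effects of density and magnetic field strength on the shock can be made concrete with a back-of-the-envelope estimate of the Alfv{\'e}n speed, $v_A = B/\sqrt{\mu_0\rho}$. The sketch below uses generic illustrative values for the ambient wind and for the interior of a preceding CME; these numbers are assumptions for the purpose of the estimate, not values from any of the studies cited above.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability [T m A^-1]
M_P = 1.673e-27        # proton mass [kg]

def alfven_speed(b_nt, n_cm3):
    """Alfven speed v_A = B / sqrt(mu0 * rho), returned in km/s."""
    b = b_nt * 1e-9              # nT -> T
    rho = n_cm3 * 1e6 * M_P      # cm^-3 -> kg m^-3 (pure proton plasma assumed)
    return b / math.sqrt(MU0 * rho) / 1e3

# Illustrative (assumed) upstream conditions near 1 AU:
va_wind = alfven_speed(b_nt=5.0, n_cm3=5.0)    # ambient solar wind
va_cme = alfven_speed(b_nt=15.0, n_cm3=10.0)   # inside a preceding CME

# A shock front moving 150 km/s faster than the upstream plasma:
du = 150.0
print(f"wind: v_A = {va_wind:.0f} km/s, Alfven Mach number = {du / va_wind:.1f}")
print(f"CME:  v_A = {va_cme:.0f} km/s, Alfven Mach number = {du / va_cme:.1f}")
```

Even though the assumed density inside the ejecta is twice the ambient value, the three-times-stronger field roughly doubles $v_A$ (from about 50 to about 100~km\,s$^{-1}$), halving the Alfv{\'e}n Mach number of an overtaking shock, in line with the weakening discussed above.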
\inlinecite{Ding:2014} and \inlinecite{Temmer:2014} are two of the few examples using stereoscopic observations with the Graduated Cylindrical Shell (GCS) model of \inlinecite{Thernisien:2011} to determine the real directions and heights of two successively erupting CMEs rather than plane-of-sky heights and projected directions. Using this approach, \inlinecite{Ding:2014} found that the start time of type II radio emissions coincided with the interaction between the front of the second CME and the trailing edge of the first CME, an interaction which occurred around 6~$R_\odot$, close to the distance of peak SEP acceleration. This is not supported by \inlinecite{Temmer:2014}, who concluded that the timing of the enhanced type II bursts did not match the time of interaction of the CMEs, but that they could be related to a kind of shock-streamer interaction \cite{shen13}. In the event under study, the flanks of the following CME might have interacted with the field which was opened and compressed by the preceding CME. Another scenario for the occurrence of continuum-like radio emission is reconnection between the poloidal field components of the interacting CMEs \cite{Gopalswamy:2004}. In fact, enhanced type II radio signatures may be the signatures of several different types of interactions. \begin{figure} \centerline{\includegraphics[width=0.95\textwidth,clip=]{radio2}} \caption{Selection of observations of enhanced type II radio bursts associated with CME-CME interaction events (identified by the solar event naming convention, the solar object locator SOL): a) Gopalswamy {\it et al.}, 2001 (SOL-2000-06-10); b) Reiner {\it et al.}, 2003 (SOL-2001-01-20); c) Martinez-Oliveros {\it et al.}, 2012 (SOL-2011-08-01); d) Ding {\it et al.}, 2014 (SOL-2013-05-22); e) Temmer {\it et al.}, 2014 (SOL-2011-02-15).} \label{fig1} \end{figure} Information on the magnetic field topology involved in the process of CME-CME interaction might be given by radio type III bursts.
Radio type III bursts are generated by energetic electron beams guided along quasi-open magnetic field lines. Type N radio bursts additionally show a drift reversal, classically interpreted as electrons returning along the field lines due to the magnetic mirror effect. As CMEs manifest themselves as a sudden change in the generally outward-directed IP magnetic field and in the electron density distribution, \inlinecite{demoulin07} concluded that decametric type N radio bursts are most likely not caused by mirroring effects but by geometrical effects resulting from the magnetic restructuring in CME-CME interaction events. \inlinecite{hillaris11} report on peculiar type III radio bursts due to accelerated electrons that might be disrupted by the turbulence near the front of a preceding CME (see also \opencite{reiner99}). Results from \inlinecite{Temmer:2014} showed that the observed type II enhancements, which were associated with type III bursts, stopped at frequencies related to the downstream region of the extrapolated type II burst, as if it were a barrier for particles entering the magnetic structure \cite{macdowall89}. In this respect, there have been several attempts to explore the magnetic connectivity to interplanetary observers. Some of them have used realistic MHD simulations combined with a simple particle source input at the inner boundary in the inner heliosphere and ballistic particle propagation \cite{Luhmann:2010}, while others employ an idealized shock surface and Parker spiral, together with physics-based transport \cite{Aran:2007,Rodriguez-Gasen:2014}. \inlinecite{Masson:2012} investigated the interplanetary magnetic field configurations based on observations during ten ground-level enhancement (GLE) events, and concluded that particle arrival times were significantly later than what would be expected under a Parker spiral field, illustrating how the magnetic connectivity to a given observer cannot be assumed to be static.
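For reference, the nominal (static) Parker-spiral connection can be estimated from the solar rotation rate and the solar wind speed: a field line reaching an observer at heliocentric distance $r$ is rooted at a longitude $\phi \approx \Omega_\odot r / v_{\rm sw}$ west of the observer. The short sketch below applies this standard estimate with generic wind speeds; the numerical values are illustrative assumptions, not taken from the cited studies.

```python
import math

OMEGA_SUN = 2.7e-6   # solar rotation rate [rad/s] (approximately the synodic rate)
AU_KM = 1.496e8      # astronomical unit [km]

def footpoint_longitude(v_sw_kms, r_au=1.0):
    """Longitude (deg, west of the observer) of the nominal Parker-spiral footpoint."""
    phi_rad = OMEGA_SUN * r_au * AU_KM / v_sw_kms
    return math.degrees(phi_rad)

for v_sw in (350.0, 450.0, 700.0):
    print(f"v_sw = {v_sw:4.0f} km/s -> footpoint near W{footpoint_longitude(v_sw):.0f}")
```

For slow wind (350\,--\,450~km\,s$^{-1}$) Earth connects to roughly W50\,--\,W65, which is why western sources are statistically favored; the observations discussed above show that the actual connectivity often departs from this static picture.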
It may be modified before and during the eruption: by other structures between the Sun and the Earth, such as other CMEs, solar wind streams and corotating interaction regions (CIRs), or by reconnection occurring close to the solar surface. Recently, \inlinecite{Kahler:2016} tested the appropriateness of the Parker spiral approximation for SEP studies using the Wang-Sheeley-Arge (WSA: \opencite{Wang:1990}, \opencite{Arge:2000}) model, and reached similar conclusions. One limitation of these studies is that none of them includes a magnetic ejecta driving a shock wave which is initiated in the low corona, {\it i.e.}\,where particle acceleration is known to occur. A significant drawback of these observational studies comes from the limitation of currently available data, which may only reveal the consequences of CME-CME interaction but not the interaction process itself. A more direct insight into the CME-CME interaction process and the related plasma and magnetic field parameters could be gained from {\it in situ} data. However, most CME-CME collisions occur far from where plasma and magnetic field parameters are actually monitored. An exception is the 30 September 2012 event, which revealed the interaction between two CMEs close to 1~AU (the interaction probably started at $\sim$0.8~AU), as shown in studies by \inlinecite{Liu:2014b} and \inlinecite{Mishra:2015b}. The {\it in situ} instruments aboard {\it Solar Orbiter} and {\it Solar Probe+} (to be launched in 2018), which will travel at close distances to the Sun, will be of great interest and will provide a valuable complementary view of the CME-CME interaction processes. \section{The Interaction of CMEs in the Inner Heliosphere}\label{sec:helio} Direct observations of CME-CME interaction became possible in the mid-1990s with the larger field-of-view of LASCO/C3 (up to 32~R$_\odot \sim$ 0.15~AU), which yielded the first reported white-light observations of CME-CME interaction \cite{Gopalswamy:2001}.
Although the interaction of successive CMEs at distances beyond the LASCO/C3 field-of-view can often be deduced from their white-light time-distance track or from radio emissions \cite{Reiner:2001}, only a few articles focused on the analysis of direct interaction following this first report \cite{Reiner:2003}. In the meantime, there had been a resurgence of interest regarding CME-CME interaction based on the analysis of {\it in situ} measurements near L1 \cite{Burlaga:2002,Burlaga:2003,Wang:2002,Wang:2003,Wang:2003a}. Reported observations became relatively routine with the development of heliospheric imaging, first with SMEI starting in 2003 and, second, with the heliospheric imagers (HIs) onboard STEREO starting in 2007 (see Figure~7 for some examples). Although a number of SMEI observations focused on series of CMEs \cite{Bisi:2008,Jackson:2008}, their analyses did not dwell on the physical processes occurring during CME-CME interaction. However, one of the very first CME events observed remotely by STEREO was in fact a series of two interacting CMEs \cite{Harrison:2009,Lugaz:2008b,Lugaz:2009b,Lugaz:2009c,Odstrcil:2009,Webb:2009}. During the period from the first remote detection in 2001 to routine remote observations in the late 2000s, numerical simulations were used to fill the gap between the upper corona and the near-Earth space. Early simulations include the work by \inlinecite{Wu:2002}, \inlinecite{Odstrcil:2003} and \inlinecite{Schmidt:2004}. In the past decade, the combination of these three approaches (remote observations, {\it in situ} measurements and numerical simulations) has resulted in a much deeper understanding of the physical processes occurring during CME-CME interaction. \subsection{Changes in the CME Properties} One of the essential aspects of CME-CME interaction is the change in CME properties, such as their speed, size, expansion rate, {\it etc}.
This may directly affect space weather forecasting, as not only may the CME speed and direction change (modifying the hit/miss probability and the expected arrival time) but also its internal magnetic field (modifying the expected geomagnetic response). In addition, understanding how CME-CME interaction changes CME properties can deepen our understanding of the internal structure of CMEs. Many studies have investigated the change in the speed of CMEs due to their interaction, both through remote observations and numerical simulations. Some studies have focused on the nature of the collision, in terms of the restitution coefficient and of inelastic {\it vs.}\ elastic {\it vs.}\ super-elastic collisions \cite{CShen:2012,Mishra:2015a,Mishra:2015b,Colaninno:2015,Mishra:2016}. Different natures of collision seem to be possible (see, for example, the review by \opencite{FShen:2016b}). In particular, if CME-CME collisions can be super-elastic, this raises the questions of which circumstances yield an increase of the total kinetic energy, and of what the source of the kinetic-energy gain is. However, CMEs are large-scale magnetized plasma structures propagating in the solar wind, and therefore, the ``collision'' of CMEs is a much more complex process than the classic collision of ordinary objects. There are many factors causing the complexity: 1) depending on the speed of the CMEs, the interaction between two CMEs may involve zero, one or two CME-driven shocks, some of which may dissipate during the interaction, 2) the interaction should take at least one Alfv{\'e}n crossing time of a CME.
With a typical CME size at 0.5~AU of 20\,--\,25~R$_\odot$, and a typical Alfv{\'e}n speed inside a CME of 200\,--\,500~km\,s$^{-1}$, the interaction should take 8\,--\,24 hours, 3) the CME speed can change significantly, even at large distances from the Sun, due to their interaction with the solar wind \cite{Temmer:2011,WuCC:2016}, 4) CME-CME interaction is inherently a three-dimensional process, and the changes in kinematics may differ greatly depending on the CME part that is considered \cite{Temmer:2014}. Using numerical simulations, it is possible to control some of these effects, for example by performing simulations with or without interaction but with identical CME properties, and by knowing the velocity field in the entire 3D domain \cite{FShen:2013,FShen:2016}. This has revealed that the momentum exchange with the ambient solar wind during CME-CME interaction may be neglected in some cases. \begin{figure*}[tb] \centerline{\includegraphics[width=0.98\textwidth,clip=]{Fig7.png}} \label{fig:CME_HI} \caption{Observations of CME-CME interaction in LASCO/C3 and STEREO/HI1 fields-of-view. (a) shows the two CMEs from the initial report of CME-CME interaction by Gopalswamy {\it et al.} (2001). (b) shows base-difference images on 25 May 2010 at 01:29~UT (left: HI1A, right: HI1B) corresponding to the event studied in Lugaz {\it et al.} (2012). (c) is a running-difference HI1A image of the event of 15 February 2011 studied in Temmer {\it et al.} (2014). (d) is a base difference image of the HI1A image of the 10 November 2012 event studied by Mishra, Srivastava and Chakrabarty (2015).
(c) is reproduced by permission of the AAS.} \end{figure*} As CME-CME interaction involves a faster, second CME overtaking a slower, leading CME, the end result is to homogenize the speed, as was noted from {\it in situ} measurements in \inlinecite{Burlaga:2002}, \inlinecite{Farrugia:2004}, and through simulations by \inlinecite{Schmidt:2004} and \inlinecite{Lugaz:2005b}, among others, and this occurs independently of the relative speed of the two CMEs. One main issue is to understand what determines the final speed of the complex ejecta which was formed through the CME-CME interaction. In an early work, \inlinecite{Wang:2005} found that, in the absence of CME-driven shock waves, the final speed is determined by that of the slower ejecta, whereas \inlinecite{Schmidt:2004} and \inlinecite{Lugaz:2005b} found that when the CMEs drive shocks, the final speed is primarily determined by that of the faster ejecta, as the shock's propagation through the first magnetic ejecta accelerates it to a speed similar to that of the second ejecta (see Figure~8). Most recent works have combined remote observations and numerical simulations. It now appears relatively clear that the final speeds depend on the relative masses of the CMEs, as well as their approaching speed \cite{FShen:2016}, and, hence, on their relative kinetic energy. \inlinecite{Poedts:2003}, based on 2.5D simulations, noted that the acceleration of the first CME increases as the mass of the second CME increases and that an apparent acceleration of the first CME is in fact due to a slower than expected deceleration. Making the situation more complex are the changes in the CME expansion during the propagation as well as the fact that remote observations can be used to determine the velocity of the dense structures, but not really that of the low-density magnetic ejecta. 
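A zeroth-order way to frame the dependence of the final speeds on the relative masses and approaching speed is a one-dimensional two-body collision in which momentum exchange with the ambient wind is neglected (which, as noted above, the simulations suggest can sometimes be justified). The masses, speeds, and restitution coefficient below are arbitrary illustrative values, not parameters from the cited studies.

```python
def collision_outcome(m1, v1, m2, v2, cr=0.0):
    """1D collision of a leading CME (m1, v1) overtaken by a faster CME (m2, v2).

    cr is the coefficient of restitution: 0 = perfectly inelastic (both end at
    the center-of-mass speed), 1 = elastic, > 1 = 'super-elastic' (net gain of
    kinetic energy).  Momentum is conserved for any cr.  Speeds in km/s.
    """
    v_cm = (m1 * v1 + m2 * v2) / (m1 + m2)   # center-of-mass speed
    u1 = v_cm + cr * (v_cm - v1)             # leading CME after the collision
    u2 = v_cm + cr * (v_cm - v2)             # trailing CME after the collision
    return u1, u2

# Illustrative: a 1e13 kg CME at 400 km/s overtaken by a 2e13 kg CME at 800 km/s
u1, u2 = collision_outcome(1e13, 400.0, 2e13, 800.0, cr=0.0)
print(f"perfectly inelastic: both CMEs end up near {u1:.0f} km/s")
```

In this picture the heavier, faster overtaking CME dominates the merged speed, consistent with the dependence on relative masses and kinetic energy noted above; a restitution coefficient above 1 corresponds to the super-elastic collisions discussed earlier.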
As discussed in \inlinecite{Lugaz:2005b} and further in \inlinecite{Lugaz:2009c}, when the trailing shock impacts the leading magnetic ejecta, the dense sheath behind the shock must remain between the two magnetic ejecta, even as the shock propagates through the first ejecta. As HIs observe density structures, the observations may not be able to capture the shock propagating through a low-density ejecta. \begin{figure*}[tb] \centering \centerline{\includegraphics[width=0.98\textwidth,clip=]{Fig8.png}} \label{fig:CME_speed} \caption{Changes in the CME properties due to CME-CME interaction. (a) shows the changes in speed associated with the August 2010 events from Temmer {\it et al.} (2012); see also Liu {\it et al.} (2012). (b) and (c) show the change of speeds for simulated CMEs from the work of Lugaz {\it et al.} (2005) and Shen {\it et al.} (2012b), respectively. (d) shows the change in radial size and angular width for a case with a shock overtaking a CME (dash) {\it vs}.\ an isolated CME (solid) from Xiong {\it et al.} (2006). (a) and (b) are reproduced by permission of AAS. (c) and (d) are reproduced by permission of John Wiley and Sons.} \end{figure*} In addition to changes in velocity, CME-CME interaction may result in the deflection of one CME by another \cite{Xiong:2009,Lugaz:2012b,CShen:2012}. Combining these works, it appears that the deflection can reach up to 15$^\circ$ when the two CMEs are initially about 15\,--\,20$^\circ$ apart. Such angular separations are quite frequent between successive CMEs, as they correspond to a delay of about one day for two CMEs originating from the same active region (due to solar rotation). This change in direction must be taken into account when deriving the changes in velocity, as done in \inlinecite{CShen:2012} and \inlinecite{Mishra:2016}. Next, we discuss the changes in the CME internal properties, such as radial extent, radial expansion speed, and magnetic field strength.
Only the radial extent can be reliably derived from remote observations \cite{Savani:2009,Nieves:2012,Lugaz:2012b}. Both numerical simulations and remote observations confirm that the radial extent of the leading CME plateaus during the main phase ({\it i.e}.\ when the speed of both CMEs changes significantly) of the interaction \cite{Schmidt:2004,Lugaz:2005b,Xiong:2006,Lugaz:2012b,Lugaz:2013b}, and this is typically associated with a ``pancaking'' of the leading CME \cite{Vandas:1997}. It should be noted that it appears nearly impossible for the CME radial extent to decrease; rather, the compression of the back of the leading CME is associated with a slowing down of its radial expansion. In \inlinecite{Lugaz:2005b}, the authors discussed how the shock propagation through the leading CME is the main way in which the expansion slows. It is as yet unclear whether the compression differs between cases with and without shocks. What clearly changes is the resulting expansion of the leading CME after the end of the main interaction phase ({\it i.e}.\ after the shock exited the ejecta). In \inlinecite{Xiong:2006}, the authors found that the leading CME overexpands to return to its expected size; as such, the compression is only a temporary state. This was confirmed by a statistical study of the magnetic ejecta radial size at different distances \cite{Gulisano:2010}, as well as by one study where remote observations indicated compression but, a day after the interaction ended, when the CME impacted Earth, the {\it in situ} measurements indicated a typical CME size \cite{Lugaz:2012b}. In numerical simulations with two magnetic ejecta (see Figure~9), the rate of over-expansion is found to depend on the rate of reconnection between the two ejecta; as such, it depends on the relative orientation of the two magnetic ejecta \cite{Schmidt:2004,Lugaz:2013b}, but also probably on their density.
The potential full coalescence of two ejecta into one was discussed in a few studies \cite{Odstrcil:2003,Schmidt:2004,Chatterjee:2013} but has not been investigated in detail with realistic reconnection rates. \subsection{Changes in the Shock Properties} In addition to changes in the CME properties, the fast forward shocks propagating inside the magnetic ejecta encounter highly varying and unusual upstream conditions, affecting the shock properties. Most of what is known about the changes in shock properties was learnt from numerical simulations; however, there have been many reported detections of shocks propagating inside a magnetic cloud or magnetic ejecta at 1~AU \cite{Wang:2003a,Collier:2007,Richardson:2010b,Lugaz:2015a,Lugaz:2016}. \inlinecite{Vandas:1997} noted that a shock propagates faster inside a magnetic cloud due to the enhanced fast magnetosonic speed inside, which may result in shock-shock merging close to the nose of the magnetic cloud but two distinct shocks in the flanks. \inlinecite{Odstrcil:2003} noted that, associated with this acceleration, the density jump becomes smaller. \inlinecite{Lugaz:2005b} performed an in-depth analysis of the changes in the shock properties, dividing the interaction into four main phases: i) before any physical interaction, when the shock propagates faster than an identical isolated shock due to the lower density in the solar wind, ii) during the shock propagation inside the magnetic cloud, when the shock speed in a rest frame increases and its compression ratio decreases, confirming the findings of \inlinecite{Odstrcil:2003}, iii) during the shock propagation inside the dense sheath, when the shock decelerates, as pointed out by \inlinecite{Vandas:1997}, and iv) the shock-shock merging when, as predicted by MHD theory, a stronger shock forms followed by a contact discontinuity.
If the shock is weak or slow enough, it may dissipate as it propagates into the region of higher magnetosonic speed inside the magnetic cloud \cite{Xiong:2006,Lugaz:2007}. High spatial resolution is necessary to resolve weak shocks in MHD simulations, and low resolution may affect the prediction of shock dissipation. The merging or dissipation of shocks was noted by \inlinecite{Farrugia:2004} when Helios measured four shocks at 0.67~AU and ISEE-3 measured only two shocks later on at 1~AU. Shock-shock interaction was studied by means of 2-D MHD simulations by \inlinecite{Poedts:2003}, where the authors identified a fast forward shock and a contact discontinuity as the result of two fast forward shocks merging. \begin{figure*}[tb] \centering \includegraphics[width=6cm]{3d_16h_CME2c_field_Byiso_pm170nT.png} \includegraphics[width=6cm]{FShen_2013.png} \label{fig:CME_simu} \caption{3D MHD simulations of CME-CME interaction. (a) shows a case simulated by Lugaz {\it et al.} (2013) with two CMEs with perpendicular orientations. 3D magnetic field lines are color-coded with the north-south $B_z$ component of the magnetic field. Isosurfaces show regions of east-west $B_y$ component of the magnetic field equal to $\pm$170~nT (pink positive, fuchsia negative). The CME fronts are at about 0.3 and 0.15~AU. (b) shows the simulation of Shen {\it et al.} (2013) with isosurfaces of radial velocity and magnetic field lines in white. (b) is reproduced by permission of John Wiley and Sons.} \end{figure*} \subsection{Cases Without Direct Interaction} In a similar way to how the succession, but not the interaction {\it per se}, of CMEs can affect the resulting flux of SEPs, a succession of CMEs, even without direct interaction, may affect the properties of the second (and subsequent) CMEs.
\inlinecite{Lugaz:2005b} performed a simulation of two CMEs initiated 10 hours apart with the same parameters (initial energy, size, orientation, {\it etc.}); the second CME did not decelerate as much as the first one and therefore had a faster speed and a faster shock wave, even before the interaction started. This result was confirmed in studies with different orientations \cite{Lugaz:2013b}. Many of the shortest Sun-to-1~AU transit times of CMEs appear to be associated with a succession of non-interacting CMEs. It has been suggested that this was the case for the Carrington event of 1859 \cite{Cliver:2004b}, the Halloween events of 2003 \cite{Toth:2007}, and the 23 July 2012 event \cite{Liu:2014}, each of which is a case where the propagation lasted less than 20 hours, {\it i.e}.\ the average transit speed was in excess of 2000~km\,s$^{-1}$. Note that only 15 events propagated from Sun to Earth in less than 24 hours in the past 150 years \cite{Gopalswamy:2005}. \inlinecite{Liu:2014} and \inlinecite{Liu:2015} proposed that such a succession of non-interacting CMEs may produce a ``perfect storm'' with the most extreme geoeffectiveness. A careful analysis of which situations result in the most geoeffective storms has yet to be undertaken. The main reason for the reduced deceleration is that the first CME removes some of the ambient solar wind mass, resulting in less dense and faster flows ahead of the subsequent CME. As such, the second CME experiences less drag and propagates faster \cite{Lugaz:2005b,Liu:2014,Temmer:2015}. \subsection{Resulting Structures} The complex interaction between different shock waves and magnetic ejecta can result in a variety of structures at 1~AU. The ``simplest'' one is a multiple-magnetic cloud (multiple-MC) event \cite{Wang:2002}, in which a single dense sheath precedes two (or more) distinct MCs (or MC-like ejecta).
The two MCs are separated by a short period of large plasma $\beta$, corresponding to hot plasma with a weaker and more turbulent magnetic field \cite{Wang:2003}, which may be an indication of reconnection between the ejecta (see Figure~10a). Typically, both MCs have a uniform speed profile, {\it i.e}.\ they propagate with approximately the same speed. The prototypical example is the 31 March\,--\,1 April 2001 multiple-MC event \cite{Wang:2003,Berdichevsky:2003,Farrugia:2006}. Such structures have been successfully reproduced in simulations \cite{Wang:2005,Lugaz:2005b,Xiong:2007,FShen:2011}. These simulations reveal that the dense sheath ahead of the two MCs may be the result of the merging of two shock waves. In this case, the sheath is expected to be composed of a leading hot part (the sheath of the newly merged shock) followed by a denser and cooler section (material which has been compressed twice; see \opencite{Lugaz:2005b}). The extremely dense sheath preceding the March 2001 event may be related to such a shock-shock merging. It is also possible that the shock driven by the overtaking CME dissipates as it propagates inside the first MC, which would also result in a single sheath preceding two MCs. Multiple-MC events correspond to cases in which the individual MCs can be distinguished, although the uniform speed and the single sheath indicate that they interacted. When multiple ejecta cannot be distinguished, the resulting structure is typically referred to as a complex ejecta or compound stream \cite{Burlaga:2003}. These structures often have a decreasing speed profile, typical of a single event, but with complex magnetic fields and a duration of several days (see Figure~10c).
Such complex streams may be caused by a number of factors, including 1) interaction close to the Sun, resulting in quasi-cannibalism \cite{Gopalswamy:2001}, 2) a relative orientation of the successive ejecta favorable for reconnection \cite{Lugaz:2013b}, or 3) interaction between more than two CMEs \cite{Lugaz:2007}. Some events, for example that of 26\,--\,28 November 2000, which involved between three and six successive CMEs, have been analyzed as a multiple-MC event \cite{Wang:2002} or as complex ejecta \cite{Burlaga:2002}. Even if individual ejecta can be distinguished, they are of short duration (for this event, between 3 and 8 hours) and the magnetic field is not smooth. In the simulation of \inlinecite{Lugaz:2007}, it was found that the complex interaction of three successive CMEs and the associated compression resulted in a period of enhanced magnetic field and higher speed at 1~AU, but without individual ejecta being identifiable. In this sense, complex ejecta at 1~AU are similar to the merged interaction regions often measured in the outer heliosphere, which correspond to the merging of many successive CMEs \cite{Burlaga:1997,leRoux:1999}. It is also possible that the interaction between a fast and massive CME and a slow and small CME results in cannibalism, whereas the interaction of CMEs with similar sizes and energies results in a multiple-MC event. Numerical simulations have focused primarily on the interaction of CMEs of comparable energies and sizes, but a more complete investigation of the effect of different initial sizes and CME energies is required, building on the work of \inlinecite{Poedts:2003}. It has also been proposed that seemingly isolated but long-duration events (events that last 36 hours or more at 1~AU) may be associated with the interaction of successive CMEs of nearly perpendicular orientation (see Figure~10b).
\inlinecite{Dasso:2009} performed the analysis of the 15 May 2005 CME, including {\it in situ} measurements, radio emissions, as well as remote observations (H$\alpha$, EUV, magnetogram, and coronagraphic). They concluded that this large event, which lasted close to 2 days at 1~AU, was likely to be associated with two non-merging MCs of nearly perpendicular orientation. The simulations of \inlinecite{Lugaz:2013b} included a case in which two CMEs were initiated with near-perpendicular orientations, as well as two CMEs with the same initial orientation. In the latter case, the authors found that the resulting structure was a multiple-MC event; in the former case, the resulting structure was a long-duration transient having many of the characteristics of a single ejecta. \inlinecite{Lugaz:2014} compared the result at 1~AU of this simulation with the 19\,--\,22 March 2001 CME, another 48-hour period of smooth and slowly rotating magnetic field, monotonically decreasing speed, and lower than expected temperature. In both the simulation and the data, the second part of the event was characterized by a nearly unidirectional magnetic field. The difference between complex ejecta and this type of transient lies in the smoothness of the magnetic field. Both the events studied by \inlinecite{Dasso:2009} and \inlinecite{Lugaz:2014} had been characterized as single, isolated CMEs, but their size (twice as large as a typical MC) and the variation of the magnetic field make this interpretation unlikely. \begin{figure*}[tb] \centering \includegraphics[width=11.5cm]{Fig12.png} \caption{{\it In situ} measurements of CME-CME interaction. The panels show, from top to bottom, the magnetic field strength, the $B_z$ component in Geocentric Solar Magnetospheric (GSM) coordinates, the proton density, the temperature (expected temperature in red), the velocity, the Sym-H index (Dst with crosses, AL in red), and the dayside magnetopause minimum location following Shue {\it et al.} (1998). Shocks are marked with red lines, and CME boundaries with blue lines (dashed for internal boundaries).} \label{fig:Insitu} \end{figure*} These three cases (multiple-MC event, complex ejecta, and long-duration event) correspond to full interaction, in the sense that the resulting structure at 1~AU propagates with a single speed profile (typically monotonically decreasing). The main examples of partial, ongoing interaction are associated with the propagation of a fast forward shock wave inside a preceding ejecta \cite{Wang:2003a,Collier:2007,Lugaz:2015a}. There is a clear difference between the part of the first CME which has been accelerated by the overtaking shock and its front, which is still in ``pristine'' conditions (see Figure~10d). In some cases, the back of the first CME is in the process of merging with the front of the second CME \cite{Liu:2014b}, {\it i.e}.\ a complex ejecta or a long-duration event is in the process of forming. In the study of \inlinecite{Lugaz:2015a}, the authors identified 49 such shocks propagating within a previous magnetic ejecta between 1997 and 2006. Most such shocks occur towards the back of the ejecta, and shocks tend to be slower as they get closer to the CME front. This can be interpreted as an indication that a number of shocks dissipate inside a CME before exiting it. The two main reasons are that CMEs tend to be expanding with a decreasing speed profile, and that the peak Alfv{\'e}n speed typically occurs close to the center of the magnetic ejecta. The latter reason means that shocks become weaker as they approach the center of the ejecta. The former reason implies that shocks propagate into higher and higher upstream speeds as they move from the back to the front of the CME. \inlinecite{Lugaz:2015a} reported cases when the speed at the front of the first CME exceeds the speed of the overtaking shock, {\it i.e}.\ because of the CME expansion, the shock cannot overtake the front of the CME.
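The dayside magnetopause minimum location plotted in the {\it in situ} figure above follows the empirical model of Shue {\it et al.} (1998). A minimal sketch of its subsolar standoff-distance formula is given below; the coefficients are as commonly quoted, and the whole function should be read as an illustrative assumption rather than a definitive implementation:

```python
import math

def shue1998_standoff(bz_nT, pdyn_nPa):
    """Subsolar magnetopause standoff distance in Earth radii, following the
    empirical Shue et al. (1998) model (coefficients as commonly quoted)."""
    return (10.22 + 1.29 * math.tanh(0.184 * (bz_nT + 8.14))) \
        * pdyn_nPa ** (-1.0 / 6.6)

# Quiet solar wind (Bz = 0 nT, Pdyn = 2 nPa) gives a standoff near 10 R_E;
# a strongly southward field with high dynamic pressure pushes the
# magnetopause inside geostationary orbit (6.6 R_E), as can happen when a
# shock propagates inside a preceding ejecta.
quiet = shue1998_standoff(0.0, 2.0)
storm = shue1998_standoff(-30.0, 20.0)
print(round(quiet, 1), round(storm, 1))
```

The strong sensitivity to both $B_z$ and dynamic pressure is why compressed sheaths behind shocks inside CMEs are so effective at eroding the dayside magnetosphere.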
These four structures, often observed at 1~AU, represent four ways in which CME-CME interaction can affect our geospace, each differing from the typical interaction of a CME with Earth's magnetosphere. We give some details on the geoeffectiveness of these structures in the following section. \section{The Geoeffectiveness of Interacting CMEs}\label{sec:geo-effect} Compared to isolated ejecta, assessing the geomagnetic disturbances brought about by interacting CMEs requires accounting for the changes in parameters resulting from their interaction. These changes typically, though not always, enhance the geoeffectiveness. Individual CMEs and their subset, magnetic clouds \cite{Burlaga:1981}, are known to be major sources of strong magnetospheric disturbances \cite{Gosling:1991}. This is mainly because they often contain a slowly-varying negative north-south ($B_z$) magnetic field component which can reach extreme values (see, {\it e.g.}, \opencite{Farrugia:1993}, \opencite{Tsurutani:1988}). The passage of ejecta at 1 AU typically takes about one day, so these disturbances can last for many hours. By contrast, the passage of interacting ejecta as well as complex ejecta can take $\sim$3 days at 1 AU \cite{Burlaga:2002,Xie:2006}, so that the magnetosphere is under strong solar wind forcing for a much longer time. Below we show an example of a storm where the Dst index, which monitors the strength of the ring current, remained below $-200$~nT for about 21 hours. Thus, from the point of view of space weather, CME-CME interactions are key players. Clearly, to assess the overall impact of ejecta interactions on space weather, it is crucial to determine how frequently they occur at 1~AU. Some studies have addressed this issue. Examining the causes of major geomagnetic storms in the 10-year period 1996-2005, \inlinecite{Zhang:2007} showed that at least 27\% of the intense storms (out of 88 storms from 1997 to 2005) were due to multiple CMEs.
Taking a different approach, \inlinecite{Farrugia:2006b} used the epsilon parameter of \inlinecite{Akasofu:1981} to define so-called ``large events'', employing this parameter to estimate the energy extracted from the solar wind by the magnetosphere and the resulting powering of the magnetosphere. In the period 1995-2003, they found that six of the 16 largest events ($\sim$37\%) involved CMEs interacting with each other and forming complex ejecta. One may conclude that CME-CME interactions are important drivers of extreme space weather. Indeed, some recent studies using multi-spacecraft observations at different radial distances and wide azimuthal separations have illustrated how these interactions play a leading role. A case in point is the interplanetary and geomagnetic consequences of the multiple solar eruptions which took place on 1 August 2010 \cite{Moestl:2012,Temmer:2012,Liu:2012}. CME-CME interactions are more frequent during solar maximum conditions, when the number of CMEs erupting from the Sun may reach half a dozen {\it per} day. As an illustration, Figure~11 shows {\it in situ} measurements by {\it Wind} of a 31-day period during the maximum phase of Solar Cycle 23. From top to bottom, the figure shows the proton density, bulk speed, temperature, the eleven ICMEs identified in this period (from \opencite{Richardson:2010}), the $B_z$ component of the magnetic field (in GSM coordinates), the total field, the proton $\beta$, the storm-time Dst index, and the planetary Kp index. The red trace in the third panel from the top shows the expected proton temperature for normal solar wind expansion \cite{Lopez:1987}. It can be seen that the measured temperature is often well below this value. This is an often-used indicator of ejecta material in space (following the initial report by \opencite{Gosling:1973}; used to select CMEs by \opencite{Richardson:1993}, \citeyear{Richardson:1995}).
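The epsilon parameter used above is a proxy for the solar wind power input into the magnetosphere. A hedged sketch of its commonly quoted form follows; the functional form, the scale length $l_0 = 7\,R_{\rm E}$, and the input values are assumptions of this illustration, not taken from the cited study:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability [H/m]
L0 = 7 * 6.371e6       # empirical scale length of 7 Earth radii [m] (assumed)

def akasofu_epsilon(v_kms, b_nT, clock_angle_deg):
    """Akasofu (1981) epsilon parameter [W], as commonly quoted:
    epsilon = (4*pi/mu0) * v * B^2 * sin^4(theta/2) * l0^2,
    with theta the IMF clock angle (180 deg = purely southward)."""
    v = v_kms * 1e3        # km/s -> m/s
    b = b_nT * 1e-9        # nT -> T
    theta = math.radians(clock_angle_deg)
    return (4 * math.pi / MU0) * v * b ** 2 * math.sin(theta / 2) ** 4 * L0 ** 2

# A fast flow with a strong, purely southward field yields a power input of
# a few terawatts, the regime of the "large events" discussed in the text;
# a purely northward field (theta = 0) gives no coupling at all.
print(f"{akasofu_epsilon(600.0, 20.0, 180.0):.2e} W")
```

The $\sin^4(\theta/2)$ factor is what makes southward fields, and hence compressed southward sheaths, so much more geoeffective than northward ones.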
Complementary to this are the episodic high magnetic field strengths. The sawtooth appearance of the temporal profile of the bulk speed indicates a series of radially expanding transients. \begin{figure*}[tb] \centering \includegraphics[width=11cm]{wind_dst_all_new.png} \vspace{-0.3cm} \caption{Solar wind measurements and geomagnetic indices for a 31-day period in March-April 2001. The panels show, from top to bottom, the proton density, bulk speed, temperature (expected temperature in red), the eleven ICMEs identified in this period, the $B_z$ component of the magnetic field (in GSM coordinates), the total field, the proton $\beta$, the storm-time Dst index, and the planetary Kp index.} \label{fig:March} \end{figure*} With so many CMEs, this interval represents a particularly active period at the Sun and in the inner heliosphere (see \opencite{Wang:2003}, \opencite{Berdichevsky:2003}). Correspondingly, the geomagnetic Kp index and the storm-time Dst index indicate that it is also a very disturbed period for the magnetosphere. The event on 31 March\,--\,1 April 2001 stands out. Here, the proton density, the amplitude of the $B_z$ component of the magnetic field, and the magnetic field strength all reach their highest values of this whole interval. The Dst index (uncorrected for magnetopause currents, see below) reaches a peak value of $-350$~nT and has a two-pronged profile, and the Kp index saturates (Kp = 9). The strong geomagnetic effects of partial or total ejecta mergers may be considered to result from a combination of two effects: on top of the geoeffectiveness of the individual ejecta there is that introduced by the interaction process. Data and simulations have elaborated on the aspects of CME-CME interactions which enhance their geoeffectiveness.
\inlinecite{Berdichevsky:2003} and \inlinecite{Farrugia:2004} noted the following features: (i) transfer of momentum from the trailing shock and its post-shock flow to the leading CME, (ii) acceleration (deceleration) of the leading (trailing) CME, (iii) strengthening of the shock after its merger with the trailing shock, and (iv) heating and compression of the leading ejecta. In simulations, \inlinecite{Lugaz:2005b} emphasized the importance of the trailing shock as it passes through the preceding ejecta to eventually merge with the leading shock, strengthening it. Below, we first give an example of the role of the enhanced density in intensifying the geomagnetic disturbances and then discuss another one where the role of the trailing shock is central. The geomagnetic storm during the event on 31 March\,--\,1 April 2001 (Figure~10a) has been studied by many researchers. Most relevant here is that \inlinecite{Farrugia:2006} argued that the extreme severity of the storm was ultimately due to a very dense plasma sheet combined with strong southward magnetic fields. They showed that the plasma sheet densities were, in turn, correlated with the high densities in the ejecta merger. Since compression of plasma is a feature of CME-CME interactions, they concluded that the interaction was ultimately responsible for causing this superstorm, with Dst reaching values below $-250$~nT for several hours. \begin{figure*}[tb] \centering \includegraphics[width=9cm]{density_new.png} \vspace{-0.3cm} \caption{Comparison of the solar wind and plasma sheet densities during the 31 March\,--\,1 April 2001 CME-CME interaction event. The top panel shows the solar wind density (thick black line), the total ion densities above $\sim$100 eV acquired by three {\it Los Alamos National Laboratory} (LANL) spacecraft in geostationary orbit on the nightside in colored symbols, and the compound plasma sheet density (divided by four for clarity). The middle panel shows the total magnetic field. The bottom panel shows a scatter plot of $N_{ps}$ versus $N_{sw}$ and a best-fit line. Adapted from Farrugia {\it et al.} (2006) and published by permission of John Wiley and Sons.} \label{fig:March_zoom} \end{figure*} We illustrate this in Figure~12. The middle panel gives the total magnetic field for reference and shows the two interacting ejecta in the process of merging. In the top panel is plotted the density of the interacting ejecta (thick black trace). Below it, in color, is the density of the plasma sheet obtained from the total ion densities above $\sim$100 eV acquired by three {\it Los Alamos National Laboratory} (LANL) spacecraft in geostationary orbit on the nightside (18-06 MLT). Below these, we show this density again as a single-valued function (divided by four for clarity). To produce this, all data were retained where there was no overlap between measurements from the different spacecraft; when there was an overlap, we kept only the data acquired closest to local midnight. One can see that the temporal trends in the densities of the plasma sheet ($N_{ps}$) and ejecta ($N_{sw}$) are very similar. The bottom panel shows a scatter plot of $N_{ps}$ versus $N_{sw}$, for which the regression line gives $N_{ps} \sim N_{sw}^{1/2}$. The strength of the ring current depends mainly on two factors: (i) the electric field in which the particles drift as they travel inwards from the plasma sheet, and (ii) the density of the plasma sheet itself, since it constitutes the seed population \cite{Jordanova:2006}. In this case, we have a large enhancement of the latter. Using the global kinetic drift-loss model developed by \inlinecite{Jordanova:1996}, it was shown that the two-dip Dst profile could be reproduced even when the code was run with the plasma sheet density kept constant at its original value. However, the intensity of the storm was thereby grossly underestimated.
It could be adequately reproduced only when the plasma sheet density was updated in accordance with the data (see Figure 4 in \opencite{Farrugia:2006}). The end result was a storm where Dst stayed below $-200$~nT for 21 hours and below $-250$~nT for 7 hours. In a broader context, \inlinecite{Borovsky:1998} carried out a statistical study relating properties of the plasma sheet to those of the solar wind. One of these was the density, for which they found a strong correlation. The relationship they obtained, with $N_{ps}$ scaling as the square root of $N_{sw}$, is similar to that described here. However, compared to their survey, the dynamic range of the density in the case just described was a factor of five higher. An important implication is that CME-CME interaction can be a viable source of two-dip geomagnetic storms. \inlinecite{Kamide:1998}, studying over 1200 geomagnetic storms, noted that a significant proportion of these were double-dip storms: the Dst index reaches a minimum, recovers for a few hours, and then reaches a second minimum. A traditional view on how this profile comes about is the sheath-ejecta mechanism, that is, the negative $B_z$ fields in the sheath region, which have been compressed by the shock ahead of the ejecta, are responsible for the first Dst drop, and the negative $B_z$ phase in the ejecta is then responsible for the second one (\opencite{Tsurutani:1988}, \opencite{Gonzalez:2002}, and the review by \opencite{Tsurutani:1997}). Here we have given an example of how such two-dip Dst profiles can arise from CME-CME interactions. In a wide survey of major storms (Dst $< -100$~nT) over Solar Cycle 23 (1996-2006), \inlinecite{Zhang:2008} found that a common source of double-dip storms consists of closely spaced or interacting CMEs. Of course, there are other levels of complexity, such as multiple-dip storms (see, for example, \opencite{Xiong:2006}, \opencite{Zhang:2008}, and \opencite{Richardson:2008}).
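The $N_{ps} \sim N_{sw}^{1/2}$ relationships discussed above come from fitting a straight line to the densities in log-log space. A minimal sketch of such a fit is given below; the data are synthetic, generated to follow a square-root law with scatter, and are in no way the LANL measurements themselves:

```python
import math
import random

def loglog_slope(nsw, nps):
    """Least-squares slope of log(N_ps) versus log(N_sw).

    A slope near 0.5 corresponds to N_ps ~ N_sw^(1/2)."""
    xs = [math.log(x) for x in nsw]
    ys = [math.log(y) for y in nps]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# Synthetic data: densities in cm^-3 following N_ps ~ N_sw^(1/2)
# with mild log-normal scatter.
random.seed(0)
nsw = [random.uniform(1.0, 50.0) for _ in range(200)]
nps = [math.sqrt(x) * math.exp(random.gauss(0.0, 0.1)) for x in nsw]
print(round(loglog_slope(nsw, nps), 2))  # close to 0.5
```

Fitting in log space rather than linear space is what makes the recovered exponent insensitive to the large dynamic range of the solar wind density.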
Discussed so far are primarily cases corresponding to multiple-MC events or to two (or more) CMEs in close succession that do not interact. In \inlinecite{Lugaz:2014}, the authors discussed how a long-duration, seemingly isolated event resulting from the merging of two CMEs may drive the magnetosphere for a long period, resulting in sawtooth events associated with the injection of energetic particles observed at geosynchronous orbit. In the case of the 19\,--\,22 March 2001 event (see Figure~10b), in addition to sawtooth events, it resulted in an intense geomagnetic storm for which Dst reached a peak of $-149$~nT and remained below the moderate storm level ($-50$~nT) for 55 hours. We now consider a case where the trailing shock passing through the front CME plays a leading role in the resulting geomagnetic disturbances. There are two ways in which the interaction increases the geo-effectiveness. First, the leading ejecta get compressed by the overtaking shock, the overtaking magnetic ejecta, or a combination of both \cite{Burlaga:1991,Vandas:1997,Farrugia:2004,Lugaz:2005b,Xiong:2006,Liu:2012}. Assuming conservation of magnetic flux, a compressed southward $B_z$ interval is more geoeffective than an uncompressed one (see the quantitative analysis in \opencite{Wang:2003d} and \opencite{Xiong:2007}). Secondly, the overtaking shock itself and the sheath of compressed ejecta may be more efficient in driving geoeffects. Recently, \inlinecite{Lugaz:2015a} and \inlinecite{Lugaz:2016} published survey studies of the effect of a shock propagating inside a preceding ejecta on the geo-effectiveness of the shock/sheath region. They found that about half the shocks whose sheaths result in at least a moderate geomagnetic storm are shocks propagating within a previous CME or a series of two shocks. Shocks inside CMEs as a potential source of intense geomagnetic storms were also discussed by \inlinecite{Wang:2003a} and further investigated in \inlinecite{Wang:2003c}.
Specific examples were also discussed by \inlinecite{Lugaz:2015b}, who argued that the combination of high dynamic pressure and compressed magnetic field just behind the shock may be particularly efficient in pushing Earth's magnetopause earthwards and, therefore, in driving energetic electron losses in Earth's radiation belt. An example of such an event, a shock inside a magnetic ejecta that occurred on 19 February 2014 and resulted in an intense geomagnetic storm, is given in Figure~10d. Complex ejecta, due to their complex and turbulent magnetic field and the short duration of any southward periods, often result in strong but not extreme driving of Earth's magnetosphere. For example, the 26\,--\,28 November 2000 event (Figure~10c), caused by a series of homologous CMEs as discussed in Section~\ref{sec:homologous}, resulted in a peak Dst of $-80$~nT between 26 and 28 November, even though the dawn-to-dusk electric field reached values above 10 mV\,m$^{-1}$, typically associated with intense geomagnetic storms. The large and rapid changes in the orientation of the magnetic field certainly played a role in the reduced geo-effectiveness \cite{Burlaga:2002,Wang:2002,Lugaz:2007}. Under the strong forcing occurring when interacting ejecta pass Earth, the response of the magnetosphere is expected to become nonlinear and even to saturate, as is the case, for example, for the polar cap potential \cite{Hill:1976,Siscoe:2002} and the erosion of the dayside magnetosphere \cite{Muehlbachler:2005}. Furthermore, the correction to the raw Dst index from magnetopause currents (Chapman-Ferraro currents, \opencite{Burton:1975}) becomes inappropriate, since the Region 1 currents have taken over the role of the Chapman-Ferraro currents in standing off the solar wind (\opencite{Siscoe:2005}, and references therein). Following \inlinecite{Vasylinas:2004} and \inlinecite{Siscoe:2005}, one can speak of the magnetosphere as having transitioned from being solar wind-driven to being ionosphere-driven.
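The 10~mV\,m$^{-1}$ dawn-to-dusk electric field level quoted above is simply the product of the solar wind speed and the southward component of the magnetic field; a one-function sketch of the unit conversion (the input values are illustrative, not measurements from the November 2000 event):

```python
def dawn_dusk_efield(v_kms, bs_nT):
    """Dawn-to-dusk interplanetary electric field E = V * B_south in mV/m,
    for a speed in km/s and a southward field component in nT."""
    # (km/s)*(nT) -> (m/s)*(T) is a factor of 1e3 * 1e-9 = 1e-6,
    # and V/m -> mV/m multiplies by 1e3, so the net factor is 1e-3.
    return v_kms * bs_nT / 1000.0

# 600 km/s with a 20 nT southward field gives 12 mV/m, above the
# ~10 mV/m level typically associated with intense geomagnetic storms.
print(dawn_dusk_efield(600.0, 20.0))  # 12.0
```

This makes explicit why a shock that simultaneously raises the speed and compresses the southward field multiplies the driving electric field rather than merely adding to it.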
\section{Discussion and Conclusions}\label{sec:conclusion} We have reviewed the main physical phenomena and concepts associated with the initiation and interaction of successive CMEs, including their effects on particle acceleration and their potential for strong driving of Earth's magnetosphere. Initiation mechanisms need to be studied in the photosphere and chromosphere by combining observations and simulations. The close relation between CMEs and filaments calls for more detailed studies of filament evolution and of partial eruptions that result in multiple CMEs (twin-CMEs, sympathetic CMEs, and homologous CMEs). Homologous CMEs and sympathetic CMEs are two main sources of interacting CMEs, though unrelated successive CMEs may interact too. Based on the current knowledge, the possible mechanisms by which one CME triggers another include (1) the destabilization of magnetic structures by the removal of the overlying field, and (2) the continuous emergence of flux and helicity from the lower atmosphere. However, statistical studies have suggested that homologous or sympathetic CMEs correspond to only a small fraction of all CMEs, meaning that not all CMEs can trigger subsequent eruptions. This raises the serious question of why some CMEs trigger another, whereas others do not. Answering this question will directly determine our capability to forecast homologous and sympathetic CMEs. The coronal magnetic field is the key information in studying the initiation of CMEs. However, it is currently mostly known from extrapolation methods based on limited observations. {\it Solar Orbiter}, to be launched in 2018, will provide the vector magnetic field at the photosphere from a point of view other than that of the Earth. Combined with the magnetic field observed by SDO, this may increase the accuracy of coronal magnetic field extrapolations. With respect to particle acceleration, it is clear that SEPs are strongly affected by preconditioning in terms of turbulence and seed population.
Radio enhancements themselves are likely to be directly related to the CME-CME interaction process. SEP enhancements, on the other hand, are more likely to be related to a succession of (not necessarily interacting) CMEs; enhanced levels of turbulence and suprathermals following a first CME are likely physical explanations. Studies have shown that type II radio burst intensification is due to the front of a faster, second CME colliding with the rear of a slower, preceding CME; however, shock-streamer interaction is also a likely cause of such enhancements. In fact, enhanced type II radio signatures may occur for different types of interaction scenarios. Wide-angle heliospheric imaging of CMEs, now routinely performed by STEREO, has opened a new era in the study of CME-CME interaction, in which 3D kinematics can be combined with Sun-to-Earth imaging of successions of CMEs, 3D numerical simulations, and radio observations. Many of the recent studies have focused on the relation between the interaction and the resulting structure measured at 1 AU, as well as on the exchange of momentum between the CMEs. In many instances, after their interaction, the two CMEs are found to propagate with a uniform speed profile, similar to that of an isolated CME, but how this final speed relates to the speeds of the two CMEs before the interaction is still an area of active research. The ``compression'' of the leading CME (associated with a reduced rate of expansion) and the potential deflection of the CMEs are two additional effects of CME-CME interaction that can have a strong influence on space weather forecasting. The exchange of momentum between CMEs is likely to involve a series of phases, including compression of the leading CME by the trailing shock wave, which increases the CME magnetic energy, followed by an over-expansion of the leading CME, and potential reconnection between the two magnetic ejecta.
Most studies still focus primarily on individual aspects of CME-CME interaction (initiation, SEP acceleration, heliospheric interaction, {\it in situ} measurements, geo-effects) without attempting to obtain a global view. One issue is that heliospheric imaging provides information about the different density structures and their kinematics but not about the magnetic field. More global studies, for example, to determine whether sympathetic or homologous (or unrelated) eruptions are more likely to result in CME-CME interaction, and to which structures at 1~AU they correspond, are now possible with modern remote imaging but have not been undertaken yet. Studies of SEP acceleration and radio bursts associated with successive CMEs must be combined with stereoscopic observations to determine the de-projected CME directions, distances, and kinematics, as has been done for only a few events so far \cite{Temmer:2014,Ding:2014}. Similar studies may help to investigate which time delay between successive CMEs results in the most intense geo-effects, and what the most extreme scenario can be. A recent investigation by the Cambridge Centre for Risk Studies of the economic consequences of an extreme CME took as its base scenario the impact in close succession of two non-interacting CMEs, similar to the perfect storm scenario of \inlinecite{Liu:2014}. It is as yet unclear whether such a scenario would result in stronger magnetospheric, ionospheric, and ground currents than two CMEs in the process of interacting (a shock inside a CME) or having interacted (a multiple-MC event). Although detailed investigations of CME-CME interaction became more frequent following the report by \inlinecite{Gopalswamy:2001} of the total disappearance of one CME following its overtaking by a faster CME (referred to as CME ``cannibalism''), total magnetic reconnection might not take place regularly, as it might be too slow to effectively merge entire flux ropes.
For this reason, the most common results of CME-CME interaction at 1~AU are multiple-MC events, in which the individual ejecta can be distinguished; complex ejecta, in which some of the individual characteristics are lost; and shocks propagating inside a previous CME. Each of these interacts with Earth's magnetosphere in a different way. Often, the long-duration driving and compressed magnetic fields result in intense geo-effects; in addition, shocks within CMEs have a higher probability of having a geo-effective sheath than shocks propagating into typical solar wind conditions. Currently available {\it in situ} data show us only the consequences of CME-CME interaction but not the interaction process itself. {\it In situ} data taken during the interaction process are needed, as was possible with Helios \cite{Farrugia:2004}. {\it Solar Orbiter} and {\it Solar Probe+} will provide opportunities to measure the interaction process as it occurs, and to determine the fate of shocks propagating inside ejecta and the irreversibility (or not) of the interaction. Studies made possible by these spacecraft when they are in alignment (conjunction) will help us to answer some of these outstanding questions. \begin{acks} We acknowledge the following grants: NASA grant NNX15AB87G and NSF grants AGS1435785, AGS1433213 and AGS1460179 (N.~L.), NSFC grants 41131065 and 41574165 (Y. W.), Austrian Science Fund FWF: V195-N16 (M.~T.), and NASA grant NNX16AO04G (C.~J.~F.). N.~L. would like to acknowledge W.~B. Manchester, I.~I. Roussev, Y.~D. Liu, G. Li, J.~A. Davies, T.~A. Howard, and N.~A. Schwadron for fruitful discussions about CME-CME interaction over the years, as well as the VarSITI program. Some of the Figures in this article have been published by permission of the AAS and John Wiley and Sons as indicated in the text. Disclosure of Potential Conflicts of Interest: The authors declare that they have no conflicts of interest. \end{acks} \bibliographystyle{spr-mp-sola-cnd}
\section{Introduction} Bulk dichalcogenides $TX_2$ with transition metals $T$ and $X=$~S, Se, or Te belong to the class of layered semimetals / semiconductors\cite{wilson69a} which attracted considerable attention for decades. They exhibit a rich variety of physical properties, such as superconductivity in, e.g., NbSe$_2$ or TaS$_2$,\cite{revolinsky65a,vanmaaren67a} coexisting but competing charge-density wave order,\cite{wilson74a,morris75a,wilson75a,castroneto01a,valla04a} and thermoelectrical device functions.\cite{brixner62a} In many cases, these materials possess Fermi surfaces consisting of many valleys, which are somehow related to, or responsible for, these phenomena. The discovery of graphene has further increased the interest in layered materials with valley degrees of freedom. It was found that the physical properties of bulk layered materials can be significantly altered when thinning them down to atomically flat layers.\cite{novosolev05a,novosolev05b,yzhang05a} Naturally, different layered systems came into the focus of research. Among them are the monolayered dichalcogenides which are often regarded as two-dimensional semiconducting or semimetallic analogs to graphene.\cite{qzhang14a,behnia12a,gwang14a} The electronic band structure plays a crucial role also in the physics of these thin materials with impact on spintronics,\cite{ochoa13a} optoelectronics devices,\cite{ross14a,baugher14a,pospischil14a} or the emerging field of valleytronics, e.g., in MoS$_2$, MoSe$_2$, WS$_2$, and WSe$_2$. 
\cite{dxiao07a,behnia12a,qhwang12a,dxiao12a,rycerz07a,zzhu12a} The strong spin-orbit interaction in some transition-metal dichalcogenides and the lack of inversion symmetry in their monolayer variants lead to valley-spin coupling, adding a new feature to their zoo of exotic properties.\cite{dxiao12a,zzhu12a,ochoa13a,hyuan13a,hzlu13a,wyshan13a} Another example of their intriguing nature is the electric-field-induced superconductivity with an optimum $T_{\rm c}$ exceeding 10~K in an electric double-layer transistor structure made from MoS$_2$.\cite{taniguchi12a,jye12a,yzhang12a,radisavijevic13a} The band structures of the aforementioned hexagonal multilayer (bulk) and monolayer WS$_2$, WSe$_2$, and MoS$_2$ differ as well. While the bulk systems exhibit indirect band gaps, the monolayered counterparts have direct band gaps at the inequivalent $K$ and $K^\prime$ points at the corners of the hexagonal Brillouin zone.\cite{kfmak10a,splendiani10a,wzhao13a,ochoa13a,gwang14a} Controlling the band structure is therefore one of the most important issues in the study of these $TX_2$ compounds. The presence of many valleys in the band structure is also of general interest for thermoelectric applications, since it is well known to enhance the thermoelectric performance.\cite{rowe95} Thermoelectric materials are also a focus of current research because they offer the possibility of converting waste heat into usable electrical power.\cite{rowe78a,mahan97a,snyder08a,heremans12a,pei12a} A measure of the thermoelectric efficiency of such materials is the figure of merit (FOM) $ZT = S^2 T/(\rho\kappa)$. It consists of the thermopower or Seebeck coefficient $S$, the longitudinal resistivity $\rho$, and the total thermal conductivity $\kappa= \ensuremath{\kappa_{\rm el}}+\ensuremath{\kappa_{\rm ph}} $, where \ensuremath{\kappa_{\rm el}}\ and \ensuremath{\kappa_{\rm ph}}\ are the contributions of the mobile charge carriers and the lattice, respectively.
The term $S^2/\rho$ is often referred to as the power factor. To maximize $ZT$, a large thermopower and a small resistivity along with a small thermal conductivity are required. Conventional metals are usually not good candidates for high-efficiency thermoelectrics since the Seebeck coefficient is generally small due to the Fermi degeneracy. Instead, doped semiconductors are promising materials, in which the charge-carrier concentration, and hence the electrical conductivity and the Seebeck coefficient, can be controlled by doping. However, most such systems suffer from a high lattice thermal conductivity. The guiding principle can be abbreviated as ``phonon glass + electron crystal'',\cite{snyder08a,rowe95} i.e., a system which ideally consists of independent charge- and heat-transport channels to obtain low $\rho$ and $\kappa$ values at the same time. In this context, bulk dichalcogenides attracted much interest when it was found that several of them, among them WSe$_2$, exhibit a small room-temperature thermal conductivity of only $\sim 2$~W/K~m.\cite{brixner62a} Back in the 1960s and 1970s, this material was intensively studied by doping into both the W (cation) and the Se (anion) lattice sites\cite{brixner63a,brixner63b,hicks64a,revolinsky64a,champion65a,mentzen76a} as well as by intercalation of metal elements to bridge the chalcogen layers.\cite{whittingham75a} WSe$_2$ is a $p$-type semiconductor and crystallizes in the hexagonal $P6_3/mmc$ structure (space group 194), usually abbreviated as 2H-WSe$_2$.\cite{glemser48a,brixner62a} It consists of trilayer building blocks Se\,--\,W\,--\,Se with strong covalent bonds. These blocks are separated by only weakly bonded van der Waals gaps. Each W ion is coordinated by six Se ions in a trigonal prismatic configuration. The unit cell consists of two formula units along the $c$ axis. Several band-structure calculations can be found in the literature, the most recent one in Ref.~\onlinecite{hyuan13a}.
The valence-band maximum lies at the $\Gamma$ point (zone center) almost degenerate with the only slightly lower-lying band at the $K$ (and $K^\prime$) point. While the band dispersion at the $\Gamma$ point has an almost isotropic nature, at the $K$ point it is anisotropic. By doping pentavalent Ta$^{5+}$ on the W$^{4+}$ site, holes are introduced into the valence band, and the large electrical resistivity of pure WSe$_2$ is successfully suppressed: W$_{1-x}$Ta$_x$Se$_{2}$\ becomes metallic at small Ta concentrations $x\approx 0.03$ with room-temperature resistivities of the order $\sim$ m$\Omega$cm. Up to $x\approx 0.35$, the structure remains hexagonal $P6_3/mmc$ with $p$-type conduction.\cite{brixner62a, brixner63a,hicks64a} On the other hand, there are fewer works published on WSe$_{2-y}$Te$_y$, i.e., the substitution of isovalent Te$^{2-}$ for Se$^{2-}$. The end member WTe$_2$, crystallizing in an orthorhombic structure ($Pmmm$), is a semimetal with a negative Hall coefficient. It seems that at least up to $y=0.5$ the hexagonal WSe$_2$ structure is retained.\cite{champion65a,wilson69a} As for the preparation of these dichalcogenides, it should be noted that they do not melt congruently and dissociate at elevated temperatures. This was discussed earlier as a problem in achieving samples with high packing densities.\cite{brixner63a} Also, the physical properties were not very reproducible.\cite{brixner63a} Here we present a comprehensive study on W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ with $0\leq x \leq 0.06$ by means of band-structure calculations, transport, and thermodynamical measurements to elucidate the effect of Te doping on the electronic structure. Interestingly, we found evidence that the valence-band maximum shifts from the $\Gamma$ point in WSe$_2$ toward the $K$ point when substituting Se by Te while keeping the crystal structure, which is reminiscent of the aforementioned situation in the monolayer dichalcogenides. 
The introduction of Te was also found to further suppress the thermal conductivity around room temperature. The paper is organized as follows: In the next section (Sec.~\ref{prepmeth}), the sample preparation and experimental and theoretical calculation methods are described. After overviewing the electronic band structure (Sec.~\ref{bandstr}), the basic transport properties and structural data below room temperature for W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ are introduced, together with those for W$_{1-x}$Ta$_x$Se$_{2}$\ for comparison (Sec.~\ref{sampchar}). Next (Sec.~\ref{specheat}) we focus on the change in the electronic states of WSe$_2$ due to the Se replacement with Te by discussing the results of specific-heat data in the light of the band-structure calculations. In the latter half of this paper (Sec.~\ref{thermprop}), we present the thermoelectric properties of W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$, i.e., thermal-transport data up to $\sim 850$~K from which the FOM is calculated. Section \ref{summ} is devoted to the summary of the present work. \section{Sample Preparation and Methods} \label{prepmeth} Polycrystals\cite{SingleCrystcomment} of W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ for $x=0$, 0.02, 0.025, 0.03, 0.035, 0.0375, 0.04, and 0.06 were prepared in three steps. First, stoichiometric amounts of the elements W (purity 99.9\%), Ta (99.9\%), Se (99.999\%), and Te (99.999\%) were thoroughly ground, mixed, sealed into evacuated quartz tubes, and kept for 48 h at $700^{\circ}$C. The resulting reaction product was free-flowing blackish powder. Second, the powder was reground, pelletized, again sealed into evacuated quartz tubes, and kept for 72~h to 96~h at $1000^{\circ}$C. Third, the reaction product was ground again. Approximately 600~mg of the powder was used to synthesize a final batch by employing a high-pressure ($p$) / high-temperature ($T$) technique using a cubic anvil cell. 
The latter approach was chosen to overcome the aforementioned problem of achieving high packing densities.\cite{brixner63a} In practice, the powder was first pressed at 2 GPa at room temperature. Next, the temperature was increased to about $1100^{\circ}$C. This temperature was kept for about 10 minutes, then the material was thermally quenched. The pressure was released after the temperature had dropped back to room temperature. For each of the three steps, the reaction product was checked by x-ray diffraction (XRD). The targeted compounds had already formed after the first step although the respective XRD peaks were very broad. This probably indicates a disordered stacking of the characteristic (Se,Te)\,--\,W\,--\,(Se,Te) trilayers. The second and third reaction steps led to a successive sharpening of the XRD patterns. For the various measurements, rectangularly- and cylindrically-shaped samples were cut from these batches.\cite{samplecomment} We note that we did not observe any cleavage-like behavior when cutting the samples. This is probably a result of the application of hydrostatic pressure in our cubic anvil press, leading to more isotropic samples without preferred orientation in spite of the layered structure of WSe$_2$. As a control experiment, we also prepared pristine WSe$_2$ and W$_{1-x}$Ta$_x$Se$_{2}$\ ($x = 0.02$, 0.03, 0.04, 0.05, 0.06) by the same high-$p$ / high-$T$ synthesis but without any prereaction, yielding sharp XRD line patterns. Comparably sharp line patterns for W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ were only achieved after the three-step growth process. We note that some batches contained small amounts of unreacted Se / Te after the high-$p$ / high-$T$ synthesis step. From TG-DTA (Rigaku ThermoPlus Evo TG 8120) analyses, we found that the samples lost some weight when heated.
At higher temperatures the system starts to dissociate in agreement with an old report.\cite{brixner63a} Therefore, we restricted our high-temperature experiments to below approximately 850~K. Moreover, most of the examined specimens were annealed for 24~h at $150^{\circ}$C before the high-$T$ measurements. Below 300~K, the longitudinal resistivity $\rho_{xx}$ and the Hall resistivity $\rho_{yx}$ were measured by a standard five-probe technique using a commercial system (Quantum Design, PPMS). Low-$T$ longitudinal Seebeck $S_{xx}$ and thermal conductivity $\kappa_{xx}$ data\cite{Skappacomment} were taken with two separate home-built setups, each mounted on a PPMS cryostat. Above room temperature up to approximately 850~K, $\rho_{xx}$ and $S$ were measured simultaneously in a ZEM-3 system (ULVAC Technologies), where the sample is held in He atmosphere by two Pt or Ni electrode stamps acting as current leads. Two thermocouples, acting as voltage pads, were pressed onto one sample surface and used for the Seebeck measurement. High-$T$ resistivity and thermopower data were taken upon cooling. The respective low-$T$ measurements were carried out using the same samples after the high-$T$ experiment. The thermal conductivity above room temperature was estimated using the formula $\kappa = D \,\ensuremath{c_{p}}\,d_{_{\rm 300K}}$ with the thermal diffusivity $D$, the room-temperature density of the respective samples $d_{_{\rm 300K}}$, and the specific heat \ensuremath{c_{p}}. The thermal diffusivity was measured by employing the laser-flash method in a commercial Netzsch LFA-457 apparatus. The density of the samples was estimated from their mass and dimensions. Specific-heat data below 300~K were measured by a relaxation-time method using the PPMS. The addenda heat capacity was measured before mounting the sample and subsequently subtracted from the total signal.
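Once consistent units are chosen, the conversion $\kappa = D \,\ensuremath{c_{p}}\,d_{_{\rm 300K}}$ used for the high-$T$ data reduces to a simple product. The following minimal Python sketch illustrates the unit handling; the numerical inputs are assumed, illustrative values, not data from this work:

```python
# Sketch of the conversion kappa = D * c_p * d used for the high-T
# thermal conductivity. Input values are assumed for illustration only.

def thermal_conductivity(D_mm2_per_s, cp_J_per_gK, density_g_per_cm3):
    """Return kappa in W/(K m) from laser-flash-style inputs."""
    D = D_mm2_per_s * 1e-6          # thermal diffusivity, m^2/s
    cp = cp_J_per_gK * 1e3          # specific heat, J/(kg K)
    d = density_g_per_cm3 * 1e3     # density, kg/m^3
    return D * cp * d

# e.g. D = 1.2 mm^2/s, c_p = 0.2 J/(g K), d = 8.5 g/cm^3 (assumed)
print(f"kappa = {thermal_conductivity(1.2, 0.2, 8.5):.2f} W/(K m)")  # 2.04
```

With these assumed inputs, the result lands in the $\sim 2$~W/K~m range quoted for this material class in the introduction.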
Since the specific heat of W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ at room temperature already exceeds 95\% of the classical Dulong-Petit limit $c_{_{\rm DP}}$, \ensuremath{c_{p}}\ at higher temperatures was calculated as $\ensuremath{c_{p}} = \gamma T + c_{_{\rm DP}}$ with the Sommerfeld parameter $\gamma$, which was determined from \ensuremath{c_{p}}\ vs $T$ plots at low temperatures.\cite{cpcvcomment} The first-principles band-structure calculations were performed with the WIEN2k code employing the full-potential linearized augmented plane-wave method.\cite{blaha01} We used the Perdew-Burke-Ernzerhof exchange-correlation functional.\cite{perdew96a} In the calculations for WSe$_{2}$, the measured lattice parameters ($a=3.289$~\AA, $c=12.988$~\AA) and the positional parameter $z=0.129$ (W\,--\,Se layer distance) reported in Ref.~\onlinecite{coehoorn87a} were employed. For hypothetical 2H-WTe$_{2}$, we optimized the parameters by using the scalar-relativistic approximation\cite{koelling77a} ($a=3.555$~\AA, $c=14.447$~\AA, and $z=0.126$). Spin-orbit coupling was also taken into account as a relativistic effect in the band-structure calculations.\cite{kunes01a} The structure optimization (band-structure calculation) was carried out with the cutoff $RK_{\rm max}=8.5$ ($10.0$) and $6 \times 6 \times 4$ ($12 \times 12 \times 8$)~$k$ points. The density of states was calculated with $36 \times 36 \times 8$~$k$ points using the tetrahedron method.\cite{bloechl94a} \section{Results and Discussion} \subsection{Band Structure} \label{bandstr} \begin{figure}[t] \centering \includegraphics[width=8.5cm,clip]{Fig1.pdf} \caption{(Color online) Band structure calculated for (a), (c) 2H-WSe$_2$ and (b), (d) hypothetical 2H-WTe$_2$. The values of the lattice constants used for the calculations are indicated in panels (a) and (b). For the details, see text. 
Panels (c) and (d) provide expanded views of the band dispersions along the $k_z$ directions from the $\Gamma$ and $K$ points of the hexagonal Brillouin zone. In each panel, the Fermi level is indicated by a horizontal line ($E=0$~eV).} \label{fig1} \end{figure} Figure~\ref{fig1} summarizes the results of band-structure calculations. In Fig.~\ref{fig1} (a), the band structure based on experimentally determined lattice constants for 2H-WSe$_2$ is plotted. The valence-band maximum is located at the $\Gamma$ point, which has an almost isotropic dispersion.\cite{BScomment} The second, almost degenerate valence-band maximum is found at the $K$ and $K^\prime$ points and lies approximately 100~meV lower in energy, yielding a unique band structure as discussed in Ref.~\onlinecite{hyuan13a}. Panel (c) gives an expanded view of the band dispersion along the $k_z$ direction at the $\Gamma$ and $K$ points, i.e., $\Gamma$-$A$ and $K$-$H$, respectively. Around the $K$ and $K^\prime$ points, the in-plane and out-of-plane band widths differ much more strongly than around the $\Gamma$ point. Hence, the band dispersion around $K$ or $K^\prime$ has an anisotropic, two-dimensional nature. To investigate how the WSe$_2$ band structure changes upon Te doping, we calculated the energy dispersion for hypothetical 2H-WTe$_2$ based on optimized lattice and positional parameters. It is important to mention that hexagonal 2H-WTe$_2$ \textit{does not} exist in nature. In reality, WTe$_2$ crystallizes in the orthorhombic $Pmmm$ structure.\cite{brown66b,ali14b} The result is shown in Fig.~\ref{fig1} (b). Expanded views along the $k_z$ directions from the $\Gamma$ and $K$ points are plotted in panel (d). Interestingly, the substitution of the heavier Te for Se shifts the valence-band maximum from $\Gamma$ to $K$ (and $K^\prime$), but the energy difference between them remains small. Most importantly, this shift has implications for the density of states (DOS) at the onset of the hole band.
While the band-edge DOS in 2H-WSe$_2$ is dominated by the almost isotropic band at the $\Gamma$ point, it originates from the more anisotropic bands at the $K$ and $K^\prime$ points in hypothetical 2H-WTe$_2$. As long as the 2H-type hexagonal structure is retained, it is reasonable to expect that the relative position of the band maximum changes gradually and continuously upon increasing the Te concentration in WSe$_{2-y}$Te$_y$. The region dominating the band-edge DOS switches from around the $\Gamma$ point to around the $K$ and $K^\prime$ points when the local band maxima at the $\Gamma$ and $K$ points become degenerate. According to our calculations, this happens at $y \approx 0.74$. \subsection{Sample Characterization} \label{sampchar} Figures~\ref{fig2} (a) and (b) show the Ta-doping dependence of the lattice constants $a$ and $c$, respectively, for W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ ($y=0.4$; filled red symbols) and W$_{1-x}$Ta$_x$Se$_{2}$\ ($y=0$; open blue symbols). Upon introduction of Ta and Te on the two different lattice sites, the lattice constants change systematically with $x$ and $y$, indicating the successful formation of solid solutions. Ta doping increases the $a$-axis length whereas the $c$ axis shrinks. The increase in the absolute values between $y = 0$ and $y = 0.4$ reflects the replacement of 20\% of the Se$^{2-}$ ions with larger Te$^{2-}$ ions. Both the Te and Ta ions readily substitute for their counterparts. \begin{figure}[t] \centering \includegraphics[width=8cm,clip]{Fig2.pdf} \caption{(Color online) (a), (b) Evolution of the lattice constants $a$ and $c$ with $x$ for W$_{1-x}$Ta$_x$Se$_{2}$\ (open blue symbols; $y=0$) and W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ (closed red symbols; $y=0.4$). (c) Actual charge carrier concentration per W site $n$ as estimated from Hall resistivity $\rho_{yx}$ at 300~K is plotted against the nominal Ta-doping concentration $x$ of W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ (filled symbols).
Data for W$_{1-x}$Ta$_x$Se$_{2}$\ (open symbols) are included for comparison; see text. The dotted line indicates $n = x$. The color code of the different data points is the same as in Fig.~\ref{fig4}.} \label{fig2} \end{figure} The charge-carrier concentration as estimated from Hall resistivity $\rho_{yx}$ measurements at 300~K increases from $\sim 8.7 \times 10^{19}$~cm$^{-3}$ for $x=0.02$ to $\sim 9.4 \times 10^{20}$~cm$^{-3}$ for $x = 0.06$. These data are plotted with filled symbols as charge-carrier concentration per W site $n$ in Fig.~\ref{fig2} (c); $n=0.01$ corresponds to a charge-carrier concentration of $\sim 1.65\times10^{20}$~cm$^{-3}$. We also estimated $n$ at 5~K (not shown), finding only small changes from the values at 300~K. At both temperatures, the Hall resistivity $\rho_{yx}$ is linear in the magnetic field, indicating that there are only hole-type charge carriers. We were not able to measure the Hall resistivity at elevated temperatures, but for low-doped W$_{1-x}$Ta$_x$Se$_{2}$, a temperature-independent charge-carrier concentration up to $T\sim 900$~K was reported earlier.\cite{hicks64a} For comparison, respective data for W$_{1-x}$Ta$_x$Se$_{2}$\ are also shown in open symbols. While the actual charge-carrier concentrations $n$ are close to the nominal doping levels $x$ for W$_{1-x}$Ta$_x$Se$_{2}$, this is not the case for $x < 0.04$ in the W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ system. We find a systematic suppression of $n$ compared to the nominal carrier concentration corresponding to the Ta-doping level $x$, i.e., one hole per Ta$^{5+}$. Possibly, some of the introduced charge carriers are annihilated in the low-doped samples due to compensation effects arising from deficiency in the Se\,--\,Te stoichiometry. Such an effect is absent in the ``pure'' diselenide system. This annihilation no longer seems to be active once a sufficient amount of Ta is doped and the system becomes metallic.
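The quoted correspondence between $n$ per W site and the volume carrier density follows directly from the unit-cell volume. A short sketch, using the 2H-WSe$_2$ lattice constants given in Sec.~\ref{prepmeth} as an approximation (the Te-substituted cell is slightly larger):

```python
import math

# Convert "holes per W site" into a volume carrier density using the
# hexagonal 2H-WSe2 cell (a = 3.289 A, c = 12.988 A, two W atoms per
# cell). Using the unsubstituted lattice constants is an approximation.

a = 3.289e-8    # cm
c = 12.988e-8   # cm
V_cell = (math.sqrt(3) / 2) * a ** 2 * c   # hexagonal cell volume, cm^3
n_per_W = 2 / V_cell                       # W-site density, cm^-3

print(f"n = 0.01 per W site -> {0.01 * n_per_W:.2e} cm^-3")  # ~1.64e+20
```

This reproduces the value of $\sim 1.65\times10^{20}$~cm$^{-3}$ quoted above to within a fraction of a percent.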
In Ref.~\onlinecite{hicks64a}, the author reports a smaller than nominal charge-carrier concentration also for W$_{1-x}$Ta$_x$Se$_{2}$\ and speculates about a compensation effect due to impurities at low doping concentrations. \begin{figure}[t] \centering \includegraphics[width=7.5cm,clip]{Fig3.pdf} \caption{(Color online) Temperature dependence of the resistivity $\rho_{xx}$ below room temperature of W$_{1-x}$Ta$_x$Se$_{2}$\ (dashed blue lines; $y=0$) and W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ (solid red lines; $y = 0.4$) for $0\leq x \leq 0.06$. Note the change in the scale of the vertical axis.} \label{fig3} \end{figure} Figure~\ref{fig3} summarizes the resistivity $\rho_{xx}$ data below 300~K for both the Se-\textit{unsubstituted} W$_{1-x}$Ta$_x$Se$_{2}$\ (dashed blue lines) and the Se-\textit{substituted} W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ (solid red lines). In both systems, the partial replacement of W$^{4+}$ by Ta$^{5+}$ introduces holes and systematically lowers the resistivity with $x$. A small Ta concentration of $x \leq 0.02$ already suppresses $\rho_{xx}$ by orders of magnitude compared to $x=0$.\cite{brixner63a,hicks64a} WSe$_2$ and WSe$_{1.6}$Te$_{0.4}$ have room-temperature resistivities of the order of $\sim 10$~$\Omega$cm whereas $\rho$ at 300~K of W$_{0.99}$Ta$_{0.01}$Se$_2$ has dropped to $\sim 0.1$~$\Omega$cm, and W$_{0.98}$Ta$_{0.02}$Se$_2$ already exhibits less than $10^{-2}$~$\Omega$cm. For W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$, the resistivity drops below $10^{-2}$~$\Omega$cm for $x=0.03$. Metallic behavior is observed in both series for $x \gtrsim 0.04$, although the absolute values of $\rho_{xx}$ remain in the m$\Omega$cm range. The resistivity of the doping series W$_{1-x}$Ta$_x$Se$_{2}$\ (see also Refs.~\onlinecite{brixner63a} and \onlinecite{hicks64a}) drops faster than in W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$. This is probably due to additional disorder caused by the partial substitution of Te for Se.
We note that Ref.~\onlinecite{brixner63a} reports smaller $\rho_{xx}$ values, especially for $x=0.01$, whereas those of the metallic compositions are comparable. \subsection{Specific Heat} \label{specheat} \begin{figure}[t] \centering \includegraphics[width=8cm,clip]{Fig4.pdf} \caption{(Color online) Specific-heat \ensuremath{c_{p}}\ data of W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ (filled symbols) and W$_{1-x}$Ta$_x$Se$_{2}$\ (open symbols): (a) \ensuremath{c_{p}}\ vs $T$ as measured. The dashed line depicts the specific heat calculated in the Debye model plus electronic contribution for $x=0.06$; see text. The inset shows an expanded view around 100\,K displayed as \ensuremath{c_{p}/T}\ vs $T$. (b) The same data plotted as \ensuremath{c_{p}/T}\ vs $T^2$ at low $T$. The lines therein are linear fits to the data. An expanded view of the extrapolation to 0~K is shown in (c). Solid lines refer to W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ ($y = 0.4$), dashed lines to W$_{1-x}$Ta$_x$Se$_{2}$\ ($y = 0$). The intercepts at $T = 0$~K give the electronic specific-heat coefficients $\gamma$; see text.} \label{fig4} \end{figure} Next, we discuss the Te-doping effect in terms of specific-heat data taken between approximately 1.9~K and $T\sim 330$~K. There is no specific-heat study on the doped systems reported in the literature. For the mother compound WSe$_2$, we found only one publication reporting \ensuremath{c_{p}}\ data above 60~K.\cite{bolgar90a} Here we show that the analysis of the \ensuremath{c_{p}}\ data provides important information about the change in the electronic states when going from W$_{1-x}$Ta$_x$Se$_{2}$\ to W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$. These data are summarized in Fig.~\ref{fig4} for W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ ($x=0.02$, 0.03, 0.04, and 0.06; filled symbols) along with data for W$_{1-x}$Ta$_x$Se$_{2}$\ ($x = 0$, 0.02, and 0.06; open symbols). In Fig.~\ref{fig4} (a), the data are displayed as \ensuremath{c_{p}}\ vs.\ $T$.
The specific heat of all samples is very similar on this scale. Around room temperature, each sample has released more than $\sim95$\% of the entropy expected in the classical Dulong-Petit limit $c_{_{\rm DP}}$ \textit{plus} its respective electronic contribution $\gamma T$. This is exemplarily indicated as a dashed line in Fig.~\ref{fig4} (a) for $x = 0.06$. However, the expanded plot shown as $c_{\rm p}/T$ vs $T$ in the inset of Fig.~\ref{fig4} (a) reveals that there is a clear impact on \ensuremath{c_{p}}\ when changing $x$ and $y$. The difference between \ensuremath{c_{p}}\ of W$_{1-x}$Ta$_x$Se$_{2}$\ ($y=0$) and W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ ($y=0.4$) is due to the change in the phonons (lattice) caused by the replacement of 20\,\% Se with Te. Moreover, we could also successfully resolve very small but systematic changes in both systems due to the increase of the Ta concentration $x$, and hence a change of the electronic specific heat. This can be seen in Fig.~\ref{fig4} (b), which shows the specific heat at low temperatures displayed as $\ensuremath{c_{p}}/T$ vs $T^2$, where the phononic contribution is small and the electronic sector can be studied. Figure~\ref{fig4} (c) provides an expanded view of the extrapolation of the low-$T$ data to 0~K (linear fits to the respective data). Solid lines refer to W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$, dashed lines to W$_{1-x}$Ta$_x$Se$_{2}$. Conventional Debye fits to the data \ensuremath{c_{p}}\ vs $T$ for $T\leq 5$~K using \begin{equation} \ensuremath{c_{p}} = \ensuremath{c_{\rm el}} + \ensuremath{c_{\rm ph}} = \gamma T + A_3 T^3 \end{equation} yield good descriptions with the electronic specific-heat coefficient $\gamma$ and the coefficient of the phononic part $A_3$. 
From the latter, the Debye temperatures $\Theta_{\rm D}$ of each sample were calculated via $A_3 = (12/5)\,\pi^4 N N_{\rm A} k_{\rm B}/\Theta_{\rm D}^3$ with the number of atoms per formula unit $N=3$, the Avogadro number $N_{\rm A}$, and the Boltzmann constant $k_{\rm B}$. For all measured samples of the W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ system, we found $\Theta_{\rm D}\approx 224$\,K; for the W$_{1-x}$Ta$_x$Se$_{2}$\ system, we obtained $\Theta_{\rm D}\approx 235$~K for $x=0$, 0.02, and 0.06, as well as 211~K ($x=0.01$) and 228~K ($x=0.04$). The difference in the released entropy when going from the pristine diselenide system ($y=0$) to the Se-substituted system ($y=0.4$) up to approximately 200~K, where the different \ensuremath{c_{p}}\ data start to converge, amounts to approximately 3~J/(mol~K). \begin{figure}[t] \centering \includegraphics[width=7.5cm,clip]{Fig5.pdf} \caption{(Color online) (a) Charge-carrier concentration dependence of $\gamma$ of W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ (filled symbols) and W$_{1-x}$Ta$_x$Se$_{2}$\ (open symbols). The dashed (red) and dashed-dotted (blue) lines are fits to the data assuming differences between the two systems in the underlying band structure; see text. The color code of the different data points is the same as in Fig.~\ref{fig4}. Since the data points for both mother compounds without Ta are overlapping on this scale, the one referring to WSe$_2$ is plotted with a smaller symbol. (b) Calculated density of states of W$_{1-x}$Ta$_x$Se$_{2}$\ and hypothetical 2H-W$_{1-x}$Ta$_x$Te$_2$ as a function of charge-carrier concentration per W site $n$. The arrow marks the onset of filling of the isotropic band at the $\Gamma$ point for hypothetical 2H-W$_{1-x}$Ta$_x$Te$_2$; see text.} \label{fig5} \end{figure} The undoped compounds without Ta exhibit a one-order-of-magnitude smaller electronic contribution to the specific heat, as already suggested by the insulating, gapped nature of these specimens.
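The relation between $A_3$ and $\Theta_{\rm D}$ quoted above can be inverted numerically. A minimal sketch, in which the $A_3$ input is an assumed, illustrative number chosen to land near $\Theta_{\rm D}\approx 224$~K, not one of our fit results:

```python
import math

# Invert A3 = (12/5) * pi^4 * N * N_A * k_B / Theta_D^3 for Theta_D.
# The A3 value below is an assumed, illustrative number.

R = 8.314   # gas constant N_A * k_B, J/(mol K)
N = 3       # atoms per formula unit of W(Se,Te)2

def debye_temperature(A3):
    """A3 in J/(mol K^4); returns Theta_D in K."""
    return ((12 / 5) * math.pi ** 4 * N * R / A3) ** (1 / 3)

print(f"Theta_D = {debye_temperature(5.19e-4):.0f} K")  # ~224 K
```

Because $\Theta_{\rm D}$ enters with the third power, even a sizable uncertainty in $A_3$ translates into only a modest uncertainty in the extracted Debye temperature.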
For both systems we observe a systematic change of $\gamma=(\pi^2 k_{\rm B}^2/3){\rm DOS}$ with the actual charge-carrier concentration $n$, as plotted in Fig.~\ref{fig5}~(a). This reflects the change in the DOS upon hole doping and clearly shows that the filling dependence of $\gamma$ is qualitatively different between the two systems. In view of our band-structure calculations shown in Fig.~\ref{fig1}, the substitution of Te for Se leads to a rise of the bands at the $K$ point with respect to the $\Gamma$ point. Since the band dispersion at the $K$ point is highly anisotropic whereas it is almost isotropic at the $\Gamma$ point, charge carriers in W$_{1-x}$Ta$_x$Se$_{2}$\ and W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ may, upon Ta doping, be accommodated into bands of different degrees of anisotropy. Motivated by this, we fitted the $\gamma(n)$ data points to different formulas: (i) for W$_{1-x}$Ta$_x$Se$_{2}$\ to \begin{equation} \gamma(n)= a\,n^{1/3} \end{equation} as expected for isotropic bands, and (ii) for W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ to \begin{equation} \gamma(n)= \gamma_0 + b\,n^{1/3}. \end{equation} Here, $\gamma_{0}$ is an offset which accounts for the constant DOS of an ideally two-dimensional band structure. These approaches yield the dashed-dotted (W$_{1-x}$Ta$_x$Se$_{2}$) and dashed (W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$) lines in Fig.~\ref{fig5}~(a), well accounting for the qualitative difference between both systems. For W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$, we find $\gamma_0 = 0.52$~mJ/mol\,K$^2$. We note that for W$_{1-x}$Ta$_x$Se$_{2}$\ the $\gamma$ value of the undoped compound WSe$_2$ could be included in the fit without problems, whereas for W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$, the data for $x=0$ had to be excluded to obtain good fitting results, which is another indication of the difference in the DOS of both systems.
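The second fitting form above is simply a straight line in the variable $n^{1/3}$ and can be implemented as a linear least-squares fit. A sketch with synthetic $(n,\gamma)$ pairs constructed for illustration (not our measured data), chosen so that the intercept comes out close to the quoted $\gamma_0$:

```python
import numpy as np

# Fit gamma(n) = gamma_0 + b * n^{1/3} as a straight line in n^{1/3}.
# The (n, gamma) pairs are synthetic illustration values only.

n = np.array([0.01, 0.02, 0.04, 0.06])        # holes per W site
gamma = np.array([1.60, 1.88, 2.23, 2.48])    # mJ/(mol K^2), synthetic

b, gamma0 = np.polyfit(np.cbrt(n), gamma, 1)  # slope b, intercept gamma_0
print(f"gamma_0 = {gamma0:.2f} mJ/(mol K^2), b = {b:.1f}")
```

A nonzero intercept $\gamma_0$ in such a fit is the signature of the constant, two-dimensional-like contribution to the DOS discussed in the text.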
Since the bands at the $\Gamma$ and $K$ points change their relative position when going from WSe$_2$ to WTe$_2$, our observation is reasonable. Upon increasing the Te concentration, the bands at both points are getting closer in energy: In W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$, charge carriers also populate the bands of highly anisotropic character at the $K$ point. Hence the constant offset $\gamma_{0}$ is needed to describe $\gamma(n)$ properly. To further strengthen this argument, the calculated DOS as a function of charge-carrier concentration per W site $n$ is shown in Fig.~\ref{fig5}~(b). Again the dashed-dotted line refers to W$_{1-x}$Ta$_x$Se$_{2}$\ and the dashed line to hypothetical 2H-W$_{1-x}$Ta$_x$Te$_2$. The ditelluride system exhibits an initial steplike increase of the DOS as expected for the filling of charge carriers into a highly anisotropic band at the $K$ point. The arrow in Fig.~\ref{fig5}~(b) marks the onset of filling into the isotropic bands at the $\Gamma$ point. By contrast, the diselenide system exhibits a smoother initial filling, indicating the isotropic character of the band structure at the $\Gamma$ point. A direct comparison of Figs.~\ref{fig5}~(a) and (b) is difficult, since W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ contains Se and Te and hence is located in between the two extreme cases shown in panel (b). However, the filling dependence of $\gamma$ for W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ can be well accounted for if the onset of the filling into the isotropic band at the $\Gamma$ point (as indicated by the arrow) nearly coincides with $n=0$. \subsection{Thermoelectric Properties} \label{thermprop} \begin{figure}[t] \centering \includegraphics[width=7.5cm,clip]{Fig6.pdf} \caption{(Color online) Transport data of W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ for $0.02 \leq x \leq 0.06$. (a) Resistivity $\rho_{xx}$, (b) thermopower $S$, and (c) power factor $S^2/\rho_{xx}$ are plotted against temperature up to 850~K. 
In (a), data for WSe$_{1.6}$Te$_{0.4}$ are also included for comparison.} \label{fig6} \end{figure} \begin{figure}[t] \centering \includegraphics[width=7.5cm,clip]{Fig7.pdf} \caption{(Color online) Temperature dependence of (a) thermal conductivity $\kappa$ and (b) dimensionless thermoelectric figure of merit $ZT$ of W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ below 850~K. } \label{fig7} \end{figure} Having confirmed that the band structure is successfully modified by the partial replacement of Se, we next studied the thermoelectric properties of the W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ system. In Fig.~\ref{fig6}, the temperature dependence of transport data up to $\sim 850$~K for W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ is summarized: (a) resistivity $\rho_{xx}$, (b) thermopower $S$, and (c) the corresponding power factor $S^2/\rho_{xx}$. Although different experimental setups were used for the low-$T$ and high-$T$ measurements, the respective resistivity data sets agree with each other around room temperature within the experimental error bars, except for $x = 0.04$. For $x < 0.04$, each sample exhibits a minimum in $\rho_{xx}$ at elevated temperatures. Above about 650~K, all of them feature a metal-like positive slope of $\rho_{xx}$ against $T$. The low-$T$ and high-$T$ Seebeck coefficients shown in panel (b) also agree well around 300~K. The Ta doping leads to a very systematic change of $S$, too. For all $x$, the thermopower increases with $T$ up to the highest measurement temperature of 850~K, although the slope flattens above room temperature for most of them. For degenerate semiconductors, the thermopower is expected to be linear in temperature, which we indeed observe below the flattening. The most metallic sample ($x = 0.06$) retains the linearity up to the highest measurement temperature. At around 850~K, the largest thermopower $S \approx 300$~$\mu$V/K is found for the low-doped samples.
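Transport coefficients of these magnitudes combine into the FOM $ZT = S^2 T/(\rho\kappa)$ in a straightforward way. As a short numerical sketch, with assumed, illustrative inputs of the size encountered in this doping series (not measured values for any particular sample):

```python
# Order-of-magnitude sketch of the thermoelectric figure of merit
# ZT = S^2 * T / (rho * kappa). All inputs are assumed, illustrative
# numbers, not measured data from this work.

def figure_of_merit(S, rho, kappa, T):
    """S in V/K, rho in Ohm cm, kappa in W/(K cm), T in K."""
    power_factor = S ** 2 / rho          # W/(K^2 cm)
    return power_factor * T / kappa      # dimensionless

S = 300e-6       # thermopower, V/K
rho = 1.125e-2   # resistivity, Ohm cm
kappa = 0.02     # thermal conductivity, W/(K cm) = 2 W/(K m)
T = 850.0        # temperature, K

print(f"power factor = {S**2 / rho * 1e6:.1f} uW/(K^2 cm)")  # 8.0
print(f"ZT = {figure_of_merit(S, rho, kappa, T):.2f}")       # 0.34
```

These assumed inputs yield a power factor and $ZT$ of the same order as the experimental maxima discussed next.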
From the data shown in Fig.~\ref{fig6} (a) and (b), the power factors $S^2/\rho_{xx}$ were calculated for each sample and plotted in (c). We observe a maximum in the power factor of $S^2/\rho_{xx} \approx 8$~$\mu$W/K$^2$cm at around 850~K for samples in the intermediate doping range $x=0.03 - 0.04$. This reflects the benefit of an increased conductivity while the thermopower is still fairly large even for the more metallic specimen with $x = 0.04$. The thermal conductivity and respective FOM $ZT = S^2 T/(\rho\kappa)$ for these samples are plotted in Figs.~\ref{fig7}~(a) and (b), respectively. For the thermal-conductivity measurements, not only do the experimental setups differ between the high-$T$ and low-$T$ measurements, but also different samples were used.\cite{samplecomment} The agreement between low-$T$ and high-$T$ data is less satisfactory than in the resistivity and thermopower measurements, but still within the acceptable range. Except for $x=0.03$, all data exhibit a maximum in $\kappa(T)$ below 100~K. Toward higher temperatures, the thermal conductivity for all $x$ continuously decreases due to the three-phonon-scattering process and does not show any signature of saturation even at 850~K. The corresponding FOM plotted in Fig.~\ref{fig7}~(b) increases with increasing temperature for all $x$. Above 800~K a maximum $ZT\approx 0.35$ is found for $x = 0.035$. For a better comparison, we replotted various properties of W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ against the actual charge-carrier concentration $n$ at 300~K (filled symbols) and at 800~K (open symbols) in Fig.~\ref{fig8}: (a) $\rho_{xx}$, (b) Hall mobility $\mu = (n e \rho_{xx})^{-1}$ with the elementary charge $e$, (c) $S$, (d) $S^2/\rho_{xx}$, (e) $\kappa$, and (f) $ZT$. The dotted and dashed lines are guides to the eye. The absolute values of the resistivities are similar at 300~K and 800~K. The mobilities are plotted in Fig.~\ref{fig8}(b).
Since Hall-effect measurements were carried out only at 300~K and not at elevated temperatures, panel (b) contains only 300~K data. The suppression of $\rho_{xx}$ with $x$ is almost fully ascribed to the increase in $n$: An almost constant mobility is observed in this system. The thermopower at 300~K and 800~K given in Fig.~\ref{fig8} (c) decreases upon Ta doping, in agreement with the decrease in the resistivity. The difference $\Delta S = S_{\rm 800~K}-S_{\rm 300~K}$ in the absolute values for each sample amounts to around 100~$\mu$V/K. The maximum of the power factor is clearly seen in Fig.~\ref{fig8} (d) between $n=0.02$ and 0.04 at both temperatures. The absolute values increase by a factor of three or four when going from 300~K to 800~K. At the same time, the average thermal conductivity drops roughly to half of its room-temperature value, as depicted by the dotted and dashed lines in Fig.~\ref{fig8} (e). This reduction of the thermal conductivity should be attributed to an increased phonon-phonon scattering rate. The dependence of the FOM on $n$ shown in Fig.~\ref{fig8} (f) resembles that of the power factor and also exhibits a maximum in the intermediate Ta-doping range. The increase in $ZT$ between 300~K and 800~K amounts to more than one order of magnitude. \begin{figure}[t] \centering \includegraphics[width=8.5cm,clip]{Fig8.pdf} \caption{(Color online) Summary of transport data for the W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ system at $T=300$~K (filled symbols) and 800~K (open symbols). (a) Resistivity, (b) Hall mobility, (c) thermopower, (d) power factor, (e) thermal conductivity, and (f) FOM are plotted as a function of charge-carrier concentration per W site $n$. The lines in the panels are guides to the eye.
The color code of the different data points is the same as in Fig.~\ref{fig4}.} \label{fig8} \end{figure} \begin{table*}[t] \centering \begin{ruledtabular} \caption{Thermoelectric properties of W$_{1-x}$Ta$_x$Se$_{2}$\ (from Ref.~\onlinecite{brixner63a}) and W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ (this work). The actual charge-carrier concentration $n$ is presented in holes per W site, the resistivity in m$\Omega$cm, the thermopower in $\mu$V/K, the power factor in $\mu$W/K$^2$~cm, and the thermal conductivity in W/K~m; see text for details.} \label{tab1} \begin{tabular}{cccccccccccc} \toprule & & \multicolumn{5}{c}W$_{1-x}$Ta$_x$Se$_{2}$\ & \multicolumn{5}{c}W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ \\ \addlinespace[0.5em] \cline{3-7}\cline{8-12} \addlinespace[0.5em] $n$ & $T$ & $\rho$ & $S$ & $S^2/\rho$ & $\kappa_{\rm tot}$ & $ZT$ & $\rho$ & $S$ & $S^2/\rho$ & $\kappa_{\rm tot}$ & $ZT$\\ \addlinespace[0.5em]\hline \addlinespace[0.5em] \multirow{2}{*}{0.006} & 300~K & 7.0 & 112 & 1.8 & 6.87 & 0.01 & 23.0 & 194 & 1.6 & 4.11 & 0.01 \\ \addlinespace[0.1em] & 800~K & 6.9 & 335 & 16.3 & 2.56 & 0.60 & 15.2 & 302 & 6.0 & 2.00 & 0.24 \\ \addlinespace[0.1em] \multirow{2}{*}{0.027} & 300~K & 2.0 & 90 & 4.0 & 7.31 & 0.05 & 5.1 & 105 & 2.2 & 3.10 & 0.02 \\ \addlinespace[0.1em] & 800~K & 3.1 & 166 & 8.9 & 2.23 & 0.35 & 5.8 & 210 & 7.6 & 1.90 & 0.32 \\ \addlinespace[0.1em] \bottomrule \end{tabular} \end{ruledtabular} \end{table*} Finally, we compare the results on W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ with those on W$_{1-x}$Ta$_x$Se$_{2}$\ reported in the literature. There is one paper focusing comprehensively on the thermoelectric properties in W$_{1-x}$Ta$_x$Se$_{2}$\ by Brixner.\cite{brixner63a} Additional transport data from the same group are summarized in Ref.~\onlinecite{hicks64a}. In Ref.~\onlinecite{brixner63a}, the temperature dependences of $\rho$, $S$, $\kappa$, and $ZT$ are shown for $x = 0.01$ and 0.03. 
The respective values for the FOM at 800~K are $ZT \approx 0.6$ for $x=0.01$ and $ZT \approx 0.35$ for $x=0.03$, exceeding the maximum $ZT$ value of 0.32 reported here for W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$. The thermoelectric parameters for these samples are summarized for comparison in Table~\ref{tab1}. There are several differences between these systems: (i) For small $x$, the resistivity is smaller and the thermopower is larger in W$_{1-x}$Ta$_x$Se$_{2}$\ than in W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$, leading to larger power factors. (ii) Due to the disorder-scattering-induced reduction of the phononic mean-free path, the thermal conductivity in the Se-substituted system with $y=0.4$ is clearly smaller at room temperature than the value of W$_{1-x}$Ta$_x$Se$_{2}$\ ($y=0$) reported in Ref.~\onlinecite{brixner63a}. Apparently, this merit of the Te codoping is overcompensated by the increase in the resistivity due to additional disorder in the anion sublattice, which increases the scattering rate of the $p$-type charge carriers. (iii) The thermal conductivity\cite{brixnercomment} at 800~K in W$_{1-x}$Ta$_x$Se$_{2}$\ is larger than 2~W/K m, but almost comparable to W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$, and hence at high temperatures the FOM is larger in the ``pure'' diselenide system. \begin{table}[b] \centering \begin{ruledtabular} \caption{Electronic \ensuremath{\kappa_{\rm el}}\ and phononic \ensuremath{\kappa_{\rm ph}}\ contributions to the total thermal conductivities $\kappa_{\rm tot}=\ensuremath{\kappa_{\rm ph}}+\ensuremath{\kappa_{\rm el}}$ in W$_{1-x}$Ta$_x$Se$_{2}$\ (from Ref.~\onlinecite{brixner63a}) and W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ (this work). 
The actual charge-carrier concentration $n$ is presented in holes per W site and the thermal conductivity values in W/K~m; see text for details.} \label{tab2} \begin{tabular}{cccccccc} \toprule & & \multicolumn{3}{c}W$_{1-x}$Ta$_x$Se$_{2}$\ & \multicolumn{3}{c}W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ \\ \addlinespace[0.1em] \cline{3-5}\cline{6-8} $n$ & $T$ & $\kappa_{\rm el}$ & $\kappa_{\rm ph}$ & $\kappa_{\rm tot}$ & $\kappa_{\rm el}$ & $\kappa_{\rm ph}$ & $\kappa_{\rm tot}$ \\ \addlinespace[0.25em]\hline \addlinespace[0.75em] \multirow{2}{*}{0.006} & 300~K & 0.10 & 6.77 & 6.87 & 0.03 & 4.08 & 4.11 \\ \addlinespace[0.1em] & 800~K & 0.28 & 2.28 & 2.56 & 0.13 & 1.87 & 2.00 \\ \addlinespace[0.1em] \multirow{2}{*}{0.027} & 300~K & 0.36 & 6.95 & 7.31 & 0.14 & 2.96 & 3.10 \\ \addlinespace[0.1em] & 800~K & 0.64 & 1.59 & 2.23 & 0.33 & 1.57 & 1.90 \\ \addlinespace[0.1em] \bottomrule \end{tabular} \end{ruledtabular} \end{table} To see whether the phononic contributions to the total thermal conductivity are different, we assume that the Wiedemann-Franz law holds in these systems, and calculate the contribution of the charge carriers via $\ensuremath{\kappa_{\rm el}} = L_0 (T/\rho)$ with the Lorenz number $L_0 = 2.44 \times 10^{-8}$~V$^2$/K$^2$ of the Drude-Sommerfeld model. By subtracting \ensuremath{\kappa_{\rm el}}\ from the total thermal conductivity \ensuremath{\kappa_{\rm tot}}, the phononic part \ensuremath{\kappa_{\rm ph}}\ was calculated. In W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$, the electronic thermal conductivity \ensuremath{\kappa_{\rm el}}\ increases linearly with $n$ (not shown) and amounts at room temperature to $\sim 0.03 - 0.3$~W/K~m and at 800~K to $\sim 0.1 - 0.5$~W/K~m. This is less than 22\% of the total thermal conductivity, and most of the thermal conductivity is attributed to the phononic contribution. From the data given in Ref.~\onlinecite{brixner63a}, we estimated the respective electronic and phononic thermal conductivities in W$_{1-x}$Ta$_x$Se$_{2}$. 
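The Wiedemann-Franz decomposition described above is straightforward to reproduce; a minimal sketch using the $n = 0.006$ entries of Tables~\ref{tab1} and \ref{tab2} (the m$\Omega$cm$\,\to\,\Omega$m conversion is the only subtlety):

```python
# Decompose the total thermal conductivity via the Wiedemann-Franz law:
# kappa_el = L0 * T / rho,  kappa_ph = kappa_tot - kappa_el.
L0 = 2.44e-8  # Lorenz number of the Drude-Sommerfeld model, V^2/K^2

def decompose(rho_mohm_cm, kappa_tot, T):
    rho = rho_mohm_cm * 1e-5           # mOhm cm -> Ohm m
    kappa_el = L0 * T / rho            # electronic part, W/(K m)
    return kappa_el, kappa_tot - kappa_el

# W(1-x)Ta(x)Se(1.6)Te(0.4) at n = 0.006 (Tables I and II):
k_el_300, k_ph_300 = decompose(23.0, 4.11, 300.0)  # ~0.03 and ~4.08 W/(K m)
k_el_800, k_ph_800 = decompose(15.2, 2.00, 800.0)  # ~0.13 and ~1.87 W/(K m)
```

The resulting values match the $\ensuremath{\kappa_{\rm el}}$ and $\ensuremath{\kappa_{\rm ph}}$ entries of Table~\ref{tab2}, illustrating that the phononic contribution dominates at both temperatures.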
To readily compare these numbers to W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$, one has to take into account the real charge-carrier concentration per W site as estimated from Hall-effect measurements. In Ref.~\onlinecite{brixner63a} the hole concentration is only given for $x=0.01$ which corresponds to 0.004 per W site. Following Ref.~\onlinecite{hicks64a}, the carrier concentration for $x = 0.03$ is 0.02 per W site. Using these numbers, the electronic and phononic contributions of the thermal conductivities from Ref.~\onlinecite{brixner63a} have to be compared with the respective values of our W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ samples for nominal $x = 0.02$ ($n\approx 0.006$) and $x=0.035$ ($n\approx 0.027$); see Fig.~\ref{fig2} (c). Table~\ref{tab2} summarizes the different contributions for these two charge-carrier concentrations at 300~K and 800~K. In both systems, the thermal conductivity is dominated by the phononic contribution. However, the phononic thermal conductivity is smaller in W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ than in W$_{1-x}$Ta$_x$Se$_{2}$, especially at 300~K. Hence both components of the thermal conductivity $\kappa = \ensuremath{\kappa_{\rm ph}} + \ensuremath{\kappa_{\rm el}}$ are substantially reduced in W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$. It might be promising to further change the Te concentration and see whether one can optimize the thermoelectric properties and exceed the FOM reported for W$_{1-x}$Ta$_x$Se$_{2}$. \section{Summary} \label{summ} We present a comprehensive theoretical and experimental study on W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$\ for small Ta concentrations $0\leq x \leq 0.06$. From band-structure calculations and an analysis of the specific heat, we find clear evidence that upon introducing Te into the Se sites in WSe$_2$, the band structure changes. 
The isotropic band at the $\Gamma$ point is lowered in energy while the anisotropic bands at the $K$ and $K^\prime$ points shift toward the Fermi level, leading to a change in the doped-hole character. This crossover was monitored in the electronic specific-heat coefficient, which reflects the apparent change in the filling dependence of the density of states, namely, from filling charge carriers solely into the isotropic band at the $\Gamma$ point in W$_{1-x}$Ta$_x$Se$_{2}$\ toward filling into both the isotropic band at the $\Gamma$ point and the anisotropic bands at the $K$ and $K^\prime$ points in W$_{1-x}$Ta$_x$Se$_{1.6}$Te$_{0.4}$. Thermal- and electronic-transport measurements up to 850~K on this system yield a maximum thermoelectric figure of merit of about 0.3 at 850~K in the doping range $0.03 \leq x \leq 0.035$. In comparison to the W$_{1-x}$Ta$_x$Se$_{2}$\ system, Te doping was found to successfully suppress the thermal conductivity, especially around room temperature. However, this merit is overcompensated by an increased resistivity due to the additional disorder in the anion sublattice, which leads to stronger scattering of the hole carriers. \section*{Acknowledgments} We thank A.~Yamamoto and T.~Ideue for technical assistance as well as M.~S.~Bahramy for fruitful discussions. This study was supported by the Funding Program for World-Leading Innovative R\&D on Science and Technology (FIRST Program) from JSPS. MK is supported by a Grant-in-Aid for Young Scientists (B) from the Japan Society for the Promotion of Science (JSPS KAKENHI Grant No. 25800197).
\section{Introduction} A number of approaches for question answering have been proposed recently that use reinforcement learning to reason over a knowledge graph \citep{minerva,LinRX2018:MultiHopKG,N18-1165,DBLP:conf/aaai/ZhangDKSS18}. In these methods, the input question is first parsed into a constituent question entity and relation. The answer entity is then identified by sequentially taking a number of steps (or `hops') over the knowledge graph (KG) starting from the question entity. The agent receives a positive reward if it arrives at the correct answer entity and a negative reward for an incorrect answer entity. For example, for the question ``What is the capital of France?'', the question entity is $(France)$ and the goal is to find a path in the KG which connects it to $(Paris)$. The relation between the answer entity and question entity in this example is $(Capital\ of)$, which is missing from the KG and has to be inferred via alternative paths. This is illustrated in Figure~\ref{fig:ex_graph}. A possible two-hop path to find the answer is to use the fact that $(Macron)$ is the president of $(France)$ and that he lives in $(Paris)$. However, there are many paths that lead to the entity $(Paris)$, but also many that lead to other entities, which makes finding the correct answer a non-trivial task. \begin{figure}[t] \centering \include{ex_graph} \caption{Fictional graph for the question ``What's the capital of France?''. The relation $(Capital\ of)$ does not exist in the graph and thus an alternative path needs to be used that leads to the correct answer.} \label{fig:ex_graph} \end{figure} The standard evaluation metrics used for these systems are metrics developed for web search such as Mean Reciprocal Rank (MRR) and hits@k, where $k$ ranges from 1 to 20. We argue that this is not a correct evaluation mechanism for a practical question-answering system (such as Alexa, Cortana, Siri, etc.) where the goal is to return a single answer for each question.
Moreover, it is assumed that there is always an answer entity that can be reached from the question entity in a limited number of steps. However, this cannot be guaranteed in a large-scale commercial setting and for all KGs. For example, in our proprietary dataset used for experimentation, for 15.60\% of questions the answer entity cannot be reached within the limited number of steps used by the agent. Hence, we propose a new evaluation criterion, allowing systems to return `no answer' as a response when no answer is available. We demonstrate that existing state-of-the-art methods are not suited for a practical question-answering setting and perform poorly in our evaluation setup. The root cause of the poor performance is the reward structure, which does not provide any incentive to learn not to answer. The modified reward structure we present allows agents to learn not to answer in a principled way. Rather than having only two rewards, a positive and a negative reward, we introduce a ternary reward structure that also rewards agents for not answering a question. A higher reward is given to the agent for correctly answering a question than for not answering it. In this setup, the agent learns to make a trade-off between these three possibilities to obtain the highest total reward over all questions. Additionally, because the search space of possible paths grows exponentially with the number of hops, we also investigate using the Depth-First-Search (DFS) algorithm to collect paths that lead to the correct answer. We use these paths as a supervised signal for training the neural network before the reinforcement learning algorithm is applied. We show that this improves overall performance.
\section{Related work} The closest works to ours are the works by \citet{LinRX2018:MultiHopKG}, \citet{DBLP:conf/aaai/ZhangDKSS18} and \citet{minerva}, which consider the question answering task in a reinforcement learning setting in which the agent always chooses to answer.\footnote{An initial version of this paper has been presented at the Relational Representation Learning Workshop at NeurIPS 2018 as \citet{r2l_paper}.} Other approaches consider this as a link prediction problem in which multi-hop reasoning can be used to learn relational paths that link two entities. One line of work focuses on composing embeddings \cite{P15-1016,D15-1038,P16-1136} initially introduced for link prediction, e.g., TransE \cite{bordes2013translating}, ComplEx \cite{trouillon2016complex} or ConvE \cite{dettmers2018conve}. Another line of work focuses on logical rule learning such as neural logical programming \cite{DBLP:conf/nips/YangYC17} and neural theorem proving \cite{rocktaschel2017end}. Here, we focus on question answering rather than link prediction or rule mining, and use reinforcement learning because ground-truth paths leading to the answer entity are not available. Recently, popular textual QA datasets have been extended with unanswerable questions \citep{W17-2623,P18-2124}. Questions that cannot be answered are labeled with a `no answer' option, which allows for supervised training. This is different from our setup in which there are no ground-truth `no answer' labels. \begin{figure*}[t] \centering \begin{subfigure}[b]{0.6\textwidth} \centering \include{arch} \caption{History LSTM} \label{fig:arch_lstm} \end{subfigure} \centering \begin{subfigure}[b]{0.35\textwidth} \centering \include{arch_policy} \caption{Policy network at timestep $t$} \label{fig:arch_policy} \end{subfigure} \centering \caption{Figure \ref{fig:arch_lstm} illustrates the LSTM, which encodes the history of the path taken.
The output at timestep $t$ is used as input to the policy network, illustrated in Figure \ref{fig:arch_policy}, to determine which action to take next.} \label{fig:arch} \end{figure*} \section{Background: Reinforcement learning} We base our work on the recent reinforcement learning approaches introduced in \citet{minerva} and \citet{LinRX2018:MultiHopKG}. We denote the knowledge graph as $\mathcal{G}$, the set of entities as $\mathcal{E}$, the set of relations as $\mathcal{R}$ and the set of directed edges $\mathcal{L}$ between entities of the form $l = (e_1,r,e_2)$ with $e_1,e_2 \in \mathcal{E}$ and $r \in \mathcal{R}$. The goal is to find an answer entity $e_a$ given a question entity $e_q$ and the question relation $r_q$, when $(e_q,r_q,e_a)$ is not part of graph $\mathcal{G}$. We formulate this problem as a Markov Decision Problem (MDP) \citep{Sutton:1998:IRL:551283} with the following states, actions, transition function and rewards: \paragraph{States.} At every timestep $t$, the state $s_t$ is defined by the current entity $e_t$, the question entity $e_q$ and relation $r_q$, for which $e_t,e_q \in \mathcal{E}$ and $r_q \in \mathcal{R}$. More formally, $s_t = (e_t,e_q,r_q)$. \paragraph{Actions.} For a given entity $e_t$, the set of possible actions is defined by the outgoing edges from $e_t$. Thus ${A}_t = \{(r',e')|(e_t,r',e') \in \mathcal{G}\}$. \paragraph{Transition function.} The transition function $\delta$ maps $s_t$ to a new state $s_{t+1}$ based on the action taken by the agent. Consequently, $s_{t+1}=\delta(s_t,A_t)=\delta(e_t,e_q,r_q,A_t)$. \paragraph{Rewards.} The agent is rewarded based on the final state. For example, in \citet{minerva} and \citet{LinRX2018:MultiHopKG} the agent obtains a reward of 1 if the correct answer entity is reached as the final state and 0 otherwise (i.e., $R(s_T)=\mathbb{I}\{e_T=e_a\}$). 
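As a concrete illustration, the MDP above can be sketched in a few lines, using the fictional France/Macron/Paris graph from the introduction (extra relations here are illustrative) and the binary terminal reward $R(s_T)=\mathbb{I}\{e_T=e_a\}$ of prior work:

```python
# Toy version of the MDP: a fictional graph, actions as outgoing
# (relation, entity) edges, deterministic transitions, and the binary
# terminal reward used in prior work.
graph = {
    "France": [("president", "Macron"), ("official_language", "French")],
    "Macron": [("lives_in", "Paris")],
    "Paris": [],
    "French": [],
}

def actions(e_t):
    """A_t = {(r', e') | (e_t, r', e') in G}."""
    return graph[e_t]

def step(state, action):
    """delta maps s_t = (e_t, e_q, r_q) to s_{t+1} given the chosen action."""
    _, e_q, r_q = state
    _, e_next = action
    return (e_next, e_q, r_q)

def reward(state, e_a):
    """R(s_T) = 1{e_T = e_a}."""
    return 1.0 if state[0] == e_a else 0.0

# Two-hop rollout for "What is the capital of France?":
s = ("France", "France", "capital_of")
s = step(s, ("president", "Macron"))
s = step(s, ("lives_in", "Paris"))
print(reward(s, "Paris"))  # 1.0
```

The ternary reward introduced later replaces only the `reward` function; the rest of the environment is unchanged.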
\subsection{Training} We train a policy network $\pi$ using the REINFORCE algorithm of \citet{Williams:1992:SSG:139611.139614}, which maximizes the expected reward: \begin{equation} J(\bm{\theta}) = \mathbb{E}_{(e_q,r_q,e_a) \in \mathcal{G}} \mathbb{E}_{a_1,\dots,a_T \sim \pi} [R(s_T|e_q,r_q)] \end{equation} in which $a_t$ is the action selected at timestep $t$ following the policy $\pi$, and $\bm{\theta}$ are the parameters of the network. The policy network consists of two parts: a Long Short-Term Memory (LSTM) network which encodes the history of the traversed path, and a feed-forward neural network to select an action ($a_t$) out of all possible actions. Each entity and relation has a corresponding vector $\bm{e}_t, \bm{r}_t \in \mathbb{R}^d$. The action $a_t \in A_t$ is represented by the vectors of the relation and entity as $\bm{a}_t = [\bm{r}_{t+1}; \bm{e}_{t+1}] \in \mathbb{R}^{2d}$. The LSTM encodes the history of the traversed path and updates its hidden state at each timestep based on the selected action: \begin{equation} \bm{h}_t = LSTM(\bm{h}_{t-1},\bm{a}_{t-1}) \end{equation} This is illustrated in Figure~\ref{fig:arch_lstm}. Finally, the feed-forward neural network ($f$) combines the history $\bm{h}_t$, the current entity representation $\bm{e}_t$ and the query relation $\bm{r}_{q}$. Using softmax, we compute the probability for each action by calculating the dot product between the output of $f$ and each action vector $\bm{a}_t$: \begin{equation} \pi(a_t|s_t) = softmax(\bm{A}_t f(\bm{h}_t, \bm{e}_t, \bm{r}_{q})) \end{equation} in which $\bm{A}_t \in \mathbb{R}^{|A_t| \times 2d}$ is a matrix consisting of rows of action vectors $\bm{a}_t$. This is illustrated in Figure~\ref{fig:arch_policy}. During training, we sample over this probability distribution to select the action $a_t$, whereas during inference, we use beam search to select the most probable path.
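A minimal numpy sketch of the softmax action-selection step above; the single tanh feed-forward layer and the tiny embedding size are illustrative assumptions, not the paper's actual parametrization:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                    # embedding size (the paper uses d = 100)

h_t = rng.normal(size=2 * d)             # LSTM history encoding
e_t = rng.normal(size=d)                 # current-entity embedding
r_q = rng.normal(size=d)                 # query-relation embedding

# f(h_t, e_t, r_q): one illustrative feed-forward layer mapping to R^{2d}
W = rng.normal(size=(2 * d, 4 * d))
f_out = np.tanh(W @ np.concatenate([h_t, e_t, r_q]))

# A_t: one row [r_{t+1}; e_{t+1}] per available action (3 actions here)
A_t = rng.normal(size=(3, 2 * d))
scores = A_t @ f_out                     # dot product with each action vector
pi = np.exp(scores - scores.max())
pi /= pi.sum()                           # pi(a_t | s_t)

a_t = rng.choice(len(pi), p=pi)          # sampling, as done during training
```

At inference time this per-step distribution would instead feed a beam search over whole paths.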
\section{Evaluation} User-facing question answering systems inherently face a trade-off between presenting an answer to a user that could potentially be incorrect, and choosing not to answer. However, prior work in knowledge graph question-answering (QA) only considers cases in which the answering agent always produces an answer. This setup originates from the link prediction and knowledge base completion tasks in which the evaluation criteria are hits@k and Mean Reciprocal Rank (MRR), where $k$ ranges from 1 to 20. However, these metrics are not an accurate representation of practical question-answering systems in which the goal is to return a single correct answer or not answer at all. Moreover, using these metrics results in the model learning `spurious' paths, since the metrics encourage the models to make wild guesses even if the path is unlikely to lead to the correct answer. We therefore propose to measure system performance by the fraction of questions the system answers (Answer Rate) and the fraction of correct answers among all given answers (Precision). We combine these two metrics by taking the harmonic mean and call this the QA Score. This can be viewed as a variant of the popular F-Score metric, with answer rate used as an analogue to recall in the original metric. \section{Proposed method} \label{sec:method} In this section, we first introduce the supervised learning technique we used to pretrain the neural network before applying the reinforcement learning algorithm. Next, we describe the ternary reward structure. \subsection{Supervised learning} Typically in reinforcement learning, the search space of possible actions and paths grows exponentially with the path length. Our problem is no exception to this. Hence, an imitation learning approach, in which we provide a number of expert paths to the learning algorithm, could be beneficial to bootstrap the learning process.
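The QA Score introduced in the Evaluation section above can be computed directly from answer counts; a minimal sketch (the example counts are illustrative):

```python
def qa_score(n_answered, n_correct, n_total):
    """Harmonic mean of answer rate and precision (an F-score analogue,
    with answer rate playing the role of recall)."""
    answer_rate = n_answered / n_total
    precision = n_correct / n_answered if n_answered else 0.0
    if answer_rate + precision == 0.0:
        return 0.0
    return 2.0 * answer_rate * precision / (answer_rate + precision)

# 100 questions, 60 answered, 24 of those answered correctly:
print(qa_score(60, 24, 100))  # ~0.48
```

A system that answers everything with low precision and one that abstains almost always both score poorly, which is exactly the trade-off the metric is meant to capture.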
This idea has been explored previously in the context of link and fact prediction in knowledge graphs, where \citet{D17-1060} proposed to use a Breadth-First-Search (BFS) between the entity pairs to select a set of plausible paths. However, BFS favours shorter paths, which could bias the learner. We therefore use Depth-First-Search (DFS) to identify paths between question and answer entities and sample up to 100 paths to be used for the supervised training. If no path can be found between the entity pair, we return a `no answer' label. Following this, we train the network using the reinforcement learning algorithm, which refines it further. Note that it is not guaranteed that the paths found using DFS are all optimal. However, as we show in our experiments, bootstrapping with these paths provides a good initialization for the reinforcement learning algorithm. \subsection{Ternary reward structure}\label{sec:ternary} As mentioned previously, we encounter situations in which the answer entity cannot be reached in the limited number of steps taken by the agent. In such cases, the system should return a special answer `no answer' as the response. We can achieve this by adding a synthetic `no answer' action that leads to a special entity $e_{NO ANSWER}$. This is illustrated in Figure~\ref{fig:ex_graph_noanswer}. In the framework of \citet{minerva}, a binary reward is used which only distinguishes correct from incorrect answers. Following a similar protocol, we could award a reward of 1 for returning `no answer' when no answer is available in the KG. However, we cannot achieve reasonable training with such a reward structure. This is because there is no specific pattern for `no answer' that could be directly learned. Hence, if we reward a system equally for a correct answer and for no answer, it learns to always predict `no answer'.
We therefore propose a ternary reward structure in which a positive reward is given to a correct answer, a neutral reward when $e_{NO ANSWER}$ is selected as an answer, and a negative reward for an incorrect answer. More formally: \begin{equation} R(s_T) = \begin{cases} r_{pos} & \text{if } e_T=e_a, \\ 0 & \text{if } e_T=e_{NO ANSWER}, \\ r_{neg} & \text{if } e_T \not\in \{e_a,e_{NO ANSWER}\} \\ \end{cases} \end{equation} with $r_{pos} > 0$ and $r_{neg} < 0$. The idea is that the agent receives a larger reward for a correct answer than for not answering the question, and a negative reward for an incorrect answer. In the experimental section, we show that this mechanism provides better performance. \begin{figure}[t] \centering \include{ex_graph_noanswer} \caption{Fictional graph for the question ``What's the capital of France?''. The relation $(Capital\ of)$ does not exist in the graph and thus an alternative path needs to be used that leads to the correct answer. To allow the agent to abstain instead of returning an incorrect answer when it cannot find the correct one, a `no answer' relation is added between every entity node and a special `no answer' node. } \label{fig:ex_graph_noanswer} \end{figure} \begin{table*}[t] \caption{Results on FB15k-237 dataset.} \centering \input{results_table.tex} \label{tab:results_fb} \end{table*} \begin{table*}[t] \caption{Results on Alexa69k-378 dataset.} \centering \input{results_table_evi.tex} \label{tab:results_evi} \end{table*} \begin{table}[t] \caption{Statistics of various datasets.} \centering \input{datasets.tex} \label{tab:dataset} \end{table} \section{Experimental setup} We evaluate our proposed approach on a publicly available dataset, FB15k-237 \citep{fb15k237}, which is based on the Freebase knowledge graph, and a proprietary dataset, Alexa69k-378, which is a sample of Alexa's proprietary knowledge graph.
Both the public dataset and the proprietary dataset are good examples of real-world general-purpose knowledge graphs that can be used for question answering. FB15k-237 contains 14,505 different entities and 237 different relations resulting in 272,115 facts. Alexa69k-378 contains 69,098 different entities and 378 different relations resulting in 442,591 facts. We follow the setup of \citet{minerva}, using the same train/val/test splits for FB15k-237. For Alexa69k-378, we use 10\% of the full dataset for validation and test. For both datasets, we add the reverse relations of all relations in the training set in order to facilitate backward navigation, following the approach of previous work. Similarly, a `no op' relation is added for each entity between the entity and itself, which allows the agent to loop/reason multiple consecutive steps over the same entity. An overview of both datasets can be found in Table~\ref{tab:dataset}. We extend the publicly available implementation of \citet{minerva} for our experimentation. We set the size of the entity and relation representations $d$ at 100 and the hidden state at 200. We use a single-layer LSTM and train models with path length 3 (tuned using hyper-parameter search). We optimize the neural network using Adam \citep{DBLP:journals/corr/KingmaB14} with a learning rate of 0.001 and mini-batches of size 256 with 20 rollouts per example. At test time, we use beam search with a beam size of 100. Unlike \citet{minerva}, we also train entity embeddings after initializing them with random values. Reward values are set as $r_{pos}=10$ and $r_{neg}=-0.1$ after performing a coarse grid search for various reward values on the validation set. For all experiments, we selected the model with the highest QA Score on the corresponding validation set.
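Before turning to the results, the two training ingredients of the proposed method can be sketched together: DFS collection of expert paths (capped per entity pair, as described above) and the ternary terminal reward with the values $r_{pos}=10$ and $r_{neg}=-0.1$ used in the experiments. The toy graph is illustrative only.

```python
NO_ANSWER = "NO_ANSWER"

def collect_paths(graph, e_q, e_a, max_len=3, cap=100):
    """DFS: gather up to `cap` paths of length <= max_len from e_q to e_a.
    Falls back to a `no answer' label when no path exists."""
    paths, stack = [], [(e_q, [])]
    while stack and len(paths) < cap:
        e, path = stack.pop()
        if e == e_a and path:
            paths.append(path)
            continue
        if len(path) == max_len:
            continue
        for r, e_next in graph.get(e, []):
            stack.append((e_next, path + [(r, e_next)]))
    return paths or [[("no_answer", NO_ANSWER)]]

def reward(e_T, e_a, r_pos=10.0, r_neg=-0.1):
    """Ternary terminal reward: correct / abstain / incorrect."""
    if e_T == e_a:
        return r_pos
    if e_T == NO_ANSWER:
        return 0.0
    return r_neg

graph = {"France": [("president", "Macron")],
         "Macron": [("lives_in", "Paris")]}
print(collect_paths(graph, "France", "Paris"))
# -> [[('president', 'Macron'), ('lives_in', 'Paris')]]
```

The collected paths (or the `no answer' label) serve as the supervised signal for pretraining; the reward function then drives the subsequent RL phase.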
\section{Results} \label{sec:results} The results of our experiments for FB15k-237 and Alexa69k-378 are given in Table~\ref{tab:results_fb} and Table~\ref{tab:results_evi}, respectively. \paragraph{Supervised learning} For FB15k-237, we see that the model trained using reinforcement learning (RL) scores as well as the model trained using supervised learning. This makes supervised learning using DFS a strong baseline system for question answering over knowledge graphs, and for FB15k-237 in particular. On Alexa69k-378, models trained using supervised learning score lower on all metrics compared to RL. When combining supervised learning with RL, overall performance increases. \paragraph{No answer} When we train the RL system with our ternary reward structure (No Answer RL), the precision and QA score increase significantly on both datasets. For FB15k-237, our No Answer RL model decided not to answer over 40\% of the questions, with an absolute hits@1 reduction of only 1.3\% over standard RL. Moreover, of all the answered questions, 40.11\% were answered correctly compared to 24.75\% for the original question-answering system: an absolute improvement of over 15\%. This resulted in a final QA Score of 47.58\%, around 8\% higher than standard RL and 12\% higher than \citet{minerva}. Similarly, 60\% of the questions were not answered on Alexa69k-378. This resulted in a hits@1 decrease of roughly 1\%, but compared to standard RL, the precision increased from 16.77\% to 38.92\%: an absolute increase of more than 20\%. The final QA Score also increased from 28.72\% to 39.55\%, and also significantly improved over \citet{minerva} and \citet{LinRX2018:MultiHopKG}. The results indicate that using our method allows us to improve the precision of the question-answering system by choosing the right questions to answer, i.e., by not answering many questions that were previously answered incorrectly.
This comes at the expense of not answering some questions that previously could be answered correctly. \paragraph{All} Finally, all methods were combined in a single method. First, the model was pretrained in a supervised way. Then it was retrained using the RL algorithm with the ternary reward structure. This jointly trained model obtained better QA scores than any individually trained model. On FB15k-237, a QA score of 52.16\% is obtained, which is an absolute improvement of 4.58\% over the best individual model and 2.66\% over \citet{LinRX2018:MultiHopKG}. Similarly, on Alexa69k-378, an absolute improvement of 2.57\% over the best individual result is obtained, an almost 10\% absolute improvement over \citet{LinRX2018:MultiHopKG}. Sample results from our method are given in Table~\ref{tab:ex_correct} and Table~\ref{tab:ex_incorrect}. \begin{table*}[t] \caption{Example paths of correctly answered questions on FB15k-237. Note that the fact $(e_q,r_q,e_a)$ is not part of the KG.} \centering \input{examples.tex} \label{tab:ex_correct} \end{table*} \begin{table*}[t] \caption{Example question from FB15k-237, incorrectly answered by \cite{minerva} and not answered by our system. Note that the fact $(e_q,r_q,e_a)$ is not part of the KG. } \centering \input{examples_incorrect.tex} \label{tab:ex_incorrect} \end{table*} \begin{figure}[t] \centering \include{positive_reward} \caption{Influence of changing the positive reward for FB15k-237. The negative reward is fixed at $r_{neg}=-0.1$ and the neutral reward is fixed at $r_{neutral}=0$.} \label{fig:pos_reward} \end{figure} \begin{figure}[t] \centering \include{negative_reward} \caption{Influence of changing the negative reward for FB15k-237. The positive reward is fixed at $r_{pos}=10$ and the neutral reward is fixed at $r_{neutral}=0$.} \label{fig:neg_reward} \end{figure} \paragraph{Reward tuning} An important part of increasing the QA score is to select the right combination of rewards.
Therefore, we ran additional experiments where we varied either the positive or the negative reward, keeping the other rewards fixed. In Figure~\ref{fig:pos_reward}, the precision, answer rate and QA score are shown when varying the positive reward and keeping the neutral and negative rewards fixed. When the positive reward is very small ($r_{pos}=0.625$), almost no questions are answered. When the positive reward $r_{pos}$ is $1.25$, roughly 20\% of the questions are answered with a 50\% precision. After that, the precision starts declining and the answer rate starts increasing, resulting in an overall increase in QA score. The QA score plateaus for $r_{pos}$ between 5 and 10 and then starts decreasing slowly. In Figure~\ref{fig:neg_reward}, the precision, answer rate and QA score are shown when varying the negative reward and keeping the neutral and positive rewards fixed. In this case, the highest QA score is achieved when the negative reward is between -0.25 and -0.1. As long as the negative reward is lower than zero, a wrong answer gets penalized and the QA score stays high. \section{Conclusions} In this paper, we addressed the limitations of current approaches for question answering over a knowledge graph that use reinforcement learning. Rather than only returning a correct or incorrect answer, we allowed the model to not answer a question when it is unsure about it. Our ternary reward structure gives different rewards for correctly answered, incorrectly answered and unanswered questions. We also introduced a new evaluation metric which takes these three options into account. We showed that we can significantly improve the precision of answered questions compared to previous approaches, making this a promising direction for practical usage in knowledge graph-based QA systems. \bibliographystyle{acl_natbib}
\section{Introduction} \setcounter{equation}{0} Recently in Ref.\cite{IV3} we have proposed a phenomenological quantum field theoretic model for strong low--energy $K^-p$ interactions at threshold for the analysis of the experimental data by the DEAR Collaboration Refs.\cite{DEAR,SIDDHARTA} on the energy level displacement of the ground state of kaonic hydrogen \begin{eqnarray}\label{label1.1} - \epsilon^{(\exp)}_{1s} + i\,\frac{\Gamma^{(\exp)}_{1s}}{2} &=& (- 194 \pm 37\,(\rm stat.) \pm 6\,(\rm syst.))\nonumber\\ &+& i\,(125 \pm 56\,(\rm stat.) \pm 15\,(\rm syst.))\,{\rm eV}. \end{eqnarray} According to the Deser--Goldberger--Baumann--Thirring--Trueman formula (the DGBTT) Ref.\cite{SD54}, the energy level displacement of the ground state of kaonic hydrogen is related to the S--wave amplitude $f^{K^-p}_0(0)$ of $K^-p$ scattering at threshold as \begin{eqnarray}\label{label1.2} - \,\epsilon_{1s} + i\,\frac{\Gamma_{1s}}{2} &=& 2\,\alpha^3\mu^2 \,f^{K^-p}_0(0) = 412.13\,f^{K^-p}_0(0), \end{eqnarray} where $\mu = m_K m_p/(m_K + m_p) = 323.48\,{\rm MeV}$ is the reduced mass of the $K^-p$ pair, calculated for $m_K = 493.68\,{\rm MeV}$ and $m_p = 938.27\,{\rm MeV}$, and $\alpha = 1/137.036$ is the fine--structure constant Ref.\cite{PDG04}. The theoretical accuracy of the DGBTT formula Eq.(\ref{label1.2}) is about $3\,\%$ including the vacuum polarisation correction Ref.\cite{TE04}. For a non--zero relative momentum $Q$ the amplitude $f^{K^-p}_0(Q)$ is defined by \begin{eqnarray}\label{label1.3} f^{K^-p}_0(Q) = \frac{1}{2iQ}\,\Big(\eta^{K^-p}_0(Q)\, e^{\textstyle\,2i\delta^{K^-p}_0(Q)} - 1\Big), \end{eqnarray} where $\eta^{K^-p}_0(Q)$ and $\delta^{K^-p}_0(Q)$ are the inelasticity and the phase shift of the reaction $K^- + p \to K^- + p$, respectively. 
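The numerical coefficient in Eq.(\ref{label1.2}) can be cross-checked directly from the quoted masses; the short Python sketch below is purely illustrative (it assumes $\hbar c = 197.327\,{\rm MeV\,fm}$ for converting the amplitude from fm to ${\rm MeV}^{-1}$):

```python
# Cross-check of the DGBTT coefficient 2 alpha^3 mu^2 of Eq.(1.2).
hbar_c = 197.327                      # MeV*fm, converts fm to MeV^-1
alpha = 1.0 / 137.036                 # fine-structure constant
m_K, m_p = 493.68, 938.27             # MeV, kaon and proton masses
mu = m_K * m_p / (m_K + m_p)          # MeV, reduced mass of the K^- p pair
coeff = 2.0 * alpha**3 * mu**2 / hbar_c * 1.0e6   # eV per fm
print(round(mu, 2), round(coeff, 1))  # reduced mass ~323.48 MeV, coefficient ~412.1 eV/fm
```

The coefficient multiplies the threshold amplitude in fm and returns the energy level displacement in eV, as used throughout the paper.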
The real part ${\cal R}e\,f^{K^-p}_0(0)$ of $f^{K^-p}_0(0)$ defines the S--wave scattering length $a^{K^-p}_0$ of $K^-p$ scattering \begin{eqnarray}\label{label1.4} {\cal R}e\,f^{K^-p}_0(0) = a^{K^-p}_0 = \frac{1}{2}\,(a^0_0 + a^1_0), \end{eqnarray} where $a^0_0$ and $a^1_0$ are the S--wave scattering lengths $a^I_0$ with isospin $I = 0$ and $I = 1$, respectively. The imaginary part ${\cal I}m\,f^{K^-p}_0(0)$ of $f^{K^-p}_0(0)$ is caused by inelastic channels $K^-p \to Y\pi$, where $Y\pi = \Sigma^-\pi^+$, $\Sigma^+\pi^-$, $\Sigma^0\pi^0$ and $\Lambda^0\pi^0$, allowed kinematically at threshold $Q = 0$. The S--wave amplitude Eq.(\ref{label1.3}) can be represented in the following form \begin{eqnarray}\label{label1.5} f^{K^-p}_0(Q) &=& \frac{1}{2iQ}\,\Big(\eta^{K^-p}_0(Q)\, e^{\textstyle\,2i\delta^{K^-p}_0(Q)} - 1\Big) =\nonumber\\ &=& \frac{1}{2iQ}\,\Big( e^{\textstyle\,2i\delta^{K^-p}_B(Q)} - 1\Big) + e^{\textstyle\,2i\delta^{K^-p}_B(Q)}f^{K^-p}_0(Q)_R, \end{eqnarray} where $\delta^{K^-p}_B(Q)$ is the phase shift of an elastic background of low--energy $K^-p$ scattering and $f^{K^-p}_0(Q)_R$ is the contribution of resonances. In our model of strong low--energy $\bar{K}N$ interactions near threshold proposed in Ref.\cite{IV3} the imaginary part ${\cal I}m\,f^{K^-p}_0(0)$ of the S--wave amplitude of $K^-p$ scattering is defined by the contributions of strange baryon resonances $\Lambda(1405)$, $\Lambda(1800)$ and $\Sigma(1750)$. This implies that \begin{eqnarray}\label{label1.6} {\cal I}m\,f^{K^-p}_0(0) = {\cal I}m\,f^{K^-p}_0(0)_R. \end{eqnarray} According to Gell--Mann's $SU(3)$ classification of hadrons, the $\Lambda(1405)$ resonance is an $SU(3)$ singlet, whereas the resonances $\Lambda(1800)$ and $\Sigma(1750)$ are components of an $SU(3)$ octet Ref.\cite{PDG04}.
The real part ${\cal R}e\,f^{K^-p}_0(0)$ of the S--wave amplitude of $K^-p$ scattering at threshold \begin{eqnarray}\label{label1.7} {\cal R}e\,f^{K^-p}_0(0) = {\cal R}e\,f^{K^-p}_0(0)_R + {\cal R}e\,\tilde{f}^{K^-p}_0(0) \end{eqnarray} is defined by the contribution of (i) the strange baryon resonances ${\cal R}e\,f^{K^-p}_0(0)_R$ in the s--channel of low--energy elastic $K^-p$ scattering, (ii) the exotic four--quark (or $K\bar{K}$ molecules) scalar states $a_0(980)$ and $f_0(980)$ in the $t$--channel of low--energy elastic $K^- p$ scattering and (iii) hadrons with non--exotic quark structures, i.e. $q\bar{q}$ for mesons and $qqq$ for baryons, where $q = u, d$ or $s$ quarks. The contributions of exotic mesons and non--exotic hadrons we denote as ${\cal R}e\,\tilde{f}^{K^-p}_0(0)$. According to Ref.\cite{MR96}, we describe strange baryon resonances as elementary particle fields coupled to octets of low-lying baryons $B = (N,\Lambda^0,\Sigma, \Xi)$ and pseudoscalar mesons $P = (\pi,K,\bar{K},\eta(550))$. The effective phenomenological low--energy Lagrangians of these interactions are Ref.\cite{IV3}: \begin{eqnarray}\label{label1.8} \hspace{-0.3in}{\cal L}_{\Lambda_1BP}(x) &=& g_1\bar{\Lambda}^0_1(x)\,{\rm tr}\{B(x)P(x)\} + {\rm h.c.}, \nonumber\\ \hspace{-0.3in}{\cal L}_{B_2BP}(x) &=& \frac{1}{\sqrt{2}}\,g_2\,{\rm tr}\{\{\bar{B}_2(x),B(x)\}P(x)\} + \frac{1}{\sqrt{2}}\,f_2\,{\rm tr}\{[\bar{B}_2(x),B(x)]P(x)\} + {\rm h.c.}, \end{eqnarray} where $g_1$, $g_2$ and $f_2$ are phenomenological coupling constants, $\Lambda^0_1(x)$ and $B_2(x)$ are interpolating field operators of the singlet $\Lambda(1405)$ and octet of strange baryon resonances, respectively. 
The interactions of resonances with the meson--baryon pairs $\bar{K}N$, $Y\pi$ and $Y\eta(550)$, where $Y = \Sigma^{\pm}, \Sigma^0$ or $\Lambda^0$, are given by \begin{eqnarray}\label{label1.9} {\cal L}_{\Lambda^0_1BP}(x) &=& g_1\,\bar{\Lambda}^0_1(x)(\vec{\Sigma}(x)\cdot \vec{\pi}(x) - p(x)K^-(x) + n(x)\bar{K}^0(x) + \frac{1}{3}\,\Lambda^0(x)\eta(x)) + {\rm h.c.},\nonumber\\ {\cal L}_{\Lambda^0_2BP}(x) &=& \frac{g_2}{\sqrt{3}}\,\bar{\Lambda}^0_2(x)(\vec{\Sigma}(x)\cdot \vec{\pi}(x) - \Lambda^0(x)\eta(x))\nonumber\\ &+& \frac{g_2 + 3f_2}{2\sqrt{3}}\,\bar{\Lambda}^0_2(x)\,(p(x)K^-(x) - n(x)\bar{K}^0(x)) + {\rm h.c.},\nonumber\\ {\cal L}_{\Sigma^0_2BP}(x)&=& f_2\,\bar{\Sigma}^0_2(x)\,(\Sigma^-(x)\pi^+(x) - \Sigma^+(x)\pi^-(x))\nonumber\\ &+& \frac{g_2}{\sqrt{3}}\,\bar{\Sigma}^0_2(x)\,(\Lambda^0(x)\pi^0(x) + \Sigma^0(x)\eta(x))\nonumber\\ &+& \frac{g_2 - f_2}{2}\,\bar{\Sigma}^0_2(x)\,(- p(x)K^-(x) - n(x)\bar{K}^0(x)) + {\rm h.c.},\nonumber\\ {\cal L}_{\Sigma^-_2BP}(x) &=& f_2\,\bar{\Sigma}^-_2(x) (\Sigma^-(x) \pi^0(x) - \Sigma^0(x) \pi^-(x)) + \frac{g_2}{\sqrt{3}}\,\bar{\Sigma}^-_2(x)\Lambda^0(x)\pi^-(x)\nonumber\\ &&- \frac{1}{\sqrt{2}}\,(g_2 - f_2)\,\bar{\Sigma}^-_2(x) n(x) K^-(x) + {\rm h.c.} \end{eqnarray} As has been pointed out in Ref.\cite{MR96}, the inclusion of the $\Lambda(1405)$ resonance as an elementary particle field does not contradict ChPT by Gasser and Leutwyler Ref.\cite{JG83} and allows one to calculate the low--energy parameters of $\bar{K}N$ scattering to leading order in Effective Chiral Lagrangians. Using the effective Lagrangians Eq.(\ref{label1.9}) we obtain the S--wave amplitudes of inelastic channels of $K^-p$ scattering at threshold $f(K^-p \to Y\pi)$, where $Y\pi = \Sigma^{\mp}\pi^{\pm}$, $\Sigma^0\pi^0$ and $\Lambda^0\pi^0$.
The theoretical cross sections $\sigma(K^-p \to Y\pi)$ for these reactions satisfy the experimental data Ref.\cite{DT71} \begin{eqnarray}\label{label1.10} \gamma &=& \frac{\sigma(K^-p \to \Sigma^-\pi^+)}{\sigma(K^-p \to \Sigma^+\pi^-)} = 2.360 \pm 0.040,\nonumber\\ R_c &=& \frac{\sigma(K^-p \to \Sigma^-\pi^+) + \sigma(K^-p \to \Sigma^+\pi^-)}{\sigma(K^-p \to \Sigma^-\pi^+) + \sigma(K^-p \to \Sigma^+\pi^-) + \sigma(K^-p \to \Sigma^0\pi^0) + \sigma(K^-p \to \Lambda^0\pi^0)} = \nonumber\\ &=&0.664 \pm 0.011,\nonumber\\ R_n &=& \frac{\sigma(K^-p \to \Lambda^0\pi^0)}{\sigma(K^-p \to \Sigma^0\pi^0) + \sigma(K^-p \to \Lambda^0\pi^0)} = 0.189 \pm 0.015 \end{eqnarray} with an accuracy of about $6\,\%$, under the constraint that the $\Lambda(1800)$ resonance decouples from the $K^-p$ pair, which gives $f_2 = -g_2/3$. This result is obtained without specifying the numerical values of the coupling constants $g_1$ and $g_2$ and the masses of resonances, but using only physical masses of interacting particles for the calculation of phase volumes. For $f_2 = -g_2/3$ the S--wave amplitudes $f(K^-p\to Y\pi)$ can be defined by \begin{eqnarray}\label{label1.11} f(K^-p \to \Sigma^-\pi^+) &=&+\, \frac{1}{4\pi}\,\frac{\mu}{~m_{K^-}}\,\sqrt{\frac{m_{\Sigma^-}}{m_p}} \Big(-\,A + \frac{1}{2}\,B\Big),\nonumber\\ f(K^-p \to \Sigma^+\pi^-) &=&+\, \frac{1}{4\pi}\,\frac{\mu}{~m_{K^-}}\, \sqrt{\frac{m_{\Sigma^+}}{m_p}}\,\Big(-\,A - \frac{1}{2}\,B\Big),\nonumber\\ f(K^-p \to \Sigma^0\pi^0) &=&-\, \frac{1}{4\pi}\,\frac{\mu}{~m_{K^-}}\, \sqrt{\frac{m_{\Sigma^0}}{m_p}}\,A,\nonumber\\ f(K^-p \to \Lambda^0\pi^0) &=& -\,\frac{1}{4\pi}\, \frac{\mu}{~m_{K^-}}\,\sqrt{\frac{m_{\Lambda^0}}{m_p}}\,\frac{\sqrt{3}}{2}\,B, \end{eqnarray} where $A = -\,6.02\,{\rm fm}$ is the contribution of the $\Lambda(1405)$ resonance, calculated for $g_1 = 0.91$ and $m_{\Lambda(1405)} = 1405\,{\rm MeV}$ Ref.\cite{IV3}. The parameter $B$ describes the contribution of the baryon resonance octet.
Unfortunately, in Ref.\cite{IV3} we have exaggerated the role of the $\Sigma(1750)$ resonance in strong low--energy $\bar{K}N$ interactions at threshold. The contribution of the $\Sigma(1750)$ resonance with the recommended values of its parameters does not define $B$ correctly. More precisely, the contribution of the $\Sigma(1750)$ resonance does not saturate the sum rule Eq.(\ref{label2.13}), which is a consequence of the low--energy theorem $a^0_0 + 3\,a^1_0 = 0$ Eq.(\ref{label2.8}). Therefore, instead of the assertion that $B$ is caused by the contribution of the $\Sigma(1750)$ resonance we argue that {\it $B$ is defined by a contribution of a baryon background with a property of an $SU(3)$ octet and quantum numbers of the $\Lambda(1800)$ and $\Sigma(1750)$ resonances}. The former is important for the correct description of the experimental data Eq.(\ref{label1.10}). Using the relation between the S--wave amplitudes of the reactions $K^-p \to \Sigma^-\pi^+$ and $K^-p \to \Sigma^+\pi^-$, imposed by the experimental data Eq.(\ref{label1.10}), we obtain the contribution of the baryon background $B$ in terms of $\gamma$, $A$ and the phase volumes of the final $\Sigma^-\pi^+$ and $\Sigma^+\pi^-$ states. This gives \begin{eqnarray}\label{label1.12} B = 2\,\frac{\sqrt{\gamma\,k_{\Sigma^+\pi^-}} - \sqrt{k_{\Sigma^-\pi^+}}}{ \sqrt{\gamma\,k_{\Sigma^+\pi^-}} + \sqrt{k_{\Sigma^-\pi^+}}}\,(-\,A) = 2.68\,{\rm fm}, \end{eqnarray} where $k_{\Sigma^+\pi^-} = 181.34\,{\rm MeV}$ and $k_{\Sigma^-\pi^+} = 172.73\,{\rm MeV}$ are the relative momenta of the $\Sigma^{\pm}\pi^{\mp}$ pairs at threshold of $K^-p$ scattering, calculated for physical masses of interacting particles Ref.\cite{PDG04}. The phase volumes of the final $\Sigma^-\pi^+$ and $\Sigma^+\pi^-$ states are equal to $k_{\Sigma^-\pi^+} /4\pi(m_K + m_p)$ and $k_{\Sigma^+\pi^-}/4\pi(m_K + m_p)$, respectively. The paper is organised as follows.
In Section 2 we calculate the S--wave amplitudes of $K^-p$ and $K^-n$ scattering at threshold. We show that the S--wave scattering lengths $a^{K^-p}_0$ and $a^{K^-n}_0$ of $K^-p$ and $K^-n$ scattering satisfy the low--energy theorem $a^{K^-p}_0 + a^{K^-n}_0 = (a^0_0 + 3\,a^1_0)/2 = 0$. We show that in the chiral limit due to isospin invariance $a^{K^-p}_0 + a^{K^-n}_0 = (a^0_0 + 3\,a^1_0)/2 = -\,\sqrt{6}\,b^0_0 = 0$, where $b^0_0 = (a^{\pi^-p}_0 + a^{\pi^-n}_0)/2$ is the isoscalar S--wave scattering length of $\pi^-N$ scattering, vanishing in the chiral limit. The low--energy theorem $a^0_0 + 3\,a^1_0 = 0$ can also be derived using invariance of strong low--energy interactions under $U$--spin rotations \cite{HL65}. We calculate the energy level displacement of the ground state of kaonic hydrogen. The theoretical value agrees well with the experimental data by the DEAR Collaboration. Using the results obtained in Section 2 for the S--wave scattering lengths of $K^-N$ scattering and in Ref.\cite{IV4} we recalculate the S--wave scattering length $a^{K^-d}_0$ of $K^-d$ scattering at threshold. We calculate the energy level displacement of the ground state of kaonic deuterium. All results agree well with those obtained in Ref.\cite{IV4}. In Section 3 we analyse the cross sections for elastic and inelastic $K^-p$ scattering for laboratory momenta $70\,{\rm MeV}/c \le p_{lab} \le 150\,{\rm MeV}/c$ of the incident $K^-$--meson. The theoretical cross sections agree with the available experimental data within two standard deviations. In the Conclusion we discuss the obtained results.
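Before turning to Section 2, the numerical value of $B$ in Eq.(\ref{label1.12}) can be verified with a few lines of Python; the sketch is an illustrative cross-check, not part of the derivation, and also confirms that the amplitudes of Eq.(\ref{label1.11}) return the measured ratio $\gamma$ when the mass-ratio factors $\sqrt{m_{\Sigma^-}/m_{\Sigma^+}}\approx 1$ are neglected:

```python
import math

# Background parameter B of Eq.(1.12) from gamma, A and the relative momenta.
gamma = 2.360                 # sigma(Sigma^- pi^+)/sigma(Sigma^+ pi^-), Eq.(1.10)
A = -6.02                     # fm, Lambda(1405) contribution
k_sp = 181.34                 # MeV, Sigma^+ pi^- relative momentum at threshold
k_sm = 172.73                 # MeV, Sigma^- pi^+ relative momentum at threshold

r, s = math.sqrt(gamma * k_sp), math.sqrt(k_sm)
B = 2.0 * (r - s) / (r + s) * (-A)          # fm, ~2.68 fm

# The amplitudes of Eq.(1.11) scale as (-A + B/2) and (-A - B/2); weighted by
# the phase-space momenta they reproduce gamma (mass-ratio factors neglected).
gamma_check = ((-A + 0.5 * B)**2 * k_sm) / ((-A - 0.5 * B)**2 * k_sp)
print(round(B, 2), round(gamma_check, 3))
```

The check closes the loop: the value of $B$ extracted from $\gamma$ feeds back into the amplitudes and reproduces $\gamma$ exactly.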
\section{S--wave amplitude of $K^-N$ scattering at threshold} \setcounter{equation}{0} \subsection{S--wave amplitude of $K^-p$ scattering at threshold} As has been shown in Ref.\cite{IV3}, the imaginary part of the S--wave amplitude of $K^-p$ scattering at threshold can be represented by \begin{eqnarray}\label{label2.1} {\cal I}m\,f^{K^-p}_0(0) &=& {\cal I}m\,f^{K^-p}_0(0)_R = \frac{1}{R_c}\, \Big(1 + \frac{1}{\gamma}\Big)\,|f(K^-p \to \Sigma^-\pi^+)|^2 k_{\Sigma^-\pi^+} =\nonumber\\ &=& (0.35\pm 0.02)\,{\rm fm}, \end{eqnarray} where $f(K^-p \to \Sigma^-\pi^+) = 0.43\,{\rm fm}$ and $\pm\,0.02$ is an accuracy of about $6\,\%$ of our description of the experimental data Eq.(\ref{label1.10}). The contribution of the $\Lambda(1405)$ resonance and the baryon background to ${\cal R}e\,f^{K^-p}_0(0)$ is equal to Ref.\cite{IV3} \begin{eqnarray}\label{label2.2} {\cal R}e\,f^{K^-p}_0(0)_R = \frac{1}{4\pi}\,\frac{\mu}{~~m_K}\,(A + B) = (-\,0.17\pm 0.01)\,{\rm fm}. \end{eqnarray} Since the contribution ${\cal R}e\,\tilde{f}^{K^-p}_0(0) = (-\,0.33\pm 0.04)\,{\rm fm}$, calculated in Ref.\cite{IV3}, is not changed, the total real part of the S--wave amplitude of $K^-p$ scattering at threshold amounts to \begin{eqnarray}\label{label2.3} {\cal R}e\,f^{K^-p}_0(0) = {\cal R}e\,f^{K^-p}_0(0)_R +{\cal R}e\,\tilde{f}^{K^-p}_0(0) = (-\,0.50 \pm 0.05)\,{\rm fm}. \end{eqnarray} Hence, for the S--wave amplitude of $K^-p$ scattering at threshold we get \begin{eqnarray}\label{label2.4} f^{K^-p}_0(0) = (-\,0.50\pm 0.05) + \,i\,(0.35 \pm 0.02)\,{\rm fm}. \end{eqnarray} This agrees well with the result obtained in Ref.\cite{IV3}. \subsection{S--wave amplitude of $K^-n$ scattering at threshold} Since the $K^-n$ pair has isospin $I = 1$, in our model the resonant parts of the S--wave amplitudes of elastic and inelastic $K^-n$ scattering at threshold are described by the contribution of the baryon background $B$.
The imaginary part ${\cal I}m\,f^{K^-n}_0(0)$ is defined by the inelastic channels $K^-n \to Y\pi$ with $Y\pi = \Sigma^-\pi^0$, $\Sigma^0\pi^-$ and $\Lambda^0\pi^-$. Using the results obtained in Ref.\cite{IV4} we get \begin{eqnarray}\label{label2.5} {\cal R}e\,f^{K^-n}_0(0) &=& {\cal R}e\,\tilde{f}^{K^-n}_0(0) + \frac{1}{2\pi}\, \frac{\mu}{m_K}\,B =\nonumber\\ &=& (0.22\pm 0.02) + \frac{1}{2\pi}\, \frac{\mu}{m_K}\,B = (0.50 \pm 0.02)\,{\rm fm},\nonumber\\ {\cal I}m\,f^{K^-n}_0(0) &=& \sum_{Y\pi}|f(K^-n \to Y\pi)|^2 k_{Y\pi} = 0.04 \,{\rm fm},\nonumber\\ f(K^-n \to \Sigma^-\pi^0)&=& +\,\frac{1}{4\pi}\, \frac{\mu}{m_K}\,\sqrt{\frac{m_{\Sigma^-}}{m_p}}\, \frac{1}{\sqrt{2}}\,B = +\,0.11\,{\rm fm}, \nonumber\\ f(K^-n \to \Sigma^0\pi^-)&=& -\,\frac{1}{4\pi}\, \frac{\mu}{m_K}\,\sqrt{\frac{m_{\Sigma^0}}{m_p}}\,\frac{1}{\sqrt{2}}\, B = -\,0.11\,{\rm fm},\nonumber\\ f(K^-n \to \Lambda^0\pi^-)&=& -\,\frac{1}{4\pi}\, \frac{\mu}{m_K}\,\sqrt{\frac{m_{\Lambda^0}}{m_p}}\,\frac{\sqrt{3}}{2}\, B = -\,0.13\,{\rm fm}, \end{eqnarray} where $k_{\Sigma^-\pi^0} = 181.36\,{\rm MeV}$, $k_{\Sigma^0\pi^-} = 183.50\,{\rm MeV}$ and $k_{\Lambda^0\pi^-} = 256.88\,{\rm MeV}$ are the relative momenta of the pairs $\Sigma^-\pi^0$, $\Sigma^0\pi^-$ and $\Lambda^0\pi^-$ at threshold of $K^-n$ scattering. Since the contribution of the exotic scalar mesons $a_0(980)$ and $f_0(980)$ to the S--wave scattering amplitude of $K^-n$ scattering at threshold vanishes, ${\cal R}e\,\tilde{f}^{K^-n}_0(0) = (0.22\pm 0.02)\,{\rm fm}$ is defined by low--energy interactions of non--exotic hadrons only \cite{IV4}. The S--wave amplitude of $K^-n$ scattering at threshold is equal to \begin{eqnarray}\label{label2.6} f^{K^-n}_0(0) = (+\,0.50\pm 0.02) + \,i\,(0.04 \pm 0.00)\,{\rm fm}. 
\end{eqnarray} Equating $f^{K^-p}_0(0) = (\tilde{a}^0_0 + \tilde{a}^1_0)/2$ and $f^{K^-n}_0(0) = \tilde{a}^1_0$, where $\tilde{a}^0_0$ and $\tilde{a}^1_0$ are complex S--wave scattering lengths of $\bar{K}N$ scattering with isospin $I = 0$ and $I = 1$, we get the numerical values of $\tilde{a}^0_0$ and $\tilde{a}^1_0$: \begin{eqnarray}\label{label2.7} \tilde{a}^0_0 &=& (-\,1.50\pm 0.05) + \,i\,(0.66 \pm 0.04)\,{\rm fm}, \nonumber\\ \tilde{a}^1_0 &=& (+\,0.50\pm 0.02) + \,i\,(0.04 \pm 0.00)\,{\rm fm}, \end{eqnarray} where ${\cal R}e\,\tilde{a}^0_0 = a^0_0 = (-\,1.50\pm 0.05)\,{\rm fm}$ and ${\cal R}e\,\tilde{a}^1_0 = a^1_0 = (+\,0.50\pm 0.02)\,{\rm fm}$. The complex S--wave scattering length $\tilde{a}^0_0$ agrees well with the scattering length obtained by Dalitz and Deloff Ref.\cite{RD91} \[\tilde{a}^0_0 = (-\,1.54 \pm 0.05) + \,i\,(0.74 \pm 0.02)\,{\rm fm}\] \noindent for the position of the pole on sheet II of the $E$--plane $E^* - i\,\Gamma/2$ with $E^* = 1404.9\,{\rm MeV}$ and $\Gamma = 53.1\,{\rm MeV}$ Ref.\cite{RD91}. This corresponds to our choice of the parameters of the $\Lambda(1405)$ resonance. We apply the complex S--wave scattering lengths Eq.(\ref{label2.7}) to the calculation of the energy level displacement of the ground state of kaonic hydrogen, and use their real parts $a^0_0$ and $a^1_0$ for the calculation of the energy level shift of the ground state of kaonic deuterium. \subsection{Low--energy theorem $a^0_0 + 3 a^1_0 = 0$} The numerical values of the real parts of the S--wave scattering lengths $a^{K^-p}_0= (a^0_0 + a^1_0)/2$ and $a^{K^-n}_0 = a^1_0$ of $K^-N$ scattering satisfy the relation \begin{eqnarray}\label{label2.8} a^{K^-p}_0 + a^{K^-n}_0 = \frac{1}{2}\,(a^0_0 + 3\,a^1_0) = 0. \end{eqnarray} This is the low--energy theorem valid in the chiral limit, which can be derived by relating the S--wave scattering lengths of $K^-N$ scattering to the S--wave scattering lengths of $\pi^- N$ scattering.
As has been shown by Weinberg Ref.\cite{SW66}, in the chiral limit the S--wave scattering lengths $a^{\pi^-p}_0 = (2\,a^{1/2}_0 + a^{3/2}_0)/3$ and $a^{\pi^-n}_0 = a^{3/2}_0$ of $\pi^-N$ elastic scattering, where $a^{1/2}_0$ and $a^{3/2}_0$ are the S--wave scattering lengths of $\pi N$ scattering with isospin $I = 1/2$ and $I = 3/2$, obey the constraint \begin{eqnarray}\label{label2.9} a^{\pi^-p}_0 + a^{\pi^-n}_0 = \frac{2}{3}\,(a^{1/2}_0 + 2\,a^{3/2}_0) = 2\,b^0_0 = 0, \end{eqnarray} which is caused by Adler's consistency condition Ref.\cite{SA65}, where $b^0_0$ is the S--wave scattering length of $\pi N$ scattering in the $t$--channel with isospin $I = 0$. For the derivation of the low--energy theorem Eq.(\ref{label2.8}) it is convenient to use the $\mathbb{K}$--matrix approach Refs.\cite{TE88,VV04}. In terms of the matrix elements of the $\mathbb{K}$--matrix in the $t$--channel the sum of the S--wave scattering lengths $a^{\pi^-p}_0 + a^{\pi^-n}_0$ is equal to \begin{eqnarray}\label{label2.10} a^{\pi^-p}_0 + a^{\pi^-n}_0 = \langle \pi^+\pi^-|\mathbb{K}|\bar{p}p + \bar{n}n\rangle = -\,\sqrt{\frac{2}{3}}\,\langle I = 0|\mathbb{K}|I = 0\rangle, \end{eqnarray} where we have taken into account the isospin properties of the hadronic states $|\bar{p}p + \bar{n}n\rangle$ and $|\pi^-\pi^+\rangle$. Setting \begin{eqnarray}\label{label2.11} \langle I = 0|\mathbb{K}|I = 0\rangle = -\,\sqrt{6}\,b^0_0 \end{eqnarray} we arrive at the low--energy theorem Eq.(\ref{label2.9}). In terms of the matrix element of the $\mathbb{K}$--matrix in the $t$--channel the sum of the S--wave scattering lengths $a^{K^-p}_0 + a^{K^-n}_0$ can be defined by \begin{eqnarray}\label{label2.12} a^{K^-p}_0 + a^{K^-n}_0 = \langle K^+K^-|\mathbb{K}|\bar{p}p + \bar{n}n\rangle = \langle I = 0|\mathbb{K}|I = 0\rangle = -\,\sqrt{6}\,b^0_0 = 0. \end{eqnarray} This proves the low--energy theorem Eq.(\ref{label2.8}), which is, of course, valid only at leading order in the chiral expansion.
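Numerically, the inversion behind Eq.(\ref{label2.7}), $\tilde{a}^1_0 = f^{K^-n}_0(0)$ and $\tilde{a}^0_0 = 2 f^{K^-p}_0(0) - \tilde{a}^1_0$, and the theorem $a^0_0 + 3\,a^1_0 = 0$ can be checked in a couple of lines of Python (central values only; a sketch, not an error analysis):

```python
# Central values of the threshold amplitudes, Eqs.(2.4) and (2.6), in fm.
f_Kp = complex(-0.50, 0.35)   # f^{K^-p}_0(0) = (a0 + a1)/2
f_Kn = complex(+0.50, 0.04)   # f^{K^-n}_0(0) = a1

a1 = f_Kn                     # complex I = 1 scattering length, Eq.(2.7)
a0 = 2.0 * f_Kp - a1          # complex I = 0 scattering length, Eq.(2.7)
print(a0, a1)                 # real parts: a0 = -1.50 fm, a1 = +0.50 fm

# Low-energy theorem for the real parts: a0 + 3 a1 = 0
theorem = a0.real + 3.0 * a1.real
```

The central values of Eq.(\ref{label2.7}) thus satisfy the theorem identically, which is built into the model rather than accidental.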
This becomes more obvious if one derives the low--energy theorem Eq.(\ref{label2.8}) using the invariance of strong low--energy interactions under $U$--spin rotations Ref.\cite{HL65}. According to the $U$--spin classification of the components of the pseudoscalar octet \cite{HL65}, the mesons $\pi$ and $K$ transform as components of the doublets $(K^+,\pi^+)$ and $(\pi^-,K^-)$. This is allowed only in the chiral limit. The relation Eq.(\ref{label2.8}) can be rewritten in the form of the sum rule \begin{eqnarray}\label{label2.13} {\cal R}e\tilde{f}^{K^-p}_0(0) + {\cal R}e\tilde{f}^{K^-n}_0(0) = -\,\frac{1}{4\pi}\,\frac{\mu}{m_{K^-}}\, (A + 3 B). \end{eqnarray} Using the numerical values ${\cal R}e\tilde{f}^{K^-p}_0(0) = -\,0.33\,{\rm fm}$, ${\cal R}e\tilde{f}^{K^-n}_0(0) = 0.22\,{\rm fm}$, $A = -\,6.02\,{\rm fm}$ and $B = 2.68\,{\rm fm}$, one can show that the sum rule Eq.(\ref{label2.13}) is fulfilled \begin{eqnarray*} {\cal R}e\tilde{f}^{K^-p}_0(0) + {\cal R}e\tilde{f}^{K^-n}_0(0) = -\,0.11\,{\rm fm}\quad,\quad -\,\frac{1}{4\pi}\,\frac{\mu}{m_{K^-}}\, (A + 3 B) = -\,0.11\,{\rm fm}. \end{eqnarray*} Unfortunately, the $\Sigma(1750)$ resonance does not saturate the sum rule Eq.(\ref{label2.13}). In our model of strong $\bar{K}N$ interactions at threshold the l.h.s. of Eq.(\ref{label2.13}) is defined by quark--hadron interactions, whereas the r.h.s. of Eq.(\ref{label2.13}) is the resonant part, caused by the contribution of the $\Lambda(1405)$ resonance $A$ and the baryon background $B$. This is to some extent a manifestation of quark--hadron duality pointed out by Shifman {\it et al.} within non--perturbative QCD in the form of QCD sum rules \cite{SVZ}. Since the l.h.s.
of Eq.(\ref{label2.13}) can be calculated independently of the assumption of the contribution of the $\Lambda(1405)$ resonance and the baryon background, the sum rule Eq.(\ref{label2.13}) places constraints on the parameters of the $\Lambda(1405)$ resonance and the baryon background calculated at leading order in chiral expansion. \subsection{Energy level displacement of the ground state of kaonic hydrogen} For the S--wave amplitude of $K^-p$ scattering at threshold Eq.(\ref{label2.4}) the energy level displacement of the ground state of kaonic hydrogen is equal to \begin{eqnarray}\label{label2.14} - \,\epsilon^{(0)}_{1s} + i\,\frac{\Gamma^{(0)}_{1s}}{2} = 412.13\,f^{K^-p}_0(0) = 412.13\;\frac{\tilde{a}^0_0 + \tilde{a}^1_0}{2} = (-\,205 \pm 21) + \,i\,(144 \pm 9)\,{\rm eV}.~~~ \end{eqnarray} This result agrees well with the experimental data by the DEAR Collaboration Eq.(\ref{label1.1}). As has been shown in Ref.\cite{IV6}, the energy level shift and width of the ground state of kaonic hydrogen acquire the dispersive corrections, caused by the intermediate $\bar{K}^0n$ state on--mass shell \begin{eqnarray}\label{label2.15} \hspace{-0.3in}\delta^{Disp}_S &=& \frac{\delta \epsilon^{\bar{K}^0n}_{1s}}{\epsilon^{(0)}_{1s}} = \frac{1}{4}\, (a^1_0 - a^0_0)^2\,q^2_0 = (8.6\pm 0.9)\,\%,\nonumber\\ \hspace{-0.3in}\delta^{Disp}_W &=& \frac{\delta \Gamma^{\bar{K}^0n}_{1s}}{\Gamma^{(0)}_{1s}} = \frac{1}{2\pi}\, \frac{(a^1_0 -a^0_0)^2}{{\cal I}m\,f^{K^-p}_0(0)\,a_B}\, {\ell n}\Big[\frac{2 a_B}{|a^0_0 + a^1_0|}\Big] = (11.1\pm 1.2)\,\%, \end{eqnarray} where $q_0 = \sqrt{2\mu(m_{\bar{K}^0} - m_{K^-} + m_n - m_p)} = 58.35\,{\rm MeV}$, calculated for $m_{\bar{K}^0} - m_{K^-} = 3.97\,{\rm MeV}$ and $ m_n - m_p = 1.29\,{\rm MeV}$ Ref.\cite{PDG04}, and $a_B = 1/\alpha \mu = 83.59\,{\rm fm}$ is the Bohr radius.
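Both numbers in Eq.(\ref{label2.15}) follow directly from the values quoted above; the Python sketch below reproduces them to within the rounding of the inputs ($\hbar c = 197.327\,{\rm MeV\,fm}$ is used to express $q_0$ in ${\rm fm}^{-1}$):

```python
import math

# Dispersive corrections of Eq.(2.15) for the kaonic-hydrogen ground state.
hbar_c = 197.327               # MeV*fm
a0, a1 = -1.50, 0.50           # fm, real parts of the I = 0, 1 scattering lengths
im_f = 0.35                    # fm, Im f^{K^-p}_0(0) of Eq.(2.1)
q0 = 58.35 / hbar_c            # fm^-1, K^0bar n threshold momentum
a_B = 83.59                    # fm, Bohr radius of kaonic hydrogen

delta_S = 0.25 * (a1 - a0)**2 * q0**2
delta_W = ((a1 - a0)**2 / (2.0 * math.pi * im_f * a_B)
           * math.log(2.0 * a_B / abs(a0 + a1)))
print(round(100 * delta_S, 1), round(100 * delta_W, 1))  # ~8.7% and ~11.1%
```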
Taking into account the dispersive corrections Eq.(\ref{label2.15}), the energy level displacement of the ground state of kaonic hydrogen is equal to \begin{eqnarray}\label{label2.16} - \,\epsilon^{(\rm th)}_{1s} + i\,\frac{\Gamma^{(\rm th)}_{1s}}{2} = (- 223 \pm 21) + \,i\,(159 \pm 9)\,{\rm eV}. \end{eqnarray} As we have shown above, the S--wave scattering lengths of $K^-p$ and $K^-n$ scattering are calculated at leading order in chiral expansion and satisfy the low--energy theorem Eq.(\ref{label2.8}). This allows one to take into account contributions caused by next--to--leading order corrections in chiral expansion. The most important next--to--leading order correction in chiral expansion is the contribution of the $\sigma^{(I = 1)}_{KN}(0)$--term, given by Ref.\cite{IV5}: \begin{eqnarray}\label{label2.17} \delta \epsilon^{(\sigma)}_{1s} = \frac{\alpha^3 \mu^3}{2\pi m_K F^2_K}\Big[\sigma^{(I = 1)}_{KN}(0) - \frac{m^2_K}{4m_N}i\int d^4x\langle p(\vec{0},\sigma_p) |{\rm T}(J^{4+i5}_{50}(x)J^{4-i5}_{50}(0))| p(\vec{0},\sigma_p)\rangle\Big].\quad \end{eqnarray} Here $J^{4\pm i5}_{50}(x)$ are time--components of the axial--vector hadronic currents $J^{4\pm i5}_{5\mu}(x)$, changing strangeness $|\Delta S| = 1$, $F_K = 113\,{\rm MeV}$ is the PCAC constant of the $K$--meson Ref.\cite{PDG04} and the $\sigma^{(I = 1)}_{KN}(0)$--term is defined by Refs.\cite{ER72}--\cite{JG99} \begin{eqnarray}\label{label2.18} \sigma^{(I = 1)}_{KN}(0) = \frac{m_u + m_s}{4m_N}\, \langle p(\vec{0},\sigma_p)|\bar{u}(0)u(0) + \bar{s}(0)s(0)|p(\vec{0}, \sigma_p)\rangle, \end{eqnarray} where $u(0)$ and $s(0)$ are operators of the interpolating fields of $u$ and $s$ current quarks Ref.\cite{FY83}.
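For orientation, the first term of Eq.(\ref{label2.17}) is easy to evaluate once a value of $\sigma^{(I = 1)}_{KN}(0)$ is assumed; taking for illustration $\sigma^{(I = 1)}_{KN}(0) = 200\,{\rm MeV}$ (the central value of the ChPT estimates quoted in Eq.(\ref{label2.21})), one finds a shift of about $66$--$67\,{\rm eV}$:

```python
import math

# sigma-term piece of Eq.(2.17): alpha^3 mu^3 / (2 pi m_K F_K^2) * sigma.
alpha = 1.0 / 137.036
mu = 323.48                    # MeV, reduced mass of the K^- p pair
m_K = 493.68                   # MeV, kaon mass
F_K = 113.0                    # MeV, kaon PCAC constant
sigma_KN = 200.0               # MeV, assumed value of sigma^{(I=1)}_{KN}(0)

shift_eV = alpha**3 * mu**3 / (2.0 * math.pi * m_K * F_K**2) * sigma_KN * 1.0e6
err_eV = shift_eV * 50.0 / 200.0    # scaling the +-50 MeV spread of sigma
print(round(shift_eV, 1), round(err_eV, 1))
```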
The correction $\delta \epsilon^{(\sigma)}_{1s}$ to the shift of the energy level of the ground state of kaonic hydrogen, caused by the $\sigma^{(I = 1)}_{KN}(0)$--term, is obtained from the S--wave amplitude of $K^-p$ scattering, calculated to next--to--leading order in ChPT expansion at the tree--hadron level Ref.\cite{IV5} and Current Algebra Refs.\cite{SA68,HP73} (see also Ref.\cite{ER72}): \begin{eqnarray}\label{label2.19} \hspace{-0.3in} 4\pi\,\Big(1 + \frac{m_K}{m_N}\Big)\,\tilde{f}^{K^-p}_0(0) &=& \frac{m_K}{F^2_K} - \frac{1}{F^2_K}\,\sigma^{(I = 1)}_{KN}(0)\nonumber\\ \hspace{-0.3in} &+& \frac{m^2_K}{4m_N F^2_K} i\int d^4x \langle p(\vec{0},\sigma_p) |{\rm T}(J^{4+i5}_{50}(x)J^{4-i5}_{50}(0))| p(\vec{0},\sigma_p)\rangle.\quad \end{eqnarray} The contribution of the $\sigma^{(I = 1)}_{KN}(0)$--term, $- \sigma^{(I = 1)}_{KN}(0)/F^2_K$, to the S--wave amplitude of $K^-p$ scattering in Eq.(\ref{label2.19}) has a standard structure Ref.\cite{ER72}. Since the first term $m_K/F^2_K$, calculated to leading order in chiral expansion, has been already taken into account in Ref.\cite{IV3}, the second term, $- \sigma^{(I = 1)}_{KN}(0)/F^2_K$, and the third one define next--to--leading order corrections in chiral expansion to the S--wave amplitude of $K^-p$ scattering at threshold. Taking into account the contribution $\delta \epsilon^{(\sigma)}_{1s}$, the total shift of the energy level of the ground state of kaonic hydrogen is equal to \begin{eqnarray}\label{label2.20} \hspace{-0.3in} \epsilon^{(\rm th)}_{1s} &=& 223 \pm 21 + \frac{\alpha^3 \mu^3}{2\pi m_K F^2_K} \sigma^{(I = 1)}_{KN}(0)\nonumber\\ \hspace{-0.3in}&-& \frac{\alpha^3 \mu^3 m_K}{ 8\pi F^2_K m_N} i\!\int\! d^4x \langle p(\vec{0},\sigma_p) |{\rm T}(J^{4+i5}_{50}(x)J^{4-i5}_{50}(0))| p(\vec{0},\sigma_p)\rangle.
\end{eqnarray} The theoretical estimates of the value of $\sigma^{(I = 1)}_{KN}(0)$, carried out within ChPT with a dimensional regularization of divergent integrals, converge around the value $\sigma^{(I = 1)}_{KN}(0) = (200 \pm 50)\,{\rm MeV}$ Refs.\cite{VB93,BB99}. Hence, the contribution of $\sigma^{(I = 1)}_{KN}(0)$ to the energy level shift amounts to \begin{eqnarray}\label{label2.21} \frac{\alpha^3 \mu^3}{2\pi m_K F^2_K}\,\sigma^{(I = 1)}_{KN}(0) = (67\pm 17)\,{\rm eV}. \end{eqnarray} The total shift of the energy level of the ground state of kaonic hydrogen is given by \begin{eqnarray}\label{label2.22} \epsilon^{(\rm th)}_{1s} = (290 \pm 27) - \frac{\alpha^3 \mu^3 m_K}{ 8\pi F^2_K m_N}\,i\int d^4x\,\langle p(\vec{0},\sigma_p) |{\rm T}(J^{4+i5}_{50}(x)J^{4-i5}_{50}(0))| p(\vec{0},\sigma_p)\rangle. \end{eqnarray} Hence the theoretical analysis of the second term in Eq.(\ref{label2.22}) is required for the correct understanding of the contribution of the $\sigma^{(I = 1)}_{KN}(0)$--term to the energy level shift. Of course, one can solve the inverse problem. Indeed, calculating the contribution of the term \begin{eqnarray}\label{label2.23} \frac{\alpha^3 \mu^3 m_K}{ 8\pi F^2_K m_N}\,i\int d^4x\,\langle p(\vec{0},\sigma_p) |{\rm T}(J^{4+i5}_{50}(x)J^{4-i5}_{50}(0))| p(\vec{0},\sigma_p)\rangle \end{eqnarray} in Eq.(\ref{label2.20}) and using the experimental data on the energy level shift, measured by the DEAR Collaboration Eq.(\ref{label1.1}), one can extract the value of the $\sigma^{(I = 1)}_{KN}(0)$--term. \subsection{Energy level displacement of the ground state of kaonic deuterium} Using the real parts of the S--wave scattering lengths of $K^-N$ scattering Eq.(\ref{label2.7}) we recalculate the S--wave scattering length $a^{K^-d}_0$ of $K^-d$ scattering.
As has been shown in Ref.\cite{IV4}, the S--wave scattering length $a^{K^-d}_0$ is equal to \begin{eqnarray}\label{label2.24} a^{K^-d}_0 = (a^{K^-d}_0)_{\rm EW} + {\cal R}e\,\tilde{f}^{\,K^-d}_0(0), \end{eqnarray} where $(a^{K^-d}_0)_{\rm EW}$ is the Ericson--Weise scattering length of $K^-d$ scattering in the S--wave state Ref.\cite{IV4} \begin{eqnarray}\label{label2.25} (a^{K^-d}_0)_{\rm EW} &=& \frac{1 + m_K/m_N}{1 + m_K/m_d}\,\frac{1}{2}\,(a^0_0 + 3 a^1_0) + \frac{1}{4}\,\Big(1 + \frac{m_K}{m_d}\Big)^{-1}\Big(1 + \frac{m_K}{m_N}\Big)^2\nonumber\\ &&\times\,\Big(4a^1_0(a^0_0 + a^1_0) - (a^0_0 - a^1_0)^2\Big)\, \Big\langle \frac{1}{r_{12}}\Big\rangle, \end{eqnarray} where the term proportional to $(a^0_0 - a^1_0)^2$ describes the contribution of the charge--exchange channel and $r_{12}$ is the distance between the two scatterers $n$ and $p$ Ref.\cite{TE88}. In our approach $\langle 1/r_{12}\rangle$ is defined by Ref.\cite{IV4} \begin{eqnarray}\label{label2.26} \Big\langle \frac{1}{r_{12}}\Big\rangle = \int d^3x\,\Psi^*_d(\vec{r}\,)\,\frac{\displaystyle e^{\textstyle\,- m_K r}}{r}\,\Psi_d(\vec{r}\,) = 0.29\,m_{\pi}, \end{eqnarray} where $\Psi_d(\vec{r}\,)$ is the wave function of the deuteron in the ground state. We would like to recall that Ericson and Weise did not investigate $K^-d$ scattering; they analysed only $\pi^-d$ scattering \cite{TE88}. However, since the structure of the contribution, given by Eq.(\ref{label2.25}), is very similar to that of $\pi^-d$ scattering, we refer to this contribution as the Ericson--Weise scattering length $(a^{K^-d}_0)_{\rm EW}$, which has been derived in Ref.\cite{IV4} at the quantum field theoretic level. The double scattering contribution to the S--wave amplitude of $K^-d$ scattering has been calculated by Kamalov {\it et al.} \cite{EO01}.
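Since $a^0_0 + 3\,a^1_0 = 0$, the first term of Eq.(\ref{label2.25}) vanishes and $(a^{K^-d}_0)_{\rm EW}$ is driven entirely by the second, double-scattering bracket. The Python sketch below reproduces the value quoted in Eq.(\ref{label2.28}); the average nucleon mass $m_N = 938.92\,{\rm MeV}$ and the use of the charged-pion mass in $\langle 1/r_{12}\rangle = 0.29\,m_{\pi}$ are our assumptions for the numerical inputs:

```python
# Ericson-Weise-type K^- d scattering length, Eqs.(2.25) and (2.26).
hbar_c = 197.327                          # MeV*fm
m_K, m_N, m_d = 493.68, 938.92, 1875.61   # MeV; m_N: average nucleon mass (assumption)
m_pi = 139.57                             # MeV; charged-pion mass in <1/r12> (assumption)
a0, a1 = -1.50, 0.50                      # fm, real parts of the scattering lengths

inv_r12 = 0.29 * m_pi / hbar_c            # fm^-1, Eq.(2.26)
first = (1.0 + m_K / m_N) / (1.0 + m_K / m_d) * 0.5 * (a0 + 3.0 * a1)   # vanishes
second = (0.25 / (1.0 + m_K / m_d) * (1.0 + m_K / m_N)**2
          * (4.0 * a1 * (a0 + a1) - (a0 - a1)**2) * inv_r12)
a_Kd = first + second                     # fm
print(round(a_Kd, 2))                     # ~ -0.57 fm
```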
In the notation by Kamalov {\it et al.} the amplitude $\tilde{f}^{\,K^-d}_0(0)_{\rm EW}$ reads \begin{eqnarray}\label{label2.27} \tilde{f}^{\,K^-d}_0(0)_{\rm EW} = \Big(1 + \frac{m_K}{m_d}\Big)^{-1}\Big(1 + \frac{m_K}{m_N}\Big)^2\,(2 a_p a_n - a^2_x)\, \Big\langle \frac{1}{r_{12}}\Big\rangle, \end{eqnarray} where $a_p = (a^0_0 + a^1_0)/2$, $a_n = a^1_0$ and $a_x = (a^1_0 - a^0_0)/2$\,\footnote{In our former version nucl--th/0505078v1 the contribution of the double scattering contained the factor $(a_p a_n - a^2_x)$ instead of $(2 a_p a_n - a^2_x)$, where the term proportional to $a^2_x$ is defined by the charge--exchange channel, \cite{IV4a}. We are grateful to Avraham Gal for calling our attention to this discrepancy. The replacement of $a_pa_n$ by $2a_p a_n$ changes the contribution of the double scattering and, correspondingly, the S--wave scattering length of $K^-d$ scattering by $17\,\%$, which is commensurate with the theoretical uncertainty.}. The term ${\cal R}e\,\tilde{f}^{\,K^-d}_0(0)$ in Eq.(\ref{label2.24}) is defined by the inelastic two--body and three--body channels of the $K^-d$ scattering at threshold. As has been shown in Ref.\cite{IV4}, the contribution of this term is negligible in comparison with the Ericson--Weise scattering length $(a^{K^-d}_0)_{\rm EW}$. Dropping the contribution of this term we get \begin{eqnarray}\label{label2.28} a^{K^-d}_0 = (a^{K^-d}_0)_{\rm EW} = (-\,0.57\pm 0.07)\,{\rm fm}. \end{eqnarray} Since the imaginary part of the S--wave amplitude of $K^-d$ scattering at threshold is not changed, using the results of Ref.\cite{IV4} we obtain \begin{eqnarray}\label{label2.29} f^{K^-d}_0(0) = (-\,0.57 \pm 0.07) + i\,(0.52 \pm 0.08)\,{\rm fm}. \end{eqnarray} For the energy level displacement of the ground state of kaonic deuterium this yields \begin{eqnarray}\label{label2.30} -\,\epsilon_{1s} + i\,\frac{\Gamma_{1s}}{2} = 601.56\,f^{K^-d}_0(0) = (-\,343 \pm 42) + i\,(315 \pm 48)\;{\rm eV}.
\end{eqnarray} The value of the S--wave amplitude of $K^-d$ scattering at threshold as well as the energy level displacement of the ground state of kaonic deuterium agrees well with the results obtained in Ref.\cite{IV4}. \section{Cross sections for low--energy $K^-p$ scattering} \setcounter{equation}{0} In this Section we apply our model of strong $K^-N$ interactions at threshold to the description of the experimental data on the cross sections for the reactions $K^-p \to K^-p$ and $K^-p \to Y\pi$, where $Y\pi = \Sigma^{\mp}\pi^{\pm}$, $\Sigma^0\pi^0$ and $\Lambda^0\pi^0$, as functions of the laboratory momentum $p_{lab}$ of the incident $K^-$ meson. The available experimental data on the cross sections are given for the laboratory momenta $50\,{\rm MeV}/c \le p_{lab} \le 250\,{\rm MeV}/c$ Ref.\cite{WW04}. This corresponds to relative momenta $40\,{\rm MeV}/c \le k \le 200\,{\rm MeV}/c$ of the $K^-p$ pair. We analyse the cross sections for the reactions $K^-p \to K^-p$ and $K^-p \to Y\pi$ only for the laboratory momenta $70\,{\rm MeV}/c \le p_{lab} \le 150\,{\rm MeV}/c$ of the incident $K^-$, where the experimental data are most reliable. For these momenta the S--wave amplitudes of the inelastic reactions $K^-p \to Y\pi$ are described well by the S--wave scattering lengths \begin{eqnarray}\label{label3.1} f(K^-p \to \Sigma^-\pi^+) &=& a_{\Sigma^-\pi^+} = +\,0.43\,{\rm fm}\,,\, f(K^-p \to \Sigma^+\pi^-) = a_{\Sigma^+\pi^-} = +\,0.28\,{\rm fm},\nonumber\\ f(K^-p \to \Sigma^0\pi^0) &=&a_{\Sigma^0\pi^0}= +\,0.36\,{\rm fm}\,,\, f(K^-p \to \Lambda^0\pi^0) = a_{\Lambda^0\pi^0}=-\,0.14\,{\rm fm}.
\end{eqnarray} For laboratory momenta $70\,{\rm MeV}/c \le p_{lab} \le 150\,{\rm MeV}/c$, due to the smallness of the S--wave scattering lengths $a_{Y\pi}$, the cross sections are equal to \begin{eqnarray}\label{label3.2} \sigma_{\Sigma^-\pi^+}(k) &=&4\pi\,\frac{k_{\Sigma^-\pi^+}(k)}{k}\, C^2_0(k)\,a^2_{\Sigma^-\pi^+}\,,\, \sigma_{\Sigma^+\pi^-}(k) = 4\pi\,\frac{k_{\Sigma^+\pi^-}(k)}{k}\, C^2_0(k)\,a^2_{\Sigma^+\pi^-}, \nonumber\\ \sigma_{\Sigma^0\pi^0}(k) &=&4\pi\,\frac{k_{\Sigma^0\pi^0}(k)}{k}\, C^2_0(k)\,a^2_{\Sigma^0\pi^0}\,,\, \sigma_{\Lambda^0\pi^0}(k) = 4\pi\,\frac{k_{\Lambda^0\pi^0}(k)}{k}\, C^2_0(k)\,a^2_{\Lambda^0\pi^0}, \end{eqnarray} where $C^2_0(k)$ is the contribution of the Coulomb interaction of the $K^-p$ pair \begin{eqnarray}\label{label3.3} C^2_0(k) = \frac{2\pi \alpha \mu}{k}\,\frac{1}{\displaystyle 1 - e^{\textstyle -\,2\pi \alpha \mu/k}}. \end{eqnarray} The cross sections for inelastic channels agree well with those obtained in Refs.\cite{RD60,RD62} (see also Ref.\cite{WH62}). The calculation of the cross sections in Eq.(\ref{label3.2}), taking into account the Coulomb interaction in the initial and final state, can be carried out within the potential model approach with strong low--energy interactions described by the effective zero--range potential Ref.\cite{IV7}: \begin{eqnarray}\label{label3.4} V(\vec{r}\,) = - \frac{2\pi}{\mu}\,a_{Y\pi}\,\delta^{(3)}(\vec{r}\,), \end{eqnarray} where $a_{Y\pi}$ is the S--wave scattering length of the inelastic channel under consideration. The S--wave amplitude $f(\vec{k},\vec{k}_{Y\pi})$ of the inelastic channel $K^-p \to Y\pi$ is defined by the spatial integral \begin{eqnarray}\label{label3.5} \hspace{-0.3in}f(\vec{k},\vec{k}_{Y\pi}) &=& -\,\frac{\mu}{2\pi}\int d^3x\, e^{\textstyle -\,i\,\vec{k}_{Y\pi}\cdot \vec{r}} V(\vec{r}\,) \psi^{\,C}_{K^-p}(\vec{k},\vec{r}\,)=\nonumber\\ \hspace{-0.3in}&=& a_{Y\pi}\,e^{\textstyle\,\pi/2ka_B}\,\Gamma(1 - i/ka_B).
\end{eqnarray} Here $\psi^{\,C}_{K^-p}(\vec{k},\vec{r}\,)$ is the exact non--relativistic Coulomb wave function of the relative motion of the $K^-p$ pair in the incoming scattering state with a relative momentum $\vec{k}$. It is given by Ref.\cite{LL65} \begin{eqnarray}\label{label3.6} \psi^C_{K^-p}(\vec{k},\vec{r}\,) = e^{\textstyle\,\pi/2ka_B}\, \Gamma(1 - i/ka_B)\, e^{\textstyle\,i\,\vec{k}\cdot \vec{r}}\,F(i/ka_B,1, ikr - i\,\vec{k}\cdot \vec{r}\,), \end{eqnarray} where $F(i/ka_B,1, ikr - i\,\vec{k}\cdot \vec{r}\,)$ is the confluent hypergeometric function Refs.\cite{LL65,MA72}. The numerical values of the theoretical cross sections for the reactions $K^-p \to \Sigma^-\pi^+$, $K^-p \to \Sigma^+\pi^-$, $K^-p \to \Sigma^0\pi^0$ and $K^-p \to \Lambda^0\pi^0$, calculated for the experimental values of the masses of interacting hadrons \cite{PDG04}, are listed in Table 1, and the experimental data are given in Table 2 Ref.\cite{JC83} and Table 3 Ref.\cite{KT92}. The cross sections as functions of the laboratory momentum of the incident $K^-$ meson are represented in Fig.1. It is seen that the theoretical cross sections agree with the experimental data within two standard deviations. For the S--wave scattering length $a_{\Lambda^0\pi^0} = -\,0.14\,{\rm fm}$ of the inelastic $K^-p \to \Lambda^0\pi^0$ reaction we calculate the S--wave phase shift of $\Lambda\pi$ scattering at threshold of the $\bar{K}N$ pair $\delta^{\Lambda^0\pi^0}_S = a_{\Lambda^0\pi^0}k_{\Lambda^0\pi^0} = -\,10.3^{\circ}$. This agrees well with recent results obtained by Tandean {\it et al.} Ref.\cite{JT01} (see Fig. 3 therein and take the value of the phase shift of $\Lambda\pi$ scattering at threshold of the $\bar{K}N$ pair production). Due to the contribution of the pure Coulomb interaction to the S--wave amplitude of elastic $K^-p$ scattering, only the differential cross section for elastic $K^-p$ scattering is well defined.
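Two of the numbers used in this section are easy to verify with a few lines of Python (PDG masses; scipy assumed). First, the Coulomb enhancement factor of Eq.(\ref{label3.3}) must coincide with the modulus squared of the prefactor $e^{\,\pi/2ka_B}\,\Gamma(1 - i/ka_B)$ of Eqs.(\ref{label3.5}) and (\ref{label3.6}); second, the S--wave phase shift $\delta^{\Lambda^0\pi^0}_S = a_{\Lambda^0\pi^0}k_{\Lambda^0\pi^0}$ reproduces $-10.3^{\circ}$ once $k_{\Lambda^0\pi^0}$ is evaluated from two--body kinematics at the $K^-p$ threshold:

```python
import math
from scipy.special import gamma as cgamma

hbarc = 197.327                        # MeV fm
alpha = 1.0 / 137.036                  # fine structure constant
mK, mp = 493.677, 938.272              # PDG masses, MeV
mLam, mpi0 = 1115.683, 134.977         # PDG masses, MeV
mu = mK * mp / (mK + mp)               # K^- p reduced mass; 1/a_B = alpha * mu

def C0_sq(k):
    """Gamow factor of Eq. (3.3); k is the K^-p relative momentum in MeV/c."""
    x = 2.0 * math.pi * alpha * mu / k
    return x / (1.0 - math.exp(-x))

def C0_sq_from_gamma(k):
    """|exp(pi/(2 k a_B)) Gamma(1 - i/(k a_B))|^2, the Coulomb prefactor of Eq. (3.5)."""
    eta = alpha * mu / k               # 1/(k a_B)
    return math.exp(math.pi * eta) * abs(cgamma(1.0 - 1j * eta)) ** 2

def k_out(sqrt_s, m1, m2):
    """Relative momentum of the two-body final state (m1, m2) at total energy sqrt_s."""
    s = sqrt_s ** 2
    return math.sqrt((s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)) / (2.0 * sqrt_s)

k_Lam_pi = k_out(mK + mp, mLam, mpi0)                  # Lambda pi momentum at threshold
a_Lam_pi = -0.14                                       # fm, from Eq. (3.1)
delta_deg = math.degrees(a_Lam_pi * k_Lam_pi / hbarc)  # S-wave phase shift in degrees
```

For the momenta analysed here, $40\,{\rm MeV}/c \lesssim k \lesssim 200\,{\rm MeV}/c$, the Gamow factor gives only a modest enhancement of the inelastic cross sections.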
For the analysis of experimental data the differential cross section for elastic $K^-p$ scattering has been taken in the form Ref.\cite{WH62} (see also Refs.\cite{RD60,RD62}): \begin{eqnarray}\label{label3.7} \frac{d\sigma^{\,e\ell}_{pK^-}(k)}{d\Omega} = \Big|\frac{\sec^2(\theta/2)}{2k^2 a_B}\, \exp\Big[\frac{2i}{ka_B}\,\sin(\theta/2)\Big] + C^2_0(k)R\,\exp(i\alpha)\Big|^2, \end{eqnarray} where $R$ and $\alpha$ are the experimental fit parameters Ref.\cite{WH62}. For the momenta $100\,{\rm MeV}/c \le p_{lab} \le 175\,{\rm MeV}/c$ the experimental values of the fit parameters, obtained in Ref.\cite{WH62}, are $R = (0.81 \pm 0.06)\,{\rm fm}$ and $\alpha = (78 \pm 31)^{\circ}$. In our model the theoretical values of these parameters are equal to $R = |a^{K^-p}_0| = (0.50 \pm 0.05)\,{\rm fm}$ and $\alpha = 180^{\circ}$. However, the experimental values of the cross section for elastic $K^-p$ scattering, obtained in Ref.\cite{MS65} for momenta $100\,{\rm MeV}/c \le p_{lab} \le 160\,{\rm MeV}/c$, are smaller by a factor of 1.5 than the data by Humphrey and Ross Ref.\cite{WH62}. This implies that the parameter $R$ can be reduced to the value $R \approx 0.67$, which agrees better with our prediction. The cross sections for elastic and inelastic $K^-p$ scattering Eq.(\ref{label3.2}), defined for the momenta $70\,{\rm MeV/c} \le p_{lab} \le 150\,{\rm MeV/c}$, do not contradict the theoretical results obtained by Borasoy {\it et al.} \cite{WW04}. The agreement of the theoretical predictions for the cross sections of elastic and inelastic $K^-p$ scattering is qualitative, within about two standard deviations. However, due to the self--consistency of our calculation of the S--wave amplitudes of $K^-N$ scattering at threshold and the agreement with the experimental data by the DEAR Collaboration, we can argue that the experimental values of the cross sections for elastic and inelastic channels of $K^-p$ scattering as well as for $K^-n$ scattering should be remeasured.
The same recommendation has been given by Borasoy {\it et al.} Ref.\cite{WW04}. \section{Conclusion} \setcounter{equation}{0} We have revisited our phenomenological quantum field theoretic model of strong low--energy $\bar{K}N$ interactions at threshold. The main change concerns the replacement of the contribution of the $\Sigma(1750)$ resonance with quantum numbers $I\,(J^P) = 1\,(\frac{1}{2}^-)$ by the baryon background with the same quantum numbers and $SU(3)$ properties. We recall that according to Gell--Mann's classification of hadrons, the $\Sigma(1750)$ resonance belongs to an $SU(3)$ octet of baryons. Following our previous analysis of strong low--energy $\bar{K}N$ interactions Ref.\cite{IV3} and assuming that the S--wave amplitudes of inelastic channels of $K^-p$ scattering at threshold are fully defined by the contribution of the $\Lambda(1405)$ resonance with quantum numbers $I\,(J^P) = 0\,(\frac{1}{2}^-)$ and the octet of baryon background with $J^P = \frac{1}{2}^-$, we describe the experimental data on ratios of the cross sections for inelastic channels of $K^-p$ scattering Eq.(\ref{label1.10}) within an accuracy of about $6\,\%$. Since the non--resonant parts of the S--wave amplitudes are not changed, we have used them and calculated the complex S--wave scattering lengths $\tilde{a}^0_0$ and $\tilde{a}^1_0$ of $\bar{K}N$ scattering with isospin $I = 0$ and $I = 1$, given by Eq.(\ref{label2.7}). The complex S--wave scattering length $\tilde{a}^0_0$ agrees well with that obtained by Dalitz and Deloff Ref.\cite{RD91}. It is interesting to note that the complex S--wave scattering length $\tilde{a}^0_0$, calculated in our model, does not contradict the result obtained by Akaishi and Yamazaki \cite{YA02} under the assumption that the $\Lambda(1405)$ resonance is the bound $K^-p$ state.
The real parts of the complex S--wave scattering lengths $a^{K^-p}_0 = (a^0_0 + a^1_0)/2$ and $a^{K^-n}_0 = a^1_0$ of $K^-N$ scattering satisfy the low--energy theorem Eq.(\ref{label2.8}). As we have shown above, this low--energy theorem is a $\bar{K}N$ scattering version of the well--known low--energy theorem for the S--wave scattering lengths of $\pi^-N$ scattering by Weinberg \cite{SW66}. The low--energy theorem Eq.(\ref{label2.8}) can be rewritten in the form of the sum rule (\ref{label2.13}), where the l.h.s. is defined by quark--hadron interactions, whereas the r.h.s. is the resonant part caused by the contribution of the $\Lambda(1405)$ resonance $A$ and the baryon background $B$. The sum rule Eq.(\ref{label2.13}) can be regarded as a manifestation of quark--hadron duality pointed out by Shifman {\it et al.} \cite{SVZ} within non--perturbative QCD in the form of QCD sum rules. Since in our model the S--wave scattering lengths are calculated to leading order in the chiral expansion and satisfy the low--energy theorem Eq.(\ref{label2.8}), we can argue that our model is self--consistent to leading order in the chiral expansion. This implies that the inclusion of the contributions of next--to--leading order corrections in the chiral expansion is well--defined and allows one to investigate the contribution of the $\sigma^{I = 1}_{KN}(0)$--term to the S--wave scattering lengths of $K^-p$ and $K^-d$ scattering and the energy level displacements of the ground states of kaonic atoms. The analysis of the contribution of the $\sigma^{I = 1}_{KN}(0)$--term demands the calculation of the quantity defined by Eq.(\ref{label2.23}), $$ \frac{\alpha^3 \mu^3 m_K}{ 8\pi F^2_K m_N}\,i\int d^4x\,\langle p(\vec{0},\sigma_p) |{\rm T}(J^{4+i5}_{50}(x)J^{4-i5}_{50}(0))|p(\vec{0},\sigma_p)\rangle. $$ We are planning to carry out this calculation in our forthcoming publication.
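For orientation, the size of the $\sigma^{(I = 1)}_{KN}(0)$--term contribution quoted in Eq.(\ref{label2.21}) is easy to reproduce numerically. The sketch below uses PDG masses and assumes $F_K \simeq 113\,{\rm MeV}$ for the kaon decay constant (an assumed input; the precise value used in the text may differ slightly at this level of accuracy):

```python
import math

alpha = 1.0 / 137.036
mK, mp = 493.677, 938.272            # PDG masses, MeV
FK = 113.0                           # kaon decay constant, MeV (assumed value)
sigma_KN = 200.0                     # central value of sigma^{(I=1)}_{KN}(0), MeV

mu = mK * mp / (mK + mp)             # reduced mass of kaonic hydrogen, MeV
# leading-order shift of Eq. (2.21): alpha^3 mu^3 sigma / (2 pi mK FK^2), in eV
shift_eV = alpha ** 3 * mu ** 3 * sigma_KN / (2.0 * math.pi * mK * FK ** 2) * 1.0e6
```

With these inputs the shift lands inside the $(67 \pm 17)\,{\rm eV}$ band of Eq.(\ref{label2.21}); the quoted uncertainty is dominated by the $\pm 50\,{\rm MeV}$ spread of $\sigma^{(I = 1)}_{KN}(0)$ itself.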
The energy level displacement of the ground state of kaonic hydrogen Eq.(\ref{label2.14}), calculated for the complex S--wave scattering lengths Eq.(\ref{label2.7}), agrees well with the result obtained in Ref.\cite{IV3} and the experimental data by the DEAR Collaboration. Taking into account the dispersive corrections, caused by the intermediate $\bar{K}^0n$ state on--mass shell Ref.\cite{IV6}, changes the values of the energy level shift and width by about $8\,\%$. We have recalculated the S--wave scattering length $a^{K^-d}_0$ of $K^-d$ scattering for the new values of the S--wave scattering lengths $a^0_0$ and $a^1_0$ obeying the low--energy theorem $a^0_0 + 3\, a^1_0 = 0$. We have shown that the obtained result is not changed with respect to that calculated in Ref.\cite{IV4}. For the confirmation of the self--consistency of our approach we have analysed the cross sections for elastic and inelastic channels of $K^-p$ scattering for laboratory momenta $70\,{\rm MeV}/c \le p_{lab} \le 150\,{\rm MeV}/c$ of the incident $K^-$--meson. We have shown that the cross sections for the reactions $K^-p \to K^-p$ and $K^-p \to Y\pi$, which we have calculated by using the S--wave amplitudes of elastic and inelastic channels of $K^-p$ scattering at threshold, do not contradict the experimental data within two standard deviations. However, the constraints imposed by the recent experimental data of the DEAR Collaboration demand a revision of these data. The energy level displacement of the ground state of kaonic hydrogen has been analysed in Ref.\cite{WW04} and Ref.\cite{UM04}. The result predicted by Borasoy {\it et al.} Ref.\cite{WW04} within the $SU(3)$ chiral effective Lagrangian approach with the relativistic coupled channels technique is equal to \begin{eqnarray}\label{label4.1} - \,\epsilon_{1s} + i\,\frac{\Gamma_{1s}}{2} = 412.13\,f^{K^-p}_0(0) = -\,235 + \,i\,195\,{\rm eV}, \end{eqnarray} where $f^{K^-p}_0(0) = -\,0.57 + \,i\,0.47\,{\rm fm}$.
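The conversion factor $412.13\,{\rm eV/fm}$ in Eq.(\ref{label4.1}), and the analogous $601.56\,{\rm eV/fm}$ for kaonic deuterium in Eq.(\ref{label2.30}), is nothing but $2\alpha^3\mu^2$ of the leading-order Deser-type formula $-\,\epsilon_{1s} + i\,\Gamma_{1s}/2 = 2\alpha^3\mu^2 f_0(0)$, evaluated for the corresponding reduced mass. A minimal numerical check (PDG masses):

```python
import math

hbarc = 197.327                           # MeV fm
alpha = 1.0 / 137.036
mK, mp, md = 493.677, 938.272, 1875.613   # PDG masses, MeV

def deser_coefficient(m_nucleus):
    """eV per fm in -eps_{1s} + i Gamma_{1s}/2 = coef * f_0(0),
    from the leading-order Deser-type formula 2 alpha^3 mu^2 f_0(0)."""
    mu = mK * m_nucleus / (mK + m_nucleus)        # reduced mass, MeV
    return 2.0 * alpha ** 3 * mu ** 2 / hbarc * 1.0e6   # MeV/fm -> eV/fm

coef_hydrogen = deser_coefficient(mp)   # ~412.1 eV/fm, cf. Eq. (4.1)
coef_deuterium = deser_coefficient(md)  # ~601.6 eV/fm, cf. Eq. (2.30)
```

Multiplying these coefficients by the quoted threshold amplitudes reproduces the energy level displacements cited in the text.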
The displacement of Eq.(\ref{label4.1}) has been obtained as {\it a result of an ``optimal'' compromise between the various existing data sets} Ref.\cite{WW04}. The energy level displacement of the ground state of kaonic hydrogen, obtained in Ref.\cite{WW04}, agrees with the experimental data by the DEAR Collaboration within experimental error bars. Our result for the energy level shift agrees well with that obtained in Ref.\cite{WW04}, whereas the agreement for the values of the energy level width is only within an accuracy of about $30\,\%$. The energy level displacement of the ground state of kaonic hydrogen has been calculated by Mei\ss ner {\it et al.} Ref.\cite{UM04} under the assumption of the dominant role of the $\bar{K}^0n$--cusp. Such a hypothesis was proposed by Dalitz and Tuan in 1960 Ref.\cite{RD60} (see also Ref.\cite{RD62}) in the $\mathbb{K}$--matrix approach in the zero--range approximation. Mei\ss ner {\it et al.} have argued that the S--wave amplitude \begin{eqnarray}\label{label4.2} \tilde{f}^{K^-p}_0(0) = \frac{\displaystyle \frac{\tilde{a}^0_0 + \tilde{a}^1_0}{2} + q_0\,\tilde{a}^0_0\tilde{a}^1_0}{\displaystyle 1 + \Big(\frac{\tilde{a}^0_0 + \tilde{a}^1_0}{2}\Big)\,q_0}, \end{eqnarray} obtained by Dalitz and Tuan within the $\mathbb{K}$--matrix approach in the zero--range approximation Refs.\cite{RD60,RD62}, can be derived within a non--relativistic effective Lagrangian approach based on ChPT by Gasser and Leutwyler Ref.\cite{JG83}. For the complex S--wave scattering lengths Eq.(\ref{label2.7}) the energy level displacement of the ground state of kaonic hydrogen, caused by the $\bar{K}^0n$--cusp, is equal to \begin{eqnarray}\label{label4.3} - \,\epsilon_{1s} + i\,\frac{\Gamma_{1s}}{2} = 412.13\,f^{K^-p}_0(0) = -\,325 +\,i\,248\,{\rm eV}, \end{eqnarray} where $f^{K^-p}_0(0) = -\,0.78 + \,i\,0.60\,{\rm fm}$.
The result of Eq.(\ref{label4.3}) agrees well with the experimental data by the KEK Collaboration Ref.\cite{KEK} \begin{eqnarray}\label{label4.4} -\,\epsilon^{(\exp)}_{1s} + i\,\frac{\Gamma^{(\exp)}_{1s}}{2} = (- 323 \pm 64) + i\,(204\pm 115)\,{\rm eV}. \end{eqnarray} Thus, as has been pointed out by Gasser \cite{JG04}: {\it $\ldots$ the theory of $\bar{K}p$ scattering leaves many questions open. More precise data will reveal whether present techniques are able to describe the complicated situation properly.} A new set of measurements by the DEAR/SIDDHARTA Collaborations Ref.\cite{SIDDHARTA}, which is planned for 2006 and intended to reach a precision at the ${\rm eV}$ level for the experimental data on the energy level displacements of the ground states of kaonic hydrogen and kaonic deuterium, should place constraints on theoretical approaches to the description of strong low--energy $\bar{K}N$ interactions at threshold. \section*{Acknowledgement} We are grateful to Torleif Ericson for reading the manuscript and helpful comments on the results obtained in the paper. The remarks and comments by Wolfram Weise are greatly appreciated. \begin{figure}[h] \psfrag{0}{$0$}\psfrag{20}{$20$}\psfrag{40}{$40$} \psfrag{60}{$60$}\psfrag{80}{$80$}\psfrag{100}{$100$} \psfrag{plab}{\hspace{-4mm}$p_{\mathrm{lab}}/\mathrm{MeV}$} \psfrag{s/mb}{$\sigma/\mathrm{mb}$} \psfrag{sigma1}{$\sigma_{\Sigma^-\pi+}$} \psfrag{sigma2}{$\sigma_{\Sigma^+\pi-}$} \psfrag{sigma3}{$\sigma_{\Sigma^0\pi0}$} \psfrag{sigma4}{$\sigma_{\Lambda^0\pi0}$} \centering \includegraphics[scale=0.8]{fig.eps} \caption{Cross--sections for the inelastic reactions $K^-p \to Y\pi$, where $Y\pi = \Sigma^{-}\pi^+,\Sigma^{+}\pi^-,\Sigma^{0}\pi^0$ and $\Lambda^0\pi^0$.
} \label{rho-Verteilung} \end{figure} \begin{table}[h] \begin{tabular}{|l||r|r|r|r|r|r|r|r|r|r|r|} \hline $p_{lab}$& 70 & 80 & 90 & 100 & 110 & 120 & 130 & 140 & 150 & 160 & 170\\[0.5ex] \hline $\sigma_{\Sigma^-\pi^+}$ &83.1&72.3&64.1&57.8&52.8&48.7&45.3&42.5&40.1&38.1&36.3\\ \hline $\sigma_{\Sigma^+\pi^-}$ &36.9&32.1&28.4&25.6&23.4&21.6&20.0&18.8&17.7&16.8&16.0\\ \hline $\sigma_{\Sigma^0\pi^0}$ &60.9&52.9&46.9&42.2&38.5&35.5&33.0&31.0&29.2&27.7&26.4\\ \hline $\sigma_{\Lambda^0\pi^0}$&12.8&11.1& 9.8& 8.8& 8.0& 7.3& 6.8& 6.3& 5.9& 5.6& 5.3\\ \hline \end{tabular} \caption{Theoretical values of cross sections for inelastic channels of $K^-p$ scattering. The laboratory momentum of the incident $K^-$--meson is measured in ${\rm MeV}/c$ and the cross sections in ${\rm m b}$.} \end{table} \begin{table}[h] \begin{tabular}{|l||r|r|r|} \hline $p_{lab}$& $90 - 110$ & $110 - 130$& $130 - 150$ \\[0.5ex] \hline $\sigma_{\Sigma^-\pi^+}$& $68 \pm 8$ & $60 \pm 6$ & $46 \pm 4$\\ \hline $\sigma_{\Sigma^+\pi^-}$& $34 \pm 5$ & $23 \pm 4$ & $26 \pm 3$\\ \hline \end{tabular} \caption{Experimental data on the cross sections for the reactions $K^-p \to \Sigma^-\pi^+$ and $K^-p \to \Sigma^+\pi^-$. The laboratory momentum of the incident $K^-$--meson is measured in ${\rm MeV}/c$ and the cross sections in ${\rm m b}$.} \end{table} \begin{table}[h] \begin{tabular}{|l||r|r|} \hline $p_{lab}$ & 120 & 160 \\[0.5ex] \hline $\sigma_{\Sigma^0\pi^0}$& $ 20 \pm 10$ & $15 \pm 7$\\ \hline $\sigma_{\Lambda^0\pi^0}$& $ 22 \pm 10$ & $15 \pm 3$\\ \hline \end {tabular} \caption{Experimental data on the cross sections for the reactions $K^-p \to \Sigma^0\pi^0$ and $K^-p \to \Lambda^0\pi^0$. The laboratory momentum of the incident $K^-$--meson is measured in ${\rm MeV}/c$ and the cross sections in ${\rm m b}$.} \end{table} \clearpage
\section{Introduction} The F-doped LnFeAsO (Ln = rare-earth elements), which has been abbreviated as the 1111 phase, is the first reported family with the highest critical transition temperature $T_c$ in bulk among the Fe-based superconductors (FeSCs).~\cite{Hosono} However, investigations on the physical properties of 1111-type FeSCs have been restricted remarkably, compared with the 122 phase and 11 phase, due to the difficulties in obtaining sizable single crystals. As we know, it is essential to have high-quality single crystals when carrying out many experiments, including the measurements of electrical transport, inelastic neutron diffraction, angle resolved photoemission spectroscopy, and so on. During the past several years, many efforts have been made to improve the quality and size of the single crystals. NaCl and KCl were first used as the flux, and small single crystals with sizes of 20-70 $\mu$m could be obtained.~\cite{NaCl} Then more attempts, including the high-pressure method and the NaAs-flux method,~\cite{NaAs,HP} were made to further improve the growth processes. Up to now, the two goals, sizable and high-quality crystals, have still not been achieved simultaneously. Recently, single crystals with sizes of several millimeters were reported to be accessible in F-vacant and Na-doped CaFeAsF,~\cite{F-vacant,Na-doped} which is another type of 1111 phase without oxygen,~\cite{SrFeAsF1,SrFeAsF2} possibly due to the decrease of the melting point in this fluorine-based system. Notably, a rather high $T_c$ above 50 K can also be achieved by doping in this fluorine-based 1111 system.~\cite{CaFeAsF-Nd,CaFeAsF-Co,SrFeAsF-Sm,SrFeAsF-La} More important information can be obtained owing to the availability of sizable single crystals. To our knowledge, investigations on single crystals of the parent phase CaFeAsF are still lacking. Here we present the growth, structure, and transport measurements of high-quality CaFeAsF single crystals with sizes of 1-2 mm.
The single crystals were grown by the self-flux method. The structure details were obtained from the refinement of the single-crystal x-ray diffraction data. The structural transition at 121 K was confirmed by the resistivity, magnetoresistance, and magnetic susceptibility measurements. A feature coming from the antiferromagnetic transition was also observed in the resistivity data. \section{Experimental details} High-quality CaFeAsF single crystals were grown using the self-flux method with CaAs as the flux. First, the starting materials Ca granules (purity 99.5\%, Alfa Aesar) and As grains (purity 99.995\%, Alfa Aesar) were mixed in a 1:1 ratio. Then the mixture was sealed in an evacuated quartz tube and heated at 700$^\circ$C for 10 h to get the CaAs precursor. CaAs, FeF$_2$ powder (purity 99\%, Alfa Aesar) and Fe powder (purity 99+\%, Alfa Aesar) were mixed together in the stoichiometric ratio 10:1:1, and the mixture was placed in a crucible. Finally, the crucible was sealed in an evacuated quartz tube. All the weighing and mixing procedures were carried out in a glove box with a protective argon atmosphere. The quartz tube was first heated at 950$^\circ$C for 40 hours, then heated up to 1230$^\circ$C and held there for 20 hours. Finally, it was cooled down to 900$^\circ$C at a rate of 2$^\circ$C/h, followed by a quick cooling down to room temperature. The microstructure was examined by scanning electron microscopy (SEM, Zeiss Supra55). The composition of the single crystals was checked and determined by energy dispersive x-ray spectroscopy (EDS) measurements on an Oxford Instruments spectrometer. The crystals were first checked using a DX-2700 type powder x-ray diffractometer. The detailed structure was characterized and analyzed by single-crystal x-ray diffraction measurements on a Bruker D8 Focus diffractometer equipped with graphite-monochromatized Mo $K_\alpha$ radiation.
The magnetic susceptibility measurement was carried out on the magnetic property measurement system (Quantum Design, MPMS 3). The electrical resistance and magnetoresistance (MR) were measured using a four-probe technique on the physical property measurement system (Quantum Design, PPMS) with magnetic fields up to 9 T. For the MR measurements, the magnetic field was oriented parallel to the c axis of the samples and the data were measured for both positive and negative field orientations to eliminate the effect of the Hall signals. \section{Results and discussions} A typical dimension of the single crystals is 1.2$\times$1.0$\times$0.1 mm$^3$. The morphology was examined by scanning electron microscopy. An SEM picture of a CaFeAsF single crystal can be seen in Fig. 1(a), which shows the flat surface and some terrace-like features. An enlarged view of this picture can be seen in Fig. 1(b). The composition of the crystals was characterized by energy-dispersive x-ray spectroscopy (EDS) measurements. We measured the EDS at different positions of the sample. Here we show a typical result in Fig. 1(c) and Table I, which reveals that the ratio Ca:Fe:As is close to the stoichiometric one. The content of the light element F is difficult to determine precisely based on EDS measurements. The structure of the crystals was first checked using a powder x-ray diffractometer, where the x-ray was incident on the ab-plane of the crystal. The diffraction pattern is shown in Fig. 2. All the diffraction peaks can be indexed to the tetragonal ZrCuSiAs-type structure (see the inset of Fig. 2). Only sharp peaks along the (00l) orientation can be observed, suggesting a high c-axis orientation. The full width at half maximum (FWHM) of the diffraction peaks is only about 0.10$^\circ$ after deducting the $K_{\alpha2}$ contribution, indicating a rather fine crystalline quality. The c-axis lattice constant was obtained to be 8.584 {\AA} by analyzing the diffraction data.
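The extraction of the c-axis constant from the (00l) series is a direct application of Bragg's law, $\lambda = 2d\sin\theta$ with $d = c/l$. A minimal sketch (Python; the Cu $K_{\alpha 1}$ wavelength $\lambda = 1.5406$ {\AA} is an assumption about the powder diffractometer, not a value stated in the text):

```python
import math

wavelength = 1.5406  # angstrom, Cu K-alpha1 (assumed for the powder diffractometer)
c_axis = 8.584       # angstrom, the refined c-axis lattice constant

def two_theta_00l(l, c=c_axis, lam=wavelength):
    """Bragg position (2 theta, degrees) of the (0 0 l) reflection, d = c/l."""
    return 2.0 * math.degrees(math.asin(l * lam / (2.0 * c)))

def c_from_peak(l, two_theta, lam=wavelength):
    """Invert Bragg's law: recover the c-axis constant from one (0 0 l) peak."""
    return l * lam / (2.0 * math.sin(math.radians(two_theta / 2.0)))

# predicted peak positions of the (00l) series for this c-axis constant
peaks = {l: two_theta_00l(l) for l in (1, 2, 3, 4)}
```

In practice one fits several (00l) peaks and averages the recovered $c$ values; the round trip above is exact by construction.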
\begin{figure} \includegraphics[width=9.5 cm]{Fig1.pdf} \caption {(color online) (a) An SEM picture of a CaFeAsF crystal with the lateral size larger than 1 mm. (b) The enlarged view of the SEM picture. (c) The EDS microanalysis spectrum taken on one crystal.} \label{fig1} \end{figure} \begin{table} \centering \caption{Compositions of the crystal characterized by EDS measurements.} \begin{tabular} {ccccccc}\hline \hline Element & Weight (\%) & Atomic (\%) \\ \hline F & 14.90 & 34.52 \\ Ca & 19.58 & 21.50 \\ Fe & 27.31 & 21.52 \\ As & 38.22 & 22.46 \\ \hline \hline \end{tabular} \label{tab.1} \end{table} \begin{figure} \includegraphics[width=8.8cm]{Fig2.pdf} \caption {(color online) X-ray diffraction pattern measured on the CaFeAsF single crystal with the x-ray incident on the $ab$-plane. The inset is the schematic of the crystal structure of CaFeAsF.} \label{fig2} \end{figure} \begin{table} \centering \caption{Parameters for the data collection and structure refinement of CaFeAsF.} \begin{tabular}{ll} \hline \hline Theta range for data collection & 4.749 to 27.508$^\circ$ \\ Index ranges & -4$\leq$h$\leq$5 \\ & -5$\leq$k$\leq$5 \\ & -11$\leq$l$\leq$11 \\ Reflections collected & 2035 \\ Refinement method & Full-matrix least-squares on F$^2$ \\ Refinement program & SHELXL-2014 (Sheldrick, 2014) \\ Data /restraints /parameters & 114 /0 /12 \\ Goodness-of-fit on F$^2$ & 1.224 \\ Final R indices & R$_1$ = 0.0139\\ & wR$_2$ = 0.0318 \\ Weighting scheme & w=1/[$\sigma^2$(F$_o$$^2$)+0.6368P] \\ & where P=(F$_o$$^2$+2F$_c$$^2$)/3 \\ Extinction coefficient & 0.014(3) \\ Largest diff. peak and hole & 0.655 and -0.364 e{\AA}$^{-3}$ \\ R.M.S. 
deviation from mean & 0.122 e{\AA}$^{-3}$ \\ \hline \hline \end{tabular} \label{tab.2} \end{table} \begin{table} \centering \caption{Refined lattice constants for the CaFeAsF single crystal.} \begin{tabular}{ll} \hline \hline Chemical formula & CaFeAsF \\ Formula weight & 189.85 g/mol \\ Temperature & 296(2) K \\ Wavelength & 0.71073 {\AA} \\ Crystal system & tetragonal \\ Space group & P4/nmm (No. 129) \\ Z & 2 \\ Unit cell dimensions & a = 3.8774(4) {\AA}, $\alpha$ = 90$^\circ$ \\ & b = 3.8774(4) {\AA}, $\beta$ = 90$^\circ$ \\ & c = 8.5855(10) {\AA}, $\gamma$ = 90$^\circ$ \\ Volume & 129.076(4) {\AA}$^3$\\ Bond angle ($\delta_{As-Fe-As}$) & 107.82(6)$^\circ$ $\times$2 \\ & 110.30(4)$^\circ$ $\times$4 \\ Anion height & $h_{As}$ = 1.413 {\AA} \\ Density (calculated) & 4.885 g/cm$^3$ \\ Absorption coefficient & 20.223 mm$^{-1}$ \\ F(000) & 176 \\ \hline \hline \end{tabular} \label{tab.3} \end{table} \begin{table} \centering \caption{Atomic coordinates and equivalent isotropic atomic displacement parameters (\AA$^2$) for CaFeAsF.} \begin{tabular}{lllll} \hline \hline Atom & x & y & z & U(eq) \\ \hline As & 1/4 & 1/4 & 0.16461(8) & 0.0067(3) \\ Fe & 3/4 & 1/4 & 0 & 0.0069(3) \\ Ca & 3/4 & 3/4 & 0.34801(16) & 0.0080(4) \\ F & 3/4 & 1/4 & 1/2 & 0.0092(8)\\ \hline \hline \end{tabular} \label{tab.4} \end{table} We used high-resolution single-crystal x-ray diffraction to study the structural details of our sample. The diffraction data were collected at room temperature by the $\omega$- and $\varphi$-scan method. The crystal structure was solved by SHELXS-2014 and refined by SHELXL-2014.~\cite{SHELXS-2014} The parameters for the data collection and structure refinement are listed in Table II.
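The anion height and the two inequivalent As-Fe-As bond angles listed in Table III are fixed by the refined coordinates of Table IV: with Fe at $z = 0$ and As at $z = 0.16461$, the anion height is $h_{As} = z_{As}\,c$, and the angles follow from simple vector geometry of the Fe coordination. A quick cross-check in Python:

```python
import math

# refined values from Tables III and IV
a, c = 3.8774, 8.5855   # angstrom
z_As = 0.16461          # fractional z coordinate of As; Fe sits at z = 0

h_As = z_As * c         # anion height above the Fe plane

# Fe at (3/4, 1/4, 0); its four nearest As neighbors are displaced by a/2
# along x or y, two above (+h_As) and two below (-h_As) the Fe plane
v_up_x  = (-a / 2.0, 0.0,  h_As)
v_up_mx = ( a / 2.0, 0.0,  h_As)
v_dn_y  = ( 0.0, a / 2.0, -h_As)

def angle(u, v):
    """Angle between two vectors, in degrees."""
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    return math.degrees(math.acos(dot / (nu * nv)))

angle_2x = angle(v_up_x, v_up_mx)  # the twofold As-Fe-As angle
angle_4x = angle(v_up_x, v_dn_y)   # the fourfold As-Fe-As angle
```

Both angles and the anion height reproduce the tabulated values, confirming the internal consistency of the refinement.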
The values of R$_1$ and wR$_2$ are much smaller than the previously reported polycrystalline results,~\cite{CaFeAsF-Co} and also small compared to those of the Na-doped single crystalline system,~\cite{Na-doped} indicating the high-quality of our sample and the reliability of our refinements. As shown in Table III, the final cell constants are determined to be $a$ = $b$ = 3.8774(4) {\AA}, c = 8.5855(10) {\AA}. It is clear that the c-axis lattice constant is very close to that obtained from the data in Fig. 2. In addition, the a- and c-axis lattice constants determined from our experiment are consistent with those of the polycrystalline samples reported previously.~\cite{CaFeAsF-Co,CaFeAsF-Nd} Compared to the Na-doped single crystalline samples, the a-axis lattice constant is similar while the c-axis constant is clearly smaller.~\cite{Na-doped} The anion (As) height relative to the Fe layer is a bit larger than the optimal value (1.38 {\AA}) for the highest $T_c$ in FeSCs.~\cite{As-height} The atomic coordinates from the refinement are shown in Table IV, which also confirm the structure obtained earlier on the basis of powder data, with a difference of about 0.16\% for the c-axis position of the As element.~\cite{CaFeAsF-Co} The bond angles $\delta_{As-Fe-As}$ deviate a bit from the optimal value of about 109.47$^\circ$. \begin{figure} \includegraphics[width=8cm]{Fig3.pdf} \caption {(color online) Temperature dependence of the in-plane resistivity (a), magnetoresistance (b), and the magnetic susceptibility (c). The field of 1 T was applied along the c-axis of the crystal during the magnetic susceptibility measurement. The MR data were collected under the field of 9 T. The dashed lines are guides for the eyes.} \label{fig3} \end{figure} The resistivity, MR, and magnetic susceptibility change their variation tendency at the same temperature, 121 K, on the temperature-dependent curves, as revealed in Figs. 3(a), (b), and (c).
This seems to be a common feature in most of the FeSCs, associated with the structural and the spin-density-wave (SDW)-type antiferromagnetic transitions. This transition temperature is a little higher than the polycrystalline results (118-120 K).~\cite{CaFeAsF-Nd,CaFeAsF-Co} The temperature dependence of the resistivity is shown in Fig. 3(a). Above 121 K, the resistivity increases almost linearly with the decrease of temperature. We note that reports of this behavior conflict among different polycrystalline samples.~\cite{CaFeAsF-Co,CaFeAsF-Nd,SrFeAsF1,SrFeAsF2} Only one result reported on SrFeAsF by Tegel et al. shows a similar tendency to our data.~\cite{SrFeAsF2} We argue that the data from single-crystal samples reveal the intrinsic properties since the scattering processes are not affected by the grain boundaries. Moreover, the transition at 121 K is sharper than the results from polycrystalline samples. Below that temperature, the resistivity decreases with cooling and a clear kink can be observed at about 110 K, as indicated by the green dashed line. These two characteristic temperatures, separated by an interval of 11 K, are reminiscent of the reported $T_{str}$ = 150 K and $T_N$ = 138 K in another 1111 phase, LnFeAsO (Ln = La, Ce),~\cite{LaFeAsO,CeFeAsO} where $T_{str}$ and $T_N$ are the transition temperatures from the tetragonal to the orthorhombic structure and that from the paramagnetic to the SDW-type antiferromagnetic phase, respectively. We note that such two distinct transition temperatures have also been detected from the resistivity data in the Co-doped BaFe$_2$As$_2$ system.~\cite{Fisher} So it is very likely that these two transition temperatures are $T_{str}$ = 121 K and $T_N$ = 110 K for the present CaFeAsF system. In this paper, MR is expressed as $\Delta\rho/\rho_0=[\rho(9 \mathrm{T})-\rho_0]/\rho_0$, where $\rho$(9T) and $\rho_0$ are the resistivity under the field 9 T and zero field, respectively. In Fig.
3(b), we show the temperature dependence of the MR. The magnitude of the MR decreases monotonically with increasing temperature up to $T_{str}$ and vanishes above this temperature. These observations suggest that the MR in this system is associated with the magnetic and electronic structures, which are strongly affected by the structural and SDW-type antiferromagnetic transitions. The transition in the $M-T$ curve shows the features of an antiferromagnetic transition, as shown in Fig. 3(c). In the high-temperature non-magnetic normal state, a linear temperature dependence can be observed. This is a non-Curie-Weiss-like paramagnetic behavior and cannot be understood within a simple mean-field picture. This behavior should be very important for understanding the mechanism of high-$T_c$ superconductivity because it was also observed in undoped and highly underdoped cuprates.~\cite{cuprates} In the pnictide compounds, this feature was interpreted in terms of antiferromagnetic fluctuations with local SDW correlations.~\cite{TXiang} \section{Conclusions} In summary, high-quality and sizable single crystals of CaFeAsF were grown successfully by the self-flux method. Single-crystal x-ray diffraction measurements were carried out and the structural details were refined based on the data. The resistivity, magnetoresistance, and magnetic susceptibility show clearly different behaviors below and above 121 K. The critical temperatures of the structural and antiferromagnetic transitions were determined to be $T_{str}$ = 121 K and $T_N$ = 110 K, respectively. Our results provide a platform to study the intrinsic properties of the 1111 phase of FeSCs. \begin{acknowledgments} This work is supported by the National Natural Science Foundation of China (No. 11204338), the ``Strategic Priority Research Program (B)" of the Chinese Academy of Sciences (No. XDB04040300, XDB04040200 and XDB04030000) and Youth Innovation Promotion Association of the Chinese Academy of Sciences (No.
2015187). \end{acknowledgments}
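The magnetoresistance used in the analysis, $\Delta\rho/\rho_0=[\rho(9\,\mathrm{T})-\rho_0]/\rho_0$, is a simple ratio; a minimal sketch of its evaluation, with hypothetical resistivity values chosen purely for illustration, is:

```python
def magnetoresistance(rho_field, rho_zero):
    """MR = [rho(9 T) - rho(0)] / rho(0), as defined in the text."""
    return (rho_field - rho_zero) / rho_zero

# Hypothetical resistivity values (arbitrary units); real values would come
# from the field-dependent transport measurements described in the paper.
rho_0 = 1.00   # zero-field resistivity
rho_9T = 1.05  # resistivity under a 9 T field
print(magnetoresistance(rho_9T, rho_0))  # ~0.05, i.e. about 5 per cent MR
```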
\section{Introduction} \textbf{COVID-19 2020.} The novel coronavirus pandemic (COVID-19) is an ongoing global health event. In this rapidly unfolding crisis, people are unsure about what is happening and what they should do. They seek to make sense of their uncertainty \cite{sensemaking,maitlis2010sensemaking}. To do so, many people turn to new media platforms that provide support and real-time information that cannot be found elsewhere \cite{stephens2009if,choi2009consumer}. Before the pandemic, more than 60\% of Singaporeans were consuming news via social media, and we expect this figure to rise as more people stay home \cite{newman2019reuters,nielsen2020}. Public health authorities need to satisfy the public's need for information, prevent risk exaggeration, and encourage desirable behaviors like social distancing and hand-washing. Understanding online public opinion is one way to understand the efficacy of public health messaging and improve future communications. On February 15, the World Health Organisation Director-General said ``we're not just fighting an epidemic, we're fighting an infodemic". We study a Singapore-based Telegram group chat with more than 10,000 participants that was created to discuss COVID-19, focusing on the first six weeks of the group's existence, from 27 January to 8 March 2020. These weeks represent the ``first wave" of the pandemic in Singapore, during which the country saw the number of confirmed cases grow from 4 to 153, mostly imported from China. For two of those weeks, Singapore had the largest number of confirmed cases in the world outside China. During this period, the Ministry of Health raised the DORSCON (Disease Outbreak Response System Condition) level from yellow to orange. The weekly number of cases and key events in Singapore and worldwide are listed in Figure \ref{fig:cases}.
Specifically, we ask the following research questions: \textbf{RQ1:} How does group opinion change over time?\newline \textbf{RQ2:} How prevalent is government-identified misinformation in the group? \textbf{Telegram public groups.} Telegram is an instant messaging service with more than 200 million monthly active users. Telegram facilitates the building of groups of up to 200,000 members. Messages in the group are only visible to people who search for or join the group, with no limits on forwarding. Telegram positions itself as a platform that protects user privacy and free expression \cite{telegramopennetwork}. Users can use the platform without revealing personally identifying information to other users. The combination of large group sizes, partial visibility and anonymity plausibly facilitates the spread of misinformation. \textbf{Misinformation and Government Corrections.} The Singapore government has taken steps to combat misinformation about COVID-19. The official source for updates about the local situation is a Ministry of Health web page. Prominently displayed at the top of the page are recent clarifications on misinformation. The Government has also used the Protection from Online Falsehoods and Manipulation Act (POFMA) to correct claims about COVID-19. \section{Related Work} \textbf{Group Chats.} Researchers have analysed the general patterns of interaction in group chats \citep{caetano_analyzing_2018,garimella2018whatapp,qiu2016lifecycle} and the use of group chats for specific purposes \cite{bouhnik2014whatsapp,wani2013efficacy}. Previous papers which study misinformation and group chats mostly do so in the context of political elections \cite{resende2019analyzing,machado2019study,10.1145/3308558.3313688}. Within the crisis informatics literature, others have studied the role of group chats in events like floods, war and kidnap responses \cite{bhuvana2019facebook,malka2015fighting,simon2016kidnapping}. 
\textbf{Social media and pandemics.} \citep{liu2011organizations,wilson2018new} analyse how public health authorities use social media and other infocomm technologies for diagnostic efforts, coordination, and risk communication. Other studies focus on the public instead of health authorities: \cite{chew2010pandemics,szomszor2011twitter} study the types and source of content shared during the 2001 H1N1 outbreak; \cite{strekalova2017health} analyses audience engagement with posts from the Centers for Disease Control and Prevention (CDC) Facebook channel during the Ebola outbreak; \cite{sharma2017zika} find that misleading posts are more popular than posts containing accurate information during the Zika outbreak in the United States. \textbf{Contribution.} In the overall literature on the social science of new media, most studies have focused on public platforms like Facebook and Twitter, as opposed to less-visible, chat-based platforms like Telegram. Studies that analyse group chats do not focus on disease outbreaks, and studies that focus on disease outbreaks do not analyse group chats. As far as we know, our paper is the first to analyse group chats during a disease outbreak. \section{Methodology} We describe how we gathered data from a public Telegram group and the methods used to process and analyse the data. \subsection{Data collection} Several Singapore-based public Telegram groups emerged after Singapore's first confirmed case of the coronavirus on 23 January 2020. We found the groups by searching "Singapore Coronavirus Telegram" on Google and Telegram. Some groups were: \textit{SG Fight COVID-19}\footnote{\url{http://t.me/sgVirus}}, \textit{Wuhan COVID-19 Commentary}\footnote{\url{http://t.me/WuhanCOVID}} and \textit{SG Fight Coronavirus}\footnote{\url{http://t.me/sgFight}}. In this paper, we focus on \textit{SG Fight COVID-19} because it contains the most discussion and members. In contrast, the other groups are characterised by one-to-many news broadcasts. 
From 19 January to 8 March 2020, we retrieved messages with the Python Telethon API. Our method is similar to \cite{pushshift}. In total, we collected 48,050 messages. Of these, 10,765 were system-generated (e.g. automated messages about people joining and leaving the group, group name changes), leaving us with 37,285 mixed-media messages to analyse. The breakdown of message types is presented in Table \ref{tab:stats}. \begin{table} \centering \begin{tabular}{ |p{3cm}|p{1cm}| p{1.5cm}|p{1cm}|} \hline \#Total Users & 10,765 & \#Video & 276 \\ \hline \#Total Messages & 48,050 & \#Audio & 36 \\ \hline \#Text Messages & 26,153 & \#Url & 8830\\ \hline \#Images & 1928 & \#Files & 62 \\ \hline \end{tabular} \caption{Overview of \textit{sgVirus} Telegram Group} \label{tab:stats} \end{table} \subsection{Data Post-Processing} Our post-processing steps include: conversion of timestamps from UTC to GMT+8 to represent Singapore's timezone; removal of stop-words using the NLTK stopword module; filtering out urls and names of news agencies (e.g. CNA, SCMP) often referred to in text messages; and tokenizing text messages with NLTK's Tweet Tokenizer module \cite{10.3115/1118108.1118117}. \subsection{Data Limitations} Participants in our group chat may be more digitally literate than users of other chat platforms. While reliable demographic data about Singaporean Telegram users is not available, our personal experience is that Telegram is mostly used by people below 65 years old. People above 65 in Singapore use other messaging channels like WhatsApp. \cite{oldpeople} found that Facebook users over 65 are most likely to share misinformation. \section{How does group opinion change over time?} \label{sec:analysisSec} We analyse group opinion over time with four indicators: participation, sentiment, topics, and psychological features.
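The post-processing steps described above can be sketched as follows. This is a simplified stand-in: it uses a small hard-coded stopword list and a regular-expression tokenizer in place of the NLTK stopword and Tweet Tokenizer modules used in the actual pipeline.

```python
import re
from datetime import datetime, timedelta, timezone

# Tiny illustrative stopword list; the paper uses NLTK's full English list.
STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in"}
URL_RE = re.compile(r"https?://\S+")

def to_sgt(ts_utc):
    """Convert a timezone-aware UTC timestamp to Singapore time (GMT+8)."""
    return ts_utc.astimezone(timezone(timedelta(hours=8)))

def preprocess(text):
    """Drop urls, lowercase, tokenize, and remove stopwords."""
    text = URL_RE.sub("", text.lower())
    tokens = re.findall(r"[a-z0-9']+", text)
    return [t for t in tokens if t not in STOPWORDS]

msg = "The DORSCON level is orange https://example.com"
print(preprocess(msg))  # ['dorscon', 'level', 'orange']
```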
\subsection{Participation} \textbf{Active Participants.} For each week, we look at the number of users who sent at least one message, and the total number of messages. We exclude bots that forwarded messages from news websites. Our results are shown in Figure \ref{fig:messages}. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{images/messagesovertimecombined.jpg} \caption{Group Participation and Messages Over Time} \label{fig:messages} \end{figure} Most notable is the peak of participation in week 2. During this week, the Ministry of Health raised the disease emergency level from yellow to orange. While the news was officially announced at 5.20pm on Friday February 7, messages discussing the announcement had been circulating in the group since at least 10.30am. That weekend, several supermarkets temporarily ran out of essential items and political leaders asked the public not to panic buy. After peaking in week 2, active participation fell over time. Messages are most frequent between 12-1pm and 8-10pm. \textbf{Lifespan of participants.} We take the ten most active participants for each week and search for their activity in other weeks. Of the 50 most active participants, 60\% were active for one week only, while 40\% were active for two weeks. \textbf{Discussion.} We observe that the increase in group activity corresponds with a government announcement, rather than an unusual spike in the number or rate of confirmed cases. We believe this illustrates the importance of unified and coherent public health communication. The leaked information about DORSCON orange prior to the official announcement likely increased uncertainty, causing people to turn to the group chat for information and support. Even true information can cause alarm, especially if it is shared in an untimely and haphazard manner. The Singapore government has recognised a need to strengthen internal processes and no similar leaks have occurred since \cite{leaks}.
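The weekly participation counts can be reproduced with a few lines. The records below are hypothetical (user, date) pairs for illustration; in the actual analysis they come from the Telethon export, with bot accounts excluded upstream.

```python
from collections import defaultdict
from datetime import date

# Hypothetical (user, date) message records, for illustration only.
messages = [
    ("alice", date(2020, 2, 3)), ("bob", date(2020, 2, 4)),
    ("alice", date(2020, 2, 10)), ("carol", date(2020, 2, 4)),
]

def weekly_activity(messages, start=date(2020, 1, 27)):
    """Count distinct active users and total messages per week,
    indexing weeks from the group's creation date."""
    weeks = defaultdict(lambda: {"users": set(), "messages": 0})
    for user, day in messages:
        w = (day - start).days // 7  # week index since group creation
        weeks[w]["users"].add(user)
        weeks[w]["messages"] += 1
    return {w: (len(v["users"]), v["messages"]) for w, v in weeks.items()}

print(weekly_activity(messages))  # {1: (3, 3), 2: (1, 1)}
```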
We postulate that activity rises between 12-1pm and 8-10pm because people use their devices after lunch and dinner. Furthermore, the Ministry of Health usually releases daily updates at 8pm, triggering a flurry of discussion. The decrease in group activity and the short lifespan of participation suggest that the group did not meet users' needs for information. Users may have stopped relying on the Telegram group and turned to other sources. Between January-April 2020, the number of subscribers to the official government WhatsApp channel grew from 7,000 to more than 900,000 \cite{sggovwhatsapp}. \subsection{Sentiment} To determine sentiment in the group, we perform phrase-level analysis before combining the results to obtain overall sentence-level sentiment \citep{Rezapour2018UsingLC}. This method was adopted because Telegram texts, like Tweets, are short and conversational, so traditional sentiment analysis methods for articles do not perform well. We first tokenized words with the NLTK TweetTokenizer \cite{10.3115/1118108.1118117}, which handles short expressions and strings, then identified Parts-Of-Speech (POS) tags of tokenized words with TweetNLP \cite{posonline}. Contextual sentiment was determined using the MPQA lexicon \cite{MPQA} by matching the word and the POS tag to the appropriate entry to retrieve the corresponding contextual sentiment. The overall sentiment of the text message was determined by aggregating all the contextual sentiments. Figure \ref{fig:sentiment} shows the change in sentiment over time. We observe that there is generally more negative sentiment than positive sentiment. From Week 2 (beginning on 3 Feb) to Week 3 (beginning on 10 Feb), positive and especially negative sentiment increased. The rise in negative sentiment corresponds with the DORSCON orange weekend.
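The lookup-and-aggregate scheme just described can be illustrated with a toy stand-in lexicon. The real pipeline uses the TweetNLP tagger and the full MPQA lexicon, and handles contextual polarity; this sketch, with a hypothetical four-entry lexicon, only shows the aggregation step.

```python
# Toy stand-in for the MPQA lexicon: (word, pos) -> polarity.
# These entries and tags are illustrative, not taken from MPQA itself.
TOY_LEXICON = {
    ("good", "adj"): +1, ("great", "adj"): +1,
    ("panic", "noun"): -1, ("worried", "adj"): -1,
}

def sentence_sentiment(tagged_tokens):
    """Sum per-token polarities and map the total to a sentence label."""
    score = sum(TOY_LEXICON.get(tok, 0) for tok in tagged_tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

tagged = [("no", "det"), ("need", "verb"), ("to", "prep"), ("panic", "noun")]
print(sentence_sentiment(tagged))  # negative
```

Note that this naive sum misses negation ("no need to panic" is scored negative); the contextual-sentiment lookup in the published method is meant to address exactly such cases.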
\begin{figure}[h] \centering \includegraphics[width=0.5\textwidth,height=0.3\textwidth]{images/sentiment.png} \caption{Sentiment Dimension (Percentage) over Time} \label{fig:sentiment} \end{figure} \subsection{Topics} We use Mallet's Latent Dirichlet Allocation (LDA) module \cite{McCallumMALLET} to identify the top five word clusters each week. Clusters are chosen based on coherence scores and manual assessment. Where some clusters are too similar, we grouped them as one. Table \ref{tab:topics} shows the word clusters, and our interpretation of the topic associated with each cluster. Readers can cross reference the topics with the timeline of events in Figure \ref{fig:cases}. \begin{table} \centering \begin{tabular}{|p{1cm}|p{6.5cm}| } \hline \textbf{Start Date} & \textbf{Topics \& Keywords} \\ \hline 27 Jan & \textit{New Outbreak in China}- new, health, chinese, outbreak \newline \textit{Number of confirmed cases}- cases, confirmed, world, chinese \newline \textit{Flight and Travel restrictions} - outbreak, flight, travel, restrictions \newline \textit{Wearing masks}- masks, wear, don, think, need \\ \hline 3 Feb & \textit{Chinese outbreak}- outbreak, orange, travel, home, chinese, source, masks \newline \textit{Diamond Princess Cruise Ship in Japan}- cruise, ship, japan, ncov, fake \newline 3- new, confirmed, cases, infected, quarantine \newline \textit{Don't Panic and wear masks}- masks, don, need, buy, dont wear, panic \newline \textit{Outbreak in Hong Kong}- world, outbreak, chinese, hong kong \\ \hline 10 Feb & \textit{Wearing masks}- mask dont need wear \newline \textit{Outbreak in Hong Kong}- outbreak, world, hong kong \newline \textit{Diamond Princess cruise ship cases spike} new cases, cruise ship \newline \textit{WHO releases name of virus}- covid 19, cases, health, confirmed \newline \textit{Sourcing for masks}- masks, buy home, need source \\ \hline 17 Feb & \textit{Grace Assembly of God church cluster}- church case, test \newline \textit{Encouraging 
the wearing of masks}- wear mask, dont spread, really true \newline \textit{South Korea and Italy spike in cases}- south korea, infected, italy death \newline \textit{Measures for work}- covid 19, going work safe \newline \textit{Keeping track of confirmed and discharged cases count}- confirmed case, discharged hospital, infection \\ \hline 24 Feb & \textit{Vaccines}- flu, make vaccine, information \newline \textit{Italy cases spike}- italy death, spread \newline \textit{South Korea and Iran report cases}- covid19 cases, south korea, iran \newline \textit{Encouraging staying home}- stay home, don come work \newline \textit{Confirmation of cases}- wear mask, confirmed, moh, linked \\ \hline 2 Mar & \textit{Wearing masks and symptoms}- wear masks, vaccine, ncov, cough, symptoms \newline \textit{Jurong SAFRA cluster}- new cases, covid 19, cluster, safra, hospital discharged \newline \textit{Cases in Iran and Italy}- cases test, iran, italy, confirmed \newline \textit{Keeping track of case counts}- infected, true cases, new patients \\ \hline \end{tabular} \caption{Summary of topics across the week identified using LDA} \label{tab:topics} \end{table} We observe two consistently popular topics. First, topics related to ``cases". Understandably, the number of cases is a focal point in a pandemic (\textit{Feb 10:} [...]unfortunately, the ship has 60 new cases[...]; \textit{Feb 13:} A new case in NUS![...]). In all six weeks, the keyword ``cases" is almost always accompanied by ``confirmed" (e.g. week 3: cases, health, confirmed). In week 6, we observe for the first time the keyword ``true cases". Public health experts have warned that confirmed cases may not be the same as true cases, especially in countries that are under-testing \citep{warning}. We speculate that this distinction is being reflected in the group chat. The second consistently popular topic is masks. In weeks 1 and 2, participants seem to discourage people from panicking and wearing masks (e.g. 
\textit{Feb 9:} Not sick dont wear mask.). In later weeks, the keyword ``masks" is no longer clustered with ``dont", suggesting that participants are starting to support mask-wearing (\textit{27 Feb:} Absolutely agreed! Wear a mask to protect ourselves and not wear when we are ill!). In parallel, we observe that participants encourage others to stay home (\textit{8 Mar: }Students stay at home. Adults try to work from home.). Our observation that participants begin to discuss socially responsible behaviors in later weeks potentially indicates that messaging from public health authorities is having an impact. One behavior that is not a popular topic in the group chat is hand-washing. Possible reasons include: participants do not feel that hand-washing is important, or hand-washing is an obvious and non-controversial behavior that does not warrant discussion. Global events that receive attention in the group chat are the Diamond Princess Cruise ship in Japan (weeks 2 and 3), and the spread of the virus in South Korea, Italy and Iran from week 3 onward. \subsection{Psychological dimensions} \textbf{Shift in emotional values.} We use the 2015 version of Linguistic Inquiry and Word Count (LIWC) \cite{doi:10.1177/0261927X09351676}. LIWC is a word-count lexicon that summarises the emotional, cognitive, and structural components in a given text sample. We focus on cognitive and affective components (Figure \ref{fig:psycho}). In terms of affective emotions, anxiety fell over time but sadness increased over time. We speculate that group chat members were becoming more certain that the pandemic is a serious event. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{images/cogproc.png} \caption{Psychological Dimensions Over Time} \label{fig:psycho} \end{figure} \textbf{Correlation between Cognitive Processes and Emotions.} We perform a Pearson correlation test on affective and cognitive dimensions (Table \ref{tab:liwc-correlations}). 
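The correlation test just mentioned reduces to the standard Pearson formula. A self-contained sketch, using hypothetical weekly LIWC scores (not the paper's actual values), is:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly LIWC scores (per cent of words) for two dimensions.
posemo  = [2.1, 2.5, 3.0, 2.8, 3.2, 3.5]
cogproc = [10.0, 10.9, 12.1, 11.6, 12.4, 13.1]
print(round(pearson(posemo, cogproc), 2))
```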
We find that positive emotions are significantly correlated with cognitive processes. We also find that anxiety has significant negative correlation with cognitive processes. By analysing 835 messages with both positive emotions and cognitive processes, we find that 20\% of messages contain words that represent hope (\textit{Feb 10: }Let’s humanity prevails and hope all good at the end of the day) and thankfulness (\textit{18 Feb: }A huge thank you to our healthcare family for providing our patients with the best care possibl [sic]). In 1148 messages with anxiety, we note many participants experience fear as this disease is unknown (\textit{18 Feb:} I never sick for more than 5 years, but still have concern. Why? As announced by the gov, there is a lot we don't know about the nCoV-19. So why take the risk?). \begin{table}[b!] \centering \begin{tabular}{|p{0.8cm}|p{0.9cm}|p{0.9cm}|p{0.9cm}|p{0.8cm}|p{0.9cm}|p{0.8cm}| } \hline & \textbf{cog-\newline proc} & \textbf{in-\newline sight} & \textbf{ten-\newline tat} & \textbf{cer-\newline tain} & \textbf{cause} & \textbf{dis-\newline crep} \\ \hline \textbf{affect} & \textbf{0.94} & \textbf{0.92} & \textbf{0.95} & \textbf{0.90} & \textbf{0.91} & \textbf{0.99} \\ \hline \textbf{pos-\newline emo} & \textbf{0.91} & \textbf{0.91} & \textbf{0.95} &0.80 & \textbf{0.86} & \textbf{0.88} \\ \hline \textbf{neg-\newline emo} & 0.58 & 0.52 & 0.49 & 0.76 & 0.62 & 0.54 \\ \hline \textbf{anx} & \textbf{-0.83} & \textbf{-0.82} & \textbf{-0.86} & -0.70 & \textbf{-0.87} & -0.73 \\ \hline \textbf{anger} & 0.72 & 0.66 & 0.72 & 0.74 & 0.75 & 0.51 \\ \hline \textbf{sad} & 0.53 & 0.50 & 0.42 & 0.71 & 0.58 & 0.51 \\ \hline \end{tabular} \caption{Correlation values between Cognitive Process (rows) and Affective (columns) dimensions. 
Statistically significant values where $p<0.05$ are highlighted in \textbf{bold}} \label{tab:liwc-correlations} \end{table} \textbf{Visualising LIWC Features.} We compress our n-dimensional LIWC feature set using Singular Value Decomposition (SVD). We perform SVD for different numbers of retained singular values, from k=0 to k=50, and select k=20 using the elbow-rule heuristic on the resulting singular values. We visualise the salient set of compressed features with a t-SNE plot, and use a k-Nearest-Neighbours clustering algorithm to find cluster means, visualised in Figure \ref{fig:tsne}. In this cluster analysis of text messages using linguistic features, we observe that users participate in five main clusters: (1) Reposts from news websites, (2) Short netspeak expressions, (3) General discourse, (4) Questions and (5) Sharing medical knowledge. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{images/messagecluster.jpg} \caption{Clusters of Text Messages} \label{fig:tsne} \end{figure} \section{How prevalent is government-identified misinformation?} We analyse the prevalence of misinformation in the group chat, focusing on pieces of misinformation which have been corrected by the Singapore government. We discuss participants' reactions to these pieces of misinformation. \subsection{Identifying misinformation} We refer to the list of corrections about COVID-19 provided by the Ministry of Health (MOH)\footnote{\url{https://www.moh.gov.sg/covid-19/clarifications}} and the government's fact-checking web page Factually\footnote{\url{https://www.gov.sg/factually}}. Between 24 January and 8 March 2020, the two sites listed 17 corrections. The fact that the Singapore government has addressed these pieces of misinformation suggests that they have been identified as particularly harmful.
Our approach (relying on a list created by a fact-checking third party) is borrowed from others including \cite{10.1145/3308558.3313688,resende2019analyzing}. We recognise that the third party (in our case, the Singapore government) may not have identified every piece of misinformation. This limitation speaks to wider challenges in the study of misinformation, namely the difficulty of establishing a ground-truth standard of what constitutes misinformation. To search for misinformation in the group chat, we focus on the five days surrounding each piece of misinformation: two days before, the day of clarification, and two days after. For textual messages, we first filter for keywords, then manually screen the results. For example, to identify messages challenging the ``validity of the maskgowhere.gov.sg site", we perform automatic keyword filtering for messages containing ``maskgowhere", which returns 6 results. However, all 6 messages discuss the site in general instead of challenging the site's validity, so our final result is 0. For images, videos, audio and urls, we perform a manual search. \subsection{Results} We find that government-identified misinformation is rare on the group chat. 6 (out of 17) pieces of misinformation are discussed in the group chat. These pieces of misinformation are contained in 18 textual messages, 3 urls, 6 images and 1 video, representing 0.05\% of all messages. Full results are shown in Table \ref{tab:rq2}. Moreover, we find that messages that discuss misinformation tend to express skepticism. Participants seek to verify the accuracy of content rather than simply ``passing it on" (\textit{12 Feb:} jolibee (sic) lucky plaza got case?). Participants' attitudes towards misinformation deserve a comprehensive investigation, but here we report two preliminary, interesting observations.
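The five-day window and keyword filter described above can be sketched as follows; the matched messages would then be screened manually. The sample records are hypothetical reconstructions based on the examples quoted in the text.

```python
from datetime import date, timedelta

def window_matches(messages, clarification_day, keywords):
    """Return messages within +/- 2 days of a government clarification
    that contain any of the given keywords (case-insensitive)."""
    lo = clarification_day - timedelta(days=2)
    hi = clarification_day + timedelta(days=2)
    hits = []
    for day, text in messages:
        if lo <= day <= hi and any(k in text.lower() for k in keywords):
            hits.append((day, text))
    return hits

# Hypothetical message records for illustration.
msgs = [
    (date(2020, 2, 12), "jolibee lucky plaza got case?"),
    (date(2020, 2, 20), "lucky plaza reopened"),  # outside the window
]
print(window_matches(msgs, date(2020, 2, 12), ["lucky plaza", "jollibee"]))
```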
First, 0.4\% of all messages contain ``Is this fake" or variations (``true or fake", ``true or not"), almost 10x more than messages discussing misinformation. This suggests that participants are aware of the prevalence of misinformation. Many people are not passive consumers; they seek to actively check multiple sources and verify information \citep{dubois2018echo}. Second, in 63 instances, participants use ``POFMA" in new ways. The POFMA (Protection from Online Falsehoods and Manipulation Act) is a Singapore law passed in 2019 aimed at correcting misinformation. Originally the name of a law, the term is being used as a verb (``POFMA me"), an entity with agency (``POFMA busy"), and a byword for misinformation (``that doesn't look like censoring but pofma"). \begin{table} \centering \begin{tabular}{ |p{1cm}|p{2cm}|p{2cm}|p{2cm}| } \hline \textbf{Date} & \textbf{Description} & \textbf{Keywords Matches} & \textbf{Sample Messages} \\ \hline 14 Feb & False statements on States Times Review Facebook page & 2 texts: States Times Review (2) \newline 1 url & SST is known for publishing fake news \\ \hline 13 Feb & Graphic aired on CNA Asia listed a death to Singapore instead of Hong Kong & 3 texts: CNA (3), death(3) \newline 5 images, 2 url & Antonio: Isn't the 1 death from Philippines not Singapore? \newline Is this true? Can't find other source for the 1 death in sg \\ \hline 12 Feb & Voice recording via text messaging platforms advising persons to avoid Lucky Plaza after an individual had fainted at Jollibee & 2 texts: lucky plaza (2), jollibee (2) & jolibee (sic) lucky plaza got case? \newline Heard theres a lock down now on the Sixth Floor of Lucky Plaza. Who can verify? \\ \hline 7 Feb & Message on a death in Singapore due to the virus & 5 texts: died (5) & Singapore dorscon level will change to orange at 2.30pm. The condition of 2 people with the virus have worsened. 1 of them in critical condition. \newline most likely announce orange follow by 1 death.
\\ \hline 28 Jan & Woodlands MRT was closed for disinfection & 4 texts: Woodlands mrt (1), disinfection (4) & is woodlands mrt really closed for disinfection? \\ \hline 24 Jan & Suspected case at EastPoint Mall, who visited raffles medical center & 4 texts: Eastpoint mall(4), raffles medical (1) \newline 1 image \newline 1 video & is this verified? (regarding eastpoint suspected case) \\ \hline \end{tabular} \caption{Government Clarifications and Chatter on \textit{sgVirus} group} \label{tab:rq2} \end{table} \section{Conclusion} We analyse a Telegram group chat about COVID-19 in Singapore. The group was most active from 3-9 February. This week is notable because of a leaked press release prior to the official shift to DORSCON orange. We believe this demonstrates the importance of coherent and unified public health communication. User participation is short-lived, plausibly indicating that the group chat did not meet users' needs for information and support. Across the weeks, emotions shifted from anxiety to sadness, and negative sentiment decreased. We also find that government-identified misinformation is rare on the group chat. Messages that discuss these pieces of misinformation tend to be skeptical. In general, participants often express doubt about the validity of content; they are not passive consumers of (mis)information. Our study is a preliminary attempt to investigate an ongoing and dynamic crisis. It contributes to the small but growing literature on group chats as platforms for sharing (mis)information. A single Telegram group is not representative of the rest of the population, so it is hard to tell if the findings are generalizable. Further research can include more group chats, compare cross platform group chats, extend the analysis to cover the ``second wave" in Singapore, and/or look for other types of misinformation, not just government-identified misinformation. 
\begin{sidewaysfigure*}[ht] \includegraphics[width=1.0\textwidth]{images/cases.png} \caption{Timeline of major news events and number of cases in Singapore for the First Wave, where infections are from persons who come from Wuhan.\footnote{Cases were referred from The Straits Times for Singapore News and New York Times for International News. For Week 5 and 6, the situation had grown proportionally internationally, and we picked the most salient topics that the Singapore group chat was discussing.}} \label{fig:cases} \end{sidewaysfigure*} {\small
\section{Introduction} The {\it Fermi} Large Area Telescope (LAT) has provided the most comprehensive view of the $\gamma$-ray sky in the 100 MeV$-$300 GeV energy range \citep{FLAT}. The most recent catalogue of $\gamma$-ray sources detected by the LAT, the third {\it Fermi} Large Area Telescope source catalogue \citep[3FGL;][]{3FGL}, is based on data collected in four years of operation, from 2008 August 4 (MJD 54682) to 2012 July 31 (MJD 56139)\footnote{Data are available from the Fermi Science Support Center website:\\ {\tt http://fermi.gsfc.nasa.gov/ssc/data/access/lat/4yr\_catalog/}} and contains 3033 sources. The two largest $\gamma$-ray source classes are Active Galactic Nuclei (AGN), with 1745 objects, and pulsars (PSR), with 167 objects. Out of 1745 AGN, 1144 are blazars, subdivided into 660 BL Lacertae (BLL) and 484 Flat Spectrum Radio Quasars (FSRQ). The catalogue also includes 573 blazars of uncertain type (BCU), i.e. $\gamma$-ray sources positionally coincident with an object showing distinctive broad-band blazar characteristics but lacking reliable optical spectrum measurements. In addition, 30 per cent of the 3FGL sources, 1010 objects, do not have even a tentative association with a likely $\gamma$-ray-emitting object and are referred to as unassociated sources. As a result, the nature of about half the $\gamma$-ray sources, i.e. BCU and unassociated sources, is still not completely known. Since blazars are the most numerous $\gamma$-ray source class, we expect that a large fraction of unassociated sources might belong to one of its subclasses, BLLs or FSRQs. Rigorous determination of whether an unassociated source is a BLL or a FSRQ requires the optical spectrum of the correct counterpart. FSRQs have strong, broad emission lines at optical wavelengths, while BLLs show at most weak emission lines, sometimes display absorption features, and can also be completely featureless \citep{abd10}.
For this reason, detailed optical spectral observation campaigns to identify the nature of many unassociated sources are in progress \citep[e.g.][]{lan15,mas16,alv16,mar16}. Unfortunately, optical observations are demanding and time-consuming. An easy screening method to suggest the nature of a $\gamma$-ray source counterpart could be very useful for the scientific community in order to plan new focused observational campaigns and research projects. Machine-learning techniques are powerful tools for screening and ranking objects according to their \emph{predicted} classification. Recently, \citet{pablo} developed a method based on machine-learning techniques to distinguish pulsars from AGN candidates among 3FGL unassociated sources using only $\gamma$-ray data. In this work we explore the possibility of applying our Blazar Flaring Pattern (B-FlaP) algorithm, which is based on an artificial neural network technique \citep[for a full description see][]{bflap}, to provide a preliminary and reliable identification of AGN-like unassociated sources as likely BLL or FSRQ candidates. The paper is organized as follows: in Sectn.~\ref{sec:2} we provide a brief description of the machine-learning technique we employed, in Sectn.~\ref{sec:3} we present results of the algorithm in classifying 3FGL unassociated sources and we test our predictions through optical spectral observations of a number of targets, and we discuss implications of our results in Sectn.~\ref{sec:5}. \section{Machine-learning analysis}\label{sec:2} The aim of this work is to examine the nature of 3FGL unassociated sources in order to select the best candidate sources, according to their predicted source class, for multiwavelength observations and to estimate the number of new $\gamma$-ray sources in each class that we might expect to identify in the future. Machine-learning algorithms are well suited to screening and classifying unassociated sources based on $\gamma$-ray data only.
Such techniques were applied to 3FGL unassociated sources by \citet{mir16} to pinpoint potentially novel source classes, and by \citet{pablo} to classify them as likely AGN or PSR including, for the latter, predictions on the likely type of pulsar. Focusing on the latter approach, they distinguished AGN from PSR using their $\gamma$-ray timing and spectral properties, combining results from Random Forest \citep{bre01} and boosted logistic regression \citep{fri00}. Out of 1010 unassociated sources, 559 were classified as likely AGN and 334 as likely PSR with an overall accuracy of $\sim$96 per cent. In addition, they used the same approach to classify pulsars into ``young'' and millisecond pulsars, leaving the distinction of AGN subclasses unexplored. Here we complement the analysis performed in \citet{pablo} with a classification of the 559 3FGL unassociated sources deemed likely AGN as likely BLL or FSRQ, using the B-FlaP method described in \citet{bflap}. B-FlaP uses Empirical Cumulative Distribution Function (ECDF) and Artificial Neural Network (ANN) machine-learning techniques to classify blazars, taking advantage of the different $\gamma$-ray flaring activity of BLLs and FSRQs. We used a two-layer ANN algorithm \citep{bis95} to quantify the blazar flaring, using as inputs the 10 $\gamma$-ray flux values below which 10\%, 20\%, ..., 100\% of the observations fall. The output was set up to have two possibilities: FSRQ or BLL, with a likelihood ($L$) assigned to each so that $L_{\textrm{\scriptsize BLL}}=1-L_{\textrm{\scriptsize FSRQ}}$. The closer the value of $L$ is to 1, the greater the likelihood that the source belongs to that specific source class. The ANN was optimized using as training sample all 660 BLLs and 484 FSRQs in the 3FGL catalogue, through a learning method based on a standard back-propagation algorithm. 
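The input stage just described can be sketched in a few lines of Python; the light-curve values are invented and the nearest-rank percentile convention is our own reading of the description, not a statement of the actual B-FlaP implementation:

```python
def bflap_features(fluxes):
    """ECDF summary of a gamma-ray light curve: the 10 flux values below
    which 10%, 20%, ..., 100% of the measurements fall (the B-FlaP inputs).
    A nearest-rank percentile convention is assumed here."""
    s = sorted(fluxes)
    n = len(s)
    # index of the smallest value with at least k per cent of the data at or below it
    return [s[-(-n * k // 100) - 1] for k in range(10, 101, 10)]

# toy monthly light curve (hypothetical flux values, arbitrary units)
lc = [1.0, 1.2, 0.8, 5.0, 0.9, 1.1, 0.7, 6.5, 1.0, 0.95]
features = bflap_features(lc)
assert len(features) == 10 and features[-1] == max(lc)
```

The 10 returned values play the role of the ANN input layer described above; a strongly flaring light curve shows up as a heavy tail in the upper percentiles.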
The left-hand panel of Figure~\ref{fig:ANN} shows the likelihood distribution applied to all 3FGL blazars, revealing a clear separation of the two sub-classes of blazars based on flaring patterns. Defining the \emph{precision} as the positive association rate, an $L_{\textrm{\scriptsize BLL}}$ value greater than 0.566 provides a \emph{precision} of 90 per cent for recognizing BLLs, while $L_{\textrm{\scriptsize BLL}}$ less than 0.230 identifies FSRQs with 90 per cent \emph{precision}. Thanks to this approach, we have been able to apply B-FlaP to the full sample of 3FGL BCUs \citep{bflap}, obtaining statistical classifications for approximately 85 per cent of the sources\footnote{The list of B-FlaP BCU classifications is published online at:\\ \texttt{https://academic.oup.com/mnras/article/462/3/3180/2589794/Blazar-flaring-patterns-B-FlaP-classifying-blazar\#supplementary-data}}. Comparing the B-FlaP predictions with spectroscopic observations subsequently retrieved from the literature \citep[e.g.][]{Vermeulen95, Titov11, Shaw13, AC16a, AC16b, Klindt17}, we note that there is very good agreement between predictions and spectroscopic classifications. Of the 55 classified BCUs, 52 turn out to be spectroscopically confirmed, while three, namely two FSRQ candidates (3FGL J0343.3+3622 and 3FGL J0904.3+4240) and one BLL candidate (3FGL J1129.4$-$4215), are not consistent with the observations. While for 3FGL J0343.3+3622 ($L_{BLL} = 0.583$) and 3FGL J0904.3+4240 ($L_{BLL} = 0.673$) we have an intermediate likelihood value, meaning that the sources fall in a region characterised by a substantial overlap of the two source classes, the case of 3FGL J1129.4$-$4215 is more problematic. With its low $L_{BLL} = 0.062$, it should be a high-confidence FSRQ, while the observed counterpart definitely shows a BL Lac nature \citep{AC16b}. This source, however, has multiple possible counterparts (SUMSS J113014$-$421414 and SUMSS J113006$-$421441), all lying several arc minutes away from the signal centroid. 
In such circumstances, it may happen that the $\gamma$-ray signal is affected by contamination or wrongly associated, leading to the observed contradiction. \citet{pablo} used all types of AGN to classify unassociated sources as likely AGN, including not only blazars, but also radio galaxies, compact steep spectrum quasars, Seyfert galaxies, narrow-line Seyfert 1s and other non-blazar active galaxies as well. These objects produce an overall contamination of $\sim$1.5 per cent of the blazar sample, slightly changing the \emph{precision} of the ANN for the given classification thresholds. As a result, the \emph{precision} for recognizing BLLs and FSRQs decreases to 87 per cent, introducing a non-blazar contamination of $\sim$2 per cent for the former and $\sim$3 per cent for the latter. \begin{figure*} \begin{center} {\includegraphics[width=7.5cm, angle=0,clip=true]{Blazar_distribution_all.png}\hspace{.6cm} \includegraphics[width=7.5cm, angle=0,clip=true]{UCS_distributions.png}} \caption{(Left) Distribution of the ANN likelihood to be a BLL candidate for 3FGL BLLs (blue) and FSRQs (red). The distribution of the likelihood to be an FSRQ candidate ($L_{\textrm{\scriptsize FSRQ}}$) is $1-L_{\textrm{\scriptsize BLL}}$. (Right) Same distribution for 559 3FGL unidentified sources classified as likely AGN by \citet{pablo}. Vertical blue and red lines indicate the classification thresholds of our ANN algorithm to label a source as BLL or FSRQ, respectively, as described in the text.} \label{fig:ANN} \end{center} \end{figure*} \section{Results and validation}\label{sec:3} In this section we discuss the results of our optimized ANN algorithm in classifying BLL and FSRQ candidates among 3FGL unassociated sources. Applying our optimized algorithm to the 559 unassociated sources classified as likely AGN by \citet{pablo}, we find that 271 are classified as BLL candidates, 185 as FSRQ candidates, and 103 remain likely AGN of uncertain type. 
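This three-way split follows mechanically from the likelihood thresholds defined in Sectn.~\ref{sec:2}; a minimal sketch (the helper name is ours, the thresholds and spot-check values are from the text and Table~\ref{tab:3}):

```python
def classify(L_bll, t_bll=0.566, t_fsrq=0.230):
    """Three-way decision from the ANN likelihood L_BLL, using the
    90-per-cent-precision thresholds quoted in the text."""
    if L_bll > t_bll:
        return 'BLL'
    if L_bll < t_fsrq:
        return 'FSRQ'
    return 'AGN Uncertain'

# spot checks against rows of the classification table
assert classify(0.986) == 'BLL'            # 3FGL J0000.2-3738
assert classify(0.383) == 'AGN Uncertain'  # 3FGL J0006.6+4618
assert classify(0.118) == 'FSRQ'           # 3FGL J0014.3-0455
```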
The right-hand panel of Figure~\ref{fig:ANN} shows the likelihood distribution applied to likely AGN, which reflects very well that of known BLLs and FSRQs. Interestingly, we find that the ratio of likely BLL to FSRQ candidates obtained by our analysis ($\sim$1.4) is very similar to the ratio of known BLLs to FSRQs (1.4). Table~\ref{tab:3} shows a portion of the individual results of the classification of 3FGL unassociated sources classified as likely AGN, where, for each source, we provide the ANN likelihood ($L$) to be a BLL or an FSRQ, and the predicted classification according to the defined classification thresholds. The second and third columns of the list show the Galactic longitude and latitude respectively. The full table is available electronically from the journal. \begin{table*} \caption{Classification list of 3FGL unassociated sources classified as likely AGN. The columns are: the name in the 3FGL catalogue, the Galactic longitude and latitude ($l$ and $b$), the ANN likelihood to be classified as a BLL ({$L_{\textrm{\scriptsize BLL}}$}) and an FSRQ ({$L_{\textrm{\scriptsize FSRQ}}$}) and the predicted classification. 
The full table is available online.}\label{tab:3} \begin{center} \begin{tabular}{lcccccc} \hline \hline {\bf 3FGL Name} & {\bf l ($^{\circ}$)} & {\bf b ($^{\circ}$)} & {\bf $L_{\textrm{\scriptsize BLL}}$ } & {\bf $L_{\textrm{\scriptsize FSRQ}}$} & {\bf Classification} \\ \hline J0000.2--3738 & 345.41 & --74.947 & 0.986 & 0.014 & BLL \\ J0001.6+3535 & 111.66 & --26.188 & 0.842 & 0.158 & BLL \\ J0002.0--6722 & 310.14 & --49.062 & 0.974 & 0.026 & BLL \\ J0003.5+5721 & 116.49 & --4.912 & 0.712 & 0.288 & BLL \\ J0004.2+0843 & 103.6 & --52.363 & 0.903 & 0.098 & BLL \\ J0006.2+0135 & 100.4 & --59.297 & 0.772 & 0.228 & BLL \\ J0006.6+4618 & 114.91 & --15.867 & 0.383 & 0.617 & AGN Uncertain \\ J0007.4+1742 & 108.33 & --43.911 & 0.825 & 0.175 & BLL \\ J0007.9+4006 & 113.98 & --22.007 & 0.856 & 0.144 & BLL \\ J0014.3--0455 & 99.59 & --66.096 & 0.118 & 0.882 & FSRQ \\ \hline \end{tabular} \end{center} \end{table*} Since we did not include any spectral information in the ANN algorithm, we validate our results by comparing the spectra of known BLLs and FSRQs with those of the likely blazar subclasses. Gamma-ray BLLs have average spectra that are flatter than those of FSRQs \citep{ack15}. The best-fitting photon spectral index (named power-law index in the 3FGL) distribution has a mean value of 2.02$\pm$0.25 for the former and 2.45$\pm$0.20 for the latter, where uncertainties are reported at the 1$\sigma$ confidence level. Figure~\ref{fig:powlaw} shows that the power-law index distributions for likely BLLs and FSRQs are consistent with those of known BLLs and FSRQs (mean value of 2.10$\pm$0.27 for the former and 2.54$\pm$0.21 for the latter). \begin{figure*} \begin{center} {\includegraphics[width=7.5cm, angle=0,clip=true]{PIbll.png}\hspace{.6cm} \includegraphics[width=7.5cm, angle=0,clip=true]{PIfsrq.png}} \caption{Power-law index distribution for the unidentified sources classified as blazars by the ANN method (filled histograms) in comparison to the previously classified blazars. 
Left: BLL; right: FSRQs.}\label{fig:powlaw} \label{plaw} \end{center} \end{figure*} Another way to validate the predictions of our method is to compare them with classifications obtained after the release of the 3FGL catalogue. Currently, optical spectroscopic observation campaigns to hunt blazars among unassociated $\gamma$-ray sources are ongoing \citep[see e.g.][]{lan15,mas16,alv16,mar16}. These follow-up multiwavelength classification efforts have resulted in 24 new blazar associations, 21 classified as BLL and 3 as FSRQs. Since our algorithm was optimized to select the best targets to observe in other wavelengths, we can evaluate the performance of our method by analyzing the positive association rate (\textit{precision}). Out of 24 new blazars with optical spectra, B-FlaP classifies 22 as BLL, while 2 remain unclassified. For the subset of 22 BLL candidates, our predictions match the optical spectra in about 90 per cent of the objects, in agreement with our definition of the classification thresholds, while we cannot assert anything about the \textit{precision} in the classification of FSRQs. This result gives a strong confirmation of the good performance of our classification algorithm, even if the number of new blazar associations is still small. \subsection{Optical spectroscopic observations}\label{sec:4} Encouraged by the good performance of our classification algorithm, we carried out optical spectral observations at the Asiago Astrophysical Observatory of the best targets within the unassociated source sample classified as likely AGN. The new observations were executed with the 1.82m {\it Copernico} telescope\footnote{Website: \url{http://www.oapd.inaf.it/index.php/en}} and the 1.22m {\it Galileo} telescope\footnote{Website: \url{http://www.dfa.unipd.it/index.php?id=300}}, configured for long-slit optical spectroscopy, with the instrumental configurations reported in Table~\ref{tabAsiago}. 
Both telescopes are able to provide moderate spectral resolution data ($R \sim 600$) over a wavelength range spanning 3700\AA\ to 7500\AA, achieving a continuum signal-to-noise ratio (SNR) of order $\sim 20$ in 2 hours of exposure on targets with an optical magnitude $V \sim 17$. \begin{table*} \caption{Instrumental characteristics of the Asiago Astrophysical Observatory Telescopes. \label{tabAsiago}} \begin{center} \begin{tabular}{lcc} \hline \hline {\bf Instrumental characteristic} & {\bf Copernico Telescope} & {\bf Galileo Telescope}\\ \hline Main mirror diameter & $1.82\,$m & $1.22\,$m \\ Focal length & $5.39\,$m & $6.00\,$m \\ Spectrograph & AFOSC$^{\rm a}$ & Boller \& Chivens \\ Entrance slit width & $1.69\arcsec$ & $3.5 - 5.0\arcsec$\\ Grating & $300\,$gr mm$^{-1}$ & $300\,$gr mm$^{-1}$\\ Wavelength range & $3700$--$8000\,$\AA & $3500$--$7500\,$\AA \\ Spectral resolution & $600$ & $600$ \\ \hline \end{tabular} \end{center} \begin{flushleft}\hspace{3.5cm} {\footnotesize $^{\rm a}$Asiago Faint Object Spectrograph and Camera } \end{flushleft} \end{table*} Taking into account the observational limitations introduced by the magnitude constraints and the geographical position of the observing site, we performed an observing campaign, selecting the targets for observations from the list of 3FGL unassociated sources. Due to the quite large uncertainties on the positions of $\gamma$-ray sources, especially faint ones, the identification of the plausible optical counterparts to be observed was obtained by combining radio and X-ray observations, in a similar way to the method described in \citet{bflap}. The reason for this combination lies in the theoretical expectation that a high-energy source, powered by a jet of relativistic charged particles producing $\gamma$-ray photons, should suffer significant energy losses through synchrotron radiation at radio and X-ray frequencies \citep*{Boettcher12}. 
Since the spatial resolution of detectors operating at these lower energies is far better than in the $\gamma$-ray band (down to a few arcseconds), the optical counterparts were associated with objects emitting both radio and X-ray photons, within the $\gamma$-ray signal confinement area at the 95 per cent confidence level. Although this technique proved reliable in supporting the association of BCU targets to optical counterparts, in the case of unassociated sources further care was required. These sources are generally faint and have quite large positional uncertainties. Consequently, the expected synchrotron losses are also weak and might be missed in standard X-ray and radio surveys. In general we are able to select the low-energy candidate counterpart by matching the NRAO VLA Sky Survey \citep[NVSS;][]{Condon98} with the {\it Swift} satellite X-ray catalogue \citep[1SXPS;][]{Evans14} in circular regions of 10$'$ in radius, centered on the $\gamma$-ray signal centroid. In some cases, however, the sources are too weak to be listed in a catalogue, particularly in X-rays, and a subsequent reanalysis of the observations of the target is the only way to pinpoint the most likely optical counterpart to such sources. When all the observational constraints were satisfied, we observed the targets, collecting a total of 2 hours of exposure time for every target, split into single exposures lasting from 20 up to 30 minutes each. The exact duration of the single exposures was determined by the best trade-off between the requirement to improve the SNR, the need to track the spectrum on the surface of the detectors, and the contamination from cosmic rays and night sky emission lines. These background contributions, indeed, may lead to saturation effects on the detector, with consequent loss of spectral information, and must therefore be removed. 
Combining several short exposures to form a longer observation can immediately filter out the cosmic ray background, since it follows a random pattern that is easily identified and masked out from the complete data set. The same process leads to an efficient subtraction of the sky emission lines, because the combined spectrum is not subject to saturation limits and can collect an arbitrarily high signal, making it possible to interpolate the sky contribution from regions close to the source and to remove it from the spectrum. All the observations were taken together with comparison FeAr arc lamp spectra to perform wavelength calibration, and they were paired with observations of spectro-photometric standard stars to provide flux calibration. We proceeded with the standard long-slit spectroscopic data reduction procedure, which involves bias subtraction and flat field correction, by means of standard IRAF tasks\footnote{Website: \url{http://www.iraf.noao.edu}}, arranged in a specific pipeline that is optimized to work with the instrumental configuration of the Asiago telescopes. In this study, we obtained 5 new spectra, illustrated in Fig.~\ref{figAsiago}. We detail in the following the characteristics of these spectra and the resulting insights. \subsubsection{3FGL J0032.5+3912} This object has been associated with an optical source that, once observed with the 1.82m telescope, showed the characteristic spectrum of an elliptical galaxy. The identification of absorption lines like the Ca~{\small II} $\lambda\lambda$3933,3969 doublet (detected at 4530\AA\ and 4572\AA), together with Mg~{\small I} $\lambda$5175 and Na~{\small I} $\lambda$5893 (detected at 5961\AA\ and 6789\AA), places the redshift of this object at $z = 0.152$. Its optical spectrum is consistent with an elliptical galaxy that may host BLL activity, in agreement with the BLL classification suggested by the machine-learning technique. 
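The redshift just quoted can be recovered directly from the listed line identifications; a quick numerical check (wavelengths in \AA, taken from the text):

```python
# rest-frame and observed wavelengths (Angstrom) of the identified absorption lines
lines = {
    'Ca II K': (3933, 4530),
    'Ca II H': (3969, 4572),
    'Mg I':    (5175, 5961),
    'Na I':    (5893, 6789),
}
redshifts = [obs / rest - 1 for rest, obs in lines.values()]
z_mean = sum(redshifts) / len(redshifts)
assert abs(z_mean - 0.152) < 1e-3  # consistent with z = 0.152
```

All four lines give the same redshift to within a few times $10^{-4}$, which supports the identification.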
\subsubsection{3FGL J2224.4+0351} Observed with the 1.82m telescope, this object shows a featureless continuum spectrum, with a peak close to 7000\AA, subsequently decaying towards the short wavelength regions. No clear signs of emission or absorption lines are detected in the spectrum, consistent with the predicted BLL classification of the source. \subsubsection{3FGL J2247.2$-$0004} This source has been observed with the 1.22m telescope and shows a featureless continuum spectrum, decaying towards the short wavelength regime, much like the previous case. The lack of clear emission and absorption lines is consistent with its predicted BLL classification. \subsubsection{3FGL J2300.0+4053} The spectrum of this source shows a more prominent power-law continuum that increases towards the short wavelength regime. With the exception of some unidentified spike-like features, very likely arising from increased noise due to lower detector efficiency, the classification turns out to be consistent with a source of BLL type. \subsubsection{3FGL J2358.5+3827} When observed with the 1.22m telescope, this source shows a clear system of emission lines that can be identified as the close [O~{\small II}] $\lambda\lambda$3726,3729 doublet (detected as an unresolved feature at 4477\AA) and the strong [O~{\small III}] $\lambda\lambda$4959,5007 doublet (falling at 5956\AA\ and 6013\AA), which place this source at redshift $z = 0.201$. The detection of strong narrow lines, together with an underlying continuum that becomes weaker at short wavelengths, is suggestive of obscured AGN activity. 
The source has a 1.4 GHz flux measured by the NVSS of $$F_{1.4\,{\rm GHz}} = 57.4 \pm 1.8\,{\rm mJy}$$ that, adopting a standard $\Lambda$ Cold Dark Matter Cosmology with $H_0 = 70\,{\rm km\, s^{-1}\, Mpc^{-1}}$, $\Omega_\Lambda = 0.7$ and $\Omega_M = 0.3$, corresponds to a distance of 920.3 Mpc and to an intrinsic luminosity $$\nu L_\nu = (5.49 \pm 0.17) \cdot 10^{40}\,{\rm erg\, s^{-1}}.$$ This suggests that this object could be classified as a Narrow Line Radio Galaxy (NLRG), in spite of the predicted classification as a BLL, although the detection of such a type of object in $\gamma$-rays, at the inferred redshift, is extremely rare. \section{Discussion and conclusions}\label{sec:5} One of the main goals of our investigation is to complete the census of blazar subclasses in the 3FGL source catalogue using the ANN technique based on B-FlaP \citep{bflap}. B-FlaP is well suited to perform a preliminary and reliable classification of likely blazars when detailed observational or multiwavelength data are not yet available. This is the typical situation for almost all unassociated sources in \emph{Fermi}-LAT catalogues. Recently, \citet{pablo} applied a number of machine-learning techniques to classify 3FGL unassociated sources as likely pulsars or AGN, focusing only on the former to identify the most promising unassociated sources to target in pulsar searches. We applied our algorithm to the 559 3FGL unassociated sources classified as likely AGN to investigate their source subclass. These sources can be divided into 271 BLL candidates and 185 FSRQ candidates, leaving only 103 without a clear classification. We validated our predictions by comparing their $\gamma$-ray spectra with the expected ones. In addition, we compared our results with the source classes inferred by recently published optical spectroscopic observations \citep{lan15,mas16,alv16,mar16}. 
This comparison results in 29 new blazar associations, out of which 5 are obtained thanks to our new optical observations. For the subset of 27 overlapping sources, our predictions match in $\sim$90 per cent of the objects, as expected. Such excellent agreement confirms the power of our method as a classifier for unidentified sources as well. Our work can help to identify targets both for blazar searches and for follow-up studies of blazars at very high $\gamma$-ray energies with ground-based imaging air Cherenkov telescopes (MAGIC, HESS, VERITAS). \citet{lef17} have recently published a paper aimed at searching for blazar candidates among the {\it Fermi}-LAT 3FGL catalogue using a combination of boosted classification trees and multilayer perceptron artificial neural network methods. Their work is divided into two steps. In the first they apply the combined classifier to separate 3FGL unassociated sources into blazar and pulsar candidates, while in the second they use the same approach to determine the BLL or FSRQ nature both of blazar candidates and of BCUs. In contrast to our approach, they use both spectral and timing $\gamma$-ray parameters to separate source classes. Out of 595 blazar candidates among the 3FGL unassociated sources, \citet{lef17} study the blazar subclass nature for 417 sources that have no caution flag as described in \citet{3FGL}. Out of these, 371 match with our blazar candidate sample. Applying their classifier to this sample, they divide it into 192 BLL and 129 FSRQ candidates. The comparison with our corresponding subset of 223 BLL candidates shows that our prediction is in agreement with \citet{lef17} for 174 objects (about 80 per cent) and in disagreement for 28 (about 12 per cent). We observe that 13 of the objects in disagreement are characterized by a relatively low prediction value ($L_{BLL}<0.7$), thus the discrepancy between the two approaches decreases significantly when a more robust classification threshold is defined. 
In addition, comparing with our subset of 83 FSRQ candidates, we observe that our prediction is in agreement for 62 sources (about 75 per cent) and in disagreement for 8 (about 9 per cent). Interestingly, analysing the power-law index distribution for the sources that are in disagreement with \citet{lef17}, we observe that they are located in the region of overlap between BLLs and FSRQs (between 2.1 and 2.9), making their classification difficult even when spectral information is included. Only future optical spectroscopic observations will unveil the real nature of these sources. As a result, although \citet{lef17} applied a very different approach from our work, we find a good overall agreement, indicating that both methods are useful classifiers. We obtained the same result applying our optimized algorithm to all 417 un-flagged sources classified as blazar candidates by \citet{lef17}. Putting together the overall result of this study with those obtained in \citet{bflap} and \citet{pablo}, we can characterize the entire $\gamma$-ray population, proposing a new distribution of 3FGL sources, as shown in Figure~\ref{fig:6}, where cells in red represent results obtained in this work. Table~\ref{tab:4} shows the number of $\gamma$-ray sources per class reported in the 3FGL catalogue and after this work. The number of BLLs (or candidates) increased by a factor of 1.9, while that of FSRQs by a factor of 1.7, raising the ratio of BLLs to FSRQs from 1.36 to 1.55. Interestingly, out of 180 blazars of uncertain type, only 20 (11 per cent) are located at low Galactic latitude ($|b| < 10^{\circ}$). We expect that a very small fraction (less than 3 per cent) of non-blazar AGN subclasses (Seyferts, radio galaxies and other AGN) could contaminate the sample of blazars of uncertain type. 
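The bookkeeping behind Table~\ref{tab:4} and the quoted growth factors can be verified directly (all numbers taken from the table and the text):

```python
# post-3FGL source counts (from Table 4) and 3FGL blazar counts
post = {'BLL': 1273, 'FSRQ': 823, 'BCU': 180, 'PSR': 501, 'UNASS': 117, 'OTHER': 139}
known_bll, known_fsrq = 660, 484

blazars = post['BLL'] + post['FSRQ'] + post['BCU']
total = blazars + post['PSR'] + post['UNASS'] + post['OTHER']
assert blazars == 2276 and total == 3033   # blazars are 75 per cent of all sources

# growth factors and BLL-to-FSRQ ratios quoted in the text
assert round(post['BLL'] / known_bll, 1) == 1.9
assert round(post['FSRQ'] / known_fsrq, 1) == 1.7
assert round(known_bll / known_fsrq, 2) == 1.36
assert round(post['BLL'] / post['FSRQ'], 2) == 1.55
```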
As an important result, the efforts aimed at classifying 3FGL sources decreased the fraction of uncertain sources (BCU and unassociated sources) from 52 per cent to 10 per cent, singling out the best targets for future follow-up multiwavelength observations. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{TreeDiagram2.pdf}} \caption{The 3FGL zoo after this work. Orange: source classification provided by \citet{bflap} and \citet{pablo}. Red: our classification of unassociated sources classified as likely AGN. Here UCS are unassociated sources that are not classified as PSR or AGN candidates.}\label{fig:6} \end{figure} \begin{table} \caption{The new classification of the most numerous $\gamma$-ray source classes combining this study with \citet{bflap} and \citet{pablo}, in comparison with the 3FGL catalogue.}\label{tab:4} \begin{center} \begin{footnotesize} \begin{tabular}{lcc} \hline \hline \bf{Class} & \bf{3FGL} & \bf{Post 3FGL} \\ \hline Blazar & 1717 (57\%) & 2276 (75\%)\\ \quad -- BLL & 660 & 1273 \\ \quad -- FSRQ & 484 & 823 \\ \quad -- BCU & 573 & 180 \\ Pulsar & 167 (6\%) & {\bf 501 (17\%)}\\ Unassociated & 1010 (33\%) & {\bf 117 (4\%)}\\ Others & 139 (4\%) & 139 (4\%)\\ \hline \end{tabular} \end{footnotesize} \end{center} \end{table} \section*{Acknowledgements} We thank the anonymous referee for his/her very helpful comments and suggestions on our manuscript. Support for science analysis during the operations phase is gratefully acknowledged from the \emph{Fermi}-LAT collaboration for making the 3FGL results available in such a useful form, the Institute of Space Astrophysics and Cosmic Physics of Milano, Italy (IASF INAF), and the NASA Goddard Space Flight Center. DS acknowledges support through EXTraS, funded by the European Commission Seventh Framework Programme (FP7/2007-2013) under grant agreement n. 607452. We thank Pablo Saz Parkinson, as this paper builds directly on his work. 
This work is based on observations collected at the Copernico telescope (Asiago, Italy) of the INAF - Osservatorio Astronomico di Padova and on observations collected with the 1.22m \textit{Galileo} telescope of the Asiago Astrophysical Observatory, operated by the Department of Physics and Astronomy ``G. Galilei'' of the University of Padova. \begin{figure*} \includegraphics[width=0.48\textwidth]{3FGLJ00325+3912.png} \includegraphics[width=0.48\textwidth]{3FGLJ22244+0351.png} \includegraphics[width=0.48\textwidth]{3FGLJ22472-0004.png} \includegraphics[width=0.48\textwidth]{3FGLJ23000+4053.png} \includegraphics[width=0.48\textwidth]{3FGLJ23585+3827.png} \caption{Optical spectra of 3FGL unassociated sources observed from the Asiago Astrophysical Observatory. \label{figAsiago}} \end{figure*}
\section{Introduction and main results}\label{Intro} \setcounter{equation}{0} The purpose of the present paper is to exhibit a new formula in Hamiltonian dynamics, both simple and general, relating the time evolution of localisation observables to the variation of energy along classical orbits. Our result is the following. Let $M$ be a (finite or infinite-dimensional) symplectic manifold with symplectic $2$-form $\omega$ and Poisson bracket $\{\;\!\cdot\;\!,\;\!\cdot\;\!\}$. Let $H\in\cinf(M)$ be a Hamiltonian on $M$ with complete flow $\{\varphi_t\}_{t\in\R}$. Let $\Phi\equiv(\Phi_1,\ldots,\Phi_d)\in\cinf(M;\R^d)$ be a family of observables satisfying the condition \begin{equation}\label{cond_com} \big\{\{\Phi_j,H\},H\big\}=0 \end{equation} for each $j\in\{1,\ldots,d\}$. Then we have (see Theorem \ref{IntCont}, Corollary \ref{cor_car} and Lemma \ref{LemChile} for a precise statement): \begin{Theorem}\label{gros_bidule} Let $H$ and $\Phi$ be as above. Let $f:\R^d\to\C$ be such that $f=1$ on a neighbourhood of $\,0$, $f=0$ at infinity, and $f(x)=f(-x)$ for each $x\in\R^d$. Then there exist a closed subset $\Crit(H,\Phi)\subset M$ and an observable $T_f\in\cinf\big(M\setminus\Crit(H,\Phi)\big)$ satisfying $\{T_f,H\}=1$ on $M\setminus\Crit(H,\Phi)$ such that \begin{equation}\label{LaFormule} \lim_{r\to\infty}\12\int_0^\infty\d t\,\big[\big(f(\Phi/r)\circ\varphi_{-t}\big)(m) -\big(f(\Phi/r)\circ\varphi_t\big)(m)\big] =T_f(m) \end{equation} for each $m\in M\setminus\Crit(H,\Phi)$. \end{Theorem} The observable $T_f$ admits a very simple expression given in terms of the Poisson brackets $\partial_jH:=\{\Phi_j,H\}$ and the vector $\nabla H:=(\partial_1H,\ldots,\partial_dH)$, namely, \begin{equation}\label{prem_def_T} T_f:=-\Phi\cdot(\nabla R_f)(\nabla H), \end{equation} where $\nabla R_f:\R^d\to\C^d$ is some explicit function (see Section \ref{section_R}). 
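Before going further, let us record an elementary example (ours, not part of the statement): for the free particle on $T^*\R$ the limit in \eqref{LaFormule} can be computed by hand. Take $H(q,p):=p^2/2$, $\Phi:=q$ and $f:=\chi_{[-1,1]}$, so that $f(\Phi/r)$ localises in $|q|\le r$. For $p>0$ and $r>|q|$,

```latex
\begin{equation*}
\12\int_0^\infty\d t\,\big[\chi_{[-r,r]}(q-tp)-\chi_{[-r,r]}(q+tp)\big]
=\12\Big(\frac{q+r}p-\frac{r-q}p\Big)
=\frac qp\,,
\end{equation*}
```

and similarly for $p<0$. The result is already independent of $r$, and indeed $\{q/p,H\}=1$ away from the set $\{p=0\}$, so $T_f=q/p$ in this case.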
In order to give an interpretation of Formula \eqref{LaFormule}, consider for a moment the situation where $M:=T^*\R^n\simeq\R^{2n}$ is the standard symplectic manifold with canonical coordinates $(q,p)$ and $2$-form $\omega:=\sum_{j=1}^n\d q^j\wedge\d p_j$. Furthermore, let $H(q,p):=h(p)$ be a purely kinetic energy Hamiltonian and let $\Phi(q,p):=q$ be the standard family of position observables. In such a case, the condition \eqref{cond_com} is readily verified, the vector $\nabla H$ reduces to the usual velocity observable $\nabla h$ associated to $H$, and the l.h.s. of Formula \eqref{LaFormule} has the following meaning: For $r>0$ and $m\in M\setminus\Crit(H,\Phi)$ fixed, it is equal to the difference of times spent by the classical orbit $\{\varphi_t(m)\}_{t\in\R}$ in the past (first term) and in the future (second term) within the region $\Sigma_r:=\supp[f(\Phi/r)]\subset M$ defined by the localisation observable $f(\Phi/r)$. Moreover, if we interpret the map $\frac\d{\d H}:=\{T_f,\;\!\cdot\;\!\}$ as a derivation on $\cinf\big(M\setminus\Crit(H,\Phi)\big)$, then $T_f$ on the r.h.s. of \eqref{LaFormule} can be seen as an observable ``derivative with respect to the energy $H$'' on $M\setminus\Crit(H,\Phi)$, since $\frac\d{\d H}(H)=\{T_f,H\}=1$ on $M\setminus\Crit(H,\Phi)$. Therefore, Formula \eqref{LaFormule} provides a new relation between sojourn times and variation of energy along classical orbits. Evidently, this interpretation remains valid in the general case provided that we consider the observables $\Phi_j$ as the components of an abstract position observable $\Phi$ (see Remark \ref{Rem_Int}). Our interest in this issue has been aroused by a recent paper \cite{RT10}, where the authors establish a similar formula in the framework of quantum (Hilbertian) theory. 
In that reference, $H$ is a selfadjoint operator in a Hilbert space $\H$, $\Phi\equiv(\Phi_1,\ldots,\Phi_d)$ is a family of mutually commuting selfadjoint operators in $\H$, \eqref{cond_com} is a suitable version of the commutation relation $\big[[\Phi_j,H],H\big]=0$, and $T_f$ is a time operator for $H$ (\ie a symmetric operator satisfying the canonical commutation relation $[T_f,H]=i$). So, apart from its intrinsic interest, the present paper also provides a new example of a result valid both in quantum and in classical mechanics. Points of the symplectic manifold correspond to vectors of the Hilbert space, complete Hamiltonian flows correspond to one-parameter unitary groups, Poisson brackets correspond to commutators of operators, \etc~(see \cite[Sec.~5.4]{AM78} and \cite{Lan98} for the interconnections between classical and quantum mechanics). Accordingly, we try to put into light throughout the paper the relation between the two theories. For instance, we link in Remark \ref{rem_spec} the confinement (resp. the non-periodicity) of the classical orbits $\{\varphi_t(m)\}_{t\in\R}$, $m\in M$, to the affiliation of the corresponding quantum orbits $\{\e^{itH}\psi\}_{t\in\R}$, $\psi\in\H$, to the singular (resp. absolutely continuous) subspace of $\H$. Moreover, we show in Section \ref{qsys} that the Hilbert space theory of \cite{RT10} can be recast into the present framework of symplectic geometry by using expectation values. We also mention that Formula \eqref{LaFormule}, with r.h.s. defined by \eqref{prem_def_T}, provides a crucial preliminary step for the proof of the existence of classical time delay for abstract scattering pairs $\{H,H+V\}$ (see \cite{BP09}, \cite[Sec.~4.1]{dCN02}, and \cite[Sec.~3.4]{Thi97} for an account on classical time delay). 
If $V$ is an appropriate perturbation of $H$ and $S$ is the associated scattering map, then the classical time delay $\tau(m)$ for $m\in M$ defined in terms of the localisation operators $f(\Phi/r)$ should be reexpressed as follows: it is equal to the l.h.s. of \eqref{LaFormule} minus the same quantity with $m$ replaced by $S(m)$. Therefore, if $m$ and $S(m)$ are elements of $M\setminus\Crit(H,\Phi)$, then the classical time delay for the scattering pair $\{H,H+V\}$ should satisfy the equation $$ \tau(m)=(T_f-T_f\circ S)(m). $$ Now, the property $\{T_f,H\}(m)=1$ implies that $T_f(m)=(T_f\circ\varphi_t)(m)-t$ for each $t\in\R$. Since $S$ commutes with $\varphi_t$, this would imply that $$ \tau(m)=\big[(T_f-T_f\circ S)\circ\varphi_t\big](m) $$ for all $t\in\R$, meaning that the classical time delay is a first integral of the free motion. This property corresponds in the quantum case to the fact that the time delay operator is decomposable in the spectral representation of the free Hamiltonian (see \cite[Rem.~4.4]{RT11}). Let us now describe more precisely the content of this paper. In Section \ref{section_R} we recall some definitions in relation with the function $f$ that appear in Theorem \ref{gros_bidule}. The function $R_f$ is introduced and some of its properties are presented. Then we prove various versions of Formula \eqref{LaFormule} in the particular case where the functions $\Phi\circ\varphi_{\pm t}:M\to\R^d$ are fixed vectors $x\pm ty$, $x,y\in\R^d$ (see Proposition \ref{f_integrale}, Lemma \ref{boulette} and Corollary \ref{poiscaille}). In Section \ref{Sec_Crit}, we introduce the Hamiltonian system $(M,\omega,H)$ and the abstract position observable $\Phi$. Then we define the (closed) set of critical points $\Crit(H,\Phi)$ associated to $H$ and $\Phi$ as (see \cite[Def.~2.5]{RT10} for the quantum analogue): $$ \Crit(H,\Phi):=\big\{m\in M\mid(\nabla H)(m)=0\big\}. 
$$ When $H(q,p)=h(p)$ and $\Phi(q,p):=q$ on $M=\R^{2n}$, $\Crit(H,\Phi)$ coincides with the usual set $\Crit(H)$ of critical points of the Hamiltonian vector field $X_H$, \ie $$ \Crit(H) \equiv\big\{m\in M\mid X_H(m)=0\big\} =\big\{(q,p)\in\R^{2n}\mid(\nabla h)(p)=0\big\} =\Crit(H,\Phi). $$ But, in general, we simply have the inclusion $\Crit(H)\subset\Crit(H,\Phi)$. In Section \ref{sec_main_form}, we prove the main results of this paper. Namely, we show Formula \eqref{LaFormule} when the localisation function $f$ is regular (Theorem \ref{IntCont}) or equal to a characteristic function (Corollary \ref{cor_car}). We also establish in Theorem \ref{thm_discrete} a discrete-time version of Formula \eqref{LaFormule}. The interpretation of these results is discussed in Remarks \ref{rem_spec} and \ref{Rem_Int}. In Section \ref{exemp}, we show that our results apply to many Hamiltonian systems $(M,\omega,H)$ appearing in the literature. In the case of finite-dimensional manifolds, we treat, among other examples, Stark Hamiltonians, homogeneous Hamiltonians, purely kinetic Hamiltonians, the repulsive harmonic potential, the simple pendulum, central force systems, the Poincar\'e ball model and covering manifolds. In the case of infinite-dimensional manifolds, we discuss separately classical and quantum Hamiltonian systems. In the classical case, we treat the wave equation, the nonlinear Schr\"odinger equation and the Korteweg-de Vries equation. In the quantum case, we explain how to recast into our framework the (Hilbertian) examples of \cite[Sec.~7]{RT10}, and we also treat an example of Laplacian on trees and complete Fock spaces. In all these cases, we are able to exhibit a family of position observables $\Phi$ satisfying our assumptions. The diversity of the examples covered by our theory, together with the existence of a quantum analogue \cite{RT10}, makes us strongly believe that Formula \eqref{LaFormule} is of natural character.
Moreover it also suggests that the existence of time delay is a very common feature of classical scattering theory. \section{Integral formula}\label{section_R} \setcounter{equation}{0} In this section, we prove an integral formula and a summation formula for functions on $\R^d$. For this, we start by recalling some properties of a class of averaged localisation functions which appears naturally when dealing with quantum scattering theory. These functions, which are denoted $R_f$, are constructed in terms of functions $f\in\linf(\R^d)$ of localisation around the origin $0\in\R^d$. They were already used, in one form or another, in \cite{GT07,RT10,RT11,Tie08,Tie09}. We use the notation $\langle x\rangle:=\sqrt{1+|x|^2}$ for any $x\in\R^d$. \begin{Assumption}\label{assumption_f} The function $f\in\linf(\R^d)$ satisfies the following conditions: \begin{enumerate} \item[(i)] There exists $\rho>0$ such that $|f(x)|\le{\rm Const.}\;\!\langle x\rangle^{-\rho}$ for almost every $x\in\R^d$. \item[(ii)] $f=1$ on a neighbourhood of~~$0$. \end{enumerate} \end{Assumption} It is clear that $\lim_{r\to\infty}f(x/r)=1$ for each $x\in\R^d$ if $f$ satisfies Assumption \ref{assumption_f}. Furthermore, one has for each $x\in \R^d\setminus\{0\}$ $$ \left|\int_0^\infty\frac{\d\mu}\mu\[f(\mu x)-\chi_{[0,1]}(\mu)\]\right| \le\int_0^1\frac{\d\mu}\mu\,|f(\mu x)-1| +{\rm Const.}\int_1^{+\infty}\d\mu\,\mu^{-(1+\rho)} <\infty, $$ where $\chi_{[0,1]}$ denotes the characteristic function for the interval $[0,1]$. Therefore the function $R_f:\R^d\setminus\{0\}\to\C$ given by $$ R_f(x):=\int_0^{+\infty}\frac{\d\mu}\mu\[f(\mu x)-\chi_{[0,1]}(\mu)\] $$ is well-defined. In the next lemma we recall some differentiability and homogeneity properties of $R_f$. We also give the explicit form of $R_f$ when $f$ is a radial function. The reader is referred to \cite[Sec. 2]{Tie09} for proofs and details. The symbol $\S(\R^d)$ stands for the Schwartz space on $\R^d$. 
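As an illustration (ours, not part of the original analysis), the definition of $R_f$ can be checked numerically. The following Python sketch approximates $R_f$ by a midpoint Riemann sum for the radial profile $f_0(s)=1$ if $s\le1$ and $f_0(s)=s^{-2}$ if $s>1$, which satisfies Assumption \ref{assumption_f} with $\rho=2$; for this profile one computes by hand $R_f(x)=\frac12-\ln|x|$. The names \texttt{f0} and \texttt{R\_f\_num} are ours.

```python
import math

def f0(s):
    # radial localisation profile: equal to 1 near the origin,
    # decaying like s**(-2) at infinity (Assumption holds with rho = 2)
    return 1.0 if s <= 1.0 else s ** -2

def R_f_num(a, n=200000, mu_max=200.0):
    # midpoint Riemann sum for R_f(x) with |x| = a:
    #   R_f(x) = int_0^infty dmu/mu [ f0(mu*a) - chi_[0,1](mu) ]
    h = mu_max / n
    total = 0.0
    for k in range(n):
        mu = (k + 0.5) * h   # midpoints avoid the singular point mu = 0
        chi = 1.0 if mu <= 1.0 else 0.0
        total += (f0(mu * a) - chi) / mu * h
    return total

# For this profile, R_f(x) = 1/2 - ln|x|, so differences of values of
# R_f at proportional points recover logarithms of the ratio.
print(R_f_num(1.0))                 # close to 0.5
print(R_f_num(3.0) - R_f_num(6.0))  # close to ln 2 = 0.693...
```

The logarithmic behaviour observed here is consistent with the gradient formula $(\nabla R_f)(x)=-x^{-2}x$ for radial functions recalled in Lemma \ref{function_R} below.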
\begin{Lemma}\label{function_R} Let $f$ satisfy Assumption \ref{assumption_f}. \begin{enumerate} \item[(a)] Assume that $\frac{\partial f}{\partial x_j}(x)$ exists for all $j\in\{1,\ldots,d\}$ and $x\in\R^d$, and suppose that there exists some $\rho>0$ such that $ \big|\frac{\partial f}{\partial x_j}(x)\big| \le{\rm Const.}\;\!\langle x\rangle^{-(1+\rho)} $ for each $x\in\R^d$. Then $R_f$ is differentiable on $\R^d\setminus\{0\}$, and its gradient is given by \begin{equation*} (\nabla R_f)(x)=\int_0^\infty\d\mu\,(\nabla f)(\mu x). \end{equation*} In particular, if $f\in\S(\R^d)$ then $R_f$ belongs to $\cinf(\R^d\setminus\{0\})$. \item[(b)] Assume that $R_f$ belongs to ${\sf C}^m(\R^d\setminus\{0\})$ for some $m\ge1$. Then one has for each $x\in \R^d\setminus\{0\}$ and $t>0$ the homogeneity properties \begin{align} x\cdot(\nabla R_f)(x)&=-1,\label{minusone}\\ t^{|\alpha|}(\partial^\alpha R_f)(tx) &=(\partial^\alpha R_f)(x),\nonumber \end{align} where $\alpha\in\N^d$ is a multi-index with $1\le|\alpha|\le m$. \item[(c)] Assume that $f$ is radial, \ie there exists $f_0\in\linf(\R)$ such that $f(x)=f_0(|x|)$ for almost every $x\in \R^d$. Then $R_f$ belongs to $\cinf(\R^d\setminus\{0\})$, and $(\nabla R_f)(x)=-x^{-2}x$. \end{enumerate} \end{Lemma} In the sequel, we say that a function $f:\R^d\to\C$ is even if $f(x)=f(-x)$ for almost every $x\in\R^d$. \begin{Proposition}\label{f_integrale} Let $f:\R^d\to\C$ be an even function as in Lemma \ref{function_R}.(a). Then we have for each $x\in\R^d$ and each $y\in\R^d\setminus\{0\}$ \begin{equation}\label{tite_formule} \lim_{r\to\infty}\12\int_0^\infty\d t\;\! \Big[f\Big(\frac{x-ty}r\Big)-f\Big(\frac{x+ty}r\Big)\Big] =-x\cdot(\nabla R_f)(y). \end{equation} In particular, if $f$ is radial, the l.h.s. is independent of $f$ and equal to $(x\cdot y)/y^2$. 
\end{Proposition} \begin{proof} The change of variables $\mu:=t/r$, $\nu:=1/r$, and the fact that $f$ is even, give \begin{align} &\lim_{r\to\infty}\12\int_0^\infty\d t\,\textstyle \big[f\big(\frac{x-ty}r\big)-f\big(\frac{x+ty}r\big)\big]\nonumber\\ &=\lim_{\nu\searrow0}\12\int_0^\infty\frac{\d\mu}\nu\,\big[ f(\nu x-\mu y)-f(\nu x+\mu y)\big]\nonumber\\ &=\lim_{\nu\searrow0}\12\int_0^\infty\d\mu\,\textstyle \big\{\frac1\nu\big[f(\nu x-\mu y)-f(-\mu y)\big]-\frac1\nu\big[f(\nu x+\mu y) -f(\mu y)\big]\big\}.\label{integrant} \end{align} By using the mean value theorem and the assumptions of Lemma \ref{function_R}.(a), one obtains that $$ {\textstyle\frac1\nu}\big|f(\nu x\pm\mu y)-f(\pm\mu y)\big| \le{\rm Const.}\sup_{\xi\in[0,1]}\big\langle\xi\nu x\pm\mu y\big\rangle^{-(1+\rho)} $$ for some $\rho>0$. Therefore, if $\mu$ is large enough, the integrand in \eqref{integrant} is bounded by $$ {\rm Const.}\;\!\big\langle\mu|y|-|x|\big\rangle^{-(1+\rho)} $$ for all $\nu\in(0,1)$. This implies that the integrand in \eqref{integrant} is bounded uniformly in $\nu\in(0,1)$ by a function belonging to $\lone\big([0,\infty),\d\mu\big)$. So, we can apply Lebesgue's dominated convergence theorem to interchange the limit on $\nu$ with the integration over $\mu$ in \eqref{integrant}. This, together with the fact that $(\nabla f)(-x)=-(\nabla f)(x)$, leads to the desired result: \begin{align*} \lim_{r\to\infty}\12\int_0^\infty\d t\,\textstyle \big[f\big(\frac{x-ty}r\big)-f\big(\frac{x+ty}r\big)\big] &=\displaystyle\12\int_0^\infty\d\mu\, \big[x\cdot(\nabla f)(-\mu y)-x\cdot(\nabla f)(\mu y)\big]\\ &=-\int_0^\infty\d\mu\,x\cdot(\nabla f)(\mu y)\\ &=-x\cdot(\nabla R_f)(y).\qedhere \end{align*} \end{proof} The result of Proposition \ref{f_integrale} can be extended to less regular functions $f:\R^d\to\C$.
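As a numerical sanity check (ours), the radial case of Proposition \ref{f_integrale} can be tested with the Gaussian $f(z)=\e^{-|z|^2}$ in dimension $d=2$. Following the change of variables $\mu:=t/r$, $\nu:=1/r$ used in the proof, the sketch below evaluates $\12\int_0^\infty\frac{\d\mu}\nu\,[f(\nu x-\mu y)-f(\nu x+\mu y)]$ for a small value of $\nu$ and compares it with the predicted limit $(x\cdot y)/y^2$; the name \texttt{half\_difference} is ours.

```python
import math

def f(z):
    # radial Gaussian localisation function in d = 2
    return math.exp(-(z[0] ** 2 + z[1] ** 2))

def half_difference(x, y, nu, n=100000, mu_max=10.0):
    # midpoint Riemann sum, in the variable mu = t/r (with nu = 1/r), of
    #   (1/2) int_0^infty dmu (1/nu) [ f(nu x - mu y) - f(nu x + mu y) ]
    h = mu_max / n
    total = 0.0
    for k in range(n):
        mu = (k + 0.5) * h
        fm = f((nu * x[0] - mu * y[0], nu * x[1] - mu * y[1]))
        fp = f((nu * x[0] + mu * y[0], nu * x[1] + mu * y[1]))
        total += 0.5 * (fm - fp) / nu * h
    return total

x, y = (1.0, 2.0), (3.0, 1.0)
print(half_difference(x, y, nu=1e-3))                         # close to 0.5
print((x[0] * y[0] + x[1] * y[1]) / (y[0] ** 2 + y[1] ** 2))  # exactly 0.5
```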
The interested reader can check that the result holds for functions $f$ admitting a weak derivative $f'$ such that, for every real line $L\subset\R^d$, $f'$ is of class $\lone$ on $L$ (see \cite[Thm.~2.1.6]{Zie89}). We only present here the case (of particular interest for the theory of classical time delay) where $f$ is the characteristic function $\chi_1$ for the unit ball $B_1:=\{x\in\R^d\mid|x|\le 1\}$. \begin{Lemma}\label{boulette} One has for each $x\in\R^d$ and each $y\in\R^d\setminus\{0\}$ $$ \lim_{r\to\infty}\12\int_0^\infty\d t\;\! \Big[\chi_1\Big(\frac{x-ty}r\Big)-\chi_1\Big(\frac{x+ty}r\Big)\Big] =\frac{x\cdot y}{y^2}\;\!. $$ \end{Lemma} \begin{proof} Direct calculations and the change of variables $\mu:=t/r$, $\nu:=1/r$, give \begin{align*} \int_0^\infty\d t\,\textstyle\chi_1\big(\frac{x\pm ty}r\big) =\int_0^\infty\frac{\d\mu}\nu~\chi_{[0,1]}\big(|\nu x\pm\mu y|^2\big) &=\int_0^\infty\frac{\d\mu}\nu~\textstyle\chi_{[0,y^{-2}]} \Big(\frac{\nu^2x^2}{y^2}\pm\frac{2\nu\mu x\cdot y}{y^2}+\mu^2\Big)\\ &=\int_0^\infty\frac{\d\mu}\nu~\textstyle\chi_{[0,y^{-2}]} \Big(\big(\mu\pm\frac{\nu x\cdot y}{y^2}\big)^2 +\frac{\nu^2}{y^4}\big(x^2y^2-(x\cdot y)^2\big)\Big)\\ &=\int_0^\infty\frac{\d\mu}\nu~\textstyle\chi_{[-a(\nu,x,y),y^{-2}-a(\nu,x,y)]} \Big(\big(\mu\pm\frac{\nu x\cdot y}{y^2}\big)^2\Big), \end{align*} with $a(\nu,x,y):=\frac{\nu^2}{y^4}\big(x^2y^2-(x\cdot y)^2\big)$. Now, $a(\nu,x,y)\ge0$, and $y^{-2}-a(\nu,x,y)\ge0$ if $\nu>0$ is small enough. So, the last expression is equal to \begin{align*} \int_0^\infty\frac{\d\mu}\nu~\textstyle\chi_{[0,y^{-2}-a(\nu,x,y)]} \Big(\big(\mu\pm\frac{\nu x\cdot y}{y^2}\big)^2\Big) &=\int_0^\infty\frac{\d\mu}\nu~\textstyle \chi_{\big[-\sqrt{y^{-2}-a(\nu,x,y)}\mp\frac{\nu x\cdot y}{y^2}, \sqrt{y^{-2}-a(\nu,x,y)}\mp\frac{\nu x\cdot y}{y^2}\big]}(\mu)\\ &=\frac1\nu\sqrt{y^{-2}-a(\nu,x,y)}\mp\frac{x\cdot y}{y^2} \end{align*} if $\nu$ is small enough.
This implies that \begin{align*} &\lim_{r\to\infty}\12\int_0^\infty\d t\,\textstyle \big[\chi_1\big(\frac{x-ty}r\big)-\chi_1\big(\frac{x+ty}r\big)\big]\\ &=\displaystyle\lim_{\nu\searrow0}\frac12 \Big(\frac1\nu\sqrt{y^{-2}-a(\nu,x,y)}+\frac{x\cdot y}{y^2} -\frac1\nu\sqrt{y^{-2}-a(\nu,x,y)}+\frac{x\cdot y}{y^2}\Big)\\ &=\frac{x\cdot y}{y^2}\;\!.\qedhere \end{align*} \end{proof} For the next corollary, we need the following version of the Poisson summation formula (see \cite[Thm.~5]{DF37} or \cite[Thm.~45]{Tit48}). \begin{Lemma}\label{poisson} Let $g:(0,\infty)\to\C$ be a continuous function of bounded variation in $(0,\infty)$. Suppose that $\lim_{t\to\infty}g(t)=0$ and that the improper Riemann integral $\int_0^\infty\d t\,g(t)$ exists. Then we have the identity $$ \12\;\!g(0)+\sum_{n\ge1}g(n) =\int_0^\infty\d t\,g(t)+2\sum_{n\ge1}\int_0^\infty\d t\,\cos(2\pi nt)g(t). $$ \end{Lemma} \begin{Corollary}\label{poiscaille} Let $f:\R^d\to\C$ be an even function such that \begin{enumerate} \item[(i)] $f=1$ on a neighbourhood of $\,0$. \item[(ii)] For each $\alpha\in\N^d$ with $|\alpha|\le2$, the derivative $\partial^\alpha f$ exists and satisfies $ |(\partial^\alpha f)(x)|\le{\rm Const.}\;\!\langle x\rangle^{-(1+\rho)} $ for some $\rho>0$ and all $x\in\R^d$. \end{enumerate} Then we have for each $x\in\R^d$ and each $y\in\R^d\setminus\{0\}$ \begin{equation}\label{formulette} \lim_{r\to\infty}\12\sum_{n\ge1} \Big[f\Big(\frac{x-ny}r\Big)-f\Big(\frac{x+ny}r\Big)\Big] =-x\cdot(\nabla R_f)(y). \end{equation} In particular, if $f$ is radial, the l.h.s. is independent of $f$ and equal to $(x\cdot y)/y^2$. \end{Corollary} \begin{proof} For $r>0$ given, the function $$ \textstyle g_r:(0,\infty)\to\C,\quad t\mapsto g_r(t):=f\big(\frac{x-ty}r\big)-f\big(\frac{x+ty}r\big), $$ satisfies all the hypotheses of Lemma \ref{poisson}. 
Thus $$ \lim_{r\to\infty}\12\sum_{n\ge1}\textstyle \big[f\big(\frac{x-ny}r\big)-f\big(\frac{x+ny}r\big)\big] =\displaystyle\lim_{r\to\infty}\12\int_0^\infty\d t\,g_r(t) +\lim_{r\to\infty}\sum_{n\ge1}\int_0^\infty\d t\,\cos(2\pi nt)g_r(t). $$ The first term is equal to $-x\cdot(\nabla R_f)(y)$ due to Proposition \ref{f_integrale}. For the second term, the change of variables $\mu:=t/r$, $\nu:=1/r$, and two integrations by parts give \begin{align*} &\lim_{r\to\infty}\sum_{n\ge1}\int_0^\infty\d t\,\cos(2\pi nt)g_r(t)\\ &=\lim_{\nu\searrow0}\sum_{n\ge1}\int_0^\infty\frac{\d\mu}\nu\, \cos(2\pi n\mu/\nu) \big[f(\nu x-\mu y)-f(\nu x+\mu y)\big]\\ &=\sum_jy_j\lim_{\nu\searrow0}\sum_{n\ge1}\int_0^\infty\d\mu\, \frac{\sin(2\pi n\mu/\nu)}{2\pi n} \bigg(\frac{\partial f}{\partial x_j}(\nu x-\mu y) +\frac{\partial f}{\partial x_j}(\nu x+\mu y)\bigg)\\ &=\sum_jy_j\lim_{\nu\searrow0}\sum_{n\ge1}\frac{2\nu}{(2\pi n)^2} \frac{\partial f}{\partial x_j}\big(\nu x\big)\\ &\quad-\sum_{j,k}y_jy_k\lim_{\nu\searrow0}\sum_{n\ge1}\int_0^\infty\d\mu\, \frac{\nu\cos(2\pi n\mu/\nu)}{(2\pi n)^2} \bigg(\frac{\partial^2f}{\partial x_k\partial x_j}(\nu x-\mu y) +\frac{\partial^2f}{\partial x_k\partial x_j}(\nu x+\mu y)\bigg). \end{align*} Since $\sum_{n\ge1}1/n^2<\infty$, one sees directly that the first term is equal to zero. Using the fact that $ \big|\frac{\partial^2f}{\partial x_k\partial x_j}(x)\big| \le{\rm Const.}\<x\>^{-(1+\rho)} $ for some $\rho>0$ and all $x\in\R^d$, one also obtains that the second term is equal to zero. Therefore, $$ \lim_{r\to\infty}\12\sum_{n\ge1}\textstyle \big[f\big(\frac{x-ny}r\big)-f\big(\frac{x+ny}r\big)\big] =-x\cdot(\nabla R_f)(y), $$ and the claim is proved. \end{proof} \section{Hamiltonian dynamics}\label{sec_ham} \setcounter{equation}{0} In the sequel, we require the presence of a symplectic structure in order to speak of Hamiltonian dynamics. However our results still hold if one is only given a Poisson structure. 
A lack of examples and some complications in infinite dimension regarding the identification of vector fields with derivations have led us to restrict the discussion to the symplectic case for the sake of clarity. \subsection{Critical points}\label{Sec_Crit} Let $M$ be a symplectic manifold, \ie a smooth manifold endowed with a closed two-form $\omega$ such that the morphism $ TM\ni X\mapsto\omega^\flat(X):=\iota_X\omega $ is an isomorphism. In infinite dimension, such a manifold is said to be a strong symplectic manifold (as opposed to a weak symplectic manifold, for which the above map is only injective; see \cite[Sec.~8.1]{AMR88}). When the dimension is finite, it must be even, say equal to $2n$, and the $2n$-form $\omega^n$ must be a volume form. The Poisson bracket is defined as follows: for each $f\in\cinf(M)$ we define the vector field $X_f:=(\omega^\flat)^{-1}(\d f)$, \ie $\d f(\;\!\cdot\;\!)=\omega(X_f,\;\!\cdot\;\!)$, and set $\{f,g\}:=\omega(X_f,X_g)$ for each $f,g\in\cinf(M)$. In the sequel, the function $H\in\cinf(M)$ is a Hamiltonian with complete vector field $X_H$. So, the flow $\{\varphi_t\}$ associated to $H$ is defined for all $t\in\R$, it preserves the Poisson bracket: $$ \big\{f\circ\varphi_t,g\circ\varphi_t\big\}=\{f,g\}\circ\varphi_t,\quad t\in\R, $$ and satisfies the usual evolution equation: \begin{equation}\label{evolution} \frac\d{\d t}\;\!f\circ\varphi_t=\{f,H\}\circ\varphi_t,\quad t\in\R. \end{equation} In particular, the Hamiltonian $H$ is preserved along its flow, \ie $H\circ\varphi_t=H$ for all $t\in\R$. We also consider an abstract family $\Phi\equiv(\Phi_1,\ldots,\Phi_d)\in\cinf(M;\R^d)$ of observables\footnote{If need be, the results of this article can be extended to the case where $H$ and $\Phi_j$ are functions of class $\cone$ with $\{\Phi_j,H\}$ also $\cone$.}, and define the associated functions $$ \partial_jH:=\{\Phi_j,H\}\in\cinf(M)\qquad\hbox{and}\qquad \nabla H:=(\partial_1H,\ldots,\partial_dH)\in\cinf(M;\R^d).
$$ Then, one can introduce a natural set of critical points: \begin{Definition}[Critical points] The set $$ \Crit(H,\Phi):=(\nabla H)^{-1}(\{0\})\subset M $$ is called the set of critical points associated to $H$ and $\Phi$. \end{Definition} The set $\Crit(H,\Phi)$ is closed in $M$ since $\nabla H$ is continuous. Furthermore, since $\{\Phi_j,H\}=\d\Phi_j(X_H)$, the set $$ \Crit(H):=\big\{m\in M\mid X_H(m)=0\big\}\equiv\big\{m\in M\mid \d H_m=0\big\} $$ of usual critical points of $H$ satisfies the inclusion $\Crit(H)\subset\Crit(H,\Phi)$. Our main assumption is the following: \begin{Assumption}\label{AssCom} One has $\big\{\{\Phi_j,H\},H\big\}=0\,$ for each $j\in\{1,\ldots,d\}$. \end{Assumption} Assumption \ref{AssCom} imposes that all the brackets $\{\Phi_j,H\}$ are first integrals of the motion given by $H$. When $M$ is a symplectic manifold of dimension $2n$, these first integrals are functions of $k \in \{1,2,\ldots,2n-1\}$ independent first integrals $J_1\equiv H,J_2,\ldots,J_k$ ($J_1,\ldots,J_k$ are independent in the sense that their differentials are linearly independent at each point of $M$)\footnote{In the setup of Liouville's theorem \cite[Sec.~49]{Arn89}, we have $k=n$ and the first integrals are mutually in involution. Furthermore, on the connected components of submanifolds given by fixing the values of these $n$ integrals in involution, the flow is conjugate to a translation flow on cylinders $\R^{n-\ell}\times\mathbb T^\ell$ (see \cite[Thm.~5.2.24]{AM78}).}. So, one should have $\{\Phi_j,H\}=g_j(J_1,\ldots,J_k)$ for some functions $g_j\in\cinf(\R^k;\R)$. Using the properties of $\{\;\!\cdot\;\!,H\}$ as a derivation, one infers that $\big\{g_j(J_1,\ldots,J_k)^{-1}\Phi_j,H\big\}=1$ outside $g_j(J_1,\ldots,J_k)^{-1}(\{0\})$.
Thus, if $k$ first integrals such as $J_1,\ldots,J_k$ are known, finding functions $\Phi_j$ satisfying Assumption \ref{AssCom} is to some extent equivalent to finding functions $\Phi_0$ solving $\{\Phi_0,H\}=1$ (the equivalence is not complete because these functions $\Phi_0$ are in general not $\cinf$ since $\{\;\!\cdot\;\!,H\}$ is necessarily $0$ on $\Crit(H)$). For further use, we define the $\cinf$-function $T_f:M\setminus\Crit(H,\Phi)\to\R$ by $$ T_f:=-\Phi\cdot(\nabla R_f)(\nabla H). $$ When $f$ is radial, $T_f$ is independent of $f$ and equal to $$ T:=\Phi\cdot\frac{\nabla H}{(\nabla H)^2}\;\!, $$ due to Lemma \ref{function_R}.(c). In fact, the closed subset $T^{-1}(\{0\})$ of $M\setminus\Crit(H,\Phi)$ admits an interesting interpretation: If we consider the observables $\Phi_j$ as the components of an abstract position observable $\Phi$, then $\nabla H$ can be seen as the velocity vector for the Hamiltonian $H$, and the condition \begin{equation}\label{truffette} T(m)=0~\iff~\Phi(m)\cdot(\nabla H)(m)=0 \end{equation} means that the position and velocity vectors are orthogonal at $m\in T^{-1}(\{0\})$. Alternatively, one has $T(m)=0$ if and only if the vector fields $X_{|\Phi|^2}$ and $X_H$ are $\omega$-orthogonal at $m$, that is, $\omega_m\big(X_{|\Phi|^2}(m),X_H(m)\big)=0$. The simplest example illustrating the condition \eqref{truffette} is when $\Phi(q,p):=q$ and $H(q,p):=\12|p|^2$ are the usual position and kinetic energy on $(M,\omega):=\big(\R^{2n},\sum_{j=1}^n\d q^j\wedge\d p_j\big)$. In such a case, \eqref{truffette} reduces to $q\cdot p=0$. \subsection{Sojourn times of classical orbits}\label{sec_main_form} The next theorem is our main result. We refer to Remark \ref{Rem_Int} below for its interpretation. \begin{Theorem}\label{IntCont} Let $H$ and $\Phi$ satisfy Assumption \ref{AssCom}. Let $f:\R^d\to\C$ be an even function as in Lemma \ref{function_R}.(a).
Then we have for each point $m\in M\setminus\Crit(H,\Phi)$ \begin{equation}\label{BelleLouloute} \lim_{r\to\infty}\12\int_0^\infty\d t\,\big[\big(f(\Phi/r)\circ\varphi_{-t}\big)(m) -\big(f(\Phi/r)\circ\varphi_t\big)(m)\big] =T_f(m). \end{equation} In particular, if $f$ is radial, the l.h.s. is independent of $f$ and equal to $\Phi(m)\cdot\frac{(\nabla H)(m)}{(\nabla H)(m)^2}$\;\!. \end{Theorem} \begin{proof} Equation \eqref{evolution} implies that $$ \frac\d{\d t}\;\!\Phi_j\circ\varphi_t=\{\Phi_j,H\}\circ\varphi_t $$ for each $t\in\R$. Similarly, using Assumption \ref{AssCom}, one gets that $$ \frac\d{\d t}\;\!\{\Phi_j,H\}\circ\varphi_t =\big\{\{\Phi_j,H\},H\big\}\circ\varphi_t =0. $$ So, $\Phi_j$ varies linearly in $t$ along the flow of $X_H$, and one gets for any $m\in M$ $$ (\Phi_j\circ\varphi_t)(m) =(\Phi_j\circ\varphi_0)(m) +t\,\Big(\frac\d{\d t}\;\!(\Phi_j\circ\varphi_t)(m)\Big|_{t=0}\Big) =\Phi_j(m)+t(\partial_jH)(m). $$ This, together with Formula \eqref{tite_formule}, gives \begin{align*} &\lim_{r\to\infty}\12\int_0^\infty\d t\, \big[\big(f(\Phi/r)\circ\varphi_{-t}\big)(m) -\big(f(\Phi/r)\circ\varphi_t\big)(m)\big]\\ &=\lim_{r\to\infty}\12\int_0^\infty\d t\,\textstyle \Big[f\Big(\frac{\Phi(m)-t(\nabla H)(m)}r\Big) -f\Big(\frac{\Phi(m)+t(\nabla H)(m)}r\Big)\Big]\\ &=T_f(m).\qedhere \end{align*} \end{proof} Due to Lemma \ref{boulette}, the proof of Theorem \ref{IntCont} also works in the case $f=\chi_1$. So, we have the following corollary. \begin{Corollary}\label{cor_car} Let $H$ and $\Phi$ satisfy Assumption \ref{AssCom}. Then we have for each point $m\in M\setminus\Crit(H,\Phi)$ \begin{equation}\label{2eme_loulette} \lim_{r\to\infty}\12\int_0^\infty\d t\,\textstyle \big[\big(\chi_1(\Phi/r)\circ\varphi_{-t}\big)(m) -\big(\chi_1(\Phi/r)\circ\varphi_t\big)(m)\big] =\Phi(m)\cdot\frac{(\nabla H)(m)}{(\nabla H)(m)^2}\;\!. 
\end{equation} \end{Corollary} We know from the proof of Theorem \ref{IntCont} that \begin{equation}\label{linear} (\Phi_j\circ\varphi_t)(m)=\Phi_j(m)+t(\partial_jH)(m) \quad\hbox{for all $t\in\R$ and all $m\in M$.} \end{equation} Therefore, the l.h.s. of \eqref{BelleLouloute} and \eqref{2eme_loulette} are zero if $m\in\Crit(H,\Phi)$. For the next remark, we recall that any selfadjoint operator $A$ in a Hilbert space $\H$, with spectral measure $E^A(\;\!\cdot\;\!)$, is reduced by an orthogonal decomposition \cite[Sec.~7.4]{Wei80} $$ \H =\H_{\rm ac}(A)\oplus\H_{\rm p}(A)\oplus\H_{\rm sc}(A) \equiv\H_{\rm ac}(A)\oplus\H_{\rm s}(A), $$ where $\H_{\rm ac}(A),\H_{\rm p}(A),\H_{\rm sc}(A)$ and $\H_{\rm s}(A)$ are respectively the absolutely continuous, the pure point, the singular continuous and the singular subspaces of $A$. Furthermore, a vector $\varphi\in\H$ is said to have spectral support with respect to $A$ in a set $J\subset\R$ if $\varphi=E^A(J)\varphi$. \begin{Remark}\label{rem_spec} If $m\in\Crit(H,\Phi)$, then one must have $\varphi_t(m)\in\Crit(H,\Phi)$ for all $t\in\R$, since \eqref{linear} implies $(\partial_jH)(\varphi_t(m))=(\partial_jH)(m)$ for all $t\in\R$. Conversely, if $m\in M\setminus\Crit(H,\Phi)$, then one must have $\varphi_t(m)\neq m$ for all $t\neq0$, since $\Phi$ cannot take two different values at the same point. So, under Assumption \ref{AssCom}, each orbit $\{\varphi_t(m)\}_{t\in\R}$ either stays in $\Crit(H,\Phi)$ if $m\in\Crit(H,\Phi)$, or stays outside $\Crit(H,\Phi)$ and is not periodic if $m\notin\Crit(H,\Phi)$. In the corresponding Hilbertian framework \cite{RT10}, the Hamiltonian $H$ and the functions $\Phi_j$ are selfadjoint operators in a Hilbert space $\H$, and the critical set $\kappa$ associated to $H$ and $\Phi$ is a closed subset of the spectrum of $H$. Outside $\kappa$, the spectrum of $H$ is purely absolutely continuous \cite[Thm.~3.6.(a)]{RT10}.
Therefore, vectors $\psi\in\H$ having spectral support with respect to $H$ in $\kappa$ belong to the singular subspace $\H_{\rm s}(H)$ of $H$, and thus lead to orbits $\{\e^{itH}\psi\}_{t\in\R}$ confined in $\H_{\rm s}(H)$ (for instance, $\e^{itH}\psi$ stays in a one-dimensional subspace of $\H$ if $\psi$ is an eigenvector of $H$). Conversely, vectors $\psi\in\H$ having spectral support outside $\kappa$ belong to the absolutely continuous subspace $\H_{\rm ac}(H)$ of $H$, and thus lead to orbits $\{\e^{itH}\psi\}_{t\in\R}$ contained in $\H_{\rm ac}(H)$ (see \cite[Prop.~5.7]{Amr09} for the escape properties of such orbits). These properties are the quantum counterparts of the confinement to $\Crit(H,\Phi)$ (when $m\in\Crit(H,\Phi)$) and the non-periodicity outside $\Crit(H,\Phi)$ (when $m\notin\Crit(H,\Phi)$) of the classical orbits $\{\varphi_t(m)\}_{t\in\R}$. \end{Remark} \begin{Lemma}\label{LemChile} If $H$, $\Phi$ and $f$ satisfy the assumptions of Theorem \ref{IntCont}, then we have \begin{equation}\label{E_derivative} \{T_f,H\}\circ\varphi_t\equiv\frac\d{\d t}\;\!(T_f\circ\varphi_t)=1 \end{equation} on $M\setminus\Crit(H,\Phi)$. In particular, one has $T_f\circ\varphi_t=T_f+t$ on $M\setminus\Crit(H,\Phi)$. \end{Lemma} If we interpret the map $\frac\d{\d H}:=\{T_f,\;\!\cdot\;\!\}$ as a derivation on $\cinf\big(M\setminus\Crit(H,\Phi)\big)$, this implies that $T_f$ can be seen as an observable ``derivative with respect to the energy $H$'' on $M\setminus\Crit(H,\Phi)$, since $$ \textstyle\frac\d{\d H}(H)=\{T_f,H\}=1 $$ on each orbit $\{\varphi_t(m)\}_{t\in\R}$, with $m\in M\setminus\Crit(H,\Phi)$. \begin{proof}[Proof of Lemma \ref{LemChile}] The first equality in \eqref{E_derivative} follows from \eqref{evolution}. For the second one, we use successively the fact that $\varphi_t$ leaves $H$ and the Poisson bracket invariant, Assumption \ref{AssCom}, and Equation \eqref{minusone}.
Doing so, we get on $M\setminus\Crit(H,\Phi)$ the following equalities \begin{align*} \frac\d{\d t}\;\!(T_f\circ\varphi_t) =-\frac\d{\d t}\;\!(\Phi\circ\varphi_t)\cdot (\nabla R_f)\big(\{\Phi\circ\varphi_t,H\}\big) &=-\frac\d{\d t}\;\!\big(\Phi+t(\nabla H)\big)\cdot (\nabla R_f)\big(\{\Phi+t(\nabla H),H\}\big)\\ &=-\frac\d{\d t}\;\!\big(\Phi+t(\nabla H)\big)\cdot(\nabla R_f)(\nabla H)\\ &=-(\nabla H)\cdot(\nabla R_f)(\nabla H)\\ &=1.\qedhere \end{align*} \end{proof} \begin{Remark}\label{Rem_Int} Theorem \ref{IntCont} relates the sojourn times of classical orbits within expanding regions of $M$ to the observable $T_f$. If we consider the observables $\Phi_j$ as the components of an abstract position observable $\Phi$, then the l.h.s. of Formula \eqref{BelleLouloute} has the following meaning: For $r>0$ and $m\in M\setminus\Crit(H,\Phi)$ fixed, it can be interpreted as the difference of times spent by the classical orbit $\{\varphi_t(m)\}_{t\in\R}$ in the past (first term) and in the future (second term) within the region $\Sigma_r:=\supp[f(\Phi/r)]\subset M$ defined by the localisation observable $f(\Phi/r)$. Thus, Formula \eqref{BelleLouloute} shows that this difference of times tends as $r\to\infty$ to the value of the observable $T_f$ at $m$. Since $T_f$ can be interpreted as an observable derivative with respect to the energy $H$, Formula \eqref{BelleLouloute} provides a new relation between sojourn times and variation of energy along classical orbits. \end{Remark} As a final result, we give a discrete-time counterpart of Theorem \ref{IntCont}, which could be of some interest in the context of approximation of symplectomorphisms by time-$1$ maps of Hamiltonian flows (see \eg \cite{BG94}, \cite[Appendix~B]{Har00}, \cite{KP94} and references therein). \begin{Theorem}\label{thm_discrete} Let $H$ and $\Phi$ satisfy Assumption \ref{AssCom}. Let $f:\R^d\to\C$ be an even function such that \begin{enumerate} \item[(i)] $f=1$ on a neighbourhood of $\,0$.
\item[(ii)] For each $\alpha\in\N^d$ with $|\alpha|\le2$, the derivative $\partial^\alpha f$ exists and satisfies $|(\partial^\alpha f)(x)|\le{\rm Const.}\<x\>^{-(1+\rho)}$ for some $\rho>0$ and all $x\in\R^d$. \end{enumerate} Then we have for each point $m\in M\setminus\Crit(H,\Phi)$ $$ \lim_{r\to\infty}\12\sum_{n\ge1}\big[\big(f(\Phi/r)\circ\varphi_{-n}\big)(m) -\big(f(\Phi/r)\circ\varphi_n\big)(m)\big] =T_f(m). $$ In particular, if $f$ is radial, the l.h.s. is independent of $f$ and equal to $\Phi(m)\cdot\frac{(\nabla H)(m)}{(\nabla H)(m)^2}$\;\!. \end{Theorem} \begin{proof} Let $m\in M\setminus\Crit(H,\Phi)$. Then we have by Equation \eqref{linear} \begin{align*} &\lim_{r\to\infty}\12\sum_{n\ge1} \big[\big(f(\Phi/r)\circ\varphi_{-n}\big)(m) -\big(f(\Phi/r)\circ\varphi_n\big)(m)\big]\\ &=\lim_{r\to\infty}\12\sum_{n\ge1}\textstyle \Big[f\Big(\frac{\Phi(m)-n(\nabla H)(m)}r\Big) -f\Big(\frac{\Phi(m)+n(\nabla H)(m)}r\Big)\Big], \end{align*} and the claim follows by Formula \eqref{formulette}. \end{proof} \section{Examples}\label{exemp} \setcounter{equation}{0} In this section we show that Assumption \ref{AssCom} is satisfied in various situations. In these situations, all the results of Section \ref{sec_ham}, such as Theorem \ref{IntCont} or Formula \eqref{E_derivative}, hold. Some of the examples presented here are the classical counterparts of examples discussed in \cite[Sec.~7]{RT10} in the context of Hilbertian theory. The configuration space of the system under consideration will sometimes be $\R^n$, and the corresponding symplectic manifold $M=T^*\R^n\simeq\R^{2n}$. In that case, we use the notation $(q,p)$, with $q\equiv(q^1,\ldots,q^n)$ and $p\equiv(p_1,\ldots,p_n)$, for the canonical coordinates on $M$, and set $\omega:=\sum_{j=1}^n\d q^j\wedge\d p_j$ for the canonical symplectic form. We always assume that $f=\chi_1$ or that $f$ satisfies the hypotheses of Theorem \ref{IntCont}.
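As a concrete illustration (ours) of Corollary \ref{cor_car}, consider the purely kinetic case $H(q,p):=\12|p|^2$ and $\Phi(q,p):=q$ on $\R^{2n}$, for which $\varphi_t(q,p)=(q+tp,p)$. The sojourn of the orbit in the ball of radius $r$ reduces to a quadratic inequality in $t$, and for $r>|q|$ the half-difference of past and future sojourn times equals $q\cdot p/|p|^2$ exactly, independently of $r$. The Python sketch below (the name \texttt{sojourn\_times} is ours) verifies this.

```python
import math

def sojourn_times(q, p, r):
    # The free orbit is t -> q + t p; it lies in the ball of radius r
    # exactly for t between the roots of
    #   |p|^2 t^2 + 2 (q.p) t + |q|^2 - r^2 = 0.
    p2 = sum(c * c for c in p)
    qp = sum(a * b for a, b in zip(q, p))
    q2 = sum(c * c for c in q)
    root = math.sqrt(qp * qp - p2 * (q2 - r * r))
    t_minus = (-qp - root) / p2
    t_plus = (-qp + root) / p2
    # for r > |q| one has t_minus < 0 < t_plus, so the times spent in the
    # ball along the past and future half-orbits are -t_minus and t_plus
    return -t_minus, t_plus

q, p, r = (1.0, -2.0), (0.5, 1.5), 50.0
past, future = sojourn_times(q, p, r)
qp_over_p2 = (q[0] * p[0] + q[1] * p[1]) / (p[0] ** 2 + p[1] ** 2)
print(0.5 * (past - future), qp_over_p2)  # both are -1.0 here (up to rounding), for any r > |q|
```

The agreement is exact because the square roots cancel in the sum $t_-+t_+=-2\,q\cdot p/|p|^2$, mirroring the cancellation in the proof of Lemma \ref{boulette}.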
\subsection{$\boldsymbol{\nabla H=g(H)}$} Suppose that there exists a function $g\equiv(g_1,\ldots,g_d)\in C^\infty(\R;\R^d)$ such that $\nabla H=g(H)$. Then $H$ and $\Phi$ satisfy Assumption \ref{AssCom} since $\{g_j(H),H\}=0$ for each $j$. Furthermore, one has $\Crit(H,\Phi)=(g\circ H)^{-1}(\{0\})$, and $T_f=-\Phi\cdot(\nabla R_f)\big(g(H)\big)$ on $M\setminus\Crit(H,\Phi)$. We distinguish various cases: \begin{enumerate} \renewcommand{\labelenumi}{{\normalfont (\Alph{enumi})}} \item Suppose that $g$ is constant, \ie $g=v\in\R^d\setminus\{0\}$. Then $\Crit(H)=\Crit(H,\Phi)=\varnothing$, and we have the equality $T_f=-\Phi\cdot(\nabla R_f)(v)$ on the whole of $M$. Typical examples of functions $H$ and $\Phi$ fitting into this construction are Friedrichs-type Hamiltonians and position functions. For illustration, we mention the case (with $d=n$) of $H(q,p):=v\cdot p+V(q)$ and $\Phi(q,p):=q$ on $M:=\R^{2n}$, with $v\in\R^n\setminus\{0\}$ and $V\in\cinf(\R^n;\R)$. In such a case, one has $\nabla H=v$ and $$ \textstyle \varphi_t(q,p)=\big(vt+q,p-\int_0^t\d s\,(\nabla V)(vs+q)\big). $$ Stark-type Hamiltonians and momentum functions also fit into the construction, \ie $H(q,p):=h(p)+v\cdot q$ and $\Phi(q,p):=p$ on $M:=\R^{2n}$, with $v\in\R^n\setminus\{0\}$ and $h\in \cinf(\R^n;\R)$. In such a case, one has $\nabla H=-v$ and $$ \textstyle \varphi_t(q,p)=\big(q+\int_0^t\d s\,(\nabla h)(p-vs),p-vt\big). $$ Note that these two examples are interesting since the Hamiltonians $H$ contain not only a kinetic part, but also a potential perturbation. \item Suppose that $\Phi$ has only one component ($d=1$), and assume that $g(\lambda)=\lambda$ for all $\lambda\in\R$ (in the Hilbertian framework, one says in such a case that $H$ is $\Phi$-homogeneous \cite{BG91}). Then $\Crit(H,\Phi)=H^{-1}(\{0\})$ and we have the equality $T_f=-\Phi(\nabla R_f)(H)$ on $M\setminus H^{-1}(\{0\})$. 
We present a general class of pairs $(H,\Phi)$ satisfying these assumptions: The Hamiltonian flow of the function $D(q,p):=q\cdot p$ on $\R^{2n}$ is given by $\varphi^D_t(q,p)=(\e^tq,\e^{-t}p)$. So, $D$ is the generator of a dilation group on $\R^{2n}$ (in the Hilbertian framework, the corresponding operator is the usual generator of dilations on $\ltwo(\R^n)$, see \eg \cite[Sec.~1.2]{ABG}). Therefore, the relation $\{D,H\}\propto H$ holds for a large class of homogeneous functions $H$ on $\R^{2n}$, due to Euler's homogeneous function theorem. Let us consider an explicit situation. Take $\alpha>0$ and let $M$ be some open subset of $(\R^n\setminus\{0\})\times\R^n$. Define on $M$ the function $\Phi:=\frac1\alpha D$ and the Hamiltonian $H(q,p):=h(p)+V(q)$, where $h\in\cinf(\R^n;\R)$ is positive homogeneous of degree $\alpha$ and $V\in\cinf(\R^n\setminus\{0\};\R)$ is positive homogeneous of degree $-\alpha$. Then one has $\nabla H\equiv\{\Phi,H\}=H$ on $M$, and \begin{align*} \Crit(H) &=\big\{(q,p)\in M\mid(\nabla h)(p)=(\nabla V)(q)=0\big\}\\ &\subset\big\{(q,p)\in M\mid p\cdot(\nabla h)(p)=q\cdot(\nabla V)(q)=0\big\}\\ &=\big\{(q,p)\in M\mid H(q,p)=0\big\}\\ &=\Crit(H,\Phi). \end{align*} Furthermore, if the functions $h$ and $V$ and the subset $M$ are well chosen, the Hamiltonian vector field $X_H$ of $H$ is complete. For instance, \begin{enumerate} \item[(i)] If $V\equiv0$, then one can take $M=\R^{2n}$, and one has $\varphi_t(q,p)=\big(q+t(\nabla h)(p),p\big)$ and $$ \Crit(H) =\big\{(q,p)\in M\mid(\nabla h)(p)=0\big\} \subset\big\{(q,p)\in M\mid p\cdot(\nabla h)(p)=0\big\} =\Crit(H,\Phi) $$ (when $h(p)=\12|p|^2$ is the classical kinetic energy, one has $\Crit(H)=\Crit(H,\Phi)=\R^n\times\{0\}$). \item[(ii)] Let $K>0$. Then the Hamiltonian given by $H(q,p):=\12(|p|^2+K|q|^{-2})$ on $M:=(\R^n\setminus\{0\})\times\R^n$ has a complete Hamiltonian vector field $X_H$.
To see it, we use the push-forward of $X_H$ by the diffeomorphism $\iota:(\R^n\setminus\{0\})\times\R^n\to(\R^n\setminus\{0\})\times\R^n$, $(q,p)\mapsto\big(q|q|^{-2},p\big)\equiv(r,p)$, namely, $$ [\iota_*(X_H)](r,p)=\sum_j \left(\big(|r|^2p_j-2(p\cdot r)r^j\big)\frac\partial{\partial r^j}\Big|_{(r,p)} +Kr^j|r|^2\frac\partial{\partial p_j}\Big|_{(r,p)}\right). $$ Then, we obtain that $\iota_*(X_H)$ is complete by using the criterion \cite[Prop.~2.1.20]{AM78} with the proper function $g:(\R^n\setminus\{0\})\times\R^n\to[0,\infty)$ given by $g(r,p):=|p|^2+K|r|^2$. Since $\iota$ is a diffeomorphism, this implies that $X_H$ is also complete (see \cite[Lemma~1.6.4]{Jos05}). \end{enumerate} \item Many other examples with $\nabla H=g(H)$ can be obtained using homogeneous Hamiltonian functions. For instance, consider $H(q,p):=q^2/q^1+q^1/q^2$ and $\Phi(q,p):=p_1q^2+p_2q^1$ on $M:=(\R^2\setminus\{0\})\times\R^2$. Then one has $\nabla H=H^2-4$, $\varphi_t(q,p)=\big(q,p-t\frac{\partial H}{\partial q}(q,p)\big)$ and $$ \Crit(H)=\Crit(H,\Phi)=\big\{q\in\R^2\setminus\{0\}\mid q^1=\pm q^2\big\}\times\R^2. $$ \end{enumerate} \subsection{$\boldsymbol{H=h(p)}$} Consider on $M:=\R^{2n}$ a purely kinetic Hamiltonian $H(q,p):=h(p)$ with $h\in\cinf(\R^n;\R)$, and take the usual position functions $\Phi(q,p):=q$ with $d=n$. Then $\varphi_t(q,p)=\big(q+t(\nabla h)(p),p\big)$, $\nabla H=\nabla h$, and Assumption \ref{AssCom} is satisfied: $$ \big\{\{\Phi_j,H\},H\big\} =\big\{(\partial_jh)(p),h(p)\big\} =0. $$ In this example, we have $\Crit(H)=\Crit(H,\Phi)=\R^n\times(\nabla h)^{-1}(\{0\})$. \subsection{The assumption $\boldsymbol{\{\{\Phi_j,H\},H\}=0}\,$ as a differential equation}\label{S-EqDiff} Consider on $M:=\R^{2n}$ a Hamiltonian function $H$ with partial derivatives $H_{p_k}:=\partial H/\partial p_k$ and $H_{q^k}:=\partial H/\partial q^k$.
Then, finding the functions $\Phi_j$ of Assumption \ref{AssCom} amounts to solving for $\Phi_0$ the second-order linear equation $$ \big\{\{\Phi_0,H\},H\big\} \equiv\bigg(\sum_{\ell=1}^n \big(H_{p_\ell}\partial_{q^\ell} -H_{q^\ell}\partial_{p_\ell}\big)\bigg)^2\Phi_0 =0. $$ As observed in Section \ref{Sec_Crit}, this is essentially equivalent (when $k$ independent first integrals $J_1\equiv H,J_2,\ldots,J_k$ are known) to finding the solutions $\Phi_0$ of \begin{equation}\label{eq_diff} \{\Phi_0,H\} =\sum_{\ell=1}^n \big(H_{p_\ell}\partial_{q^\ell} -H_{q^\ell}\partial_{p_\ell}\big)\Phi_0 =g(J_1,\ldots,J_k). \end{equation} The case $g\equiv1$ is sufficient, though trying to solve $\{\Phi_0,H\}=1$ can at best provide solutions which are $\cinf$ outside the set $\Crit(H)$. A way to remove these singularities could be to multiply the solutions by a function $g(H)$ that vanishes and is infinitely flat on $\Crit(H)$. For instance, if $H\big(\Crit(H)\big)$ consists of a finite number of values $c_1,\ldots,c_s\in\R$, one could take $g(H)=\prod_{j=1}^s\e^{-(H-c_j)^{-2}}$. Another possibility is to restrict the study to a submanifold $M'$ of $M$ (typically an open subset of the same dimension). However, problems can arise as the same (induced) symplectic structure (or Poisson bracket) must be used for the dynamics to remain unchanged; in particular, it must be checked that the Hamiltonian flow preserves $M'$. \begin{enumerate} \renewcommand{\labelenumi}{{\normalfont(\Alph{enumi})}} \item Repulsive harmonic potential. In this example we first solve the equation $\{\Phi_0,H\}=1$, and then correct the functions $\Phi_0$ to make them $\cinf$. So, let us consider for $K\neq0$ the Hamiltonian $H(q,p):=\12\big(|p|^2-K^2|q|^2\big)$ on $M:=\R^{2n}$. One can check that $\Crit(H)=\{0\}$ and that $$ \textstyle \varphi_t(q,p)=\big(\frac{Kq+p}{2K}\;\!\e^{Kt}+\frac{Kq-p}{2K}\;\!\e^{-Kt}, \frac{Kq+p}2\;\!\e^{Kt}-\frac{Kq-p}2\;\!\e^{-Kt}\big).
$$ For $j\in\{1,\ldots,n\}$, take $\Phi_j(q,p):=\frac1K\tanh^{-1}(Kq^j/p_j)$, where $\tanh^{-1}(z)\equiv\12\ln\big|\frac{1+z}{1-z}\big|$ is $\cinf$ on $\R\setminus\{\pm1\}$. Whenever $p_j=\pm Kq^j$, the $\Phi_j$ are not well-defined, but outside these regions, they satisfy $\{\Phi_j,H\} =1$. It is possible in this case to get rid of the singular regions. Indeed, the functions $H_j(q,p):=\12\big(p_j^2-K^2(q^j)^2\big)$ are first integrals of the motion and the singular regions correspond to the level sets $H_j^{-1}(\{0\})$. Therefore, the functions $\Phi'_j:=\e^{-H_j^{-2}}\Phi_j$ are well-defined and satisfy Assumption \ref{AssCom}: $$ \big\{\{\Phi_j',H\},H\big\}=\big\{\e^{-H_j^{-2}},H\big\}=0. $$ In this example, one has $\{0\}=\Crit(H)\subsetneq\Crit(H,\Phi')=\bigcap_jH_j^{-1}(\{0\})$. \item Simple pendulum. In this example we first consider the dynamics on a manifold and then restrict it to an appropriate submanifold. For $K>0$, take $H(q,p):=\12\big(p^2+K(1-\cos q)\big)$ on $M:=\R^2$. One has $\Crit(H)=\pi\Z\times\{0\}$ (the values $q\in2\pi\Z$ correspond to minima, while $q\in2\pi\Z+\pi$ correspond to saddle points). Then, consider the open subset $M'$ of $M$ defined by the relation $H>K$, \ie $M':=\big\{(q,p)\in\R^2\mid p^2/2-K\cos^2(q/2)>0\big\}$. One verifies easily that $M'$ is preserved by the Hamiltonian flow, that $M'\cap\Crit(H)=\varnothing$ and that $M'$ corresponds to the region where the values of $q$ along an orbit cover all of $\R$. Define also $$ \Phi(q,p):=\sqrt{\frac2{H(q,p)}}\;\!F\big(q/2\big|\sqrt{K/H(q,p)}\big) \equiv\sqrt2\int_0^{q/2}\frac{\d\vartheta}{\sqrt{H(q,p)-K\sin^2(\vartheta)}}\;\!, $$ where $F(\;\!\cdot\;\!|\;\!\cdot\;\!)$ denotes the incomplete elliptic integral of the first kind. Then one verifies that the function $\Phi$ is well-defined on $M'$ and a direct calculation gives $\{\Phi,H\}(q,p)=p/|p|$ for each $(q,p)\in M'$. Now, $p/|p|=1$ on one connected component of $M'$ and $p/|p|=-1$ on the other one.
Thus Assumption \ref{AssCom} is verified on $M'$ and $\Crit(H,\Phi)=\varnothing$. \item Unbounded trajectories of central force systems. Once again, we first consider the dynamics on a manifold and then restrict it to an appropriate submanifold. For $K\in\R\setminus\{0\}$, take $H(q,p):=\12\big(|p|^2-K|q|^{-1}\big)$ on $M:=(\R^n\setminus\{0\})\times\R^n$, with $n>1$ if $K>0$ and $n\geq1$ if $K<0$. One has $\Crit(H)=\varnothing$. When $K>0$ (and $n>1$), we must restrict our attention to the case where the Hamiltonian function $H$ is positive (to avoid periodic orbits), and where at least one of the two-dimensional angular momenta $L_{ij}(q,p):=q^ip_j-q^jp_i$ is nonzero (to avoid collisions, \ie orbits whose flow is not defined for all $t\in\R$, see \cite{OV06}). Therefore, the open set $ M':=\big\{(q,p)\in M\mid H(q,p)>0,\,\sum_{i,j=1}^n|L_{ij}(q,p)|^2\neq0\big\} $ is an appropriate submanifold of $M$ when $K>0$. Consider now the real-valued functions on $M$ (resp. $M'$) when $K<0$ (resp. $K>0$ and $n>1$) given by $$ \Phi_\pm(q,p):=\frac{p\cdot q}{2H(q,p)} \mp\frac K{2\big(2H(q,p)\big)^{3/2}}\;\!\ln\Big(|q|\big(2H(q,p)+|p|^2\big) \pm2\sqrt{2H(q,p)}\;\!p\cdot q\Big). $$ Since $|p|^2<2H(q,p)$ (resp. $|p|^2>2H(q,p)$), we have \begin{align*} \big(\sqrt{2H(q,p)}-|p|\big)^2>0 &\Longrightarrow 2H(q,p)+|p|^2\pm2\sqrt{2H(q,p)}\;\!p\cdot\frac q{|q|}>0\\ &\hspace{-5pt}\iff|q|\big(2H(q,p)+|p|^2\big)\pm2\sqrt{2H(q,p)}\;\!p\cdot q>0. \end{align*} So, $\Phi_\pm$ are well-defined, and further calculations show that $\{\Phi_\pm,H\}=1$ on $M$ (resp. $M'$). As before, $\Crit(H)=\Crit(H,\Phi_\pm)=\varnothing$. Note that $\Phi_\pm(q,p)=p\cdot q/|p|^2$ when $K=0$, which is coherent with the canonical function $\Phi$ for the purely kinetic Hamiltonian $H(q,p)=\12|p|^2$. One can construct a more intuitive function $\Phi_0$ in terms of $\Phi_\pm$, namely, $$ \Phi_0(q,p) :=\12(\Phi_++\Phi_-)(q,p) =\frac{p\cdot q}{2H(q,p)}-\frac K{2\big(2H(q,p)\big)^{3/2}}\;\!
\tanh^{-1}\bigg(\frac{2\sqrt{2H(q,p)}\;\!p\cdot q}{|q|\big(2H(q,p)+|p|^2\big)}\bigg), $$ which also satisfies $\{\Phi_0,H\}=1$. Since the functions satisfying Assumption \ref{AssCom} are linear in $t$, one can regard them as inverse functions for the flow. The appearance of the inverse hyperbolic function $\tanh^{-1}$ in $\Phi_0$ is related to the fact that unbounded trajectories of the central force system given by $H>0$ are hyperbolas. \item Poincar\'e ball model. Consider $B_1:=\big\{q\in\R^n\mid|q|<1\big\}$ endowed with the Riemannian metric $g$ given by $$ g_q(X_q,Y_q):=\frac4{(1-|q|^2)^2}\;\!(X_q\cdot Y_q),\quad q\in B_1,~X_q,Y_q\in T_qB_1\simeq\R^n. $$ Let $M:=T^*B_1\simeq\big\{(q,p)\in B_1\times\R^n\big\}$ be the cotangent bundle on $B_1$ with symplectic form $\omega:=\sum_{j=1}^n\d q^j\wedge\d p_j$, and let $$ H:M\to\R,\quad(q,p)\mapsto\12\sum_{j,k=1}^ng^{jk}(q)p_jp_k ={\textstyle \frac18}|p|^2\big(1-|q|^2\big)^2 $$ be the kinetic energy Hamiltonian. It is known that the integral curves of the vector field $X_H$ correspond to the geodesic curves of $(B_1,g)$ (see \cite[Thm.~1.6.3]{Jos05} or \cite[Sec.~6.4]{CC05}). Since $(B_1,g)$ is geodesically complete (see Proposition 3.5 and Exercise 6.5 of \cite{Lee97}), this implies that $X_H$ is complete. It only remains to find a function $\Phi$ satisfying Assumption \ref{AssCom} in order to apply the theory. Some calculations using spherical-type coordinates suggest the function $$ \Phi:M\to\R,\quad(q,p)\mapsto\e^{-1/H(q,p)} \tanh^{-1}\left(\frac{(p\cdot q)(1-|q|^2)}{\sqrt{2H(q,p)}(1+|q|^2)}\right). $$ Indeed, since $$ \left|\frac{(p\cdot q)(1-|q|^2)}{\sqrt{2H(q,p)}(1+|q|^2)}\right| =\left|\frac{2(p\cdot q)}{|p|(1+|q|^2)}\right| \leq\frac{2|q|}{1+|q|^2} <1, $$ the function $\Phi$ is well-defined. Furthermore, direct calculations show that $\Phi$ is $\cinf$ and that $\{\Phi,H\}=\e^{-1/H}\sqrt{2H}$. Therefore, Assumption \ref{AssCom} is verified and one has $\Crit(H)=\Crit(H,\Phi)=B_1\times\{0\}$.
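As an informal numerical cross-check of the identity $\{\Phi,H\}=\e^{-1/H}\sqrt{2H}$ (an editorial sketch, not part of the formal development; the restriction to one dimension and the sample points are arbitrary choices):

```python
import math

# Editorial check (illustrative only): on the 1D Poincare disc,
#   H(q,p)   = p^2 (1 - q^2)^2 / 8,
#   Phi(q,p) = exp(-1/H) * atanh( 2 p q / (|p| (1 + q^2)) ),
# the Poisson bracket {Phi,H} should equal exp(-1/H) * sqrt(2H).

def H(q, p):
    return 0.125 * p * p * (1 - q * q) ** 2

def Phi(q, p):
    arg = 2 * p * q / (abs(p) * (1 + q * q))   # lies in (-1,1) for |q|<1
    return math.exp(-1.0 / H(q, p)) * math.atanh(arg)

def bracket(q, p, h=1e-6):
    """Central finite-difference Poisson bracket {Phi,H} at (q,p)."""
    dPhi_dq = (Phi(q + h, p) - Phi(q - h, p)) / (2 * h)
    dPhi_dp = (Phi(q, p + h) - Phi(q, p - h)) / (2 * h)
    dH_dq = (H(q + h, p) - H(q - h, p)) / (2 * h)
    dH_dp = (H(q, p + h) - H(q, p - h)) / (2 * h)
    return dPhi_dq * dH_dp - dPhi_dp * dH_dq

for (q, p) in [(0.3, 1.5), (-0.5, 2.0), (0.0, 1.0)]:
    expected = math.exp(-1.0 / H(q, p)) * math.sqrt(2 * H(q, p))
    assert abs(bracket(q, p) - expected) < 1e-5
```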
In one dimension, $q(t):=\tanh(t)$ is (up to speed and direction) the only geodesic curve, and $$ \Phi(q,p) =\e^{-1/H(q,p)}\tanh^{-1}\left(\frac{2pq}{|p|(1+q^2)}\right) =2\e^{-1/H(q,p)}\frac p{|p|}\tanh^{-1}(q). $$ So, apart from the smoothing factor $2\e^{-1/H}$, our $\Phi$ coincides in one dimension with the inverse function of the flow. \end{enumerate} \subsection{Passing to a covering manifold} In this subsection we briefly discuss a way of avoiding the obstruction of periodic orbits: Given $M$ a symplectic manifold with symplectic form $\omega$ and Hamiltonian $H$, we let $\pi:\widetilde M\to M\setminus\Crit(H)$ be a $\cinf$-covering manifold. In order to preserve the dynamics, we endow the manifold $\widetilde M$ with the pullback $\widetilde\omega:=\pi^*\omega$ of the symplectic form $\omega$ and with the pullback $\widetilde H:=\pi^*H$ of the Hamiltonian $H$.\footnote{If one wants to consider only a Poisson manifold $M$, a Poisson structure can also be defined on $\widetilde{M}$ given that $\pi$ is $\cinf$. Indeed, for $U\subset M\setminus\Crit(H)$ a sufficiently small open set (\ie such that $\pi^{-1}(U)$ is a disjoint union of diffeomorphic copies), connected components of $\pi^{-1}(U)$ are diffeomorphic to $U$ and the Poisson structure can be induced by this diffeomorphism.} Here are two simple examples of finite-dimensional symplectic covering manifolds. \begin{enumerate} \renewcommand{\labelenumi}{{\normalfont(\Alph{enumi})}} \item Consider on the sphere $M:=\mathbb S^2$ (as seen in $\R^3$ and with its standard symplectic structure) the Hamiltonian $H$ given by the projection onto the $z$-coordinate. Outside the two polar critical points, all the orbits are periodic: the flow corresponds to rotations around the $z$-axis.
In this case, one can use the covering of $\mathbb S^2\setminus\{(0,0,\pm1)\}$ given by $\widetilde M:=\big\{(\vartheta,z)\mid\vartheta\in\R,~z\in(-1,1)\big\}$ and the covering map \begin{align*} \pi:\widetilde M\to M\setminus\Crit(H) \equiv\mathbb S^2\setminus\{(0,0,\pm1)\},\quad (\vartheta,z)\mapsto\big(\sqrt{1-z^2}\;\!\cos(\vartheta), \sqrt{1-z^2}\;\!\sin(\vartheta),z\big). \end{align*} Consequently, $\widetilde H:\widetilde M\to(-1,1)$ is the projection onto the $z$-coordinate and $\widetilde\omega=\d\vartheta\wedge\d z$. One can also check that $\varphi_t(\vartheta,z):=(\vartheta+t,z)$ is the flow of $\widetilde H$ and that $\big\{\Phi,\widetilde H\big\}=1$ for $\Phi(\vartheta,z):=\vartheta$. So, Assumption \ref{AssCom} is verified on $\widetilde M$ and $\Crit(\widetilde H)=\Crit(\widetilde H,\Phi)=\varnothing$. \item Harmonic oscillator. Consider on $M:=\R^{2n}$ (with its standard symplectic structure) the Hamiltonian given by $H(q,p):=\12\big(|p|^2+K^2|q|^2\big)$, where $K\in\R\setminus\{0\}$. Define $\widetilde M:=\big\{(r,\vartheta)\mid r\in(0,\infty)^n,\,\vartheta\in\R^n\big\}$ and $\pi:\widetilde M\to M\setminus\Crit(H)\equiv\R^{2n}\setminus\{0\}$, with $$ \pi(r,\vartheta) :=\big(K^{-1}r_1\cos(\vartheta_1),\ldots,K^{-1}r_n\cos(\vartheta_n), r_1\sin(\vartheta_1),\ldots,r_n\sin(\vartheta_n)\big). $$ Then $\widetilde H(r,\vartheta)=\12|r|^2$, $\widetilde\omega=K^{-1}\sum_{j=1}^nr_j\;\!\d r_j\wedge \d\vartheta_j$, and $\varphi_t(r,\vartheta)=(r,\vartheta-Kt)$ is the flow of $\widetilde H$. Furthermore, one has $\big\{\Phi_j,\widetilde H\big\}=-K$ for each function $\Phi_j(r,\vartheta):=\vartheta_j$. Therefore, Assumption \ref{AssCom} is verified on $\widetilde M$ with $\Phi\equiv(\Phi_1,\ldots,\Phi_n)$ and $\Crit(\widetilde H)=\Crit(\widetilde H,\Phi)=\varnothing$.
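As a small editorial sanity check of this covering (illustrative only; the values of $K$, $r$ and $\vartheta$ below are arbitrary choices), one can verify numerically that the pulled-back Hamiltonian $H\circ\pi$ reduces to $\12|r|^2$, independently of the angles:

```python
import math

# Editorial check (illustrative): for the harmonic-oscillator covering
# map with n = 2,
#   pi(r, th) = (q1, q2, p1, p2),  q_j = r_j cos(th_j)/K,  p_j = r_j sin(th_j),
# the pullback H(pi(r, th)) = (|p|^2 + K^2 |q|^2)/2 equals |r|^2 / 2.

K = 1.7  # arbitrary nonzero constant

def H(q, p):
    return 0.5 * (sum(x * x for x in p) + K * K * sum(x * x for x in q))

def pi(r, th):
    q = [rj * math.cos(tj) / K for rj, tj in zip(r, th)]
    p = [rj * math.sin(tj) for rj, tj in zip(r, th)]
    return q, p

r = [0.8, 2.5]
for th in [[0.0, 1.0], [2.2, -0.7], [math.pi, 0.3]]:
    q, p = pi(r, th)
    assert abs(H(q, p) - 0.5 * sum(x * x for x in r)) < 1e-12
```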
\end{enumerate} \subsection{Infinite dimensional Hamiltonian systems} \subsubsection{Classical systems} In the following examples, the infinite dimensional manifold $M$ is either $\ltwo(\R)$ or $\ltwo(\R)\oplus\ltwo(\R)$ (equivalence classes of real-valued square-integrable functions)\footnote{In the case of the wave and the Schr\"odinger equations below, one can easily extend the results to the situation where $\ltwo(\R)$ is replaced by $\ltwo(\R^n)$. We restrict ourselves to the case $n=1$ for the sake of notational simplicity.}. The atlas of $M$ consists of a single chart, the tangent space $T_uM$ at a point $u\in M$ is isomorphic to $M$, and the Riemannian metric on $M$ is flat (\ie independent of the base point in $M$) and given by the usual scalar product $\langle\;\!\cdot\;\!,\;\!\cdot\;\!\rangle$ on $\ltwo(\R)$ or $\ltwo(\R)\oplus\ltwo(\R)$. To define the symplectic form on $M$ in terms of the metric $\langle\;\!\cdot\;\!,\;\!\cdot\;\!\rangle$ we let $\H^s$, $s\in\R$, denote the real Sobolev space $\H^s(\R)$ or $\H^s(\R)\oplus\H^s(\R)$ (see \cite[Sec.~4.1]{ABG} for the definition in the complex case) and we let $\S$ denote the real Schwartz space $\S(\R)$ or $\S(\R)\oplus\S(\R)$. Then we consider an operator $J:\S\to\S$ (which can be interpreted by continuity as an endomorphism of the tangent spaces $T_uM\simeq M$) satisfying the following: \begin{enumerate} \item[(i)] There exists a number $d_J\geq0$, called the order of $J$, such that for each $s\in\R$ the operator $J$ extends to an isomorphism $\H^s\to\H^{s-d_J}$ (which we denote by the same symbol). \item[(ii)] $J$ is antisymmetric on $\S$, \ie $\langle Jf,g\rangle=-\langle f,Jg\rangle$ for all $f,g\in\S$. \end{enumerate} It is known \cite[Lemma~1.1]{Kuk93} that the operator $\bar J:=-J^{-1}:M\to\H^{d_J}$ (of order $-d_J$) is bounded and anti-selfadjoint in $M$.
In consequence, for each $s\geq0$ the map $\omega:\H^s\times\H^s\to\R$ given by $$ \omega(f,g):=-\big\langle\bar Jf,g\big\rangle $$ defines a symplectic form on $\H^s$. The functions on the phase space (such as $H$ or $\Phi_j$) are infinitely Fr\'echet differentiable mappings from $\O_{s_H}$ (a subset of $\H^{s_H}$ for some $s_H\geq0$) to $\R$, \ie elements of $\cinf(\O_{s_H};\R)$. The Hamiltonian function $H\in\cinf(\O_{s_H};\R)$ is defined as follows: for some $h\in\cinf(\R^{s_H+1};\R)$ (or $h\in\cinf(\R^{2(s_H+1)};\R)$ if $M=\ltwo(\R)\oplus\ltwo(\R)$), one has for each $u\in\O_{s_H}$ $$ H(u):=\int_\R\d x\,h(u_0,u_1,\ldots,u_{s_H}), $$ where $u_j:=\frac{\d^ju}{\d x^j}$\;\!. Since $H\in\cinf(\O_{s_H};\R)$, the differential of $H$ at $u\in\O_{s_H}$ on a tangent vector $f\in\S\subset M \simeq T_u M$ is given by $$ \d H_u(f) =\lim_{t\to0}\frac1t\big[H(u+tf)-H(u)\big] =\int_\R\d x\;\!\sum_{j=0}^{s_H}\frac{\partial h}{\partial u_j}\frac{\d^jf}{\d x^j} =\sum_{j=0}^{s_H}\int_\R\d x\,(-1)^jf\;\!\frac{\d^j}{\d x^j} \frac{\partial h}{\partial u_j}\;\!, $$ where the last equality is obtained using integrations by parts (with vanishing boundary contributions). The (Riemannian) gradient vector field $\grad H$ associated to the linear functional $\d H$ satisfies by definition $\big\langle(\grad H)(u),f\big\rangle=\d H_u(f)$ for all $u\in\O_{s_H}$ and $f\in\S$ (here $(\grad H)(u)$ \apriori only belongs to the topological dual $\S^*$ of $\S$, which means that $\langle\;\!\cdot\;\!,\;\!\cdot\;\!\rangle$ denotes \apriori the duality map between $\S^*$ and $\S$). So, $(\grad H)(u)$ is given by \begin{equation}\label{gradienf} (\grad H)(u) =\sum_{j=0}^{s_H}(-1)^j\frac{\d^j}{\d x^j}\frac{\partial h}{\partial u_j}\;\!. \end{equation} Then, the Hamiltonian vector field $X_H$ is the map $\O_{s_H}\to\S^*$ satisfying $$ \big\langle\bar Jf,X_H(u)\big\rangle =-\omega\big(f, X_H(u)\big) =\d H_u(f) =\big\langle f,(\grad H)(u)\big\rangle $$ for all $u\in\O_{s_H}$ and $f\in\S$.
Since $\bar J$ is anti-selfadjoint, this implies that $\bar JX_H(u)=-(\grad H)(u)$ in $\S^*$, which is equivalent to $X_H(u)=J(\grad H)(u)$ in $\S^*$. So, the equation of motion with Hamiltonian $H$ has the form $\ddt\;\!u=J(\grad H)(u)$, and $\{\Phi,H\}=\d\Phi(X_H)=\big\langle\grad\Phi,J(\grad H)\big\rangle$ for all functions $\Phi,H\in\cinf(\O_{s_H};\R)$ with appropriate gradient. Before passing to concrete examples, we refer to \cite{Kat75} for standard results on the local existence in time of Hamiltonian flows (global existence is specific to the system considered). \begin{enumerate} \renewcommand{\labelenumi}{{\normalfont(\Alph{enumi})}} \item The wave equation. We refer to \cite[Ex.~5.5.1]{AM78}, \cite[Ex.~8.1.12]{AMR88}, \cite[Sec.~2.1]{CM74} and \cite[Sec.~X.13]{RS75} for a description of the model. The existence of the flow for all times depends on the nonlinear term in the Hamiltonian (see for instance \cite[Thm.~X.74]{RS75} and the corollary that follows). In this example, the scale $\{\H^s\}_{s\geq0}$ is given by $\H^s:=\H^s(\R)\oplus\H^s(\R)$. The metric on $M:=\ltwo(\R)\oplus\ltwo(\R)$ is given for each $(p,q),(\widetilde p,\widetilde q)\in M$ by $ \big\langle(p,q),(\widetilde p,\widetilde q)\big\rangle :=\int_\R\d x\,(p\widetilde p+q\widetilde q) $, and the operator $J$ is given by $$ J:M\to M,\quad(p,q)\mapsto(-q,p). $$ It is an isomorphism of degree $0$ with $\bar J=J$. Given $m\geq0$ and $F\in\cinf(\R;\R)$, one can find a subset $\O_1\subset\H^1$ (depending on $F$) such that the Hamiltonian function $$ H:\O_1\to\R,\quad(p,q)\mapsto\int_\R\d x\,h(p,q,\partial_xq) \equiv\12\int_\R\d x\,\big\{p^2+(\partial_xq)^2+m^2q^2+2F(q)\big\}, $$ is well-defined and $\cinf$. In fact, we assume that $\O_1$ is chosen such that (i) all the functions on the phase space appearing below are elements of $\cinf(\O_1;\R)$, and (ii) integrations by parts involving these functions come with vanishing boundary contributions.
Then one checks that $(\grad H)(p,q)=\big(p,m^2q+F'(q)-\partial_x^2q\big)$ due to \eqref{gradienf}, and that $X_H(p,q)$ is trivial if and only if $p=0$ and $m^2q+F'(q)-\partial_x^2q=0$. The constraint on $q$ depends on the choice of $F$. For example, when $F(q)=0$, $q$, or $q^2$, the nonzero solutions $q$ of the differential equation do not decay as $|x|\to\infty$. In consequence, the corresponding pairs $(p,q)$ cannot belong to $M$, and $\Crit(H)=\{(0,0)\}$. The equation of motion \begin{equation}\label{eq_motion} \ddt\;\!(p,q)=J(\grad H)(p,q) \end{equation} coincides with the usual wave equation since the combination of $\ddt p=\partial_x^2q-m^2q-F'(q)$ and $\ddt q=p$ gives $$ \frac{\d^2}{\d t^2}\;\!q=\partial_x^2q-m^2q-F'(q). $$ When $m\neq0$, this equation is called the Klein-Gordon equation, and $F$ is usually assumed to be a nonlinear term of the form $F(q)=q^\lambda$ for some $\lambda\in\R$. A first relevant observation is that the function $C_0\in\cinf(\O_1;\R)$ given by $C_0(p,q):=\int_\R\d x\,p(\partial_x q)$ is a first integral of the motion. Furthermore, the function $\Phi_0\in\cinf(\O_1;\R)$ given by $\Phi_0(p,q):=\int_\R\d x\,\id_\R h(p,q,\partial_x q)$ has gradient $ (\grad \Phi_0)(p,q) =\big(\id_\R p,\id_\R m^2q+\id_\R F'(q)-\partial_x(\id_\R\partial_xq)\big). $ Therefore, $$ \{\Phi_0,H\}(p,q) =\big\langle(\grad\Phi_0)(p,q),J(\grad H)(p,q)\big\rangle =\int_\R\d x\,p\;\!\big\{\id_\R\partial_x^2q-\partial_x(\id_\R\partial_xq)\big\} =-C_0(p,q), $$ and $\Phi_0$ satisfies Assumption \ref{AssCom}. Here, we clearly have $$ \textstyle \Crit(H,\Phi_0) =C_0^{-1}(\{0\}) =\big\{(p,q)\in\O_1\mid\int_\R p(\partial_xq)\,\d x=0\big\} \supsetneq\{(0,0)\} =\Crit(H). $$ If we assume further that $F\equiv0$, then the equation of motion \eqref{eq_motion} is linear. Therefore any pair $(\partial_x^jp,\partial_x^j q)$, $j\geq1$, with $(p,q)$ a solution of \eqref{eq_motion}, also satisfies \eqref{eq_motion}.
Consequently, if the subsets $\O_j\subset\H^j$ have properties similar to the ones of $\O_1$, then the functions $C_j\in\cinf(\O_{j+1};\R)$ and $H_j\in\cinf(\O_{j+1};\R)$ given by $ C_j(p,q):=\int_\R\d x\,\big(\partial_x^jp\big)\big(\partial_x^{j+1}q\big) $ and $ H_j(p,q):=\int_\R\d x\,h\big(\partial_x^jp,\partial_x^jq,\partial_x^{j+1}q\big) $ are first integrals of the motion. Accordingly, one deduces that the functions $\Phi_j\in\cinf(\O_{j+1};\R)$ given by $ \Phi_j(p,q) :=\int_\R\d x\,\id_\R h\big(\partial_x^jp,\partial_x^jq,\partial_x^{j+1}q\big) $ satisfy $\{\Phi_j,H\}=-C_j$ on $\O_{j+1}$. So, if $F\equiv0$, there is an infinite family of functions $\Phi_j$ satisfying Assumption \ref{AssCom}, and one has again $\Crit(H,\Phi_j)\supsetneq\Crit(H)$, with $\partial_x^j:\Crit(H,\Phi_j)\to\Crit(H,\Phi_0)$ an isomorphism. Finally, when $F\equiv0$ and $m=0$ one can check that the function $\widetilde\Phi_0\in\cinf(\O_1;\R)$ given by $\widetilde\Phi_0(p,q):=\int_\R\d x\,\id_\R p(\partial_xq)$ has gradient $ \big(\grad\widetilde\Phi_0\big)(p,q)=(\id_\R\partial_xq,-\id_\R\partial_xp-p). $ Then, $$ \big\{\widetilde\Phi_0,H\big\}(p,q) =\int_\R\d x\, \big(\id_\R(\partial_xq)(\partial_x^2q)-\id_\R p\;\!\partial_xp-p^2\big) =-\12\int_\R\d x\,\big((\partial_x q)^2+p^2\big) =-H(p,q), $$ where the second equality is obtained using integrations by parts (with vanishing boundary contributions). Thus $\widetilde\Phi_0$ satisfies Assumption \ref{AssCom}. Furthermore, since $\{\widetilde\Phi_0,H\}(p,q)= 0$ implies $\int_\R\d x\,\big\{(\partial_x q)^2+p^2\big\}=0$, one has $\Crit(H,\widetilde\Phi_0)=\Crit(H)=\{(0,0)\}$. As before, any derivative of a solution of the equation of motion is still a solution of the equation of motion. So, it can be checked that the functions $\widetilde\Phi_j\in\cinf(\O_{j+1};\R)$ given by $ \widetilde\Phi_j(p,q) :=\int_\R\d x\,\id_\R\big(\partial_x^j p\big)\big(\partial_x^{j+1}q\big) $ satisfy $\{\widetilde\Phi_j,H\}=-H_j$ on $\O_{j+1}$.
Therefore, one has once again $\Crit(H,\widetilde\Phi_j)=\Crit(H)=\{(0,0)\}$ and the $\widetilde\Phi_j$'s constitute a second infinite family of functions satisfying Assumption \ref{AssCom}. \item The nonlinear Schr{\"o}dinger equation. We refer to \cite[Ex.~1.3,~p.~3\,\&\,5]{Kuk93} for a description of the model. The existence of the flow for all times depends on the nonlinear term in the Hamiltonian (see for instance \cite[Sec.~I.2]{Bou99} and \cite[Sec.~3.2.2-3.2.3]{Sul99}). The setting is the same as that of the previous example, except that the Hamiltonian function $H\in\cinf(\O_1;\R)$ is given by $$ H(p,q):=\12\int_\R\d x\,\big\{(\partial_xp)^2+(\partial_xq)^2+V\cdot(p^2+q^2) +F(p^2+q^2)\big\}, $$ where $V,F\in\cinf(\R;\R)$. Using \eqref{gradienf}, one checks that the gradient of $H$ at $(p,q)\in\O_1$ is $$ (\grad H)(p,q) =\big(-\partial_x^2p+Vp+pF'(p^2+q^2),-\partial_x^2q+Vq+qF'(p^2+q^2)\big). $$ So, the equation of motion $\ddt\;\!(p,q)=J(\grad H)(p,q)$ is equivalent to the nonlinear Schr\"odinger equation \begin{equation}\label{NLS} \ddt\;\!u=i\big(-\partial_x^2u+Vu+uF'(|u|^2)\big), \end{equation} with $u:=p+iq$. Without additional assumptions on $F$ or $V$, it is hardly possible to determine the set $\Crit(H)$ of functions $u$ for which the r.h.s. of \eqref{NLS} vanishes. However, it is known that in general $\Crit(H)$ is not trivial, as in the case of elliptic stationary nonlinear Schr\"odinger equations (see Theorem 1.1 and Proposition 1.1 of \cite{BL90}). Now, assume that $V\equiv F\equiv0$ and for each $j\geq1$ let $\O_j\subset\H^j$ be a subset having properties similar to the ones of $\O_1$.
Then the functions $H_j\in\cinf(\O_j;\R)$ and $C_j\in\cinf(\O_{j+1};\R)$ given by $ H_j(p,q) :=\12\int_\R\d x\,\big\{(\partial_x^jq)^2+(\partial_x^jp)^2\big\} \equiv\int_\R\d x\,h_j(p,q) $ and $ C_j(p,q) :=\int_\R\d x\,\big\{(\partial_x^jq)(\partial_x^{j+1}p) -(\partial_x^{j+1}q)(\partial_x^j p)\big\} \equiv\int_\R\d x\,c_j(p,q) $ are first integrals of the motion. Furthermore, the functions $\Phi_j\in\cinf(\O_j;\R)$ and $\widetilde\Phi_j\in\cinf(\O_{j+1};\R)$ given by $\Phi_j(p,q):=\int_\R\d x\, \id_\R h_j(p,q)$ and $\widetilde\Phi_j(p,q):=\int_\R\d x\,\id_\R c_j(p,q)$ satisfy $\{\Phi_j,H\}=C_j$ and $\{\widetilde\Phi_j,H\}=4H_{j+1}$ on $\O_{j+1}$. So, the $\Phi_j$'s and the $\widetilde\Phi_j$'s constitute two infinite families of functions satisfying Assumption \ref{AssCom}. Note that the sets $ \Crit(H,\Phi_j) =C_j^{-1}(\{0\}) =\big\{(p,q)\in\O_{j+1}\mid\int_\R\d x\,(\partial_x^j q)(\partial_x^{j+1}p)= 0\big\} $ (with isomorphisms $\partial_x^j:\Crit(H,\Phi_j)\to\Crit(H,\Phi_0)$) are rather large, whereas $\Crit(H,\widetilde\Phi_j)=\Crit(H)=\{(0,0)\}$. Some of the above functions still work when $V$ and $F$ are not trivial. For instance, the identity $\{\Phi_0,H\}=C_0$ on $\O_1$ remains valid for all $V$ and $F$. Furthermore, if $V={\rm Const.}$, then $\{C_0,H\}=0$ on $\O_1$. Consequently, $\Phi_0$ satisfies Assumption \ref{AssCom} for all $F$ and for $V={\rm Const.}$, and one has $\Crit(H,\Phi_0)\supsetneq\Crit(H)$. This last example is interesting since it applies to a large class of nonlinear Schr{\"o}dinger equations. \item The Korteweg-de Vries equation. Among many other possible references, we mention \cite[Ex.~5.5.7]{AM78} and \cite[Ex.~1.4,~p.~3\,\&\,5]{Kuk93}. For the global existence of the flow, we refer the reader to \cite[Sec.~1]{CKSTT03} and references therein. In this example, the scale $\{\H^s\}_{s\geq0}$ is given by $\H^s:=\H^s(\R)$ and the sets $\O_j$, $j\in\N$, are appropriate subsets of $\H^j$. 
The Hamiltonian function $H\in\cinf(\O_1;\R)$ is given by $$ H(u):=\int_\R\d x\,\big( \12(\partial_xu)^2+u^3 \big), $$ and the isomorphism $J:=\partial_x$ is of order $1$. The gradient of $H$ at $u\in\O_1$ is $-\partial_x^2u+3u^2$. So, the elements of $\Crit(H)$ are functions $u$ satisfying $-\partial_x^2u+3u^2=0$; these are Weierstrass $\wp$-functions \cite[Sec.~134.F]{Kiy87}, that is, functions with many singularities and no decay at infinity. Thus, $\Crit(H)=\{0\}$. Furthermore, the equation of motion $\ddt u=J(\grad H)(u)$ coincides with the KdV equation $\ddt u=\partial_x\big(-\partial_x^2 u + 3u^2\big)$. There exists an infinite number of first integrals of the motion with polynomial density, that is, of the form $H_j:=\int_\R\d x\,h_j$, where $h_j$ is a finite polynomial in $u$ and its derivatives (see \cite[Sec.~3]{MGK68}). For example, $h_1(u)=u$, $h_2(u)=u^2$, $h_3(u)=\12(\partial_xu)^2+u^3$, and $h_4(u)=(\partial_x^2u)^2+10u(\partial_xu)^2+5u^4$ are such densities. So, let $\Phi_0\in\cinf(\O_0;\R)$ be given by $\Phi_0(u):=\int_\R\d x\,\id_\R u$. Then the gradient of $\Phi_0$ at $u$ is $\id_\R$, and $\{\Phi_0,H\}=-3H_2$ on $\O_1$. Since $H_2$ is a first integral of the motion, this implies that $\Phi_0$ satisfies Assumption \ref{AssCom}. Furthermore, the fact that $H_2(u)=\|u\|_{\ltwo(\R)}^2$ implies that $\Crit(H,\Phi_0)=\{0\}=\Crit(H)$. Looking for other functions $\Phi$ of the form $\Phi(u)=\int_\R\d x\,g(x)\;\!G(u,\partial_xu,\ldots,\partial_x^ku)$, with $G$ a polynomial and $g$ a $\cinf$ function, is unnecessary. Indeed, both $\{\Phi,H\}$ and $\Upsilon(t):=\Phi-t\{\Phi,H\}$ are first integrals of the motion with density $\cinf$ in $x$ and polynomial in $u$ and its derivatives (and $t$-linear in the case of $\Upsilon$). Thus, we know from \cite[Thm.~1~\&~Rem.~3]{SW97} that they are completely characterised, up to the usual equivalence of conservation laws \cite[Sec.~4.3]{Olv93}. Therefore, the functions $\Phi$ are also completely characterised.
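As an informal numerical cross-check of the identity $\{\Phi_0,H\}=-3H_2$ (an editorial sketch, not part of the formal development; the Gaussian profile and the grid are arbitrary choices), one can compare $\int_\R\d x\,x\;\!\partial_x\big(-\partial_x^2u+3u^2\big)$ with $-3\int_\R\d x\,u^2$ for a rapidly decaying $u$:

```python
import math

# Editorial check (illustrative): with Phi_0(u) = \int x u dx and
# grad H = -u'' + 3u^2 for KdV, integration by parts gives
#   {Phi_0,H} = \int x d/dx(-u'' + 3u^2) dx = -3 \int u^2 dx = -3 H_2(u)
# for rapidly decaying u.  We test this on u(x) = exp(-x^2) with
# central finite differences on a truncated uniform grid.

N, L = 4001, 12.0
dx = 2 * L / (N - 1)
x = [-L + i * dx for i in range(N)]
u = [math.exp(-xi * xi) for xi in x]

# w = -u'' + 3u^2 at interior points (boundary values are ~0 anyway)
w = [0.0] * N
for i in range(1, N - 1):
    upp = (u[i + 1] - 2 * u[i] + u[i - 1]) / dx**2
    w[i] = -upp + 3 * u[i] ** 2

# lhs ~ \int x w'(x) dx, with w' by central differences
lhs = sum(x[i] * (w[i + 1] - w[i - 1]) / (2 * dx) * dx for i in range(2, N - 2))
# rhs = -3 \int u^2 dx = -3 H_2(u)
rhs = -3 * sum(ui * ui for ui in u) * dx

assert abs(lhs - rhs) < 1e-4
```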
Note, however, that it is not excluded that functions $\Phi$ with an integrand $G$ involving fractional derivatives, infinitely many derivatives, or a general non-polynomial $\cinf$ dependence might work. Non-polynomial conserved densities are known to exist in the periodic case (see \cite[Sec.~5]{MGK68}). \end{enumerate} \subsubsection{Quantum systems} \label{qsys} Let $\H$ be a complex Hilbert space, with scalar product $\langle\;\!\cdot\;\!,\;\!\cdot\;\!\rangle$ antilinear in the left entry. Define on $\H$ the usual quantum-mechanical symplectic form $$ \omega:\H\times\H\to\R,\quad(\psi_1,\psi_2)\mapsto2\im\langle\psi_1,\psi_2\rangle. $$ The pair $(\H,\omega)$ has the structure of an (infinite-dimensional) symplectic vector space. Now, define for any bounded selfadjoint operator $H_{\rm op}\in\B(\H)$ the expectation value Hamiltonian function $$ H:\H\to\R,\quad \psi\mapsto\langle H_{\rm op}\rangle(\psi) :=\langle\psi,H_{\rm op}\psi\rangle. $$ Then, it is known \cite[Cor.~2.5.2]{MR99} that the vector field and the flow associated to $H$ are $X_H=-iH_{\rm op}$ and $\varphi_t(\psi)=\e^{-itH_{\rm op}}\psi$. Therefore, the Poisson bracket of two such Hamiltonian functions $H,K$ satisfies for each $\psi\in\H$ $$ \{K,H\}(\psi) =\omega\big(X_K(\psi),X_H(\psi)\big) =-\omega(K_{\rm op}\psi,H_{\rm op}\psi) =\big\langle\psi,i[K_{\rm op},H_{\rm op}]\psi\big\rangle. $$ So, in this framework, verifying Assumption \ref{AssCom} amounts to finding Hamiltonian functions $H\equiv\langle H_{\rm op}\rangle$ and $\Phi_j\equiv\langle(\Phi_j)_{\rm op}\rangle$ satisfying the commutation relation \begin{equation}\label{doubleC} \big[[(\Phi_j)_{\rm op},H_{\rm op}],H_{\rm op}\big]=0. \end{equation} In concrete examples, the operators $H_{\rm op}$ and $(\Phi_j)_{\rm op}$ are usually unbounded. Therefore, the preceding calculations can only be justified (using the theory of sesquilinear forms) on subspaces of $\H$ where all the operators are well-defined.
We do not present here the whole theory since much of it, examples included, is similar to that of \cite{RT10}. We prefer to present a new example inspired by \cite{GG05}, where all the calculations can be easily justified. Let $U$ be an isometry in $\H$ admitting a number operator, that is, a selfadjoint operator $N$ such that $UNU^*=N-1$. Define on $\H$ the bounded selfadjoint operators $$ \textstyle \Delta:=\re(U)\equiv\frac12(U+U^*)\qquad\hbox{and}\qquad S:=\im(U)\equiv\frac1{2i}(U-U^*). $$ Then we know from \cite[Sec.~3.1]{GG05} that any polynomial in $U$ and $U^*$ leaves invariant the domain $\dom(N)\subset\H$ of $N$. In particular, the operator $$ \textstyle A_0:=\frac12(SN+NS),\quad\dom(A_0):=\dom(N), $$ is well-defined and symmetric. In fact, it is shown there that $A_0$ admits a selfadjoint extension $A$ with domain $\dom(A)=\dom(NS)$. Furthermore, one has on $\dom(N)$ the identity $i[A,\Delta]=\Delta^2-1$. So, if we define the Hamiltonian functions $$ H:\H\to\R,\quad\psi\mapsto\langle\Delta\rangle(\psi)\qquad\hbox{and}\qquad \Phi:\dom(N)\to\R,\quad\psi\mapsto\langle A\rangle(\psi), $$ we obtain for each $\psi\in\dom(N)$ $$ (\nabla H)(\psi) =\{\Phi,H\}(\psi) =\langle i[A,\Delta]\rangle(\psi) =\langle\Delta^2-1\rangle(\psi), $$ and Assumption \ref{AssCom} is verified for each $\psi\in\dom(N)$: $$ \big\{\{\Phi,H\},H\big\}(\psi) =\omega\big(X_{\langle\Delta^2-1\rangle}(\psi),X_{\langle\Delta\rangle}(\psi)\big) =\big\langle i[\Delta^2-1,\Delta]\big\rangle(\psi) =0. $$ Now, since the spectrum of $\Delta$ is $[-1,1]$, the operator $1-\Delta^2$ is positive, so we have the equivalences $$ \langle\Delta^2-1\rangle(\psi)=0~~\Longleftrightarrow~~ \big\|(1-\Delta^2)^{1/2}\psi\big\|^2=0~~\Longleftrightarrow~~ \psi\in E^\Delta(\{\pm1\}). $$ Thus, $$ \Crit(H,\Phi) \equiv(\nabla H)^{-1}(\{0\}) =\big\{\psi\in\dom(N)\mid\langle\Delta^2-1\rangle(\psi)=0\big\} =\dom(N)\cap E^\Delta(\{\pm1\}).
$$ On the other hand, the elements $\psi\in\Crit(H)$ satisfy the condition $$ 0=X_H(\psi)=-i\Delta\psi~~\Longleftrightarrow~~\psi\in E^\Delta(\{0\}). $$ This implies that $\Crit(H)=\{0\}$, since the spectrum of $\Delta$ is purely absolutely continuous outside the points $\pm1$ \cite[Prop.~3.2]{GG05}. Finally, the function $T_f$ is given by $$ T_f=-\langle A\rangle\cdot(\nabla R_f)\big(\langle\Delta^2-1\rangle\big) $$ on $\dom(N)\setminus\Crit(H,\Phi)$. Typical examples of operators $\Delta$ and $N$ of the preceding type are Laplacians and number operators on trees or complete Fock spaces (see \cite{GG05} for details). \section*{Acknowledgements} Part of this work was done while R.T.d.A was visiting the Max Planck Institute for Mathematics in Bonn. He would like to thank Professor Dr. Don Zagier for his kind hospitality. R.T.d.A also thanks Professor M. Musso for a useful conversation on the stationary nonlinear Schr\"odinger equation.
\section{Introduction} In general, an ordinary realistic measurement can also be regarded as an information transfer process between the target system and us via a measurement device, where our available information depends on all of them. A study on the device limitations, therefore, contributes to an understanding of what information is really available in the measurement process. Measurement of a current is one of the most standard techniques to obtain the intrinsic information about the target system in condensed matter physics. Theoretically, the probability distribution of transferred charge obtained in a current measurement is described by the full counting statistics, which was first proposed by Levitov and Lesovik~\cite{Levitov:1993ma,Levitov:1996ie} and has been established over the last two decades. Most theoretical studies, however, focus on ideal measurements (see Refs~\onlinecite{Nazarov:2003,Esposito:2009zz} and references therein) and only a few deal with the influence of the device limitations~\cite{Naaman:2006,Utsumi:2010,Bednorz:2008}. When ideal current measurements are conducted, a universal relation, the Johnson-Nyquist (J-N) relation~\cite{Johnson:1927tu,Nyquist:1928wx}, holds between the linear conductance and the current noise. The J-N relation is an early significant example of the fluctuation-dissipation theorem~\cite{Callen:1951wg,Kubo:1957wk}, and provides a proportional relation between the variance of a fluctuating current through a conductor, i.e. current noise, and the conductance as \begin{equation} S_{0}|_{V=0}=2k_{\rm{}B}TG_{0}, \label{eq:JNrelation} \end{equation} where $T$ is the temperature of the conducting electrons, $k_{\rm{}B}$ is the Boltzmann constant, $S_0|_{V=0}$ represents the equilibrium noise, and $G_0\equiv\lim_{V\to0}dI_0/dV$ denotes the linear response of the averaged current $I_{0}$ to the applied bias voltage $V$, respectively.
In addition to its importance in fundamental physics, the J-N relation also has a practical significance in thermometry~\cite{White:1996wr}. Since the temperature can be determined by measuring only $S_{0}|_{V=0}$ and $G_{0}$, Johnson noise thermometry has been exploited in rapidly developing noise measurements of nanosystems, from which we obtain useful information about the low-energy excitations in quantum systems~\cite{Reznikov:1995us,depicciotto:1997dk,Saminadayar:1997tl,Lefloch:2003fp,Sela:2006kq,Zarchin:2008gq,Hashisaka:2008ef,Delattre:2009,Yamauchi:2011cq} and confirm the steady state fluctuation theorem~\cite{Tobiska:2005ht,Saito:2008hs,Nakamura:2010hn}. When a sample is placed in a dilution refrigerator, however, the noise temperature determined from the J-N relation, $T_{\rm{}JN}$, is sometimes higher than the temperature of the refrigerator independently measured with a resistance thermometer, $T_{\rm{}ref}$~\cite{Hashisaka:2008ef,Nakamura:2010hn}. The discrepancy has been recognized since the early 1970s~\cite{Webb:1973ej}, and attributed to a heat leak to the sample in the refrigerator~\cite{Hashisaka:2008ef,Webb:1973ej}. Since the discrepancy grows only at very low temperatures, above which $T_{\rm{}JN}\simeq{}T_{\rm{}ref}$ holds, it is generally agreed that the measured noise is properly calibrated and $T_{\rm{}JN}$ represents the actual electron temperature~\cite{Nakamura:2010hn}. This seemingly correct interpretation, however, does not consider the possibility of an extrinsic noise that is enhanced only at such very low temperatures. In this paper, we theoretically investigate the influence of resolution (in other words, the smallest detectable change in a measurement) on the current measurement, which at least qualitatively accounts for the discrepancy. The resolution fundamentally limits the available information in the measurement process, which must affect the observed fluctuation and noise.
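In practice, Johnson noise thermometry amounts to inverting Eq.~\eqref{eq:JNrelation} for the temperature. A minimal sketch of this inversion in Python (the numerical values of $G$ and of the measured equilibrium noise are hypothetical):

```python
k_B = 1.380649e-23  # Boltzmann constant [J/K]

def noise_temperature(S_eq, G):
    # invert the Johnson-Nyquist relation S|_{V=0} = 2 k_B T G for T
    return S_eq / (2.0 * k_B * G)

# round trip with hypothetical values: a 50-ohm conductor at 0.30 K
G = 1.0 / 50.0               # conductance [S]
S_eq = 2.0 * k_B * 0.30 * G  # equilibrium current noise [A^2/Hz]
T_JN = noise_temperature(S_eq, G)
```

Any extrinsic noise added on top of the equilibrium noise translates directly into an overestimated $T_{\rm{}JN}$, which is precisely the effect analyzed below.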
In fact, the limited resolution gives rise to an enhancement of the extrinsic noise only at very low temperatures, as discussed in Sec. V. Before going into detail, we briefly explain our formalism and main results. To understand the resolution effects on the current measurement, we exploit the two-point measurement statistics proposed by Esposito, Harbola, and Mukamel~\cite{Esposito:2009zz}. They calculated the probability distribution of the particle-number change $n\equiv{}N'-N$ taking place in a part of the system in a measurement time ${\cal{}T}$. $N$ and $N'$ denote the particle numbers of the part at $t=0$ and $t={\cal{}T}$, respectively, which are given by the projective measurement in the basis of the particle-number operator, $\hat{N}_{\rm{}part}$. Note that the equation of continuity connects $n$ with the net current flowing into the part. $n$ can be any integer, which means that the electrons in the current are ideally distinguished one by one. We extend their scheme of current measurement to take into account a limited resolution $\Delta$. In other words, we study a coarse-graining of the available information on the current. $\Delta$ is introduced in the particle-number measurements at $t=0$ and $\cal{}T$, which are described by projection operators parameterized by an integer $k$, $\{\hat{P}_{k}^{\rm part}(\Delta)\}$, where \begin{equation} \hat{P}_{k}^{\rm part}(\Delta)\equiv\int_{\chi_{k}-\frac{\Delta}{2}}^{\chi_{k}+\frac{\Delta}{2}}dx\delta(x-\hat{N}_{\rm{}part}). \end{equation} Here, $\chi_{k}\equiv\chi_{0}+k\Delta$ is the outcome of the measurement, where $\chi_0$ is the zero-point deviation. In our scheme, $n\equiv{}\chi_{k'}-\chi_{k}=(k'-k)\Delta$ is the available outcome and can be any multiple of $\Delta$, which means that at least $\Delta$ particles are required for the detection of a change in $n$.
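A purely classical toy analog already illustrates what such a coarse graining does to the counting statistics when the unknown grid offset is averaged over: the binned outcome $n=(k'-k)\Delta$ remains unbiased, but its variance is enhanced. The following sketch (our construction, with arbitrary parameter values; the full quantum treatment is given in Sec. II) bins a random-walk particle-number change with resolution $\Delta$:

```python
import math, random

random.seed(12345)
Delta = 8       # resolution: smallest detectable change in particle number
steps = 50      # intrinsic spread: n is a 50-step random walk, Var(n) = 50
N0 = 123456     # arbitrary initial particle number of the reservoir
trials = 20000

true_vals, measured = [], []
for _ in range(trials):
    n = sum(random.choice((-1, 1)) for _ in range(steps))
    offset = random.uniform(0.0, Delta)         # unknown zero-point offset
    k1 = math.floor((N0 + offset) / Delta)      # first coarse-grained reading
    k2 = math.floor((N0 + n + offset) / Delta)  # second coarse-grained reading
    true_vals.append(n)
    measured.append(Delta * (k2 - k1))

def mean(a):
    return sum(a) / len(a)

def var(a):
    m = mean(a)
    return sum((x - m) ** 2 for x in a) / len(a)
```

With these parameters the binned mean agrees with the intrinsic one within statistical error, while the binned variance exceeds the intrinsic variance by an amount of order $\Delta^{2}/6$.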
Our scheme is described by a positive operator-valued measure~\cite{Davies:1970ux,Kraus:1971wd} (POVM) measurement characterized by two measurement parameters, ${\cal{}T}$ and $\Delta$. It is noteworthy that the scheme reduces to that of Esposito {\it et al}.~\cite{Esposito:2009zz} and the full counting statistics proposed by Levitov and Lesovik~\cite{Levitov:1993ma,Levitov:1996ie} in the case of $\Delta=1$ with ${\cal{}T}$ long in comparison with the characteristic time scale of the transport in the target system. \begin{figure}[tb] \begin{center} \includegraphics[width=40mm]{fig1.eps} \end{center} \caption{(color online). Schematic illustration of resonant level model. The $\varepsilon_{0}$-level is coupled to two reservoirs A and B between which the bias voltage $V$ is applied. $\mathit{\Gamma}_{\rm A(B)}$ denotes the characteristic frequency of the electron transfer between the level and the reservoir A(B). $\mu_{\rm A(B)}$ represents the chemical potential of the reservoir A(B). We take $\mu_{\rm{}A}=0$ and $\mu_{\rm{}B}=eV$. We introduce $\mathit{\Gamma}^{-1}\equiv[(\mathit{\Gamma}_{\rm{}A}+\mathit{\Gamma}_{\rm{}B})/2]^{-1}$ and $r\equiv\mathit{\Gamma}_{\rm{}A}\mathit{\Gamma}_{\rm{}B}/\mathit{\Gamma}^2$ as the characteristic time scale and the degree of asymmetry of the couplings, respectively.} \label{fig1} \end{figure} Since the available information depends on the measurement device, it is important to explain what our intended device is. As a model for actual galvanometers, Levitov and Lesovik introduced a precessing 1/2 spin, which measures a current indirectly via the induced magnetic field~\cite{Levitov:1996ie}: The precession angle is proportional to the net charge transferred near the spin during a measurement time, $\cal{}T$.
Our scheme is, therefore, expected to take into account the essence of a conventional current-measuring device including the function of a galvanometer, which requires at least $\Delta$ electrons during a time ${\cal T}$ to work. Note that in our scheme, most of the electrons can move without disturbance by projection during the measurement because $\cal{}T$ is usually much longer than the microscopic time scale of electrons. In contrast to the conventional current measurement, a newly developing charge-sensing device, a quantum-point-contact detector, works in a different way and gives us a real-time detection of a charge state by projecting the system to the charge diagonal state~\cite{Naaman:2006,Utsumi:2010,Fujisawa:2006jf,Gustavsson:2006jm,Gustavsson:2007,Kung:2012ct}. Namely, our scheme describes the conventional current measurement device but not the newly developing one. \begin{figure}[tb] \begin{center} \includegraphics[width=86mm]{fig2.eps} \end{center} \caption{ (color online). Ratio of excess and intrinsic noises $\langle\mathit{\Delta}S\rangle_{\delta}/S_{0}$ in the thermal equilibrium state ($V=0$) as a function of $Q\equiv S_{\rm{}M}/S_{0}$ for several choices of (${\cal{}T},\Delta$), where $S_{\rm{}M}\equiv(e\Delta)^2/{\cal{}T}$. The other parameters are fixed at $\varepsilon_0=0$ and $r=1$. The black solid line indicates the universal exponential $A\exp[-\gamma/Q]$, with $A=2$ and $\gamma=(2\pi)^2$ estimated from Eq.~\eqref{eq:estimation_exponential}. The dashed line represents the square root dependence $B\sqrt{Q}$, where $B=1/\sqrt{4\pi}$ is determined from Eq.~\eqref{eq:estimation_squareroot}. The inset shows the linear dependence of the logarithm of the ratio on $Q^{-1}$.
} \label{fig2} \end{figure} Applying the extended two-point measurement scheme to the current through a resonant level depicted in Fig.~\ref{fig1}, we show that the limited resolution gives rise to the departure of the measured noise $S$ from the intrinsic one $S_{0}$ while the measured current $I$ is unchanged at $I_{0}$. The excess noise, $\langle\mathit{\Delta}S\rangle_{\delta}=S-S_0$, is positive and shows an anomalous temperature dependence, which can make the usual empirical method of noise calibration~\cite{DiCarlo:2006} unjustified~\cite{Hashisaka:2008ef,Nakamura:2010hn,Webb:1973ej}. Note that $\langle\mathit{\Delta}S\rangle_{\delta}$ is explicitly evaluated by using Eq.~\eqref{eq:averagednoise}. Hence, the J-N relation can be violated between the measured noise $S$ and measured conductance $G\equiv\lim_{V\to0}dI/dV$ in practical cases, which causes a discrepancy between $T_{\rm JN}$ and $T_{\rm ref}$ at low temperatures. The deviation from the J-N relation between $S$ and $G$ caused by the limited resolution is represented by \begin{equation} \frac{S|_{V=0}}{2k_{\rm{}B}TG}-1=\frac{\langle\mathit{\Delta}S\rangle_{\delta}}{S_{0}}\Big|_{V=0}\ge{}0.\label{eq:deviationfromJN} \end{equation} It is remarkable that the ratio of noises obeys a scaling law with the scaling variable $Q\equiv S_{\rm{}M}/S_{0}$, as seen in Fig.~\ref{fig2}, where $S_{\rm{}M}\equiv(e\Delta)^2/{\cal{}T}$ is the characteristic noise determined solely from the measurement scheme. The scaling function increases from zero to unity: for $Q<1$ it exhibits a universal exponential dependence with an essential singularity at $Q=0$, and it crosses over to an algebraic increase or a constant for $Q>1$. From the scaling law, we find that $S_0$ is not detectable in noise experiments when $S_0$ is much smaller than $S_\textrm{M}$, $Q\gg1$.
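The scaling variable is easily evaluated for a given measurement. A sketch with hypothetical numbers, writing $\Delta_{0}\equiv\sqrt{S_0{\cal{}T}}/e$ for the standard deviation of the ideally counted particle number, so that $Q=S_{\rm{}M}/S_0=(\Delta/\Delta_{0})^2$:

```python
e = 1.602176634e-19  # elementary charge [C]
S0 = 1e-27           # hypothetical intrinsic noise [A^2/Hz]
calT = 1e-3          # measurement time [s]
Delta = 1e4          # hypothetical resolution in units of electrons

S_M = (e * Delta) ** 2 / calT    # characteristic measurement noise
Delta0 = (S0 * calT) ** 0.5 / e  # ideal-resolution standard deviation
Q = S_M / S0                     # scaling variable
```

With these (hypothetical) values, $\Delta_{0}\approx 6\times10^{3}$ electrons and $Q\approx 2.6$, i.e. the crossover region of Fig.~\ref{fig2}.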
Since $\Delta_{0}\equiv\sqrt{S_0\mathcal{T}}/e$ is the standard deviation of the transferred particle number counted with the ideal resolution, it characterizes the typical number of particles involved in the measurement at $V=0$. The enhanced deviation for large $Q=(\Delta/\Delta_{0})^2$ is, therefore, consistent with our intuition that the resolution error of noise should be more pronounced when only a few particles are involved. Although the above discussion of scaling is based on the specific model, essentially the same scaling relation is expected to be satisfied for an arbitrary mesoscopic conductor coupled to normal reservoirs, as will be discussed in Sec. IV. The experimental anomalous enhancement of noise at low temperatures can be understood by the scaling behavior: the excess noise, irrelevant at high temperatures, becomes pronounced at low temperatures because $Q$ increases with decreasing temperature. Note that there are other known noise sources that cause a violation of the J-N relation, e.g., the background noise. The noises coming from these sources, however, can be calibrated with the empirical method because their trivial temperature dependences are accounted for by circuit theory~\cite{DiCarlo:2006}, and they do not explain the observed discrepancy between $T_{\rm{}JN}$ and $T_{\rm{}ref}$. The plan of the paper is as follows. In Sec. II we formulate the resolution of the current measurement exploiting the two-point measurement, and obtain a formula which describes the characteristic function of the distribution of the transferred particle number counted with limited resolution. In Sec. III, we apply the formula to the resonant level model and calculate the measured current and measured noise analytically. Section IV gives the numerical calculations of the intrinsic and excess noises in the thermal equilibrium state and the linear response of the current. Section V is devoted to the comparison between theory and experiment.
It is clarified that our results are consistent with the experiments and may account for the difference between $T_{\rm{}JN}$ and $T_{\rm{}ref}$. A summary and conclusions of our work are given in Sec. VI. \section{Formalism of Current Measurement with Limited Resolution} In this section, we formulate the two-point measurement statistics under a limited resolution for the steady-state current through a reservoir (lead) in a multi-terminal mesoscopic system that consists of a conductor connected to multiple reservoirs. The system is described by the following general Hamiltonian, \begin{align} \hat{\cal{}H}(t)=\hat{H}_{0}+\hat{V}(t), \label{eq:Hamiltonian} \end{align} where \begin{align} \hat{H}_{0}&=\hat{H}_{\rm{}con}+\sum_{\rm X=A,B,\cdots}\hat{H}_{\rm{}X},\\ \hat{V}(t)&=\sum_{\rm X=A,B,\cdots}\hat{V}_{\rm{}X}\theta(t). \end{align} Here $\hat{H}_{\rm{}con}$ and $\hat{H}_{\rm{}X}$ denote the Hamiltonians of the conductor and the reservoir X, respectively, $\hat{V}_{\rm{}X}$ is the hopping matrix between the reservoir X and the conductor, and $\theta(t)$ is the step function. The current is observed as the net change of particle number in the reservoir A from $t=0$ to $t={\cal T}$. Before the current measurement, it is assumed that the conductor is disconnected for $t\le 0$ from all of the reservoirs, which are in isolated thermal equilibrium states with different chemical potentials.
Then, the density matrix at $t=0$ is given by \begin{align} \hat{\rho}(0)&=\hat{\rho}_{{\rm con}}^{0}\otimes\frac{\exp[-\beta(\hat{H}_{{\rm A}}-\mu_{{\rm A}}\hat{N}_{{\rm A}})]}{\textrm{Tr}\Big[\exp[-\beta(\hat{H}_{{\rm A}}-\mu_{{\rm A}}\hat{N}_{{\rm A}})]\Big]}\notag\\ &\quad\otimes \frac{\exp[-\beta(\hat{H}_{{\rm B}}-\mu_{{\rm B}}\hat{N}_{{\rm B}})]}{\textrm{Tr}\Big[\exp[-\beta(\hat{H}_{{\rm B}}-\mu_{{\rm B}}\hat{N}_{{\rm B}})]\Big]}\otimes \cdots, \end{align} where $\hat{N}_{{\rm X}}$ is the total number operator of the reservoir X that commutes with $\hat{H}_{\rm X}$, $\beta\equiv 1/k_{\rm{}B}T$ is the inverse temperature of the system, $\hat{\rho}_{{\rm con}}^{0}$ is the initial density matrix of the conductor, and $\mu_{\rm X}$ represents the chemical potential of the reservoir X. Since the reservoir A is isolated for $t\le 0$, the particle number of the reservoir A takes a constant value, $N_{{\rm A}}^0$, which is the initial particle number of the reservoir A at $t=0$: $\hat{\rho}(0)\hat{N}_{{\rm A}}=N_{{\rm A}}^{0}\hat{\rho}(0)$. It is noteworthy that any number of channels of the reservoir and any interaction of the conductor, e.g. Coulomb interaction, can be dealt with in this model. Our measurement scheme is a simple extension of that proposed by Esposito, Harbola, and Mukamel~\cite{Esposito:2009zz}. Note that in Ref. \onlinecite{Esposito:2009zz}, the full counting statistics is reformulated using superoperators in Liouville space, which is convenient for a simple description of the current measurement scheme. We here, however, use ordinary operators in Hilbert space for the convenience of general readers. The indirect measurement of current flowing into the reservoir A via the induced magnetic field can be described by the measurement of the number of electrons flowing into reservoir A during a measurement time, $\cal{}T$.
Esposito, Harbola, and Mukamel calculated the probability that the change in the particle number of the reservoir A during a measurement time ${\cal{}T}$ is equal to $k$ with the following two-point measurement, \begin{equation} {\cal P}^{\rm{}EHM}(k;{\cal T})= \sum_{l}\textrm{Tr}[\hat{P}_{l+k}\hat{U}({\cal T},0)\hat{P}_{l}\hat{\rho}(0)\hat{P}_{l}\hat{U}^\dagger({\cal T},0)\hat{P}_{l+k}], \label{eq:P_EHM} \end{equation} where $\hat{P}_{k}\equiv|k\rangle\langle{}k|$ is the projection operator associated with the particle number operator of the reservoir A, $\hat{N}_{A}=\sum_{k}k|k\rangle\langle{}k|$, with eigenvalue $k$, and $\hat{U}(t,t')\equiv\breve{T}\exp\big[-\frac{i}{\hbar} \int_{t'}^{t}\hat{\cal H}(t_1) dt_1\big]$ denotes the time-evolution operator. They showed that the cumulant generating function of ${\cal P}^{\rm{}EHM}(k;{\cal T})$ is equal to the one obtained in the full counting statistics in the case of ${\cal T}\mathit{\Gamma}\gg1$. From the viewpoint of quantum measurement theory, the measurement can be described by the POVM formalism, \begin{equation} {\cal{}P}^{\rm{}EHM}(k;{\cal{}T})={\rm{}Tr}[\hat{D}_{k}^{\rm{}EHM}({\cal{}T})\hat{\rho}(0)], \end{equation} where the operators $\hat{D}_{k}^{\rm{}EHM}({\cal{}T})$ are the POVM elements defined by $\hat{D}_{k}^{\rm{}EHM}({\cal{}T})\equiv \sum_{l}\hat{M}_{k,l}^{\rm{}EHM\dagger}({\cal T})\hat{M}_{k,l}^{\rm{}EHM}({\cal T})$ where \begin{equation} \hat{M}_{k,l}^{\rm{}EHM}({\cal T})\equiv \hat{P}_{l+k}\hat{U}({\cal T},0)\hat{P}_{l}. \end{equation} In their calculation, the outcome of ${\cal P}^{\rm{}EHM}(k;{\cal T})$, $k$, can be any integer, which implies that the measurement device can detect the change of even a single electron during ${\cal{}T}$. That is, however, unrealistic. The ultimately high resolution is attributed to the projective part of the measurement, $\hat{P}_{k}$.
We implement the limitation of the resolution by introducing the smallest detectable number of electrons, $\Delta$, and replacing $\hat{P}_{k}$ with a projection operator $\hat{P}_{k}(\Delta)$ defined by \begin{equation} \hat{P}_{k}(\Delta)\equiv \int_{\chi_{k}-\frac{\Delta}{2}}^{\chi_{k}+\frac{\Delta}{2}}dx \delta(x -\hat{N}_{A}). \end{equation} Here, $\chi_k\equiv\chi_{0}+k\Delta-\eta$. $\chi_0$ and $\eta$ denote the zero-point deviation of the particle-number measurement and a positive infinitesimal, respectively. $\hat{P}_{k}(\Delta)$ satisfies $\hat{P}_{k}(\Delta)\hat{P}_{l}(\Delta)=\delta_{k,l}\hat{P}_{k}(\Delta)$ and projects a state onto the subspace spanned by the eigenvectors belonging to the eigenvalues of $\hat{N}_{\rm{}A}$ which satisfy $\chi_{k}-\frac{\Delta}{2}\le N_{A} <\chi_{k}+\frac{\Delta}{2}$. $\Delta$, therefore, represents the resolution of the particle-number measurement of the reservoir A and sets the scale unit of the outcome. Using the projection operators, the probability that the particle number change of the reservoir A during ${\cal T}$ is equal to $k\Delta$, ${\cal P}(k;{\cal T},\Delta)$, is obtained from \begin{equation} {\cal P}(k;{\cal T},\Delta)= \textrm{Tr}[\hat{D}_{k}({\cal T},\Delta)\hat{\rho}(0)], \label{eq:P} \end{equation} where $\hat{D}_{k}({\cal T},\Delta)\equiv \sum_{l} \hat{M}_{k,l}^{\dagger}({\cal T},\Delta)\hat{M}_{k,l}({\cal T},\Delta)$ are POVM~\cite{Davies:1970ux,Kraus:1971wd} elements. The operators $\hat{M}_{k,l}({\cal T},\Delta)$ are defined by the following equation: \begin{equation} \hat{M}_{k,l}({\cal T},\Delta)\equiv \hat{P}_{l+k}(\Delta)\hat{U}({\cal T},0)\hat{P}_{l}(\Delta). \end{equation} Note that although, in this paper, we consider the particle flow with the two-point measurement statistics with a limited resolution, our definition of resolution is easily extended and can be applied to the measurement of other physical quantities such as the heat current.
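The algebra of the binning projectors is easy to verify explicitly. The sketch below builds the (diagonal) matrices of $\hat{P}_{k}(\Delta)$ on a truncated particle-number basis $N=0,\ldots,23$ for $\Delta=4$ and $\chi_{0}=0$ (the truncation and parameter values are ours) and checks that every eigenvalue falls in exactly one bin, i.e. orthogonality and completeness:

```python
dim, Delta, chi0 = 24, 4, 0.0  # truncated basis size, resolution, zero point

def in_bin(N, k):
    # eigenvalue N belongs to bin k if chi_k - Delta/2 <= N < chi_k + Delta/2
    chi_k = chi0 + k * Delta
    return chi_k - Delta / 2 <= N < chi_k + Delta / 2

ks = range(dim // Delta + 1)
P = {k: [1 if in_bin(N, k) else 0 for N in range(dim)] for k in ks}

# each eigenvalue is captured by exactly one projector:
hits = [sum(P[k][N] for k in ks) for N in range(dim)]
```

Hence $\hat{P}_{k}(\Delta)\hat{P}_{l}(\Delta)=\delta_{k,l}\hat{P}_{k}(\Delta)$ and $\sum_{k}\hat{P}_{k}(\Delta)=1$ on the truncated space.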
In that case, the resolution could be more significant because there is no a priori unit of the measurement. For the calculation of the average and the variance of the current, it is useful to consider the characteristic function of the probability defined by ${\cal M}(\lambda;{\cal T},\Delta)\equiv \sum_{k}\exp[i \lambda k]{\cal P}(k;{\cal T},\Delta)$. After some calculation, the characteristic function is written as \begin{align} &{\cal M}(\lambda;{\cal T},\Delta)\notag\\ &=\sum_{m=-\infty}^{\infty}\mathrm{sinc}(\frac{\lambda+2\pi m}{2})\exp[i2\pi m \frac{\delta}{\Delta}]{\cal M}_{0}(\frac{\lambda+2\pi m}{\Delta}, {\cal T}), \label{eq:momentGF} \end{align} where \begin{equation} {\cal M}_{0}(\lambda; {\cal T})\equiv \textrm{Tr}[\hat{U}^\dagger({\cal T},0;-\frac{\lambda}{2})\hat{U}({\cal T},0;\frac{\lambda}{2}) \hat{\rho}(0)], \end{equation} \begin{equation} \delta \equiv N_{{\rm A}}^0-\chi_{0} \bmod \Delta \quad (0 \le \delta < \Delta). \end{equation} $\hat{U}(t,t';\lambda)\equiv \breve{T}\exp[-i/\hbar \int_{t'}^{t}\hat{\cal H}(t_1;\lambda) dt_1]$ is the modified time evolution operator with the counting field $\lambda$ where $\hat{\cal H}(t;\lambda)\equiv \exp[i\lambda\hat{N}_{{\rm A}}]\hat{\cal H}(t)\exp[-i\lambda\hat{N}_{{\rm A}}]$, and $\mathrm{sinc}(x)\equiv \sin(x)/x$. Note that in the above calculation, we ignore a constant factor of ${\cal M}(\lambda;{\cal T},\Delta)$ which does not affect our final results. In Eq.~\eqref{eq:momentGF}, all the detailed information of the target system is included in ${\cal M}_{0}(\lambda; {\cal T})$, which is the characteristic function of the distribution of the transferred particle number in the ideal resolution case. Equation \eqref{eq:momentGF} represents, therefore, the general formula of the characteristic function of the transferred particle number counted with the limited resolution.
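Equation~\eqref{eq:momentGF} can be probed numerically with a toy intrinsic characteristic function. The sketch below uses a Gaussian ${\cal M}_{0}(\lambda)=\exp[i\mu\lambda-\sigma^{2}\lambda^{2}/2]$ (our choice, not the physical one derived in the next section), performs the $\delta$-average of $\ln{\cal M}$ on a grid, and extracts the first two cumulants by finite differences:

```python
import cmath, math

mu, sigma, Delta = 2.0, 1.0, 10.0  # toy intrinsic mean, spread, and resolution

def M0(x):
    # toy Gaussian intrinsic characteristic function (our choice)
    return cmath.exp(1j * mu * x - 0.5 * (sigma * x) ** 2)

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

def M(lam, delta, mmax=25):
    # Eq. (momentGF): sinc-weighted sum over shifted copies of M0
    s = 0j
    for m in range(-mmax, mmax + 1):
        s += (sinc((lam + 2 * math.pi * m) / 2)
              * cmath.exp(2j * math.pi * m * delta / Delta)
              * M0((lam + 2 * math.pi * m) / Delta))
    return s

def L(lam, ngrid=400):
    # delta-averaged logarithm of M (quenched average on a midpoint grid)
    tot = sum(cmath.log(M(lam, (i + 0.5) * Delta / ngrid)) for i in range(ngrid))
    return tot / ngrid

# cumulants of n = k*Delta by finite differences, using L(-h) = conj(L(h))
h = 1e-2
mean_n = Delta * L(h).imag / h
var_n = -2.0 * Delta ** 2 * L(h).real / h ** 2
```

One finds ${\cal M}(0;{\cal T},\Delta)=1$ (normalization), a mean equal to the intrinsic $\mu$, and a variance strictly larger than $\sigma^{2}$; the excess is the resolution error computed analytically in the next section.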
\section{Application to Resonant Level Model and Random Averaging} To proceed with the concrete calculation, we apply the above formal result to the resonant level connected to two noninteracting reservoirs (see Fig.~\ref{fig1}). The Hamiltonian of the resonant level model, which consists of a resonant level $\varepsilon_{0}$ coupled to two reservoirs A and B, is represented by Eq. \eqref{eq:Hamiltonian} with $\hat{H}_{0}=\hat{H}_{\rm{}A}+\hat{H}_{\rm{}B}+\hat{H}_{\rm{}sys}$, $\hat{V}(t)=\hat{V}_{\rm{}A}\theta(t)+\hat{V}_{\rm{}B}\theta(t)$, $\hat{H}_{\rm{}sys}=\varepsilon_{0}\hat{d}^\dagger\hat{d}$, $\hat{H}_{\rm{}X}=\sum_{x{}\in{\rm{}X}}\varepsilon_{x}^{\rm{}X}\hat{c}_{x}^\dagger\hat{c}_{x}$, and $\hat{V}_{X}=\sum_{x{}\in{\rm{}X}}(t_{\rm{}X}\hat{d}^\dagger\hat{c}_{x}+{\rm{}H.c.})$ for ${\rm{}X}={\rm{}A},{\rm{}B}$. Here, $\hat{d}^{\dagger}$ creates a spinless electron with charge $e$ at the resonant level $\varepsilon_{0}$, while $\hat{c}_{x\in\text{X}}^{\dagger}$ denotes the creation operator of a spinless electron at a wave number $x$ in the reservoir X=A or B, with a constant density of states $\rho_{\text{X}}$. The resonant level is coupled to the reservoir X with a hybridization $t_{\text{X}}$, where the characteristic transport frequency $\mathit{\Gamma}_X$ is given by $\mathit{\Gamma}_{\rm{}X}=2\pi|t_{\rm{}X}|^2\rho_{\rm{}X}/\hbar$. The chemical potentials of the reservoirs take different values, $\mu_{B}=eV$ and $\mu_{A}=0$, because of the bias voltage $V$ applied between the reservoirs. We note that though the reservoir A is used for the two-point measurement, the choice of the reservoir does not influence our results in this two-terminal case. To obtain the stationary current distribution, ${\cal{}T}$ is assumed to be much longer than the characteristic time scale of the electrons determined by $\mathit{\Gamma}^{-1}\equiv[(\mathit{\Gamma}_{\rm{}A}+\mathit{\Gamma}_{\rm{}B})/2]^{-1}$ but finite.
This model can be considered a simple model of a quantum dot coupled to two reservoirs, which is one of the typical nanosystems where noise measurements have been conducted at very low temperatures in experimental studies~\cite{Zarchin:2008gq,Delattre:2009,Yamauchi:2011cq}. In addition, our model in the equilibrium state with $k_{\rm{}B}T/\hbar\mathit{\Gamma}\ll1$ also describes a single-channel quantum point contact (QPC)~\cite{Hashisaka:2008ef} where the transmission probability is given by $r/[(\varepsilon_0/\hbar\mathit{\Gamma})^2+1]$. Here, $r\equiv\mathit{\Gamma}_{\rm{}A}\mathit{\Gamma}_{\rm{}B}/\mathit{\Gamma}^2$ represents the coupling asymmetry. Since it is described by forward and backward time evolutions obeying the different modified Hamiltonians $\hat{\cal H}(t;\pm\lambda/2)$, ${\cal M}_{0}(\lambda;{\cal{}T})$ in Eq.~\eqref{eq:momentGF} is adequately evaluated using the Keldysh Green's function method~\cite{Kindermann:2003th,Kamenev:2005vu}. ${\cal T}\mathit{\Gamma}\gg1$ is necessary for measuring the stationary current statistics. The leading order in ${\cal T}$ of the logarithm of ${\cal M}_{0}(\lambda; {\cal T})$ is evaluated as \begin{align} \ln{\cal M}_{0}(\lambda; {\cal T})={\cal T}\mathit{\Gamma}{\cal C}_{0}(\lambda)+o(\mathcal{T}) \label{eq:longtimeapproximation} \end{align} where \begin{align} {\cal C}_{0}(\lambda)&\equiv\int_{-\infty}^{\infty}\frac{dx}{2\pi}\ln\Big[1+T(x)\big[(\exp[i\lambda]-1)[1-f_{\rm A}(x)]f_{\rm B}(x)\notag\\ &\quad+(\exp[-i\lambda]-1)f_{\rm A}(x)[1-f_{\rm B}(x)]\big]\Big] \end{align} is the cumulant generating function of current obtained with the Levitov-Lesovik formula~\cite{Levitov:1993ma,Esposito:2009zz}. It is noteworthy that the steady state current statistics is determined solely from the leading order. Hence we omit the sub-leading order terms that describe the approach from the disconnected state at $t=0$ to the connected state where the steady state current flows.
Here $T(x)\equiv r/[(x-\varepsilon_{0}/\hbar\mathit{\Gamma})^2+1]$ denotes the transmission probability of the system, and $f_{\rm X}(x)\equiv [\exp[\beta\hbar\mathit{\Gamma}(x-\mu_{\rm X}/\hbar\mathit{\Gamma})]+1]^{-1}$ is the Fermi-Dirac distribution function for the reservoir $\rm X$. We then obtain the following asymptotic form of the characteristic function: \begin{align} {\cal M}(\lambda;{\cal T},\Delta)&= \sum_{m=-\infty}^{\infty}\mathrm{sinc}(\frac{\lambda+2\pi m}{2})\exp[i2\pi m \frac{\delta}{\Delta}]\notag\\ &\quad\times\exp\Big[{\cal T}\mathit{\Gamma}{\cal C}_{0}(\frac{\lambda+2\pi m}{\Delta})\Big]. \label{eq:characteristic} \end{align} In Eq.~(\ref{eq:characteristic}), ${\cal M}(\lambda;{\cal T},\Delta)$ depends on $\delta$, which means that we can in principle distinguish each specific initial state with the ideal resolution. The distinction, however, blurs in actual experiments. To take into account the actual resolution limit for the initial preparation, we take a simple average over $\delta$ for $\ln{\cal M}(\lambda;{\cal T},\Delta)$ as \begin{equation} \langle {\cdots}\rangle_{\delta}\equiv \int_{0}^{\Delta}\frac{d\delta}{\Delta} \cdots. \end{equation} We assume that the $\delta$-averaging appropriately simulates actual current measurements because it is hardly possible that the current is repeatedly measured under an identical condition with a fixed $\delta$. In other words, the $\delta$-averaging of the logarithm of ${\cal M}(\lambda;{\cal T},\Delta)$ is analogous to the random average in quenched random systems. Accordingly, the cumulant generating function of the particle current in the long time measurement is given by \begin{equation} {\cal{}C}_{I}(\lambda;{\cal{}T},\Delta)=\frac{\partial\langle\ln{\cal{}M}(\lambda;{\cal{}T},\Delta)\rangle_\delta}{\partial\cal{}T}. \label{eq:CGF} \end{equation} In the case of $\Delta=1$, the cumulant generating function in Eq.
(\ref{eq:CGF}) is identical to that obtained in the previous study, ${\cal{}C}_{I}(\lambda;{\cal{}T},1)={\cal{}C}_{0}(\lambda)$~\cite{Esposito:2009zz}. Here, we focus on the averaged current $I$ and the noise $S$ measured by the above measurement scheme. By differentiating the cumulant generating function ${\cal{}C}_{I}(\lambda;{\cal{}T},\Delta)$ with respect to $\lambda$, we evaluate $I$ and $S$ as \begin{equation} I=e\Delta\frac{\partial {\cal C}_{I}(\lambda,{\cal T},\Delta)}{\partial (i\lambda)}\Big|_{\lambda=0}= I_{0}+\langle \mathit{\Delta} I \rangle_\delta, \label{eq:observedcurrent} \end{equation} \begin{equation} S=e^2\Delta^2\frac{\partial^2 {\cal C}_{I}(\lambda,{\cal T},\Delta)}{\partial (i\lambda)^2}\Big|_{\lambda=0}= S_{0}+\langle \mathit{\Delta} S \rangle_\delta, \label{eq:observednoise} \end{equation} where $I_{0}\equiv e\mathit{\Gamma}\partial {\cal C}_0(\lambda)/\partial(i\lambda)|_{\lambda=0}$ and $S_{0}\equiv e^2\mathit{\Gamma}\partial^2 {\cal C}_0(\lambda)/\partial(i\lambda)^2|_{\lambda=0}$ are the intrinsic current and the intrinsic noise obtained in the ideal measurement case of $\Delta=1$, respectively. $I_{0}$ and $S_{0}$ are determined only by the intrinsic parameters of the system and satisfy the J-N relation. The excess terms, attributed to the limited resolution measurement, can be evaluated as \begin{equation} \langle\mathit{\Delta}I\rangle_{\delta}=0, \label{eq:I0} \end{equation} and \begin{equation} \langle\mathit{\Delta}S\rangle_{\delta}=-\frac{e^2\mathit{\Gamma}\Delta^2}{2 \pi^2}\sum_{m\ge 1}\frac{\exp\big[{\cal T}\mathit{\Gamma}{\cal C}_{0}^{{\rm sym}}(\frac{2\pi m}{\Delta})\big]{\cal C}_{0}^{{\rm sym}}(\frac{2\pi m}{\Delta})}{m^2}. \label{eq:averagednoise} \end{equation} Here we define ${\cal C}_{0}^{{\rm sym}}(\lambda)\equiv{\cal C}_{0}(\lambda)+{\cal C}_{0}(-\lambda)$. Equation (\ref{eq:I0}) agrees with the naive intuition that the intrinsic current is correctly obtained in repeated measurements.
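The statement that $I_{0}$ and $S_{0}$ satisfy the J-N relation can be checked numerically. Differentiating ${\cal C}_{0}(\lambda)$ once and twice at $\lambda=0$ gives the familiar scattering expressions used in the sketch below (units $e=k_{\rm B}=\hbar\mathit{\Gamma}=1$; the parameter values are arbitrary):

```python
import math

# units: e = k_B = hbar*Gamma = 1; mu_A = 0, mu_B = V
def fermi(x, mu, beta):
    return 1.0 / (math.exp(min(beta * (x - mu), 700.0)) + 1.0)

def cumulants(V, beta, eps0=0.5, r=0.8, xmax=50.0, n=20001):
    # first two cumulants of C_0(lambda) at lambda = 0:
    #   I0 = int dx/(2 pi) T (fB - fA)
    #   S0 = int dx/(2 pi) [T (fA(1-fA) + fB(1-fB)) + T(1-T)(fB-fA)^2]
    dx = 2.0 * xmax / n
    I0 = S0 = 0.0
    for i in range(n):
        x = -xmax + (i + 0.5) * dx
        T = r / ((x - eps0) ** 2 + 1.0)  # Lorentzian transmission probability
        fA, fB = fermi(x, 0.0, beta), fermi(x, V, beta)
        I0 += T * (fB - fA)
        S0 += T * (fA * (1 - fA) + fB * (1 - fB)) + T * (1 - T) * (fB - fA) ** 2
    return I0 * dx / (2.0 * math.pi), S0 * dx / (2.0 * math.pi)

beta, dV = 2.0, 1e-3
G0 = (cumulants(dV, beta)[0] - cumulants(-dV, beta)[0]) / (2.0 * dV)
S0_eq = cumulants(0.0, beta)[1]
ratio = S0_eq / (2.0 * (1.0 / beta) * G0)  # Johnson-Nyquist: should be 1
```

The computed ratio $S_{0}|_{V=0}/(2k_{\rm B}TG_{0})$ equals unity within the numerical accuracy, as required by Eq.~\eqref{eq:JNrelation}.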
Note that $\langle \mathit{\Delta} S \rangle_\delta$ depends on the measurement parameters, ${\cal T}$ and $\Delta$, as well as the parameters of the system. From this result, it is found that the limited resolution does not affect the average of the current, which means that our measurement scheme is unbiased. In addition, it is remarkable that the excess noise is always non-negative, \begin{equation} \langle \mathit{\Delta} S \rangle_\delta\ge 0, \label{eq:DeltaSPositive} \end{equation} because ${\cal C}_{0}^{{\rm sym}}(\lambda)\le{}0$. These results are general for any $V$. In the case of $\Delta=1$, since ${\cal C}_{0}^{\rm sym}(2\pi m)=0$, the excess noise obviously disappears, in accordance with our expectation that the measured noise and measured current satisfy the J-N relation in the ideal case. On the other hand, for large $\Delta$, Eq. (\ref{eq:averagednoise}) is evaluated as \begin{equation} \langle\mathit{\Delta} S\rangle_{\delta}\approx\frac{e^2\mathit{\Gamma}\Delta}{2 \pi^2}\int_{0}^{\infty}s(x,{\cal T})dx, \label{eq:lineardelta} \end{equation} where \begin{equation} s(x,{\cal T})\equiv -\frac{\exp\big[{\cal T}\mathit{\Gamma}{\cal C}_{0}^{{\rm sym}}(2\pi x )\big]{\cal C}_{0}^{{\rm sym}}(2\pi x)}{x^2}. \end{equation} Since $s(x,{\cal T})$ is independent of $\Delta$, the excess noise scales linearly with $\Delta$ for large $\Delta$. Here we explain the origin of the excess terms, $\langle\mathit{\Delta}I\rangle_{\delta}$ and $\langle\mathit{\Delta}S\rangle_{\delta}$. These terms can be regarded as the resolution error because they vanish at $\Delta=1$ and depend on the measurement parameters and $\delta$. Here, $\delta\equiv{}(N_A^0-\chi_0)\bmod{}\Delta$ represents the degree of freedom for the initial particle number of the reservoir A hidden in the limited resolution.
The vanishing excess current and the non-negative excess noise under the $\delta$-averaging therefore mean that our lack of knowledge of the initial conditions beyond the resolution cancels the excess current, i.e., causes no error on average, while it enhances the measured noise. Note that essentially the same equations, Eqs.~(\ref{eq:observedcurrent})-(\ref{eq:averagednoise}), can be obtained not only for the present resonant level model but also for general mesoscopic systems which obey the Hamiltonian \eqref{eq:Hamiltonian} when we assume that the leading time order of $\ln\mathcal{M}_{0}(\lambda;\mathcal{T})$ is proportional to ${\cal T}$. This assumption is physically sound when a steady state exists in the mesoscopic system because the transferred particle number during the measurement time $\mathcal{T}$ should be proportional to $\mathcal{T}$ for the long time measurement. The coefficient of the term proportional to $\mathcal{T}$ is given by the cumulant generating function of the steady-state current measured with the ideal resolution, as shown in Eq.~\eqref{eq:longtimeapproximation}. The assumption is valid even for quantum dot systems which include the Coulomb interaction effects~\cite{Bagrets:2003,Utsumi:2006} and the electron-phonon couplings~\cite{Avriller:2009}. \section{Numerical Results in the thermal equilibrium} \begin{figure}[tb] \begin{center} \includegraphics[width=86mm]{fig3.eps} \end{center} \caption{ (color online). Equilibrium excess noise $\langle\mathit{\Delta}S\rangle_{\delta}$ at $V=0$ as a function of temperature for $\varepsilon_0=0$ and $r=1$. The measurement parameters are fixed at ${\cal{}T}\mathit{\Gamma}=1000$ in (a) and $\Delta=10$ in (b).
The solid line indicates the equilibrium intrinsic noise, $S_{0}/e^2\mathit{\Gamma}$.} \label{fig3} \end{figure} Hereafter, we focus on the equilibrium noises and the linear conductance, $G\equiv\lim_{V\to0}dI/dV=G_0$, in the resonant level model to discuss the resolution effects on the J-N relation. For simplicity, $S$, $S_0$, and $\langle\mathit{\Delta}S\rangle_{\delta}$ always denote the measured, intrinsic, and excess noises at $V=0$, respectively. Figure \ref{fig3}(a) shows $\langle\mathit{\Delta}S\rangle_{\delta}$ as a function of temperature $T$ for several choices of $\Delta$. Let us first consider $\Delta<50$. With decreasing $T$, the excess noise $\langle\mathit{\Delta}S\rangle_{\delta}$ increases and shows a peak at a temperature $k_{\rm{}B}T<\hbar\mathit{\Gamma}$, where $S_{0}$ decreases proportionally to $T$. This means that the excess noise may appear only at sufficiently low temperatures, which leads to a difficulty in measuring the intrinsic noise in experiments. With an increase in $\Delta$, $\langle\mathit{\Delta}S\rangle_{\delta}$ is enhanced, and becomes pronounced even at high temperatures $k_{\rm{}B}T\gg\hbar\mathit{\Gamma}$. The measurement time ${\cal T}$ also affects $\langle\mathit{\Delta}S\rangle_{\delta}$, as seen in Fig.~\ref{fig3}(b), where $\langle\mathit{\Delta}S\rangle_{\delta}$ is suppressed with an increase in ${\cal{}T}$. The larger ${\cal{}T}$ is, therefore, the smaller the intrinsic noise we can access in experiments. To investigate the resolution effects on the J-N relation in more detail, we calculate the ratio of the excess and intrinsic noises, which characterizes the deviation from the J-N relation between $S$ and $G$, namely $\langle\mathit{\Delta}S\rangle_{\delta}/S_{0}$ in Eq.~\eqref{eq:deviationfromJN}, as has already been shown in Fig.~\ref{fig2}.
Figure~\ref{fig2} illustrates the ratio $\langle\mathit{\Delta}S{}\rangle_{\delta}/S_{0}$ as a function of a single non-dimensional positive parameter $Q=S_{\rm{}M}/S_{0}$ for several choices of (${\cal{}T}$, $\Delta$). A scaling behavior is found in the deviation from the J-N relation. All the curves collapse into a single one for $Q\ll10^2$, which is described by the exponential dependence \begin{align} \langle\mathit{\Delta}S\rangle_{\delta}/S_{0}=A\exp[-\gamma/Q]. \label{eq:exponentialscaling} \end{align} Above $Q\approx{}10^2$, there exists another single-parameter scaling described by \begin{align} \langle\mathit{\Delta}S\rangle_{\delta}/S_{0}=B\sqrt{Q}. \label{eq:squarerootscaling} \end{align} Here, $A$, $\gamma$, and $B$ are estimated as $A=2$, $\gamma=(2\pi)^2$, and $B=1/\sqrt{4\pi}$ from the analytical discussion in the later part of this section, below Eq.~\eqref{eq:generalexcessnoise}. All the curves then saturate at sufficiently high $Q$, and the saturated value is not universal but roughly scales with $\Delta$. The saturation occurs roughly at the crossing of $\Delta$ and $B\sqrt{Q}$, i.e., at $Q\approx{}4\pi\Delta^2$. The deviation, therefore, becomes serious at low temperatures and for low conductance satisfying $Q=S_{\rm{}M}/2k_{\rm{}B}TG>(2\pi)^2/\ln 2\simeq 56.96$, where $\langle\mathit{\Delta}S\rangle_{\delta}/S_{0}$ is estimated to be larger than unity by using Eq.~\eqref{eq:exponentialscaling}. On the other hand, it is negligible for $Q\ll(2\pi)^2/\ln 2$: for instance, it becomes less than $10^{-10}$ for $Q<1$. This result means that the direct detectability of $S_0$ in noise measurements with limited resolution depends only on $Q$. The intrinsic distribution of the transferred particles through a resonant level continuously changes with the change in the parameters of the system, e.g.,
Gaussian for $k_{B}T\to0$ in equilibrium at perfect transmission and bi-Poissonian for $r\to0$ in equilibrium with $\varepsilon_{0}=0$~\cite{Bagrets:2003,Levitov:2004}. However, the diversity of the distributions seems to be irrelevant to the scaling feature of the deviation from the J-N relation. Our calculation indeed shows the same exponential and square-root dependences, represented by the universal coefficients and exponent, even when we change the parameters of the system, implying that the scaling behavior is universal not only in this specific distribution but also in other types of distributions. To confirm our conjecture analytically, we use the following general cumulant generating function $\mathcal{C}_{\rm G}(\lambda)$, \begin{align} \mathcal{C}_{\rm G}(\lambda)\equiv \sum_{n=1}^{\infty}\frac{\kappa_{n}}{n!}(i\tilde{\lambda})^n, \label{eq:generalcumulant} \end{align} where $\tilde{\lambda}\equiv \lambda -2\pi\lfloor\lambda/2\pi+1/2\rfloor$ with $\lfloor\cdots\rfloor$ being the floor function. The periodicity of $\mathcal{C}_{\rm G}(\lambda)$ in $\lambda$ is crucial to ensure integer values of the transferred electron number. We assume that the average and the variance of the distribution are given by $\kappa_{1}=I_{0}/e\mathit{\Gamma}$ and $\kappa_{2}=S_{0}/e^2\mathit{\Gamma}$, respectively.
Substituting $\mathcal{C}_{\rm G}(\lambda)$ for $\mathcal{C}_{0}(\lambda)$ in Eq.~\eqref{eq:averagednoise} with $\Delta>2$, we obtain the following equation, \begin{align} &\langle\mathit{\Delta}S\rangle_{\delta}\notag\\ &=-\frac{(e\Delta)^2\mathit{\Gamma}}{2\pi^2}\sum_{m\ge 1}\frac{\exp\big[{\cal T}\mathit{\Gamma}{\cal C}_{\rm G}^{{\rm sym}}(\frac{2\pi m}{\Delta})\big]{\cal C}_{\rm G}^{{\rm sym}}(\frac{2\pi m}{\Delta})}{m^2}\notag\\ &=-\frac{e^2\mathit{\Gamma}}{2\pi^2}\Big[\sum_{1\le n<\frac{\Delta}{2}}\frac{\pi^2}{\sin^2(\frac{\pi n}{\Delta})}\exp\big[{\cal T}\mathit{\Gamma}{\cal C}_{\rm G}^{{\rm sym}}(\frac{2\pi n}{\Delta})\big]{\cal C}_{\rm G}^{{\rm sym}}(\frac{2\pi n}{\Delta})\notag\\ &\qquad+\delta_{\Delta \bmod 2,0}\frac{\pi^2}{2}\exp\big[{\cal T}\mathit{\Gamma}{\cal C}_{\rm G}^{{\rm sym}}(\pi)\big]{\cal C}_{\rm G}^{{\rm sym}}(\pi)\Big] \label{eq:generalexcessnoise} \end{align} where ${\cal C}_{\rm G}^{{\rm sym}}(\lambda)\equiv{\cal C}_{\rm G}(\lambda)+{\cal C}_{\rm G}(-\lambda)$. First, we derive the exponential form emerging for small $Q$. Using the expansion in Eq.~\eqref{eq:generalcumulant}, Eq.~\eqref{eq:generalexcessnoise} is given by \begin{align} &\langle\mathit{\Delta}S\rangle_{\delta}\notag\\ &=2S_{0}\sum_{1\le n<\frac{\Delta}{2}}[1+{\cal O}((\frac{n}{\Delta})^2)]\exp\big[-\frac{(2\pi n)^2}{Q}[1+{\cal O}((\frac{n}{\Delta})^2)]\big]\notag\\ &\quad+\delta_{\Delta \bmod 2,0}\frac{S_{0}\pi^2}{4}\exp\big[-\frac{(\pi\Delta)^2}{Q}[1-\frac{2\pi^2\kappa_{4}}{4!\kappa_{2}}+\cdots]\big]\notag\\ &\quad\times[1-\frac{2\pi^2\kappa_{4}}{4!\kappa_{2}}+\cdots]\notag\\ &\sim 2S_{0}\exp\big[-(2\pi)^2/Q\big] \quad (Q\ll1). \label{eq:estimation_exponential} \end{align} Hence, the deviation from the J-N relation for $Q\ll1$ is evaluated as Eq.~\eqref{eq:exponentialscaling} with $A=2$ and $\gamma=(2\pi)^2$, as already mentioned. Next, we derive the square root dependence for $Q\gg1$.
Since the square root dependence emerges only for $\Delta\gg1$, we evaluate the sum in Eq.~\eqref{eq:generalexcessnoise} by approximating it with an integral as \begin{align} &\langle\mathit{\Delta}S\rangle_{\delta}\notag\\ &\simeq-\frac{e^2\mathit{\Gamma}\Delta}{2\pi}\int_{0}^{\pi/2}\frac{dx}{\sin^{2}(x)}\exp\big[{\cal T}\mathit{\Gamma}{\cal C}_{\rm G}^{{\rm sym}}(2x)\big]{\cal C}_{\rm G}^{{\rm sym}}(2x)\notag\\ &= \frac{2e^2\mathit{\Gamma}\Delta\kappa_{2}}{\pi}\int_{0}^{\pi/2}dx\frac{x^2(1-\frac{2\kappa_{4}(2x)^2}{\kappa_{2}4!}+\cdots)}{\sin^{2}(x)}\notag\\ &\quad\times\exp\big[-{\cal T}\mathit{\Gamma}\kappa_{2}(2x)^2(1-\frac{2\kappa_{4}(2x)^2}{\kappa_{2}4!}+\cdots)\big]\notag\\ &\sim \frac{2e^2\mathit{\Gamma}\Delta\kappa_{2}}{\pi}\int_{0}^{\infty}dx\exp\big[-4{\cal T}\mathit{\Gamma}\kappa_{2}x^2\big] \quad(Q\ll2\pi^2\Delta^2)\notag\\ &= \sqrt{S_{\rm{}M}S_{0}/4\pi}. \label{eq:estimation_squareroot} \end{align} For $1\ll Q\ll2\pi^2\Delta^2$, the deviation from the J-N relation therefore follows Eq.~\eqref{eq:squarerootscaling} with $B=1/\sqrt{4\pi}$. The values $A=2$, $B=1/\sqrt{4\pi}$, and $\gamma=(2\pi)^2$ perfectly agree with our numerical results (see Fig.~\ref{fig2}). \begin{figure}[tb] \begin{center} \includegraphics[width=86mm]{fig4.eps} \end{center} \caption{(color online). Conductance $G$, intrinsic noise $S_{0}$, and excess noise $\langle\mathit{\Delta}S\rangle_{\delta}$ as functions of temperature $T$. The measurement parameters are fixed at ${\cal{}T}\mathit{\Gamma}=1000$ and $\Delta=10$. $r=1$ and several choices of $\varepsilon_{0}$ are used in (a), and $\varepsilon_{0}=0$ and several choices of $r$ are used in (b). The insets show the enlarged plots of $S_{0}/e^2\mathit{\Gamma}$.} \label{fig4} \end{figure} This proof supports the conclusion that the scaling functions, represented by the exponential dependence Eq.~\eqref{eq:exponentialscaling} and the square root dependence Eq.~\eqref{eq:squarerootscaling}, are universal irrespective of the details of the system.
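The two asymptotes can also be verified numerically. The sketch below evaluates Eq.~\eqref{eq:generalexcessnoise} for a purely Gaussian distribution ($\kappa_n=0$ for $n\ge3$; the parameter values are illustrative) and compares the ratio $\langle\mathit{\Delta}S\rangle_{\delta}/S_0$ with the asymptotes $2\exp[-(2\pi)^2/Q]$ and $\sqrt{Q/4\pi}$:

```python
import math

def ratio(TG_kappa2, Delta):
    # <Delta S>_delta / S0 from Eq. (generalexcessnoise) with a Gaussian
    # C_G^sym(lam) = -kappa2*lam^2 for |lam| <= pi; TG_kappa2 = T*Gamma*kappa2.
    total = 0.0
    for n in range(1, (Delta - 1) // 2 + 1):            # 1 <= n < Delta/2
        c_over_k2 = -(2.0 * math.pi * n / Delta) ** 2   # C_G^sym / kappa2
        total += (math.pi / math.sin(math.pi * n / Delta)) ** 2 \
                 * math.exp(TG_kappa2 * c_over_k2) * c_over_k2
    if Delta % 2 == 0:                                  # edge term at lam = pi
        total += 0.5 * math.pi**2 * math.exp(-TG_kappa2 * math.pi**2) * (-math.pi**2)
    return -total / (2.0 * math.pi**2)

TG_kappa2 = 300.0   # illustrative T*Gamma*kappa2
for Delta in (50, 1000):
    Q = Delta**2 / TG_kappa2        # Q = S_M/S0 = Delta^2/(T*Gamma*kappa2)
    print(Q, ratio(TG_kappa2, Delta),
          2.0 * math.exp(-(2.0 * math.pi) ** 2 / Q),    # small-Q asymptote
          math.sqrt(Q / (4.0 * math.pi)))               # large-Q asymptote
```

For $\Delta=50$ ($Q\approx 8.3$) the exact ratio matches $2\exp[-(2\pi)^2/Q]$ to better than a percent, while for $\Delta=1000$ ($Q\approx 3.3\times10^3$) it approaches $\sqrt{Q/4\pi}$, in line with the derivation above.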
Therefore, this scaling should hold in general mesoscopic systems that are described by the Hamiltonian~\eqref{eq:Hamiltonian}, e.g., the quantum dot system in the Coulomb blockade regime~\cite{Bagrets:2003,Utsumi:2006} and in the presence of energy dissipation by the electron-phonon coupling~\cite{Avriller:2009}. It also supports our expectation that the scaling does not directly depend on the internal system parameters specific to the present model. In the following, we examine the $\varepsilon_{0}$- and $r$-dependences of the conductance $G$, the intrinsic noise $S_{0}$, and the excess noise $\langle\mathit{\Delta}S\rangle_{\delta}$ in Fig.~\ref{fig4}. It is seen that all these transport quantities depend strongly on $\varepsilon_{0}$ and $r$. Since $G$ and $S_{0}$ are determined only by the system parameters, the characteristic temperature of those quantities is given by $\hbar\mathit{\Gamma}/k_{\rm{}B}$. For $k_{\rm{}B}T/\hbar\mathit{\Gamma}\ll 1$, $G$ takes a constant value and $S_{0}$ shows a simple linear dependence on $T$ expected from the J-N relation. In contrast, $\langle\mathit{\Delta}S\rangle_{\delta}$ shows a strong temperature dependence even for $k_{\rm{}B}T/\hbar\mathit{\Gamma}\ll 1$ because it also depends on the measurement parameters, $\cal{}T$ and $\Delta$. Though it is seemingly difficult to find a universal relation between the transport properties for the different values of $\varepsilon_{0}$ and $r$, the scaling behavior is again confirmed even in this case. In Fig.~\ref{fig5}(a), the universality of the exponential form is also found for $Q\ll10^2$. The saturation value of the deviation at high $Q$ stays on the order of $\Delta$ but depends weakly on the system parameters.
Figure~\ref{fig5}(b) shows the $\Delta$ dependence of the saturation value at sufficiently high $Q\gg{}4\pi\Delta^2$, where the lower bound of the saturation value is found, \begin{equation} \lim_{Q\to\infty}\langle\mathit{\Delta}S\rangle_{\delta}/S_{0}\ge\Delta-1. \end{equation} Hence, $S$ is always larger than $S_{0}\Delta$ in the limit of $Q\to\infty$. \begin{figure}[tb] \begin{center} \includegraphics[width=86mm]{fig5.eps} \end{center} \caption{(color online). (a) Ratio of the excess and intrinsic noises at $V=0$, $\langle\mathit{\Delta}S\rangle_{\delta}/S_{0}$, as a function of $Q\equiv S_{\rm{}M}/S_{0}$, where $S_{\rm{}M}\equiv(e\Delta)^2/{\cal{}T}$, for several choices of ($\varepsilon_0,r$). ${\cal{}T}\mathit{\Gamma}=1000$ and $\Delta=10$ are used for the calculation. The black solid line indicates the universal exponential $A\exp[-\gamma/Q]$, with $A=2$ and $\gamma=(2\pi)^2$. The inset shows the linear dependence of the logarithm of the ratio on $Q^{-1}$. (b) Saturation value of the ratio of noises. The dashed line represents the lower bound of the saturation value.} \label{fig5} \end{figure} \section{Comparison between theory and experiment} In this section, we estimate realistic and presently accessible measurement parameters, ${\cal T}$ and $\Delta$, from an available measurement device. In our two-point measurement scheme, the current is obtained by measuring the net transferred particle number within the measurement time, ${\cal T}$. Although the averaged current is precisely measurable for any choice of the parameters as discussed above, the rigorous value is obtained only when the average is taken over infinitely many repetitions of the measurement. When we consider the case of a single measurement, however, the measurement parameters should set a limit on the available information about the current. If the current fluctuates with a frequency $f$, the detectability of the current must depend crucially on the measurement time $\cal T$.
For $2f>{\cal T}^{-1}$, we can hardly obtain a signal from a single measurement because the net transferred particle number within $\cal T$ is almost zero in our model. Therefore, we estimate the measurement time from the maximum detectable frequency in the actual single measurement, $f_{\rm max}$, as ${\cal T}=(2f_{\rm max})^{-1}$. In addition, the amplitude of the sinusoidal current with the frequency $f_{\rm max}$ is important for the detectability. $\Delta$ specifies the detectable difference of the particle numbers at the initial and final states in the two-point measurement. If the net change of the number is less than $\Delta$, we have no meaningful signal in the single measurement. Hence, the minimum amplitude of the detectable sinusoidal current, $I_{\rm min}$, with the frequency $f_{\rm max}$ in a single measurement gives an estimate of $\Delta$ as $\Delta=\int_{0}^{\cal T}I_{\rm{}min}\sin(2\pi f_{\rm max}t)dt/e=I_{\rm min}/e\pi f_{\rm max}$. \begin{figure}[tb] \begin{center} \includegraphics[width=86mm]{fig6.eps} \end{center} \caption{(color online). (a) Current noise at $V=0$ as a function of temperature $T$. The parameters are $\hbar\mathit{\Gamma}/k_{\rm{}B}=1{\rm{}K}$, $\varepsilon_{0}=0$, $r=1$, ${\cal{}T}=1\mu{\rm{}s}$, and $\Delta=130$, which leads to $S_{\rm{}M}=0.43$ $(10^{-27}{\rm{}A}^2{\rm{}Hz}^{-1})$. The dashed line indicates the line fitted to the measured noise from 50mK to 100mK, $aT+b$. $a=1.00$ ($10^{-27}{\rm{}A}^2{\rm{}Hz}^{-1}{\rm{}K}^{-1}$) is slightly smaller than the expected value for the intrinsic noise at low temperatures, $2k_{\rm{}B}e^2/h\simeq{}1.07$ ($10^{-27}{\rm{}A}^2{\rm{}Hz}^{-1}{\rm{}K}^{-1}$). $b=3.93$ (${}10^{-30}{\rm{}A}^2{\rm{}Hz}^{-1}$). (b) Noise temperature $T_{\rm{}JN}$ plotted versus $T$. The parameters are the same as those in (a).
The solid line shows $T_{\rm{}JN}=T$.} \label{fig6} \end{figure} In the actual measurement of current through a mesoscopic device, the signal is enhanced via an amplifier because it is too weak to be directly measured with normal ammeters. Amplifiers have two significant parameters: the maximum detectable frequency, $f_{\rm amp}$, and the input current noise, $i_{n}$, which has the dimension of $\rm{}A/\sqrt{Hz}$. Since the precision of the current measurement is limited mainly by amplifiers, we connect our model parameters with those of an amplifier. Since the maximum frequency of the detectable current, $f_{\rm max}$, is supposed to be given by $f_{\rm amp}$, the measurement time is estimated as \begin{equation} {\cal T}=(2f_{\rm amp})^{-1}. \label{eq:estimationT} \end{equation} The input current noise limits the amplitude of the detectable current. To obtain meaningful information in a single measurement, the input sinusoidal current with a frequency of $f_{\rm amp}$ must have an amplitude larger than $i_{n}\sqrt{f_{\rm amp}}$, which leads to $I_{\rm min}=i_{n}\sqrt{f_{\rm amp}}$. Hence, we estimate $\Delta$ as \begin{equation} \Delta=i_{n}/e\pi\sqrt{f_{\rm amp}}. \label{eq:estimationD} \end{equation} More concretely, we estimate ${\cal T}$ and $\Delta$ from the amplifier CA-554F2 manufactured by NF Corporation in Japan. CA-554F2 is one of the best amplifiers on the market, with $f_{\rm amp}=500{\rm kHz}$ and $i_{n}=45{\rm fA/\sqrt{Hz}}$. Substituting these parameters into Eqs.~(\ref{eq:estimationT}) and (\ref{eq:estimationD}), we obtain ${\cal T}\simeq 1\mu{\rm s}$ and $\Delta\simeq 130$.~\cite{comments} In Fig.~\ref{fig6}(a), the measured and intrinsic noises are plotted versus the temperature for realistic model parameters, $\hbar\mathit{\Gamma}/k_{\rm{}B}=1{\rm{}K}$, $\varepsilon_{0}=0$, and $r=1$. Note that for $T\ll1{\rm{}K}$, the model effectively describes the single channel QPC with perfect transmission.
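The estimates in Eqs.~(\ref{eq:estimationT}) and (\ref{eq:estimationD}) amount to simple arithmetic, sketched below with the amplifier parameters quoted above (the rounding of $\Delta$ to 130 follows the text):

```python
import math

e = 1.602176634e-19          # elementary charge [C]
f_amp = 500e3                # maximum detectable frequency [Hz]
i_n = 45e-15                 # input current noise [A/sqrt(Hz)]

T = 1.0 / (2.0 * f_amp)                          # Eq. (estimationT): T = (2 f_amp)^-1
Delta = i_n / (e * math.pi * math.sqrt(f_amp))   # Eq. (estimationD)
print(T, Delta)              # ~1e-6 s and ~1.3e2

S_M = (e * 130) ** 2 / T     # resolution-limited noise scale (e*Delta)^2 / T
print(S_M)                   # ~0.43e-27 A^2/Hz, as quoted in Fig. 6(a)
```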
We use ${\cal{}T}=1\mu{\rm{}s}$ and $\Delta=130$. At temperatures higher than 50mK, $S$ shows a clear linear dependence on temperature and takes nearly the same value as $S_0$. In contrast, $S$ deviates from $S_0$ and develops a hump at temperatures below 50mK. These features are qualitatively consistent with the experiment~\cite{Hashisaka:2008ef}. Finally, we show the noise temperature under the realistic conditions. Because the noise temperature, $T_{\rm{}JN}$, is explicitly written as \begin{equation} T_{\rm{}JN}\equiv{}S/2k_{\rm{}B}G=T(1+\langle\mathit{\Delta}S\rangle_{\delta}/S_{0}), \end{equation} it is always larger than the thermodynamic temperature, $T$. Figure~\ref{fig6}(b) shows $T_{\rm{}JN}$ as a function of $T$ for the same parameters as those in Fig.~\ref{fig6}(a). The disagreement of $T_{\rm{}JN}$ with $T$ appears below 50mK, which is also consistent with the experiments~\cite{Hashisaka:2008ef,Nakamura:2010hn,Webb:1973ej}. This result indicates that the intrinsic temperature may not be obtained by Johnson noise thermometry at very low temperatures even if $T_{\rm{}JN}\simeq T_{\rm{}ref}$ holds at higher temperatures. \begin{figure}[tb] \begin{center} \includegraphics[width=80mm]{fig7.eps} \end{center} \caption{(color online). Schematic illustration of the departure from the Johnson-Nyquist relation for large $\Delta\gg1$. The universal departure starts at the essential singular point of the exponential function, $2\exp[-(2\pi)^2/Q]$, which is followed by the square root dependence $\sqrt{Q/4\pi}$.} \label{fig7} \end{figure} \section{Summary and Prospect} We summarize our findings in the schematic of Fig.~\ref{fig7}, where the universal departure from the J-N relation is characterized by the single parameter $Q$. The departure starts with a universal function characterized by an exponential form, $2 \exp[-(2\pi)^2/Q]$, when the ideal resolution is lost, starting from $Q=0$, where the function has an essential singularity.
It is then followed by the square root growth, $\sqrt{Q/4\pi}$. In addition to the present proposal, there exist other possible scenarios to explain the experimental anomalous noise enhancement, including heat leaks. The smoking gun for our proposal is whether the scaling behavior is satisfied or not, and it is desirable to test this in experiments. In this paper, we have focused on the J-N relation within the linear response. Even in the nonlinear regime, similar puzzles are known: the deviation from the fluctuation theorem~\cite{Nakamura:2010hn} and the discrepancy of the shot noise between theory and experiment~\cite{Yamauchi:2011cq}. The resolution effects may also give us a clue to resolve them. More generally, our results may indicate the necessity of amending naive accounts of resolution effects in widespread instruments based on the fluctuation-dissipation theorem, such as nuclear magnetic resonance, X-ray scattering, neutron scattering, and photoemission. \section*{ACKNOWLEDGEMENT} The authors are grateful to T.~Fujisawa, M.~Hashisaka, K.~Inaba, T.~Kato, K.~Kobayashi, S.~Morita, A.~Oguri, K.~Saito, R.~Sakano, Y.~Utsumi, Y.~Watanabe, and M.~Yamashita for fruitful discussions. This work is financially supported by Grants-in-Aid for Scientific Research (Nos. 22104010 and 22340090) from MEXT, Japan, and by the MEXT HPCI Strategic Programs for Innovative Research (SPIRE) and the Computational Materials Science Initiative (CMSI).
\section{Introduction} Silicon is the second most abundant material in the earth's crust. The semiconducting Si I phase (cubic diamond lattice, $Fd\bar{3}m$ space group) is extensively used in microelectronics, integrated circuits, photovoltaics, and MEMS/NEMS technologies. Single-crystal Si is also used in high-power lasers. Polycrystalline Si is widely used in solar panels~\citep{heath2020research}, thin-film transistors~\citep{chelikowsky2004introduction}, and very large-scale integration (VLSI) manufacturing. It also has low toxicity and high stability. Due to high demands, the recent CHIPS and Science Act will provide new funding to boost the research and manufacturing of semiconductors in the US. Under pressures of 10-16 GPa, semiconducting Si I transforms to the metallic phase Si II ($\beta$-tin structure, $I4_1/amd$ space group). Si I is very strong and brittle; its bulk hardness of 12 GPa is determined by the Si I$\rightarrow$Si II phase transformation (PT) rather than dislocational plasticity \citep{Domnich-2004,Kiran2015NanoindentationGermanium}. Stresses exceed 10 GPa during machining (turning, polishing, scratching, etc.) of single and polycrystalline Si \citep{Goel2015DiamondSimulation}; such loads cause plastic flow, the Si I$\rightarrow$Si II transformation, and some other PTs, e.g., amorphization. High-pressure torsion of Si at 24 GPa is used to produce nanostructured metastable phases \citep{Ikoma2012PhaseTorsion,Ikoma2014FabricationTorsion}. Machining of strong brittle semiconducting Si I is accompanied by microcrack propagation inside the bulk. The PT from Si I to the ductile and weaker Si II is utilized to develop ductile machining regimes \cite{patten2019ductile}, which reduces forces, energy, and damage. This may also eliminate the necessity of using chemical additives during machining, which brings definite environmental benefits by reducing pollution.
Since the transformation strain tensor that describes the cubic-to-tetragonal Si I$\rightarrow$Si II PT has large and very anisotropic principal components $\fg \varepsilon_{t}=\{0.1753;0.1753; -0.447\} $ \citep{malyushitskaya1999mechanisms,Levitas2017LatticeCriterion,Levitas2017Triaxial-Stress-InducedPhases}, it is clear from thermodynamics that the deviatoric part of the stress tensor should strongly affect this PT. Both pressure- and stress-induced PTs start at pre-existing defects (different dislocation configurations, grain boundaries), which represent stress concentrators. However, in many applications, like turning, polishing, scratching, friction, high-pressure torsion, and ball milling, PTs occur during plastic deformation. According to the classification \citep{Levitas-PRB-2004,levitas2018high,Levitas2019High-pressureAnvils}, such PTs are called plastic strain-induced PTs under high pressure, and they occur at defects permanently generated during plastic flow. Strain-induced PTs require completely different thermodynamic, kinetic, and experimental treatments than pressure- and stress-induced PTs. There are numerous very strong effects of plastic deformation on PTs, summarized in \cite{Blank2013,Bridgman1935EffectsPressure,edalati2016review,Gao2019,Levitas-PRB-2004,levitas2018high,Levitas2019High-pressureAnvils}; one of the most important is a drastic reduction in PT pressure. Thus, plastic strain-induced PTs from graphite to hexagonal and cubic diamonds were obtained at 0.4 and 0.7 GPa, 50 and 100 times lower than under hydrostatic loading, respectively, and well below the phase equilibrium pressure of 2.45 GPa \citep{Gao2019}. About an order of magnitude reduction in PT pressure was reported for the PT from rhombohedral to cubic BN \citep{Levitas2002Low-pressureTheory}, hexagonal to wurtzitic BN \citep{ji2012shear}, and $\alpha$ to $\omega$ Zr \citep{Pandey-Levitas-2020,levitas2022laws}.
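To illustrate the thermodynamic argument, the sketch below compares the transformation work per unit volume, $w=\fg\sigma{\fg :}\fg\varepsilon_{t}$, for hydrostatic and uniaxial compression of the same magnitude (the 10 GPa value is illustrative): because ${\rm tr}\,\fg\varepsilon_{t}\simeq-0.096$ is small while the largest principal component is $-0.447$, a nonhydrostatic stress of the same magnitude produces a several-times-larger driving force.

```python
# Principal transformation strains for the cubic -> tetragonal Si I -> Si II PT.
eps_t = (0.1753, 0.1753, -0.447)

def transformation_work(sigma):
    # w = sigma : eps_t for coaxial principal stresses [GPa]; compression is negative.
    return sum(s, e) if False else sum(s * e for s, e in zip(sigma, eps_t))

p = 10.0  # GPa, illustrative magnitude
w_hydro = transformation_work((-p, -p, -p))    # hydrostatic: w = -p * tr(eps_t)
w_uniax = transformation_work((0.0, 0.0, -p))  # uniaxial compression along the c axis
print(w_hydro, w_uniax)  # ~0.96 GPa vs ~4.47 GPa
```

The same 10 GPa magnitude thus does roughly 4.6 times more transformation work when applied uniaxially along the tetragonal axis than hydrostatically, consistent with the strong sensitivity of this PT to deviatoric stresses.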
The effect of plastic straining on PTs in Si is also very strong but more complex. Thus, under compression and shear in a rotational diamond anvil cell \citep{aleksandrova1993phase}, Si II was obtained at 5.2 GPa, although not directly from Si I but via Si III. However, in these experiments, only optical, pressure, and electric resistivity measurements were utilized, without in situ x-ray diffraction. In our recent in situ x-ray diffraction experiments \citep{Sorb2022}, the PT pressure for the direct Si I$\rightarrow$Si II PT was reduced from 13.5 (hydrostatic loading) to 2.5 GPa (plastic straining) for micron size Si particles and from 16.2 GPa to 0.4 GPa for 100 nm Si nanoparticles, i.e., by a factor of 40.5 (and a factor of 26.3 below the phase equilibrium pressure of 10.5 GPa \citep{Voronin-etal-2003}). To understand the effect of the stress tensor and plastic strain on the Si I$\rightarrow$Si II PT, various techniques at multiple scales are used. With first principle simulations, the lattice instability for Si I under two-parametric loadings was studied in \cite{Pokluda-etal-2015,Umeno-Cerny-PRB-08,Telyatniketal-16,Cernyetal-JPCM-13}. Although it is not specified, it is related to the Si I$\rightarrow$Si II PT. The stress-strain behavior and elastic instabilities leading to the Si I$\rightarrow$Si II PT under all six components of the stress tensor were determined in \cite{zarkevich2018lattice,Chen-etal-NPJ-2020}. While this seems to be impossible due to the large number of combinations, the following solution was found. First, the analytical expression for the crystal lattice instability criterion was formulated within the phase-field approach (PFA) \citep{Levitas2013Phase-fieldStrains,Levitas2017LatticeCriterion,Levitas2017Triaxial-Stress-InducedPhases}. Then it was confirmed and quantified with first principle simulations in \cite{zarkevich2018lattice}.
A similar approach was realized earlier with molecular dynamics simulations \citep{Levitas2017LatticeCriterion,Levitas2017Triaxial-Stress-InducedPhases} under the action of three normal stresses. Molecular dynamics simulations of PTs in Si during various loadings, nanoindentation, scratching, and surface processing were performed in \cite{valentini2007phase,chrobak2011deconfinement,Chen_2019,Zhang2019MolecularSilicon,Kiran2015NanoindentationGermanium,Chen2022NontrivialTransformation}. A nanoscale PFA for the Si I$\rightarrow$Si II PT in a single crystal, with corresponding simulations, was developed in \cite{Levitas2018PhaseTheory,Babaei_2018,Babaei-Levitas-ActaMat-2019,Babaei2020Stress-MeasureTransformations,Babaei2019AlgorithmicStrains}. It was calibrated by the results of molecular dynamics simulations in \cite{Levitas2017LatticeCriterion,Levitas2017Triaxial-Stress-InducedPhases}. The effect of a single dislocation on this PT under uniaxial compression is modeled in \cite{Babaei-Levitas-ActaMat-2019}. The main general problem with the nanoscale PFA is that the width of the martensitic interface is $\sim$1 nm and one needs at least 3-5 finite elements within the interface \citep{Levitas2011Phase-fieldEnergy}, making the problem computationally expensive for large samples. Hence this model can be used to treat nano-sized samples only. Here, we consider a scale-free PFA for modeling discrete martensitic microstructure. It was developed for small strains in \cite{Levitas-etal-2004,Idesman-etal-2005} and updated and applied to NiTi shape memory alloy in \cite{Esfahani-etal-2018}. The only generalization of the scale-free model to finite strains is presented in \cite{babaei2020finite}. This model was applied to simulations of the Si I$\rightarrow$Si II PT in a single crystal.
The main differences between the scale-free and nanoscale PFAs are: \begin{enumerate} \item The energy term that includes gradients of the order parameters and determines the phase interface widths and the interface energies is excluded. This makes the model scale-free. The interface width becomes equal to a single finite element, which is much more computationally economical than in the nanoscale approach, where 3-5 elements are usually required to reproduce the analytical solution for an interface \citep{Levitas2011Phase-fieldEnergy}. However, this leads to two problems. (a) The solution is mesh dependent. However, detailed computational experiments in \cite{Levitas-etal-2004,Idesman-etal-2005,babaei2020finite} show that the solution becomes practically mesh-independent once the mesh size is 80 times smaller than the sample size. (b) Since the volume fraction of martensite varies from 0 to 1 within the one-element thick interface, for large transformation strains there are large strain gradients within the interface, which often leads to divergence of the FEM solution. \item The interfaces between individual martensitic variants are not resolved. Each of these variants has a width of $ d\simeq10\;\text{nm} $, i.e., there are thousands of interfaces at the microscale, making the problem computationally impractical. Here, martensite is considered a mixture of martensitic variants with corresponding volume fractions. \item Since there is no need to reproduce the atomic level energy landscape versus the order parameters, the linear mixture rule is applied for all material properties. This is in contrast to the higher-order polynomials in the order parameters for the nanoscale PFA. Also, the athermal dissipative threshold for interface propagation (interface friction) can be easily introduced in the scale-free model, while this is a problem for the nanoscale model.
\item The volume fraction of the martensite is the order parameter, i.e., it is responsible for the material instability and the transformation strain localization at the interface between austenite and martensite. The volume fractions of the individual martensitic variants are only internal variables and do not produce any instabilities. That is how we eliminate interfaces between martensitic variants. \end{enumerate} All the above features allow us to economically model multivariant martensitic PTs in a sample of arbitrary size. The next natural step is to simulate PTs in Si and the discrete martensitic microstructure in a polycrystalline sample, which has not been done yet. This is also an important step in studying plastic strain-induced PTs. The main hypothesis \citep{Levitas-PRB-2004} is that they initiate at the tips of dislocation pileups against grain boundaries, which cause a strong concentration of all stress components, proportional to the number of dislocations in the pileup. These stresses drastically reduce the external pressure required for nucleation and further growth. The nanoscale PFA allowed us to treat a bicrystal and to qualitatively prove this hypothesis \citep{Levitas-Mahdi-Nanoscale-14,Javanbakht2018NanoscaleStudy,Javanbakht2016PhaseShear}, however, for a 2D and small-strain formulation. Our scale-free PFA \citep{Levitas2018Scale-FreeMicrostructure,Esfahani-etal-2020} has been applied to 2D polycrystalline samples with several dozen grains, also for a small-strain formulation. In this work, we extend the modeling presented in \cite{babaei2020finite} to the Si I$\rightarrow$Si II PT in 3D polycrystalline samples with up to 1000 grains. The main challenge is to reach convergence of the computational procedure in the presence of strong nonlinearities, large transformation strain localization with large gradients within a single-element diffuse interface, and potential elastic instabilities.
For a single crystal, just two loadings, uniaxial and hydrostatic, were considered in \cite{babaei2020finite}. In a polycrystal, however, each grain is subjected to a different complex and heterogeneous loading. While we use the simplest expression (\ref{elastic_energy}) for the elastic energy, quadratic in the elastic Lagrangian strain, it is shown in \cite{Levitas_2017} that even simple uniaxial compression leads to elastic instability. This instability can cause additional strain localization and divergence of the solution. Due to the variety of heterogeneous complex loadings, different for different grains, the chances of various elastic instabilities and divergence are very high. That is why several computational parameters have been varied to reach convergence of the solution. Periodic boundary conditions along the lateral surfaces played an important part in avoiding divergence. Another problem is to adequately and quantitatively present the evolution of the local and averaged volume fractions of the martensitic variants; a straightforward approach leads to contradictory results. The paper is organized as follows. In Section \ref{model}, the complete system of coupled PFA and nonlinear mechanics equations is presented. Material parameters for the model are given in Section \ref{Model parameters}. Problem formulations, results of simulations, and their analyses are presented in Section \ref{results}. Concluding remarks are summarized in Section \ref{Conclusion}. Vectors and tensors are designated with boldface symbols. We designate contractions of tensors ${\fg A}=\{A_{ij}\}$ and ${\fg B}=\{B_{ji}\}$ over one and two indices as ${\fg A}{\fg \cdot}{\fg B}=\{A_{ij}\,B_{jk}\}$ and ${\fg A}{\fg :}{\fg B}=A_{ij}\,B_{ji}$. The transpose of ${\fg A}$ is ${\fg A}^{ T}$; ${\fg I}$ is the unit tensor; $\fg \nabla_0$ is the gradient operator in the undeformed state.
\section{Model description}\label{model} Here, we extend the microscale model for multivariant martensitic PTs developed in \cite{babaei2020finite} to polycrystalline elastic materials. \subsection{Kinematics} Let us consider a polycrystalline Si I aggregate in the undeformed stress-free configuration $ \Omega_0 $. The orientation of each grain is characterized by the rotation tensor ${\bs R_g}$ in the undeformed configuration, which rotates the local cubic crystallographic axes of the grain to the global coordinate system. The tensors ${\bs R_g}$ do not evolve during loading and PTs. The deformation gradient $\bs F$ is multiplicatively split into the elastic $\bs F_e$ and the transformational $\bs F_t$ parts: \begin{equation} {\bs F} := \bs \nabla_0 {\bs r}= {\bs F}_e{\bs{\cdot}}{\bs F}_t; \qquad {\bs F}_e={\bs R}_e\cdot{\bs U}_e; \qquad {\bs F}_t={\bs F}_t^T. \label{Fdecom} \end{equation} Here, ${\bs r}$ is the position vector in the current deformed configuration $ \Omega $; ${\bs F}_t$ is defined as ${\bs F}$ after complete local stress release (producing the intermediate configuration $\Omega_t$) and is considered to be rotation-free; ${\bs U}_e$ is the symmetric elastic right stretch tensor and ${\bs R}_e$ is the orthogonal lattice rotation tensor during loading and PT, which determines texture evolution. The transformation deformation gradient is defined using the mixture rule for $m$ variants as \begin{equation} \bs{F}_t= \bs{I}+ \bs{\varepsilon}_{t}= \bs{I}+\sum_{i=1}^m {\bs R}_g\cdot \tilde{\bs{\varepsilon}}_{ti} \cdot {\bs R}_g^T c_i, \label{tr-str} \end{equation} where $\bs {\varepsilon}_{t}$ is the transformation strain, $\tilde{\bs{\varepsilon}}_{ti}$ is the transformation strain of the $i^{th}$ martensitic variant in the local crystallographic basis of the grain, and $c_i$ is the volume fraction of the $i^{th}$ variant in terms of volumes in the reference configuration.
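The mixture rule (\ref{tr-str}) can be illustrated with a short numerical sketch (an illustrative Python/NumPy script, not the actual deal.II implementation; the function names are ours). It also shows that a fully transformed variant in a rotated grain is indistinguishable, in the global frame, from another variant in an unrotated grain:

```python
import numpy as np

def rotation_about_axis(axis, angle):
    """Rotation matrix about coordinate axis 0, 1, or 2 by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.eye(3)
    i, j = [k for k in range(3) if k != axis]
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = -s, s
    return R

# Transformation strains of the three Si II variants in the local cubic axes
# (numerical values from the "Model parameters" section)
eps_t = [np.diag(d) for d in ([0.1753, 0.1753, -0.4470],
                              [0.1753, -0.4470, 0.1753],
                              [-0.4470, 0.1753, 0.1753])]

def F_t(c, R_g):
    """Transformation deformation gradient, Eq. (tr-str):
    F_t = I + sum_i c_i * R_g . eps_ti . R_g^T."""
    return np.eye(3) + sum(ci * R_g @ e @ R_g.T for ci, e in zip(c, eps_t))

# Variant 1 in a grain rotated by 90 deg about axis 1 looks, in the global
# frame, exactly like variant 2 in an unrotated grain:
Rg = rotation_about_axis(0, np.pi / 2)
assert np.allclose(F_t([1, 0, 0], Rg), F_t([0, 1, 0], np.eye(3)))
```

This equivalence of rotated variants is at the heart of the averaging paradox for $\bar{c}_i$ discussed in \Cref{Strain-controlled}.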
The total, elastic, and transformational Lagrangian strains are defined as \begin{equation} \bs{E}=\frac{1}{2}(\bs{F}^T{\bs{\cdot}}\bs{F}-\bs{I});\quad \bs{E}_e=\frac{1}{2}(\bs{F}_e^T{\bs{\cdot}}\bs{F}_e-\bs{I});\quad \bs{E}_t=\frac{1}{2}(\bs{F}_t^T{\bs{\cdot}}\bs{F}_t-\bs{I}). \qquad \label{Esall} \end{equation} Utilization of the multiplicative decomposition \cref{Fdecom}, together with the symmetry of ${\bs F}_t$, results in the following relationship: \begin{equation} \bs{E}_e = \bs{F}_t^{-1}{\bs{\cdot}} (\bs{E}-\bs{E}_t){\bs{\cdot}} \bs{F}_t^{-1}. \label{Erel} \end{equation} We will also need the ratios of elemental volumes $dV$ and mass densities $\rho$ in the different configurations, which are described by the Jacobian determinants: \begin{equation} J=\frac{d V}{dV_0}=\frac{\rho_0}{\rho}=\det{\bs F}; \quad J_t=\frac{d V_t}{dV_0}=\frac{\rho_0}{\rho_t}=\det{\bs F_t}; \quad J_e=\frac{d V}{dV_t}=\frac{\rho_t}{\rho}=\det{\bs F_e}. \end{equation} \subsection{Helmholtz free energy} In defining the Helmholtz free energy $ \psi $ per unit undeformed volume in $\Omega_0$ of the mixture of austenite and $ m $ martensitic variants, the contributions from the elastic $ \psi^{e} $, thermal $ \psi^{\theta} $, and interaction $ \psi^{in} $ energies are given by \begin{equation} \psi(\bs{F}_e,c_i,\theta)= J_t \psi^{e}(\bs{F}_e,c_i)+\psi^{\theta}(\theta,c_i)+\psi^{in}(c_i). \end{equation} Here, $ \psi^{e} $ is defined in $\Omega_t$, and the Jacobian $J_t$ maps it into $\Omega_0$; $ \psi^{\theta} $ includes the thermal driving force for the PT and depends on the temperature $ \theta $; $ \psi^{in} =\mathcal{A} c c_0 \geq 0$ includes the interaction between austenite and martensite, the energy of internal stresses, as well as the austenite-martensite phase interface energy; the interaction between martensitic variants is neglected to avoid the formation of variant-variant interfaces.
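The kinematic relations (\ref{Fdecom})-(\ref{Erel}) and the multiplicative property of the Jacobians, $J=J_e J_t$, can be verified numerically. The following sketch (illustrative Python/NumPy with arbitrarily chosen small strains, not part of the actual implementation) checks Eq. (\ref{Erel}) for a symmetric (rotation-free) $\bs F_t$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A symmetric (rotation-free) transformational part and an arbitrary elastic part
A = rng.normal(size=(3, 3))
F_t = np.eye(3) + 0.05 * (A + A.T)          # F_t = F_t^T, as in Eq. (Fdecom)
F_e = np.eye(3) + 0.05 * rng.normal(size=(3, 3))
F = F_e @ F_t                               # multiplicative decomposition

I = np.eye(3)
E   = 0.5 * (F.T @ F - I)                   # total Lagrangian strain
E_e = 0.5 * (F_e.T @ F_e - I)               # elastic Lagrangian strain
E_t = 0.5 * (F_t.T @ F_t - I)               # transformational Lagrangian strain

Ft_inv = np.linalg.inv(F_t)
# Eq. (Erel): E_e = F_t^{-1} . (E - E_t) . F_t^{-1}  (using F_t = F_t^T)
assert np.allclose(E_e, Ft_inv @ (E - E_t) @ Ft_inv)
# Jacobians multiply: J = J_e * J_t
assert np.isclose(np.linalg.det(F), np.linalg.det(F_e) * np.linalg.det(F_t))
```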
Positive $\mathcal{A}$ results in a negative tangent modulus of the equilibrium stress-strain curve during PT, which leads to local mechanical instability and the formation of localized transformation bands/regions of the product phase. In such a way, a discrete martensitic structure is reproduced, similar to the nanoscale PFA. The elastic energy is expressed as \begin{equation}\label{elastic_energy} \psi^e=\frac{1}{2}\fg {E}_e \fg :\tilde{\fg C} \fg : \fg {E}_e =\frac{1}{2}\tilde{C}^{ijkl} {E}_e^{ij} {E}_e^{kl}; \qquad \tilde{C}^{ijkl}=\sum_{p=0}^{m} \tilde{C}^{ijkl}_pc_p, \end{equation} \noindent where the components of the fourth-rank elastic moduli tensor $\tilde{C}^{ijkl}_p$ in the global coordinate system are defined in terms of the components ${C}^{rqnm}_p$ in the local crystallographic system of each grain and the grain rotations $R_g^{ml}$ as \begin{equation} \tilde{C}^{ijkl}_p=R_g^{lm}R_g^{kn}R_g^{jq}R_g^{ir}{C}^{rqnm}_p. \label{C_rotation} \end{equation} Since the thermal energy of all martensitic variants is the same, $\psi_i^{\theta}=\psi_j^{\theta}=\psi_M^{\theta}$, the thermal energy of the mixture is \begin{equation}\label{thermal_energy} \psi^\theta=\sum_{i=0}^{m}c_i\psi_i^{\theta}(\theta)=c_0\psi_A^\theta(\theta)+c\psi_M^\theta(\theta); \qquad c = \sum_{i=1}^{m}c_i; \qquad c_0 = 1-c, \end{equation} where $ \psi_A^\theta $ is the thermal energy of austenite, and $c$ and $c_0$ are the volume fractions of martensite and austenite, respectively. \subsection{Dissipation inequality} \par Planck's inequality for isothermal processes is \begin{equation}\label{general_dissipation} D=\bs{P}^T \bs{:}\dot{\bs{F}}-\dot{\psi}\geq 0, \end{equation} where $ D $ is the dissipation rate per unit undeformed volume; $ \bs{P} $ is the first Piola-Kirchhoff stress.
After traditional thermodynamic manipulations, $D$ can be expressed as the sum of products of the thermodynamic driving forces $X_{i0}$ and $X_{ij}$ for the $ A\rightarrow M_i $ and $ M_j\rightarrow M_i $ transformations, respectively, and the conjugate rates: \begin{align}\label{dissipation} \nonumber &D=\sum_{i=1}^{m}X_{i0}\dot{c}_{i0}+\sum_{j=1}^{m-1}\sum_{i=j+1}^{m}X_{ij}\dot{c}_{ij} \geq 0;\\\nonumber &X_{i0}=W_{i0}-\frac{J_t}{2}\bs{E}_e\bs{:}(\bar{\bs{C}}_i-\bar{\bs{C}}_0)\bs{:}\bs{E}_e- \frac{J_t}{2}\left(\bs{E}_e\bs{:}\bar{\bs{C}}(c)\bs{:}\bs{E}_e \right) {\fg F}_{t}^{-1} \bs{:} \bs{\varepsilon}_{ti} -\Delta \psi^{\theta}-\mathcal{A}(1-2c); \\\nonumber &X_{ij}=W_{ij}-\frac{J_t}{2}\bs{E}_e\bs{:}(\bar{\bs{C}}_i-\bar{\bs{C}}_j)\bs{:}\bs{E}_e -\frac{J_t}{2}\left( \bs{E}_e\bs{:}\bar{\bs{C}}(c)\bs{:}\bs{E}_e \right) {\fg F}_{t}^{-1} \bs{:} (\bs{\varepsilon}_{ti}-\bs{\varepsilon}_{tj}) ; \\ &W_{i0} =\bs{P}^T\bs{\cdot}\bs{F}_e\bs{:}\bs{\varepsilon}_{ti}= J{\fg F}^T_e {\fg \cdot} \fg \sigma \bs{\cdot} {\fg F}_{e}^{T-1} \cdot {\fg F}_{t}^{-1} \bs{:}\bs{\varepsilon}_{ti}; \quad W_{ij}= \bs{P}^T\bs{\cdot}\bs{F}_e\bs{:}(\bs{\varepsilon}_{ti}-\bs{\varepsilon}_{tj})= J {\fg F}^T_e {\fg \cdot} \fg \sigma \bs{\cdot} {\fg F}_{e}^{T-1} \cdot {\fg F}_{t}^{-1} \bs{:}(\bs{\varepsilon}_{ti}-\bs{\varepsilon}_{tj}). \end{align} Here, $\dot{c}_{i0}$ and $\dot{c}_{ij}$ are the rates of change of the volume fraction of variant $i$ due to its exchange with the austenite and with variant $j$, respectively; $W_{i0}$ is the transformation work for the austenite to martensite PT, $W_{ij}$ is the transformation work for the variant $j$ to variant $i$ transformation, $\Delta \psi^{\theta}$ is the jump in the thermal energy during transformation, and $\fg \sigma=J^{-1} \bs{P} \bs{\cdot}\bs{F}^T $ is the Cauchy (true) stress tensor.
\subsection{Kinetic equations} The kinetic equations are formulated as follows for the $ A\leftrightarrow M_i$ PTs \begin{equation}\label{kinetic_i0} \begin{cases} \begin{split} \dot{c}_{i0}=\lambda_{i0}(X_{i0}-k_{i-0}) \quad \text{if}\; &\{X_{i0}-k_{i-0}(c_i, \sigma_i)>0 \;\&\; c_i<1 \;\&\; c_0>0\} \; \qquad A \rightarrow M_i\\ \dot{c}_{i0}=\lambda_{i0}(X_{i0}+k_{i-0}) \quad \text{if}\; &\{X_{i0}+k_{i-0}(c_i, \sigma_i)<0 \;\&\; c_i>0 \;\&\; c_0<1\}\qquad \; M_i \rightarrow A \end{split}\\ \dot{c}_{i0}=0 \qquad \qquad \qquad \; \text{otherwise;} \qquad i=1,2,...,m, \end{cases} \end{equation} and for the $ M_j\leftrightarrow M_i $ PTs \begin{equation}\label{kinetic_ij} \begin{cases} \begin{split} \dot{c}_{ij}=\lambda_{ij}X_{ij} \quad \text{if}\; &\{X_{ij}>0 \;\&\; c_i<1 \;\&\; c_j>0\}\qquad j \rightarrow i\\ \text{or}\; &\{X_{ij}<0 \;\&\; c_i>0 \;\&\; c_j<1\} \qquad i \rightarrow j \end{split}\\ \dot{c}_{ij}=0 \qquad \qquad \qquad \qquad \qquad \quad \text{otherwise;} \qquad i,j = 1,2,...,m, \end{cases} \end{equation} where $k_{i-0}$ is the athermal threshold and $ \lambda_{i0} $ and $ \lambda_{ij} $ are the kinetic coefficients. We neglect the athermal threshold for the $ M_j\leftrightarrow M_i $ PTs. The strict inequalities for the volume fractions of phases in Eqs. (\ref{kinetic_i0})-(\ref{kinetic_ij}) ensure that the PT from any phase does not occur if the parent phase does not exist or if the product phase is complete.
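The logic of Eq. (\ref{kinetic_i0}) can be sketched as a single explicit-Euler update step (a minimal illustrative Python/NumPy sketch, not the actual deal.II implementation; the variant-variant exchange of Eq. (\ref{kinetic_ij}), the stress dependence of the threshold, and the renormalization safeguard are simplifications of ours):

```python
import numpy as np

def update_volume_fractions(c, X, k, lam, dt):
    """One explicit-Euler step of the kinetic equations for A <-> M_i
    (Eq. (kinetic_i0)); the variant-variant exchange of Eq. (kinetic_ij)
    is omitted for brevity.
    c   : volume fractions of the m martensitic variants
    X   : thermodynamic driving forces X_i0
    k   : athermal thresholds k_{i-0}
    lam : kinetic coefficient, dt : time step
    """
    c = np.asarray(c, dtype=float)
    c0 = 1.0 - c.sum()                                  # austenite fraction
    dc = np.zeros_like(c)
    for i in range(len(c)):
        if X[i] - k[i] > 0 and c[i] < 1 and c0 > 0:     # A -> M_i
            dc[i] = lam * (X[i] - k[i]) * dt
        elif X[i] + k[i] < 0 and c[i] > 0 and c0 < 1:   # M_i -> A
            dc[i] = lam * (X[i] + k[i]) * dt
    c_new = np.clip(c + dc, 0.0, 1.0)
    if c_new.sum() > 1.0:              # crude safeguard: keep c0 >= 0
        c_new /= c_new.sum()
    return c_new
```

If the driving force lies inside the hysteresis band $[-k_{i-0},\,k_{i-0}]$, the volume fractions remain frozen, which is how the athermal threshold (interface friction) manifests itself.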
\subsection{Macroscopic parameters}\label{Macroscopic} The macroscopic Cauchy stress, the first Piola-Kirchhoff stress, the deformation gradient, the transformation strain, and the volume fractions of Si II and of each martensitic variant, averaged over the sample, are defined as \cite{Hill1984OnStrain,Levitas1996SomeSurfaces,Petryk1998MacroscopicTransformation} \begin{eqnarray} \bar{\fg \sigma}=\frac{1}{V}\int_V \fg \sigma dV; \qquad \bar{\fg P}=\frac{1}{V_0}\int_{V_0} {\fg P} dV_0; \label{av-stress} \end{eqnarray} \begin{eqnarray} \bar{\fg F}=\frac{1}{V_0}\int_{V_0} \fg F dV_0; \qquad \bar{\fg F}_t\simeq\frac{1}{V_0}\int_{V_0} \fg F_t dV_0; \qquad \bar{\fg \varepsilon}_t \simeq \frac{1}{V_0}\int_{V_0} {\fg \varepsilon}_t dV_0; \label{av-Fp} \end{eqnarray} \begin{eqnarray} \bar{c}=\frac{1}{V_0}\int_{V_0} c dV_0; \qquad \bar{c}_i=\frac{1}{V_0}\int_{V_0} c_i dV_0. \label{av-c} \end{eqnarray} For the Cauchy stress, the first Piola-Kirchhoff stress, the deformation gradient, and the volume fractions $c$ and $c_i$, the averaging equations are exact; for the transformation deformation gradient and strain, they are approximate because, in the unloaded stress-free state, they are generally not compatible due to residual elastic strains. We use Eq. (\ref{av-Fp}) because the exact equation is quite bulky and is formulated for $\dot{\fg F}_t$ instead of ${\fg F}_t$. Eq. (\ref{av-c}) for $c_i$, while formally correct, is misleading (see \Cref{Strain-controlled}). Other macroscopic parameters are defined via $\bar{\fg F}$ and $\bar{\fg P}$ by equations similar to the corresponding local ones: \begin{eqnarray} \bar{\fg \sigma}=(\det\bar{\fg F})^{-1} \bar{\bs{P}} \bs{\cdot}\bar{\bs{F}}^T; \qquad \bar{\bs{E}}=\frac{1}{2}(\bar{\bs{F}}^T{\bs{\cdot}}\bar{\bs{F}}-\bs{I}). \label{av-sg-E} \end{eqnarray} Eq. (\ref{av-sg-E}) for the Cauchy stress is used instead of Eq.
(\ref{av-stress}) because integration over the fixed parallelepiped with a regular cubic mesh is much simpler and faster than integration over the deformed volume. \section{Model parameters}\label{Model parameters} The material parameters required for the implementation of the model presented in \cref{model}, the same as in \cite{babaei2020finite}, are listed in \cref{tab1}. The transformation strain tensors are taken from the MD simulations in {\cite{Levitas2017LatticeCriterion,Levitas2017Triaxial-Stress-InducedPhases}} and, in the local crystallographic axes, are given for all three variants by \begin{align} \scalebox{0.9}{$ \tilde{\fg \varepsilon}_{t1}= \begin{pmatrix} 0.1753 & 0 & 0 \\ 0 & 0.1753 & 0 \\ 0 & 0 & -0.4470 \end{pmatrix}; \quad \tilde{\fg \varepsilon}_{t2}= \begin{pmatrix} 0.1753 & 0 & 0 \\ 0 & -0.4470 & 0 \\ 0 & 0 & 0.1753 \end{pmatrix}; \quad \tilde{\fg \varepsilon}_{t3}= \begin{pmatrix} -0.4470 & 0 & 0 \\ 0 & 0.1753 & 0 \\ 0 & 0 &0.1753 \end{pmatrix}. $} \end{align} {The elastic constants for both phases are collected from \cite{Schall-etal-2008,Chen-etal-NPJ-2020}}. The constants $C_0^{ij}$ in \cref{tab1} denote the independent elastic constants of the austenite, and ${C}_1^{ij}$ denote those of the first martensitic variant, both in the local crystallographic axes.
The components of the tensor of elastic moduli $C^{ijkl}$ can be calculated using \begin{eqnarray} \nonumber C^{ijkl}=\sum_{n=1}^{3}[\lambda^n\delta^{in}\delta^{jn}\delta^{kn}\delta^{ln}+\mu^n(\delta^{in}\delta^{jn} \delta^{kl}+\delta^{ij}\delta^{kn}\delta^{ln})\\ +\nu^n(\delta^{in}\delta^{jk}\delta^{ln}+\delta^{jn}\delta^{ik}\delta^{ln}+\delta^{in}\delta^{jl}\delta^{kn} +\delta^{jn}\delta^{il}\delta^{kn})], \end{eqnarray} where $\lambda^n$, $\mu^n$, and $\nu^n$ for cubic and tetragonal crystal lattices are given by \cref{cubic,tetragonal}, respectively: \begin{align}{\label{cubic}} \nonumber &\lambda^1=\lambda^2=\lambda^3=C^{11}-C^{12}-2C^{44},\\\nonumber &2\mu^1=2\mu^2=2\mu^3=C^{12},\\ &2\nu^1=2\nu^2=2\nu^3=C^{44}. \end{align} \begin{align}\label{tetragonal} \nonumber &\lambda^1=\lambda^2=C^{11}-(C^{12}+2C^{66}),\\\nonumber &\lambda^3=C^{33}+C^{12}+2C^{66}-2(C^{13}+2C^{44}),\\\nonumber &2\mu^1=2\mu^2=C^{12}, \qquad \;\;2\mu^3=2C^{13}-C^{12},\\ &2\nu^1=2\nu^2=C^{66},\qquad \;\; 2\nu^3=2C^{44}-C^{66}. \end{align} Under hydrostatic conditions, the phase equilibrium pressure is $p^{eq}_0= 10.5$ GPa {\cite{Voronin-etal-2003}} at $J_t=0.764$. The jump in the thermal energy is $\Delta\psi^\theta=-p^{eq}_0({J}_t-1)=2.47$ GPa, where the elastic strain and the change in the elastic moduli are neglected. Localization of strain is required to reproduce a discrete microstructure. As noted in \cite{babaei2020finite}, to obtain strain localization, the strain rate should be commensurate with the rate of transformation. The kinetic coefficient $\lambda$ and the interaction parameter $\mathcal{A}$ are selected to ensure that this condition is satisfied. It is found with first-principles and molecular dynamics simulations \cite{zarkevich2018lattice,Levitas2017LatticeCriterion,Levitas2017Triaxial-Stress-InducedPhases} that the criteria for the Si I$\leftrightarrow$Si II PTs are linear in the Cauchy stress components normal to the cubic faces.
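The assembly of $C^{ijkl}$ from \cref{cubic} and its rotation to the global frame via Eq. (\ref{C_rotation}) can be checked with a short script (an illustrative Python/NumPy sketch with function names of our choosing; the numerical constants are those of Si I from \cref{tab1} and the text):

```python
import numpy as np

def elastic_tensor(lmbda, mu, nu):
    """C^{ijkl} from the per-axis constants lambda^n, mu^n, nu^n (length-3 arrays)."""
    d = np.eye(3)
    C = np.zeros((3, 3, 3, 3))
    for n in range(3):
        e = d[n]
        C += lmbda[n] * np.einsum('i,j,k,l->ijkl', e, e, e, e)
        C += mu[n] * (np.einsum('i,j,kl->ijkl', e, e, d) +
                      np.einsum('ij,k,l->ijkl', d, e, e))
        C += nu[n] * (np.einsum('i,jk,l->ijkl', e, d, e) +
                      np.einsum('j,ik,l->ijkl', e, d, e) +
                      np.einsum('i,jl,k->ijkl', e, d, e) +
                      np.einsum('j,il,k->ijkl', e, d, e))
    return C

def rotate(C, R):
    """Rotation to the global frame, Eq. (C_rotation)."""
    return np.einsum('ir,jq,kn,lm,rqnm->ijkl', R, R, R, R, C)

# Cubic Si I constants from Table 1 (GPa)
C11, C12, C44 = 167.5, 65.0, 80.1
lam = np.full(3, C11 - C12 - 2 * C44)
C = elastic_tensor(lam, np.full(3, C12 / 2), np.full(3, C44 / 2))
assert np.isclose(C[0, 0, 0, 0], C11)       # C^1111 = C11
assert np.isclose(C[0, 0, 1, 1], C12)       # C^1122 = C12
assert np.isclose(C[0, 1, 0, 1], C44)       # C^1212 = C44
# A cubic tensor is invariant under a 90-degree rotation about a cube axis
R90 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
assert np.allclose(rotate(C, R90), C)

# Check of the thermal-energy jump quoted in the text: -p0_eq * (J_t - 1)
dpsi_theta = -10.5 * (0.764 - 1.0)          # ~ 2.47 GPa
```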
We assume the same at the microscale, where the role of defects is effectively included. For cubic-to-tetragonal PTs, the PT criteria are given by~\citep{Babaei-Levitas-ActaMat-2019,babaei2020finite} \begin{align}\label{experimental_PT} \nonumber &A\rightarrow M_i:\qquad a^d(\sigma_{1}+\sigma_{2})+b^d \sigma_{3}>c^d;\\ &M_i\rightarrow A:\qquad a^r(\sigma_{1}+\sigma_{2})+b^r \sigma_{3}<c^r, \end{align} where $ a^d,b^d,c^d,a^r,b^r$, and $ c^r $ are constants determined empirically. To make the thermodynamic PT conditions consistent with the experimental ones, the athermal threshold $k_{i-0}$ is considered to be stress and volume fraction dependent, as described in detail in~\citep{babaei2020finite}, and calculated based on the relations \begin{equation}\label{k_i0} k_{i-0}=J[a_1(c_i) (\sigma_{1}+\sigma_{2})+a_3(c_i)\sigma_{3}]; \qquad a_k(c_i)=d_k+(r_k-d_k)c_i; \;\;k=1\; \rm{ and} \; 3, \end{equation} where $d_k$ and $r_k$ are fitting parameters, given in \cref{tab1}. They are determined by substituting the expressions for $ k_{i-0}$ from \cref{k_i0} into the transformation criteria in \cref{kinetic_i0} with the driving forces from \cref{dissipation} and some empirical data. \begin{table}[h] \centering \caption{ Material parameters including the kinetic coefficient $\lambda \, \rm{(Pa \cdot s)}^{-1}$, dimensionless constants in the expression for the effective thresholds, as well as the interaction coefficient $\mathcal{A}$, the jump in the thermal energy $ \Delta \psi^{\theta} $, and the elastic constants, all in GPa.
} \begin{tabular}{c c c c c c c c c } \hline $\lambda$ & $\mathcal{A}$ & $\Delta\psi^\theta $ & $d_1$&$d_3$&$r_1$&$r_3$ \\ \hline 0.02 & 2 & 2.47 & 0.082 &0.111&-0.90&0.338 \\ \hline $ C_0^{11}$ & $C_0^{44}$ & $C_0^{12}$ & $C_1^{11}$ & $C_1^{33}$ & $C_1^{44}$ & $C_1^{66}$ & $C_1^{12}$ & $C_1^{13}$ \\ \hline 167.5 & 80.1 & 65.0 & 174.76 & 136.68 & 60.24 & 42.22 & 102.0 & 68.0\\ \hline \end{tabular} \label{tab1} \end{table} \begin{figure} \centering \resizebox{150mm}{!}{\includegraphics[trim={5mm 100mm 100mm 45mm},clip]{grainID.pdf}} \caption{Grain distributions generated with DREAM.3D for (a) 55 grains, showing the local orientations of individual grains (red=1, yellow=2, green=3), and (b) 910 grains. Each grain is randomly assigned a different orientation to make sure the sample as a whole is texture-free. Files with complete information about the orientation of each grain for both samples are given in the supplementary material.} \label{grainID} \end{figure} \section{Study of multivariant microstructure evolution}\label{results} A finite element implementation of the scale-free model is developed in the open-source FEM code deal.II~\citep{Bangerth2007Deal.IIALibrary} using 8-node 3D hexahedral elements with first-order interpolation and full integration. For such elements, the volume average of any parameter $a$ in the undeformed configuration is the sum of the values of $a$ at the quadrature points divided by the number of quadrature points. Microstructure evolution is studied for polycrystalline samples containing two different numbers of grains, 55 and 910, as shown in \cref{grainID}. The total numbers of finite elements and integration points were $\simeq$2.1 million and $\simeq$16.7 million, respectively, for both cases. If we consider the position vector $\fg r$ and the volume fractions $c_1$, $c_2$, and $c_3$ as the primary independent variables, then the total number of degrees of freedom is six times the number of the integration points.
A 3D unit cube sample is constructed in DREAM.3D~\citep{groeber2014dream} with the required number of grains. Each grain in the sample is assigned an orientation randomly so that the overall sample remains texture-free. Two types of compression are applied: strain-controlled and stress-controlled. For both cases, periodic boundary conditions are applied in directions 1 and 2 with zero averaged strains in these directions. Application of the periodic boundary conditions significantly simplified the elimination of divergence of the computational procedure in comparison with other conditions. Such conditions are realized when Si is clamped in the 1-2 plane, when a thin Si layer is attached to a rigid substrate, or within a shock wave. For the strain-controlled case, periodic boundary conditions are applied in direction 3 as well, and the sample is subjected to a uniaxial compressive averaged strain in direction 3. For the stress-controlled loading, on the external planes orthogonal to axis 3, the sample is subjected to a homogeneous compressive normal Cauchy stress in the deformed configuration with zero shear stresses, i.e., as under the action of a liquid. The stress-controlled loading ends at a constant ``pressure in liquid'' of 11 GPa. In total, four simulations are run with two different numbers of grains and two different loading conditions. \subsection{Strain-controlled loading}\label{Strain-controlled} For strain-controlled loading, the initial strain rate, in the elastic regime, is $1\times 10^{-2}\, \rm{s}^{-1}$; it is reduced to $5\times 10^{-5}\, \rm{s}^{-1}$ once the material starts to transform. While we can choose any strain rate, such low strain rates are chosen because they are not achievable in atomistic simulations. \Cref{55_cevol,910_cevol} show the evolution of martensite at three intermediate stages (25\%, 50\%, and 75\% of the simulation) and at the final stage of the simulation.
The first row shows the total volume fraction of martensite $c$, followed by those for the individual martensitic variants. The black lines in \cref{55_cevol,910_cevol} outline the grain boundaries for all the grains. Videos demonstrating the evolution of the volume fractions are provided in the supplementary material. The volume fractions of each martensitic variant $\bar{c}_i$ and of martensite $\bar{c}$ averaged over the sample based on Eq. (\ref{av-c}) vs. strain are shown in Fig. \ref{strain_vf}. Nucleation starts mostly at the grain boundaries and triple junctions, which represent stress concentrators. Nucleation starts with the dominant variant, which produces the maximum transformation work; the two other variants appear in the same regions to accommodate the transformation strain and reduce internal stresses. A similar proportion between the variants remains during further growth. In grains with different orientations, growth occurs either along the grain boundaries or inside the grain and is arrested either at grain boundaries or at other martensitic units. In each unit, full transformation to Si II occurs quickly, thus forming the desired discrete Si II microstructure. With increasing strain, both broadening of the existing Si II regions and the appearance of new nuclei occur. At the end of the loading, the PT is completed almost everywhere, with small residual Si I pockets. They are caused by internal stresses due to large transformation strains. For both numbers of grains, $\bar{c}$ saturates at 96\%. It can be clearly seen from \cref{55_cevol,910_cevol,strain_vf} that the second and the third variants dominate over the first one, which looks contradictory. To better understand the reasons for these results, let us, for simplicity of illustration, consider 5 grains: grain 1 with cube axes aligned with the global axes (variant 1); grains 2 and 3, rotated by $\pm 90^\circ$ about axis 1 (variant 2); and grains 4 and 5, rotated by $\pm 90^\circ$ about axis 2 (variant 3).
Due to symmetry, this aggregate is still the same single crystal. If we treat it like a single crystal and transform it homogeneously till completion, we obtain $ c_1=1$, $c_2=c_3=0$, and $\fg \varepsilon_t=\fg \varepsilon_{t1}$. However, if we treat it as a polycrystal, in grain 1 we will have the same $ c_1=1$, $c_2=c_3=0$, and $\fg \varepsilon_t=\fg \varepsilon_{t1}$. In the local coordinate system of grains 2 and 3, this transformation strain looks like $\fg \varepsilon_t=\fg \varepsilon_{t2}$, i.e., $ c_2=1$, $c_1=c_3=0$. Similarly, in the local coordinate system of grains 4 and 5, the transformation strain is $\fg \varepsilon_{t3}$, i.e., $ c_3=1$, $c_1=c_2=0$. After averaging these volume fractions over the entire sample, we obtain $ \bar{c}_1=1/5$, $\bar{c}_2=\bar{c}_3=2/5$, which is approximately what we observe in Fig. \ref{strain_vf}. The main point is that ${c}_i$ in each grain is not very meaningful unless the orientation of the grain is also shown. That is why we provide complete information about the orientation of each grain in the supplementary material and show the orientations in \cref{grainID}a. The averages over the sample $\bar{c}_i$ have no physical meaning because the orientations of the grains are not taken into account; they cannot be used in macroscopic theories. The average over the sample $\bar{c}$, in contrast, has a clear physical meaning. We suggest the following ways to present a multivariant structure in a polycrystal. For a local presentation, one can show a triad of the local crystallographic axes in each grain along with the fields $c_i$ (\cref{grainID}a). One can also present the fields of the six components of $\fg \varepsilon_{ti}c_i$ for each variant $M_i$ in the global coordinate system, i.e., 18 fields. For the above example with 5 grains, in each grain we will have the components of $\fg \varepsilon_{t1}$ in the global coordinate system, which is consistent with the treatment of the aggregate as a single crystal.
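The five-grain example above can be reproduced numerically (an illustrative Python/NumPy sketch with grain orientations and variant labels as in the text): in every grain, the global transformation strain equals $\fg \varepsilon_{t1}$, yet averaging the local volume fractions gives $(1/5,\,2/5,\,2/5)$.

```python
import numpy as np

def rotation_about_axis(axis, deg):
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    R = np.eye(3)
    i, j = [k for k in range(3) if k != axis]
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = -s, s
    return R

# Variant transformation strains in the local cubic axes
eps = [np.diag(d) for d in ([0.1753, 0.1753, -0.4470],
                            [0.1753, -0.4470, 0.1753],
                            [-0.4470, 0.1753, 0.1753])]

# The five grains of the text: unrotated, +/-90 deg about axis 1, +/-90 deg about axis 2
grains = [np.eye(3),
          rotation_about_axis(0, 90), rotation_about_axis(0, -90),
          rotation_about_axis(1, 90), rotation_about_axis(1, -90)]

# In each grain, the transformed variant is the one whose strain in the
# global frame, R_g . eps_i . R_g^T, equals eps_t1 (compression along axis 3)
c_local = []
for R in grains:
    i = next(i for i in range(3) if np.allclose(R @ eps[i] @ R.T, eps[0]))
    c = np.zeros(3)
    c[i] = 1.0
    c_local.append(c)

c_bar = np.mean(c_local, axis=0)
assert np.allclose(c_bar, [0.2, 0.4, 0.4])   # bar c_1 = 1/5, bar c_2 = bar c_3 = 2/5
```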
More compact is to present the six fields of the components of the total transformation strain $\fg \varepsilon_{t}=\sum_{i=1}^m \fg \varepsilon_{ti}c_i$ in the global coordinate system (Figs. \ref{55_etevol} and \ref{910_etevol}), which takes into account, in a more averaged sense, both the orientation of a grain and the fields $c_i$. For the averaged description, one can use a plot of the six components of $\bar{\fg \varepsilon}_{t}$. As follows from Figs. \ref{55_etevol} and \ref{910_etevol}, the normal components of $\fg \varepsilon_{t}$ vary between the extremes $-0.447$ and $0.1753$, corresponding to full local transformation into a single variant oriented along the global coordinate axes, despite zero averaged total lateral strains. The shear components vary between the extremes of approximately $\pm 0.31$, also despite zero averaged values. The colors corresponding to the zero value of each component of the transformation strain are clear from the colors of the large Si I regions at 25\% transformation progress (corresponding to $c=0$) in Figs. \ref{55_cevol} and \ref{910_cevol}. For both numbers of grains, one sees a strong heterogeneity of the transformation strains from grain to grain and within grains. Note that the fields at the surface do not completely represent the fields in the entire volume; that is why some results may look counterintuitive. For example, while the averaged $\varepsilon_t^{33}$ is larger in magnitude than $\varepsilon_t^{11}$ and $\varepsilon_t^{22}$, this is not evident from the surface fields. Despite the fact that for the smaller number of grains the Si II units are larger, the difference in $\bar{c} (\bar{E}_{33})$ (and even in $\bar{c}_i (\bar{E}_{33})$) for different numbers of grains is small (\cref{strain_vf}). Initiation of the PT occurs at the same stress $\sigma_{zz}=-10.33 $ GPa for both numbers of grains, and the stress-strain curves in \cref{strain_con} also differ insignificantly. That means that the current model does not describe the experimentally observed effect of the grain size on the Si I to Si II PT pressure or stress \citep{Sorb2022,zeng2020origin,xuan2020pressure,tolbert1996pressure}. The reason is that the current model does not include dislocations as local stress concentrators in the bulk and at grain boundaries. This will be the next step in developing the current model. We can use the same approach for introducing discrete dislocations via the solution of a contact problem, as was done in \cite{Levitas2018Scale-FreeMicrostructure,Esfahani-etal-2020} for small strains and 2D formulations. Of course, it is much more challenging to do this for 3D and large strains. \Cref{55_sevol,910_sevol} show the evolution of all components of the Cauchy stresses throughout the simulation.
It can be noticed from \cref{55_sevol,910_sevol} that the grain boundaries and triple junctions have the highest stresses of both signs, which cause nucleation of the dominating variant, accompanied by the two other variants, and growth, often along the grain boundaries. Next, a large stress concentration of both signs appears at the phase interfaces, causing further nucleation in the bulk (the so-called autocatalytic effect). The peak magnitudes are huge for both normal and shear stresses. Thus, for small grains, $\sigma_{22}$ varies from $- 45$ to $35$ GPa and $\sigma_{33}$ varies from $- 40$ to $10$ GPa. Similarly, the shear stress $\sigma_{12}$ varies from $- 20$ to $20$ GPa and $\sigma_{23}$ varies from $- 15$ to $15$ GPa. For large grains, the magnitudes of the extremes in stresses are smaller by $5$ to $10$ GPa. Despite this difference for different grain sizes, the macroscopic PT initiation stress $\sigma_{zz}= -10.33$ GPa is the same, and the entire $\sigma_{zz}-\varepsilon_{zz}$ curves do not differ significantly. The initiation stress is determined by the transformation work, which depends on all components of the stress tensor. While the stresses are different, the transformation work may be approximately the same for both grain sizes. This is similar to the results in \cite{Babaei-Levitas-ActaMat-2019}, where a strong stress concentrator due to a dislocation in Si produced a relatively small contribution to the transformation work.
\begin{figure} \centering \resizebox{150mm}{!}{\includegraphics[]{Images/128-55/55_c.pdf}} \begin{minipage}{\linewidth} \centering \hspace{0.05\linewidth} \begin{minipage}{0.2\linewidth} \begin{figure}[H] \resizebox{20mm}{!}{\includegraphics{Images/axes.png}} \end{figure} \end{minipage} \hspace{0.05\linewidth} \begin{minipage}{0.5\linewidth} \begin{figure}[H] \resizebox{60mm}{!}{\includegraphics{Images/colorbar_c.png}} \end{figure} \end{minipage} \end{minipage} \caption{Evolution of volume fractions of phases for strain-controlled loading of a sample with 55 grains. The figure shows snapshots at different stages of completion of simulation (25\%, 50\%, 75\%, and 100\%) in different columns for Si II, $c$, and each martensitic variant, $c_i$.} \label{55_cevol} \end{figure} \begin{figure} \centering \resizebox{150mm}{!}{\includegraphics[]{Images/128-910/910_c.pdf}} \begin{minipage}{\linewidth} \centering \hspace{0.05\linewidth} \begin{minipage}{0.2\linewidth} \begin{figure}[H] \resizebox{20mm}{!}{\includegraphics{Images/axes.png}} \end{figure} \end{minipage} \hspace{0.05\linewidth} \begin{minipage}{0.5\linewidth} \begin{figure}[H] \resizebox{60mm}{!}{\includegraphics{Images/colorbar_c.png}} \end{figure} \end{minipage} \end{minipage} \caption{Evolution of volume fractions for strain-controlled loading of a sample with 910 grains. } \label{910_cevol} \end{figure} \begin{figure} \centering \resizebox{150mm}{!}{\includegraphics[]{Images/128-55/55_et.pdf}} \begin{minipage}{\linewidth} \hspace{0.1\linewidth} \begin{minipage}{0.1\linewidth} \begin{figure}[H] \resizebox{10mm}{!}{\includegraphics{Images/axes.png}} \end{figure} \end{minipage} \hspace{0.85\linewidth} \end{minipage} \caption{Evolution of components of the transformation strain tensor $\fg \varepsilon_t$ for strain-controlled loading of a sample with 55 grains. 
} \label{55_etevol} \end{figure} \begin{figure} \centering \resizebox{150mm}{!}{\includegraphics[]{Images/128-910/910_et.pdf}} \begin{minipage}{\linewidth} \hspace{0.1\linewidth} \begin{minipage}{0.1\linewidth} \begin{figure}[H] \resizebox{10mm}{!}{\includegraphics{Images/axes.png}} \end{figure} \end{minipage} \hspace{0.85\linewidth} \end{minipage} \caption{Evolution of the components of the transformation strain tensor $\fg \varepsilon_t$ for strain-controlled loading of a sample with 910 grains. } \label{910_etevol} \end{figure} \begin{figure}[H] \centering \resizebox{70mm}{!}{\includegraphics{Images/c_evol_strain.pdf}} \caption{Volume fraction of each martensitic variant ($\bar{c}_i$) and the total martensite ($\bar{c}$) averaged over the sample based on Eq. (\ref{av-c}) vs. strain for strain-controlled loading. The lack of physical sense for $\bar{c}_i$ is described in the text.} \label{strain_vf} \end{figure} \begin{figure} \centering \resizebox{150mm}{!}{\includegraphics[]{Images/128-55/55_s.pdf}} \begin{minipage}{\linewidth} \hspace{0.1\linewidth} \begin{minipage}{0.1\linewidth} \begin{figure}[H] \resizebox{10mm}{!}{\includegraphics{Images/axes.png}} \end{figure} \end{minipage} \hspace{0.85\linewidth} \end{minipage} \caption{Evolution of components of the Cauchy stress tensor for strain-controlled loading of a sample with 55 grains. } \label{55_sevol} \end{figure} \begin{figure} \centering \resizebox{150mm}{!}{\includegraphics[]{Images/128-910/910_s.pdf}} \begin{minipage}{\linewidth} \hspace{0.1\linewidth} \begin{minipage}{0.1\linewidth} \begin{figure}[H] \resizebox{10mm}{!}{\includegraphics{Images/axes.png}} \end{figure} \end{minipage} \hspace{0.85\linewidth} \end{minipage} \caption{Evolution of the components of the Cauchy stress tensor for strain-controlled loading of a sample with 910 grains. } \label{910_sevol} \end{figure} The stress-strain plots for strain-controlled loading are given in \cref{strain_con}. 
After the PT starts at $\bar{E}_{33}=-0.06$ and $\bar{\sigma}_{33}=-10.33$ GPa for both small and large grains, the stress-strain plots continue along the elastic curve of the austenite with small nonlinearities. This behavior occurs because, initially, the transformation rate is very low, as can be noticed from \cref{strain_vf}. Si II nuclei are localized near stress concentrators without significant growth. Only at $\bar{E}_{33}=-0.07$ does intense growth start, which leads to a strong reduction in the tangent modulus. Note that for a single crystal, the tangent modulus becomes negative at the onset of the PT, causing macroscopic instability \cite{babaei2020finite}. In contrast, for polycrystals, the local mechanical instabilities due to the PT and the negative local tangent modulus are stabilized at the macroscale by the arresting/slowing of the growth of Si II regions by the grain boundaries and by the generation of internal back stresses. This is reflected by the positive tangent moduli in the stress-strain plots in \cref{strain_con}. While intuitively the more grain boundaries we have, the higher the tangent moduli should be, in fact the response for 910 grains is slightly softer than that for 55 grains (\cref{strain_con}). The reasons are: (a) more triple junctions and nucleation sites for smaller grains, leading to more Si II regions; (b) smaller misorientations between neighboring grains, leading to easier transfer of PT growth from grain to grain; and (c) a larger number of surrounding grains, giving more chances to find a proper orientation for new nucleation caused by internal stresses in the transforming grains. Although there is a noticeable difference in the response between the two cases, the maximum difference is $<1$ GPa or $<10\%$. Such a small difference implies that it is unnecessary to treat a sample with such a large number of grains to estimate the macroscopic behavior of a polycrystal.
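The claim that $\bar{c}_i$, unlike the sample-averaged transformation strain tensor, carries no orientational information can be illustrated with synthetic data. The sketch below is hypothetical (random textures and fractions, not the simulation output) and assumes equal-volume grains and the diagonal Bain strains with components $0.1753$ and $-0.447$ quoted earlier:

```python
import numpy as np

def bain_strain(i):
    """Diagonal Bain strain of variant i in the crystal axes (illustrative)."""
    eps = np.full(3, 0.1753)
    eps[i] = -0.447
    return np.diag(eps)

def random_rotation(rng):
    """A random proper rotation via QR decomposition of a Gaussian matrix."""
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]
    return q

def averages(fractions, rotations):
    """Equal-volume grain averages: (bar_c_i, bar_eps_t in the global axes)."""
    bar_c = fractions.mean(axis=0)
    bain = np.stack([bain_strain(i) for i in range(3)])
    eps_local = np.einsum('gi,ijk->gjk', fractions, bain)   # per-grain eps_t
    eps_global = np.einsum('gab,gbc,gdc->gad', rotations, eps_local, rotations)
    return bar_c, eps_global.mean(axis=0)                   # R eps R^T, averaged

rng = np.random.default_rng(0)
n = 55                                        # number of grains (illustrative)
fractions = rng.dirichlet([1.0] * 4, size=n)[:, :3]  # c_1..c_3; rest austenite
rots_a = np.stack([random_rotation(rng) for _ in range(n)])
rots_b = np.stack([random_rotation(rng) for _ in range(n)])

c_a, eps_a = averages(fractions, rots_a)
c_b, eps_b = averages(fractions, rots_b)
print(np.allclose(c_a, c_b), np.allclose(eps_a, eps_b))  # True False
```

Two polycrystals with identical per-grain variant fractions but different textures produce the same $\bar{c}_i$ yet different macroscopic transformation strain tensors; only the trace (the volumetric part) of the averaged transformation strain is orientation-independent.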
Note that the Lagrangian elastic strain at the end of the simulation, at $\bar{E}_{33}=-0.22$, is approximately $-0.1$, i.e., comparable to the transformation strain. \begin{minipage}{\linewidth} \centering \begin{minipage}{0.45\linewidth} \begin{figure}[H] \centering \resizebox{70mm}{!}{\includegraphics{strain_con.pdf}} \caption{Averaged Cauchy stress - Lagrangian strain plot for strain-controlled loading for 55 grains and 910 grains. The dot marker represents the onset of phase transformation.} \label{strain_con} \end{figure} \end{minipage} \hspace{0.05\linewidth} \begin{minipage}{0.45\linewidth} \end{minipage} \end{minipage} \hspace{5mm} \Cref{pf_55,pf_910} show the pole figures for 55 and 910 grains, respectively. \Cref{pf_55}(a) and \Cref{pf_910}(a) for the initial austenite demonstrate quite an even spread in all directions because a random texture was chosen for the stress-free initial configuration. \Cref{pf_55}(a) shows a few low-density regions because of the lower number of grains. Comparing the initial (\Cref{pf_55}(a)) and final (\Cref{pf_55}(b)) austenite for 55 grains, we can clearly notice depletion of the austenite grains due to their transformation to martensite. Similar depletion is not noticed in \Cref{pf_910}(b) for 910 grains because many grains do not completely transform to martensite. The increase in density in \Cref{pf_55}(b) or \Cref{pf_910}(b) is due to the rotation of parts of or entire austenitic grains, which was observed for a single crystal as well \cite{babaei2020finite}. \Cref{pf_55}(c)-(e) and \Cref{pf_910}(c)-(e) present the pole figures for the three martensitic variants at the last stage of the simulations. As expected, there is a 90$^\circ$ rotation relation between the first and the second or the third variant about the $(001)$ axis. The trends in the volume fractions of the individual martensitic variants in \Cref{pf_55,pf_910} match those of \Cref{strain_vf}.
The first variant is the lowest in both cases, as only the grains that have their $c$-axis aligned with the loading direction can transform to the first variant. The second or the third variants primarily need to be oriented such that their $a$- or $b$-axis is aligned with the loading direction. But because of the boundary conditions used, there are lateral stresses which can contribute to a resultant load along the preferred axes, assisting the transformation. This is not possible for the first variant, as the lateral stresses are only a fraction of the applied load and cannot be the primary contributor to fulfilling the phase transformation criterion. \begin{figure} \centering \resizebox{150mm}{!}{\includegraphics[trim={45mm 45mm 45mm 45mm}, clip, angle=270]{55_pf.pdf}} \caption{Pole figures of (a) initial austenite, (b) residual austenite, (c) final martensitic variant $M_1$, (d) $M_2$, and (e) $M_3$ for strain-controlled loading of 55 grains. The minimum and maximum intensities for each direction are shown in the legend corresponding to each figure.} \label{pf_55} \end{figure} \begin{figure} \centering \resizebox{150mm}{!}{\includegraphics[trim={45mm 45mm 45mm 45mm}, clip, angle=270]{910_pf.pdf}} \caption{Pole figures of (a) initial austenite, (b) residual austenite, (c) final martensitic variant $M_1$, (d) $M_2$, and (e) $M_3$ for strain-controlled loading of 910 grains. The minimum and maximum intensities for each direction are shown in the legend corresponding to each figure.} \label{pf_910} \end{figure} \subsection{Stress-controlled loading} In stress-controlled loading, the stress is applied at a rate of $-4$ MPa/s until it reaches $-11$ GPa and is held constant thereafter. \Cref{55ss_cevol,910ss_cevol} and \cref{55ss_sevol,910ss_sevol} show the evolution of the volume fraction of Si II and the individual martensitic variants, as well as all stress components, at different simulation stages for 55 and 910 grains.
The videos showing the evolution of the volume fractions are provided in the supplementary material. The volume fraction of each martensitic variant $\bar{c}_i$ averaged over the sample based on Eq. (\ref{av-c}), together with $\bar{c}$, vs. strain is shown in \cref{stress_vf}. In comparison with the strain-controlled case, there is no significant difference in the nucleation at grain boundaries and triple junctions, the character of growth of martensitic units, the stress concentrations, and the relation $\bar{c}_2\simeq \bar{c}_3\simeq 2\bar{c}_1$. The same discussion on the lack of physical sense of $\bar{c}_i$ is valid; only $\bar{c}$ has a physical meaning. \begin{minipage}{\linewidth} \centering \begin{minipage}{0.45\linewidth} \begin{figure}[H] \centering \resizebox{70mm}{!}{\includegraphics{stress_con.pdf}} \caption{True stress-strain plot for stress-controlled loading for 55 and 910 grains. The dot marker represents the onset of phase transformation.} \label{stress_con} \end{figure} \end{minipage} \hspace{0.05\linewidth} \begin{minipage}{0.45\linewidth} \begin{figure}[H] \centering \resizebox{70mm}{!}{\includegraphics{Images/c_evol_stress.pdf}} \caption{Volume fraction of each martensitic variant ($\bar{c}_i$) and the total martensite ($\bar{c}$) averaged over the sample based on Eq. (\ref{av-c}) vs. strain for stress-controlled loading.
The lack of physical sense for $\bar{c}_i$ is described in the text.} \label{stress_vf} \end{figure} \end{minipage} \end{minipage} \hspace{-5mm} \begin{figure} \centering \resizebox{150mm}{!}{\includegraphics[]{Images/128-55-ss/55_ss_c.pdf}} \begin{minipage}{\linewidth} \centering \hspace{0.05\linewidth} \begin{minipage}{0.2\linewidth} \begin{figure}[H] \resizebox{20mm}{!}{\includegraphics{Images/axes.png}} \end{figure} \end{minipage} \hspace{0.05\linewidth} \begin{minipage}{0.5\linewidth} \begin{figure}[H] \resizebox{60mm}{!}{\includegraphics{Images/colorbar_c.png}} \end{figure} \end{minipage} \end{minipage} \caption{Evolution of volume fractions of phases for stress-controlled loading of a sample with 55 grains. } \label{55ss_cevol} \end{figure} \begin{figure} \centering \resizebox{150mm}{!}{\includegraphics[]{Images/128-910-ss/910_ss_c.pdf}} \begin{minipage}{\linewidth} \centering \hspace{0.05\linewidth} \begin{minipage}{0.2\linewidth} \begin{figure}[H] \resizebox{20mm}{!}{\includegraphics{Images/axes.png}} \end{figure} \end{minipage} \hspace{0.05\linewidth} \begin{minipage}{0.5\linewidth} \begin{figure}[H] \resizebox{60mm}{!}{\includegraphics{Images/colorbar_c.png}} \end{figure} \end{minipage} \end{minipage} \caption{Evolution of volume fractions of phases for stress-controlled loading of a sample with 910 grains.
} \label{910ss_cevol} \end{figure} \begin{figure} \centering \resizebox{150mm}{!}{\includegraphics[]{Images/128-55-ss/55_ss_et.pdf}} \begin{minipage}{\linewidth} \hspace{0.1\linewidth} \begin{minipage}{0.1\linewidth} \begin{figure}[H] \resizebox{10mm}{!}{\includegraphics{Images/axes.png}} \end{figure} \end{minipage} \hspace{0.85\linewidth} \end{minipage} \caption{Evolution of the components of the transformation strain tensor for stress-controlled loading of a sample with 55 grains.} \label{55ss_etevol} \end{figure} \begin{figure} \centering \resizebox{150mm}{!}{\includegraphics[]{Images/128-910-ss/910_ss_et.pdf}} \begin{minipage}{\linewidth} \hspace{0.1\linewidth} \begin{minipage}{0.1\linewidth} \begin{figure}[H] \resizebox{10mm}{!}{\includegraphics{Images/axes.png}} \end{figure} \end{minipage} \hspace{0.85\linewidth} \end{minipage} \caption{Evolution of the components of the transformation strain tensor for stress-controlled loading of a sample with 910 grains.} \label{910ss_etevol} \end{figure} \begin{figure} \centering \resizebox{150mm}{!}{\includegraphics[]{Images/128-55-ss/55_ss_s.pdf}} \begin{minipage}{\linewidth} \hspace{0.1\linewidth} \begin{minipage}{0.1\linewidth} \begin{figure}[H] \resizebox{10mm}{!}{\includegraphics{Images/axes.png}} \end{figure} \end{minipage} \hspace{0.85\linewidth} \end{minipage} \caption{Evolution of the components of the Cauchy stress tensor for stress-controlled loading of a sample with 55 grains.} \label{55ss_sevol} \end{figure} \begin{figure} \centering \resizebox{150mm}{!}{\includegraphics[]{Images/128-910-ss/910_ss_s.pdf}} \begin{minipage}{\linewidth} \hspace{0.1\linewidth} \begin{minipage}{0.1\linewidth} \begin{figure}[H] \resizebox{10mm}{!}{\includegraphics{Images/axes.png}} \end{figure} \end{minipage} \hspace{0.85\linewidth} \end{minipage} \caption{Evolution of the components of the Cauchy stress tensor for stress-controlled loading of a sample with 910 grains.} \label{910ss_sevol} \end{figure} Stress-controlled
compression for both numbers of grains proceeds up to 42\% transformation to Si II, after which the computational process diverges. The stress-strain plots for stress-controlled loading are given in \Cref{stress_con}. The PT starts immediately at $-11$ GPa and continues to the end of the simulations at $E_{33}= -0.116$. These values are larger than those for strain-controlled loading: at $-11$ GPa, we have $E_{33}= -0.063$ and $c=2.4\times10^{-3}$ only; however, at $E_{33}=-0.116$ one gets $c=0.362$ for strain-controlled loading. The possibility of larger strains and transformation progress at $-11$ GPa is related to the less constrained deformation at the horizontal external surfaces. Periodic conditions for displacements along the axis 3 for strain-controlled loading lead to a more homogeneous PT near both horizontal surfaces and small deviations from the flat surfaces. In contrast, for stress-controlled loading, the lack of periodic conditions along the axis 3 leads to a much more pronounced PT near the upper horizontal surface, which spreads into the upper part of the sample. This leads to the loss of stability of Si II nuclei near stress concentrators, their fast growth, interaction, coalescence, and a more pronounced autocatalytic effect. Thus, the specific boundary conditions are very influential, which must be taken into account in the problem formulation. The difference in the volume fractions of $M_i$ and $\bar{c}$ for 55 and 910 grains in \cref{stress_vf} is much smaller than for the strain-controlled loading in \cref{strain_vf}, again due to the less restrictive boundary conditions. The peak stresses are large but smaller than for strain-controlled loading (\cref{55ss_sevol,910ss_sevol}). Thus, for small grains, $\sigma_{22}$ varies from $-33$ to $30$ GPa and $\sigma_{33}$ varies from $-25$ to $5$ GPa. Similarly, the shear stresses $\sigma_{12}$ and $\sigma_{23}$ vary from $-15$ to $15$ GPa. For large grains, the magnitudes of the stress extremes are smaller by $5$ to $10$ GPa, as for strain-controlled loading.
Lower peak stresses are partially caused by a smaller volume fraction of Si II. \Cref{pf_55_ss,pf_910_ss} show the pole figures for 55 and 910 grains, respectively, for the stress-controlled loading. \Cref{pf_55_ss}(a) and \Cref{pf_910_ss}(a) show the initial austenite texture for both cases, which is the same as the one chosen for strain-controlled loading. For both 55 and 910 grains, there is no significant depletion of the austenite, unlike for the strain-controlled loading, because $\bar{c}=0.42$ only. This is reflected in the small difference between \cref{pf_55_ss}(a) and \cref{pf_55_ss}(b), and in the sparsely distributed pole figures in \cref{pf_55_ss}(c)-(e). Contrary to this, \cref{pf_910_ss}(c)-(e) show pole figures with more uniform distributions because grains in many different orientations start transforming to martensite, as in the case of \cref{pf_910}. \begin{figure}[H] \centering \resizebox{150mm}{!}{\includegraphics[trim={45mm 45mm 45mm 45mm}, clip, angle=270]{55_pf_ss.pdf}} \caption{Pole figures of (a) initial austenite, (b) residual austenite, (c) final martensitic variant $M_1$, (d) $M_2$, and (e) $M_3$ for stress-controlled loading of 55 grains. } \label{pf_55_ss} \end{figure} \begin{figure}[H] \centering \resizebox{150mm}{!}{\includegraphics[trim={45mm 45mm 45mm 45mm}, clip, angle=270]{910_pf_ss.pdf}} \caption{Pole figures of (a) initial austenite, (b) residual austenite, (c) final martensitic variant $M_1$, (d) $M_2$, and (e) $M_3$ for stress-controlled loading of 910 grains. } \label{pf_910_ss} \end{figure} \section{Concluding remarks}\label{Conclusion} In this paper, the first scale-free PFA modeling of the multivariant martensitic PT from cubic Si I to tetragonal Si II in a polycrystalline aggregate with up to 1000 grains is presented.
All computational challenges related to the large and very anisotropic transformation strain tensors, the stress-tensor dependent athermal dissipative threshold for the PT, and potential elastic instabilities due to a variety of complex loadings in each grain are overcome. The importance of the simulations is also underlined by the fact that, since Si II does not exist under ambient conditions, its microstructure cannot be studied using traditional post-mortem methods (SEM, TEM, etc.). For a single crystal, positions of Si I-Si II interfaces can be determined using in-situ high-pressure Laue diffraction, but only in combination with molecular dynamics \cite{Chen2022NontrivialTransformation}. For polycrystals, this is currently impossible. The coupled evolution of the discrete martensitic microstructure, the volume fractions of the martensitic variants and Si II, the stress and transformation strain tensors, and the texture is presented and analyzed. It is demonstrated that the volume fraction of each martensitic variant ${c}_i$ in each grain is not very meaningful unless the orientation of the grain is explicitly shown. The averages over the sample, $\bar{c}_i$, do not have any physical sense because the orientations of the grains are not taken into account; they are misleading and cannot be used in macroscopic theories. Macroscopic variables effectively representing the multivariant transformational behavior are introduced. One can present fields of the six components of $\fg \varepsilon_{ti}c_i$ for each variant $M_i$ in the global coordinate system. More compactly, one can present the six components of the total transformation strain $\fg \varepsilon_{t}=\sum_{i=1}^m \fg \varepsilon_{ti}c_i$ in the global coordinate system. For the averaged description, one can utilize the six components of $\bar{\fg \varepsilon}_{t}$. For strain-controlled uniaxial compression with periodic conditions in all directions, almost complete (96\%) PT was reached with small pockets of residual Si I.
For stress-controlled uniaxial compression without periodic conditions in the loading direction, 42\% completion was achieved at $-11$ GPa, much larger than the 0.24\% for the same axial stress under strain-controlled loading, although a relatively close volume fraction of Si II was reached for the same strain. The lack of periodic conditions in the loading direction results in less constrained deformation at the horizontal external surfaces and a more localized transformation near one of them, which leads to the loss of stability of Si II nuclei near stress concentrators, their fast growth, interaction, and coalescence. Thus, tiny details of the boundary conditions are very influential and must be taken into account in the problem formulation. In contrast to a single crystal, the local mechanical instabilities due to the PT and the negative local tangent modulus are stabilized at the macroscale by the arresting/slowing of the growth of Si II regions by the grain boundaries and by the generation of internal back stresses. This leads to an increasing stress magnitude during the PT. Large transformation strains and grain boundaries lead to huge internal stresses, which affect the microstructure evolution and the macroscopic behavior. The peak stresses reach 45 GPa in compression, 35 GPa in tension, and 20 GPa in shear for strain-controlled loading of 910 grains; for 55 grains, the magnitudes of the stress extremes are smaller by $5$ to $10$ GPa. For stress-controlled loading and small grains, the peak stresses reach 35 GPa in compression, 30 GPa in tension, and 15 GPa in shear; for large grains, they are smaller by 5-10 GPa, as for strain-controlled loading. Lower peak stresses for stress-controlled loading are partially caused by a smaller volume fraction of Si II. Despite these differences, the macroscopic (overall) stress-strain and transformational behavior for 55 and 910 grains are quite close and differ by less than 10\% for strain-controlled loading and even less for the stress-controlled one.
On the good side, this allows the determination of the macroscopic constitutive equations by treating an aggregate with a small number of grains. On the bad side, it means that the current model does not describe the experimentally observed effect of the grain size on the Si I to Si II PT pressure or stress \cite{Sorb2022,zeng2020origin,xuan2020pressure,tolbert1996pressure}. The reason is that the current model does not include dislocations as local stress concentrators in the bulk and at grain boundaries. This will be the next step in developing the current model. We can use the same approach for introducing discrete dislocations via the solution of the contact problem, as was done in \cite{Levitas2018Scale-FreeMicrostructure,Esfahani-etal-2020} for small strains and 2D formulations. Of course, it is much more challenging to do this for 3D and large strains. A more realistic model of the grain boundaries is required as well. The developed methodology can be used for studying various PTs with large transformation strains (e.g., hexagonal and rhombohedral graphite to hexagonal and cubic diamond, similar PTs from graphite-like BN to superhard diamond-like BN, PTs in semiconducting Ge and GaSb, etc.) and for further development toward plastic strain-induced PTs. \noindent {\bf Acknowledgement} \par The support of NSF (CMMI-1943710) and Iowa State University (Vance Coffman Faculty Chair Professorship) is gratefully acknowledged. The simulations were performed at the Extreme Science and Engineering Discovery Environment (XSEDE), allocation TG-MSS170015.
\section{Background}\label{section1} \subsection{Saturated Fusion Systems} We begin with a precise definition of what is meant by a fusion system. For any group $S$ and $P,Q \leq S$ we write Hom$_S(P,Q)$ for the set of homomorphisms from $P$ to $Q$ induced by conjugation by elements of $S$ and Inj$(P,Q)$ for the set of monomorphisms from $P$ to $Q$. \begin{Def}\label{fus} Let $S$ be a finite $p$-group. A \textit{fusion system on $S$} is a category $\mathcal{F}$ where Ob$(\mathcal{F}):=\{P \mid P \leq S \}$, and where for each $P,Q \in$ Ob$(\mathcal{F})$ \begin{itemize} \item[(a)] Hom$_S(P,Q) \subseteq$ Hom$_\mathcal{F}(P,Q) \subseteq$ Inj$(P,Q)$; and \item[(b)] each $\varphi \in$ Hom$_\mathcal{F}(P,Q)$ factorises as an $\mathcal{F}$-isomorphism $\begin{CD} P @>>> P\varphi \\ \end{CD}$ followed by an \textit{inclusion} $\begin{CD} \iota_{P\varphi}^Q:P\varphi @>>> Q. \\ \end{CD}$ \end{itemize} \end{Def} Given an arbitrary collection $X_{P,Q} \subseteq$ Inj$(P,Q)$ of morphisms which contains Hom$_S(P,Q)$ for each $P,Q \leq S$, we can always construct a fusion system $\mathcal{F}$ on $S$ with $X_{P,Q} \subseteq$ Hom$_\mathcal{F}(P,Q)$ and where Hom$_\mathcal{F}(P,Q)$ $\backslash$ $X_{P,Q}$ only consists of $\mathcal{F}$-isomorphisms. In particular, $\mathcal{F}$ is minimal (with respect to the number of morphisms) amongst all fusion systems $\mathcal{G}$ on $S$ with the property that $X_{P,Q} \subseteq$ Hom$_\mathcal{G}(P,Q)$ for each $P,Q \leq S$. Next we define morphisms between fusion systems. \begin{Def} Let $\mathcal{F}$ and $\mathcal{E}$ be fusion systems on finite $p$-groups $S$ and $T$ respectively.
$\varphi \in \operatorname{Hom}(S,T)$ is a \textit{morphism} from $\mathcal{F}$ to $\mathcal{E}$ if for each $P,R \leq S$ and each $\alpha \in \operatorname{Hom}_\mathcal{F}(P,R)$, there exists $\beta \in \mbox{Hom}_\mathcal{E}(P\varphi,R\varphi)$ such that $$\alpha \circ \varphi|_R=\varphi|_P \circ \beta.$$ Hence $\varphi$ induces a functor from $\mathcal{F}$ to $\mathcal{E}$. \end{Def} In particular, fusion systems form a category which we denote by $\mathfrak{Fus}$. The following definition collects some important notions to which we will constantly refer, all of which are needed to define saturation. The reader is referred to \cite[Section I.2]{AKO} and \cite[Section 1.5]{CR} for a more thorough introduction to these ideas. \begin{Def} Let $\mathcal{F}$ be a fusion system on a finite $p$-group $S$ and let $P,Q \leq S$. \begin{itemize} \item[(a)] Iso$_\mathcal{F}(P,Q)$ denotes the set of $\mathcal{F}$-isomorphisms from $P$ to $Q$ and Aut$_\mathcal{F}(P) $:= Iso$_\mathcal{F}(P,P)$. \item[(b)] $P$ and $Q$ are \textit{$\mathcal{F}$-conjugate} whenever Iso$_\mathcal{F}(P,Q) \neq \emptyset$ and $P^\mathcal{F}$ denotes the set of all $\mathcal{F}$-conjugates of $P$. \item[(c)] $P$ is \textit{fully $\mathcal{F}$-normalised} respectively \textit{fully $\mathcal{F}$-centralised} if for each $R \in P^\mathcal{F}$, the inequality $$|N_S(P)| \geq |N_S(R)| \mbox{ respectively } |C_S(P)| \geq |C_S(R)|$$ holds. \item[(d)] $P$ is \textit{fully $\mathcal{F}$-automised} if Aut$_S(P) \in$ Syl$_p($Aut$_\mathcal{F}(P)).$ \item[(e)] For each $\varphi \in$ Hom$_\mathcal{F}(P,S)$, $$N_\varphi=N_\varphi^\mathcal{F}:=\{g \in N_S(P) \mid \varphi^{-1} \circ c_g \circ \varphi \in \mbox{Aut}_S(P\varphi) \},$$ where $c_g \in$ Aut$_S(P)$ is the automorphism induced by conjugation by $g$. \end{itemize} \end{Def} There are a number of equivalent ways to define saturation and we refer the reader to \cite[Section I.9]{AKO} or \cite[Section 4.3]{CR} for a comparison of these. 
The definition we choose is listed among the definitions in these references and also appears in \cite[Definition 1.2]{BLO1}. \begin{Def}\label{sat} Let $\mathcal{F}$ be a fusion system on a finite $p$-group $S$. We say that $\mathcal{F}$ is \textit{saturated} if the following hold: \begin{itemize} \item[(a)] Whenever $P \leq S$ is fully $\mathcal{F}$-normalised, it is fully $\mathcal{F}$-centralised and fully $\mathcal{F}$-automised. \item[(b)] For all $P \leq S$ and $\varphi \in$ Hom$_\mathcal{F}(P,S)$ such that $P\varphi$ is fully $\mathcal{F}$-centralised, there is $\bar{\varphi} \in $ Hom$_\mathcal{F}(N_\varphi, S)$ such that $\bar{\varphi}|_P=\varphi$. \end{itemize} \end{Def} We observe that a natural class of examples of saturated fusion systems is provided by groups. A finite $p$-subgroup $S$ of a group $G$ is a \textit{Sylow $p$-subgroup of $G$} if every finite $p$-subgroup of $G$ is $G$-conjugate to a subgroup of $S$. Let $\mathcal{F}_S(G)$ denote the fusion system on $S$ where for each $P,Q \leq S$, Hom$_{\mathcal{F}_S(G)}(P,Q):=$ Hom$_G(P,Q).$ \begin{Thm}\label{fsgsat} Let $G$ be a finite group and $S$ be a Sylow $p$-subgroup of $G$. The fusion system $\mathcal{F}_S(G)$ is saturated. \end{Thm} \subsection{Alperin's Theorem} We now concern ourselves with `generation' of saturated fusion systems, starting with some more definitions: \begin{Def} Let $\mathcal{F}$ be a fusion system on a finite $p$-group $S$ and let $P \leq S.$ \begin{itemize} \item[(a)] $P$ is \textit{$S$-centric} if $C_S(P)=Z(P)$ and $P$ is \textit{$\mathcal{F}$-centric} if $Q$ is $S$-centric for each $Q \in P^\mathcal{F}.$ \item[(b)] Write $\operatorname{Out}_\mathcal{F}(P):=\operatorname{Aut}_\mathcal{F}(P)/\operatorname{Inn}(P)$. $P$ is \textit{$\mathcal{F}$-essential} if $P$ is $\mathcal{F}$-centric and $\operatorname{Out}_\mathcal{F}(P)$ contains a strongly $p$-embedded subgroup. \item[(c)] $P$ is \textit{$\mathcal{F}$-radical} if $O_p(\operatorname{Out}_\mathcal{F}(P))=1$. 
\end{itemize} \end{Def} The following lemma concerning $\mathcal{F}$-centric subgroups is extremely useful. \begin{Lem}\label{centlem} Let $\mathcal{F}$ be a fusion system on a finite $p$-group $S$. The following hold for each $P,Q \leq S$: \begin{itemize} \item[(a)] If $P$ is fully $\mathcal{F}$-centralised then $PC_S(P)$ is $\mathcal{F}$-centric. \item[(b)] If $P \leq Q$ and $P$ is $\mathcal{F}$-centric, then $Q$ is $\mathcal{F}$-centric. \end{itemize} \end{Lem} \begin{proof} Writing $R=PC_S(P)$, we see immediately that $C_S(R) \leq R$. If $\varphi \in$ Hom$_\mathcal{F}(R,S)$ then $R\varphi \leq P\varphi C_S(P\varphi)$ and since $P$ is fully $\mathcal{F}$-centralised, necessarily $R\varphi = P\varphi C_S(P\varphi).$ Now $C_S(R\varphi) \leq R\varphi$ which proves (a). Part (b) is trivial. \end{proof} We introduce some notation which makes precise the notion of a `generating' set in our context. \begin{Def} Let $S$ be a finite $p$-group and let $\mathcal{C}$ be a collection of injective maps between subgroups of $S$. Denote by $\langle \mathcal{C} \rangle_S$ the smallest fusion system on $S$ containing all maps which lie in $\mathcal{C}$. If $\mathcal{F}$ is a fusion system on $S$ and $\mathcal{F}=\langle \mathcal{C} \rangle_S$ then the set $\mathcal{C}$ is said to \textit{generate} $\mathcal{F}$. \end{Def} Observe that whenever $\mathcal{C}$ generates $\mathcal{F}$, each morphism in $\mathcal{F}$ can be written as a composite of restrictions of elements of $\mathcal{C}$. \begin{Def} A set of subgroups $\mathcal{X}$ of a finite $p$-group $S$ is said to be a \textit{conjugation family} for a fusion system $\mathcal{F}$ on $S$ if $\langle\operatorname{Aut}_\mathcal{F}(P) \mid P \in \mathcal{X} \rangle_S=\mathcal{F}$. \end{Def} We may now state Alperin's theorem for saturated fusion systems, which provides a very useful conjugation family for any saturated fusion system $\mathcal{F}$.
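Before stating the theorem, it may help to see the simplest instance where fusion in $\mathcal{F}_S(G)$ strictly exceeds $S$-conjugation. The following sketch (a standard toy example, computed here by brute force; permutations act on $\{0,1,2,3\}$) takes $G=S_4$ with Sylow $2$-subgroup $S\cong D_8$ and checks that the subgroups generated by $(1\,2)(3\,4)$ and $(1\,3)(2\,4)$ are $G$-conjugate but not $S$-conjugate:

```python
from itertools import permutations

def compose(p, q):
    """(p * q)(x) = p(q(x)); permutations stored as image tuples."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conjugate(g, x):
    """g^{-1} x g."""
    return compose(inverse(g), compose(x, g))

def closure(gens):
    """Subgroup generated by gens (naive product closure; enough for
    finite groups, since inverses arise as powers)."""
    group, frontier = set(gens), set(gens)
    while frontier:
        new = {compose(a, b) for a in frontier for b in group}
        new |= {compose(b, a) for a in frontier for b in group}
        frontier = new - group
        group |= frontier
    return group

G = set(permutations(range(4)))            # G = S_4
S = closure({(1, 2, 3, 0), (2, 1, 0, 3)})  # <(1 2 3 4), (1 3)>, dihedral of order 8
x = (1, 0, 3, 2)                           # (1 2)(3 4), an element of S
y = (2, 3, 0, 1)                           # (1 3)(2 4), central in S
P, Q = closure({x}), closure({y})          # P = <x>, Q = <y>

fused_in_S = any(conjugate(g, x) == y for g in S)
fused_in_G = any(conjugate(g, x) == y for g in G)
print(len(S), fused_in_S, fused_in_G)      # 8 False True
```

Since $y$ generates the centre of $S$, no element of $S$ conjugates $P$ onto $Q$, so $\operatorname{Hom}_S(P,Q)=\emptyset$; but $x$ and $y$ are conjugate in $S_4$, so $\operatorname{Hom}_{\mathcal{F}_S(G)}(P,Q)\neq\emptyset$ and $\mathcal{F}_S(G)$ strictly contains $\mathcal{F}_S(S)$.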
\begin{Thm}\label{alpthm} Let $\mathcal{F}$ be a saturated fusion system on a finite $p$-group $S$. $$\{S\} \cup \{P \mid \mbox{$P$ is fully $\mathcal{F}$-normalised and $\mathcal{F}$-essential} \}$$ is a conjugation family for $\mathcal{F}$. \end{Thm} \begin{proof} See \cite[Theorem I.3.5]{AKO}. \end{proof} Proving that a fusion system $\mathcal{F}$ is saturated can often be a difficult task, since there may be many subgroups for which the saturation axioms must be checked. Fortunately, this job is occasionally made easier when an $\mathcal{F}$-conjugation family is known to exist. \begin{Thm}\label{alpthmconv} Let $\mathcal{F}$ be a fusion system on a finite $p$-group $S$. If the $\mathcal{F}$-centric subgroups form a conjugation family then $\mathcal{F}$ is saturated if (a) and (b) in Definition \ref{sat} hold for all such subgroups. \end{Thm} \begin{proof} See \cite[Theorem 3.8]{Puig}. \end{proof} We think of Theorem \ref{alpthmconv} as a partial converse to Theorem \ref{alpthm}, and it is a fundamental tool in our argument to prove Theorem B. \subsection{$\mathcal{F}$-normalisers and Constrained Fusion Systems} This section introduces two concepts which will be used in the proof of Theorem C. We begin with the analogue for fusion systems of the ordinary normaliser of a subgroup of a finite group. \begin{Def} Let $\mathcal{F}$ be a fusion system on a finite $p$-group $S$ and let $Q \leq S$. The \textit{$\mathcal{F}$-normaliser of $Q$}, $N_\mathcal{F}(Q)$, is the fusion system on $N_S(Q)$ where for each $P,R \leq N_S(Q)$, Hom$_{N_\mathcal{F}(Q)}(P,R)$ is the set $$\{\varphi \in \mbox{Hom}_\mathcal{F}(P,R) \mid \mbox{ $\exists$ } \bar{\varphi} \in \mbox{Hom}_\mathcal{F}(PQ,RQ) \mbox{ s.t. } \bar{\varphi}|_P=\varphi \mbox{ and } \bar{\varphi}|_Q \in \operatorname{Aut}(Q)\}.$$ \end{Def} \begin{Thm}\label{nfkq} Let $\mathcal{F}$ be a saturated fusion system on a finite $p$-group $S$ and let $Q \leq S$.
If $Q$ is fully $\mathcal{F}$-normalised then $N_\mathcal{F}(Q)$ is saturated. \end{Thm} \begin{proof} See \cite[Theorem I.5.5]{AKO}. \end{proof} The notion of an $\mathcal{F}$-normaliser naturally gives rise to the notion of a normal subgroup of a fusion system as follows. \begin{Def}\label{normdef} Let $\mathcal{F}$ be a fusion system on a finite $p$-group $S$ and let $Q \leq S$. $Q$ is \textit{normal} in $\mathcal{F}$ if $N_\mathcal{F}(Q)=\mathcal{F}$. Write $O_p(\mathcal{F})$ for the largest normal subgroup of $\mathcal{F}.$ \end{Def} The following lemma provides a characterisation of $O_p(\mathcal{F})$ for a saturated fusion system $\mathcal{F}$ in terms of its fully $\mathcal{F}$-normalised, $\mathcal{F}$-essential subgroups. \begin{Lem}\label{esscont} Let $\mathcal{F}$ be a saturated fusion system on a finite $p$-group $S$ and let $Q \leq S$. If $Q$ is normal in $\mathcal{F}$ then $Q \leq R$ for each $\mathcal{F}$-centric $\mathcal{F}$-radical subgroup $R$ of $S$. Conversely, if $Q \leq R$ for each fully $\mathcal{F}$-normalised, $\mathcal{F}$-essential subgroup $R$ of $S$, then $Q$ is normal in $\mathcal{F}$. \end{Lem} \begin{proof} See \cite[Theorem 4.61]{CR}. \end{proof} The notion of a normal subgroup can also be applied to define the analogue for fusion systems of a $p$-constrained finite group. Recall that a finite group $G$ with $O_{p'}(G)=1$ is \textit{$p$-constrained} if there exists a normal $p$-subgroup $R \unlhd G$ with $C_G(R) \leq R$. \begin{Def} A saturated fusion system $\mathcal{F}$ on a finite $p$-group $S$ is \textit{constrained} if there exists an $\mathcal{F}$-centric subgroup of $S$ which is normal in $\mathcal{F}$. \end{Def} The final result of this section asserts that every constrained fusion system arises as the fusion system of a $p$-constrained group. 
\begin{Thm}\label{const} Let $\mathcal{F}$ be a fusion system on a finite $p$-group $S$ and suppose that there exists an $\mathcal{F}$-centric subgroup $R$ of $S$ which is normal in $\mathcal{F}$. There exists a unique finite group $G$ with $S \in \operatorname{Syl}_p(G)$ such that $$\mathcal{F}=\mathcal{F}_S(G), \mbox{ } R \unlhd G, \mbox{ } \operatorname{Out}_G(R) \cong G/R, \mbox{ and } O_{p'}(G)=1.$$ \end{Thm} \begin{proof} See, for example, \cite[Theorem I.5.10]{AKO}. \end{proof} \section{Trees of Fusion Systems}\label{treesfus} In this section, we will carefully define what we mean by a tree of fusion systems and the completion of such an object. We then find a natural condition which ensures that the completion of a tree of fusion systems exists. We warn the reader that from now on the symbol `$\mathcal{F}$' will frequently be used to denote a functor with values in the category of fusion systems, rather than just a single fusion system. \subsection{Trees of Groups} We begin by introducing some notation. If $\mathcal{T}$ is a simple, undirected graph, write $V(\mathcal{T})$ and $E(\mathcal{T})$ for the sets of vertices and edges of $\mathcal{T}$ respectively. Each edge $e \in E(\mathcal{T})$ is regarded as an unordered pair of vertices, $(v,w)$ say, and $v$ and $w$ are said to be \textit{incident} on $e$. $\mathcal{T}$ gives rise to a category (also called $\mathcal{T}$) where Ob($\mathcal{T}$) is the disjoint union of the sets $V(\mathcal{T})$ and $E(\mathcal{T})$ and where for each edge $(v,w) \in E(\mathcal{T})$ there exists a pair of morphisms $$ \begin{CD} e @>>> v\\ @VVV \\ w \end{CD}$$ in $\mathcal{T}$. We denote the unique morphism in Hom$_\mathcal{T}(e,v)$ by $f_{ev}$ and write $\mathfrak{Grp}$ for the category of groups and group homomorphisms. 
\begin{Def} A \textit{tree of groups} is a pair $(\mathcal{T},\mathcal{G})$ where $\mathcal{T}$ is a tree and $\mathcal{G}$ is a functor from $\mathcal{T}$ (regarded as a category) to $\mathfrak{Grp}$ which sends $f_{ev}$ to a group monomorphism from $\mathcal{G}(e)$ to $\mathcal{G}(v)$. The \textit{completion} $\mathcal{G}_\mathcal{T}$ of $(\mathcal{T},\mathcal{G})$ is the group $ \underrightarrow{\mbox{colim}}_{ \substack{ \mathcal{T}}} \mathcal{G}.$ \end{Def} \begin{Lem} Each tree of groups $(\mathcal{T},\mathcal{G})$ has a completion which is unique up to group isomorphism. \end{Lem} Let $(\mathcal{T},\mathcal{G})$ be a tree of groups with completion $G:=\mathcal{G}_\mathcal{T}$. Define $G/\mathcal{G}(-)$ to be the functor from $\mathcal{T}$ to $\mathfrak{Set}$ which sends each $v$ and $e \in$ Ob$(\mathcal{T})$ respectively to the sets of left cosets $G/\mathcal{G}(v)$ and $G/\mathcal{G}(e)$ and which sends $f_{ev} \in$ Hom$_\mathcal{T}(e,v)$ to the map from $G/\mathcal{G}(e)$ to $G/\mathcal{G}(v)$ given by sending left cosets $g\mathcal{G}(e)$ to $g\mathcal{G}(v)$. Define the \textit{orbit graph} $\tilde{\mathcal{T}}$ to be the space $$ \underrightarrow{\mbox{hocolim}}_{ \substack{ \mathcal{T}}} G/\mathcal{G}(-).$$ Equivalently, we may think of $\tilde{\mathcal{T}}$ as being a graph whose vertices and edges are labelled by the sets $\{g\mathcal{G}(v) \mid g \in G, v \in V(\mathcal{T})\}$ and $\{i\mathcal{G}(e) \mid i \in G, e \in E(\mathcal{T})\}$ respectively and where two vertices $g\mathcal{G}(v)$ and $h\mathcal{G}(w)$ are connected via an edge $i\mathcal{G}(e)$ if and only if $i\mathcal{G}(v)=g\mathcal{G}(v)$ and $i\mathcal{G}(w)=h\mathcal{G}(w)$. This is obviously a graph on which $G$ acts by left multiplication. Denote by $\tilde{\mathcal{T}}/G$ the graph with vertex set given by the set of orbits of $V(\tilde{\mathcal{T}})$ under the action of $G$ on $V(\tilde{\mathcal{T}})$ and likewise for the edges. 
We have the following theorem: \begin{Thm}\label{fundbass} Let $(\mathcal{T},\mathcal{G})$ be a tree of groups with completion $G:=\mathcal{G}_\mathcal{T}$. Then $\tilde{\mathcal{T}}$ is a tree and $\tilde{\mathcal{T}}/G \simeq \mathcal{T}.$ \end{Thm} \begin{proof} See \cite[I.4.5]{Serre} \end{proof} \subsection{Trees of Fusion Systems} Suppose that $\mathcal{F}$ and $\mathcal{E}$ are fusion systems on finite $p$-groups $S$ and $T$ respectively. A morphism $\alpha \in$ Hom$(S,T)$ from $\mathcal{F}$ to $\mathcal{E}$ is \textit{injective} if it induces an injective map $$\begin{CD} \mbox{Hom}_\mathcal{F}(P,S) @>>> \mbox{Hom}_\mathcal{E}(P\alpha,T)\\ \end{CD}$$ for each $P \leq S$. \begin{Def} A \textit{tree of $p$-fusion systems} is a triple $(\mathcal{T},\mathcal{F},\S)$ where $(\mathcal{T},\S)$ is a tree of finite $p$-groups and $\mathcal{F}$ is a functor from $\mathcal{T}$ to $\mathfrak{Fus}$ such that the following hold: \begin{itemize} \item[(a)] $\mathcal{F}(v)$ is a fusion system on $\S(v)$ and $\mathcal{F}(e)$ is a fusion system on $\S(e)$, and \item[(b)] $\mathcal{F}$ sends $f_{ev} \in$ Hom$_\mathcal{T}(e,v)$ to an injective morphism from $\mathcal{F}(e)$ to $\mathcal{F}(v).$ \end{itemize} \end{Def} \begin{Def} Let $(\mathcal{T},\mathcal{F},\S)$ be a tree of $p$-fusion systems. The \textit{completion} of $(\mathcal{T},\mathcal{F},\S)$ is a colimit for $\mathcal{F}$. \end{Def} Of course, we need conditions on $(\mathcal{T},\mathcal{F},\S)$ which imply that a colimit for $\mathcal{F}$ exists, since this is no longer guaranteed as it was in the category of groups. Indeed, any colimit for $\mathcal{F}$ must be a fusion system on the completion $S_\mathcal{T}$ of $(\mathcal{T},\S)$ and this may not be a $p$-group\footnote{Consider, for example the amalgam $C_2 * C_2 \cong D_{\infty}$ when $p=2$.}. 
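The footnote's example can be made concrete. The following sketch is an illustration only: realising the two order-two generators of $C_2 * C_2$ as reflections of the integer line is our choice of faithful model, not part of the text. The composite of the two generators is a translation, hence of infinite order, so the completion of this tree of $2$-groups is infinite and in particular not a $2$-group; note that the edge group here is trivial, so neither vertex group is isomorphic to an incident edge group.

```python
# C_2 * C_2 realised faithfully as isometries of the integers:
def a(x):          # generator of the first C_2: reflection about 0
    return -x

def b(x):          # generator of the second C_2: reflection about 1/2
    return 1 - x

assert a(a(7)) == 7 and b(b(7)) == 7   # both generators have order 2

# The composite "a then b" is the translation x -> x + 1,
# so it has infinite order and the amalgam C_2 * C_2 is infinite.
x = 0
for n in range(1, 100):
    x = b(a(x))
    assert x == n                      # (ab)^n moves 0 to n, never back to 0
```

Since the completion here is infinite, no fusion system on a finite $2$-group can serve as a colimit; this is precisely the kind of failure that the hypothesis introduced next is designed to rule out.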
The following is a very simple (and natural) condition to impose: \newline \newline \textbf{Hypothesis} \textit{$(H)$: There exists a vertex $v_* \in V(\mathcal{T})$ with the property that whenever $v \in V(\mathcal{T}) \setminus \{v_*\}$, $\S(e) \cong \S(v)$ where $e$ is the edge incident to $v$ in the unique minimal path from $v$ to $v_*$.} \newline \newline We will say that a tree of $p$-fusion systems $(\mathcal{T},\mathcal{F},\S)$ satisfies $(H)$ if $(\mathcal{T},\S)$ satisfies Hypothesis $(H)$. Let $(\mathcal{T},\mathcal{F},\S)$ be a tree of $p$-fusion systems which satisfies $(H)$ and write $S:=\S(v_*)$. It is clear that $\S_\mathcal{T}$ is a finite group isomorphic to $\S(v_*)$, so that we may view $\S(e)$ and $\S(v)$ as subgroups of $\S(v_*)$ by identifying them with their images in the completion. Also, $\S(v) \cap \S(w)=\S(e)$ in $\S(v_*)$ whenever $v$ and $w$ are vertices of $\mathcal{T}$ incident on $e$. Furthermore, when $v$ is a vertex incident to an edge $e$, this identification allows us to embed each fusion system $\mathcal{F}(e)$ as a subsystem of $\mathcal{F}(v)$ by identifying each morphism $\alpha \in$ Hom$_{\mathcal{F}(e)}(P,\S(e))$ with its image under the functor from $\mathcal{F}(e)$ to $\mathcal{F}(v)$ induced by $f_{ev} \in$ Hom$_\mathcal{T}(e,v).$ \begin{Lem}\label{compft} Let $(\mathcal{T},\mathcal{F},\S)$ be a tree of $p$-fusion systems which satisfies $(H)$, set $v_0:=v_*$ and write $S:=\S(v_0)=\S_\mathcal{T}$. Define $$\mathcal{F}_\mathcal{T}:=\langle \operatorname{Hom}_{\mathcal{F}(v)}(P,\S(v)) \mid P \leq \S(v),v \in V(\mathcal{T}) \rangle_S,$$ the fusion system generated by the $\mathcal{F}(v)$ for each $v \in V(\mathcal{T})$. 
$\mathcal{F}_\mathcal{T}$ is a colimit for $\mathcal{F}$ and each $\alpha \in \operatorname{Hom}_{\mathcal{F}_\mathcal{T}}(P,S)$ may be written as a composite $$\begin{CD} P=P_0 @>\alpha_0>> P_1 @>\alpha_1>> P_2 @>\alpha_2>> \cdots @>\alpha_{n-1}>> P_n=P\alpha\\ \end{CD}$$ where for $0 \leq i \leq n-1$, $P_i \leq \S(v_{i-1}) \cap \S(v_i)$, $\alpha_i \in \operatorname{Mor}(\mathcal{F}(v_i))$ for some $v_i \in V(\mathcal{T})$ and $(v_i,v_{i+1})$ is an edge in $\mathcal{T}$. \end{Lem} \begin{proof} The fact that $\mathcal{F}_\mathcal{T}$ is a colimit for $\mathcal{F}$ follows immediately from the fact that it is unique (up to an isomorphism of categories) amongst all fusion systems on $S$ which contain $\mathcal{F}(v)$ for each $v \in V(\mathcal{T})$\footnote{More information concerning this characterisation of $\mathcal{F}_\mathcal{T}$ is provided by the remarks which follow Definition \ref{fus}}. To see the second statement, clearly any such composite of morphisms lies in $\mathcal{F}_\mathcal{T}$. Conversely let $$\begin{CD} P=P_0 @>\alpha_0>> P_1 @>\alpha_1>> P_2 @>\alpha_2>> \cdots @>\alpha_{n-1}>> P_n=P\alpha\\ \end{CD}$$ be a representation of $\alpha \in$ Hom$_{\mathcal{F}_\mathcal{T}}(P,S)$ where $\alpha_i \in$ Mor$(\mathcal{F}(v_i))$ for $0 \leq i \leq n-1$. If $(v_{i-1},v_i)$ is not an edge then let $\eta$ be a minimal path in $\mathcal{T}$ from $v_{i-1}$ to $v_i$. Since $(\mathcal{T},\S)$ satisfies $(H)$, $P_i$ is contained in $\S(w)$ for each vertex $w \in \eta$ so that by inserting identity morphisms, the above sequence of morphisms can be refined so that $(v_{i-1},v_i)$ is an edge for each $i$. This completes the proof of the lemma. \end{proof} One observes that by Lemma \ref{compft}, the completion $\mathcal{F}_\mathcal{T}$ of a tree of $p$-fusion systems $(\mathcal{T},\mathcal{F},\S)$ which satisfies $(H)$ is independent of where $\mathcal{F}$ sends the edges $e$ of $\mathcal{T}$. We will make heavy use of this fact later in the proof of Theorem B. 
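To see why composites through different vertices matter, consider a brute-force computation in the spirit of Example \ref{ftex} below. The following sketch is illustrative only and tracks fusion of elements rather than the full categorical data; it takes $S \cong D_8$ inside $\operatorname{Sym}(4)$, gives one vertex the fusion system $\mathcal{F}_S(\operatorname{Sym}(4))$ and the other its twist by an automorphism $\theta$ of $S$ interchanging the two Klein four subgroups (the choice of $\theta$ is ours). The transitive closure of the two vertex relations fuses all five involutions of $S$, a pattern realised by neither vertex alone, since a transposition is never $\operatorname{Sym}(4)$-conjugate to a double transposition.

```python
from itertools import permutations

def mul(p, q):                 # compose permutations of {0,1,2,3}: apply q, then p
    return tuple(p[q[i]] for i in range(4))

def inv(p):
    out = [0] * 4
    for i, j in enumerate(p):
        out[j] = i
    return tuple(out)

S4 = set(permutations(range(4)))
e, r, s = (0, 1, 2, 3), (1, 2, 3, 0), (2, 1, 0, 3)   # identity, (0 1 2 3), (0 2)

# close {r, s} under multiplication to obtain D8 = <r, s>
D8, frontier = {e}, {e}
while frontier:
    frontier = {mul(g, h) for g in (r, s) for h in frontier} - D8
    D8 |= frontier
assert len(D8) == 8

# theta: the outer automorphism of D8 fixing rotations and shifting each
# reflection by r; it interchanges the two Klein four subgroups of D8
rotations = {e, r, mul(r, r), mul(r, mul(r, r))}
theta = {h: (h if h in rotations else mul(h, r)) for h in D8}
assert all(theta[mul(x, y)] == mul(theta[x], theta[y]) for x in D8 for y in D8)

def fused_v1(x, y):            # vertex 1: x and y are conjugate in Sym(4)
    return any(mul(inv(g), mul(x, g)) == y for g in S4)

def fused_v2(x, y):            # vertex 2: the same fusion, twisted by theta
    return fused_v1(theta[x], theta[y])

# transitive closure of the two vertex relations on the involutions of D8
involutions = sorted(h for h in D8 if h != e and mul(h, h) == e)
classes = {h: {h} for h in involutions}
changed = True
while changed:
    changed = False
    for x in involutions:
        for y in involutions:
            if (fused_v1(x, y) or fused_v2(x, y)) and classes[x] != classes[y]:
                merged = classes[x] | classes[y]
                for z in merged:
                    classes[z] = merged
                changed = True

# all 5 involutions fall into a single class, although neither vertex
# fuses the transposition s with the double transposition r^2 on its own
assert len({frozenset(c) for c in classes.values()}) == 1
assert not fused_v1(s, mul(r, r)) and fused_v2(s, mul(r, r))
```

The fusion of $s$ with $r^2$ can only be written as a composite passing through both vertices, which is exactly the factorisation described in Lemma \ref{compft}.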
\subsection{The $P$-orbit Graph} We now turn to the definition of a certain graph constructed from a tree of $p$-fusion systems (now simply referred to as a tree of fusion systems) and an arbitrary finite $p$-group $P$. Our discussion culminates in the proof of an important result, Proposition \ref{phipft}, which will allow us to describe morphisms in the completion combinatorially. We begin by introducing some notation. Let $\mathcal{K} $ be a fusion system on a finite $p$-group $S$ and let $P$ be any other finite $p$-group. We define an equivalence relation $\sim$ on the set of homomorphisms $\operatorname{Hom}(P,S)$ as follows. For $\alpha, \beta \in \operatorname{Hom}(P,S)$ define $\alpha \sim \beta$ if and only if there is some $\gamma \in \operatorname{Iso}_\mathcal{K} (P\alpha,P\beta)$ such that $\alpha \circ \gamma = \beta$. The fact that $\sim$ is an equivalence relation follows from the axioms for a fusion system. Write $$\mbox{Rep}(P,\mathcal{K} ):= \mbox{Hom}(P,S)/\sim$$ for the set of all equivalence classes, and for each $\alpha \in $ Hom$(P,S)$ let $[\alpha]_\mathcal{K} $ denote the class of $\alpha$ in Rep$(P,\mathcal{K} )$. If $\hat{\mathcal{K} }$ is a fusion system containing $\mathcal{K} $, set Rep$_{\hat{\mathcal{K} }}(P,\mathcal{K} ):=$ Hom$_{\hat{\mathcal{K} }}(P,S)/\sim$. Let $(\mathcal{T},\mathcal{F},\S)$ be a tree of fusion systems, and $P$ be a finite $p$-group. Define Rep$(P,\mathcal{F}(-))$ to be the functor from $\mathcal{T}$ to $\mathfrak{Set}$ which sends vertices $v$ and edges $e$ of $\mathcal{T}$ respectively to Rep$(P,\mathcal{F}(v))$ and Rep$(P,\mathcal{F}(e))$ and which sends $f_{ev} \in$ Hom$_\mathcal{T}(e,v)$ to the map from Rep$(P,\mathcal{F}(e))$ to Rep$(P,\mathcal{F}(v))$ given by $$[\alpha]_{\mathcal{F}(e)} \longmapsto [\alpha \circ \iota_{\S(e)}^{\S(v)}]_{\mathcal{F}(v)}.$$ Observe that this mapping is independent of the choice of $\alpha$. Using this definition we introduce the following space. 
\begin{Def} Let $P$ be a finite $p$-group and $(\mathcal{T},\mathcal{F},\S)$ be a tree of fusion systems. The \textit{$P$-orbit graph}, Rep$(P,\mathcal{F})$ is the homotopy colimit $$ \underrightarrow{\mbox{hocolim}}_{ \substack{ \mathcal{T}}} \operatorname{Rep}(P,\mathcal{F}(-)).$$ \end{Def} Since there are no $n$-simplices in $ \underrightarrow{\mbox{hocolim}}_{ \substack{ \mathcal{T}}} \operatorname{Rep}(P,\mathcal{F}(-))$ for $n \geq 2$, Rep$(P,\mathcal{F})$ can be described as the geometric realisation of a graph as follows. \begin{Lem}\label{repfgraph} Let $P$ be an arbitrary finite $p$-group and $(\mathcal{T},\mathcal{F},\S)$ be a tree of fusion systems. Then $R:=$ Rep$(P,\mathcal{F})$ may be regarded as a graph with $$V(R) = \bigcup_{v \in V(\mathcal{T})} \operatorname{Rep}(P,\mathcal{F}(v)) \mbox{ and } E(R) = \bigcup_{e \in E(\mathcal{T})} \operatorname{Rep}(P,\mathcal{F}(e)),$$ where $[\alpha]_{\mathcal{F}(v)}$ and $[\beta]_{\mathcal{F}(w)}$ are connected via an edge $[\gamma]_{\mathcal{F}(e)}$ if and only if $v$ and $w$ are both incident on $e$ in $\mathcal{T}$ and the identities $$[\gamma \circ \iota_{\S(e)}^{\S(v)}]_{\mathcal{F}(v)}=[\alpha]_{\mathcal{F}(v)} \mbox{ and } [\gamma \circ \iota_{\S(e)}^{\S(w)}]_{\mathcal{F}(w)}=[\beta]_{\mathcal{F}(w)}$$ hold. \end{Lem} \begin{proof} This follows immediately from the definition of the homotopy colimit. 
\end{proof} If $(\mathcal{T},\mathcal{F},\S)$ is a tree of fusion systems which satisfies $(H)$, then by Lemma \ref{compft}, $(\mathcal{T},\mathcal{F},\S)$ has a completion which we denote by $\mathcal{F}_\mathcal{T}.$ Let Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F}(-))$ be the subfunctor of Rep$(P,\mathcal{F}(-))$ which sends vertices $v$ and edges $e$ of $\mathcal{T}$ respectively to Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F}(v))$ and Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F}(e))$ and which sends $f_{ev}$ to the map from Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F}(e))$ to Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F}(v))$ given by $$[\alpha]_{\mathcal{F}(e)} \longmapsto [\alpha \circ \iota_{\S(e)}^{\S(v)}]_{\mathcal{F}(v)}.$$ (Note that this map is well-defined since $\mathcal{F}_\mathcal{T}$ is closed under composition with inclusion morphisms). Let $$\mbox{Rep}_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F}):= \underrightarrow{\mbox{hocolim}}_{ \substack{ \mathcal{T}}} \operatorname{Rep}_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F}(-)).$$ We observe that the obvious analogue of Lemma \ref{repfgraph} holds for Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$, and that (for this reason) Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$ may be embedded as a subgraph of Rep$(P,\mathcal{F})$ in the obvious way. We end this section with an important result which provides us with a precise description of this embedding. \begin{Prop}\label{phipft} Let $(\mathcal{T},\mathcal{F},\S)$ be a tree of fusion systems which satisfies $(H)$ and let $\mathcal{F}_\mathcal{T}$ be its completion. Write $S:=\S(v_*)$ and fix $P \leq S$. The following hold: \begin{itemize} \item[(a)] The connected component of the vertex $[\iota_P^S]_{\mathcal{F}(v_*)}$ in $\operatorname{Rep}(P,\mathcal{F})$ is isomorphic to Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$. 
\item[(b)] The natural map $$\begin{CD} \Phi_P: \pi_0(\operatorname{Rep}(P,\mathcal{F})) @>>> \operatorname{Rep}(P,\mathcal{F}_\mathcal{T})\\ \end{CD}$$ which sends the connected component of a vertex $[\alpha]_{\mathcal{F}(v)}$ in $\operatorname{Rep}(P,\mathcal{F})$ to $[\alpha \circ \iota_{\S(v)}^S]_{\mathcal{F}_\mathcal{T}}$ is a bijection. \end{itemize} \end{Prop} \begin{proof} Since $\mathcal{T}$ has finitely many vertices and $\S(v)$ is finite for each $v \in V(\mathcal{T})$, the graph Rep$(P,\mathcal{F})$ contains finitely many vertices and edges. We may identify $\mathcal{F}(v)$ and $\mathcal{F}(e)$ with their images in $\mathcal{F}_\mathcal{T}$ by Lemma \ref{compft}. Set $v_0:=v_*$ and $\alpha_0:= \iota_P^{\S(v_0)}$ and let $$[\alpha_0]_{\mathcal{F}(v_0)}, [\alpha_1]_{\mathcal{F}(v_1)}, \ldots, [\alpha_n]_{\mathcal{F}(v_n)}$$ be a path in Rep$(P,\mathcal{F})$ and assume that $[\beta_i]_{\mathcal{F}(e_i)}$ is an edge from $[\alpha_{i-1}]_{\mathcal{F}(v_{i-1})}$ to $[\alpha_i]_{\mathcal{F}(v_i)}$ for each $1 \leq i \leq n.$ We need to show that each vertex $[\alpha_i]_{\mathcal{F}(v_i)}$ lies in Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$. Clearly $[\alpha_0]_{\mathcal{F}(v_0)} \in$ Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$. Assume that $n \geq 1$, and that $[\alpha_i]_{\mathcal{F}(v_i)} \in$ Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$ for some $i < n$. If $[\beta_{i+1}]_{\mathcal{F}(e_{i+1})}$ is an edge from $[\alpha_i]_{\mathcal{F}(v_i)}$ to $[\alpha_{i+1}]_{\mathcal{F}(v_{i+1})}$ then there exist maps $$\gamma \in \mbox{Hom}_{\mathcal{F}(v_i)}(P\alpha_i,P\beta_{i+1}) \mbox{ and } \delta \in \mbox{Hom}_{\mathcal{F}(v_{i+1})}(P\beta_{i+1},P\alpha_{i+1})$$ such that $\alpha_i \circ \gamma = \beta_{i+1}$ and $\beta_{i+1} \circ \delta=\alpha_{i+1}$. 
Hence $\gamma \circ \delta \in$ Hom$_{\mathcal{F}_\mathcal{T}}(P\alpha_i,P\alpha_{i+1}) $ and $[\alpha_{i+1}]_{\mathcal{F}(v_{i+1})} \in$ Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$ and by induction, $[\alpha_i]_{\mathcal{F}(v_i)} \in$ Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$ for all $0 \leq i \leq n.$ Conversely suppose that $\alpha \in$ Hom$_{\mathcal{F}_\mathcal{T}}(P,\S(v))$ and that $[\alpha]_{\mathcal{F}(v)}$ is a vertex in Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$. By the definition of $\mathcal{F}_\mathcal{T}$, there exists a path $v_0,v_1, \ldots, v_n=v$ in $\mathcal{T}$ and for $1 \leq i \leq n$, maps $\alpha_i \in$ Hom$_{\mathcal{F}(v_i)}(P\alpha_{i-1}, P\alpha_i)$ with $\alpha_0=\iota_P^S$ such that $\alpha=\alpha_0 \circ \cdots \circ \alpha_n.$ This implies that there exists a path in Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$, $$[\alpha_0]_{\mathcal{F}(v_0)}, [\alpha_0 \circ \alpha_1]_{\mathcal{F}(v_1)}, \ldots, [\alpha_0 \circ \alpha_1 \circ \cdots \circ \alpha_{n-1}]_{\mathcal{F}(v_{n-1})},[\alpha]_{\mathcal{F}(v_n)},$$ and completes the proof of (a). Next, we prove (b). It suffices to show that two vertices $[\alpha]_{\mathcal{F}(v)}$ and $[\beta]_{\mathcal{F}(w)}$ are connected in Rep$(P,\mathcal{F})$ if and only if $[\alpha \circ \iota_{\S(v)}^S]_{\mathcal{F}_\mathcal{T}}=[\beta \circ \iota_{\S(w)}^S]_{\mathcal{F}_\mathcal{T}}$ in Rep$(P,\mathcal{F}_\mathcal{T}).$ It is enough to prove this when $[\alpha]_{\mathcal{F}(v)}$ and $[\beta]_{\mathcal{F}(w)}$ are connected via a single edge $[\gamma]_{\mathcal{F}(e)}$. We prove the `only if' direction first. 
Thus we suppose that $[\alpha]_{\mathcal{F}(v)}$ and $[\beta]_{\mathcal{F}(w)}$ are connected via an edge $[\gamma]_{\mathcal{F}(e)}$ in Rep$(P,\mathcal{F})$ so that there exist morphisms $\gamma_1 \in$ Hom$_{\mathcal{F}(v)}(P\alpha,P\gamma)$ and $\gamma_2 \in$ Hom$_{\mathcal{F}(w)}(P\beta,P\gamma)$ with $$P\alpha\gamma_1=P\gamma=P\beta\gamma_2.$$ Then $\gamma_1\gamma_2^{-1} \in$ Hom$_{\mathcal{F}_\mathcal{T}}(P\alpha,P\beta)$ and $[\beta \circ \iota_{\S(w)}^S]=[\alpha\gamma_1\gamma_2^{-1} \circ \iota_{\S(w)}^S]=[\alpha \circ \iota_{\S(v)}^S] \in$ Rep$(P,\mathcal{F}_\mathcal{T})$. Conversely, suppose that $[\alpha \circ \iota_{\S(v)}^S]_{\mathcal{F}_\mathcal{T}}=[\beta \circ \iota_{\S(w)}^S]_{\mathcal{F}_\mathcal{T}}=[\delta]_{\mathcal{F}_\mathcal{T}}$, for some $\delta \in$ Hom$_{\mathcal{F}_\mathcal{T}}(P,S)$. Then $[\delta^{-1}\alpha]_{\mathcal{F}(v)}$, $[\delta^{-1}\beta]_{\mathcal{F}(w)}$ are vertices in Rep$_{\mathcal{F}_\mathcal{T}}(P\delta,\mathcal{F})$ and by part (a), this graph is connected so that there must exist a path joining $[\delta^{-1}\alpha]_{\mathcal{F}(v)}$ to $[\delta^{-1}\beta]_{\mathcal{F}(w)}$ in Rep$_{\mathcal{F}_\mathcal{T}}(P\delta,\mathcal{F})$. This is easily seen to be equivalent to the existence of a path joining $[\alpha]_{\mathcal{F}(v)}$ to $[\beta]_{\mathcal{F}(w)}$ in Rep$(P,\mathcal{F})$, completing the proof of the proposition. \end{proof} \section{Trees of Group Fusion Systems}\label{orbfus} In this section we will give a precise description of the relationship between trees of groups $(\mathcal{T},\mathcal{G})$ and trees of fusion systems $(\mathcal{T},\mathcal{F},\S)$ by considering what happens when the latter is induced by the former. It will turn out that both the completion and $P$-orbit graph of $(\mathcal{T},\mathcal{F},\S)$ have group-theoretic descriptions in this case, which will allow us to give an entirely group theoretic interpretation of Theorem B, in Section \ref{compfus}. 
\subsection{The Completion} We start by giving a precise explanation of the word `induced' above. The following lemma is a trivial consequence of Sylow's Theorem. \begin{Lem}\label{induce} Let $(\mathcal{T},\mathcal{G})$ be a tree of finite groups. For any choice of Sylow $p$-subgroups $\S(v) \in \operatorname{Syl}_p(\mathcal{G}(v))$ and $\S(e) \in \operatorname{Syl}_p(\mathcal{G}(e))$, there exists a tree of finite $p$-groups $(\mathcal{T},\S)$ and a tree of fusion systems $(\mathcal{T},\mathcal{F}_{\S(-)}(\mathcal{G}(-)),\S(-))$ \textit{induced by} $(\mathcal{T},\mathcal{G})$. \end{Lem} \begin{proof} Fix a choice of Sylow $p$-subgroups, $\S(v) \in$ Syl$_p(\mathcal{G}(v))$ and $\S(e) \in$ Syl$_p(\mathcal{G}(e))$ for each vertex $v$ and edge $e$ of $\mathcal{T}$. If $v$ is incident to $e$ then by Sylow's Theorem there exists an element $g_{ev} \in \mathcal{G}(v)$ such that $(\S(e)\mathcal{G}(f_{ev}))^{g_{ev}} \leq \S(v).$ Let $\S$ be the functor from $\mathcal{T}$ to $\mathfrak{Grp}$ which sends $e$ and $v$ respectively to $\S(e)$ and $\S(v)$ and $f_{ev} \in$ Hom$_\mathcal{T}(e,v)$ to $\mathcal{G}(f_{ev}) \circ c_{g_{ev}} \in$ Hom$(\S(e),\S(v))$. Then $(\mathcal{T},\S)$ is a tree of finite $p$-groups. Now let $\mathcal{F}_{\S(-)}(\mathcal{G}(-))$ be the functor from $\mathcal{T}$ to $\mathfrak{Fus}$ which sends $e$ and $v$ respectively to $\mathcal{F}_{\S(e)}(\mathcal{G}(e))$ and $\mathcal{F}_{\S(v)}(\mathcal{G}(v))$ and $f_{ev}$ to the homomorphism $\Phi_{ev}:=\mathcal{G}(f_{ev}) \circ c_{g_{ev}}|_{\S(e)} \in$ Hom$(\S(e),\S(v)).$ Clearly $\Phi_{ev}$ is a morphism of fusion systems and hence determines a tree of fusion systems $(\mathcal{T},\mathcal{F}_{\S(-)}(\mathcal{G}(-)),\S(-))$, as required. 
\end{proof} The proof of Lemma \ref{induce} shows that there may be many trees of fusion systems $(\mathcal{T},\mathcal{F}_{\S(-)}(\mathcal{G}(-)),\S(-))$ induced by a tree of groups $(\mathcal{T},\mathcal{G})$, since a choice for the functor $\S(-)$ is made when Sylow's Theorem is applied. We need to show that this choice does not interfere with the isomorphism type of the completion $\mathcal{F}_\mathcal{T}$. To do this, we first isolate precisely how conjugation takes place in $\mathcal{G}_\mathcal{T}$ by proving the following lemma of Robinson (\cite[Lemma 1]{Rob}). \begin{Lem}\label{roblem} Let $\mathcal{T}$ be a tree consisting of two vertices $v$ and $w$ with a single edge $e$ connecting them. Let $(\mathcal{T},\mathcal{G})$ be a tree of groups with completion $G:=\mathcal{G}_\mathcal{T}$ and write $A:=\mathcal{G}(v)$, $B:=\mathcal{G}(w)$ and $C:=\mathcal{G}(e).$ The following hold: \begin{itemize} \item[(a)] Any product of elements whose successive terms lie alternately in the sets $A \backslash C$ and $B \backslash C$, lies outside of $C$, and outside at least one of $A$ and $B$. \item[(b)] Each element $g \in G \backslash A$ may be written as a product $g=a_0\omega b_{\infty}$ with $a_0 \in A$ and $b_{\infty} \in B$ so that either $\omega=1$ or $$\omega=\prod_{i=1}^s b_ia_i,$$ where $a_i \in A \backslash C$ and $b_i \in B \backslash C$ for $1 \leq i \leq s$. \end{itemize} Consequently, if $X \leq A$, $g \in G \backslash A$ and $X^g \leq A$ or $X^g \leq B$, then writing $g$ as a product $g=a_0b_1a_1 \ldots b_sa_sb_{\infty}$ as in (b), we have $$\langle X_0,X_i,Y_i \mid 1 \leq i \leq s \rangle \leq C$$ where $X_0=X^{a_0},Y_1=X_0^{b_1},X_1=Y_1^{a_1},$ and so on. \end{Lem} \begin{proof} To see (a), let $w=g_0\ldots g_n$ be a product of elements whose successive terms lie alternately in the sets $A \backslash C$ and $B \backslash C$. 
Then $w$ is a reduced word in $\mathcal{G}_\mathcal{T}$, so $w$ cannot lie in $C$, for otherwise it would not be reduced (being representable by an element of $C$). Observe that $g_0$ and $g_n$ dictate where $w$ lies. To see this, note that if $g_0,g_n \in A$ then $w \notin B \backslash C$, since otherwise $w=b \in B \backslash C$ implies that $$1=wb^{-1}=g_0\cdots g_nb^{-1} \notin C,$$ a contradiction. Similarly if $g_0,g_n \in B$ then $w \notin A \backslash C$ and in the remaining cases (where $g_0$ and $g_n$ lie in different sets), $w \notin (A \cup B) \backslash C$. This proves (a). To see (b), note that certainly any element $g \in G \backslash A$ may be written in the stated way, since (by (a)) such a representation allows for all possibilities for the set in which the element $g$ lies. Finally, we prove the last assertion of the lemma. If $b_\infty \in C$ and equals $c$ say, then $X^{gc^{-1}}=(X^g)^{c^{-1}} \leq A$ (resp.\ $B$) if and only if $X^g \leq A$ (resp.\ $B$). This proves that we may assume without loss of generality that either $b_\infty = 1$ (if $X^g \leq B$) or $b_\infty \in B \backslash C$ (if $X^g \leq A$). In either case, suppose that there exists $u \in X^{a_0} \backslash C$. Then $$b_\infty^{-1}a_s^{-1} \cdots b_1^{-1}ub_1\cdots a_sb_{\infty}$$ lies in $A$ or $B$ by assumption, which contradicts (a). We obtain a similar contradiction if $X_0^{b_1}$ is not contained in $C$. Inductively, we arrive at the stated result. \end{proof} We can now apply Lemma \ref{roblem} to prove Theorem A, relating the two ways in which one can construct a fusion system from a tree of groups. \begin{Thm}\label{treesgroupthm} Let $(\mathcal{T},\mathcal{G})$ be a tree of finite groups and write $\mathcal{G}_\mathcal{T}$ for the completion of $(\mathcal{T},\mathcal{G})$. Let $(\mathcal{T},\mathcal{F},\S)$ be a tree of fusion systems induced by $(\mathcal{T},\mathcal{G})$ which satisfies $(H)$ so that there exists a completion $\mathcal{F}_\mathcal{T}$ for $(\mathcal{T},\mathcal{F},\S)$. 
The following hold: \begin{itemize} \item[(a)] $\S(v_*)$ is a Sylow $p$-subgroup of $\mathcal{G}_\mathcal{T}$. \item[(b)] $\mathcal{F}_{\S(v_*)}(\mathcal{G}_\mathcal{T})= \mathcal{F}_\mathcal{T}$. \end{itemize} In particular, $\mathcal{F}_\mathcal{T}$ is independent of the choice of tree of fusion systems $(\mathcal{T},\mathcal{F},\S)$ induced by $(\mathcal{T},\mathcal{G})$. \end{Thm} \begin{proof} We first prove that $S:=\S(v_*)$ is a Sylow $p$-subgroup of $\mathcal{G}_\mathcal{T}$. Let $P$ be a finite $p$-subgroup of $\mathcal{G}_\mathcal{T}$ and consider the image $\mathcal{I}$ of $\tilde{\mathcal{T}}^P$ in $\mathcal{T}$ under the composite $$\begin{CD} \tilde{\mathcal{T}} @>>> \tilde{\mathcal{T}}/{\mathcal{G}_\mathcal{T}} @>\simeq>> \mathcal{T}.\\ \end{CD}$$ Since $\tilde{\mathcal{T}}$ is a tree, $\tilde{\mathcal{T}}^P$ is also a tree, so that $\mathcal{I}$ must be connected. Let $v$ be a vertex in $\mathcal{I}$ which is of minimal distance from $v_*$ and assume that $v \neq v_*$. This implies that there is some $g \in \mathcal{G}_\mathcal{T}$ such that $g\mathcal{G}(v) \in (\mathcal{G}_\mathcal{T}/\mathcal{G}(v))^P$ or equivalently such that $g^{-1}Pg \leq \mathcal{G}(v)$. Let $e$ be the edge $(v,w)$ incident to $v$ in the unique minimal path from $v$ to $v_*$. Since $(\mathcal{T},\mathcal{F},\S)$ satisfies $(H)$, $p \nmid |\mathcal{G}(v):\mathcal{G}(e)|$, so by Sylow's Theorem there is $g' \in \mathcal{G}_\mathcal{T}$ such that $g'^{-1}Pg' \leq \mathcal{G}(e)$. Since $\mathcal{G}(e) \leq \mathcal{G}(w)$, we also have $g'\mathcal{G}(w) \in (\mathcal{G}_\mathcal{T}/\mathcal{G}(w))^P$, so that $w$ is a vertex closer to $v_*$ than $v$ in $\mathcal{I}$, a contradiction. Hence $v=v_*$, $g^{-1}Pg \leq \mathcal{G}(v_*)$ and by Sylow's Theorem there is some $g'' \in \mathcal{G}_\mathcal{T}$ with $g''^{-1}Pg'' \leq S$, as needed. Next we prove that (b) holds. 
Observe that $\mathcal{F}_\mathcal{T} \subseteq \mathcal{F}_S(\mathcal{G}_\mathcal{T})$ since (by definition) each morphism in $\mathcal{F}_\mathcal{T}$ is a composite of restrictions of morphisms in $\mathcal{F}(v)$, each of which clearly lies in $\mathcal{F}_S(\mathcal{G}_\mathcal{T})$. Hence it remains to prove that $\mathcal{F}_S(\mathcal{G}_\mathcal{T}) \subseteq \mathcal{F}_\mathcal{T}$. We proceed by induction on the number of vertices $n:=|V(\mathcal{T})|$ of $\mathcal{T}$, the result being clear in the case where $n=1$. Suppose that $n > 1$ and fix an extremal vertex $v$ of $\mathcal{T}$ not equal to $v_*$. Let $\mathcal{T}'$ be the tree obtained from $\mathcal{T}$ by removing $v$ and the unique edge $e$ to which $v$ is incident. Then $(\mathcal{T}',\mathcal{G})$ is a tree of groups and $(\mathcal{T}',\mathcal{F},\S)$ is a tree of fusion systems induced by $(\mathcal{T}',\mathcal{G})$ which satisfies $(H)$. Furthermore, $S \in$ Syl$_p(\mathcal{G}_{\mathcal{T}'})$ and by induction, the completion $\mathcal{F}_{\mathcal{T}'}$ of $(\mathcal{T}',\mathcal{F},\S)$ is the fusion system $\mathcal{F}_S(\mathcal{G}_{\mathcal{T}'})$. Since $$\mathcal{G}_{\mathcal{T}}=\mathcal{G}_{\mathcal{T}'} *_{\mathcal{G}(e)} \mathcal{G}(v) \mbox{ and } \mathcal{F}_\mathcal{T}=\langle \mathcal{F}_{\mathcal{T}'}, \mathcal{F}_{\S(v)}(\mathcal{G}(v)) \rangle,$$ it will be enough to show that $$\mathcal{F}_S(\mathcal{G}_{\mathcal{T}'} *_{\mathcal{G}(e)} \mathcal{G}(v)) \subseteq \langle \mathcal{F}_{\mathcal{T}'}, \mathcal{F}_{\S(v)}(\mathcal{G}(v)) \rangle. 
$$ Let $X \leq S$ and suppose that $\langle X,X^g \rangle \leq S$ with $g \in \mathcal{G}_\mathcal{T}.$ If $g \in \mathcal{G}_{\mathcal{T}'}$ then we are done, so suppose that $g \notin \mathcal{G}_{\mathcal{T}'}$ and write $$g=g_{-\infty}h_1g_1 \ldots h_rg_rh_{r+1}g_{\infty}$$ where $g_{-\infty},g_{\infty} \in \mathcal{G}_{\mathcal{T}'}$, $g_i \in \mathcal{G}_{\mathcal{T}'} \backslash \mathcal{G}(e)$ and $h_i \in \mathcal{G}(v) \backslash \mathcal{G}(e).$ Set $X^*:=X^{g_{-\infty}}.$ Then $\langle X^*,(X^*)^{h_1g_1 \ldots h_rg_rh_{r+1}} \rangle \leq \mathcal{G}_{\mathcal{T}'}$ so that by Lemma $\ref{roblem}$, $$X^*, (X^*)^{h_1}, (X^*)^{h_1g_1},\ldots,(X^*)^{h_1g_1\ldots h_rg_rh_{r+1}} \leq \mathcal{G}(e).$$ By Sylow's Theorem there exists $k_0 \in \mathcal{G}(e)$ such that $(X^*)^{k_0} \leq \S(e)$, $l_1 \in \mathcal{G}(e)$ such that $(X^*)^{h_1l_1} \leq \S(e)$, $k_1 \in \mathcal{G}(e)$ such that $(X^*)^{h_1g_1k_1} \leq \S(e)$, and so on. Furthermore the sequence of elements $$k_0, k_0^{-1}h_1l_1, l_1^{-1}g_1k_1,k_1^{-1}h_2l_2, \ldots, k_r^{-1}h_{r+1}l_{r+1},l_{r+1}^{-1}g_\infty$$ lie alternately in the groups $\mathcal{G}_{\mathcal{T}'}$ and $\mathcal{G}(v)$ and multiply to give $h_1g_1\ldots h_rg_rh_{r+1}g_{\infty}$. Now, for $1 \leq i \leq r,$ conjugation from $(X^*)^{h_1\ldots h_il_i}$ to $(X^*)^{h_1\ldots h_ig_ik_i}$ is carried out in the fusion system of $\mathcal{G}_{\mathcal{T}'}$ and conjugation from $(X^*)^{h_1\ldots h_ig_ik_i}$ to $(X^*)^{h_1\ldots g_ih_{i+1}l_{i+1}}$ occurs in the fusion system of $\mathcal{G}(v).$ Lastly, conjugation from $(X^*)^{h_1\ldots h_rg_rh_{r+1}l_{r+1}}$ to $(X^*)^{h_1\ldots h_rg_rh_{r+1}g_{\infty}}$ is carried out in the fusion system of $\mathcal{G}_{\mathcal{T}'}$. Since $\S(e)=\S(v)$, the result follows. \end{proof} We end this section with a short example which illustrates how Theorem \ref{treesgroupthm} can be applied to produce fusion systems from group amalgams. 
\begin{Ex}\label{ftex} Let $H \cong$ Sym$(4)$, $S \cong D_8$ be a Sylow 2-subgroup of $H$ and let $V_1$ and $V_2$ be the two non-cyclic non-conjugate subgroups of order 4 in $S$. Also, let $\mathcal{T}$ be a tree consisting of two vertices $v_1$ and $v_2$ with a single edge $e$ connecting them. Form a tree of groups $(\mathcal{T},\mathcal{G})$ where $\mathcal{G}(v_1) =\mathcal{G}(v_2) = H$ and $\mathcal{G}(e)= S$ and where $V_i f_{ev_i}$ is normal in $\mathcal{G}(v_i)$ but $V_i f_{ev_{3-i}}$ is not normal in $\mathcal{G}(v_{3-i})$ for $i=1,2.$ Let $(\mathcal{T},\mathcal{F},\S)$ be a tree of fusion systems induced by $(\mathcal{T},\mathcal{G})$ and observe that $(\mathcal{T},\mathcal{F},\S)$ satisfies $(H)$ so that it has a completion $\mathcal{F}_\mathcal{T}$ by Lemma \ref{compft}. It is well known that the group $PSL_3(2)$ is a faithful, finite completion of $(\mathcal{T},\mathcal{G})$ (see, for example, \cite[Theorem A]{Gol}) so that by Theorem \ref{treesgroupthm} we must have that $\mathcal{F}_\mathcal{T}$ is the fusion system of $PSL_3(2)$. \end{Ex} \subsection{The $P$-orbit Graph} We next consider what can be said about the $P$-orbit graph in the special case where a tree of fusion systems $(\mathcal{T},\mathcal{F},\S)$ which satisfies $(H)$ is induced by a tree of groups $(\mathcal{T},\mathcal{G}).$ It is shown that, like the completion $\mathcal{F}_\mathcal{T}$, the isomorphism type of Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$ is independent of the choice of $(\mathcal{T},\mathcal{F},\S)$, since it may be described in terms of the orbit graph $\tilde{\mathcal{T}}$ of $(\mathcal{T},\mathcal{G})$. We start by introducing some more notation. If $A$ and $B$ are groups, let Rep$(A,B)$ denote the set of orbits of the action of Inn$(B)$ on Hom$(A,B)$ by right composition. For each $\alpha \in$ Hom$(A,B)$, let $[\alpha]_B$ denote its class modulo Inn$(B)$.
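The configuration of subgroups in Example \ref{ftex} is small enough to be checked by direct computation. The following Python sketch (purely illustrative and not part of the formal development; permutations of $\{0,1,2,3\}$ are encoded as tuples) verifies that a Sylow $2$-subgroup of Sym$(4)$ is dihedral of order $8$ and contains exactly two non-cyclic subgroups of order $4$, exactly one of which is normal in Sym$(4)$ — so in particular the two are not conjugate in Sym$(4)$:

```python
from itertools import permutations

def compose(p, q):
    """(p * q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    q = [0] * 4
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

def closure(gens):
    """The subgroup generated by gens, by repeated multiplication."""
    elems = {(0, 1, 2, 3)} | set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return frozenset(elems)
        elems |= new

def is_normal(K, G):
    return all(compose(compose(g, k), inverse(g)) in K for g in G for k in K)

S4 = set(permutations(range(4)))
e = (0, 1, 2, 3)

# A Sylow 2-subgroup of Sym(4): dihedral of order 8, generated by the
# 4-cycle (0 1 2 3) and the reflection (0 1)(2 3).
D8 = closure([(1, 2, 3, 0), (1, 0, 3, 2)])
assert len(D8) == 8 and len(S4) // len(D8) == 3  # index 3 is odd, so D8 is Sylow

# Non-cyclic subgroups of order 4: generated by pairs of distinct
# commuting involutions (any such pair generates a Klein four-group).
involutions = [g for g in D8 if g != e and compose(g, g) == e]
klein = {closure([a, b]) for a in involutions for b in involutions
         if a != b and compose(a, b) == compose(b, a)}

print(len(klein))                               # 2 such subgroups
print(sorted(is_normal(K, S4) for K in klein))  # [False, True]: exactly one is normal
```

The normal one is the Klein four-group of double transpositions; the other contains transpositions, which already rules out conjugacy in Sym$(4)$.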
If $A$ and $B$ are subgroups of some group $C$, write Rep$_C(A,B)$ for the set of orbits of the action of Inn$(B)$ on Hom$_C(A,B) \cong N_C(A,B)/C_C(A)$ (where $N_C(A,B)=\{g \in C \mid A^g \leq B\}$ and $C_C(A)$ acts on $N_C(A,B)$ via left multiplication). Now let $(\mathcal{T},\mathcal{G})$ be a tree of groups and fix a vertex $v_*$ of $\mathcal{T}$. For each $P \leq \mathcal{G}(v_*)$ let Rep$(P,\mathcal{G}(-))$ be the functor from $\mathcal{T}$ to $\mathfrak{Set}$ which sends vertices $v$ and edges $e$ respectively to Rep$(P,\mathcal{G}(v))$ and Rep$(P,\mathcal{G}(e))$ and which sends $f_{ev}$ to the map from Rep$(P,\mathcal{G}(e))$ to Rep$(P,\mathcal{G}(v))$ given by $$[\alpha]_{\mathcal{G}(e)} \longmapsto [\alpha \circ \iota_{\mathcal{G}(e)}^{\mathcal{G}(v)}]_{\mathcal{G}(v)}.$$ Similarly if $\mathcal{G}_\mathcal{T}$ is the completion of $(\mathcal{T},\mathcal{G})$ let Rep$_{\mathcal{G}_\mathcal{T}}(P,\mathcal{G}(-))$ be the functor from $\mathcal{T}$ to $\mathfrak{Set}$ which sends vertices $v$ and edges $e$ respectively to Rep$_{\mathcal{G}_\mathcal{T}}(P,\mathcal{G}(v))$ and Rep$_{\mathcal{G}_\mathcal{T}}(P,\mathcal{G}(e))$ and which sends $f_{ev}$ to the map given by $$[\alpha]_{\mathcal{G}(e)} \longmapsto [\alpha \circ \iota_{\mathcal{G}(e)}^{\mathcal{G}(v)}]_{\mathcal{G}(v)}.$$ (Observe that this is well-defined.) Now define: $$\mbox{Rep}(P,\mathcal{G}):= \underrightarrow{\mbox{hocolim}}_{ \substack{ \mathcal{T}}} \operatorname{Rep}(P,\mathcal{G}(-)) \mbox{, } \mbox{Rep}_{\mathcal{G}_\mathcal{T}}(P,\mathcal{G}):= \underrightarrow{\mbox{hocolim}}_{ \substack{ \mathcal{T}}} \operatorname{Rep}_{\mathcal{G}_\mathcal{T}}(P,\mathcal{G}(-))$$ both regarded as graphs in the usual way. The next proposition allows us to compare Rep$_{\mathcal{G}_\mathcal{T}}(P,\mathcal{G})$ with $\tilde{\mathcal{T}}$. \begin{Prop}\label{phip} Let $(\mathcal{T},\mathcal{G})$ be a tree of finite groups.
Fix a vertex $v_*$ of $\mathcal{T}$ and a subgroup $P \leq \mathcal{G}(v_*)$ and write $\mathcal{G}_\mathcal{T}$ for the completion of $(\mathcal{T},\mathcal{G})$. There exists a graph isomorphism $$\begin{CD} \tilde{\mathcal{T}}^P/C_{\mathcal{G}_\mathcal{T}}(P) @>\sim>> \operatorname{Rep}_{\mathcal{G}_\mathcal{T}}(P,\mathcal{G}),\\ \end{CD}$$ where $\tilde{\mathcal{T}}^P$ is the subgraph of $\tilde{\mathcal{T}}$ fixed under the action of $P$. \end{Prop} \begin{proof} Define a map $$\begin{CD} \Phi_P:\tilde{\mathcal{T}}^P @>>> \mbox{Rep}_{\mathcal{G}_\mathcal{T}}(P,\mathcal{G})\\ \end{CD}$$ by sending a vertex $g\mathcal{G}(v)$ to $[c_g]_{\mathcal{G}(v)}$ where $c_g \in$ Hom$_{\mathcal{G}_\mathcal{T}}(P,\mathcal{G}(v))$ and similarly by sending an edge $g\mathcal{G}(e)$ to $[c_g]_{\mathcal{G}(e)}$ where $c_g \in$ Hom$_{\mathcal{G}_\mathcal{T}}(P,\mathcal{G}(e)).$ Note that this makes sense: if $g\mathcal{G}(v)$ is a vertex in $\tilde{\mathcal{T}}^P$ then for each $x \in P$, $xg\mathcal{G}(v)=g\mathcal{G}(v)$, which is equivalent to $P^g \leq \mathcal{G}(v)$. The same argument works for edges. To see that this map defines a homomorphism of graphs, note that if $g\mathcal{G}(v)$ and $h\mathcal{G}(w)$ are connected via an edge $i\mathcal{G}(e)$ in $\tilde{\mathcal{T}}^P$ then $g\mathcal{G}(v)=i\mathcal{G}(v)$ and $h\mathcal{G}(w)=i\mathcal{G}(w)$ so that $[c_g]_{\mathcal{G}(v)}=[c_i \circ \iota_{\mathcal{G}(e)}^{\mathcal{G}(v)}]_{\mathcal{G}(v)}$ and $[c_h]_{\mathcal{G}(w)}=[c_i \circ \iota_{\mathcal{G}(e)}^{\mathcal{G}(w)}]_{\mathcal{G}(w)}.$ It is clear from the definition that $\Phi_P$ is surjective. If $[c_g]_{\mathcal{G}(v)}=[c_h]_{\mathcal{G}(w)}$ then $v=w$ and there is some $r \in \mathcal{G}(v)$ such that $c_g=c_h \circ c_r \in$ Hom$_{\mathcal{G}_\mathcal{T}}(P, \mathcal{G}(v))$.
This is equivalent to the orbits of $g$ and $hr$ under the action of $C_{\mathcal{G}_\mathcal{T}}(P)$ on $N_{\mathcal{G}_\mathcal{T}}(P,\mathcal{G}(v))$ coinciding, so that there is some $x \in C_{\mathcal{G}_\mathcal{T}}(P)$ with $xg=hr$. Hence $h^{-1}xg \in \mathcal{G}(v)$, $h\mathcal{G}(v)=xg\mathcal{G}(v)$ and the vertices $h\mathcal{G}(v)$ and $g\mathcal{G}(v)$ lie in the same $C_{\mathcal{G}_\mathcal{T}}(P)$-orbit, as needed. \end{proof} We can now compare the graphs Rep$(P,\mathcal{G})$ and Rep$(P,\mathcal{F})$ when $\mathcal{F}$ is a choice of functor $\mathcal{F}_{\S(-)}(\mathcal{G}(-))$ associated to a tree of finite groups $(\mathcal{T},\mathcal{G})$ (see Lemma \ref{induce}). We need the following simple consequence of Sylow's Theorem. \begin{Lem}\label{ftgt} Let $G$ be a finite group and let $S$ be a Sylow $p$-subgroup of $G$. Then for each $p$-group $P$, there exists a bijection $$\begin{CD} \operatorname{Rep}(P,\mathcal{F}_S(G)) @>\sim>> \operatorname{Rep}(P,G)\\ \end{CD}$$ given by the map $\Psi$ which sends $[\alpha]_{\mathcal{F}_S(G)}$ to $[\alpha \circ \iota_S^G]_G$ for each $\alpha \in \operatorname{Hom}(P,S).$ \end{Lem} \begin{proof} Let $\alpha,\beta \in$ Hom$(P,S)$ and suppose that $[\alpha \circ \iota_S^G]_G=[\beta \circ \iota_S^G]_G$. Then there exists a map $c_g \in$ Inn$(G)$ such that $\alpha \circ \iota_S^G \circ c_g=\beta \circ \iota_S^G$. This implies that $c_g|_{P\alpha} \in$ Hom$_{\mathcal{F}_S(G)}(P\alpha,P\beta)$ and $\alpha \circ c_g|_{P\alpha} = \beta$ so that $[\alpha]_{\mathcal{F}_S(G)}=[\beta]_{\mathcal{F}_S(G)}$ and $\Psi$ is injective. To see that $\Psi$ is surjective, let $[\gamma]_G \in$ Rep$(P,G)$. By Sylow's Theorem there exists $g \in G$ such that $(P\gamma)^g \leq S$. Set $\varphi:=\gamma \circ c_g \in$ Hom$(P,S)$ and observe that $[\varphi \circ \iota_S^G]_G=[\gamma]_G,$ as required.
\end{proof} \begin{Cor}\label{corftgt} Let $(\mathcal{T},\mathcal{G})$ be a tree of finite groups with completion $\mathcal{G}_\mathcal{T}$ and let $(\mathcal{T},\mathcal{F},\S)$ be a tree of fusion systems induced by $(\mathcal{T},\mathcal{G})$. Fix a vertex $v_*$ of $\mathcal{T}$ and a subgroup $P \leq \S(v_*)$. Then there exists a natural isomorphism of functors $$\begin{CD} \operatorname{Rep}(P,\mathcal{F}(-)) @>\sim>> \operatorname{Rep}(P,\mathcal{G}(-))\\ \end{CD},$$ which induces a homotopy equivalence $\begin{CD} \operatorname{Rep}(P,\mathcal{F}) @>\simeq>> \operatorname{Rep}(P,\mathcal{G})\\ \end{CD}.$ Furthermore, if $(\mathcal{T},\mathcal{F},\S)$ satisfies $(H)$ and has completion $\mathcal{F}_\mathcal{T}$ then $$\begin{CD} \operatorname{Rep}_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F}) @>\simeq>> \operatorname{Rep}_{\mathcal{G}_\mathcal{T}}(P,\mathcal{G})\\ \end{CD},$$ so that $\operatorname{Rep}_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$ is independent of the choice of tree of fusion systems $(\mathcal{T},\mathcal{F},\S)$. \end{Cor} \begin{proof} The existence of a natural isomorphism of functors $$\begin{CD} \eta:\mbox{Rep}(P,\mathcal{F}(-)) @>\sim>> \mbox{Rep}(P,\mathcal{G}(-))\\ \end{CD}$$ is an immediate consequence of Lemma \ref{ftgt}. By \cite[Proposition IV.1.9]{GJ} this natural isomorphism induces a homotopy equivalence $$\begin{CD} \mbox{hocolim}(\eta): \mbox{Rep}(P,\mathcal{F}) @>\simeq>> \mbox{Rep}(P,\mathcal{G})\\ \end{CD}.$$ If $(\mathcal{T},\mathcal{F},\S)$ satisfies $(H)$ then the above equivalence sends the vertex $[\iota_P^{\S(v_*)}]_{\mathcal{F}(v_*)}$ to $[\iota_P^{\mathcal{G}(v_*)}]_{\mathcal{G}(v_*)}.$ Clearly $[\iota_P^{\mathcal{G}(v_*)}]_{\mathcal{G}(v_*)}$ lies in Rep$_{\mathcal{G}_\mathcal{T}}(P,\mathcal{G})$ (it is the image of the coset $\mathcal{G}(v_*)$ under $\Phi_P$). 
Since Rep$_{\mathcal{G}_\mathcal{T}}(P,\mathcal{G})$ is connected (being the image under $\Phi_P$ of $\tilde{\mathcal{T}}^P$ which is connected), hocolim($\eta$) must restrict to a homotopy equivalence $$\begin{CD} \mbox{Rep}_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F}) @>\simeq>> \mbox{Rep}_{\mathcal{G}_\mathcal{T}}(P,\mathcal{G})\\ \end{CD}$$ by Proposition \ref{phipft}. This completes the proof. \end{proof} \section{The Completion of a Tree of Fusion Systems}\label{compfus} In this section, we prove our second main result, Theorem B, which gives conditions for a tree of fusion systems $(\mathcal{T},\mathcal{F},\S)$ to have a saturated completion $\mathcal{F}_\mathcal{T}$: \begin{Thm}\label{completionsat} Let $(\mathcal{T},\mathcal{F},\S)$ be a tree of fusion systems which satisfies $(H)$ and assume that $\mathcal{F}(v)$ is saturated for each vertex $v$ of $\mathcal{T}.$ Write $S:=\S(v_*)$ and $\mathcal{F}_\mathcal{T}$ for the completion of $(\mathcal{T},\mathcal{F},\S)$. Assume that the following hold for each $P \leq S$. \begin{itemize} \item[(a)] If $P$ is $\mathcal{F}_\mathcal{T}$-conjugate to an $\mathcal{F}(v)$-essential subgroup or $P=\S(v)$ then $P$ is $\mathcal{F}_\mathcal{T}$-centric. \item[(b)] If $P$ is $\mathcal{F}_\mathcal{T}$-centric then $\operatorname{Rep}_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$ is a tree. \end{itemize} Then $\mathcal{F}_\mathcal{T}$ is a saturated fusion system on $S$. \end{Thm} Conditions (a) and (b) are respectively motivated by Theorems \ref{alpthm} and \ref{alpthmconv}. The former condition implies that $\mathcal{F}_\mathcal{T}$ is generated by its $\mathcal{F}_\mathcal{T}$-centric subgroups, while the latter ensures that the saturation axioms (Definition \ref{sat} (a) and (b)) hold for all such subgroups. Thus the saturation of $\mathcal{F}_\mathcal{T}$ will ultimately follow from Theorem \ref{alpthmconv}. 
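A fact used repeatedly in the proof of Theorem \ref{completionsat} below is that a finite group acting on a tree fixes a vertex or stabilises an edge, and that the fixed vertices of a single automorphism span a subtree. As a purely illustrative sanity check (the six-vertex tree here is a hypothetical example and plays no role in the formal development), the following Python sketch verifies both facts by brute force for all automorphisms of a small tree:

```python
from itertools import permutations

# A hypothetical tree on 6 vertices: the path 0-1-2-3 with extra leaves
# 4 and 5 attached to the inner vertices 1 and 2 respectively.
n = 6
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (1, 4), (2, 5)]}

def automorphisms():
    """All permutations of the vertices that preserve the edge set."""
    for p in permutations(range(n)):
        if {frozenset((p[u], p[v])) for u, v in edges} == edges:
            yield p

def connected(vs):
    """Is the subgraph induced on the vertex set vs connected (or empty)?"""
    vs = set(vs)
    if not vs:
        return True
    seen, stack = set(), [next(iter(vs))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(w for w in vs if frozenset((v, w)) in edges)
    return seen == vs

count = 0
for p in automorphisms():
    count += 1
    fixed = [v for v in range(n) if p[v] == v]
    stabilised = [e for e in edges if {p[u] for u in e} == set(e)]
    assert fixed or stabilised  # a vertex is fixed or an edge is preserved setwise
    assert connected(fixed)     # the fixed vertices span a subtree
print(count)  # the automorphism group of this tree has order 8
```

The same brute-force pattern applies to any small tree; in the proofs below the corresponding statements are of course invoked abstractly, for the trees of the form Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$.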
Theorem \ref{completionsat} was inspired by a theorem of Broto, Levi and Oliver (\cite[Theorem 4.2]{BLO4}) and we will deduce their result from ours in Corollary \ref{completionsatgroups}. \subsection{Finite Group Actions on Trees} Before embarking on the proof of Theorem \ref{completionsat}, it will be important to understand the way in which the automiser Aut$_{\mathcal{F}_\mathcal{T}}(P)$ can act on the $P$-orbit graph Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$. Note that we may draw an analogy here with the situation for trees of groups $(\mathcal{T},\mathcal{G})$ where there exists an action of the completion $\mathcal{G}_\mathcal{T}$ on the orbit tree $\tilde{\mathcal{T}}$. We begin with two important concepts which play a pivotal role in the proof of Theorem \ref{completionsat}. \begin{Def} Let $(\mathcal{T},\mathcal{F},\S)$ be a tree of fusion systems which satisfies $(H)$ and write $\mathcal{F}_\mathcal{T}$ for the completion of $(\mathcal{T},\mathcal{F},\S)$. \begin{itemize} \item[(a)] For each pair of $p$-groups $P \leq Q$ define the \textit{restriction map} from $Q$ to $P$: $$\begin{CD} \mbox{res}^Q_P:\mbox{Rep}_{\mathcal{F}_\mathcal{T}}(Q,\mathcal{F}) @>>> \mbox{Rep}_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})\\ \end{CD}$$ to be the map which sends a vertex $[\varphi]_{\mathcal{F}(v)}$ in Rep$_{\mathcal{F}_\mathcal{T}}(Q,\mathcal{F})$ to $[\varphi|_P]_{\mathcal{F}(v)}$ in Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$. \item[(b)] For each $P \leq \S(v_*)$ the \textit{action} of $\mathcal{F}_\mathcal{T}$ on Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$ is given by the group action of Aut$_{\mathcal{F}_\mathcal{T}}(P)$ on Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$ which sends a vertex $[\varphi]_{\mathcal{F}(v)}$ to $[\psi \circ \varphi]_{\mathcal{F}(v)}$ for each $\psi \in$ Aut$_{\mathcal{F}_\mathcal{T}}(P)$. \end{itemize} \end{Def} We quickly check that this definition makes sense.
Firstly, one observes that res$^Q_P$ defines a homomorphism of graphs. To see this, note that if $[\alpha]_{\mathcal{F}(v)}$ is incident to some edge $[\beta]_{\mathcal{F}(e)}$ in Rep$_{\mathcal{F}_\mathcal{T}}(Q,\mathcal{F})$, then there exists $\psi \in$ Hom$_{\mathcal{F}(v)}(Q\alpha,\S(v))$ such that $\beta \circ \iota_{\S(e)}^{\S(v)}=\alpha \circ \psi$. Hence $\beta|_P \circ \iota_{\S(e)}^{\S(v)}=\alpha|_P \circ \psi|_{P\alpha}$ and $[\alpha|_P]_{\mathcal{F}(v)}$ is incident to $[\beta|_P]_{\mathcal{F}(e)}$ in Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$. Secondly, note that the described action of Aut$_{\mathcal{F}_\mathcal{T}}(P)$ on Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$ makes sense: if $[\varphi]_{\mathcal{F}(v)}$ is a vertex of Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$ and $\psi \in$ Aut$_{\mathcal{F}_\mathcal{T}}(P)$, then $\psi \circ \varphi \in$ Hom$_{\mathcal{F}_\mathcal{T}}(P,\S(v))$ so that $[\psi \circ \varphi]_{\mathcal{F}(v)}$ is also a vertex of Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$. The following proposition gives conditions on $(\mathcal{T},\mathcal{F},\S)$ which will allow us to calculate the image of res$^Q_P$ in terms of certain fixed points of the action of $\mathcal{F}_\mathcal{T}$ on Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$. \begin{Prop}\label{resprop} Let $(\mathcal{T},\mathcal{F},\S)$ be a tree of fusion systems which satisfies $(H)$ and assume that the following hold: \begin{itemize} \item[(a)] $\mathcal{F}(v)$ is saturated for each vertex $v$ of $\mathcal{T};$ and \item[(b)] $\mathcal{F}(e)$ is the trivial fusion system $\mathcal{F}_{\S(e)}(\S(e))$ for each edge $e$ of $\mathcal{T}$. \end{itemize} Let $P$ and $Q$ be $p$-groups with $P \unlhd Q \leq \S(v_*)$ and assume that $P$ is $\mathcal{F}_\mathcal{T}$-centric, where $\mathcal{F}_\mathcal{T}$ is the completion of $(\mathcal{T},\mathcal{F},\S)$.
If $\operatorname{Rep}_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$ is a tree, then the image of the restriction homomorphism $$\begin{CD} \operatorname{res}^Q_P:\operatorname{Rep}_{\mathcal{F}_\mathcal{T}}(Q,\mathcal{F}) @>>> \operatorname{Rep}_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})\\ \end{CD}$$ is equal to $\operatorname{Rep}_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})^{\operatorname{Aut}_Q(P)}$. \end{Prop} \begin{proof} Set $K:=$ Aut$_Q(P)$. If $[\alpha]$ is a vertex or edge in Rep$_{\mathcal{F}_\mathcal{T}}(Q, \mathcal{F})$ then $[\alpha|_P] \in$ Rep$_{\mathcal{F}_\mathcal{T}}(P, \mathcal{F})^K$ since $$c_g \circ \alpha|_P = \alpha|_P \circ (\alpha^{-1}|_{P\alpha} \circ c_g \circ \alpha|_P)$$ for each $g \in Q$. It remains to prove that each vertex or edge of Rep$_{\mathcal{F}_\mathcal{T}}(P, \mathcal{F})^K$ is the image of \textit{some} vertex or edge of Rep$_{\mathcal{F}_\mathcal{T}}(Q, \mathcal{F})$. Since Rep$_{\mathcal{F}_\mathcal{T}}(P, \mathcal{F})$ is a tree and $K$ is finite, Rep$_{\mathcal{F}_\mathcal{T}}(P, \mathcal{F})^K$ is also a tree. In particular Rep$_{\mathcal{F}_\mathcal{T}}(P, \mathcal{F})^K$ is connected. Hence to complete the proof, it will be enough to show that for each edge $[\beta]_{\mathcal{F}(e)}$ in Rep$_{\mathcal{F}_\mathcal{T}}(P, \mathcal{F})^K$ which is connected to the image $[\alpha|_P]_{\mathcal{F}(v)}$ of some vertex $[\alpha]_{\mathcal{F}(v)}$ under res$_P^Q$, there exists an edge $[\bar{\beta}]_{\mathcal{F}(e)}$ in Rep$_{\mathcal{F}_\mathcal{T}}(Q,\mathcal{F})$ connected to $[\alpha]_{\mathcal{F}(v)}$ whose image under res$_P^Q$ is $[\beta]_{\mathcal{F}(e)}$. Since $[\iota_Q^S]_{\mathcal{F}(v_*)}$ gets sent to $[\iota_P^S]_{\mathcal{F}(v_*)}$ under res$_P^Q$, we may assume (inductively) that we have chosen $e$ to be the edge to which $v$ is incident in the unique minimal path from $v$ to $v_*$. In particular we may assume that $\S(e)=\S(v)$. 
By assumption, there exists $\psi \in$ Hom$_{\mathcal{F}(v)}(P\beta,P\alpha)$ such that $\beta \circ \psi=\alpha|_P$. Set $R:=N_{\S(e)}^{K^\beta}(P\beta)$. Since $[\beta]_{\mathcal{F}(e)}$ is fixed by $K$, $K^\beta \leq$ Aut$_{\S(e)}(P\beta)$ so Aut$_R(P\beta)=K^{\beta}$. Now, Aut$_R(P\beta)^\psi=K^{\beta\psi}=K^\alpha=$ Aut$_{Q\alpha}(P\alpha)$ and hence $$Q\alpha \leq N_{\psi^{-1}}=\{g \in N_{\S(v)}(P\alpha) \mid (c_g)^{\psi^{-1}} \in \mbox{Aut}_{\S(v)}(P\beta)\}.$$ Since $\mathcal{F}(v)$ is saturated and $P\alpha$ is $\mathcal{F}(v)$-centric (and therefore fully $\mathcal{F}(v)$-centralised), $\psi^{-1}$ extends to a map $\rho \in $ Hom$_{\mathcal{F}(v)}(Q\alpha, \S(v))$. Set $\bar{\beta}:=\alpha \circ \rho \in$ Hom$_{\mathcal{F}(v)}(Q,\S(v))$ so that res$_P^Q([\bar{\beta}]_{\mathcal{F}(e)})= [\bar{\beta}|_{P}]_{\mathcal{F}(e)}=[\beta]_{\mathcal{F}(e)}$ and $[\alpha]_{\mathcal{F}(v)}$ is incident to $[\bar{\beta}]_{\mathcal{F}(e)}$ in Rep$_{\mathcal{F}_\mathcal{T}}(Q,\mathcal{F})$. This completes the proof. \end{proof} \subsection{Proof of Theorem B} We now have all of the tools necessary to prove Theorem B. We remind the reader that this theorem was originally conjectured based on the corresponding statement for trees of groups, \cite[Theorem 4.2]{BLO4}, and the proof in that case may be seen to rely on some fairly deep homotopy theory. No deep results are required in our proof and for this reason we feel the statement is more naturally understood in the context of trees of fusion systems. Nevertheless, we will present \cite[Theorem 4.2]{BLO4} as Corollary \ref{completionsatgroups} below. A key step in our proof is the observation that the completion $\mathcal{F}_\mathcal{T}$ of a tree of fusion systems $(\mathcal{T},\mathcal{F},\S)$ is independent of where $\mathcal{F}$ sends edges $e$ of $\mathcal{T}$, provided $\mathcal{F}_{\S(e)}(\S(e)) \subseteq \mathcal{F}(e) \subseteq \mathcal{F}(v)$ (inside $\mathcal{F}_\mathcal{T}$).
This means that we are able to assume that $\mathcal{F}(e)$ is the trivial fusion system $\mathcal{F}_{\S(e)}(\S(e))$. \begin{proof}[Proof of Theorem B] Let $(\mathcal{T},\mathcal{F}',\S)$ be the tree of fusion systems obtained from $(\mathcal{T},\mathcal{F},\S)$ by setting $\mathcal{F}'(v):=\mathcal{F}(v)$ for each vertex $v$ of $\mathcal{T}$ and $\mathcal{F}'(e):=\mathcal{F}_{\S(e)}(\S(e))$ for each edge $e$ of $\mathcal{T}.$ Then $(\mathcal{T},\mathcal{F}',\S)$ satisfies $(H)$ and has completion $\mathcal{F}_\mathcal{T}$. Also, since any cycle in Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F}')$ is a cycle in Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$, hypothesis (b) in the theorem remains true for $(\mathcal{T},\mathcal{F}',\S).$ Hence we may assume from now on that $(\mathcal{T},\mathcal{F},\S)=(\mathcal{T},\mathcal{F}',\S)$. Since $\mathcal{F}(v)$ is a saturated fusion system on $\S(v)$ for each vertex $v$ in $\mathcal{T}$, by Theorem \ref{alpthm} $$\mathcal{F}(v)=\langle \{\mbox{Aut} _{\mathcal{F}(v)}(P) \mid P \mbox{ is $\mathcal{F}(v)$-essential }\}\cup \{\mbox{Aut}_{\mathcal{F}(v)}(\S(v)) \} \rangle_{\S(v)}.$$ By hypothesis, each $\mathcal{F}(v)$-essential subgroup and each $\S(v)$ is an $\mathcal{F}_\mathcal{T}$-centric subgroup so by the definition of $\mathcal{F}_\mathcal{T}$ we have $$\mathcal{F}_\mathcal{T}=\langle \mbox{Aut} _{\mathcal{F}(v)}(P) \mid v \in V(\mathcal{T}), P \mbox{ is $\mathcal{F}_\mathcal{T}$-centric} \rangle_S.$$ Hence by Theorem \ref{alpthmconv} it suffices to verify that axioms (a) and (b) in Definition \ref{fus} hold for $\mathcal{F}_\mathcal{T}$-centric subgroups. First we prove Definition \ref{fus} (a) holds. Let $P$ be a fully $\mathcal{F}_\mathcal{T}$-normalised, $\mathcal{F}_\mathcal{T}$-centric subgroup of $S$. Then since $|C_S(P\varphi)|=|Z(P\varphi)|=|Z(P)|$ for each $\varphi \in$ Hom$_{\mathcal{F}_\mathcal{T}}(P,S)$, $P$ is certainly fully $\mathcal{F}_\mathcal{T}$-centralised. 
It remains to prove that $P$ is fully $\mathcal{F}_\mathcal{T}$-automised. Each $\varphi \in$ Aut$_{\mathcal{F}_\mathcal{T}}(P)$ acts on Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$ by sending a vertex $[\alpha]_{\mathcal{F}(v)}$ to $[\varphi \circ \alpha]_{\mathcal{F}(v)}$. Since Aut$_{\mathcal{F}_\mathcal{T}}(P)$ is finite and Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$ is a tree, there exists a vertex or edge $[\alpha]_{\mathcal{F}(r)}$ in Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$ which is fixed under the action of Aut$_{\mathcal{F}_\mathcal{T}}(P)$ ($r \in V(\mathcal{T}) \cup E(\mathcal{T})$). This means that Aut$_{\mathcal{F}(r)}(P\alpha)=$ Aut$_{\mathcal{F}_\mathcal{T}}(P)^\alpha$. Choose $\beta \in$ Hom$_{\mathcal{F}(r)}(P\alpha, \S(r))$ so that $P\alpha\beta$ is fully $\mathcal{F}(r)$-normalised. Since $\mathcal{F}(r)$ is saturated, Aut$_{\S(r)}(P\alpha\beta) \in$ Syl$_p($Aut$_{\mathcal{F}(r)}(P\alpha\beta))$. Now $$|\mbox{Aut}_S(P)|=|N_S(P)|/|Z(P)| \geq |N_S(P\alpha\beta)|/|Z(P\alpha\beta)| \geq |\mbox{Aut}_{\S(r)}(P\alpha\beta)|,$$ where the first inequality follows from the fact that $P$ is fully $\mathcal{F}_\mathcal{T}$-normalised. Since $$|\mbox{Aut}_{\mathcal{F}(r)}(P\alpha\beta)|=|\mbox{Aut}_{\mathcal{F}(r)}(P\alpha)|=|\mbox{Aut}_{\mathcal{F}_\mathcal{T}}(P)|,$$ we must have Aut$_S(P) \in$ Syl$_p($Aut$_{\mathcal{F}_\mathcal{T}}(P))$, so that $P$ is fully $\mathcal{F}_\mathcal{T}$-automised, as needed. Next we prove that (b) holds in Definition \ref{fus}. Fix $\varphi \in$ Hom$_{\mathcal{F}_\mathcal{T}}(P,S)$ where $P$ is $\mathcal{F}_\mathcal{T}$-centric. We need to show that there is some $\bar{\varphi} \in$ Hom$_{\mathcal{F}_\mathcal{T}}(N_\varphi, S)$ which extends $\varphi$.
Set $K:=$ Aut$_{N_\varphi}(P)=$ Aut$_S(P)$ $\cap$ Aut$_S(P\varphi)^{\varphi^{-1}}.$ If $c_g \in K$ then $c_g \circ \varphi=\varphi \circ c_h$ for some $h \in N_S(P\varphi)$, and this shows that the vertex $[\varphi]_{\mathcal{F}(v_*)}$ in Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F})$ is fixed by the action of $K$. By Proposition \ref{resprop} applied with $Q=N_\varphi$ there is some $[\psi]_{\mathcal{F}(v_*)} \in$ Rep$_{\mathcal{F}_\mathcal{T}}(N_\varphi, \mathcal{F})$ with $[\psi|_P]_{\mathcal{F}(v_*)}=[\varphi]_{\mathcal{F}(v_*)}$. This implies that there is $\rho \in$ Iso$_{\mathcal{F}(v_*)}(P\varphi,P\psi)$ such that $\psi|_P=\varphi \circ \rho$. Since $\mathcal{F}(v_*)$ is saturated, $\rho^{-1}|_{P\psi}= (\psi|_P)^{-1} \circ \varphi$ extends to a map $\chi \in $ Hom$_{\mathcal{F}(v_*)}((N_\varphi)\psi, S)$. To see this, observe that $(N_\varphi)\psi \leq N_{\rho^{-1}}$ since for $g \in N_\varphi$, $(c_{g\psi})^{\rho^{-1}}=(c_g)^\varphi \in$ Aut$_S(P\varphi)=$ Aut$_S(P\psi\rho^{-1}).$ Hence we set $\bar{\varphi}:= \psi \circ \chi \in$ Hom$_{\mathcal{F}_\mathcal{T}}(N_\varphi,S)$ so that $\bar{\varphi}$ extends $\varphi$, as needed. This completes the proof of (b) in Definition \ref{fus} and hence the proof of the theorem. \end{proof} \begin{Cor}\label{completionsatgroups} Let $(\mathcal{T},\mathcal{G})$ be a tree of groups with completion $\mathcal{G}_\mathcal{T}$ and assume that there exists a vertex $v_*$ of $\mathcal{T}$ for which the following is true: for each $v \in V(\mathcal{T})$ not equal to $v_*$, $p \nmid |\mathcal{G}(v):\mathcal{G}(e)|$ where $e$ is the edge to which $v$ is incident in the unique minimal path from $v$ to $v_*$.
Write $S:=\S(v_*)$ and assume that for each $P \leq S$, \begin{itemize} \item[(a)] if $P$ is $\mathcal{G}_\mathcal{T}$-conjugate to an $\mathcal{F}_{\S(v)}(\mathcal{G}(v))$-essential subgroup then $P$ is $\mathcal{F}_S(\mathcal{G}_\mathcal{T})$-centric; and \item[(b)] if $P$ is $\mathcal{F}_S(\mathcal{G}_\mathcal{T})$-centric then $\tilde{\mathcal{T}}^P/C_{\mathcal{G}_\mathcal{T}}(P)$ is a tree. \end{itemize} Then $\mathcal{F}_S(\mathcal{G}_\mathcal{T})$ is saturated. \end{Cor} \begin{proof} Let $(\mathcal{T},\mathcal{F},\S)$ be a tree of fusion systems induced by $(\mathcal{T},\mathcal{G})$. The hypothesis of the corollary implies that $(\mathcal{T},\mathcal{F},\S)$ satisfies $(H)$, so there exists a completion $\mathcal{F}_\mathcal{T}$ of $(\mathcal{T},\mathcal{F},\S)$ by Lemma \ref{compft}. By Theorem \ref{treesgroupthm}, $\mathcal{F}_\mathcal{T} \cong \mathcal{F}_S(\mathcal{G}_\mathcal{T})$, so it suffices to verify that conditions (a) and (b) in Theorem \ref{completionsat} hold. Clearly (a) holds by assumption and (b) holds by Proposition \ref{phip} and Corollary \ref{corftgt}. \end{proof} \begin{Ex} Let $(\mathcal{T},\mathcal{F},\S)$ be the tree of fusion systems constructed in Example \ref{ftex}. Clearly (a) holds in the statement of Theorem \ref{completionsat} since $S, V_1$ and $V_2$ are the only $\mathcal{F}_\mathcal{T}$-centric subgroups. It remains to check that (b) holds for these subgroups. Firstly, we show that $|$Rep$_{\mathcal{F}_\mathcal{T}}(V_i,\mathcal{F}(e))|=3$. To see this, for $i=1,2$ write $Z(S)=\langle z \rangle$, choose $x_i$ such that $V_i=\langle z, x_i \rangle$ and notice that $V_i\varphi=V_i$ for each $\varphi \in$ Hom$_{\mathcal{F}_\mathcal{T}}(V_i,S)$. Since Aut$_{\mathcal{F}(e)}(V_i)$ interchanges $x_i$ and $x_iz$, the class of $\varphi$ is determined by which of $x_i,z$ and $x_iz$ gets mapped to $z$ under $\varphi$, yielding exactly $3$ classes of embeddings.
By similar arguments, the facts that Aut$_{\mathcal{F}(v_i)}(V_i) \cong$ Sym$(3)$ and Aut$_{\mathcal{F}(v_{3-i})}(V_i) \cong C_2$ imply that $|$Rep$_{\mathcal{F}_\mathcal{T}}(V_i,\mathcal{F}(v_i))|=1$ and $|$Rep$_{\mathcal{F}_\mathcal{T}}(V_i,\mathcal{F}(v_{3-i}))|=3$ respectively for $i=1,2$. Thus for $i=1,2$ the graph Rep$_{\mathcal{F}_\mathcal{T}}(V_i,\mathcal{F})$ is connected with $4=3+1$ vertices and $3$ edges, so its reduced Euler characteristic is $0$ and it is a tree. In fact, we can see directly that Rep$_{\mathcal{F}_\mathcal{T}}(V_i,\mathcal{F})$ is a three-pointed star. Finally, the graph Rep$_{\mathcal{F}_\mathcal{T}}(S,\mathcal{F})$ is obviously a tree and so the saturation of $\mathcal{F}_\mathcal{T}$ follows from Theorem \ref{completionsat}. \end{Ex} \section{Attaching $p'$-automorphisms to Fusion Systems}\label{extconst} In this section, we apply Theorem B to prove Theorem C, which gives a general procedure for extending a fusion system by $p'$-automorphisms of its centric subgroups. Theorem C is a generalisation of \cite[Proposition 5.1]{BLO4} to arbitrary saturated fusion systems. \subsection{Extensions of $p$-constrained Groups} We appeal to some cohomological machinery which will allow us to prove a lemma giving conditions for a $p$-constrained group to be extended by a $p'$-group of automorphisms of a normal centric subgroup. First recall the following characterisation of extensions of non-abelian groups. \begin{Thm}\label{nonabext} Let $G$ and $N$ be finite groups. Each homomorphism $$\begin{CD} \psi_E: G @>>> \operatorname{Out}(N) \end{CD}$$ determines an obstruction in $H^3(G,Z(N))$ which vanishes if and only if there exists an extension $$\begin{CD} 1 @>>> N @>>> E @>>> G @>>> 1 \end{CD}$$ which gives rise to $\psi_E$ via the conjugation action of $E$ on $N$. Furthermore, the group $H^2(G, Z(N))$ acts freely and transitively on the set of (suitably defined) equivalence classes of such extensions. \end{Thm} \begin{proof} See \cite[Theorems IV.6.6 and IV.6.7]{Br}.
\end{proof} When $H \leq G$, there is a \textit{restriction map} $$\begin{CD} H^n(G,M) @>\mbox{res}_H^G>> H^n(H,M) \end{CD}$$ for each $G$-module $M$ and $n \geq 0$ (see \cite[Section III.9]{Br}). Each $g \in G$ induces a well-defined map $\begin{CD} H^n(H,M) @>g>> H^n(H^g,M) \end{CD}$ and we define $z \in H^*(H,M)$ to be \textit{$G$-invariant} if $$\mbox{res}_{H \cap H^g}^H z = \mbox{res}_{H \cap H^g}^{H^g} zg $$ for each $g \in G$. The following result characterises $G$-invariant elements when $M$ is an $\mathbb{F}_pG$-module for some prime $p$. \begin{Thm} Let $G$ be a finite group, $p$ be prime and $M$ be an $\mathbb{F}_pG$-module. For each $H \leq G$ with $p \nmid |G:H|$ and $n \geq 0$, res$_H^G$ maps $H^n(G,M)$ isomorphically onto the set of $G$-invariant elements of $H^n(H,M)$. \end{Thm} \begin{proof} This follows from \cite[Theorem III.10.3]{Br}. \end{proof} As an immediate consequence we have: \begin{Cor}\label{cohcor} Let $G$ be a finite group and $M$ be an $\mathbb{F}_pG$-module. If $H$ is a strongly $p$-embedded subgroup of $G$ then res$_H^G$ induces an isomorphism $$\begin{CD} H^*(G,M) @>\mbox{res}_H^G>> H^*(H,M). \end{CD}$$ \end{Cor} Theorem \ref{nonabext} and Corollary \ref{cohcor} may be applied to prove the following result, which is a key ingredient in the proof of Theorem C. \begin{Lem}\label{constex} Let $H$ be a finite group with $O_{p'}(H)=1$ and let $Q$ be a normal $p$-subgroup of $H$ with $C_H(Q) \leq Q$. Assume $\Delta \leq \operatorname{Out}(Q)$ is chosen such that $\operatorname{Out}_H(Q)$ is a strongly $p$-embedded subgroup of $\Delta$. There exists a finite group $G$ containing $H$ as a subgroup of index prime to $p$, with $Q \unlhd G$ and $\operatorname{Out}_G(Q) \cong \Delta$. \end{Lem} \begin{proof} Write $K:=$ Out$_H(Q)$. Since $K$ is strongly $p$-embedded in $\Delta$ and $Z(Q)$ may be regarded as an $\mathbb{F}_p \Delta$-module, Corollary \ref{cohcor} implies that $H^j(\Delta; Z(Q)) \cong H^j(K;Z(Q))$ for each $j > 0$.
Since this holds when $j=3$, there exists a finite group $G$ which fits into a diagram $$\begin{CD} 1 @>>> Q @>>> G @>>> \Delta @>>> 1\\ @. @| @AAA @AAA \\ 1 @>>> Q @>>> H @>>> K @>>> 1, \end{CD}$$ by Theorem \ref{nonabext}. Since $H^2(\Delta; Z(Q)) \cong H^2(K;Z(Q))$ acts freely and transitively on the set of all such extensions of $Q$ by $\Delta$, we may choose $G$ such that $H \leq G$. In particular, $\Delta \cong G/Q=$ Out$_G(Q)$, as required. \end{proof} \subsection{Proof of Theorem C}\label{proofsat} Let $\mathcal{F}_0$ be a saturated fusion system on a finite $p$-group $S$. Our goal is to apply Lemma \ref{constex} to find conditions for there to exist a saturated fusion system $\mathcal{F}$ containing $\mathcal{F}_0$ with the property that Aut$_{\mathcal{F}_0}(Q)$ is a strongly $p$-embedded subgroup of Aut$_\mathcal{F}(Q)$ whenever $Q \leq S$ is $\mathcal{F}$-essential. The technique we use will involve the construction of a star of fusion systems $(\mathcal{T},\mathcal{F}(-),\S(-))$ with $\mathcal{F}_0$ associated to the central vertex and the fusion systems of certain $p'$-extensions of constrained models of the normalisers $N_{\mathcal{F}_0}(Q)$ associated to the outer vertices. The saturation of the completion of $(\mathcal{T},\mathcal{F}(-),\S(-))$ will follow from Theorem \ref{completionsat}. \begin{Thm}\label{proofsatthm} Let $\mathcal{F}_0$ be a saturated fusion system on a finite $p$-group $S$. For $1 \leq i \leq m$, let $Q_i \leq S$ be fully $\mathcal{F}_0$-normalised subgroups with $Q_i\varphi \nleq Q_j$ for each $\varphi \in \operatorname{Hom}_{\mathcal{F}_0}(Q_i,S)$ and $i \neq j$. Set $K_i:=\operatorname{Out}_{\mathcal{F}_0}(Q_i)$ and choose $\Delta_i \leq \operatorname{Out}(Q_i)$ so that $K_i$ is a strongly $p$-embedded subgroup of $\Delta_i$.
Write $$\mathcal{F}= \langle \{\operatorname{Hom}_{\mathcal{F}_0}(P,S) \mid P \leq S\} \cup \{\Delta_i \mid 1 \leq i \leq m \} \rangle_S.$$ Assume further that for each $1 \leq i \leq m$, \begin{itemize} \item[(a)] $Q_i$ is $\mathcal{F}_0$-centric (hence $\mathcal{F}$-centric) and minimal (under inclusion) amongst all $\mathcal{F}$-centric subgroups, and \item[(b)] no proper subgroup of $Q_i$ is $\mathcal{F}_0$-essential. \end{itemize} Then $\mathcal{F}$ is saturated. \end{Thm} \begin{proof} Since $Q_i$ is fully $\mathcal{F}_0$-normalised and $\mathcal{F}_0$-centric, $N_{\mathcal{F}_0}(Q_i)$ is a constrained fusion system. Hence by Theorem \ref{const} there exists a unique finite group $H_i$ with $Q_i \unlhd H_i$, $C_{H_i}(Q_i) \leq Q_i$ and $N_{\mathcal{F}_0}(Q_i)=\mathcal{F}_{N_S(Q_i)}(H_i)$. Since Out$_{\mathcal{F}_0}(Q_i)=$ Out$_{H_i}(Q_i)$, Lemma \ref{constex} implies that there is some finite group $G_i$ containing $H_i$ with $p \nmid |G_i: H_i|$ and which satisfies Out$_{G_i}(Q_i)=\Delta_i$. Now let $\mathcal{T}$ be the star formed by a central vertex $v_0$ and $m$ vertices $v_i$ each incident to a unique edge $e_i=(v_0,v_i)$ for $1 \leq i \leq m$. Let $(\mathcal{T},\mathcal{F}(-),\S(-))$ be the tree of fusion systems formed by setting $\mathcal{F}(v_0):=\mathcal{F}_0$, $\mathcal{F}(v_i):=\mathcal{F}_{N_S(Q_i)}(G_i)$ and $\mathcal{F}(e_i):=\mathcal{F}_{N_S(Q_i)}(H_i)=N_{\mathcal{F}_0}(Q_i)$ for $1 \leq i \leq m$. Then $(\mathcal{T},\mathcal{F}(-),\S(-))$ satisfies $(H)$, $\mathcal{F}_\mathcal{T}=\mathcal{F}$ by Lemma \ref{compft} and it suffices to verify that conditions (a) and (b) of Theorem \ref{completionsat} hold. Suppose that $P=\S(v_i)$ or $P$ is $\mathcal{F}$-conjugate to an $\mathcal{F}(v_i)$-essential subgroup. We need to show that $P$ is $\mathcal{F}$-centric. In the first case, since $Q_i$ is $\mathcal{F}$-centric (by assumption), $P=N_S(Q_i)$ is also $\mathcal{F}$-centric by Lemma \ref{centlem} (b). 
In the second case, there exists some $\varphi \in$ Hom$_\mathcal{F}(P,S)$ such that $P\varphi \leq \S(v_i)$ is $\mathcal{F}(v_i)$-essential. If $i \neq 0$ then since $Q_i \unlhd \mathcal{F}(v_i)$, Lemma \ref{esscont} implies that $Q_i \leq P\varphi$, so that since $Q_i$ is $\mathcal{F}$-centric, $P\varphi$ (and hence $P$) is also $\mathcal{F}$-centric by Lemma \ref{centlem} (b) again. If $i = 0$ then, since $P\varphi$ is not contained in $Q_i$ for any $1 \leq i \leq m$ (by condition (b) in the theorem) and $P\varphi$ is $\mathcal{F}_0$-centric, $P\varphi$ is also $\mathcal{F}$-centric, by the definition of $\mathcal{F}_\mathcal{T}$. It remains to prove that (b) holds in Theorem \ref{completionsat}. It will be enough to show directly that the connected component of the vertex $[\iota_P^S]_{\mathcal{F}(v_0)}$ in Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F}(-))$ is a tree for each $\mathcal{F}_\mathcal{T}$-centric subgroup $P \leq S$. Suppose $[\beta]_{\mathcal{F}(v_0)}$ is connected to $[\beta']_{\mathcal{F}(v_i)}$ via an edge $[\alpha]_{\mathcal{F}(e_i)}$ in Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F}(-))$. If $[\beta']_{\mathcal{F}(v_i)}$ is incident to another edge $[\alpha']_{\mathcal{F}(e_i)}$ then there exists $\rho \in$ Iso$_{\mathcal{F}(v_i)}(P\alpha,P\alpha')$ such that $\alpha'=\alpha \circ \rho$. Since $Q_i \unlhd \mathcal{F}(v_i)$, $\rho$ extends to a map $\bar{\rho} \in$ Hom$_{\mathcal{F}(v_i)}((P\alpha) Q_i,(P\alpha') Q_i)$ with $\bar{\rho}|_{Q_i} \in $ Aut$_{\mathcal{F}(v_i)}(Q_i)$. Set $\chi:=[\bar{\rho}|_{Q_i}] \in$ Out$_{\mathcal{F}(v_i)}(Q_i)=\Delta_i$ for the class of $\bar{\rho}|_{Q_i}$ in $\Delta_i$ and note that $\chi \notin K_i$, since otherwise $[\alpha]_{\mathcal{F}(e_i)}=[\alpha']_{\mathcal{F}(e_i)}$. Now $$\chi^{-1} \circ \mbox{Out}_{(P\alpha) Q_i}(Q_i) \circ \chi = \mbox{Out}_{(P\alpha') Q_i}(Q_i) \leq K_i$$ and $$\mbox{Out}_{(P\alpha) Q_i}(Q_i) \leq K_i \cap \chi \circ K_i \circ \chi^{-1},$$ which is a $p'$-group by hypothesis.
Since $Q_i$ is $\mathcal{F}$-centric, we have Out$_{(P\alpha) Q_i}(Q_i) \cong (P\alpha) Q_i/Q_i$ so that $P\alpha \leq Q_i$. But $P$ is $\mathcal{F}$-centric and $Q_i$ is minimal amongst all $\mathcal{F}$-centric subgroups (by assumption) and so $P\alpha=Q_i$. Furthermore, since $Q_i$ is not $\mathcal{F}_0$-conjugate to $Q_j$ for $i \neq j$, this must occur for some unique $i$. Setting $\beta=\iota_P^S$ in the previous paragraph we see that either Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F}(-))$ is a tree or $P$ is $\mathcal{F}_0$-conjugate to a unique $Q_i$. Suppose we are in this latter situation and let $\Gamma_0 \subset$ Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F}(-))$ be the connected subgraph consisting of all vertices of the form $[\beta]_{\mathcal{F}(v_0)}$ or $[\beta']_{\mathcal{F}(v_i)}$ and edges of the form $[\alpha]_{\mathcal{F}(e_i)}$. If $[\gamma]_{\mathcal{F}(v_j)}$ is a vertex not in $\Gamma_0$ then $j \neq i$ and a minimal path from $[\gamma]_{\mathcal{F}(v_j)}$ to $\Gamma_0$ must consist of a single edge $[\delta]_{\mathcal{F}(e_j)}$, since otherwise, by the argument in the previous paragraph, we would have that $P$ is $\mathcal{F}_0$-conjugate to $Q_j$, a contradiction (see Figure 4.1 below). Hence $\Gamma_0$ is a deformation retract of Rep$_{\mathcal{F}_\mathcal{T}}(P,\mathcal{F}(-))$ and it is enough to prove that $\Gamma_0$ is a tree. In fact we prove that $\Gamma_0$ is a star. Suppose that a vertex $[\beta]_{\mathcal{F}(v_0)} \in \Gamma_0$ is incident to two (different) edges $[\alpha]_{\mathcal{F}(e_i)}$ and $[\alpha']_{\mathcal{F}(e_i)}$. The argument in the previous paragraph shows that $\alpha, \alpha' \in$ Iso$_{\mathcal{F}(e_i)}(P,Q_i)$ and there exists $\rho \in$ Aut$_{\mathcal{F}(v_i)}(Q_i)=\Delta_i$ such that $\alpha \circ \rho = \alpha'$. Hence $[\alpha]_{\mathcal{F}(e_i)} = [\alpha']_{\mathcal{F}(e_i)}$ and each $[\beta]_{\mathcal{F}(v_0)} \in \Gamma_0$ is incident to a unique edge. This establishes the claim, and finishes the proof of the theorem.
\end{proof} \begin{figure}[!h]\label{treefig} \centering \labellist \pinlabel $[\beta']_{\mathcal{F}(v_i)}$ at 140 70 \pinlabel $[\beta]_{\mathcal{F}(v_0)}$ at 100 150 \pinlabel$[\gamma]_{\mathcal{F}(v_j)}$ at 80 200 \pinlabel$[\alpha]_{\mathcal{F}(e_i)}$ at 80 100 \pinlabel$[\delta]_{\mathcal{F}(e_j)}$ at 20 175 \endlabellist \includegraphics[scale=0.6]{number1} \caption{Vertices of distance at most two from $[\beta']_{\mathcal{F}(v_i)}$ in $\Gamma$ for each $i \neq j$.} \end{figure}
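For orientation, the strong $p$-embedding hypothesis on $K_i \leq \Delta_i$ can be illustrated by a standard example from finite group theory, recalled here from the general literature rather than from the argument above. A subgroup $H < G$ with $p \mid |H|$ is strongly $p$-embedded if $p \nmid |H \cap H^g|$ for all $g \in G \setminus H$; the classical family of examples is the following.

```latex
Let $G = \mathrm{PSL}_2(q)$ with $q = p^n$. Distinct Sylow $p$-subgroups
of $G$ intersect trivially, so for a Sylow $p$-subgroup $P$ the Borel
subgroup
\[
  H = N_G(P), \qquad |H| = \frac{q(q-1)}{\gcd(2,q-1)},
\]
is a proper subgroup of $G$ containing $P$ with $p \nmid |H \cap H^g|$
for every $g \in G \setminus H$; hence $H$ is strongly $p$-embedded in $G$.
```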
\section{Introduction and main result} Let $\Gamma$ be a rectifiable curve in the complex plane equipped with arc-length measure $|d\tau|$. We suppose that $\Gamma$ is simple, that is, homeomorphic to a segment or to a circle. A measurable function $w:\Gamma\to[0,\infty]$ is said to be a weight if it is positive and finite almost everywhere. Let $p:\Gamma\to (1,\infty)$ be a continuous function. A weighted variable Lebesgue space $L^{p(\cdot)}(\Gamma,w)$ is the set of all measurable complex-valued functions $f$ on $\Gamma$ such that \[ \int_\Gamma |f(\tau)w(\tau)/\lambda|^{p(\tau)}\,|d\tau|<\infty \] for some $\lambda=\lambda(f)>0$. It is a Banach space when equipped with the Luxemburg-Nakano norm \[ \|f\|_{L^{p(\cdot)}(\Gamma,w)}=\inf\left\{ \lambda>0\ :\ \int_\Gamma |f(\tau)w(\tau)/\lambda|^{p(\tau)}\,|d\tau|\le 1 \right\}. \] It is clear that $L^{p(\cdot)}(\Gamma,w)$ coincides with the standard Lebesgue space whenever $p$ is constant. It is a particular case of the so-called Musielak-Orlicz spaces (see \cite{Musielak83,MO59}). Two weights $w_1$ and $w_2$ on $\Gamma$ are said to be equivalent if there is a function $f$ on $\Gamma$, bounded and bounded away from zero, such that $w_1=fw_2$. It is easy to see that $L^{p(\cdot)}(\Gamma,w_1)$ and $L^{p(\cdot)}(\Gamma,w_2)$ are isomorphic whenever $w_1$ and $w_2$ are equivalent. A curve $\Gamma$ is said to be Carleson (or Ahlfors-David regular) if \[ C_\Gamma:=\sup_{t\in\Gamma}\sup_{\varepsilon>0}\frac{|\Gamma(t,\varepsilon)|}{\varepsilon}<\infty \] where $\Gamma(t,\varepsilon):=\{\tau\in\Gamma:|\tau-t|<\varepsilon\}$ is the portion of the curve in the disk centered at $t$ of radius $\varepsilon$ and $|\Omega|$ denotes the measure of a measurable set $\Omega\subset\Gamma$. We are interested in the boundedness conditions for the maximal operator \[ (Mf)(t):=\sup_{\varepsilon>0}\frac{1}{|\Gamma(t,\varepsilon)|}\int_{\Gamma(t,\varepsilon)}|f(\tau)|\,|d\tau| \quad (t\in\Gamma) \] on weighted variable Lebesgue spaces.
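The maximal operator $M$ lends itself to a direct discrete approximation, which may help fix ideas. The following sketch (our own discretization of the unit circle; all names are illustrative and not from the text) computes $Mf$ at grid nodes by maximizing averages of $|f|$ over growing portions $\Gamma(t,\varepsilon)$:

```python
import numpy as np

# Discrete sketch of the maximal operator on the unit circle Γ = {e^{iθ}}:
# for each node t_j, grow the window Γ(t_j, ε) node by node (in order of
# the chordal distance |τ - t_j|) and take the largest average of |f|.
n = 400
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
tau = np.exp(1j * theta)      # nodes on the curve
dl = 2.0 * np.pi / n          # uniform arc-length element

def maximal_function(f_vals):
    out = np.empty(n)
    for j in range(n):
        order = np.argsort(np.abs(tau - tau[j]))
        csum = np.cumsum(np.abs(f_vals[order])) * dl   # ≈ ∫_{Γ(t_j,ε)} |f| |dτ|
        lengths = dl * np.arange(1, n + 1)             # ≈ |Γ(t_j, ε)|
        out[j] = np.max(csum / lengths)
    return out

f = np.cos(theta)
Mf = maximal_function(f)
# Mf dominates |f| pointwise and is itself bounded by max |f|.
print(bool(np.all(Mf >= np.abs(f) - 1e-12)), bool(np.max(Mf) <= 1.0 + 1e-12))  # True True
```

The two printed facts reflect the elementary pointwise bounds $|f|\le Mf\le \|f\|_\infty$; the theorems discussed below quantify the much deeper boundedness of $M$ on the weighted spaces $L^{p(\cdot)}(\Gamma,w)$.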
This operator is one of the main players in harmonic analysis. It is closely related to the Cauchy singular integral operator \[ (Sf)(t):=\lim_{\varepsilon\to 0}\frac{1}{\pi i}\int_{\Gamma\setminus\Gamma(t,\varepsilon)} \frac{f(\tau)}{\tau-t}\,d\tau \quad(t\in\Gamma). \] The boundedness of both operators on standard weighted Lebesgue spaces is well understood (see e.g. \cite{BK97,GGKK98,Dynkin91,Stein93}). If $T$ is one of the operators $M$ or $S$ and $1<p<\infty$, then $T$ is bounded on $L^p(\Gamma,w)$ if and only if $w$ is a Muckenhoupt weight, $w\in A_p(\Gamma)$, that is, \[ \sup_{t\in\Gamma}\sup_{\varepsilon>0} \left(\frac{1}{\varepsilon}\int_{\Gamma(t,\varepsilon)}w^p(\tau)\,|d\tau|\right)^{1/p} \left(\frac{1}{\varepsilon}\int_{\Gamma(t,\varepsilon)}w^{-q}(\tau)\,|d\tau|\right)^{1/q}<\infty \] where $1/p+1/q=1$. By H\"older's inequality, if $w$ is a Muckenhoupt weight, then $\Gamma$ is a Carleson curve. Let us define the weight we are interested in. Fix $t\in\Gamma$ and consider the function $\eta_{t}:\Gamma\setminus\{t\}\to(0,\infty)$ defined by \[ \eta_{t}(\tau):=e^{-\arg(\tau-t)}, \] where $\arg(\tau-t)$ denotes any continuous branch of the argument on $\Gamma\setminus\{t\}$. For every $\gamma\in\mathbb{C}$, put \[ \varphi_{t,\gamma}(\tau):=|(\tau-t)^\gamma|= |\tau-t|^{{\rm Re}\,\gamma}\eta_t(\tau)^{{\rm Im}\,\gamma} \quad (\tau\in\Gamma\setminus\{t\}). \] In the Fredholm theory of singular integral operators with piecewise continuous coefficients on $L^{p(\cdot)}(\Gamma)$ (without weights!) it is important to know for which values of $\gamma$ the operator $S$ is bounded on $L^{p(\cdot)}(\Gamma,\varphi_{t,\gamma})$ (see e.g. \cite{Karlovich03-JIEA,Karlovich08-IWOTA} and also \cite{BK97}). In fact, an attempt to answer this question is our main motivation for this work. The above question was completely studied for the case of standard Lebesgue spaces by A.~B\"ottcher and Yu.~Karlovich \cite[Section~3.1]{BK97}.
To formulate their result explicitly, we need some definitions. A function $\varrho:(0,\infty)\to(0,\infty]$ is called regular if it is bounded from above in some open neighborhood of the point $1$. A function $\varrho:(0,\infty)\to(0,\infty]$ is said to be submultiplicative if $\varrho(x_1x_2)\le\varrho(x_1)\varrho(x_2)$ for all $x_1,x_2\in(0,\infty)$. A regular submultiplicative function is finite everywhere and one can define \[ \alpha(\varrho):=\sup_{x\in(0,1)}\frac{\log\varrho(x)}{\log x}, \quad \beta(\varrho):=\inf_{x\in(1,\infty)}\frac{\log\varrho(x)}{\log x}. \] In this case, by \cite[Theorem~1.13]{BK97}, $-\infty<\alpha(\varrho)\le\beta(\varrho)<\infty$. The numbers $\alpha(\varrho)$ and $\beta(\varrho)$ are called lower and upper indices of $\varrho$, respectively. For $t\in\Gamma$, put $d_{t}:=\max\limits_{\tau\in\Gamma}|\tau-t|$. Following \cite[Section~1.5]{BK97}, for a continuous function $\psi:\Gamma\setminus\{t\}\to(0,\infty)$, we define \[ (W_{t}\psi)(x):=\left\{ \begin{array}{lll} \displaystyle \sup_{0<R\le d_{t}}\left( \max_{|\tau-t|=xR}\psi(\tau)/\min_{|\tau-t|=R}\psi(\tau) \right) & \mbox{for} & x\in(0,1],\\ \displaystyle \sup_{0<R\le d_{t}}\left( \max_{|\tau-t|=R}\psi(\tau)/\min_{|\tau-t|=x^{-1}R}\psi(\tau) \right) & \mbox{for} & x\in[1,\infty). \end{array} \right. \] This function is submultiplicative in view of \cite[Lemma~1.15]{BK97}. From \cite[Theorem~1.18]{BK97} it follows that $W_{t}\eta_{t}$ is regular for every $t\in\Gamma$ whenever $\Gamma$ is a Carleson curve. Hence the lower and upper spirality indices $\delta_{t}^-$ and $\delta_{t}^+$ at $t\in\Gamma$ are correctly defined by \[ \delta_{t}^-:=\alpha(W_{t}\eta_{t}), \quad \delta_{t}^+:=\beta(W_{t}\eta_{t}). \] \begin{prop}\label{pr:Carleson} \begin{enumerate} \item[{\rm (a)}] If $\Gamma$ is a piecewise smooth curve, then $\arg(\tau-t)=O(1)$ and $\delta_t^-=\delta_t^+=0$ for all $t\in\Gamma$. 
\item[{\rm (b)}] If $\Gamma$ is a Carleson curve satisfying \begin{equation}\label{eq:logarithmic-Carleson} \arg(\tau-t)=-\delta\log|\tau-t|+O(1) \quad\mbox{as}\quad\tau\to t \end{equation} at some $t\in\Gamma$ with some $\delta\in\mathbb{R}$, then $\delta_t^-=\delta_t^+=\delta$. \item[{\rm (c)}] {\rm\textbf{(R. Seifullayev)}} If $\Gamma$ is a Carleson curve, then \begin{equation}\label{eq:general-Carleson} \arg(\tau-t)=O(-\log|\tau-t|) \quad\mbox{as}\quad\tau\to t \end{equation} for every $t\in\Gamma$. \item[{\rm (d)}] {\rm\textbf{(A. B\"ottcher, Yu. Karlovich)}} For any given real numbers $\alpha,\beta$ such that \[ -\infty<\alpha<\beta<+\infty, \] there exists a Carleson curve $\Gamma$ such that $\delta_t^-=\alpha$ and $\delta_t^+=\beta$ at some point $t\in\Gamma$. \end{enumerate} \end{prop} Parts (a) and (b) are trivial, a proof of part (c) is in \cite[Theorem~1.10]{BK97}, and part (d) is proved in \cite[Proposition~1.21]{BK97}. From \cite[Proposition~3.1]{BK97} it follows that $W_t\varphi_{t,\gamma}$ is regular for every $\gamma\in\mathbb{C}$ and \begin{eqnarray*} \alpha(W_t\varphi_{t,\gamma})&=&{\rm Re}\,\gamma+ \min\{\delta_t^-{\rm Im}\,\gamma,\delta_t^+{\rm Im}\,\gamma\}, \\ \beta(W_t\varphi_{t,\gamma})&=&{\rm Re}\,\gamma+ \max\{\delta_t^-{\rm Im}\,\gamma,\delta_t^+{\rm Im}\,\gamma\}. \end{eqnarray*} These equalities in conjunction with \cite[Theorem~2.33]{BK97} yield the following. \begin{thm}[A.~B\"ottcher, Yu.~Karlovich] \label{th:BK} Let $\Gamma$ be a Carleson curve and $p\in(1,\infty)$ be constant. Suppose $t\in\Gamma$ and $\gamma\in\mathbb{C}$. Then $\varphi_{t,\gamma}\in A_p(\Gamma)$ if and only if \begin{eqnarray*} 0&<&\frac{1}{p}+{\rm Re}\,\gamma+\min\{\delta_t^-{\rm Im}\,\gamma,\delta_t^+{\rm Im}\,\gamma\} \\ &\le&\frac{1}{p}+{\rm Re}\,\gamma+\max\{\delta_t^-{\rm Im}\,\gamma,\delta_t^+{\rm Im}\,\gamma\}<1.
\end{eqnarray*} \end{thm} In the last decade many results from classical harmonic analysis for standard (weighted) Lebesgue spaces were extended to the setting of (weighted) variable Lebesgue spaces (see e.g. \cite{CFMP06,Diening04,KS04,KS08,Lerner05} and the references therein). We recall only the most relevant result. Following \cite{Diening04,KS04}, we will always suppose that $p:\Gamma\to(1,\infty)$ is a continuous function satisfying the Dini-Lipschitz condition on $\Gamma$, that is, there exists a constant $C_p>0$ such that \begin{equation}\label{eq:Dini-Lipschitz} |p(\tau)-p(t)|\le\frac{C_p}{-\log|\tau-t|} \end{equation} for all $\tau,t\in\Gamma$ such that $|\tau-t|\le1/2$. For power weights one has the next criterion. For simplicity, we formulate it in the case of one singularity only. However, it is valid for power weights with a finite number of singularities (see \cite{KPS06-OTAA,KS05-Simonenko}). \begin{thm}[V.~Kokilashvili, V.~Paatashvili, S.~Samko] \label{th:KPS} Let $\Gamma$ be a Carleson curve and $p:\Gamma\to(1,\infty)$ be a continuous function satisfying the Dini-Lipschitz condition. For $t\in\Gamma$ and $\lambda\in\mathbb{R}$, define the power weight $w(\tau):=|\tau-t|^\lambda$. If $T$ is one of the operators $M$ or $S$, then $T$ is bounded on $L^{p(\cdot)}(\Gamma,w)$ if and only if \[ 0<\frac{1}{p(t)}+\lambda<1. \] \end{thm} Clearly, $\varphi_{t,\gamma}$ is equivalent to a power weight $w(\tau)=|\tau-t|^\lambda$ if and only if $\gamma$ is real or $\Gamma$ satisfies (\ref{eq:logarithmic-Carleson}) at $t$. Hence, Theorem~\ref{th:KPS} is not applicable to the weight $\eta_t$ for Carleson curves with $\delta_t^-<\delta_t^+$. The sufficiency portion of Theorem~\ref{th:KPS} has been extended recently to the case of radial oscillating weights (see \cite{KSS07-JFSA} for $M$ and \cite{KSS07-MN} for $S$). 
In the case of one singularity these weights have the form $w(\tau)=f(|\tau-t|)$ where $t\in\Gamma$ is fixed and $f:(0,{\rm diam}(\Gamma)]\to(0,\infty)$ is some continuous function with additional regularity properties. It is clear that the weight $\eta_t$ is not of this form. Thus, in general, weights considered in this paper lie beyond the class of radial oscillating weights. \begin{thm}[Main result] \label{th:main} Let $\Gamma$ be a Carleson curve and $p:\Gamma\to(1,\infty)$ be a continuous function satisfying the Dini-Lipschitz condition. If $t\in\Gamma$, $\gamma\in\mathbb{C}$, and \begin{eqnarray} 0&<& \frac{1}{p(t)}+{\rm Re}\,\gamma+ \min\{\delta_t^-{\rm Im}\,\gamma,\delta_t^+{\rm Im}\,\gamma\} \nonumber \\ &\le& \frac{1}{p(t)}+{\rm Re}\,\gamma+ \max\{\delta_t^-{\rm Im}\,\gamma,\delta_t^+{\rm Im}\,\gamma\}<1, \label{eq:main-conditions} \end{eqnarray} then $M$ is bounded on $L^{p(\cdot)}(\Gamma,\varphi_{t,\gamma})$. \end{thm} We conjecture that Theorem~\ref{th:main} is true with $M$ replaced by $S$ and that a check of the proof of \cite[Theorem~4.3]{KSS07-MN} will indicate the modifications needed to obtain the desired result. We also conjecture that inequalities (\ref{eq:main-conditions}) are necessary for the boundedness of $M$ and $S$ on $L^{p(\cdot)}(\Gamma,\varphi_{t,\gamma})$. To support the second conjecture, note that arguing as in \cite{Karlovich08-JFSA}, one can show that if $S$ is bounded on $L^{p(\cdot)}(\Gamma,\varphi_{t,\gamma})$, then \begin{eqnarray*} 0&\le& \frac{1}{p(t)}+{\rm Re}\,\gamma+ \min\{\delta_t^-{\rm Im}\,\gamma,\delta_t^+{\rm Im}\,\gamma\} \nonumber \\ &\le& \frac{1}{p(t)}+{\rm Re}\,\gamma+ \max\{\delta_t^-{\rm Im}\,\gamma,\delta_t^+{\rm Im}\,\gamma\}\le 1. \end{eqnarray*} The paper is organized as follows. In Section~\ref{sec:Muckenhoupt-ersatz} we formulate a sufficient condition for the boundedness of $M$ on $L^{p(\cdot)}(\Gamma,w)$ involving the classical Muckenhoupt condition. 
Further we apply it to the case of the weight $\varphi_{t,\gamma}$. In Section~\ref{sec:estimate} we estimate a weight $w$ with the only singularity at $t$ by power weights with exponents $\alpha(W_tw)-\varepsilon$ and $\beta(W_tw)+\varepsilon$ where $\varepsilon$ is small enough. Section~\ref{sec:proof} contains the proof of Theorem~\ref{th:main}. Here we follow an idea from \cite{KSS07-JFSA} and represent the weighted maximal operator as the sum of four maximal operators. The first operator is the maximal operator over a small arc containing the singularity of the weight $\varphi_{t,\gamma}$. Its boundedness follows from the results of Section~\ref{sec:Muckenhoupt-ersatz}. The second and third maximal operators are estimated by maximal operators with power weights with exponents $\alpha(W_t\varphi_{t,\gamma})-\varepsilon$ and $\beta(W_t\varphi_{t,\gamma})+\varepsilon$ by using the results of Section~\ref{sec:estimate}. The boundedness of the latter operators follows from Theorem~\ref{th:KPS}. The last maximal operator is over the complement of the small arc containing the singularity of the weight. Hence there is no influence of the weight on this operator and its boundedness follows trivially from Theorem~\ref{th:KPS}. \section{Sufficient condition involving Muckenhoupt weights} \label{sec:Muckenhoupt-ersatz} Although a complete characterization of weights for which $M$ is bounded on weighted variable Lebesgue spaces is still unknown, one of the most significant recent results to achieve this aim is the following sufficient condition (see \cite[Theorem~${\rm A}^\prime$]{KSS07-JFSA}). \begin{thm}[V.~Kokilashvili, N.~Samko, S.~Samko] \label{th:KSS} Let $\Gamma$ be a Carleson curve, $p:\Gamma\to(1,\infty)$ be a continuous function satisfying the Dini-Lipschitz condition, and $w:\Gamma\to[0,\infty]$ be a weight such that $w^{p/p_*}\in A_{p_*}(\Gamma)$, where \begin{equation}\label{eq:p-min} p_*:=p_*(\Gamma):=\min_{\tau\in\Gamma}p(\tau). 
\end{equation} Then $M$ is bounded on $L^{p(\cdot)}(\Gamma,w)$. \end{thm} This theorem does not contain the sufficiency portion of Theorem~\ref{th:KPS} whenever $p$ is variable because for the weight $\varrho(\tau)=|\tau-t|^\lambda$ the condition $\varrho^{p/p_*}\in A_{p_*}(\Gamma)$ is equivalent to $-1/p(t)<\lambda<(p_*-1)/p(t)$, while the ``correct'' interval for $\lambda$ is wider: $-1/p(t)<\lambda<(p(t)-1)/p(t)$. This means that conditions of Theorem~\ref{th:KSS} cannot be necessary unless $p$ is constant. Now we apply Theorem~\ref{th:KSS} to the weight $\varphi_{t,\gamma}$. \begin{lem}\label{le:ersatz} Let $\Gamma$ be a Carleson curve and $p:\Gamma\to(1,\infty)$ be a continuous function satisfying the Dini-Lipschitz condition. If $t\in\Gamma$, $\gamma\in\mathbb{C}$, and \begin{eqnarray} 0&<& \frac{1}{p(t)}+{\rm Re}\,\gamma+ \min\{\delta_t^-{\rm Im}\,\gamma,\delta_t^+{\rm Im}\,\gamma\} \nonumber \\ &\le& \frac{1}{p(t)}+{\rm Re}\,\gamma+ \max\{\delta_t^-{\rm Im}\,\gamma,\delta_t^+{\rm Im}\,\gamma\}<\frac{p_*}{p(t)}, \label{eq:ersatz-1} \end{eqnarray} where $p_*$ is defined by {\rm(\ref{eq:p-min})}, then $M$ is bounded on $L^{p(\cdot)}(\Gamma,\varphi_{t,\gamma})$. \end{lem} \begin{pf} Inequalities (\ref{eq:ersatz-1}) are equivalent to \begin{eqnarray*} 0&<& \frac{1}{p_*}+{\rm Re}\,\gamma\frac{p(t)}{p_*}+ \min\left\{ \delta_t^-{\rm Im}\,\gamma\frac{p(t)}{p_*},\delta_t^+{\rm Im}\,\gamma\frac{p(t)}{p_*} \right\} \nonumber \\ &\le& \frac{1}{p_*}+{\rm Re}\,\gamma\frac{p(t)}{p_*}+ \max\left\{ \delta_t^-{\rm Im}\,\gamma\frac{p(t)}{p_*},\delta_t^+{\rm Im}\,\gamma\frac{p(t)}{p_*} \right\}<1. \end{eqnarray*} By Theorem~\ref{th:BK}, the latter inequalities are equivalent to $\varphi_{t,\gamma p(t)/p_*}\in A_{p_*}(\Gamma)$. Observe that the weights $\varphi_{t,\gamma p(t)/p_*}$ and $(\varphi_{t,\gamma})^{p/p_*}$ are equivalent, and therefore one of them belongs to $A_{p_*}(\Gamma)$ if and only if the other does.
Indeed, from Proposition~\ref{pr:Carleson} (c) and (\ref{eq:Dini-Lipschitz}) it follows that \begin{eqnarray*} && \frac{[\varphi_{t,\gamma}(\tau)]^{p(\tau)/p_*}}{\varphi_{t,\gamma p(t)/p_*}(\tau)} = \frac{\displaystyle\exp\left\{ \big({\rm Re}\,\gamma\log|\tau-t|-{\rm Im}\,\gamma\arg(\tau-t)\big)\frac{p(\tau)}{p_*} \right\}} {\displaystyle\exp\left\{ \frac{{\rm Re}\,\gamma\,p(t)}{p_*}\log|\tau-t|-\frac{{\rm Im}\,\gamma\, p(t)}{p_*}\arg(\tau-t) \right\}} \\ &&= \exp\left\{ \left(\frac{{\rm Re}\,\gamma}{p_*}\log|\tau-t|-\frac{{\rm Im}\,\gamma}{p_*}\arg(\tau-t)\right) \big(p(\tau)-p(t)\big) \right\} \\ &&= \exp\left\{ \left(\frac{{\rm Re}\,\gamma}{p_*}\log|\tau-t|+\frac{{\rm Im}\,\gamma}{p_*}O(\log|\tau-t|)\right) O\left(\frac{1}{-\log|\tau-t|}\right) \right\} \\ &&=\exp\{O(1)\} \end{eqnarray*} as $\tau\to t$. This immediately implies that the weights $(\varphi_{t,\gamma})^{p/p_*}$ and $\varphi_{t,\gamma p(t)/p_*}$ are equivalent because they are continuous on $\Gamma\setminus\{t\}$. Finally, applying Theorem~\ref{th:KSS}, we obtain that $M$ is bounded on $L^{p(\cdot)}(\Gamma,\varphi_{t,\gamma})$. \qed \end{pf} \section{Estimates of weights with one singularity by power weights} \label{sec:estimate} Recall that there are more convenient formulas for calculation of indices of a regular submultiplicative function. 
\begin{thm}\label{th:submult-properties} If $\varrho:(0,\infty)\to(0,\infty)$ is regular and submultiplicative, then \begin{enumerate} \item[{\rm (a)}] \[ \alpha(\varrho)=\lim_{x\to 0}\frac{\log\varrho(x)}{\log x}, \quad \beta(\varrho)=\lim_{x\to\infty}\frac{\log\varrho(x)}{\log x}, \] and \[ -\infty<\alpha(\varrho)\le\beta(\varrho)<+\infty; \] \item[{\rm (b)}] $\varrho(x)\ge x^{\alpha(\varrho)}$ for all $x\in(0,1)$ and $\varrho(x)\ge x^{\beta(\varrho)}$ for all $x\in(1,\infty)$; \item[{\rm (c)}] given any $\varepsilon>0$, there exists an $x_0>1$ such that $\varrho(x)\le x^{\alpha(\varrho)-\varepsilon}$ for all $x\in(0,x_0^{-1})$ and $\varrho(x)\le x^{\beta(\varrho)+\varepsilon}$ for all $x\in(x_0,\infty)$. \end{enumerate} \end{thm} Part (a) is proved, for instance, in \cite[Theorem~1.13]{BK97}. Parts (b) and (c) follow from part (a), see e.g. \cite[Corollary~1.14]{BK97}. Fix $t_0\in\Gamma$. Let $\omega(t_0,\delta)$ denote the open arc on $\Gamma$ which contains $t_0$ and whose endpoints lie on the circle $\{\tau\in\mathbb{C}:|\tau-t_0|=\delta\}$. It is clear that $\omega(t_0,\delta)\subset\Gamma(t_0,\delta)$, however, it may happen that $\omega(t_0,\delta)\ne\Gamma(t_0,\delta)$. \begin{lem}\label{le:estimate} Let $\Gamma$ be a Carleson curve and $t_0\in\Gamma$. Suppose $w:\Gamma\setminus\{t_0\}\to(0,\infty)$ is a continuous function and $W_{t_0}w$ is regular. Let $\varepsilon>0$ and $\delta$ be such that $0<\delta<d_{t_0}$. 
Then there exist positive constants $C_j=C_j(\varepsilon,\delta,w)$, where $j=1,2$, such that \begin{equation}\label{eq:estimate-1} \frac{w(t)}{w(\tau)} \le C_1\left|\frac{t-t_0}{\tau-t_0}\right|^{\beta(W_{t_0}w)+\varepsilon} \end{equation} for all $t\in\Gamma\setminus\omega(t_0,\delta)$ and all $\tau\in\omega(t_0,\delta)$; and \begin{equation}\label{eq:estimate-2} \frac{w(t)}{w(\tau)} \le C_2\left|\frac{t-t_0}{\tau-t_0}\right|^{\alpha(W_{t_0}w)-\varepsilon} \end{equation} for all $t\in\omega(t_0,\delta)$ and all $\tau\in\Gamma\setminus\omega(t_0,\delta)$. \end{lem} \begin{pf} Let us denote $\beta:=\beta(W_{t_0}w)$. By Theorem~\ref{th:submult-properties}(c), for every $\varepsilon>0$ there exists an $x_0\in(1,\infty)$ such that \[ (W_{t_0}w)(x)\le x^{\beta+\varepsilon} \quad\mbox{for all}\quad x\in(x_0,\infty). \] From this inequality and the definition of $W_{t_0}w$ it follows that if $0<R\le d_{t_0}$ and $x\in(x_0,\infty)$, then \[ \max_{|t-t_0|=R}w(t) \le x^{\beta+\varepsilon}\min_{|\tau-t_0|=x^{-1}R}w(\tau) = \left(\frac{R}{|\tau-t_0|}\right)^{\beta+\varepsilon}\min_{|\tau-t_0|=x^{-1}R}w(\tau). \] Hence \begin{equation}\label{eq:estimate-3} w(t)\le\left|\frac{t-t_0}{\tau-t_0}\right|^{\beta+\varepsilon}w(\tau) \end{equation} for all $t\in\Gamma\setminus\{t_0\}$ and all $\tau\in\Gamma$ such that $|t-t_0|/|\tau-t_0|\in(x_0,\infty)$. Put \[ \Delta_{t_0}:=\min_{t\in\Gamma\setminus\omega(t_0,\delta)}|t-t_0|. \] It is clear that if $\tau\in\omega(t_0,\Delta_{t_0}/x_0)$ and $t\in\Gamma\setminus\omega(t_0,\delta)$, then (\ref{eq:estimate-3}) holds. Since the function \[ f(\tau):=\frac{w(\tau)}{|\tau-t_0|^{\beta+\varepsilon}} \] is continuous on $\Gamma\setminus\{t_0\}$, we have \[ 0<M_1:=\inf_{\tau\in\overline{\omega(t_0,\delta)\setminus\omega(t_0,\Delta_{t_0}/x_0)}}f(\tau), \quad M_2:=\sup_{\tau\in\overline{\Gamma\setminus\omega(t_0,\delta)}}f(\tau)<\infty. 
\] Hence \[ w(t)\le M_2|t-t_0|^{\beta+\varepsilon} \] for all $t\in\Gamma\setminus\omega(t_0,\delta)$ and \[ \frac{1}{w(\tau)}\le\frac{1}{M_1|\tau-t_0|^{\beta+\varepsilon}} \] for all $\tau\in\omega(t_0,\delta)\setminus\omega(t_0,\Delta_{t_0}/x_0)$. Multiplying these inequalities, we obtain \[ \frac{w(t)}{w(\tau)}\le\frac{M_2}{M_1}\left|\frac{t-t_0}{\tau-t_0}\right|^{\beta+\varepsilon} \] for all $t\in\Gamma\setminus\omega(t_0,\delta)$ and all $\tau\in\omega(t_0,\delta)\setminus\omega(t_0,\Delta_{t_0}/x_0)$. Thus (\ref{eq:estimate-1}) holds for $t\in\Gamma\setminus\omega(t_0,\delta)$ and all \[ \tau\in\omega(t_0,\Delta_{t_0}/x_0)\cup [\omega(t_0,\delta)\setminus\omega(t_0,\Delta_{t_0}/x_0)]=\omega(t_0,\delta) \] with $C_1:=\max\{1,M_2/M_1\}$. Estimate (\ref{eq:estimate-2}) is proved by analogy. \qed \end{pf} \section{Proof of Theorem~\ref{th:main}} \label{sec:proof} The idea of the proof is borrowed from \cite[Theorem~B]{KSS07-JFSA}. Fix $t_0\in\Gamma$ and $\gamma\in\mathbb{C}$. Notice that we omitted the subscript for $t_0$ in the formulation of the theorem for brevity. It is easily seen that $M$ is bounded on $L^{p(\cdot)}(\Gamma,\varphi_{t_0,\gamma})$ if and only if the operator \[ (M_{t_0,\gamma}f)(t):=\sup_{\varepsilon>0}\frac{\varphi_{t_0,\gamma}(t)}{|\Gamma(t,\varepsilon)|} \int_{\Gamma(t,\varepsilon)}\frac{|f(\tau)|}{\varphi_{t_0,\gamma}(\tau)}\,|d\tau| \quad (t\in\Gamma) \] is bounded on $L^{p(\cdot)}(\Gamma)$. As it was already mentioned in the introduction, the function $W_{t_0}\varphi_{t_0,\gamma}$ is regular and submultiplicative for every $\gamma\in\mathbb{C}$ and \begin{eqnarray*} \alpha &:=& \alpha(W_{t_0}\varphi_{t_0,\gamma}) = {\rm Re}\,\gamma+ \min\{\delta_{t_0}^-{\rm Im}\,\gamma,\delta_{t_0}^+{\rm Im}\,\gamma\}, \\ \beta &:=& \beta(W_{t_0}\varphi_{t_0,\gamma}) = {\rm Re}\,\gamma+ \max\{\delta_{t_0}^-{\rm Im}\,\gamma,\delta_{t_0}^+{\rm Im}\,\gamma\}. 
\end{eqnarray*} With these notations, conditions (\ref{eq:main-conditions}) have the form \[ 0<\frac{1}{p(t_0)}+\alpha, \quad \frac{1}{p(t_0)}+\beta<1. \] In this case there is a small $\varepsilon>0$ such that \begin{equation}\label{eq:main-1} 0<\frac{1}{p(t_0)}+\alpha-\varepsilon \le \frac{1}{p(t_0)}+\beta+\varepsilon<1. \end{equation} Since $p:\Gamma\to(1,\infty)$ is continuous and $1/p(t_0)+\beta<1$, we can choose a number $\delta\in(0,d_{t_0})$ such that the arc $\omega(t_0,\delta)$, which contains $t_0$ and has the endpoints on the circle $\{\tau\in\mathbb{C}:|\tau-t_0|=\delta\}$, is so small that $1+\beta p(t_0)<p_*$, where \[ p_*:=p_*(\omega(t_0,\delta))=\min_{\tau\in\overline{\omega(t_0,\delta)}}p(\tau). \] Hence \begin{equation}\label{eq:main-2} 0<\frac{1}{p(t_0)}+\alpha\le\frac{1}{p(t_0)}+\beta<\frac{p_*}{p(t_0)}. \end{equation} Let us denote by $\chi_\Omega$ the characteristic function of a set $\Omega\subset\Gamma$. For $f\in L^{p(\cdot)}(\Gamma)$, we have \begin{eqnarray} M_{t_0,\gamma}f &\le& \chi_{\omega(t_0,\delta)}M_{t_0,\gamma}\chi_{\omega(t_0,\delta)}f+ \chi_{\Gamma\setminus\omega(t_0,\delta)}M_{t_0,\gamma}\chi_{\omega(t_0,\delta)}f \nonumber \\ &&+ \chi_{\omega(t_0,\delta)}M_{t_0,\gamma}\chi_{\Gamma\setminus\omega(t_0,\delta)}f+ \chi_{\Gamma\setminus\omega(t_0,\delta)}M_{t_0,\gamma}\chi_{\Gamma\setminus\omega(t_0,\delta)}f. \label{eq:main-3} \end{eqnarray} From (\ref{eq:main-2}) and Lemma~\ref{le:ersatz} we conclude that $M_{t_0,\gamma}$ is bounded on $L^{p(\cdot)}(\omega(t_0,\delta))$. Hence $\chi_{\omega(t_0,\delta)}M_{t_0,\gamma}\chi_{\omega(t_0,\delta)}I$ is bounded on $L^{p(\cdot)}(\Gamma)$. 
From Lemma~\ref{le:estimate} it follows that \begin{equation}\label{eq:main-4} \chi_{\Gamma\setminus\omega(t_0,\delta)}M_{t_0,\gamma}\chi_{\omega(t_0,\delta)}f \le C_1 \chi_{\Gamma\setminus\omega(t_0,\delta)}M_{t_0,\beta+\varepsilon}\chi_{\omega(t_0,\delta)}f \le C_1M_{t_0,\beta+\varepsilon}f \end{equation} and \begin{equation}\label{eq:main-5} \chi_{\omega(t_0,\delta)}M_{t_0,\gamma}\chi_{\Gamma\setminus\omega(t_0,\delta)}f \le C_2 \chi_{\omega(t_0,\delta)}M_{t_0,\alpha-\varepsilon}\chi_{\Gamma\setminus\omega(t_0,\delta)}f \le C_2M_{t_0,\alpha-\varepsilon}f, \end{equation} where $C_1$ and $C_2$ are positive constants depending only on $\varepsilon,\delta,\gamma$, and $t_0$. Inequalities (\ref{eq:main-1}), Theorem~\ref{th:KPS}, and inequalities (\ref{eq:main-4})--(\ref{eq:main-5}) imply that the operators $\chi_{\Gamma\setminus\omega(t_0,\delta)}M_{t_0,\gamma}\chi_{\omega(t_0,\delta)}I$ and $\chi_{\omega(t_0,\delta)}M_{t_0,\gamma}\chi_{\Gamma\setminus\omega(t_0,\delta)}I$ are bounded on $L^{p(\cdot)}(\Gamma)$. Finally, since $\Gamma\setminus\omega(t_0,\delta)$ does not contain the singularity of the weight $\varphi_{t_0,\gamma}$ which is continuous on $\Gamma\setminus\{t_0\}$, there exists a constant $C_3>0$ such that \[ \chi_{\Gamma\setminus\omega(t_0,\delta)}M_{t_0,\gamma}\chi_{\Gamma\setminus\omega(t_0,\delta)}f \le C_3Mf. \] Then Theorem~\ref{th:KPS} and the above estimate yield the boundedness of the operator $\chi_{\Gamma\setminus\omega(t_0,\delta)}M_{t_0,\gamma}\chi_{\Gamma\setminus\omega(t_0,\delta)}I$ on $L^{p(\cdot)}(\Gamma)$. Thus all operators on the right-hand side of (\ref{eq:main-3}) are bounded on $L^{p(\cdot)}(\Gamma)$. Therefore the operator on the left-hand side of (\ref{eq:main-3}) is bounded, too. This completes the proof. \qed
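We conclude by noting that conditions (\ref{eq:main-conditions}) are elementary to evaluate for concrete data. The following sketch (the function name is ours, purely for illustration) does so, and confirms that on a curve with $\delta_t^-=\delta_t^+=0$ and real $\gamma$ they reduce to the classical power-weight condition $0<1/p(t)+\lambda<1$ of Theorem~\ref{th:KPS}:

```python
def main_conditions(p_t, gamma, delta_minus, delta_plus):
    """Evaluate the hypotheses of the main theorem for phi_{t,gamma}:
    0 < 1/p(t) + Re(gamma) + min{d- Im(gamma), d+ Im(gamma)} and
    1/p(t) + Re(gamma) + max{d- Im(gamma), d+ Im(gamma)} < 1."""
    re, im = gamma.real, gamma.imag
    lower = 1.0 / p_t + re + min(delta_minus * im, delta_plus * im)
    upper = 1.0 / p_t + re + max(delta_minus * im, delta_plus * im)
    return 0.0 < lower and upper < 1.0

# Smooth curve (delta± = 0), real gamma: reduces to 0 < 1/p(t) + lambda < 1.
print(main_conditions(2.0, 0.25 + 0.0j, 0.0, 0.0))   # True
# Spiraling curve with delta- = -1, delta+ = 1 and Im(gamma) = 1: the
# oscillation term pushes the upper bound to 1.75, so the test fails.
print(main_conditions(2.0, 0.25 + 1.0j, -1.0, 1.0))  # False
```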
\section{Introduction} \label{se:intro} Sampling from probability measures in high dimensional spaces is an important problem that arises in several applications, including computational statistical physics~\cite{Lelievre_al_2009}, Bayesian inference~\cite{Stuart2010}, and machine learning~\cite{Andrieu2003}. Typically one is interested in calculating integrals of the form \begin{equation}\label{e:expect} \pi(\phi):= {\mathbb E}_{\pi} \phi : = \int_{\mathbb{R}^d}\phi(x) \, \pi(dx), \end{equation} where $\pi(dx) =\pi(x) \, dx$\footnote{We assume that the target probability measure has a density with respect to Lebesgue measure. To simplify the notation, we will denote both the measure and the density by $\pi$.} is a probability measure in $\mathbb{R}^d$, known up to the normalization constant and $\phi \in L^2(\pi)$ is an observable. Here $L^2(\pi)$ denotes the weighted $L^2$ space for the scalar product $(\phi,\psi)_{\pi}=\int_{\mathbb{R}^d} \phi(x)\psi(x)\pi(x) dx$ and the corresponding norm is denoted by $\|\phi\|_{\pi}$. A standard methodology for calculating, or, rather, estimating the integral in~\eqref{e:expect} is to construct a stochastic process $\{ X(t) \}_{t >0}$ in $\mathbb{R}^d$, e.g. an It\^{o} diffusion process \begin{equation} \label{eq:sde0} dX(t)= f(X(t)) \, dt + \sigma(X(t)) \, dW_t \end{equation} that is ergodic with respect to the measure $\pi$. Here $W_t$ is a standard $m$--dimensional Brownian motion and $f:\mathbb{R}^d\rightarrow \mathbb{R}^d$ and $\sigma:\mathbb{R}^d\rightarrow \mathbb{R}^{d\times m}$ are assumed smooth and Lipschitz continuous. 
In particular, $\pi$ is the unique normalized solution of the stationary Fokker-Planck equation $\cL^*\pi=0,$ where $\cL^*$ is the $L^2(dx)$ adjoint of the generator $\cL\phi:=f\cdot\nabla\phi + \frac{1}{2}\sigma\sigma^T: \nabla^2\phi$ of the SDE \eqref{eq:sde0}.\footnote{For two matrices $A$ and $B$ we use the notation $A:B=\trace(A^TB)$.} In what follows we denote by $\cH^*$ the $L^2(dx)$ adjoint of an operator $\cH$ and by $\cH^\sharp$ its $L^2(\pi)$ adjoint. Under appropriate assumptions on the drift and diffusion coefficients, we can prove a strong law of large numbers and a central limit theorem as $T \rightarrow \infty,$ \begin{equation}\label{e:time_conv} \pi_T(\phi):= \frac{1}{T} \int_0^T \phi(X(t)) \, dt \rightarrow \pi(\phi) \quad \mbox{a.e.}, \; \; X_0 = x, \end{equation} and we have the following convergence in law \begin{equation}\label{e:clt} \sqrt{T} (\pi_T(\phi) - \pi(\phi)) \rightarrow \mathcal{N} (0, \sigma_{\phi}^2), \end{equation} where $\sigma^2_{\phi}$ denotes the asymptotic variance of the observable $\phi$, given by the Kipnis-Varadhan formula \begin{equation}\label{e:kipnis} \sigma_{\phi}^2 = \langle \phi - \pi (\phi), (-\cL)^{-1}(\phi - \pi (\phi)) \rangle_{\pi}. \end{equation} Under the assumption that the generator has a spectral gap in $L^2(\pi)$ (see for instance \cite{Mattingly10con}) we have the following exponential convergence \begin{equation}\label{e:exp} \big|{\mathbb E}(\phi(X(t)))-\pi(\phi)\big|\leq Ce^{-\lambda t}, \end{equation} where $\lambda>0$ is the spectral gap of the generator $\cL$. In this paper, we focus on the overdamped Langevin dynamics for sampling \eqref{e:expect}, \begin{equation}\label{e:langevin} d X(t) = f(X(t)) \, dt + \sqrt{2} \, dW_t, \end{equation} where $f(x):=-\nabla V(x)$ and $W_t$ is a standard $d$-dimensional Brownian motion.
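In practice the estimator $\pi_T(\phi)$ of \eqref{e:time_conv} is computed along a discretized trajectory. A minimal sketch (our own illustrative choices of target, observable, and step size; the Euler-Maruyama scheme introduces an $O(\Delta t)$ discretization bias not analyzed here):

```python
import numpy as np

# Euler-Maruyama discretization of the overdamped Langevin dynamics
# dX = -grad V(X) dt + sqrt(2) dW.  With V(x) = |x|^2/2 the invariant
# measure pi is the standard Gaussian, so for phi(x) = |x|^2 the time
# average pi_T(phi) should approach pi(phi) = d, up to O(dt) bias and
# O(1/sqrt(T)) Monte Carlo error.
rng = np.random.default_rng(0)
d, dt, n_steps = 2, 1e-2, 100_000

grad_V = lambda x: x                  # V(x) = |x|^2 / 2
phi = lambda x: float(np.sum(x**2))   # observable with pi(phi) = d

x = np.zeros(d)
acc = 0.0
for _ in range(n_steps):
    x = x - grad_V(x) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(d)
    acc += phi(x)
pi_T = acc / n_steps
print(pi_T)  # -> approximately 2.0 (= d)
```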
The invariant measure of~\eqref{e:langevin} is given by $\pi(dx)=Z^{-1}e^{-V(x)} \, dx,$ where $Z = \int_{\mathbb{R}^d} e^{-V(x)} \, dx$ is the normalization constant and $V:\mathbb{R}^d\rightarrow \mathbb{R}$ is a smooth confining potential. A question that has attracted considerable attention in recent years is the construction of modified Langevin dynamics that have better sampling properties in comparison to the standard overdamped Langevin dynamics \eqref{e:langevin}. Several modifications of the Langevin dynamics~\eqref{e:langevin} that can be used in order to sample from $\pi$ are presented in~\cite[Sec 2.2]{DuncanNuskenPavliotis2017}. A well-known technique that was first introduced in~\cite{Hwang_al1993,Hwang_al2005} and analyzed in a series of recent papers, e.g.~\cite{Bellet_Spiliopoulos_2015,Bellet_Spiliopoulos_2016,LelievreNierPavliotis2013,DuncanLelievrePavliotis2016}, for improving the performance of the Langevin sampler~\eqref{e:langevin}, is to introduce in \eqref{e:langevin} a divergence-free (with respect to the target distribution) drift perturbation $g:\mathbb{R}^d\rightarrow \mathbb{R}^d,$ \begin{equation} \label{eq:sdedet} dX(t) = (f(X(t)) + g(X(t))) dt + \sqrt{2} \,dW_t, \end{equation} such that \begin{equation}\label{eq:divg} \ddiv(g\pi)=0. \end{equation} We will refer to~\eqref{eq:divg} as the divergence-free condition. This condition ensures that the SDE \eqref{eq:sdedet} has the same invariant measure $\pi$ as \eqref{e:langevin}. We remark that there are infinitely many vector fields $g$ that satisfy~\eqref{eq:divg}. A complete characterization of all vector fields that satisfy this condition can be found in~\cite[Prop. 2.2]{Hwang_al2005}. It is by now a standard, and not difficult to prove, result that the nonreversible dynamics exhibits better properties as a sampling scheme, in the sense that the nonreversible perturbation accelerates convergence to equilibrium and reduces the asymptotic variance.
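As a quick numerical sanity check of the divergence-free condition \eqref{eq:divg} (a sketch with an arbitrary smooth potential, not taken from the paper), one can verify that $g = J\nabla V$ with a constant skew-symmetric matrix $J$ satisfies $\nabla\cdot(g\pi)=0$ pointwise:

```python
import numpy as np

def V(x):
    # an arbitrary smooth confining potential on R^2 (illustrative choice)
    return 0.5 * x[0]**2 + 1.5 * x[1]**2 + 0.25 * (x[0] * x[1])**2

def grad_V(x):
    # analytic gradient of V
    return np.array([x[0] + 0.5 * x[0] * x[1]**2,
                     3.0 * x[1] + 0.5 * x[0]**2 * x[1]])

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # constant skew-symmetric matrix

def g_pi(x):
    # components of g*pi with g = J grad(V) and pi proportional to exp(-V)
    return (J @ grad_V(x)) * np.exp(-V(x))

def div_g_pi(x, h=1e-5):
    # central finite-difference divergence of the vector field g*pi
    e = np.eye(2)
    return sum((g_pi(x + h * e[i])[i] - g_pi(x - h * e[i])[i]) / (2 * h)
               for i in range(2))

rng = np.random.default_rng(1)
residuals = [abs(div_g_pi(rng.normal(size=2))) for _ in range(5)]
```

The residuals vanish up to finite-difference error, reflecting the two identities $\nabla V \cdot J \nabla V = 0$ and $\mathrm{trace}(J\nabla^2 V)=0$, which hold for any skew-symmetric $J$ and smooth $V$.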
The generator of the nonreversible dynamics \eqref{eq:sdedet} is given by \begin{equation} \label{eq:defLD} \cL_D = \cL + \cA, \end{equation} where $\cL$ is the generator of \eqref{eq:sde0} and $\cA$ is defined by $\cA\phi=\nabla\phi\cdot g$ (in the calculations below we will use the notation $\cA\phi=\phi'g$). The drawback of the nonreversible Langevin sampler~\eqref{eq:sdedet} is that, since the generator of the dynamics is a nonselfadjoint operator, a transient, oscillatory phase is introduced. This transient behaviour can be addressed, in principle, by the use of an appropriate splitting numerical scheme~\cite{DuncanPavliotisZygalakis2017}. In this paper, we introduce and analyze an alternative way of perturbing the overdamped Langevin dynamics that is itself reversible and enjoys all the advantages of the nonreversible sampler \eqref{eq:sdedet}, while not suffering from the drawback of its oscillatory transient dynamics. The new dynamics is given by the Stratonovich perturbation \begin{equation} \label{eq:sdestrato} dX(t) = f(X(t))\,dt+ g(X(t)) \circ \sqrt{2}\,d\beta_t + \sqrt{2} \, dW_t, \end{equation} where $g$ satisfies the divergence-free condition and we assume that $\beta_t$ is a one-dimensional standard Wiener process that is independent of $W_t$.\footnote{One can also consider Stratonovich perturbations driven by multidimensional Brownian motions with diffusion functions $g^j,$ $j=1,2,\dots$ satisfying $\ddiv(g^j\pi)=0.$ A detailed analysis of such perturbed Langevin dynamics will be presented elsewhere.} For the Stratonovich-perturbed Langevin dynamics~\eqref{eq:sdestrato} we have the following result. \begin{theorem}[Reversibility of the perturbed dynamics] \label{th:reversibility} Consider the perturbed dynamics \eqref{eq:sdestrato}, where $g$ satisfies the divergence-free condition~\eqref{eq:divg}.
Then the generator of \eqref{eq:sdestrato} can be written in the form \begin{equation}\label{eq:defLS} \cL_S = \cL + \cA^2, \end{equation} and $\cL_S$ is symmetric in $L^2(\pi)$, i.e. $\cL_S=\cL_S^\sharp$. \end{theorem} As a consequence of Theorem~\ref{th:reversibility}, the eigenvalues of $\cL_S$ are real, hence there is no transient behaviour of the dynamics. \begin{remark} The proposed modified Langevin sampler~\eqref{eq:sdestrato} can be written in the form of a general reversible diffusion process\footnote{Our results can be extended to cover the case of the preconditioned/Riemannian manifold Markov Chain Monte Carlo Langevin dynamics. The details will be presented elsewhere.} (see~\cite[Ch. 4]{Pavl2014} for the characterization of diffusion processes that are reversible with respect to a given measure): $$ dX(t) = - (M\nabla V)(X(t)) \, dt + (\ddiv M)(X(t)) \, dt + \sqrt{2} D(X(t)) \, d\widehat{W}_t, $$ where $M = I + g g^T=D D^T \in \mathbb{R}^{d \times d}, \, D = (I,g) \in \mathbb{R}^{d \times (d+1)}$ and $\widehat{W}_t = (W_t, \beta_t)$ is a standard $(d+1)$-dimensional Brownian motion. \end{remark} \begin{theorem}[Invariant measure preservation] \label{th:pres_inv_measure} Under the assumptions of Theorem \ref{th:reversibility}, the perturbed dynamics~\eqref{eq:sdestrato} is ergodic with respect to the measure $\pi(dx) = Z^{-1} e^{-V} \, dx$. \end{theorem} \medskip \begin{remark} We note that Theorems \ref{th:reversibility} and \ref{th:pres_inv_measure} remain true for general ergodic SDEs \eqref{eq:sde0} with a Stratonovich perturbation, \begin{equation} dX(t) = f(X(t))\,dt+ g(X(t)) \circ \sqrt{2}\,d\beta_t + \sigma(X(t))dW_t, \end{equation} where $g$ is a divergence-free vector field with respect to $\pi$ and $f:\mathbb{R}^d\rightarrow \mathbb{R}^d$ need not have a gradient structure. This includes, in particular, degenerate diffusions (e.g. when the diffusion matrix $\sigma \sigma^T$ is only positive semidefinite), for example the underdamped Langevin dynamics.
Indeed, the gradient structure is not used in the proofs. Note, however, that in the case where the functional form of $\pi$ is not explicitly known, it can be difficult to compute such a vector field. \end{remark} The next theorem shows that, in comparison to the original overdamped Langevin dynamics~\eqref{e:langevin}, the Stratonovich perturbation yields a larger spectral gap and a reduced asymptotic variance. Similarly to the nonreversible deterministic perturbation \eqref{eq:sdedet}, this leads to an improved reversible sampler for computing~\eqref{e:expect}, both in terms of speeding up the convergence to equilibrium \eqref{e:exp} and in terms of reducing the asymptotic variance \eqref{e:kipnis}. When combined, these results provide us with improved performance when measured in the mean-squared error; see \cite[Sec.2.3]{DuncanNuskenPavliotis2017}. We recall that, under the assumption that the potential $V$ grows sufficiently fast at infinity, the generators of both the standard Langevin and the Stratonovich-perturbed dynamics have purely discrete spectra. \begin{theorem}[Accelerated convergence and reduced asymptotic variance] \label{th:reveribility} Let the assumption of Theorem \ref{th:reversibility} hold and let $\lambda_{L}$ and $\lambda_S$ denote the spectral gaps of the overdamped Langevin~\eqref{e:langevin} and of the Stratonovich-perturbed dynamics~\eqref{eq:sdestrato}, respectively. Then \begin{equation}\label{e:eigs} \lambda_L \leq \lambda_S. \end{equation} Let, furthermore, $\phi \in L^2(\pi)$ and denote the corresponding asymptotic variances by $\sigma^2_L(\phi)$ and $\sigma^2_S(\phi)$. Then \begin{equation}\label{e:var} \sigma^2_L(\phi) \geq \sigma^2_S(\phi).
\end{equation} \end{theorem} \begin{remark} When the target distribution is Gaussian, in particular for the two-dimensional quadratic potential $V(x)=\frac12(x_1^2+\lambda x_2^2)$ with $\lambda \ll 1$, the standard Langevin dynamics \eqref{e:langevin} converges to equilibrium at the very slow rate $\lambda_L=\lambda$, and it was shown in~\cite{LelievreNierPavliotis2013} that a perturbation of the form \begin{equation} \label{eq:gdelta} g(x)=\delta^\theta J\nabla V(x), \qquad J=\begin{psmallmatrix} 0& 1 \\ -1 & 0 \end{psmallmatrix}, \end{equation} with size $\delta \sim \lambda^{-1/2}$ and $\theta=1$ yields, for the nonreversible perturbation \eqref{eq:sdedet}, an optimally improved convergence rate $\lambda_D={\mathcal O}(1)$. For isotropic Gaussians, the optimally reduced asymptotic variance using a nonreversible perturbation can also be calculated~\cite[Sec. 4]{DuncanLelievrePavliotis2016}. Similarly, an improved convergence rate of $\lambda_S={\mathcal O}(1)$ can also be obtained for the reversible perturbation \eqref{eq:sdestrato} with the same scaling $\delta \sim \lambda^{-1/2}$ and $\theta=1/2$. Observe that the factor $\delta^\theta$ in \eqref{eq:gdelta} yields a perturbation of size $\mathcal{O}(\delta)$ of the Langevin generator $\cL$ in both perturbed generators $\cL_D$ in \eqref{eq:defLD} and $\cL_S$ in \eqref{eq:defLS}. It is important to note that the optimal nonreversible perturbation depends on the optimality criterion used, i.e. on whether our aim is to maximize the rate of convergence to equilibrium or to minimize the asymptotic variance (uniformly over the space of square integrable observables). Contrary to this, the optimal reversible perturbation is the same for both optimality criteria. This observation will be explored further in future work, together with a complete analysis of optimal Stratonovich perturbations for Gaussian target distributions.
\end{remark} \section{Proof of the main results} We start by recalling from~\cite{LelievreNierPavliotis2013} that the differential operator $\cA$ is skew-symmetric in $L^2(\pi)$, i.e. $\cA^\sharp=-\cA$. This result follows from an integration by parts and \eqref{eq:divg}. To prove our main results we also use that the Langevin dynamics \eqref{e:langevin} has the generator \begin{equation} \label{eq:defL} \cL \phi = \phi' f + \Delta\phi. \end{equation} {\it Proof of Theorem \ref{th:reversibility}}~ We convert the Stratonovich SDE~\eqref{eq:sdestrato} into an It\^o one: \begin{equation} \label{eq:stratotointo} dX = f(X) dt + g'(X)g(X) dt + g(X) \sqrt{2}\,d\beta_t + \sqrt{2}\, dW_t. \end{equation} Using the calculation $$ \cA^2\phi = (\phi'g)'g = \phi'g'g + \phi''(g,g), $$ we deduce the result \eqref{eq:defLS} by applying formula \eqref{eq:defL} to the SDE \eqref{eq:stratotointo}. An immediate consequence of $\cA^\sharp=-\cA$ is then that $(\cA^2)^\sharp=\cA^2$, i.e. $\cA^2$ is $L^2(\pi)$ symmetric. As $\cL$ itself is $L^2(\pi)$ symmetric, we have that $\cL_S$ is also $L^2(\pi)$ symmetric. $\qed$ \smallskip\\ {\it Proof of Theorem \ref{th:pres_inv_measure}} The $L^2$-adjoint satisfies \begin{equation} \cL_S^* \pi = \cL^* \pi + \cA^{*} (\cA^{*} \pi)=0, \end{equation} where we have used the fact that $\cL^* \pi=\cA^{*} \pi =0$. Hence $\pi$ is the unique invariant measure of the perturbed dynamics \eqref{eq:sdestrato}. $\qed$ \noindent {\it Proof of Theorem \ref{th:reveribility}} We write the generator of the Stratonovich-perturbed dynamics as $\cL_S = - \cB^\sharp \cB - \cA^\sharp \cA$ with $\cB = \nabla, \;\cA = g\cdot \nabla, \; \cA^\sharp = - \cA $. The quadratic form associated to $\cL_S$ is $\langle - \cL_S \phi , \phi \rangle_{\pi} = \|\cB \phi \|_{\pi}^2 + \|\cA \phi \|_{\pi}^2$ for all $\phi \in H^1(\pi)$, the weighted Sobolev space defined in the standard manner.
The quadratic form associated to the generator of the reversible Langevin dynamics $\cL = - \cB^\sharp \cB$ is $\langle - \cL \phi , \phi \rangle_{\pi} = \|\cB \phi \|_{\pi}^2$. Since both $\cL_S$ and $\cL$ are symmetric operators in $L^2(\pi)$ with compact resolvents, the spectral gaps are given by Rayleigh quotient formulas, \begin{eqnarray*} \lambda_{S} = \min_{\phi \in H^1(\pi), \int \phi \pi = 0} \frac{\langle -\cL_S \phi , \phi \rangle_{\pi} }{\|\phi \|^2_{\pi}} = \min_{\phi \in H^1(\pi), \int \phi \pi = 0} \frac{\|\cB \phi \|_{\pi}^2 + \|\cA \phi \|_{\pi}^2 }{\|\phi \|^2_{\pi}} \geq \min_{\phi \in H^1(\pi), \int \phi \pi = 0} \frac{\|\cB \phi \|_{\pi}^2 }{\|\phi \|^2_{\pi}} = \lambda_L. \end{eqnarray*} To prove the bound on the asymptotic variance, we first write the formula for $\sigma^2_S(\phi)$ in the form $\sigma^2_S(\phi) = \langle \psi_S,\phi \rangle_{\pi}$, where $\psi_S$ is the solution of the Poisson equation $-\cL_S \psi_S = \phi,$ and where without loss of generality we have assumed that $\int_{\mathbb{R}^d} \phi \, \pi = 0$. We also consider $\psi_L$, the solution of the Poisson equation $-\cL \psi_L= \phi$, and using $\cL=\cL_S+\cA^\sharp\cA$, we obtain \begin{eqnarray*} \sigma^2_L(\phi) &=& \langle \phi ,\psi_L \rangle_{\pi} = \langle (- \cL_S)\psi_S,\psi_L \rangle_{\pi} = \langle\psi_S, (- \cL )\psi_L \rangle_{\pi} - \langle \cA^2\psi_S,\psi_L \rangle_{\pi} = \langle\psi_S, \phi \rangle_{\pi} + \langle \cA^\sharp\cA\psi_S,\psi_L \rangle_{\pi}\\ &=& \sigma^2_S(\phi) + \langle \cA\psi_S, \cA\psi_L \rangle_{\pi}. \end{eqnarray*} To prove our claim, it is sufficient to show that $\langle \cA\psi_S, \cA\psi_L \rangle_{\pi} \geq 0$.
We calculate, \begin{eqnarray*} \langle \cA\psi_S, \cA\psi_L \rangle_{\pi} & = & \langle \cA\psi_S, \cA (- \cL)^{-1} \phi \rangle_{\pi} = \langle \cA\psi_S, \cA (- \cL)^{-1} ((- \cL) + (- \cA^2))\psi_S \rangle_{\pi} \\ & = & \langle \cA^\sharp \cA\psi_S, (I + (- \cL)^{-1} (- \cA^2))\psi_S \rangle_{\pi} = \|\cA\psi_S \|_{\pi}^2 + \langle (- \cA^2)\psi_S, (- \cL)^{-1} (- \cA^2)\psi_S \rangle_{\pi} \\ & = & \|\cA\psi_S \|_{\pi}^2 + \|\cB \psi \|_{\pi}^2 \geq 0, \end{eqnarray*} with $\psi := (-\cL)^{-1} (- \cA^2)\psi_S$. $\qed$ \begin{remark} Notice also that the perturbation $ \cA^2$ is only negative semidefinite. In particular, the null space of the perturbation is (much) larger than that of the generator of the overdamped Langevin dynamics, which consists of constants. The amount of improvement in the calculation of the integral in~\eqref{e:expect} using the long time average depends on the magnitude of the projection of the observable $\phi$ on the null space of $ \cA^2$. Clearly, if this projection is zero, then the inequality in~\eqref{e:var} is strict. The details of these arguments will be presented elsewhere. \end{remark} \section{Numerical experiments} In this section, we present some numerical experiments to corroborate our theoretical findings and illustrate the features of the Stratonovich-perturbed Langevin dynamics~\eqref{eq:sdestrato}. Although we are primarily interested in high-dimensional problems, we consider for simplicity the following warped Gaussian distribution, as considered in \cite[Sec. 5.2]{DuncanLelievrePavliotis2016}, with density $\pi(x)=Z^{-1}e^{-V(x)}$, where $V(x)$ is the two-dimensional potential $V(x)=\frac{x_1^2}{100}+(x_2+bx_1^2-100b)^2,$ and the parameter $b=0.05$ controls how warped the distribution is. For the purposes of this paper, it is sufficient to consider the family of vector fields $g(x) = J \nabla V(x), \ J = - J^T$, for all constant skew-symmetric matrices $J$.
In particular, we consider the vector field $g(x)$ defined by \eqref{eq:gdelta} and we compare the effect of the nonreversible perturbation with $\theta=1$ in \eqref{eq:sdedet} (Figure \ref{fig:1a}) and the new reversible Stratonovich perturbation with $\theta=1/2$ in \eqref{eq:sdestrato} (Figure \ref{fig:1b}) for several sizes $\delta=16,128,256$ of the perturbation. We also include for reference the results for the standard overdamped Langevin equation \eqref{e:langevin}. \begin{figure}[htb] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=1\linewidth]{wrappeddet} \caption{Nonreversible Langevin dynamics~\eqref{eq:sdedet}.} \label{fig:1a} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=1\linewidth]{wrappedsto} \caption{Stratonovich-perturbed Langevin dynamics~\eqref{eq:sdestrato}.} \label{fig:1b} \end{subfigure} \caption{Error evolution along time of the average over $M=10^3$ trajectories of the nonreversible and the Stratonovich-perturbed Langevin dynamics for different sizes $\delta=0,16,128,256$ of the perturbation.} \label{fig:1} \end{figure} We consider the observable $\phi(x)=x_1^2+x_2^2$ and the estimator $\frac1M\sum_{i=1}^M \phi(X^{(i)}(t)) \simeq {\mathbb E}(\phi(X(t)))$. We take the initial condition $X_0=(0,0)$ and we plot for $M=10^3$ independent realisations $X^{(i)}(t),i=1,\ldots,M$ the error $|\frac1M\sum_{i=1}^M \phi(X^{(i)}(t)) -\pi(\phi)|$ as a function of time $t\in[0,4]$. The solution is approximated using the simple Euler-Maruyama method with a very small stepsize $\Delta t=10^{-5}$ (considering the It\^o formulation \eqref{eq:stratotointo}).
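A stripped-down version of such a simulation can be sketched as follows (a toy illustration, not the paper's actual experiment: the target here is the isotropic Gaussian $V(x)=|x|^2/2$ rather than the warped distribution, and all parameter values are arbitrary). It integrates the It\^o formulation \eqref{eq:stratotointo} with $g(x)=\delta J x$, for which $g'(x)g(x) = -\delta^2 x$, and checks that the empirical covariance of the particles matches the target covariance $I_2$:

```python
import numpy as np

rng = np.random.default_rng(2)
delta, dt, n_steps, M = 1.0, 0.01, 500, 4000   # illustrative parameters
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

X = np.zeros((M, 2))                     # M particles started at the origin
for _ in range(n_steps):
    g = delta * X @ J.T                  # g(x) = delta * J x
    drift = -X - delta**2 * X            # f(x) + g'(x) g(x) = -(1 + delta^2) x
    # one scalar Brownian increment beta per particle, plus a 2-d increment W
    dbeta = np.sqrt(dt) * rng.standard_normal((M, 1))
    dW = np.sqrt(dt) * rng.standard_normal((M, 2))
    X = X + drift * dt + np.sqrt(2.0) * g * dbeta + np.sqrt(2.0) * dW

cov = X.T @ X / M   # empirical covariance; the target covariance is I_2
```

The perturbed dynamics leaves $\pi = \mathcal{N}(0, I_2)$ invariant, so after the short transient the empirical covariance stays close to the identity, up to Monte Carlo and discretization errors.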
We observe that although the speed of the convergence ${\mathbb E}(\phi(X(t)))\rightarrow \pi(\phi)$ as $t\rightarrow \infty$ is very slow for the standard overdamped Langevin dynamics (see the nearly horizontal black curve for $\delta=0$), both perturbations lead to an increase in the speed of the convergence to equilibrium (see the transient phase for small time $t$) while reducing the asymptotic variance (see the equilibrium phase for large time $t\geq2$, where the oscillations are only due to Monte-Carlo errors), which corroborates Theorem \ref{th:pres_inv_measure} and Theorem \ref{th:reveribility}. In addition, the Stratonovich perturbation yields no oscillatory behavior, in contrast to the nonreversible one (see Theorem \ref{th:reversibility}). This feature renders the new sampling scheme more amenable to efficient numerical methods. This will be explored further in a future study. \bibliographystyle{abbrv}
\section{Proof of theorem \ref{thm: DSD}} \label{appendix: proof of DSD} \begin{proof} Let us first define the Stein operator as \begin{equation} \mathcal{S}_{p_Y}[\bm{g}]=\bm{s}_{p_Y}(\bm{y})^T\bm{g}(\bm{y})+\nabla_{\bm{y}}^T\bm{g}(\mathbf{y}) \end{equation} for the test function $\bm{g}(\bm{y})$ and density $p_Y(\mathbf{y})$. Thus, the Stein discrepancy can be rewritten as \begin{equation} \mathcal{S}(q_Y,p_Y)=\sup_{\bm{g}\in\mathcal{H}}\mathbb{E}_{q_Y}[\mathcal{S}_{p_Y}[\bm{g}]] \end{equation} In the following, we will focus on the Stein operator. From the change of variable formula $\bm{y}=\bm{T}(\bm{x})$, we have \begin{equation} p_Y(\mathbf{y})=p_X(\bm{T}^{-1}(\mathbf{y}))\left|\frac{\partial \bm{T}^{-1}(\mathbf{y})}{\partial \mathbf{y}}\right|,\;\;\;\;\; \bm{g}(\mathbf{y})=\bm{f}(\bm{T}^{-1}(\mathbf{y})) \end{equation} Now we can rewrite the Stein operator: \begin{equation} \begin{split} &\mathcal{S}_{p_Y}[\bm{g}]=\nabla_{\mathbf{y}}\log p_Y(\mathbf{y})^T\bm{g}(\mathbf{y})+\nabla_{\mathbf{y}}^T\bm{g}(\mathbf{y})\\ =&\nabla_{\mathbf{y}}\log p_X(\bm{T}^{-1}(\mathbf{y}))^T\bm{g}(\mathbf{y})+(\nabla_{\mathbf{y}}\log \left|\frac{\partial \bm{T}^{-1}(\mathbf{y})}{\partial \mathbf{y}}\right|)^T\bm{g}(\mathbf{y})\\ &+\nabla_{\mathbf{y}}^T\bm{g}(\mathbf{y})\\ =&\left[(\nabla_{\mathbf{y}}\bm{T}^{-1}(\mathbf{y}))^T(\nabla_{\bm{T}^{-1}(\mathbf{y})}\log p_X(\bm{T}^{-1}(\mathbf{y})))\right]^T\bm{g}(\mathbf{y})\\ &+\underbrace{(\nabla_{\mathbf{y}}\log \left|\frac{\partial \bm{T}^{-1}(\mathbf{y})}{\partial \mathbf{y}}\right|)^T\bm{g}(\mathbf{y})}_{\circled{1}}\\ &+Tr[(\nabla_{\mathbf{y}}\bm{T}^{-1}(\mathbf{y}))\nabla_{\bm{T}^{-1}(\mathbf{y})}\bm{f}(\bm{T}^{-1}(\mathbf{y}))] \end{split} \end{equation} The second equality follows from the chain rule and the definition of the divergence operator $\nabla^T$.
For the layout of the matrix calculus, we follow the column vector convention: for functions $h:\mathbb{R}^D\rightarrow \mathbb{R}$ and $\bm{f}:\mathbb{R}^D\rightarrow\mathbb{R}^N$, we have \begin{equation} \begin{split} &\frac{\partial h(\bm{x})}{\partial \bm{x}} = \left[\begin{array}{c} \frac{\partial h(\bm{x})}{\partial x_1} \\ \vdots\\ \frac{\partial h(\bm{x})}{\partial x_D} \end{array}\right]\\ &\frac{\partial \bm{f}(\bm{x})}{\partial \bm{x}}=\left[\begin{array}{ccc} \frac{\partial f_1(\bm{x})}{\partial x_1}&\ldots&\frac{\partial f_1(\bm{x})}{\partial x_D}\\ \vdots&\vdots&\vdots\\ \frac{\partial f_N(\bm{x})}{\partial x_1}&\ldots&\frac{\partial f_N(\bm{x})}{\partial x_D} \end{array}\right] \end{split} \end{equation} Now we focus on the $\circled{1}$ term: \begin{equation} \begin{aligned} &\nabla_{\mathbf{y}} \log |\frac{\partial T^{-1}(\mathbf{y})}{\partial \mathbf{y}}| = Tr[(\nabla_{\mathbf{y}} T^{-1}(\mathbf{y}))^{-1} \nabla_{\mathbf{y}} \nabla_{\mathbf{y}} T^{-1}(\mathbf{y})] \\ &= Tr[\nabla_{\bm{x}}T(\bm{x}) \nabla_{\mathbf{y}} (\nabla_{\bm{x}} T(\bm{x}))^{-1}] \\ &= Tr[\nabla_{\bm{x}}T(\bm{x}) \nabla_{\mathbf{y}} T^{-1}(\mathbf{y}) \nabla_{\bm{x}} (\nabla_{\bm{x}} T(\bm{x}))^{-1}] \\ &= Tr[\nabla_{\bm{x}} (\nabla_{\bm{x}} T(\bm{x}))^{-1}] \end{aligned} \end{equation} where we use the inverse function theorem $\nabla_{\mathbf{y}}\bm{T}^{-1}(\mathbf{y})=(\nabla_{\bm{x}}\bm{T}(\bm{x}))^{-1}$. In addition, we define $\nabla_{\bm{x}}^T (\nabla_{\bm{x}}\bm{T}(\bm{x}))^{-1} = Tr[\nabla_{\bm{x}} (\nabla_{\bm{x}}\bm{T}(\bm{x}))^{-1}]$.
Setting $\bm{m}(\bm{x})=(\nabla_{\bm{x}}\bm{T}(\bm{x}))^{-1}$, we obtain: \begin{equation} \begin{split} &\mathcal{S}_{p_Y}[\bm{g}]=(\bm{m}(\bm{x})^T\bm{s}_p(\bm{x}))^T\bm{f}(\bm{x})+(\nabla_{\bm{x}}^T\bm{m}(\bm{x}))^T\bm{f}(\bm{x})\\ &+Tr[\bm{m}(\bm{x})\nabla_{x}\bm{f}(\bm{x})]\\ &=(\bm{m}(\bm{x})^T\bm{s}_p(\bm{x}))^T\bm{f}(\bm{x})+\nabla_{\bm{x}}^T[\bm{m}(\bm{x})\bm{f}(\bm{x})] \end{split} \end{equation} which is exactly the inner part of the DSD (Eq.\ref{eq: DSD}). Hence, with the change of variable formula, we can show \begin{equation} \mathcal{S}(q_Y,p_Y)=DSD_m(q_X,p_X) \end{equation} \end{proof} \section{Proof of proposition \ref{prop: Riemannian}} \label{appendix: Riemannian manifold} With the definition of the Riemannian manifold $(\mathcal{M},\bm{g})$, for any point $\bm{a}\in\mathcal{M}$ with local coordinates $\bm{x}\in\mathbb{R}^D$, and two vectors $\bm{u},\bm{v}$ from its tangent plane $T_a\mathcal{M}$, we can represent $\bm{u}$, $\bm{v}$ using the basis $(\frac{\partial}{\partial x_i})_{\bm{a}}$ as \begin{equation} \bm{u}=\sum_{i=1}^D{u_i(\frac{\partial}{\partial x_i})_{\bm{a}}},\;\;\;\bm{v}=\sum_{i=1}^D{v_i(\frac{\partial}{\partial x_i})_{\bm{a}}} \end{equation} The inner product defined by the metric $\bm{g}$ can be expressed as \begin{equation} \bm{g}(\bm{u},\bm{v}) = \sum_{i,j}^D{u_iv_j\langle(\frac{\partial}{\partial x_i})_{\bm{a}},(\frac{\partial}{\partial x_j})_{\bm{a}}\rangle_{g}} = \sum_{i,j}^D{u_ig_{ij}(\bm{x})v_j} \end{equation} where $g_{ij}(\bm{x})$ is the $ij$-th element of the matrix $\bm{G}(\bm{x})$ and $\langle\cdot,\cdot\rangle_{g}$ is the inner product defined by the Riemannian metric $\bm{g}$. We assume the measure $\mathcal{M}(\bm{x})$ is absolutely continuous w.r.t.
Lebesgue measure, then we have the following change of variable formula \begin{equation} d\mathcal{M}(\bm{x})=\sqrt{\left|\bm{G}(\bm{x})\right|}d\bm{x} \end{equation} Then we can represent the densities $\tilde{p}$, $\tilde{q}$ under the Lebesgue measure: \begin{equation} p(\bm{x}) = \frac{d\mathbb{P}}{d\mathcal{M}(\bm{x})}\frac{d\mathcal{M}(\bm{x})}{d\bm{x}}=\tilde{p}(\bm{x})\sqrt{\left|\bm{G}(\bm{x})\right|} \end{equation} and $q(\bm{x})$ is defined accordingly. The score matching loss for $\tilde{p}$ and $\tilde{q}$ is \begin{equation} \begin{split} \mathcal{F}_{\mathcal{M}}(\tilde{q},\tilde{p})&=\frac{1}{2}\int \tilde{q}(\bm{x})||\nabla \log \tilde{p}(\bm{x})-\nabla \log \tilde{q}(\bm{x})||_{g}^2d\mathcal{M}(\bm{x})\\ &=\frac{1}{2}\int q(\bm{x})||\nabla \log \tilde{p}(\bm{x})-\nabla \log \tilde{q}(\bm{x})||_{g}^2d\bm{x} \end{split} \label{inter: Riemannian Score matching} \end{equation} Now let us define $\nabla\log \tilde{p}(\bm{x})$. From the basics of Riemannian geometry, for a point $\bm{a}\in\mathcal{M}$ with local coordinate $\bm{x}$ and a vector field $\mathcal{X}$ on $\mathcal{M}$, we have the following definition \begin{equation} \langle \sum_{i=1}^D (\nabla\log \tilde{p}(\bm{x}))_i(\frac{\partial}{\partial x_i})_{\bm{a}},\sum_{j=1}^D{\mathcal{X}_j(\frac{\partial}{\partial x_j})}\rangle_{g}=\sum_{i=1}^D{\mathcal{X}_i\frac{\partial \log \tilde{p}}{\partial x_i}} \end{equation} Written in matrix form, with $\bm{X}=[\mathcal{X}_1,\ldots,\mathcal{X}_D]^T$ and $g_{ij}(\bm{x})$ the elements of the symmetric positive definite matrix $\bm{G}(\bm{x})$, we have \begin{equation} \begin{split} &(\nabla \log \tilde{p})^T\bm{G}(\bm{x})\bm{X} = (\frac{\partial \log \tilde{p}}{\partial \bm{x}})^T\bm{X} \\ \Longrightarrow&\nabla \log \tilde{p} = \bm{G}^{-1}(\bm{x})(\frac{\partial \log \tilde{p}}{\partial \bm{x}}) \end{split} \end{equation} Therefore, we have \begin{equation} \begin{split} &||\nabla \log \tilde{p}(\bm{x})-\nabla\log \tilde{q}(\bm{x})||_g^2\\
=&\langle \nabla \log \tilde{p}(\bm{x})-\nabla\log \tilde{q}(\bm{x}),\nabla \log \tilde{p}(\bm{x})-\nabla\log \tilde{q}(\bm{x})\rangle_{g}\\ =&\langle \bm{G}^{-1}(\bm{x})\underbrace{(\frac{\partial \log \tilde{p}}{\partial \bm{x}}-\frac{\partial \log \tilde{q}}{\partial \bm{x}})}_{\tilde{\Delta}(\bm{x})},\bm{G}^{-1}(\bm{x})(\frac{\partial \log \tilde{p}}{\partial \bm{x}}-\frac{\partial \log \tilde{q}}{\partial \bm{x}})\rangle_{g}\\ =&\tilde{\Delta}(\bm{x})^T\bm{G}^{-1}(\bm{x})\bm{G}(\bm{x})\bm{G}^{-1}(\bm{x})\tilde{\Delta}(\bm{x})\\ =&\tilde{\Delta}(\bm{x})^T\bm{G}^{-1}(\bm{x})\tilde{\Delta}(\bm{x}) \end{split} \end{equation} By the change of variable formula, it is also easy to show that \begin{equation} \tilde{\Delta}(\bm{x})=\underbrace{(\frac{\partial \log {p}}{\partial \bm{x}}-\frac{\partial \log {q}}{\partial \bm{x}})}_{\Delta(\bm{x})} \end{equation} Therefore, we have \begin{equation} ||\nabla \log \tilde{p}(\bm{x})-\nabla\log \tilde{q}(\bm{x})||_g^2=\Delta^T(\bm{x})\bm{G}^{-1}(\bm{x})\Delta(\bm{x}) \end{equation} Substituting back into $\mathcal{F}_{\mathcal{M}}(\tilde{q},\tilde{p})$ (Eq.~\ref{inter: Riemannian Score matching}), we obtain the result. In particular, comparing with the \emph{diffusion Fisher divergence} (Eq.~\ref{eq: DSM}), we observe that if $\bm{G}(\bm{x})=\bm{m}(\bm{x})^{-T}\bm{m}(\bm{x})^{-1}$, then $\mathcal{F}_{\mathcal{M}}(\tilde{q},\tilde{p})$ is equivalent to the \emph{diffusion Fisher divergence}. Indeed, as $\bm{m}(\bm{x})\in\mathbb{R}^{D\times D}$ is an invertible matrix, $\bm{m}(\bm{x})^{-T}\bm{m}(\bm{x})^{-1}$ must be symmetric positive definite, which satisfies the requirements on $\bm{G}(\bm{x})$. \section{Proof of proposition \ref{prop: continuous DSM}} \label{appendix: proof of continous DSM} An ODE flow is defined by the solution of an ODE: \begin{equation} d\bm{x} = \bm{g}(\bm{x})dt \end{equation} with $\bm{g}(\bm{x})$ the deterministic drift term.
Let us consider the forward Euler discretisation of the ODE, which gives \begin{equation} \bm{x}(t+\delta) = \bm{x}(t) + \delta \bm{g}(\bm{x}(t)) := \bm{T}_{\delta}(\bm{x}(t)). \end{equation} For $\delta$ small enough, $\bm{T}_\delta$ is an invertible transformation. Now consider $\mathbf{y} = \bm{x}(t+\delta)$ and $\bm{x}(t) = \bm{x}$. This again pushes $p_X(\bm{x})$ and $q_X(\bm{x})$ to $p_Y(\mathbf{y})$ and $q_Y(\mathbf{y})$, respectively. Therefore we can reuse results from theorem \ref{thm: DSM} and derive \begin{equation} \mathcal{F}(p_Y, q_Y) = \frac{1}{2}\mathbb{E}_{q_X}[||\bm{m}(\bm{x})^T(\bm{s}_{p_X}(\bm{x})-\bm{s}_{q_X}(\bm{x}))||^2], \end{equation} where $\bm{m}(\bm{x}) = (\nabla_{\bm{x}}\bm{T}_{\delta}(\bm{x}))^{-1}$. Notice that $T_{\delta}(\bm{x}) = \bm{x}$ when $\delta = 0$. This means we can compute the change of the score matching loss at time $t$ as: \begin{equation} \begin{aligned} &\frac{\partial}{\partial t} \mathcal{F}(p_Y, q_Y) = \lim_{\delta \rightarrow 0^{+}} \frac{\mathcal{F}(p_Y, q_Y) - \mathcal{F}(p_X, q_X)}{\delta} \\ &= \frac{1}{2}\lim_{\delta \rightarrow 0^{+}} \mathbb{E}_{q_X(\bm{x})}[\Delta(\bm{x})^\top \delta^{-1}(m(\bm{x})m(\bm{x})^{\top} - \mathbf{I}) \Delta(\bm{x})], \end{aligned} \end{equation} with $\Delta(\bm{x}) = \nabla_{\bm{x}}\log p_X(\bm{x}) - \nabla_{\bm{x}}\log q_X(\bm{x})$.
As $\nabla_{\bm{x}} T_{\delta}(\bm{x}) = \mathbf{I} + \delta \nabla_{\bm{x}} \bm{g}(\bm{x})$, a simple calculation shows that \begin{equation} \begin{aligned} &\delta^{-1}(m(\bm{x})m(\bm{x})^{\top} - \mathbf{I}) \\ =& \delta^{-1}[[(\mathbf{I} + \delta \nabla_{\bm{x}} \bm{g}(\bm{x}))^\top(\mathbf{I} + \delta \nabla_{\bm{x}} \bm{g}(\bm{x}))]^{-1} - \mathbf{I}] \\ =& \delta^{-1}[[\mathbf{I} + \delta (\nabla_{\bm{x}}\bm{g}(\bm{x}) + \nabla_{\bm{x}} \bm{g}(\bm{x})^\top) + \mathcal{O}(\delta^2)]^{-1} - \mathbf{I}] \\ =& -[\nabla_{\bm{x}}\bm{g}(\bm{x}) + \nabla_{\bm{x}} \bm{g}(\bm{x})^\top + \mathcal{O}(\delta)]\\ &[\mathbf{I} + \delta (\nabla_{\bm{x}}\bm{g}(\bm{x}) + \nabla_{\bm{x}} \bm{g}(\bm{x})^\top) + \mathcal{O}(\delta^2)]^{-1}, \end{aligned} \end{equation} which leads to \begin{equation} \begin{aligned} &\frac{\partial}{\partial t} \mathcal{F}(p_Y, q_Y) \\ =& \lim_{\delta \rightarrow 0^{+}} \frac{1}{2}\mathbb{E}_{q_X(\bm{x})}[\Delta(\bm{x})^\top \delta^{-1}(m(\bm{x})m(\bm{x})^{\top} - \mathbf{I}) \Delta(\bm{x})] \\ =& -\frac{1}{2}\mathbb{E}_{q_X(\bm{x})}[\Delta(\bm{x})^\top ( \nabla_{\bm{x}}\bm{g}(\bm{x}) + \nabla_{\bm{x}} \bm{g}(\bm{x})^\top ) \Delta(\bm{x})] \end{aligned} \end{equation} As this quantifies the instantaneous change, replacing $p_Y$, $p_X$, $q_Y$ and $q_X$ by $p_t$ and $q_t$ gives the instantaneous change of the score matching loss. \section{Additional plots} \label{apendix: additional plot} \begin{figure*} \centering \includegraphics[scale=0.25]{Transformed_Density.pdf} \caption{\textbf{Left}: The log likelihood plot for the original densities $q$, $p$. \textbf{Middle}: The log likelihood function for the transformed density $p_Y$. \textbf{Right}: The log likelihood function for $q_Y$. We choose $\theta = -2.5$ and $b=0.6$. Notice that the transformed densities $p_Y$, $q_Y$ are periodic as we consider $y\in\mathbb{R}$. This would not happen if we instead consider $\bm{y}=\bm{T}(\bm{x})$, because all $\bm{x}$ values will be squeezed inside the period containing $0$, i.e.
$\bm{y}$ will lie inside $[-3.37,3.37]$ in this case. } \label{fig:transformed density} \end{figure*} From the motivating example and theorem \ref{thm: DSM}, we know $\bm{m}(\bm{x}) = (1+\frac{(\bm{x}-\theta)^2}{b})$. Therefore, by simple calculus, the corresponding transformation $y=\bm{T}(\bm{x})$ can be defined as \begin{equation} \begin{split} \mathbf{y}&=\bm{T}(\bm{x}) = \frac{1}{b\sqrt{b}}\tan^{-1}(\frac{\bm{x}-\theta}{\sqrt{b}})\\ \bm{x} &= \bm{T}^{-1}(\mathbf{y}) = \sqrt{b}\tan(b\sqrt{b}\mathbf{y})+\theta \end{split} \end{equation} Let us define the transformed densities $p_Y(\mathbf{y})$ and $q_Y(\mathbf{y})$ as \begin{equation} \begin{split} p_Y(\mathbf{y}) &= p(\bm{T}^{-1}(\mathbf{y}))\left|\nabla_{\mathbf{y}}\bm{T}^{-1}(\mathbf{y})\right|\\ q_Y(\mathbf{y}) &= q(\bm{T}^{-1}(\mathbf{y}))\left|\nabla_{\mathbf{y}}\bm{T}^{-1}(\mathbf{y})\right| \end{split} \end{equation} Therefore, we can plot the log likelihood for the original densities $p$, $q$ and the transformed densities $p_Y$, $q_Y$ as in Figure \ref{fig:transformed density}. In this case, we set $p(\bm{x})$ to have mean $-2.5$ and the same scale $0.3$ as $q$, whereas $q$ has mean $0$. For the transformation $\bm{T}$, we set $\theta=-2.5$ with $b=0.6$. \section{Robustness of Gaussian flow} \label{appendix: robustness} \wg{Here, we investigate the robustness of the diffusion matrix w.r.t. the degrees-of-freedom (DoF) of the Student-t distribution. We adopt similar settings to the motivating example (Section \ref{subsec: motivating example}), where $q$ and $p_\theta$ are Student-t distributions with the same scale parameter. We vary their DoF together to investigate the changes in the DSM loss. Because the manual flow lacks a proper interpretation, it is difficult to adapt it to changes in the DoF. Thus, we use the same $\bm{m}(\bm{x})=1+\frac{(\bm{x}-\theta)^2}{0.6}$ for all DoF. On the other hand, the Gaussian flow is designed to transform a Student-t distribution into a standard Gaussian.
So it can be easily adapted to changes in the DoF by using the corresponding $F_\theta$. Figure \ref{fig: Robustness} plots the DSM losses with both the manual flow and the Gaussian flow. From the top panel, we can clearly observe that the manual flow only works with $\text{DoF}=5$. For other DoF, the corresponding DSM fails to recover the ground truth $\theta$. On the other hand, the bottom panel shows that the Gaussian flow is robust to changes in the DoF, and consistently gives the correct ground truth $\theta$ with a wide fast-convergence region. We emphasize again that for both flows, the resulting $DSM_m(q, p_\theta)$ is not equivalent to $\mathcal{F}_m(q, p_\theta)$ (Eq.~\ref{eq: DSM}) as $C_{q, m}$ in Eq.~\ref{eq: DSM_expand} is dependent on $\theta$. However, unlike the manually designed flow of \citet{barp2019minimum}, the Gaussian flow returns surrogate losses that have only one global optimum at the desired solution in all cases considered. Future work will evaluate the Gaussian flow construction of DSM objectives beyond the Student-t case for further understanding. } \begin{figure}[b] \centering \includegraphics[scale=0.27]{Robustness_Plot.pdf} \caption{The $DSM_m(q, p_\theta)$ losses using $p_\theta$, $q$ with different degrees-of-freedom (DoF). They are plotted when $m(\bm{x})$ is constructed using the manual flow (top) or the Gaussian flow (bottom).} \label{fig: Robustness} \end{figure} \section{Background: Diffusion Stein discrepancy} \label{Sec: background} \subsection{Score matching and Stein discrepancy} Let $\mathcal{P}$ be the space of Borel probability measures on $\mathbb{R}^D$ and let $\mathbb{Q}\in\mathcal{P}$ be a probability measure. The objective of model learning is to find a sequence of probability measures $\{\mathbb{P}_{\theta}:\theta\in\Theta\}\subset \mathcal{P}$ that approximates $\mathbb{Q}$ in an appropriate sense.
One common way to achieve this is by defining a discrepancy measure $\mathcal{D}:\mathcal{P}\times\mathcal{P}\rightarrow \mathbb{R}$, which quantifies the difference between two probability measures. The optimal parameters $\theta^*$ can then be obtained by $\theta^*=\text{argmin}_{\theta}\mathcal{D}(\mathbb{Q}||\mathbb{P}_{\theta})$. The choice of discrepancy depends on the properties of the probability measures, as well as on efficiency and robustness considerations. The one we focus on is the \emph{Fisher divergence}. Assume that the probability measures $\mathbb{Q}$ and $\mathbb{P}_\theta$ have corresponding twice differentiable densities $q(\bm{x})$ and $p_{\theta}(\bm{x})$. The \emph{Fisher divergence} \citep{johnson2004information} is defined as \begin{equation} \mathcal{F}(q,p_\theta)=\frac{1}{2}\mathbb{E}_{q}[||\bm{s}_p(\bm{x})-\bm{s}_q(\bm{x})||^2] \label{eq: Fisher divergence} \end{equation} where $\bm{s}_p(\bm{x})=\nabla_{\bm{x}}\log p_\theta(\bm{x})$ is called the score of $p_\theta$, and $\bm{s}_q$ is defined accordingly. Although $q$ typically denotes the underlying data density, whose score $\bm{s}_q$ is intractable, $\bm{s}_q$ in fact acts as a constant with respect to the parameter $\theta$. Thus, one can use integration by parts to derive the following: \begin{equation} \mathcal{F}(q,p_\theta) =\underbrace{\mathbb{E}_q \left[\frac{1}{2}||\bm{s}_p(\bm{x})||^2+Tr(\nabla_{\bm{x}}\bm{s}_p(\bm{x})) \right]}_{SM(q, p_\theta)} +C_{q} \label{eq: Score matching} \end{equation} with $C_q$ a constant w.r.t.~$\theta$. This equivalent objective $SM(q, p_\theta)$ is referred to as \emph{score matching} \citep{hyvarinen2005estimation}.
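The identity $\mathcal{F}(q,p_\theta) = SM(q,p_\theta) + C_q$ can be verified numerically when both sides are available in closed form, e.g.\ for one-dimensional Gaussians. The following sketch (a minimal illustration; the means, scale, and sample size are arbitrary choices, and for a Gaussian $q$ with variance $\sigma^2$ the constant is $C_q = \frac{1}{2}\mathbb{E}_q[\|\bm{s}_q\|^2] = 1/(2\sigma^2)$) estimates $SM(q,p_\theta)$ by Monte Carlo and compares it against the closed-form Fisher divergence minus $C_q$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_q, theta, sigma = 0.0, 1.3, 0.8
x = rng.normal(mu_q, sigma, size=200_000)  # i.i.d. samples from q

# Score of p_theta = N(theta, sigma^2) and its derivative
s_p = -(x - theta) / sigma**2
ds_p = -1.0 / sigma**2

# Monte Carlo estimate of SM(q, p_theta) = E_q[0.5*||s_p||^2 + Tr(grad s_p)]
sm = np.mean(0.5 * s_p**2 + ds_p)

# Closed forms: Fisher divergence and the theta-independent constant C_q
fisher = 0.5 * (mu_q - theta)**2 / sigma**4
c_q = 0.5 / sigma**2

print(abs(sm - (fisher - c_q)))  # small: SM matches F - C_q up to MC error
```

Note that the agreement holds for any $\theta$, since $C_q$ does not depend on $\theta$; this is precisely what makes $SM(q,p_\theta)$ a usable training objective.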
Another discrepancy measure we are interested in is the \emph{Stein discrepancy}, which is defined as \begin{equation} \mathcal{S}(q,p_\theta) = \sup_{\bm{f}\in\mathcal{H}}\mathbb{E}_{q}[\bm{s}_p(\bm{x})^T\bm{f}(\bm{x})+\nabla_{\bm{x}}^T\bm{f}(\bm{x})] \label{eq: Stein discrepancy} \end{equation} where $\bm{f}:\mathbb{R}^D\rightarrow\mathbb{R}^D$ is a test function, and $\mathcal{H}$ is an appropriate test function family, e.g.\ a reproducing kernel Hilbert space \cite{liu2016kernelized,chwialkowski2016kernel} or a Stein class \citep{gorham2017measuring,liu2016kernelized}. Recent work \citep{hu2018stein} proved a connection between the Stein discrepancy and the Fisher divergence by deriving the optimal test function: \begin{equation} \bm{f}^*(\bm{x})\propto \bm{s}_p(\bm{x})-\bm{s}_q(\bm{x}). \label{eq: optimal test function for Stein} \end{equation} Thus, the Stein discrepancy is equivalent to the Fisher divergence up to a multiplicative constant. \citet{barp2019minimum,gorham2019measuring} further extended score matching and Stein discrepancy by incorporating a diffusion matrix $\bm{m}(\bm{x}):\mathbb{R}^D\rightarrow \mathbb{R}^{D\times D}$. This starts by defining the \emph{diffusion Fisher divergence} \begin{equation} \mathcal{F}_{m}(q,p_\theta)=\frac{1}{2}\mathbb{E}_q[||\bm{m}(\bm{x})^T(\bm{s}_p(\bm{x})-\bm{s}_q(\bm{x}))||^2] \label{eq: DSM} \end{equation} where $\bm{m}(\bm{x})$ is a matrix-valued function. Expanding eq.~\ref{eq: DSM} and applying integration by parts (writing $\bm{m}$ for $\bm{m}(\bm{x})$ and $\bm{s}_p$ for $\bm{s}_p(\bm{x})$): \begin{equation} \mathcal{F}_{m}(q,p_\theta)= \underbrace{\mathbb{E}_q \left[ \frac{1}{2} ||\bm{m}^T\bm{s}_p||^2 + \nabla^\top (\bm{m} \bm{m}^\top \bm{s}_p)\right]}_{DSM_m(q, p_{\theta})} + C_{q, m}, \label{eq: DSM_expand} \end{equation} where $C_{q, m}$ depends on both $q$ and $\bm{m}(\bm{x})$.
Similar to the derivation of $SM(q, p_\theta)$, this yields an alternative diffusion score matching (DSM) objective $DSM_m(q, p_{\theta})$. Similarly, the diffusion Stein discrepancy (DSD) is defined as \begin{equation} \hspace{-1em} \begin{aligned} &DSD_m(q,p_\theta)\\ &= \sup_{\bm{f}\in\mathcal{H}}\mathbb{E}_q[(\bm{m}(\bm{x})^T\bm{s}_p(\bm{x}))^T\bm{f}(\bm{x})+\nabla_{\bm{x}}^T(\bm{m}(\bm{x})\bm{f}(\bm{x}))] \end{aligned} \label{eq: DSD} \end{equation} It can be shown that as long as $\bm{m}(\bm{x})$ is invertible, $\mathcal{F}_m(q,p_\theta)$ and $DSD_m(q,p_\theta)$ are valid divergences. These two extensions have demonstrated superior performance when dealing with certain types of distributions. In the following, we give a motivating example similar to \citet{barp2019minimum}. \subsection{Motivating example: Student-t distribution} \label{subsec: motivating example} Assume $q$ and $p_\theta$ are one-dimensional Student-t distributions. The goal is to approximate $q$ by $p_\theta$. The training set consists of $300$ i.i.d.\ samples drawn from $q$ with mean $0$ and scale $0.3$. We assume the scale parameter of $p_\theta$ is the same as that of $q$, and the only trainable parameter $\theta$ is the mean. The degrees of freedom are $5$ for both $q$ and $p_\theta$. \begin{figure*} \centering \includegraphics[scale=0.16]{Three_Figure_Combined.pdf} \caption{The $SM(q, p_\theta)$ and $DSM_m(q, p_\theta)$ losses computed with different mean parameters $\theta$. \textbf{Left}: The \textcolor{BurntOrange}{orange line} plots the vanilla SM loss between $q$ and $p_\theta$. The arrow indicates the gradient descent direction of $\theta$. The \textcolor{red}{red dot $\bullet$} is the ground truth for $\theta$. \textbf{Middle}: The \textcolor{Blue}{blue line} plots the DSM loss with \textcolor{Blue}{$\bm{m}(\bm{x})=1+\frac{(\bm{x}-\theta)^2}{0.6}$}. The \textcolor{Blue}{blue rectangle} indicates the region with large gradient descent magnitude (fast convergence).
\textbf{Right}: The \textcolor{red}{red line} plots the DSM loss with the Gaussian flow. The \textcolor{red}{red rectangle} indicates the fast convergence region.} \label{fig:Student-t} \end{figure*} The left panel of figure \ref{fig:Student-t} shows the score matching loss computed for different $\theta$. We can observe that the original $SM(q, p_\theta)$ loss is highly non-convex, and the loss value does not correlate well with the likelihood. Indeed, we can see the true location $\theta = 0$ is protected by two high ``walls''. In other words, unlike the maximum likelihood estimator, a parameter $\theta$ that is closer to the ground truth does not necessarily produce a low SM loss. One important consequence is that unless the initialized $\theta$ is within the narrow \textcolor{BurntOrange}{valid region}, gradient-based optimization will never recover the truth. On the other hand, the middle panel of figure \ref{fig:Student-t} shows that if we choose $\bm{m}(\bm{x}) = (1+\frac{(\bm{x}-\theta)^2}{0.6})$ (Manual Flow) as the diffusion matrix, the corresponding \textcolor{Blue}{$DSM_m(q,p_\theta)$} loss is convex. The ground truth can be recovered by minimizing DSM with a proper gradient-based optimizer. \wg{However, this $DSM_m(q, p_\theta)$ is only a surrogate objective for learning $\theta$ because the diffusion matrix $\bm{m}$ contains $\theta$. Thus, the dropped term $C_{q, m}$ in eq.\ref{eq: DSM_expand} is no longer a constant. Although one can treat the $\theta$ in $\bm{m}$ as a constant during training and ignore its contribution when taking the derivative, this is equivalent to using a different loss after each $\theta$ update. We leave its convergence analysis for future work. } The selection of the diffusion matrix is crucial to the success of the estimator. Unfortunately, the interpretation of this matrix is unclear, let alone a selection algorithm. In the following, we aim to shed light on this problem by connecting the diffusion matrix with normalizing flows.
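The loss landscapes in this motivating example can be reproduced with a short numerical sketch. The snippet below is an illustrative approximation only: the sample size is an arbitrary choice, and the divergence term $\nabla^\top(\bm{m}\bm{m}^\top\bm{s}_p)$ is evaluated by central finite differences rather than analytically. It evaluates $SM(q,p_\theta)$ and the surrogate $DSM_m(q,p_\theta)$ with $\bm{m}(\bm{x})=1+\frac{(\bm{x}-\theta)^2}{0.6}$ over a grid of $\theta$:

```python
import numpy as np

rng = np.random.default_rng(0)
nu, scale = 5, 0.3
x = scale * rng.standard_t(nu, size=5000)  # samples from q (mean 0)

def score_p(z, theta):
    # score of a Student-t density with df=nu, loc=theta, scale=scale
    return -(nu + 1) * (z - theta) / (nu * scale**2 + (z - theta)**2)

def sm_loss(theta):
    s = score_p(x, theta)
    ds = -(nu + 1) * (nu * scale**2 - (x - theta)**2) \
         / (nu * scale**2 + (x - theta)**2)**2
    return np.mean(0.5 * s**2 + ds)

def dsm_loss(theta, eps=1e-4):
    m2 = lambda z: (1 + (z - theta)**2 / 0.6)**2
    g = lambda z: m2(z) * score_p(z, theta)
    div = (g(x + eps) - g(x - eps)) / (2 * eps)  # finite-difference divergence
    return np.mean(0.5 * m2(x) * score_p(x, theta)**2 + div)

thetas = np.linspace(-3, 3, 121)
sm_vals = np.array([sm_loss(t) for t in thetas])
dsm_vals = np.array([dsm_loss(t) for t in thetas])
# The DSM curve is minimized near the ground truth theta = 0, whereas the
# SM curve shows the narrow valley around 0 flanked by high walls.
```

Plotting `sm_vals` and `dsm_vals` against `thetas` recovers the qualitative behaviour of the left and middle panels described above.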
\section{Diffusion matrix as normalizing flow} \label{Sec: normalizing flow} \subsection{Interpreting DSM/DSD using normalizing flow} Assume we have two twice differentiable densities $q_X(\bm{x})$ and $p_X(\bm{x})$ defined on $\mathbb{R}^D$. We further define a differentiable invertible transformation $\bm{T}(\bm{x}):\mathbb{R}^D\rightarrow \mathbb{R}^D$: \begin{equation} \bm{y}=\bm{T}(\bm{x}) \end{equation} with the corresponding induced densities $q_Y(\bm{y})$ and $p_Y(\bm{y})$. We can prove the following theorem: \begin{theorem} \label{thm: DSM} For twice differentiable densities $q_X(\bm{x})$, $p_X(\bm{x})$ and an invertible differentiable transformation $T:\mathbb{R}^D\rightarrow\mathbb{R}^D$, the diffusion Fisher divergence (Eq.\ref{eq: DSM}) is equivalent to the original Fisher divergence in $\bm{y}$ space: \begin{equation} \mathcal{F}(q_Y,p_Y) = \frac{1}{2}\mathbb{E}_{q_Y}[||\bm{s}_{p_Y}(\bm{y})-\bm{s}_{q_Y}(\bm{y})||^2] \end{equation} where $\bm{y}=T(\bm{x})$, and $p_Y$, $q_Y$ are the corresponding densities after the transformation. The diffusion matrix is $\bm{m}(\bm{x}) = (\nabla_{\bm{x}}\bm{T}(\bm{x}))^{-1}$, the inverse of the Jacobian matrix. \end{theorem} \begin{proof} From the change of variables formula, the corresponding densities $p_Y(\mathbf{y})$, $q_Y(\mathbf{y})$ can be defined as: \begin{equation*} \begin{aligned} p_Y(\mathbf{y}) &= p_X(T^{-1}(\mathbf{y})) |\frac{\partial T^{-1}(\mathbf{y})}{\partial \mathbf{y}}|, \\ q_Y(\mathbf{y}) &= q_X(T^{-1}(\mathbf{y})) |\frac{\partial T^{-1}(\mathbf{y})}{\partial \mathbf{y}}|.
\end{aligned} \end{equation*} Then the Fisher divergence $\mathcal{F}(q_Y, p_Y)$ is formulated as: \begin{equation} \begin{aligned} &\mathcal{F}(q_Y, p_Y) := \frac{1}{2}\mathbb{E}_{q_Y}[||\nabla_{\mathbf{y}} \log p_Y(\mathbf{y}) - \nabla_{\mathbf{y}} \log q_Y(\mathbf{y}) ||_2^2] \\ =& \frac{1}{2}\mathbb{E}_{q_Y}[||\nabla_{\mathbf{y}} \log p_X(T^{-1}(\mathbf{y})) - \nabla_{\mathbf{y}} \log q_X(T^{-1}(\mathbf{y})) ||_2^2] \\ =& \frac{1}{2}\mathbb{E}_{q_Y}[|| \nabla_{\mathbf{y}} T^{-1}(\mathbf{y})^\top \\ &(\nabla_{T^{-1}(\mathbf{y})} \log p_X(T^{-1}(\mathbf{y})) - \nabla_{T^{-1}(\mathbf{y})} \log q_X(T^{-1}(\mathbf{y})) ) ||_2^2] \\ =& \frac{1}{2}\mathbb{E}_{q_X}[|| (\nabla_{\bm{x}} T(\bm{x}))^{-\top} (\nabla_{\bm{x}} \log p_X(\bm{x}) - \nabla_{\bm{x}} \log q_X(\bm{x}) ) ||_2^2], \end{aligned} \end{equation} where the last step comes from changing the variable to $\bm{x} = T^{-1}(\mathbf{y})$ and noticing that $\nabla_{\mathbf{y}} T^{-1}(\mathbf{y}) = (\nabla_{\bm{x}} T(\bm{x}))^{-1}$ by the inverse function theorem. This objective coincides with the \emph{diffusion Fisher divergence} (Eq.\ref{eq: DSM}). Importantly, $\mathcal{F}_m(q_X, p_X)$ is a valid divergence (i.e.~$\mathcal{F}_m(p_X, q_X) = 0$ if and only if $p_X = q_X$) when $m(\bm{x})$ is an invertible matrix for every $\bm{x}$. As normalizing flow transformations naturally give invertible Jacobian matrices, we can easily establish the connection $\mathcal{F}(q_Y, p_Y) = \mathcal{F}_m(q_X, p_X)$ with $m(\bm{x}) = (\nabla_{\bm{x}} T(\bm{x}))^{-1}$. \end{proof} We also include the likelihood plots after the transformation in Appendix \ref{apendix: additional plot}. Similarly, we can prove the connection between DSD (Eq.\ref{eq: DSD}) and normalizing flows. The proof is in appendix \ref{appendix: proof of DSD}.
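Before stating the analogous result for DSD, theorem \ref{thm: DSM} can also be checked numerically. The sketch below is a toy verification with hypothetical choices: unit-variance Gaussian densities and the transform $T(x)=\sinh(x)$, whose Jacobian is $\cosh(x)$. It computes the Fisher divergence in $\bm{y}$ space using the explicit change-of-variables scores, and the diffusion Fisher divergence in $\bm{x}$ space with $m(x) = (T'(x))^{-1}$; the two Monte Carlo estimates coincide:

```python
import numpy as np

rng = np.random.default_rng(1)
mu_p, mu_q = 0.7, 0.0          # p_X = N(0.7, 1), q_X = N(0, 1)
s_p = lambda z: -(z - mu_p)    # unit-variance Gaussian scores
s_q = lambda z: -(z - mu_q)

# Transform T(x) = sinh(x); T'(x) = cosh(x); T^{-1}(y) = arcsinh(y)
x = rng.normal(mu_q, 1.0, size=100_000)
y = np.sinh(x)

def score_Y(score_X, y):
    # score of the pushed-forward density: change of variables adds the
    # derivative of the log-Jacobian, which is -y/(1+y^2) for this transform
    z = np.arcsinh(y)
    return score_X(z) / np.cosh(z) - y / (1 + y**2)

lhs = 0.5 * np.mean((score_Y(s_p, y) - score_Y(s_q, y))**2)  # F(q_Y, p_Y)
m = 1.0 / np.cosh(x)                                         # (grad T)^{-1}
rhs = 0.5 * np.mean((m * (s_p(x) - s_q(x)))**2)              # F_m(q_X, p_X)
print(lhs, rhs)
```

The log-Jacobian terms are shared by $p_Y$ and $q_Y$ and cancel in the score difference, which is exactly the mechanism exploited in the proof.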
\begin{theorem} For twice differentiable densities $q_X(\bm{x})$, $p_X(\bm{x})$, an invertible differentiable transformation $\bm{T}(\bm{x}):\mathbb{R}^D\rightarrow \mathbb{R}^D$ and a differentiable test function $\bm{f}:\mathbb{R}^D\rightarrow \mathbb{R}^D$ in a suitable test function family $\mathcal{H}$, the diffusion Stein discrepancy (Eq.\ref{eq: DSD}) is equivalent to the original Stein discrepancy \begin{equation} \mathcal{S}(q_Y,p_Y)=\sup_{\bm{g}\in\mathcal{H}'}\mathbb{E}_{q_Y}[\bm{s}_{p_Y}(\bm{y})^T\bm{g}(\bm{y})+\nabla_{\bm{y}}^T\bm{g}(\bm{y})] \end{equation} where $\bm{g}(\mathbf{y})=\bm{f}(\bm{T}^{-1}(\bm{y}))$, $\mathcal{H}'$ is the corresponding function space for $\bm{g}$, and $p_Y$, $q_Y$ are the densities transformed by $\bm{T}(\cdot)$. The diffusion matrix is $\bm{m}(\bm{x})=(\nabla_{\bm{x}}\bm{T}(\bm{x}))^{-1}$. \label{thm: DSD} \end{theorem} Based on the above two theorems, we formally establish the connection between the \emph{diffusion Fisher divergence}/\emph{DSD} and normalizing flows. This gives us an interpretation of the diffusion matrix as the inverse of the Jacobian matrix defined by the flow. \subsection{Better flow design} \wg{ Based on this interpretation, we aim to give a better design for the diffusion matrix $\bm{m}$. Here, we design a flow $\bm{T}_G(\cdot)$ that transforms the Student-t distribution $p_\theta$ to a standard Gaussian $\mathcal{N}(0,1)$, which we name the \textbf{Gaussian flow}: \begin{equation} \bm{T}_G(\bm{x}) = F_G^{-1}\circ F_{\theta}(\bm{x}). \label{eq: Gaussian Flow} \end{equation} Here $F_G$ and $F_\theta$ are the cumulative distribution functions of $\mathcal{N}(0,1)$ and $p_\theta$, respectively. We plot the corresponding $DSM_m(q, p_\theta)$ loss in the right panel of Figure \ref{fig:Student-t}. Both the manually designed flow and the Gaussian flow can recover the ground truth $\theta$ regardless of initialization. However, the Gaussian flow allows faster convergence during training.
The fast convergence region is the region where the gradient of the DSM w.r.t.\ $\theta$ has a magnitude greater than $1$. The Gaussian flow has a much wider region than the manual flow: the lengths of the regions are $4.44$ for the manual flow and $10.56$ for the Gaussian flow (more than $2$ times wider). For high dimensional distributions, the volume of this region can scale as $O(2^D)$, which can have a significant impact on convergence speed. Another advantage of this systematic design of the diffusion matrix is its robustness, which is further discussed in Appendix \ref{appendix: robustness}. } \subsection{Interpreting DSM using Riemannian manifold} Assume we have a Riemannian manifold $(\mathcal{M},\bm{g})$ with Riemannian metric tensor $\bm{g}$. For each point $\bm{a}\in\mathcal{M}$, we assume it has local coordinates $\bm{x}_a=[x_a^1,\ldots,x_a^D]$. We can prove the following proposition: \begin{proposition} Define two probability measures $\mathbb{Q}$, $\mathbb{P}$ on the Riemannian manifold $(\mathcal{M},\bm{g})$ as defined above. We denote the corresponding densities (in terms of local coordinates $\bm{x}$) w.r.t.\ the Riemannian manifold as $\tilde{p}(\bm{x})=\frac{d\mathbb{P}}{d\mathcal{M}(\bm{x})}$ and $\tilde{q}(\bm{x})=\frac{d\mathbb{Q}}{d\mathcal{M}(\bm{x})}$. Then, the Fisher divergence from $\tilde{q}$ to $\tilde{p}$ is \begin{equation} \mathcal{F}_{\mathcal{M}}(\tilde{q},\tilde{p})=\frac{1}{2}\mathbb{E}_{q}[\bm{\Delta} (\bm{x})^T\bm{G}(\bm{x})^{-1}\bm{\Delta} (\bm{x})] \end{equation} where $p(\bm{x})=\frac{d\mathbb{P}}{d\mathcal{M}(\bm{x})}\frac{d\mathcal{M}(\bm{x})}{d\bm{x}}$, $q(\bm{x})$ is defined similarly, and $\bm{\Delta} (\bm{x})=\bm{s}_p(\bm{x})-\bm{s}_q(\bm{x})$. $\bm{G}(\bm{x})$ is a symmetric positive definite matrix representing the Riemannian metric tensor.
Particularly, if $\bm{G}(\bm{x})=\bm{m}(\bm{x})^{-T}\bm{m}(\bm{x})^{-1}$, then $\mathcal{F}_{\mathcal{M}}(\tilde{q},\tilde{p})$ is equivalent to the diffusion Fisher divergence (Eq.\ref{eq: DSM}) with diffusion matrix $\bm{m}(\bm{x})$. \label{prop: Riemannian} \end{proposition} The proof is in appendix \ref{appendix: Riemannian manifold}. This result is more general than theorem \ref{thm: DSM}. Specifically, theorem \ref{thm: DSM} only provides a sufficient condition for the \emph{diffusion Fisher divergence} to be a valid discrepancy: an invertible flow yields an invertible diffusion matrix $\bm{m}(\bm{x})$, but the converse is not true. Proposition \ref{prop: Riemannian}, on the other hand, only requires $\bm{m}(\bm{x})$ to be invertible, which is more general. Indeed, from the topological point of view, if we have an invertible and differentiable flow $\bm{T}$, then the transformed space (a Riemannian manifold) is diffeomorphic to the original space (e.g.\ $\mathbb{R}^D$). Thus, this flow can be viewed as a special case of \citet{gemici2016normalizing}. But in general, a Riemannian manifold need not be diffeomorphic to $\mathbb{R}^D$, which explains why theorem \ref{thm: DSM} gives only a sufficient condition. \subsection{Continuous DSM with ODE flow} The previous sections assume a deterministic transformation $\bm{T}(\bm{x})$. Recent work has shown promising results for continuous flows characterised by an ODE \citep{chen2018neural,grathwohl2018ffjord}: \begin{equation} d\bm{x}=\bm{g}(\bm{x}(t))dt \label{eq: ODE flow} \end{equation} where $\bm{g}(\bm{x}(t))$ is a deterministic drift that is uniformly Lipschitz continuous w.r.t.\ $\bm{x}$. We define $p_t$ and $q_t$ to be the corresponding densities of $\bm{x}(t)$.
Inspired by \citet{chen2018neural}, we can characterise the instantaneous change of the score matching loss $\frac{d\mathcal{F}(q_t,p_t)}{dt}$ by the following proposition: \begin{proposition} Let $p_t(\bm{x}(t))$, $q_t(\bm{x}(t))$ be two probability density functions, where $\bm{x}(t)$ is characterized by the ODE defined in eq.\ref{eq: ODE flow}. Assume $\bm{g}(\bm{x}(t))$ is uniformly Lipschitz continuous w.r.t.\ $\bm{x}(t)$. Then, the instantaneous change of the score matching loss follows: \begin{equation} \frac{d\mathcal{F}(q_t,p_t)}{dt} = -\frac{1}{2}\mathbb{E}_{q_t}[\Delta(\bm{x})^T(\nabla_{\bm{x}}\bm{g}(\bm{x})+\nabla_{\bm{x}}\bm{g}(\bm{x})^T)\Delta(\bm{x})] \end{equation} where $\Delta(\bm{x})=\bm{s}_{p_t}(\bm{x})-\bm{s}_{q_t}(\bm{x})$. \label{prop: continuous DSM} \end{proposition} The proof is in appendix \ref{appendix: proof of continous DSM}. \section{Introduction} \label{Sec: Introduction} Recently, great progress has been made in understanding the theoretical properties and practical usage of score matching (SM) \citep{hyvarinen2005estimation} and its closely related counterpart, Stein discrepancy (SD) \citep{gorham2017measuring}. In particular, unlike the Kullback–Leibler (KL) divergence, which can only be used for distributions with a known normalizing constant, SM (or SD) can be evaluated for unnormalized densities, and requires fewer assumptions on the probability distributions \citep{fisher2021measure}. These useful properties enable them to be widely applied in training energy-based models (EBMs) \citep{song2020sliced,grathwohl2020learning,wenliang2019learning}, state-of-the-art score-based generative models \citep{song2019generative, song2020score}, statistical tests \citep{liu2016kernelized,chwialkowski2016kernel} and variational inference \citep{hu2018stein,liu2016stein}. Despite their elegant statistical properties, recent work \citep{barp2019minimum} demonstrated their failure when dealing with certain types of distributions (e.g. heavy-tailed distributions).
For instance, when the data and the model are heavy-tailed distributions, the model can fail to recover the true mode even in the one-dimensional case. The root of this problem is that the SM (or SD) objective is highly non-convex and does not correlate well with the likelihood. To fix this, \citet{barp2019minimum} proposed a variant called \emph{diffusion score matching} (and \emph{diffusion Stein discrepancy}), in which a diffusion matrix is introduced. However, the authors did not provide an interpretation of this diffusion matrix. In fact, the diffusion matrix used in \citet{barp2019minimum} is manually chosen for toy densities. This lack of interpretation hinders the development of a proper training method for the diffusion matrix. In this paper, we aim to give an interpretation based on normalizing flows, which sheds light on developing training methods for the diffusion matrix. We summarize our contributions as follows: \begin{itemize} \item We theoretically prove that DSM (or DSD) is equivalent to the original SM (or SD) performed in the transformed space defined by a normalizing flow. The diffusion matrix is exactly the inverse of the flow's Jacobian matrix. \item We further show its connection to Riemannian manifolds. Specifically, we show that the diffusion matrix is closely related to the Riemannian metric tensor. \item We further extend DSM to a continuous version. Namely, we derive an ODE characterizing its instantaneous change. \end{itemize} We hope that by building these connections, a broad range of techniques from the normalizing flow community can be leveraged to develop training methods for the diffusion matrix. \section{Conclusion} \label{Sec: conclusion} In this paper, we discuss the connections of \emph{diffusion score matching} and \emph{diffusion Stein discrepancy} to normalizing flows.
Specifically, we theoretically prove that the \emph{diffusion Fisher divergence} (or \emph{DSD}) is equivalent to the original \emph{Fisher divergence} (or \emph{Stein discrepancy}) applied to the transformed densities. The diffusion matrix $\bm{m}(\bm{x})$ is given by the inverse of the flow's Jacobian matrix. We also establish the connection of the \emph{diffusion Fisher divergence} with densities defined on Riemannian manifolds. Finally, we extend the \emph{diffusion Fisher divergence} to continuous flows, and derive an ODE characterizing its instantaneous change. By building these connections, we hope to shed light on developing training methods for the diffusion matrix, enabling practical usage for large models.
\section{Introduction} \IEEEPARstart{M}{ultispectral} imaging sensors from a satellite capture different types of spectral band information. For instance, a typical high-resolution satellite has several imaging sensors for multi-spectral bands such as red (R), green (G), blue (B), near-infrared (N), etc. Each spectral band signal has unique spectroscopic characteristics, resulting in a variety of remote sensing applications such as agricultural planning \cite{Reference:mulla2013twenty}, traffic monitoring \cite{Reference:larsen2009traffic}, city planning \cite{Reference:pham2011case}, disaster analysis \cite{Reference:voigt2007satellite}, etc. Unfortunately, the quality of satellite images is often affected by various noise sources such as system calibration error, intrinsic properties of the hardware, sensitivity of the sensors, photon effects, and thermal noise. Fig.~\ref{figure1} shows typical examples of structured noise patterns in images from a high-resolution satellite, such as vertical stripe noise and wave noise. The main cause of vertical stripe noise is interference from the different scan timings of multi-spectral imaging sensors in a push-broom scanner. Different sampling timings, as well as the sensitivity of the sensors, induce a different offset in each detector and generate vertical stripe noise patterns. Horizontal wave noise is an irregular wave noise pattern that is caused by interference from various hardware components. Noise degrades the quality of the scenes and limits the use of satellite imagery. Therefore, one of the most important pre-processing steps for satellite images is the elimination of image noise that occurs during the image acquisition process.
\begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{Fig1} \vspace{-0.3cm} \caption{Examples of satellite images with (a) vertical stripe noise, and (b) wave noise.} \label{figure1} \end{figure} Previously, various methods have been proposed for the removal of noise in satellite images. Conventional denoising methods follow model-based approaches using hand-crafted features and prior knowledge of the data \cite{Reference:yuan2012hyperspectral, Reference:chang2015anisotropic, Reference:zhang2013hyperspectral, Reference:he2015hyperspectral, Reference:he2015total, Reference:du2016pltd, Reference:wu2017structure, Reference:wang2017hyperspectral, Reference:fan2018spatial}. However, the limitation of traditional model-based methods is a degradation in performance if the predefined features or prior knowledge of the model do not fully reflect the properties of new data. Recently, deep convolutional neural networks (CNNs) have shown extraordinary performance on the image denoising problem \cite{burger2012image,mao2016image,Reference:zhang2017beyond,Reference:zhang2018ffdnet}. The advantage of using deep learning methods comes from their data-driven nature, which automatically learns the optimal features for the task from the data. In remote sensing applications, CNN-based denoising algorithms have been proposed and have shown promising results \cite{Reference:yuan2018hyperspectral, Reference:chang2018hsi, Reference:zhang2019hybrid, Reference:guan2019wavelet, Reference:liu20193, Reference:shan2019hyperspectral, Reference:chang2019toward}. However, most CNN-based denoising methods for satellite images are trained in a supervised manner. A supervised learning scheme requires structurally matched pairs of noisy and clean target images, which are difficult to obtain in real situations. To utilize unmatched image pairs, unsupervised learning methods should be used.
Among the various approaches for unsupervised learning, the generative adversarial network (GAN) \cite{Reference:goodfellow2014generative} was proposed as a distribution matching scheme that learns the distribution of the target domain from the input distribution. However, standard GAN approaches often suffer from mode-collapsing behavior, which can generate artificial features. To address the mode-collapsing problem, unsupervised image-to-image translation using the cycle-consistent adversarial network (CycleGAN) was proposed \cite{Reference:zhu2017unpaired}. Specifically, the network is trained in an unsupervised manner using generative networks, and the cyclic consistency alleviates the generation of artificial features due to the mode-collapsing problem of GANs. Inspired by the success of CycleGAN, Kang \textit{et al.} \cite{Reference:kang2019cycle} proposed a cycle-consistent adversarial denoising network for multiphase coronary computed tomography (CT) angiography. Here, the denoising problem was considered as an image-to-image translation problem between two domains, the noisy image domain and the clean image domain, and the results by Kang \textit{et al.} \cite{Reference:kang2019cycle} show that CycleGAN is a promising tool for unsupervised denoising. Another limitation of most CNN-based denoising algorithms for satellite imagery is that the methods are designed in the image domain, which often leads to blurred output and loss of edge and detail information, especially when the training data are insufficient. High frequency components in satellite images contain important information that is crucial for the use of the satellite. Therefore, a desirable denoising algorithm should remove only the noise components while preserving image details. Transform domain learning approaches are an alternative to image domain denoising methods.
For instance, the advantage of using the wavelet transform is that the image can be decomposed into directional subbands that can be used effectively to remove noise while preserving high frequency components. Based on this observation, a denoising method based on wavelet domain deep learning was proposed in a supervised learning framework, which effectively removes noise without affecting image details \cite{Reference:guan2019wavelet, Reference:kang2017deep}. Inspired by these approaches, here we propose an unsupervised multi-spectral denoising method for satellite imagery using a wavelet subband cycle-consistent adversarial network (WavCycleGAN), and demonstrate its superior performance for the removal of two structured noise patterns: vertical stripe noise and wave noise. Specifically, based on the properties of the target noise, specific wavelet subbands that contain the majority of the noise are selected for wavelet recomposition to obtain the wavelet subband image. Then, our CycleGAN network is trained in an unsupervised manner to learn the distribution matching between two wavelet subband domains from clean and noisy images, respectively. Our experimental results show that our multi-spectral denoising method using WavCycleGAN effectively removes vertical stripe noise and wave noise while preserving edges and details of images. \section{Related Works} \subsection{Model based Approaches} Conventional satellite image denoising methods typically exploit model-based approaches which utilize hand-crafted representations and intrinsic properties of satellite images. Satellite images tend to be piecewise smooth in the spatial domain \cite{Reference:he2015total}. The total variation (TV) denoising model \cite{Reference:rudin1992nonlinear} has been applied to noise removal in satellite imagery because it effectively preserves high frequency information and enforces piecewise smoothness \cite{Reference:yuan2012hyperspectral, Reference:chang2015anisotropic, Reference:he2015total}.
Yuan \textit{et al.} \cite{Reference:yuan2012hyperspectral} extended the TV model to a spectral-spatial adaptive TV denoising model which considers the spectral and spatial information of images. Chang \textit{et al.} \cite{Reference:chang2015anisotropic} proposed an image destriping method using an anisotropic spectral-spatial total variation model. He \textit{et al.} \cite{Reference:he2015total} regularized their model with TV to enforce piecewise smoothness of images. Clean satellite images that consist of multi-spectral images can be considered to have a low-rank property \cite{Reference:zhang2013hyperspectral}. Based on the intrinsic sparsity of satellite images, low-rank matrix recovery (LRMR) approaches have been applied to noise removal problems in remote sensing \cite{Reference:zhang2013hyperspectral, Reference:he2015hyperspectral, Reference:he2015total}. Zhang \textit{et al.} \cite{Reference:zhang2013hyperspectral} introduced an image restoration method using LRMR. He \textit{et al.} \cite{Reference:he2015hyperspectral} proposed a noise-adjusted framework that takes into account the different properties of noise in different bands. He \textit{et al.} \cite{Reference:he2015total} introduced a total variation-regularized low-rank matrix factorization (LRTV). Recent works exploit tensor-based approaches with low-rank properties and TV regularization to utilize the spatial-spectral correlations of multi-spectral satellite imagery \cite{Reference:du2016pltd, Reference:wu2017structure, Reference:wang2017hyperspectral, Reference:fan2018spatial}. However, the drawback of these model-based image restoration methods is the use of hand-crafted features and data models, which may degrade the performance of the algorithm if the image data has unexpected properties beyond their assumptions. \subsection{Deep Learning Approaches} In the field of remote sensing, many deep learning based methods have been proposed for the removal of noise in satellite imagery.
Yuan \textit{et al.} \cite{Reference:yuan2018hyperspectral} proposed a spatial-spectral deep residual learning method using a CNN for hyperspectral images (HSID-CNN). Their method utilizes spatial and spectral information by using the noisy input and adjacent spectral bands. Chang \textit{et al.} \cite{Reference:chang2018hsi} proposed a method for hyperspectral image denoising via CNN (HSI-DeNet). HSI-DeNet consists of dilated convolution layers and exploits a residual learning approach to effectively remove noise. Zhang \textit{et al.} \cite{Reference:zhang2019hybrid} introduced a spatial-spectral gradient network (SSGN) for the removal of hybrid noise in hyperspectral images. SSGN uses spatial and spectral gradient information to extract important features of satellite images. Guan \textit{et al.} \cite{Reference:guan2019wavelet} proposed a wavelet deep neural network for stripe noise removal. They trained the network in the wavelet domain for effective denoising. Other deep learning based denoising methods for satellite imagery can be found in \cite{Reference:liu20193, Reference:shan2019hyperspectral}, and \cite{Reference:chang2019toward}. Most deep learning based image restoration methods for satellite imagery follow a supervised learning scheme. A supervised learning method requires a paired dataset consisting of a noisy image and a spatially matched clean image to train the network. The need for a paired dataset, however, limits the use of a supervised learning scheme in practice, since paired satellite images are difficult to collect in real situations. To mitigate this problem, many researchers have added synthesized noise to relatively clean images. Although a model trained on synthetic noise works well for artificial noise, real noise components are complex and difficult to model in practice.
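As a concrete illustration of why synthetic training noise is popular yet limited, vertical stripe noise is often synthesized by adding a fixed offset to each image column. The sketch below is a hypothetical, offset-only model of push-broom detector noise; real stripe noise also involves gain variations and temporal drift that this model ignores:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_vertical_stripes(img, offset_std=0.05):
    # One random offset per column: a simplified, offset-only model of
    # push-broom detector stripe noise (gain variations are ignored)
    offsets = rng.normal(0.0, offset_std, size=img.shape[1])
    return img + offsets[None, :]

clean = rng.uniform(0.2, 0.8, size=(64, 64))
noisy = add_vertical_stripes(clean)
residual = noisy - clean  # constant along each column, varies across columns
```

A network trained on such pairs learns to subtract column-wise offsets well, but real detector noise rarely follows this clean additive structure, which is exactly the gap discussed above.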
\subsection{Our Contributions} \subsubsection{Unsupervised Learning Approach} Although it is difficult to obtain matched clean and noisy image pairs from satellite imagery, it is much easier to obtain {\em unmatched} clean and noisy image data sets in practice. This is because in some situations the sensors are less affected by the noise, or the assumptions of the model-based approaches are sufficiently accurate to generate clean images. However, the practical issue is that these clean images are not matched to the noisy multi-spectral image data that one is interested in processing. In this scenario, an unsupervised learning scheme that uses unmatched clean and noisy image data sets is a perfect fit. Therefore, one of the most important contributions of our paper is an unsupervised learning scheme that uses unmatched clean and noisy images for neural network training. Specifically, to train the network in an unsupervised way, we use an adversarial loss and a cycle-consistency loss. Accordingly, noise patterns can be removed efficiently without requiring a matched data set. \subsubsection{Wavelet Subband Learning} Typically, deep learning based image restoration methods in remote sensing are designed in the image domain. Unfortunately, perfect noise separation using an image-domain deep network is often difficult, especially when sufficient training data are not available. Thus, edges and detail information are often removed by denoising networks that are applied directly in the image domain. Unlike the existing deep learning based algorithms designed in the image domain, we train our model using wavelet subband images that are obtained from the subset of wavelet bands containing the noise. Accordingly, the spectral contents in the other bands are not altered by the neural network, so we can achieve efficient noise removal without sacrificing high frequency information.
\section{Theory} \subsection{Wavelet Subband Image} As described before, one of the disadvantages of image-domain deep learning is that the output images from neural networks tend to be blurry, since high frequency components such as edges and details of images can be altered by the reconstruction algorithm. To remove noise patterns while preserving image details, here we propose wavelet subband deep learning. The procedure for generating wavelet subband images is as follows. First, we use the 2D Daubechies-3 wavelet transform (db3) to decompose the input image into subband images, namely the approximation (LL), horizontal detail (LH), vertical detail (HL), and diagonal detail (HH) bands. With $K$-th level wavelet decomposition, we have $\{LL_{i}\}_{i=1}^{K}$, $\{LH_{i}\}_{i=1}^{K}$, $\{HL_{i}\}_{i=1}^{K}$, and $\{HH_{i}\}_{i=1}^{K}$ subband images. The advantage of using the wavelet transform is that we can decompose an image into directional subbands. Therefore, if the noise has specific directional properties, it is usually confined to a specific subset of wavelet bands. This is the prior information we want to exploit in designing the neural network. For example, for the case of the vertical stripe noise in Fig.~\ref{figure2}(a), the wavelet subband image is obtained by wavelet recomposition using the vertical detail subbands $\{HL_{i}\}_{i=1}^{9}$ and zeroing out the other bands. This generates the wavelet subband image shown in Fig.~\ref{figure2}(b), which clearly shows the noise signals without much of the underlying structure of the scene. In the case of images corrupted with the wave noise as shown in Fig.~\ref{figure2}(c), we find that the subbands $\{LH_{i}\}_{i=1}^{6}$ contain most of the noise, so we use these bands to obtain the wavelet subband image in Fig.~\ref{figure2}(d).
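The subband image generation described above can be sketched as follows. This is a simplified illustration using the orthonormal Haar wavelet instead of db3 (in practice a library such as PyWavelets would supply db3); `keep` selects which detail band survives at every level, and all other bands, including the final approximation, are zeroed before synthesis:

```python
import numpy as np

def subband_image(img, keep, levels=3):
    """Reconstruct an image from only the chosen detail band at each
    decomposition level: keep='HL' retains the vertical-detail bands
    (stripe noise), keep='LH' the horizontal-detail bands (wave noise).
    All other bands, including the final approximation, are zeroed."""
    if levels == 0:
        return np.zeros_like(img, dtype=float)
    s = np.sqrt(2.0)
    # analysis: Haar low/high-pass along columns, then along rows
    lo = (img[:, 0::2] + img[:, 1::2]) / s
    hi = (img[:, 0::2] - img[:, 1::2]) / s
    LL = (lo[0::2] + lo[1::2]) / s
    LH = (lo[0::2] - lo[1::2]) / s
    HL = (hi[0::2] + hi[1::2]) / s
    HH = (hi[0::2] - hi[1::2]) / s
    LL = subband_image(LL, keep, levels - 1)  # recurse into approximation
    if keep == 'HL':
        LH, HH = np.zeros_like(LH), np.zeros_like(HH)
    else:  # keep == 'LH'
        HL, HH = np.zeros_like(HL), np.zeros_like(HH)
    # synthesis: invert the row transform, then the column transform
    lo = np.empty((2 * LL.shape[0], LL.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = (LL + LH) / s, (LL - LH) / s
    hi[0::2], hi[1::2] = (HL + HH) / s, (HL - HH) / s
    out = np.empty((lo.shape[0], 2 * lo.shape[1]))
    out[:, 0::2], out[:, 1::2] = (lo + hi) / s, (lo - hi) / s
    return out
```

For an image of pure vertical stripes (constant along rows), the HL-only reconstruction recovers exactly the zero-mean stripe component, which is why such noise appears so cleanly in the subband image.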
After a denoising network removes noise patterns in noisy wavelet subband images, clean output images can be acquired by subtracting the predicted noise patterns from the noisy images. \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{Fig2} \vspace{-0.2cm} \caption{Examples of satellite images and wavelet subband images. (a) Image with vertical stripe noise, and (b) vertical wavelet subband image from (a). (c) Image with wave noise, and (d) horizontal wavelet subband image from (c).} \label{figure2} \end{figure} \subsection{Wavelet Subband Cycle-consistent Adversarial Network} As an unsupervised denoising network for the wavelet subband images, we use the cycleGAN architecture. More details are provided in the following. \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{Fig3} \vspace{-0.5cm} \caption{Architecture of our WavCycleGAN for denoising satellite images. $x$ and $y$ are wavelet subband images from the clean domain $\mathcal{X}$ and the noisy domain $\mathcal{Y}$, respectively. The full objective consists of the adversarial loss $\ell_{GAN}$, cycle-consistency loss $\ell_{cycle}$, and identity loss $\ell_{identity}$. By minimizing the full objective function with respect to generators ($G_\Theta$ and $F_\Lambda$) and discriminators ($\psi_\Xi$ and $\varphi_\Phi$), the denoising network $G_\Theta$ can be trained in an unsupervised manner using wavelet subband images.} \label{figure3} \end{figure} \subsubsection{Loss Formulation} We consider two domains: the clean domain ($\mathcal{X}$) and the noisy domain ($\mathcal{Y}$). The clean domain contains wavelet subband images without noise patterns, while the noisy domain consists of wavelet subband images with noise patterns. The two data domains are composed of data that are not matched to each other. $P_\mathcal{X}$ and $P_\mathcal{Y}$ are the probability distributions of the clean domain and the noisy domain, respectively.
$y$ is a sample from the noisy wavelet subband image distribution, and $x$ is a sample from the clean wavelet subband image distribution. As shown in Fig.~\ref{figure3}, $G_\Theta:\mathcal{Y}\mapsto \mathcal{X}$ is a generator parameterized by $\Theta$, which converts a noisy wavelet subband image to a clean wavelet subband image; $F_\Lambda:\mathcal{X}\mapsto \mathcal{Y}$ is a generator parameterized by $\Lambda$, which generates a synthetic noisy wavelet subband image from a clean wavelet subband image. $\psi_\Xi$ is an adversarial discriminator parameterized by $\Xi$ that distinguishes synthetic noisy wavelet subbands from real noisy wavelet subbands. Similarly, $\varphi_\Phi$ is an adversarial discriminator that distinguishes denoised wavelet subband images from real clean wavelet subband images. To train the wavelet subband cycle-consistent adversarial network for the denoising problem, our objective consists of three loss functions: an adversarial loss $\ell_{GAN}$, a cycle-consistency loss $\ell_{cycle}$, and an identity loss $\ell_{identity}$. Specifically, the typical adversarial loss for the generator $G_\Theta$ and the discriminator $\varphi_\Phi$ is as follows: \small \begin{align} \begin{split} \ell_{GAN}(\Theta,\Phi) & =\mathbb{E}_{x \sim P_\mathcal{X}}[\log \varphi_\Phi(x)] \\ & \quad + \mathbb{E}_{y \sim P_\mathcal{Y}}[\log (1-\varphi_\Phi(G_\Theta(y)))], \label{eqn:gan_loss} \end{split} \end{align} \normalsize \noindent To train $G_\Theta$ and $\varphi_\Phi$, we need to solve the following min-max problem: \small \begin{align} \min_{\Theta}\max_{\Phi}\ell_{GAN}(\Theta,\Phi) \label{eqn:min-max_gan_loss} \end{align} \normalsize \noindent The least squares GAN (LSGAN) \cite{Reference:mao2017least} uses the least squares loss function instead of the cross entropy loss to overcome the problem of vanishing gradients.
We adopted LSGAN for the min-max problem as follows: \small \begin{align} \label{eqn:lsgan_loss} &\min_{\Theta}\mathbb{E}_{y \sim P_\mathcal{Y}}[(\varphi_\Phi(G_\Theta(y))-1)^2], \\ &\min_{\Phi}\frac{1}{2}\mathbb{E}_{x \sim P_\mathcal{X}}[(\varphi_\Phi(x)-1)^2] + \frac{1}{2}\mathbb{E}_{y \sim P_\mathcal{Y}}[\varphi_\Phi(G_\Theta(y))^2], \end{align} \normalsize \noindent By solving this min-max game, $G_\Theta$ is trained to generate synthesized clean wavelet subband images from real noisy wavelet subband images and deceive the discriminator $\varphi_\Phi$, while $\varphi_\Phi$ learns to discriminate between synthesized clean wavelet subband images $G_\Theta(y)$ and real clean wavelet subband images $x$. When the networks converge, $G_\Theta$ produces realistic clean wavelet subband images, and $\varphi_\Phi$ cannot distinguish between real clean wavelet subband images and synthesized clean wavelet subband images from $G_\Theta$. The role of the adversarial loss for $F_\Lambda$ and $\psi_\Xi$ is similar to that for $G_\Theta$ and $\varphi_\Phi$. The generators $G_\Theta$ and $F_\Lambda$ can be trained to generate realistic clean wavelet subband images by minimizing the adversarial loss. However, using only the adversarial loss may cause artificial features due to the mode collapse problem. We used a cycle-consistency loss to impose a one-to-one mapping between input and output images, to reduce artifacts and to maintain important features other than the noise components.
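Concretely, the LSGAN terms above reduce to mean-squared penalties on the discriminator outputs. The following is a minimal numpy sketch (in actual training these operate on PyTorch tensors of discriminator score maps):

```python
import numpy as np

def lsgan_generator_loss(d_fake):
    """Generator term: E[(phi(G(y)) - 1)^2]; minimized when the
    discriminator scores the generated samples as real (score 1)."""
    return np.mean((np.asarray(d_fake, float) - 1.0) ** 2)

def lsgan_discriminator_loss(d_real, d_fake):
    """Discriminator term: 0.5 E[(phi(x) - 1)^2] + 0.5 E[phi(G(y))^2];
    minimized when real samples score 1 and generated samples score 0."""
    return 0.5 * np.mean((np.asarray(d_real, float) - 1.0) ** 2) \
         + 0.5 * np.mean(np.asarray(d_fake, float) ** 2)
```

A perfect discriminator (real scores 1, fake scores 0) drives its loss to zero, while a generator that fully fools the discriminator drives the generator loss to zero.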
The cycle-consistency loss is defined using the L1 norm as follows: \small \begin{align} \begin{split} \ell_{cycle}(\Theta,\Lambda) & =\mathbb{E}_{y \sim P_\mathcal{Y}}[||F_\Lambda(G_\Theta(y))-y||_1] \\ & \quad + \mathbb{E}_{x \sim P_\mathcal{X}}[||G_\Theta(F_\Lambda(x))-x||_1], \label{eqn:cycle_loss} \end{split} \end{align} \normalsize \noindent By enforcing the cycle-consistency of the networks, the generators $G_\Theta$ and $F_\Lambda$ become inverse mappings of each other, so that important features of images are maintained during the domain translation. Once the network is trained, only the denoiser $G_\Theta$ is used at the inference phase. However, in many practical situations, many input images or image patches for the denoiser $G_\Theta$ may not be corrupted by the noise patterns. A desired generator $G_\Theta$ should therefore remove the noise pattern in a noisy wavelet subband image while leaving the input wavelet subband image unchanged if no noise is present. Similarly, the desired generator $F_\Lambda$ adds the noise pattern when the input is clean, while maintaining the input image when the input already has the noise pattern. This condition is often called the identity property, i.e. $G_\Theta(x) \simeq x$ and $F_\Lambda(y) \simeq y $ \cite{Reference:kang2019cycle}. To enforce this, we define the identity loss as follows: \small \begin{align} \begin{split} \ell_{identity}(\Theta,\Lambda) & =\mathbb{E}_{x \sim P_\mathcal{X}}[||G_\Theta(x)-x||_1] \\ & \quad + \mathbb{E}_{y \sim P_\mathcal{Y}}[||F_\Lambda(y)-y||_1] \quad .
\label{eqn:identity_loss} \end{split} \end{align} \normalsize The overall loss function is defined using $\ell_{GAN}$, $\ell_{cycle}$, and $\ell_{identity}$ as follows: \small \begin{align} \begin{split} \ell(G_\Theta,F_\Lambda,\psi_\Xi,\varphi_\Phi) & = \ell_{GAN}(\Theta,\Phi) + \ell_{GAN}(\Lambda,\Xi) \\ & \quad + \lambda \ell_{cycle}(\Theta,\Lambda) + \gamma \ell_{identity}(\Theta,\Lambda), \label{eqn:overall_loss} \end{split} \end{align} \normalsize \noindent where $\lambda$ and $\gamma$ are hyperparameters controlling the ratio of the losses between $\ell_{GAN}$, $\ell_{cycle}$, and $\ell_{identity}$. To train the WavCycleGAN for the denoising problem, we aim to optimize the following problem: \small \begin{align} \min_{\Theta,\Lambda}\max_{\Xi,\Phi}\ell(G_\Theta,F_\Lambda,\psi_\Xi,\varphi_\Phi), \label{eqn:min-max_overall_loss} \end{align} \normalsize The corresponding architecture is given in Fig.~\ref{figure3}. Notice that our WavCycleGAN uses wavelet subband images consisting of selected directional subbands, while the standard CycleGAN uses ordinary images. By considering prior knowledge of the noise patterns, the networks easily learn the properties of structured noise patterns and show improved performance compared to learning in the image domain, as will be shown in the experimental section. \subsection{Reconstruction Flow for Specific Noise Patterns} \subsubsection{Vertical Stripe Noise Removal} We found that vertical stripe noise patterns are distributed globally in images. Therefore, the networks need to see the image at full resolution and capture the overall trend of the stripe patterns to learn the relationship between clean and noisy images. However, the full resolution of a test scene with vertical stripe noise patterns is 3000 $\times$ 3000 pixels, which requires huge GPU memory and high computational cost. To mitigate this problem, we used the prior knowledge that vertical stripe noise patterns are similar in the vertical direction.
Accordingly, we applied downsampling along the vertical direction of the wavelet subband images by a factor of 32. By using downsampled images, the networks can be trained using images with the global appearance of the stripe pattern, and the computational costs can also be reduced. To train the WavCycleGAN for the removal of vertical stripe noise, we used randomly cropped patches with a size of 2048 $\times$ 32 pixels from the downsampled vertical wavelet subband images. \begin{figure}[!hbt] \centering \includegraphics[width=1\linewidth]{Fig4} \vspace{-0.5cm} \caption{Overall flow of our denoising method for the vertical stripe noise. (a) A process of making a vertical wavelet subband image. WT denotes a wavelet transform, and IWT refers to an inverse wavelet transform. Only the red-colored subbands are used for wavelet recomposition. (b) A process of estimating a noise pattern. (c) The process of reconstructing a clean output from a noisy input and an upsampled noise pattern.} \label{figure4} \end{figure} Fig.~\ref{figure4} shows the overall flow of our denoising method for the vertical stripe noise. First, the vertical wavelet subband image is generated from the noisy input by using the db3 wavelet transform with 9 decomposition levels. When we apply the inverse wavelet transform, we preserve the coefficients of the vertical bands (HL bands) and set the coefficients of the other bands (LL, LH, and HH bands) to zero. Second, the generator $G_\Theta$ removes the noise pattern of the downsampled vertical wavelet subband image. The estimated noise pattern can be acquired by subtracting the denoised wavelet subband image from the downsampled wavelet subband image. Finally, the clean output can be reconstructed by subtracting the upsampled noise pattern from the noisy input. \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{Fig5} \vspace{-0.5cm} \caption{Overall framework of our denoising method for the wave noise.
(a) A procedure for generating a horizontal wavelet subband image. WT and IWT denote a wavelet transform and an inverse wavelet transform, respectively. We created a horizontal wavelet subband image for each spectral-band image. Only the red-colored subbands are used for wavelet recomposition. (b) A process of estimating a noise pattern. The predicted noise pattern can be calculated by using wavelet subband images of the green band. (c) A process of reconstructing a clean output image from a noisy image and a predicted noise pattern.} \label{figure5} \end{figure} After the training process, we use only the denoiser $G_\Theta$ in the inference stage for the denoising problem. Specifically, the noise pattern can be calculated by subtracting the wavelet subband image reconstructed by the generator $G_\Theta$ from the noisy wavelet subband image. However, due to the downsampling of the input image by a factor of 32, the resolution of the estimated noise pattern differs from that of the input image. To increase the resolution of the noise pattern, we applied an upsampling process to the estimated noise pattern. The final reconstruction result can be obtained by subtracting the upsampled noise pattern from the noisy input. \subsubsection{Horizontal Wave Noise Removal} We utilized spatially registered RGBN images for the removal of wave noise. The use of spatially registered RGBN images enables the network to use the spatial correlation between the multi-channel images, which improves reconstruction performance. To train the WavCycleGAN for the removal of wave noise, we used randomly cropped RGBN image patches with the size of 128 $\times$ 128 pixels from the horizontal wavelet subband images. Fig.~\ref{figure5} shows the overall framework of our denoising method for the wave noise. The first step is generating horizontal wavelet subband images, one for each channel image.
Next, the reconstructed horizontal wavelet subband image can be acquired by using the generator $G_\Theta$. Since wave noise patterns are only present in green channel images, we calculated the noise pattern by subtracting the predicted horizontal wavelet subband image from the noisy horizontal wavelet subband image of the green channel. Finally, the clean output image can be acquired by subtracting the estimated noise pattern from the noisy input image of the green channel. At the inference stage, the algorithms are applied to patch images overlapping by half to avoid blocking artifacts. The reconstructed full scene is then acquired by assembling only the center parts of the reconstructed patches. Specifically, we reconstruct 128 $\times$ 128 pixel patches, and use only the center parts of the patches with the size of 64 $\times$ 64 pixels. \section{Methods} \subsection{Data Set} \subsubsection{Real Noisy Data} In this study, we utilized multi-spectral images from a high-resolution satellite. The multi-spectral images are composed of bands from red (R), green (G), blue (B), and near-infrared (N) imaging sensors. The data set is corrupted by either stripe noise or wave noise depending on the type of satellite imagery. In order to develop a denoising algorithm for the vertical stripe noise that is mainly contained in the blue (B) channel, we used blue channel images from 14 scenes with a size of 6000 $\times$ 3000 pixels. This is because the multi-spectral data were provided by the data distributor (Korea Aerospace Research Institute: KARI) without registration, so we could not use them all together. For the removal of wave noise, we used 16 RGBN images in which each band image is of the size of 6000 $\times$ 6000 pixels. In this case, the RGBN data were distributed by KARI with image registration, so we aim to exploit the multi-spectral band redundancy. In our data, only green channel images have wave noise patterns.
For every scene, the upper part was used as training data and the lower part was used as test data. In real situations, it is difficult to have completely clean images. To get clean images, we applied the conventional model-based reconstruction methods and used the resulting processed images as clean image references for training the denoising network in unsupervised set-ups. \subsubsection{Synthetic Noisy Data} The development of denoising algorithms with real samples leads to the difficulty of quantitative evaluation, as there is no clean ground-truth corresponding to a noisy image. Without ground truths, it is not possible to calculate quantitative metrics for the image reconstruction such as the peak signal-to-noise ratio (PSNR) and the structural similarity index metric (SSIM) \cite{Reference:wang2004image}. For quantitative evaluation of the algorithm, we therefore added synthesized noise to relatively clean data. To obtain a ground-truth image for quantitative evaluation, we obtained synthetic noise patterns by subtracting the conventional model-based reconstruction results from the noisy images. Then, the synthetic noise patterns were added to the ground-truth image to generate synthetic noisy image data. For the task of vertical stripe noise removal, we generated eight synthetic image pairs with a size of 3000 $\times$ 3000 pixels that were not used for the training of the denoising network. For the wave noise removal, we generated four synthetic image pairs with a size of 3000 $\times$ 6000 pixels which were never used as training data. When we added synthesized noise to green channel images, the spatial correlation with the other channels (red, blue, and near-infrared bands) was found to differ from that of real data. For instance, if the pixel values of the synthesized noisy image exceed the specified interval (e.g. [0, 65535]), values outside the interval need to be clipped, which leads to an incorrect spatial correlation with the other bands.
Therefore, for a quantitative evaluation of horizontal wave images, we only compared the results of the neural networks using green band images. Note that these synthetic data are used only at the inference phase. \subsection{Implementation Details} \subsubsection{The Architecture of Generators and Discriminators} For the generators $G_\Theta$ and $F_\Lambda$ in our denoising model, we used the tight-frame U-net \cite{Reference:han2018framing} structure with a skip connection between the input and output nodes. The tight-frame U-net uses wavelet decomposition and concatenation instead of conventional pooling and unpooling layers in order to satisfy the frame condition, so that the networks effectively reconstruct high frequency components \cite{Reference:ye2018deep}. Furthermore, by adding the skip-connection between the input and output nodes, we exploited the residual learning scheme, which is effective for denoising \cite{Reference:zhang2017beyond}. We also replaced batch normalization layers \cite{Reference:pmlr-v37-ioffe15} with instance normalization layers \cite{Reference:ulyanov2016instance}, which are known to improve the quality of image generation. The discriminators $\varphi_\Phi$ and $\psi_\Xi$ are constructed based on the structure of PatchGAN \cite{Reference:isola2017image}, which penalizes image patches to capture the texture and style of images. We used a PatchGAN consisting of five convolutional layers and a fully connected layer with instance normalization.
\begin{figure*}[!hbt] \centering \includegraphics[width=1\linewidth]{Fig6} \vspace{-0.8cm} \caption{Results of the vertical stripe noise removal in the first scene (agricultural area): (a) noisy image, and results of (b) image-domain CycleGAN, (c) WavCycleGAN, and (d) the conventional model-based approach, respectively.} \label{figure6} \end{figure*} \begin{figure*}[!hbt] \centering \includegraphics[width=1\linewidth]{Fig7} \vspace{-0.8cm} \caption{Results of the vertical stripe noise removal in the second scene (cloud area): (a) noisy image, and results of (b) image-domain CycleGAN, (c) WavCycleGAN, and (d) the conventional model-based approach, respectively.} \label{figure7} \end{figure*} \subsubsection{Training Details} For the success of supervised deep neural networks, a large amount of training data is often required. In addition, the variety of samples is an important factor. However, in many situations, a large number of data sets is not available, and we consider such an extreme situation to validate the advantages of our network. Specifically, due to security issues, our training data had fewer than 20 scenes for the development of the noise removal algorithm. To mitigate the deficiency of training data, we utilized image patches cropped from the full scenes. Specifically, we randomly cropped image patches with the size of 2048 $\times$ 32 pixels from the downsampled vertical wavelet subband images for the vertical stripe noise denoising algorithm. For the wave noise denoising method, we utilized image patches with the size of 128 $\times$ 128 pixels randomly cropped from the horizontal wavelet subband images. We also used data augmentation strategies such as horizontal flipping and vertical flipping. The use of image patches, which are cropped randomly at each iteration in the training phase, increases the variety of the samples, so a large number of training images can be acquired.
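The random cropping and flipping augmentation described above can be sketched as follows (`random_training_patch` is a hypothetical helper name; the patch sizes follow the settings stated in the text):

```python
import numpy as np

def random_training_patch(img, ph, pw, rng):
    """Randomly crop a (ph, pw) patch and apply random horizontal and
    vertical flips, so each training iteration sees a different sample.
    `rng` is a numpy random Generator for reproducible sampling."""
    i = int(rng.integers(0, img.shape[0] - ph + 1))
    j = int(rng.integers(0, img.shape[1] - pw + 1))
    patch = img[i:i + ph, j:j + pw]
    if rng.random() < 0.5:
        patch = patch[:, ::-1]   # horizontal flip
    if rng.random() < 0.5:
        patch = patch[::-1, :]   # vertical flip
    return patch
```

Because a fresh crop and flip are drawn at every iteration, even fewer than 20 scenes yield a practically unbounded stream of distinct training samples.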
For unsupervised training, we randomly shuffle the image pairs so that the networks use unmatched data for training. Our WavCycleGAN was trained by solving the optimization problem (\ref{eqn:min-max_overall_loss}) with $\lambda=10$ and $\gamma=5$. The mini-batch size was 1. The Adam optimizer \cite{Reference:kingma2014adam} was used to optimize the loss function with $\beta_1=0.5$ and $\beta_2=0.999$. The network was trained for 200 epochs. The initial learning rate was $2\times10^{-3}$ during the first 100 epochs, and gradually decreased to 0 through the last 100 epochs. The implementation of our method was based on the PyTorch library \cite{Reference:paszke2019pytorch} using an NVIDIA GeForce GTX 1080 Ti GPU. \subsection{Comparative Methods} To evaluate the performance of vertical stripe noise removal, we compared our method (WavCycleGAN) with various methods. Specifically, in order to investigate the effectiveness of learning wavelet subband images, we also generated reconstruction results using the standard image-domain cycleGAN (CycleGAN), which does not utilize any directional decomposition via the wavelet transform. We also compared with the conventional model-based approach. The conventional model-based method was based on a prior model of the stripe noise. The conventional model also exploited a moment matching approach \cite{Reference:gadallah2000destriping}, in which the sensors are assumed to have a linear relationship with each other. Specifically, the model-based algorithm estimates the initial points of the vertical stripes, and uses edge information to calculate the positions of the noise. The initial points of the stripes are calculated based on sensor information. Using edge information of the input, homogeneous areas are selected, and the start and end points of the noise are calculated based on the initial points of the noise in the homogeneous area.
Vertical stripe noise patterns are estimated by subtracting the average value of the areas near the vertical pattern from the average value of the vertical stripe area. For the case of wave noise removal, the multi-spectral images (RGBN bands) are registered, so we compared our results (WavCycleGAN$_{RGBN}$) with several variants to verify the benefits of our framework. Specifically, we generated comparative reconstruction results with the image-domain cycleGAN using green channel images (CycleGAN$_{G}$), the wavelet subband domain cycleGAN using green channel images (WavCycleGAN$_{G}$), and the image-domain cycleGAN using multi-spectral bands (CycleGAN$_{RGBN}$). We also compared with the conventional model-based approach for the wave noise. The conventional method assumed that the panchromatic image can be represented by a linear combination of the multi-spectral band images with least squares regression coefficients \cite{Reference:price1987combining}. Clean green band images are then calculated using the relationship between the panchromatic image and the multi-spectral images according to a block-based scheme.
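A minimal form of the moment-matching idea behind the stripe baseline can be sketched as follows. This is a deliberately simplified toy that matches only column means against the global mean; the deployed baseline additionally restricts the estimation to homogeneous areas found via edge information and works from the estimated stripe start and end points:

```python
import numpy as np

def moment_match_destripe(img):
    """Remove column-wise stripe offsets by matching every column's
    mean to the global image mean (toy moment matching)."""
    col_offset = img.mean(axis=0) - img.mean()
    return img - col_offset  # broadcast subtraction over rows
```

For an image corrupted by zero-mean additive column offsets, this recovers the clean image exactly; real stripes with a nonzero mean or scene-dependent gain require the richer statistics the baseline uses.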
\begin{figure*}[!hbt] \centering \includegraphics[width=1\linewidth]{Fig8} \vspace{-0.8cm} \caption{Results of the wave noise removal (first row) and difference images (second row) in the third scene (ocean): (a) noisy image, and results of (b) CycleGAN$_{G}$, (c) WavCycleGAN$_{G}$, (d) CycleGAN$_{RGBN}$, (e) the proposed WavCycleGAN$_{RGBN}$, and (f) the conventional model-based approach.} \label{figure8} \end{figure*} \begin{figure*}[!hbt] \centering \includegraphics[width=1\linewidth]{Fig9} \vspace{-0.8cm} \caption{Results of the wave noise removal (first row) and difference images (second row) in the fourth scene (cloud): (a) noisy image, and results of (b) CycleGAN$_{G}$, (c) WavCycleGAN$_{G}$, (d) CycleGAN$_{RGBN}$, (e) the proposed WavCycleGAN$_{RGBN}$, and (f) the conventional model-based approach.} \label{figure9} \end{figure*} \section{Experimental Results} \subsection{Real Experiments} \subsubsection{The Removal of Vertical Stripe Noise} To evaluate the performance of our denoising method for the vertical stripe noise, we visually inspected our results (WavCycleGAN) and compared them with those of other methods. Fig.~\ref{figure6} and Fig.~\ref{figure7} show denoising results for the image patches from the first scene (agricultural area) and the second scene (cloud area), respectively. The reason we chose two drastically different scenes is to validate the generalization capability of our neural network. For Fig.~\ref{figure6}, we selected image patches with the size of 400 $\times$ 400 pixels showing significant vertical stripe noise from the first scene. The image patch of size 800 $\times$ 800 pixels was cropped from the second scene for Fig.~\ref{figure7}. As shown in the figures, our results of learning wavelet subband images (WavCycleGAN) effectively remove the vertical stripe noise, while the results of the image-domain cycleGAN (CycleGAN) fail to capture the noise patterns.
In particular, our method successfully removed the noise without affecting high frequency components such as edges and textures. Compared with the conventional model-based results, our results show improved performance in terms of image homogeneity. For instance, in Fig.~\ref{figure7}(d), the middle part of the conventional model-based result shows image inhomogeneity, while our method shows a homogeneous denoising result. \begin{figure*}[!hbt] \centering \includegraphics[width=1\linewidth]{Fig10} \vspace{-0.8cm} \caption{Results of the synthetic vertical stripe noise removal in scene 8 (mountain area) listed in Table~\ref{table1}. (a) Ground truth image, (b) noisy image, and results of (c) CycleGAN, (d) our WavCycleGAN, and (e) the conventional model-based approach.} \label{figure10} \end{figure*} \begin{figure*}[!hbt] \centering \includegraphics[width=1\linewidth]{Fig11} \vspace{-0.8cm} \caption{Results of the synthetic wave noise removal (first row) and difference images (second row) in scene 2 (ocean area) listed in Table~\ref{table2}. (a) Noisy image, (b) ground truth image, and results of (c) CycleGAN$_{G}$, (d) WavCycleGAN$_{G}$, and (e) the conventional model-based approach.} \label{figure11} \end{figure*} \subsubsection{The Removal of Wave Noise} Fig.~\ref{figure8} and Fig.~\ref{figure9} show results of the wave noise removal in the third scene (ocean) and the fourth scene (cloud), respectively. Again, the reason for showing two very different scenes at the test phase is to validate the generalization power of our method. We used an image patch with the size of 200 $\times$ 200 pixels for the third scene, and an image patch of size 400 $\times$ 400 pixels for the fourth scene. We also visualized the difference images by subtracting the denoised results from the noisy images. As shown in Figs.~\ref{figure8} and \ref{figure9}, the results of the image-domain cycleGAN using only the G channel do not successfully remove the wave noise.
Specifically, we found that CycleGAN$_{G}$ erroneously removes structural features of objects, while the results of WavCycleGAN$_{G}$ preserve these high frequency features. In Fig.~\ref{figure8}(b), the difference image contains edges of the object, while only horizontal wave patterns are present in Fig.~\ref{figure8}(c). In addition, our experimental results show that using multi-spectral images improves performance, since the network can utilize the spatial correlation between the individual spectral bands for noise reduction. However, the results of CycleGAN$_{RGBN}$ tend to blur edges and details of images. The difference image of Fig.~\ref{figure8}(d) shows that the multi-spectral image-domain cycleGAN (CycleGAN$_{RGBN}$) removed high frequency features, which are important information for applications of satellite imagery. Compared with the results of CycleGAN$_{RGBN}$, our results using WavCycleGAN$_{RGBN}$ show effective removal of the noise without sacrificing high frequency components, as shown in Fig.~\ref{figure8}(e). Furthermore, in contrast to our proposed method (WavCycleGAN$_{RGBN}$), we found that the multi-spectral image-domain cycleGAN (CycleGAN$_{RGBN}$) often introduces unexpected artifacts to the green band reconstruction images. For instance, in Fig.~\ref{figure9}(d), the green band reconstruction results are corrupted by streaks that are not present in the input image. We noticed that these artifacts stem from the other channel images (R, B, and N bands) during unsupervised image-domain learning. On the other hand, by using wavelet subband images with horizontal bands, only horizontal components are reconstructed, while other directional components are retained. Therefore, no such artifacts are observed in the proposed method.
It is also remarkable that, although we used the conventional model-based results as the clean domain when training the networks, our unsupervised learning results show better denoising performance than the conventional methods, because the network learns a matching between image distributions rather than a pair-wise matching. For example, the model-based approach completely blurred out the cloud image in Fig.~\ref{figure9}(f), whereas our method provides a high-resolution reconstruction without noise. \begin{table}[!t] \caption{Quantitative comparison for the vertical stripe noise removal} \renewcommand{\arraystretch}{1.3} \scalebox{0.85}{% \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Scene \#} & \multicolumn{4}{c|}{PSNR [dB] / SSIM}\\\cline{2-5} & Noisy image & CycleGAN & WavCycleGAN & Model-based\\ \hline 1 & 65.73 / 0.99982 & 61.92 / 0.99956 & \textbf{66.04} / \textbf{0.99983} & 58.42 / 0.99905 \\ \hline 2 & 66.68 / 0.99985 & 64.47 / 0.99976 & \textbf{67.14} / \textbf{0.99987} & 61.04 / 0.99946 \\ \hline 3 & 67.02 / 0.99994 & 64.60 / 0.99990 & \textbf{67.93} / \textbf{0.99995} & 55.12 / 0.99908 \\ \hline 4 & \textbf{66.49} / 0.99994 & 63.70 / 0.99988 & 66.28 / \textbf{0.99994} & 63.35 / 0.99987 \\ \hline 5 & 63.95 / 0.99992 & 63.54 / 0.99991 & 64.60 / 0.99993 & \textbf{69.85} / \textbf{0.99996} \\ \hline 6 & 65.00 / 0.99991 & 62.62 / 0.99987 & 65.11 / 0.99991 & \textbf{68.52} / \textbf{0.99993} \\ \hline 7 & 66.45 / 0.99983 & 64.13 / 0.99972 & \textbf{66.83} / \textbf{0.99985} & 64.38 / 0.99979 \\ \hline 8 & 63.63 / 0.99971 & 61.93 / 0.99958 & \textbf{65.45} / \textbf{0.99981} & 64.24 / 0.99974 \\ \hline \textbf{Average} & 65.62 / 0.99986 & 63.36 / 0.99977 & \textbf{66.17} / \textbf{0.99988} & 63.11 / 0.99961 \\ \hline \end{tabular}} \label{table1} \centering \end{table} \subsection{Numerical simulation} For the quantitative evaluation, we performed inference on synthetic noisy data and calculated quantitative metrics such as PSNR and SSIM.
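For reference, PSNR as reported in the tables follows the standard definition $\mathrm{PSNR} = 10\log_{10}(\mathrm{MAX}^2/\mathrm{MSE})$. The sketch below is an illustrative implementation only (SSIM is omitted for brevity, and the 255 peak value is our assumption about the image bit depth):

```python
import numpy as np

def psnr(clean, noisy, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    diff = clean.astype(np.float64) - noisy.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

clean = np.zeros((64, 64))
noisy = clean + 10.0            # constant offset -> MSE = 100
val = psnr(clean, noisy)        # 10*log10(255^2/100) ~ 28.13 dB
```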
\begin{table}[!t] \caption{Quantitative comparison for the wave noise removal} \renewcommand{\arraystretch}{1.3} \scalebox{0.85}{% \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Scene \#} & \multicolumn{4}{c|}{PSNR [dB] / SSIM}\\\cline{2-5} & Noisy image & CycleGAN$_{G}$ & WavCycleGAN$_{G}$ & Model-based\\ \hline 1 & 51.61 / 0.99275 & 53.41 / 0.99672 & 52.92 / 0.99467 & \textbf{57.66} / \textbf{0.99836} \\ \hline 2 & 53.47 / 0.99636 & 53.41 / 0.99678 & \textbf{53.94} / \textbf{0.99686} & 49.05 / 0.99348 \\ \hline 3 & 55.94 / 0.99743 & 59.38 / 0.99894 & \textbf{59.78} / \textbf{0.99903} & 54.74 / 0.99768 \\ \hline 4 & 62.50 / 0.99948 & 60.01 / 0.99913 & \textbf{62.63} / \textbf{0.99952} & 53.25 / 0.99605 \\ \hline \textbf{Average} & 55.88 / 0.99651 & 56.55 / \textbf{0.99789} & \textbf{57.32} / 0.99752 & 53.67 / 0.99639 \\ \hline \end{tabular}} \label{table2} \centering \end{table} \subsubsection{Removal of vertical stripe noise} Table~\ref{table1} lists the PSNR and SSIM values for the 8 test scenes: the noisy images and the reconstruction results of the image-domain cycleGAN (CycleGAN), the wavelet subband domain cycleGAN (WavCycleGAN), and the model-based method. Our WavCycleGAN results outperform those of CycleGAN in terms of PSNR and SSIM for all scenes. Furthermore, the mean PSNR and SSIM values of our results are the highest among all methods, which confirms that the wavelet subband learning scheme improves the performance. Fig.~\ref{figure10} illustrates the results of denoising synthetic vertical noise patterns from an image patch of 600 $\times$ 600 pixels. It can be seen that WavCycleGAN outperforms CycleGAN, and our method shows a homogeneous image reconstruction compared to the conventional model-based approach.
\subsubsection{Removal of wave noise} Table~\ref{table2} shows the PSNR and SSIM scores for the 4 test scenes: the noisy images and the reconstruction results of the image-domain cycleGAN (CycleGAN$_{G}$), the wavelet subband domain cycleGAN (WavCycleGAN$_{G}$), and the conventional model-based method. Our method (WavCycleGAN$_{G}$) outperforms the existing method (CycleGAN$_{G}$) in terms of PSNR and SSIM, with the exception of Scene 1. Our method also yields the highest average PSNR. Fig.~\ref{figure11} shows the reconstruction results and difference images for the removal of synthetic wave noise from an image patch of 100 $\times$ 100 pixels. In contrast to CycleGAN$_{G}$ and the conventional model, our method effectively removes the noise pattern without removing edges and details. The model-based approach, on the other hand, removed details of the scene and produced a slightly blurred image. \section{Conclusion} In this paper, we proposed the wavelet subband cycle-consistent adversarial network (WavCycleGAN) for multi-spectral denoising of satellite imagery. The main motivation for WavCycleGAN is that our target noise patterns are directionally structured, so the wavelet subband learning scheme can reconstruct only the directional components of the noise for efficient noise removal. Furthermore, because paired training data are rarely available in practice, we trained the denoising network in an unsupervised manner, which the use of WavCycleGAN makes efficient. Experimental results demonstrated that our method effectively removes noise patterns without sacrificing high-frequency components. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{ieeetran}
\section*{Introduction} In recent years, several applications of quantum nonlocality have been proposed \cite{diqkd,dirandom,distateest,rabelo2011device} (see a recent review on Bell nonlocality \cite{review}). These proposals are based on the fact that nonlocal correlations can be certified without any assumption on the internal mechanisms of the devices used in the experiment. Thus, once established, nonlocal correlations can be used in what are now referred to as {device-independent} protocols. Nonlocal correlations can be obtained by measuring entangled quantum systems in appropriately chosen local observables. This is called a Bell test, since the nonlocal nature of the measurement outcomes can be certified by the violation of certain constraints known as Bell inequalities \cite{review}. Many Bell tests have been performed in the last few decades, but no nonlocal correlations have been strictly established so far. This is because all of the performed experiments suffered either from the detection loophole or the locality loophole \cite{review}. \dani{Experiments using entangled photons have reported Bell inequality violations closing separately the locality \cite{aspect_experimental_1982,weihs_violation_1998,scheidl_violation_2010} and the detection \cite{zeilinger12} loopholes. On the other hand, the detection loophole has been closed with stationary systems such as atoms, ions and circuits \cite{matsukevich_bell_2008, rowe_experimental_2001, ansmann_violation_2009}. The main technological challenge in closing both loopholes simultaneously is to have both efficient detection and long-distance entanglement.} \dani{In this work, we propose an experimental setup involving available technology to implement a loophole-free Bell test.
It uses a single atom coupled to the field of a cavity to produce a specific entangled state between the atom and the light emitted by the cavity, and combines efficient detection schemes on the atomic side with the coherent nature of the light field to perform hybrid detection on the photonic side \cite{cavalcanti10,quintino12}. Our scheme accounts for experimental effects neglected in previous proposals \cite{araujo11,sangouard11} and thus establishes current atom-photon systems as good candidates to demonstrate loophole-free nonlocal correlations.} The paper is structured as follows: first, we introduce the target state and the measurement settings used in the Bell test. We then explain how this state can be produced by means of cavity quantum electrodynamics (QED) techniques and discuss the relevant and feasible parameter regime which yields the desired state. We also discuss an implementation of our scheme in optical cavities, and optimize the \colin{Clauser-Horne-Shimony-Holt (CHSH) inequality \cite{chsh69}} violation for currently available experimental parameters. Finally, we discuss possible circuit QED implementations and compare our proposal to previous ones. \section*{Results} \subsection*{The ideal case} \label{sec:ideal} Our proposal takes advantage of two facts first noticed in \cite{brunner07} and \cite{cabello07}. First, highly efficient detection is typically available for the electronic levels of single atoms \cite{henkel2010highly}. Second, photons propagating either in free space or in low-loss optical fibers are excellent carriers of information over long distances.
\dani{Furthermore, the combination of highly efficient homodyne detection with photodetection greatly reduces the} \colin{required minimum} \dani{photodetection efficiency \cite{cavalcanti10,quintino12, araujo11,sangouard11,Brask12}.} Our starting point is to consider a state of the form \begin{equation} \ket{\psi_\alpha} = \cos\nu\ket{s,0}+\sin\nu\ket{g,\alpha}, \label{eqn:target_state} \end{equation} where $\ket{g}$, $\ket{s}$ are two atomic states and $\ket{0}$ and $\ket{\alpha}$ denote states of the electromagnetic field (vacuum and a coherent state, respectively) with \begin{equation} \ket{\alpha} := e^{-\frac{\abs{\alpha}^2}{2}}\sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}}\ket{n}, \end{equation} where $\ket{n}$ are the energy eigenstates of the field. As we will see, this state violates a Bell inequality even for low photodetection efficiency and can be well approximated with current technology. Although we motivated this work by considering a real atom, the state Eq. {\ref{eqn:target_state}} can in principle be realized on different platforms, \eg, superconducting qubits, quantum dots, nitrogen-vacancy centers and equivalent systems. We consider the \colin{CHSH} Bell inequality \cite{chsh69}, which states that the correlations obtained are nonlocal if the following inequality is violated: \begin{equation} \mean{\mathcal{B}}= \operatorname{Tr}(\rho\mathcal{B})\leq2, \end{equation} where $\rho$ is the quantum state under scrutiny and $\mathcal{B}$ denotes the Bell operator \begin{equation} \mathcal{B}=A_0\otimes B_0 + A_0\otimes B_1 + A_1\otimes B_0 -A_1\otimes B_1, \end{equation} with $A_i$ and $B_j$ being observables with outcomes $\pm1$. Here, we set the atomic observables as \begin{equation} A_0 = \cos\gamma \sigma_z + \sin\gamma \sigma_x , \quad A_1= \cos\gamma \sigma_z - \sin\gamma \sigma_x, \end{equation} where $\sigma_i$ are the usual Pauli matrices.
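The ingredients of the target state are easy to check numerically. The sketch below (an illustration of the definitions only, with an arbitrary Fock-space truncation) reproduces the mean photon number $|\alpha|^2$ and the vacuum overlap $\braket{0}{\alpha} = e^{-|\alpha|^2/2}$ for $|\alpha| = 2.1$, the value used later in the ideal-case optimization:

```python
import numpy as np
from math import exp

def coherent(alpha, nmax=60):
    """Fock-basis amplitudes of |alpha>, truncated at nmax photons."""
    n = np.arange(nmax + 1)
    log_fact = np.cumsum(np.log(np.maximum(n, 1)))   # log n!
    # c_n = exp(-|alpha|^2/2) alpha^n / sqrt(n!)
    return np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.exp(log_fact / 2)

a = coherent(2.1)
norm = np.sum(np.abs(a) ** 2)                        # ~ 1 (tail negligible)
nbar = np.sum(np.arange(len(a)) * np.abs(a) ** 2)    # mean photon number ~ 4.41
vac = a[0]                                           # <0|alpha> = e^{-2.205}
```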
In the photonic part we consider dichotomized photodetection and $X$ quadrature operators \dani{(homodyne detection)}\cite{cavalcanti10,quintino12}: \begin{equation} B_0=2\ketbra{0}{0} - \begin{picture}(8,8)\put(0,0){1}\put(4.8,0){\line(0,1){7}}\end{picture}, \quad B_1= 2\int_{-b}^b \dint x\,\ketbra{x}{x} - \begin{picture}(8,8)\put(0,0){1}\put(4.8,0){\line(0,1){7}}\end{picture}, \end{equation} where $\ket{x}$ is the eigenstate of the quadrature operator $X = \frac{a + a^{\dag}}{\sqrt{2}}$, and $a$ is the annihilation operator of the field. \begin{figure}[ht] \centering \begin{tikzpicture} \begin{axis}[ xlabel=$\eta$, ylabel=$|t_{\rm line}|^2$, legend style={anchor=north west,cells={anchor=west},at={(0.05,.3)}, font = \small} ] \addplot[color=red,domain=0.6666:1,dashed]{2/(3*x)}; \addlegendentry{$\eta |t_{\rm line}|^2 = 2/3$} \addplot[color=green,mark=none,dashdotted] coordinates {(1, 0.615395) (0.91, 0.64827) (0.8, 0.698019) (0.7, 0.749074) (0.62, 0.797871) (0.55, 0.846303) (0.477214, 0.9) (0.42, 0.948192) (0.366631, 1)}; \addlegendentry{Ref. \cite{sangouard11}} \addplot[color=blue,mark=none] coordinates {(1,0.53701)(0.99631,0.538)(0.91466,0.561)(0.834,0.58592)(0.756,0.61234)(0.67932,0.641)(0.605,0.67133)(0.53292,0.704) (0.46235,0.739)(0.395,0.776)(0.329,0.81544)(0.266,0.858)(0.206,0.90201)(0.148,0.94901)(0.093,0.999)(0.092001,1)}; \addlegendentry{$\ket{\psi_\alpha}$} \addplot[only marks,color=black,mark=x] coordinates {(0.09, 1) (1, 0.54) (.8, .8) (1, 1)}; \node[color=black,font=\small] at (axis cs:0.95, 1) {A}; \node[color=black,font=\small] at (axis cs:0.75, 0.8) {B}; \node[color=black,font=\small] at (axis cs:0.04, 1) {C}; \node[color=black,font=\small] at (axis cs:0.95, 0.54) {D}; \end{axis} \end{tikzpicture} \caption{\textbf{Critical line above which \dani{nonlocal correlations can be obtained.}} The blue line corresponds to the ideal state of our proposal. The parameters $\alpha, \gamma, \nu, b$ were optimized for each point.
For comparison, we include the curve $\eta |t_{\rm line}|^2 = 2/3$ (red) that results from the Eberhard bound \cite{eberhard_93,larsson01}, and the curve from the best experimental proposal to date in Ref.~\cite{sangouard11} (green). For the sake of illustration, we give the specific numbers for the points represented by crosses \colin{A, B, C and D in Table \ref{tab:Ideal_pts}.}} \label{comparison} \end{figure} \begin{table}[h!] \centering \begin{tabular}[b]{*{5}{|c}|} \hline & A & B & C & D\\ \hline $|t_{\rm line}|^2$ & 1 & 0.8 & 1 & 0.55 \\ \hline $\eta$ & 1 & 0.8 & 0.15 & 1 \\ \hline $|\alpha|$ & 2.1 & 2.33 & 3.35 & 2.38\\ \hline $\gamma$ & 0.55 & 0.34 & 0.14 & 0.03 \\ \hline $\nu$ & 0.77 & 0.66 & 0.16 & 0.33\\ \hline $b$ & 0.53 & 0.53 & 0.34 & 0.44 \\ \hline $\mean{\mathcal{B}}$ & 2.32 & 2.07 & $2^+$ & $2^+$\\ \hline \end{tabular} \caption{{\bf Optimized parameters for 4 points on Figure {\ref{comparison}}.} The state and measurement parameters, $|\alpha|,\nu,\gamma$ and $b$, are each numerically optimized, given some detector efficiency $\eta$ and some transmission $|t_{\rm line}|^2$.} \label{tab:Ideal_pts} \end{table} \dani{Since quadrature measurements can be made very efficient (nearly perfect), there will be an asymmetry in the total efficiency of the photonic measurements: the total homodyning efficiency will be determined essentially by the transmission losses, while the total photodetection efficiency will be composed of the transmission and the intrinsic photodetector efficiency. We thus model the transmission losses with a beam splitter with transmittance $t_{\rm line}$ (affecting both the photodetector and the homodyning apparatus) and the intrinsic photodetector efficiency as a beam splitter with transmittance $\sqrt{\eta}$ followed by a perfect detector.} Figure {\ref{comparison}} summarizes the results.
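The dichotomized photonic observables can also be checked numerically in a truncated Fock basis. The sketch below is our own illustration (truncation size and integration grid are arbitrary choices): for the point-A window $b=0.53$ it builds $B_0$ and $B_1$ and verifies that $B_0^2=\openone$, that the spectrum of $B_1$ lies in $[-1,1]$, and that $\bra{0}B_1\ket{0} = 2\,\mathrm{erf}(b)-1 \approx 0.093$.

```python
import numpy as np
from math import erf

NMAX, b = 12, 0.53                 # Fock truncation; quadrature window (point A)

def ho_wavefunctions(nmax, x):
    """<x|n> for n=0..nmax, for X=(a+a^dag)/sqrt(2), via the stable recurrence."""
    psi = np.zeros((nmax + 1, x.size))
    psi[0] = np.pi ** -0.25 * np.exp(-x ** 2 / 2)
    if nmax >= 1:
        psi[1] = np.sqrt(2.0) * x * psi[0]
    for n in range(1, nmax):
        psi[n + 1] = (np.sqrt(2.0 / (n + 1)) * x * psi[n]
                      - np.sqrt(n / (n + 1)) * psi[n - 1])
    return psi

# midpoint rule for Q_mn = int_{-b}^{b} psi_m(x) psi_n(x) dx
N = 20000
dx = 2 * b / N
x = -b + dx * (np.arange(N) + 0.5)
waves = ho_wavefunctions(NMAX, x)
Q = waves @ waves.T * dx                       # positive operator, 0 <= Q <= 1

B0 = 2 * np.diag([1.0] + [0.0] * NMAX) - np.eye(NMAX + 1)   # 2|0><0| - 1
B1 = 2 * Q - np.eye(NMAX + 1)                               # 2 int|x><x|dx - 1
evals = np.linalg.eigvalsh(B1)
```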
In ideal conditions (\textit{i.e.}\@\xspace point A in Table \ref{tab:Ideal_pts}), the CHSH value can reach $\mean{\mathcal{B}}=2.32$ for $|\alpha|=2.1$, which translates to an average of 4.41 photons in the coherent state. Moreover, a violation can be found even for $\eta=0.15$ with $|t_{\rm line}|^2=1$ (Point C in Table \ref{tab:Ideal_pts}), or for a transmittance $|t_{\rm line}|^2=0.55$ with $\eta = 1$ (Point D in Table \ref{tab:Ideal_pts}). Also, for a detection efficiency of $\eta =0.8$ and a transmission of $|t_{\rm line}|^2 = 0.8$, we find a CHSH value of $2.07$ (Point B in Table \ref{tab:Ideal_pts}). For comparison, we have also included the curve $\eta|t_{\rm line}|^2 = 2/3$ that results from the Eberhard bound \cite{eberhard_93,larsson01}, which is the required efficiency and transmission to perform a loophole-free experiment with photon polarization, and the curve from the best experimental proposal to date involving an atom and a photonic mode \cite{sangouard11}. \subsection*{Realistic scenario} \label{sec:real_case} We now present a scheme aiming to produce the state \eqref{eqn:target_state} in a cavity QED scenario. The first part of the scheme, \comm{realizing the state Eq. {\ref{eqn:target_state}},} is depicted in Figure {\subfigref[a]{fig:cavity}}. An input field is incident on a cavity containing a single three-level atom prepared initially in a superposition of the states $\ket{g}$ and $\ket{s}$: $\ket{\psi}_{\rm atom} = \cos{\nu}\ket{s} + \sin{\nu}\ket{g}$. Fig.~\subfigref[b]{fig:cavity} shows the level structure of the atom. The transition $\ket{g}-\ket{e}$ is coupled dispersively to the cavity with detuning $\Delta = \omega_{\rm ge} - \omega_{\rm c}$, where $\omega_{\rm c}$ is the cavity frequency and $\omega_{\rm ge}$ is the frequency of the transition $\ket{g}-\ket{e}$. The cavity is asymmetric, with mirror decay rates $\kappa_{\rm b} \ll \kappa_{\rm c}$ and total cavity decay rate $\kappa = \kappa_{\rm b} + \kappa_{\rm c}$.
This means that the field leaks out of the cavity essentially only through the right mirror (cf.\ Fig.~\subfigref[a]{fig:cavity}). The fact that one needs an asymmetric cavity is not \textit{a priori} obvious and will be explained below. We assume that the level $\ket{s}$ is detuned far enough that it does not interact with the cavity field. \begin{figure}[ht!] \includegraphics[width = \columnwidth]{figure_2_abc_new_rm.pdf} \caption{\textbf{State preparation.} \textbf{a.} An input field is incident on a cavity with an atom in the dispersive regime. This causes some reflection and some transmission of the cavity field. We assume that the mirror on the right has a lower reflectivity than the mirror on the left, such that the field predominantly leaves the cavity to the right. The final beam splitter performs a displacement operation which creates a superposition of propagating vacuum and coherent state. \textbf{b.} Level structure of the atom. The cavity field is dispersively coupled to the $\ket{g}-\ket{e}$ transition, and the $\ket{s}$ state is assumed to be far detuned such that it does not interact with the cavity field. \colin{{\bf c.} Beam splitter convention used. A displacement operation is applied on the mode labeled $\hat{c}_{\rm out}$ by combining it with a local oscillator (mode labeled $\hat{d}_{\rm LO}$).
}} \label{fig:cavity} \end{figure} Since the cavity transmission depends on the atomic state, the atom-cavity system acts as a filter that transforms the initial state $\ket{\alpha_{\rm in}}\ket{\psi}_{\rm atom}$ in the following way: \begin{align} \ket{\alpha_{\rm in}} \otimes \ket{g} & \longrightarrow \ket{r_{\rm g} \alpha_{\rm in}}_{\rm refl} \otimes \ket{g} \otimes \ket{t_{\rm g} \alpha_{\rm in}}_{\rm trans} \\ \ket{\alpha_{\rm in}} \otimes \ket{s} & \longrightarrow \ket{r_{\rm s} \alpha_{\rm in}}_{\rm refl} \otimes \ket{s} \otimes \ket{t_{\rm s} \alpha_{\rm in}}_{\rm trans} \label{eqn:principle} \end{align} where $\ket{}_{\rm refl}$ and $\ket{}_{\rm trans}$ represent the reflected and transmitted photonic states, $\alpha_{\rm in}$ is the complex amplitude of the input coherent state, and $r_j$ and $t_j$ ($j=s,g$) are the amplitude reflectivity and transmissivity of the empty cavity (atom in the state $\ket{s}$, \textit{i.e.}\@\xspace $j=s$) and of the cavity with the atom (atom in the state $\ket{g}$, \textit{i.e.}\@\xspace $j=g$), with $|r_j|^2 + |t_j|^2 = 1$. Notice that we assume that both the reflected and transmitted fields are still coherent states (see Methods for details). In order to produce a state of the form \eqref{eqn:target_state}, one needs to meet the following conditions: $t_{\rm s} = 0$ with $\braket{r_{\rm s} \alpha_{\rm in}}{r_{\rm g} \alpha_{\rm in}} \cong 1$ and $\ket{t_{\rm g} \alpha_{\rm in}} = \ket{\alpha}$, where $\ket{\alpha}$ is the desired coherent state in Eq. {\ref{eqn:target_state}}. An alternative is to apply a displacement to the transmitted field \cite{matteo_g.a._displacement_1996}. \colin{The displacement is simply the result of combining a field exiting the atom-cavity system ($\hat{c}_{\rm out}$ mode) with a coherent local oscillator $\ket{\beta}$ ($\hat{d}_{\rm LO}$ mode) on a beam splitter, as depicted in Figure {\subfigref[c]{fig:cavity}}}.
The output port of interest is the port with the output field \colin{$\ket{t_{\rm BS}\hat{c}_{\rm out} - r_{\rm BS}\hat{d}_{\rm LO}}$}, where $r_{\rm BS}$ and $t_{\rm BS}$ are the amplitude reflectivity and transmissivity of the beam splitter, respectively. \colin{For a coherent state $\ket{\alpha_x}$ in the $\hat{c}_{\rm out}$ mode, one can achieve a zero amplitude in this output port by tuning the amplitude of $\ket{\beta}$ such that $r_{\rm BS}\beta = t_{\rm BS}\alpha_x$.} In our case, $\alpha_x=t_{\rm s}\alpha_{\rm in}$, and one can obtain the state $\ket{s,0}$ contained in Eq. {\ref{eqn:target_state}}. \begin{figure}[ht!] \centering \includegraphics[width=\columnwidth]{figure_3_new_rm.pdf} \caption{\textbf{Possible experimental setup.} A pulse of light is first split on a beam splitter (blue). Part of it goes to the atom-cavity system (lower branch) described in Fig. \ref{fig:cavity}. The other part of the split pulse passes through an auxiliary cavity (upper branch), which is used to produce a pulse with the same spectral properties as the one produced when the atom is in the $\ket{s}$ state. This requirement has to be met for a perfect displacement at the second beam splitter. Both beam splitters have the property that $|t_{\rm BS}|^2 \gg |r_{\rm BS}|^2$. After the state preparation, the photonic part of the state travels to a location that, given the times necessary for choosing the measurement and registering the results, assures space-like separation, and is subjected to either photodetection or homodyne measurement. Possible detection schemes for the atomic state are explained in the text (sec. \ref{sec:expt_feasible}).} \label{fig:setup} \end{figure} The full Bell test setup is shown in Figure {\ref{fig:setup}}. The beam splitters are identical, with the property $|t_{\rm BS}|^2 \gg |r_{\rm BS}|^2$.
Note that, in order to satisfy the condition $r_{\rm BS}\beta = t_{\rm BS}\alpha_x$, the spatiotemporal modes of the two coherent states $\alpha_x$ and $\beta$ must be identical. This can be achieved, e.g., by using an auxiliary cavity similar to the one containing the atom (see Fig. \ref{fig:setup}), where $\alpha_x = t_{\rm s}\alpha_{\rm in}$. Alternatively, instead of using the auxiliary cavity, one might look for an atom-cavity-laser configuration in which the coupling of the field polarization connecting the $\ket{s}$ state to some excited state is negligible compared to the dispersive coupling of the $\ket{g}-\ket{e}$ transition, while the orthogonal polarization couples to none of $\ket{s}$, $\ket{g}$ or $\ket{e}$. In such a situation one might use this field to produce the displacement pulse using only a single cavity. The problem of an input coherent pulse with frequency spectrum $s_{\rm L}(\omega)$ and amplitude $\alpha_{\rm in}$ impinging on an atom-cavity system can be treated using input-output theory \cite{walls_quantum_2008} (see Methods for details). This gives the form of the atomic-state-dependent amplitude reflection and transmission coefficients, $r_j(\omega)$ and $t_j(\omega)$, $j=s,g$, as functions of the input pulse. \dani{The final atom-field state is obtained upon tracing away non-radiative losses in the cavity mirrors, the fields reflected by the cavity, the field emitted by the atom, and the other output port of the beam splitter.
This naturally leads to coherence (and entanglement) loss and results, when the atom is initially prepared in the state $\cos\nu \ket{s} + \sin\nu e^{i \phi}\ket{g}$, in the mixed state} \begin{equation} \rho = V \proj{\psi_{\rm f}}{\psi_{\rm f}} + (1-V) \sigma, \label{eqn:real_state} \end{equation} where \begin{align} \ket{\psi_{\rm f}} &= \cos\nu \ket{s,0} + \sin\nu \ket{g,\{\tilde{\alpha}\}}, \label{eqn:final_state}\\ \sigma &= \cos^2\nu \proj{s,0}{s,0} + \sin^2\nu \proj{g,\{\tilde{\alpha}\}}{g,\{\tilde{\alpha}\}}, \end{align} and the visibility $V$ is (see Methods for details), \begin{align} V &= \exp\Big[-F \frac{|\tilde{\alpha}|^2}{2 t_{\rm BS}^2} \Big], \label{eqn:vis}\\ F&= r_{\rm BS}^2+f_{\rm cav} + I_{s_{\rm L}} \inv{4 C} (1+f_{\rm cav}), \label{eqn:vF} \\ I_{s_{\rm L}} &= \frac{\int d\omega |s_{\rm L}(\omega)\inv{D(\omega)}|^2}{\int d\omega |s_{\rm L}(\omega)\inv{D(\omega)} \inv{1+i2 (\omega-\omega_{\rm c})/\kappa}|^2}, \\ D(\omega) &=(\half\Gamma + i(\omega-\omega_{\rm ge}))(\kappa /2 +i(\omega-\omega_{\rm c})) +g^2, \label{eqn:D_w} \end{align} where $C$ is the usual single-atom cooperativity $C=\frac{g^2}{\Gamma \kappa}$ and $f_{\rm cav} = \frac{\kappa_{\rm b} + \kappa_{\rm L}}{\kappa_{\rm c}} $ is a factor which describes the asymmetry of the cavity. 
Here, the continuous frequency coherent state, $\ket{\{\tilde{\alpha}\}}$, is \begin{align} \ket{\{\tilde{\alpha}\}} &= {\rm exp} \big[ t_{\rm BS}\alpha_{\rm in} \int d\omega\, s_{\rm L}(\omega) \pare{t_{\rm g}(\omega) - t_{\rm s}(\omega)} c^{\dag}_{\rm out}(\omega) +h.c.\big] \ket{0}, \\ |\tilde{\alpha}|^2 &= |t_{\rm BS}\alpha_{\rm in}|^2 \int d\omega \, \big| s_{\rm L}(\omega) (t_{\rm g}(\omega) - t_{\rm s}(\omega))\big|^2, \label{eqn:alpha2} \end{align} with the transmission coefficients \begin{align} t_{\rm s}(\omega) &= \frac{\sqrt{\kappa_{\rm b} \kappa_{\rm c}}}{\kappa /2+i(\omega-\omega_{\rm c})}, \label{eqn:ts}\\ t_{\rm g}(\omega) &= \frac{\sqrt{\kappa_{\rm b}\kappa_{\rm c}}}{D(\omega)}\pare{\half\Gamma + i(\omega-\omega_{\rm ge})}, \label{eqn:tg} \end{align} and the atomic state is initially prepared with $\phi$ such that the final state is of the form \eqref{eqn:final_state}. In the above, $s_{\rm L}(\omega)$ is the frequency spectrum of the laser field, $g$ is the coupling constant of the cavity mode to the $\ket{g}-\ket{e}$ transition, $\Gamma$ is the transverse decay rate of the $\ket{e}$ state and $\kappa_{\rm L}$ is the loss rate of the cavity mirrors. Equation \eqref{eqn:vis} shows that $V \to 1$ as $F\to 0$. Achieving $F\to0$ requires three conditions: $r_{\rm BS} \to 0$, $f_{\rm cav}\to 0$ and $C\to \infty$. In practice, the first condition means that one should use a small value of the beam splitter reflectivity and adjust the amplitude of the local oscillator such that the condition $r_{\rm BS} \beta = t_{\rm BS} t_{\rm s} \alpha_{\rm in}$ is still satisfied. The second condition is precisely the requirement of an asymmetric cavity mentioned at the beginning of this section, and it depends only on the transmission and losses of the cavity mirrors used. For completeness, we have also included non-radiative mirror losses. The third condition means that one needs a large single-atom cooperativity.
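The transmission coefficients are straightforward to evaluate numerically. The sketch below uses parameters quoted later in Table \ref{tab:parameters} ($g/2\pi = 34$\,MHz, $\kappa/2\pi = 4.1$\,MHz, $\Gamma/2\pi = 2.6$\,MHz, $f_{\rm cav} = 4/100$) together with $g/\Delta = 1/10$; splitting $\kappa$ into $\kappa_{\rm b}$ and $\kappa_{\rm c}$ directly from $f_{\rm cav}$ (i.e., neglecting the mirror losses $\kappa_{\rm L}$) is our simplifying assumption. It confirms the large single-atom cooperativity $C \approx 108$ and the state-dependent filtering $t_{\rm g}(\omega_{\rm c}) \neq t_{\rm s}(\omega_{\rm c})$ on which the scheme relies.

```python
import numpy as np

twopi = 2 * np.pi
g, kappa, Gamma = twopi * 34e6, twopi * 4.1e6, twopi * 2.6e6
Delta = 10 * g                      # dispersive regime, g/Delta = 1/10
f_cav = 0.04                        # ~ kappa_b/kappa_c, mirror losses neglected
kappa_c = kappa / (1 + f_cav)
kappa_b = f_cav * kappa_c           # kappa = kappa_b + kappa_c
w_c = 0.0                           # frequencies measured from the bare cavity
w_ge = Delta                        # so that Delta = w_ge - w_c

def t_s(w):
    """Empty-cavity transmission, Eq. (t_s)."""
    return np.sqrt(kappa_b * kappa_c) / (kappa / 2 + 1j * (w - w_c))

def D(w):
    """Denominator D(omega) of the dressed cavity response."""
    return (Gamma / 2 + 1j * (w - w_ge)) * (kappa / 2 + 1j * (w - w_c)) + g ** 2

def t_g(w):
    """Transmission with the atom in |g>, Eq. (t_g)."""
    return np.sqrt(kappa_b * kappa_c) / D(w) * (Gamma / 2 + 1j * (w - w_ge))

C = g ** 2 / (Gamma * kappa)        # single-atom cooperativity, ~ 108
```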
One remarkable feature of this result is that for $C\gg 1$, the visibility $V$ (but of course not the state) is independent of the details of the spectrum of the input pulse. The final point to note is that to satisfy the approximations used in our derivation, we require a negligible probability of exciting the atom throughout the duration of the input pulse. This is important, since, to maintain some specific amplitude of the output coherent state $\ket{\{\tilde{\alpha}\}}$ after the beam splitter, one might have to use a very large amplitude input laser if the integral in equation \eqref{eqn:alpha2} is small. For the sake of this theoretical analysis, we consider the case of an input Gaussian pulse with the normalized spectrum \begin{equation} |s_{\rm L}(\omega)|^2 = \inv{\gamma_{\rm L}\sqrt{\pi}} e^{-\pare{\frac{\omega-\omega_{\rm L}}{\gamma_{\rm L}}}^2}, \end{equation} where $\omega_{\rm L}$ is the central laser frequency, and $\gamma_{\rm L}$ is the bandwidth of the pulse. \comm{The spectral shape of the input pulse we have chosen is arbitrary and the results should depend only on the pulse bandwidth.} To close the locality loophole, one has to propagate the outgoing pulse for some minimum distance, given by the larger of the measurement times of the atom and the pulse, which depends on the pulse duration. The pulse duration is given by the relevant time scale of the system, \textit{i.e.}\@\xspace, either by the input pulse duration or the cavity lifetime, whichever is greater. This means that, from the point of view of closing the locality loophole, it is useful to push the duration of the input pulse down to the cavity lifetime, but not necessarily further. Since the pulse duration and the cavity lifetime are proportional to the inverse of the pulse bandwidth $\gamma_{\rm L}$ and the cavity decay rate $\kappa$ respectively, the best case would be if $\gamma_{\rm L} \approx \kappa$. \begin{figure}[ht!] 
\centering \includegraphics[width = \columnwidth]{figure_4_new_rm.pdf} \caption{\textbf{Plots of the maximum probability of exciting the atom over the duration of the input pulse.} This figure shows plots of $\log_{10}[\max_t P_{\rm e}(t)]$ as a function of the parameter $\frac{g}{\kappa}$, assuming that the central laser frequency is on resonance with the bare cavity, for different values of $\frac{g}{\Delta}$. The left panel shows the case of an input pulse with minimal bandwidth, setting $g/\gamma_{\rm L} = \max(g/\kappa)$. The right panel shows the case where the input pulse bandwidth is set equal to the cavity bandwidth ($g/\gamma_{\rm L} = g/\kappa$). \colin{For $g/\Delta$ small enough, the probability of exciting the atom becomes insensitive to the precise value of $g/\Delta$. This fact is illustrated by the overlap of the curves corresponding to $g/\Delta=1/100$ and $g/\Delta=1/1000$.} The parameters used in this plot are $|\alpha|=2.1$, $f_{\rm cav} = 4/100$ and $g/\Gamma=5/3$.} \label{fig:validity} \end{figure} Fig.~\ref{fig:validity} shows plots of $\log_{10} (\max_t P_{\rm e}(t))$, the maximum probability of exciting the atom, as a function of the parameter $\frac{g}{\kappa}$ (the coupling strength over the total cavity decay rate), assuming that the central laser frequency is on resonance with the bare cavity, for $\frac{g}{\Delta}=1/10,1/100,1/1000$. The left panel corresponds to a fixed minimal bandwidth of the input pulse, in the sense that we choose $\gamma_{\rm L}$ such that $g/\gamma_{\rm L} = \max(g/\kappa)$. The right panel corresponds to the situation where the input pulse bandwidth is set equal to the cavity bandwidth in order to minimize the pulse duration. In the following, we will consider the regions where $\max_t P_{\rm e}(t)\leq 0.1$ to be the parameter regimes where our analysis is suitable.
As is evident from Fig.~\ref{fig:validity}, for some large $g/\gamma_{\rm L}$ there always exists some $g/\kappa$ for which our solutions are self-consistent, which is not the case when $\gamma_{\rm L} = \kappa$, the situation minimizing the pulse duration. We can thus look for some intermediate value of $\gamma_{\rm L}$ (with all the other parameters fixed) which minimizes the pulse duration while still respecting the validity of the approximations used. This is the strategy we employ in the next section, where we discuss a possible implementation of our scheme using realistic experimental parameters. \subsection*{Experimental feasibility} \label{sec:expt_feasible} So far, we have theoretically described a general cavity scenario. In the following, we focus our attention on possible implementations of our scheme in optical cavities. This implementation has the advantage of allowing large propagation distances thanks to the availability of low-loss optical fibers. We start with the general constraints on measurement times and propagation distances imposed by the locality loophole. To close the locality loophole, one requires the start of the measurement event on one side to be space-like separated {\col{from}} the end of the measurement event on the other side. In other words, we need the space-time coordinates of the end of the atomic measurement and of the choice of measurement basis (photon detection or homodyning) on the photonic side to be space-like separated, and vice versa.
This constraint translates into the minimum {\col{propagation}} distance \begin{equation} d \geq c\pare{\max\pare{(\Delta t_{\rm ph,c} + \Delta t_{\rm ph,m}) , (\Delta t_{\rm at,c} + \Delta t_{\rm at,m})}}, \label{eqn:locality} \end{equation} where $c$ is the speed of light in free space, $\Delta t_{\rm at/ph,c}$ is the time required on the atomic/photonic side to choose the measurement settings, which also includes the time to change the measurement basis, and $\Delta t_{\rm at/ph,m}$ is the time required on the atomic/photonic side to perform the actual measurement, which necessarily includes the duration of the light pulse. We assume that the main time constraint is set by the atomic measurement, which is typically slower than the measurement of photons. The above equation shows that, even if we used a very short pulse, we would still need to propagate it at least to a distance given by the sum of the atomic measurement time and the time required to choose a basis (a rotation in the space of $\ket{s}$ and $\ket{g}$, {\col{typically using RF pulses}}), multiplied by $c$. The best trade-off in this case is thus to have $\Delta t_{\rm ph,m} \approx \Delta t_{\rm at,m}+\Delta t_{\rm at,c}$. This is possible because the basis choice on the photonic side can be very fast \cite{weihs_violation_1998}. One possibility for a fast atomic state detection would be to use a two-photon ionization technique \cite{henkel2010highly}. This technique has been performed in free-space configurations and can achieve 98\% efficiency in less than \SI{1}{\micro s}. Although the restriction of a small cavity might make such a photo-ionization technique technically challenging, proposals exist to make focusing cavities with large physical volumes (lengths of $\sim$ 1\,cm) while still maintaining large coupling constants ($g \sim$ 100\,MHz) \cite{syed2010,morrin1994}, potentially making such techniques compatible.
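As a quick sanity check of the locality constraint, the sketch below evaluates the minimum propagation distance for a total atomic measurement-plus-choice time of \SI{1}{\micro s}, the value assumed in our feasibility analysis; the particular split between choice and measurement times, and the fast ($\sim$10\,ns) photonic basis choice, are illustrative assumptions on our part.

```python
c = 299_792_458.0                  # speed of light in free space, m/s

def d_min(dt_ph_c, dt_ph_m, dt_at_c, dt_at_m):
    """Minimum distance d >= c * max(photonic side, atomic side) closing
    the locality loophole; all times in seconds."""
    return c * max(dt_ph_c + dt_ph_m, dt_at_c + dt_at_m)

# illustrative split: 1 us total on the atomic side, a ~10 ns photonic
# basis choice, and a pulse measurement matched to the atomic side
d = d_min(dt_ph_c=10e-9, dt_ph_m=0.99e-6, dt_at_c=0.2e-6, dt_at_m=0.8e-6)
# d ~ 300 m
```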
Here, we assume $\Delta t_{\rm at,m}+\Delta t_{\rm at,c} = \SI{1}{\micro s}$. We should thus seek to produce a pulse which requires a total measurement duration of at most \SI{1}{\micro s}. {\col{For the Gaussian pulse example considered in our calculation, we take the total pulse measurement duration to be $\Delta t_{\rm ph,m} = 6\sqrt{2} / \gamma_{\rm L,m}$. The factor $6\sqrt{2}$ is chosen such that if the bandwidth of the pulse incident on the cavity is $\gamma_{\rm L,m}$, then the neglected tails of the outgoing pulse correspond to <0.5\% of the total integrated intensity of the outgoing pulse (for the parameters used in Table \ref{tab:parameters}, see below). For a measurement time of $\SI{1}{\micro s}$, this corresponds to $\gamma_{\rm L,m}= 2\pi \times 1.35\,\textrm{MHz}$. It also means that for outgoing pulse bandwidths $\gamma_{\rm L} > \gamma_{\rm L,m}$ (shorter pulses), the total measurement time is set by $\Delta t_{\rm at,m} + \Delta t_{\rm at,c} = \SI{1}{\micro s}$ and the minimum propagation distance is 300 m.}} Next, we use the experimental parameters given in \cite{birnbaum2005photon,hood_thesis,ritter_elementary_2012}. These experiments use \SI{852}{nm} and \SI{780}{nm} light respectively, which are subject to about \SI{2}{dB/km} loss in optical fibers. In the following, we compare both experiments and possible modifications. As discussed previously, we will work in the large detuning regime, $\Delta \gg g$. We choose the value $g/\Delta = 1/10$, which is arbitrary but subject to experimental constraints. This ensures that we respect all approximations used, as long as we also satisfy the condition $\max_t P_{\rm e}(t) \leq 0.1$. {\col{The figure of merit in both cases will be $\gamma_{0.1}$, which is the largest acceptable laser bandwidth that satisfies the condition $\max_t P_{\rm e}(t) \approx 0.1$, and the state production visibility \eqref{eqn:vis}, which is computed using the actual laser bandwidth $\min(\kappa,\gamma_{0.1},\gamma_{\rm L,m})$.
We also include the effect of finite pulse measurement time on the visibility, using \eqref{eqn:vis}-\eqref{eqn:D_w}. We assume $|\alpha|=2.1$ and $|r_{\rm BS}|^2 =0.001$. Table \ref{tab:parameters} summarizes the results.}} \begin{table} [h!] \begin{center} \begin{tabular}{*{7}{|c}|} \hline $g/(2\pi)$ & $\kappa/(2\pi)$ & $\Gamma/(2\pi)$ & $f_{\rm cav}$ & $\gamma_{0.1}/(2\pi)$ & $V$ &$d_{min}$\\ \hline 34 MHz & 4.1 MHz & 2.6 MHz & 10/4 & 13.3 MHz & 0.4\% & 300 m\\ \hline 34 MHz & 4.1 MHz & 2.6 MHz & 14/100 & 63.3 MHz & 72.7\% & 300 m \\ \hline 34 MHz & 4.1 MHz & 2.6 MHz & 4/100 & 65.2 MHz & 90.8\%& 300 m \\ \hline 5 MHz & 3 MHz & 3 MHz & 14/100 & 1.1 MHz & 56.3\%& 370 m\\ \hline 5 MHz & 3 MHz & 3 MHz & 4/100 & 1.3 MHz & 71.3\%& 310 m\\ \hline 5 MHz & 1.5 MHz & 3 MHz & 4/100 & 3.1 MHz & 77.2\% & 300 m\\ \hline \end{tabular} \caption{\textbf{Expected visibilities {\col{and minimum propagation distances}} for available experimental parameters for $^{133}$Cs (first 3 rows) and $^{87}$Rb (last 3 rows).} All parameters ($g/(2\pi),\kappa/(2\pi),\Gamma/(2\pi),f_{\rm cav}$) in the first and fourth row are actual cavity parameters (including mirror losses) obtained from \cite{birnbaum2005photon,hood_thesis,ritter_elementary_2012}. The second row shows the effect on the visibility and $\gamma_{0.1}$ by reducing $f_{\rm cav}$ to the current value in the experiment of Ref.~\cite{ritter_elementary_2012}. The third row is obtained by neglecting the mirror losses, which further decreases the value of $f_{\rm cav}$. The fifth row shows the effect of neglecting mirror losses, and the last row shows the effect of increasing $g/\kappa$. 
{\col{Notice that $\gamma_{0.1}>2\pi \times \SI{1.35}{MHz}$ in the first three and last rows, which means that the propagation distance is limited by the detection time on the atomic side rather than by the pulse duration, and corresponds to a propagation distance of 300 m.}} The state production visibility $V$ is computed assuming $|\alpha|=2.1$, $|r_{\rm BS}|^2 = 0.001$, using the laser bandwidth $\min(\kappa,\gamma_{0.1},\gamma_{\rm L,m})$ \colin{and taking into account the truncation of the pulse due to finite measurement time.} } \label{tab:parameters} \end{center} \end{table} The parameters of Refs.~\cite{birnbaum2005photon,hood_thesis} are $(g/(2\pi),\kappa/(2\pi),\Gamma/(2\pi),f_{\rm cav})$ = (34 MHz, $\SI{4.1}{MHz}$, 2.6\,MHz, 10/4) (we assume that the experiment implements a symmetric cavity). Due to the symmetry of the cavity, one obtains a small effective visibility. Assuming that the cavity could be made asymmetric, reducing $f_{\rm cav}$ to the current value in the experiment of Ref.~\cite{ritter_elementary_2012}, the visibility dramatically increases to 72.7\% (second row). Neglecting the mirror losses, we show in the third row that $V$ can be as high as 90.8\%. The parameters of Ref.~\cite{ritter_elementary_2012} are $(g/(2\pi),\kappa/(2\pi),\Gamma/(2\pi),f_{\rm cav})$ = ($\SI{5}{MHz}$, $\SI{3}{MHz}$, $\SI{3}{MHz}$, 14/100). As the setup stands, the visibility is 56.3\%. However, neglecting the mirror losses, $V$ increases to 71.3\% (fifth row). If it were further possible to reduce the total cavity decay rate by a factor of 2, thus increasing the cooperativity while maintaining the same asymmetry, the visibility further increases to 77.2\% (sixth row). The required incident photon number can be calculated from \eqref{eqn:alpha2}. For the parameters in Table \ref{tab:parameters}, requiring the resulting photon number $|\tilde{\alpha}|^2 = 2.1^2$, one needs $|\alpha_{\rm in}|^2\approx 25$--$400$ input photons.
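The bandwidth quoted above, $\gamma_{\rm L,m} = 2\pi\times 1.35$\,MHz, follows directly from the convention $\Delta t_{\rm ph,m} = 6\sqrt{2}/\gamma_{\rm L,m}$ with $\Delta t_{\rm ph,m} = \SI{1}{\micro s}$; a one-line check:

```python
import math

dt_meas = 1e-6                         # total pulse measurement time, s
gamma_Lm = 6 * math.sqrt(2) / dt_meas  # outgoing-pulse bandwidth, rad/s
print(round(gamma_Lm / (2 * math.pi) / 1e6, 2))  # 1.35 (MHz), as quoted
```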
For the specific case of $^{87}$Rb, one might also identify possible states playing the role of $\ket{g}$, $\ket{s}$ and $\ket{e}$. We may choose for example the $\ket{s}$ state to be the hyperfine state $\ket{5S_{1/2}, F = 1, m_F = 1}$, the $\ket{g}$ state $\ket{5S_{1/2}, F = 2, m_F = 2}$ and the $\ket{e}$ state $\ket{5P_{3/2}, F = 3, m_F = 3}$. In this case, the input pulse and cavity field would have a $\sigma_+$ polarization coupling the $\ket{g}-\ket{e}$ transition. Due to the large detuning of the hyperfine states (\SI{6.8}{GHz}), and the fact that the $s$ state is far-detuned to any other $\sigma_+$ transitions, these states are possible candidates for the experiment. \begin{figure}[ht] \centering \begin{tikzpicture} \begin{axis}[xlabel=$\eta$,ylabel=$|t_{\rm line}|^2$,legend style={anchor=north west,cells={anchor=west},at={(0.05,.5)}}] \addplot[color=magenta,mark=none] coordinates {(1,0.67701)(0.961,0.69014)(0.92206,0.705)(0.884,0.71903)(0.846,0.73405)(0.809,0.75)(0.772,0.766)(0.735,0.78211)(0.699,0.79912)(0.664,0.817)(0.629,0.835) (0.59401,0.854)(0.56003,0.873)(0.527,0.89207)(0.49402,0.913)(0.46201,0.934)(0.431,0.955)(0.4,0.977)(0.36901,1)}; \addplot[color=green,mark=none] coordinates {(1,0.82699)(0.95,0.84015)(0.899,0.85456)(0.849,0.86973)(0.8,0.88567)(0.752,0.90241)(0.70489,0.92)(0.658,0.93875)(0.613,0.958)(0.568,0.9786)(0.52431,1)}; \addplot[color=red,mark=none] coordinates {(1,0.91154)(0.949,0.92114)(0.89719,0.932)(0.847,0.94366)(0.798,0.9562)(0.749,0.96998)(0.702,0.98445)(0.65568,1)}; \addplot[color=black,mark=none] coordinates {(1,0.98057)(0.97312,0.984)(0.947,0.9876)(0.921,0.99145)(0.895,0.99558)(0.86903,1)}; \addplot[color=blue,mark=none,dashed] coordinates {(1,0.53701)(0.99631,0.538)(0.91466,0.561)(0.834,0.58592)(0.756,0.61234)(0.67932,0.641)(0.605,0.67133)(0.53292,0.704) (0.46235,0.739)(0.395,0.776)(0.329,0.81544)(0.266,0.858)(0.206,0.90201)(0.148,0.94901)(0.093,0.999)(0.092001,1)};% \node[color=magenta,font=\small] at (axis cs:0.37, 1.02) {2}; 
\node[color=green,font=\small] at (axis cs:0.53, 1.02) {2.05}; \node[color=red,font=\small] at (axis cs:0.655, 1.02) {2.1}; \node[color=black,font=\small] at (axis cs:0.87, 1.02) {2.15}; \node[color=blue,font=\small] at (axis cs:0.092,1.02) {$\ket{\psi_\alpha}$}; \addplot[only marks,color=black,mark=x] coordinates {(0.629,0.835) (0.752,0.90241) (1, 1)}; \node[color=black,font=\small] at (axis cs:0.97, 1) {A}; \node[color=black,font=\small] at (axis cs:0.7, 0.90241) {B}; \node[color=black,font=\small] at (axis cs:0.58, 0.835) {C}; \end{axis} \end{tikzpicture} \caption{\textbf{Contour lines of $\mean{\mathcal{B}}$ as a function of $\eta$ and $|t_{\rm line}|^2$.} In this plot we fixed $|\alpha|=2.1$ and optimized the measurement and state parameters $\gamma, b, \nu$ for each point. The parameters used are $(g/(2\pi),\kappa/(2\pi),\Gamma/(2\pi),f_{\rm cav})$ = (34\,MHz, 4.1\,MHz, 2.6\,MHz, 14/100) and $\abs{r_{\rm BS}}^2 = 0.001$ (second row of Table \ref{tab:parameters}). The result for the ideal state in \eqref{eqn:target_state} (dashed line) is included for comparison. \colin{For the sake of illustration we give the specific numbers for the points represented by crosses A, B and C in Table \ref{tab:realistic_pts}.} } \label{contourlines} \end{figure} \begin{table}[h!] \centering \begin{tabular}[b]{*{4}{|c}|} \hline & A & B & C\\ \hline $|t_{\rm line}|^2$ & 1 & 0.9 & 0.83 \\ \hline $\eta$ &1 & 0.75 & 0.63\\ \hline $\gamma$ & 0.42 & 0.33 & 0.02 \\ \hline $\nu$ & 0.75 & 0.6 & 0.03 \\ \hline $b$ & 0.53 & 0.56 & 0.58 \\ \hline $\mean{\mathcal{B}}$ & 2.17 & 2.05 & $2^+$\\ \hline \end{tabular} \caption{{\bf Optimized parameters for 3 points on Figure {\ref{contourlines}}.} These parameters result from numerical optimization of the state \eqref{eqn:real_state}, given some detector efficiency $\eta$ and some transmission $|t_{\rm line}|^2$.
We have fixed $|\alpha|=2.1$ and used the visibility $V$ \eqref{eqn:vis} from the second row of Table \ref{tab:parameters}.} \label{tab:realistic_pts} \end{table} We now maximize $\mean{\mathcal{B}}$ as a function of transmission and detector inefficiency for a set of realistic parameters. We thus set $|\alpha| = 2.1$, $(g/(2\pi),\kappa/(2\pi),\Gamma/(2\pi),f_{\rm cav})$ = (34\,MHz, 4.1\,MHz, 2.6\,MHz, 14/100) and $\abs{r_{\rm BS}}^2 = 0.001$ (second row of Table \ref{tab:parameters}). Notice that we have included possible non-radiative losses from the cavity mirrors. Given these constraints, the maximal CHSH violation is $2.17$. Moreover, we can attain a violation even for $\eta=0.37$ with $|t_{\rm line}|^2=1$, or for a transmittance $|t_{\rm line}|^2=0.68$ with $\eta=1$. Figure {\ref{contourlines}} summarizes these results. Note that the higher the visibility of the produced state, the closer one gets to the ideal scenario (dashed line in Figure {\ref{contourlines}}). \section*{Discussion} In this work, we have shown that using current technology, it is possible to produce a hybrid atom-photon entangled state that still violates the CHSH inequality up to a value of 2.17. We also showed that we can attain a violation even for a low photon counting efficiency of 37\% with perfect transmission, or a line transmission of 68\% with perfect detection efficiency. Moreover, assuming that it is possible to perform photoionization measurements in less than \SI{1}{\micro s}, the required propagation distances to close the locality loophole are of the order of 300\,m for optical setups. This gives a very good outlook for optical systems eventually performing a loophole-free Bell test. In principle, one may also consider circuit QED setups. The advantages of such setups are twofold.
First, the very fast and efficient qubit state detection, on the order of \SI{10}{ns} \cite{ansmann_violation_2009}, potentially lowers the required propagation distances to laboratory-scale distances (\SI{10}{m} or less). Second, in this system, large ratios of coupling constant to cavity decay, $g/\kappa$, can be achieved. The drawbacks with current technology are the limited efficiencies of both photodetection and homodyne detection (private communication with Steve Girvin), and the requirement to cool the propagation line down to cryogenic temperatures. The hope is that both these drawbacks, being of technological nature, can eventually be overcome in near-future experiments. Before finishing, let us compare our present results with previous proposals involving similar setups. In Ref. \cite{sangouard11} a Bell test involving the production of an entangled state between an atom and a photonic field created by atomic decay was studied (see also \cite{BC12}). It was shown that a Bell violation can be achieved with a photodetection efficiency of $39\%$ (see green curve in Fig.~\ref{comparison}). Note however that this number refers to the ideal state, assuming that all light emitted by the atom is collected. Our scheme overcomes this problem since the photonic part of the state is the transmitted mode and, moreover, requires lower efficiencies. We believe that our proposal will trigger possible implementations of loophole-free Bell tests with atom-photon interfaces, which are particularly important in quantum communication and cryptographic applications. \section*{Methods} \subsection*{Input-output relations} Here we derive the input-output relations \cite{walls_quantum_2008} for a two-sided cavity, clearly stating the approximations used and their validity. We consider a cavity mode ($a$ mode) which couples to a left mode ($b$ mode) and a right mode ($c$ mode).
We also include loss in the mirrors of the cavity as the coupling of the cavity $a$ mode to an additional $L$ mode. \\ These give the following Heisenberg equation of motion for the cavity field, \begin{align} \partial_t a(t) &= -\frac{i}{\hbar} \com{H_{\rm sys}}{a(t)} - \frac{\kappa}{2} a(t) {} \nonumber\\ & {} + \sqrt{\kappa_{\rm b}}b_{\rm in}(t) + \sqrt{\kappa_{\rm c}}c_{\rm in}(t) + \sqrt{\kappa_{\rm L}}L_{\rm in}(t), \label{aeqn:starting} \end{align} where $\kappa = \kappa_{\rm b} +\kappa_{\rm c} + \kappa_{\rm L}$ and $H_{\rm sys}$ is the system Hamiltonian (empty cavity or cavity with atom) without considering the baths, which is either \begin{align} H_{\rm empty} &= \hbar \omega_{\rm c} a^{\dag} a \quad \textrm{or,}\\ H_{\rm atom-cavity} &= \hbar \omega_{\rm c} a^{\dag} a + \hbar \omega_{\rm a} \sigma^{\dag} \sigma + \hbar g (a^{\dag} \sigma + a \sigma^{\dag}). \end{align} Equation \eqref{aeqn:starting} can also be written in the equivalent way \begin{align} \partial_t a(t) &= -\frac{i}{\hbar} \com{H_{\rm sys}}{a(t)} - \frac{k_{\rm c}}{2} a(t) {} \nonumber \\ & {}+ \sqrt{\kappa_{\rm b}}b_{\rm in}(t) - \sqrt{\kappa_{\rm c}}c_{\rm out}(t) + \sqrt{\kappa_{\rm L}}L_{\rm in}(t), \end{align} where $k_\alpha = \kappa-2\kappa_\alpha$ and we now consider the output operator for the $c$ mode, where the input and output operators are defined as, \begin{align} O_{\rm in}(t) &= \frac{-1}{\sqrt{2\pi}} \int d\omega \, O_\omega (t_0) e^{-i\omega(t-t_0)}, \\ O_{\rm out}(t) &= \inv{\sqrt{2\pi}} \int d\omega \, O_\omega (t_1) e^{-i\omega(t-t_1)}. 
\end{align} For the case of an empty cavity, these equations are just a linear system of differential equations, and can be solved simply by Fourier transforms defined as \begin{align} f(t) &= \inv{\sqrt{2\pi}} \int \dint\omega\,e^{-i\omega t}f(\omega), \\ f(\omega) &= \inv{\sqrt{2\pi}} \int \dint t \,e^{i\omega t}f(t), \end{align} to give \begin{eqnarray} \begin{pmatrix} b_{\rm in}(\omega) \\ c_{\rm in}(\omega) \\ L_{\rm in}(\omega) \end{pmatrix} = {\cal U}_{\rm s} \begin{pmatrix} b_{\rm out}(\omega)\\ c_{\rm out}(\omega) \\ L_{\rm out}(\omega) \end{pmatrix}, \label{aeqn:emptyio} \end{eqnarray} where \begin{align} {\cal U}_{\rm s} &= \begin{pmatrix} r_{\rm b}(\omega) & t_{\rm b}(\omega) & l_{\rm b}(\omega) \\ t_{\rm b}(\omega) & r_{\rm c}(\omega) & l_{\rm c}(\omega)\\ l_{\rm b}(\omega) & l_{\rm c}(\omega) & r_{\rm L}(\omega) \end{pmatrix} \label{aeqn:cavU} \\ r_\alpha(\omega) &= - \big(\frac{\frac{k_\alpha}{2}+ i (\omega-\omega_{\rm c})}{\frac{\kappa}{2}+i(\omega-\omega_{\rm c})}\big) \\ t_{\rm b}(\omega) &= \frac{\sqrt{\kappa_{\rm b}\kappa_{\rm c}}}{\frac{\kappa}{2}+i(\omega-\omega_{\rm c})} \\ l_{\rm b/c}(\omega) &= \frac{\sqrt{\kappa_{\rm L}\kappa_{\rm b/c}}}{\frac{\kappa}{2}+i(\omega-\omega_{\rm c})} \end{align} These are relations used in the main text when the atom is in the $\ket{s}$ state, since it is assumed to be decoupled from all cavity and environmental modes. However, when the atom is in the $\ket{g}$ state, it is assumed to couple to the cavity $a$ mode. In this situation, one has to include atomic spontaneous emission. This can be done by including the coupling of the $\ket{g}-\ket{e}$ transition to an additional bath, $E$ mode, different from the cavity. 
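As a sanity check on \eqref{aeqn:cavU}, the matrix ${\cal U}_{\rm s}$ must be unitary at every frequency whenever $\kappa = \kappa_{\rm b}+\kappa_{\rm c}+\kappa_{\rm L}$, so that the three baths together conserve photon number. A small numeric sketch (rates in arbitrary units):

```python
import numpy as np

def U_s(delta, kb, kc, kL):
    """Empty-cavity scattering matrix for the (b, c, L) modes at
    detuning delta = omega - omega_c (all rates in the same units)."""
    kappa = kb + kc + kL
    d = kappa / 2 + 1j * delta
    # r_alpha with k_alpha = kappa - 2*kappa_alpha, as in the text
    r = lambda ka: -((kappa - 2 * ka) / 2 + 1j * delta) / d
    t_b = np.sqrt(kb * kc) / d
    l_b = np.sqrt(kL * kb) / d
    l_c = np.sqrt(kL * kc) / d
    return np.array([[r(kb), t_b,   l_b],
                     [t_b,   r(kc), l_c],
                     [l_b,   l_c,   r(kL)]])

U = U_s(delta=0.3, kb=1.0, kc=0.7, kL=0.1)
print(np.allclose(U.conj().T @ U, np.eye(3)))  # True
```

The analogous relations with the atom coupled, ${\cal U}_{\rm g}$, follow below.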
This gives the set of equations \begin{align} \partial_t a(t) &= -\frac{i}{\hbar} \com{H_{\rm atom-cavity}}{a(t)} - \frac{\kappa}{2} a(t) {} \nonumber \\ & {} + \sqrt{\kappa_{\rm b}}b_{\rm in}(t) + \sqrt{\kappa_{\rm c}}c_{\rm in}(t) + \sqrt{\kappa_{\rm L}}L_{\rm in}(t), \\ \partial_t \sigma(t) &= -\frac{i}{\hbar} \com{H_{\rm atom-cavity}}{\sigma(t)} - \frac{\Gamma}{2} \sigma(t) - \sqrt{\Gamma}\sigma_z E_{\rm in}(t), \end{align} where $\Gamma$ describes the rate of emission into modes other than the cavity modes, and $E_{\rm in}$ is the input operator of the environment. Note that $\Gamma$ can be made small if the physical cavity mode has a large spatial overlap with the emission pattern of the $\ket{g}-\ket{e}$ transition. In the following, we will investigate the system dynamics in the low excitation regime. The reason is twofold. First, we want to avoid exciting the atom in order to prevent the decay into the environment (spontaneous emission with the decay rate $\Gamma$). Second, populating the excited state would induce more complex dynamics, producing in general a nontrivial entangled state between the atom and input, output and cavity fields. This is certainly an interesting regime to investigate in the context of quantum state engineering, but in the present paper we focus on the more intuitive picture in the spirit of Eq. {\ref{eqn:principle}}. The assumption of the atom occupying mostly the ground state can be translated as $\sigma_z \approx -\begin{picture}(8,8)\put(0,0){1}\put(4.8,0){\line(0,1){7}}\end{picture}$. With this assumption, the above set of equations can be solved analytically.
Proceeding analogously to the empty cavity case, we have \begin{equation} \begin{pmatrix} b_{\rm in}(\omega) \\ c_{\rm in}(\omega) \\ L_{\rm in}(\omega) \\ E_{\rm in}(\omega) \end{pmatrix} ={\cal U}_{\rm g} \begin{pmatrix} b_{\rm out}(\omega) \\ c_{\rm out}(\omega) \\ L_{\rm out}(\omega) \\ E_{\rm out}(\omega) \end{pmatrix}, \end{equation} where the unitary matrix \begin{align} {\cal U}_{\rm g} &= \inv{D(\omega)} \left( {\begin{pmatrix} {\cal U}_{\rm s} (\frac{\Gamma}{2} + i \delta_{\rm a})(\half{\kappa}+i\delta_{\rm c}) & ig\sqrt{\Gamma}\vec{\nu} \\ \\ ig\sqrt{\Gamma}\vec{\nu}^T & (\frac{\Gamma}{2} - i \delta_{\rm a})(\half{\kappa}+i\delta_{\rm c}) \end{pmatrix} - g^2 \begin{picture}(8,8)\put(0,0){1}\put(4.8,0){\line(0,1){7}}\end{picture}} \right), \label{eqn:2lvlio} \end{align} with $\delta_{\rm a} = \omega-\omega_{\rm ge}$, $\delta_{\rm c} = \omega-\omega_{\rm c}$, ${\cal U}_{\rm s}$ is defined in equation \eqref{aeqn:cavU}, the vector $\vec{\nu}$ reads \begin{equation} \vec{\nu} = \begin{pmatrix} \sqrt{\kappa_{\rm b}} \\ \sqrt{\kappa_{\rm c}} \\ \sqrt{\kappa_{\rm L}} \end{pmatrix} \end{equation} and \begin{equation} D(\omega) = (\frac{\Gamma}{2} + i \delta_{\rm a})(\half{\kappa}+i\delta_{\rm c})+g^2. \end{equation} Notice that this set of equations reduces to \eqref{aeqn:emptyio} for the $b$ and $c$ modes, when $g=0$. Finally, we also assume that all output operators ($b_{\rm out},c_{\rm out},L_{\rm out},E_{\rm out}$) commute with each other, which is not true in general. This approximation is required, for an input coherent field in the $b_{\rm in}$ mode to transform into a coherent reflected field in the $b_{\rm out}$ mode and coherent fields in the $c_{\rm out}$, $L_{\rm out}$ and $E_{\rm out}$ modes. Lastly, as a consistency check of our work, we note that the approximation $\sigma_z \approx -\begin{picture}(8,8)\put(0,0){1}\put(4.8,0){\line(0,1){7}}\end{picture}$ necessarily requires that the atom remains in the ground state throughout its evolution. 
Solving for $\sigma(\omega)$, one obtains \begin{equation} \sigma(\omega) \approx \frac{-ig \sqrt{\kappa_{\rm b}} b_{\rm in}(\omega)}{(\frac{\Gamma}{2} - i \delta_{\rm a})(\half{\kappa}-i\delta_{\rm c}) + g^2}, \label{aeqn:sigw} \end{equation} where we have used the fact that the $c_{\rm in}$ and $E_{\rm in}$ modes represent thermal noise and are thus taken to be negligible compared to the $b_{\rm in}$ mode. Taking the inverse Fourier transform, one obtains \begin{equation} P_{\rm e}(t) = \mean{\sigma^{\dag}\sigma(t)} = \inv{2\pi} \left| \int \dint\omega \, \frac{ig \sqrt{\kappa_{\rm b}} s_{\rm L}(\omega)}{(\half{\Gamma}-i(\omega-\omega_{\rm a}))(\half{\kappa}-i(\omega-\omega_{\rm c})) + g^2} e^{-i\omega t}\right|^2. \end{equation} We then conclude that the self-consistency of our approximation requires that $\mean{\sigma^{\dag}\sigma(t)} \ll 1$, $\forall t$, which implies that $\max_t P_{\rm e}(t) \ll 1$. Notice that this condition is a necessary but not sufficient condition for the approximation $\sigma_z \approx -\begin{picture}(8,8)\put(0,0){1}\put(4.8,0){\line(0,1){7}}\end{picture}$, since this condition itself is derived under the approximation. However, it should be noted that equation \eqref{aeqn:sigw} can be written in terms of $a(\omega)$ to obtain \begin{align} \sigma(\omega) &\approx \frac{-ig}{\half\Gamma - i (\omega-\omega_{\rm a})} a(\omega)
\\ &= \frac{-ig}{i\Delta(1-\frac{\omega-\omega_{\rm c}}{\Delta}+\frac{\Gamma}{2i\Delta})} a(\omega) \end{align} Then, for $\Delta\gg \omega-\omega_{\rm c}, \Gamma$, meaning that the laser addresses only frequencies close to the bare cavity resonance compared to the atom-cavity detuning, and that this detuning is many atomic linewidths wide, one has the condition \begin{equation} \mean{\sigma ^{\dag} \sigma(t)} \approx \pare{\frac{g}{\Delta}}^2 \mean{a^{\dag} a(t)} \ll 1, \end{equation} which is the condition of validity of the dispersive approximation (see \cite{boissonneault2009dispersive}). This means that the condition $\Delta \gg \omega-\omega_{\rm c},\Gamma$, together with the condition $\max_t P_{\rm e}(t) \ll 1$, is sufficient to justify the approximation $\sigma_z \approx -\begin{picture}(8,8)\put(0,0){1}\put(4.8,0){\line(0,1){7}}\end{picture}$. \subsection*{Derivation of visibility} The state produced after the cavity and beam splitter is of the form \begin{equation} \cos\nu\ket{s,0}\otimes \ket{\alpha_{\rm o},\alpha_{\rm b},\alpha_{\rm L},\alpha_{\rm E}}+ \sin\nu\ket{g,\tilde{\alpha}}\otimes \ket{\alpha_{\rm o}\,',\alpha_{\rm b}\,',\alpha_{\rm L}\,',\alpha_{\rm E}\,'} \end{equation} where o, b, L and E denote the other port of the beam splitter, the field reflected from the cavity, the field lost in the cavity mirrors and the spontaneously emitted field, respectively. If we measure only the atomic state and the $a$ mode, we lose the information contained in all the other modes. Tracing over the remaining modes, we obtain the state \eqref{eqn:real_state}, where the visibility $V$ is \begin{equation} V = |\braket{\alpha_{\rm o}}{\alpha_{\rm o}\,'}\braket{\alpha_{\rm b}}{\alpha_{\rm b}\,'} \braket{\alpha_{\rm L}}{\alpha_{\rm L}\,'} \braket{\alpha_{\rm E}}{\alpha_{\rm E}\,'}| \end{equation} We take only the magnitude of the inner products, since the total phase can, in principle, be compensated by suitable atomic state preparation.
This gives the expression \begin{align} V &= \exp\Big[-F \frac{|\tilde{\alpha}|^2}{2 t_{\rm BS}^2} \Big], \label{eqn:V_app} \\ F&= f_{\rm cav} + I_{s_{\rm L}} \inv{4 C} (1+f_{\rm cav}) + r_{\rm BS}^2, \\ I_{s_{\rm L}} &= \frac{\int d\omega |s_{\rm L}(\omega)\inv{D(\omega)}|^2}{\int d\omega |s_{\rm L}(\omega)\inv{D(\omega)} \inv{1+i2 (\omega-\omega_{\rm c})/\kappa}|^2}, \\ C &= \frac{g^2}{\Gamma \kappa}, \\ f_{\rm cav} &= \frac{\kappa_{\rm b} + \kappa_{\rm L}}{\kappa_{\rm c}}, \label{eqn:f_cav_app} \end{align} where, as in the main text, $t_{\rm BS}$ and $r_{\rm BS}$ are the transmittivity and reflectivity of the beam splitter used in the displacement operation, $s_{\rm L}(\omega)$ is the spectrum of the laser input field and $\kappa_i$ is the coupling rate of the cavity mode to the $i$th bath.
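To make these formulas concrete, one can reproduce the second and third rows of Table \ref{tab:parameters} from \eqref{eqn:V_app}--\eqref{eqn:f_cav_app}. The sketch below approximates the spectral overlap as $I_{s_{\rm L}} \approx 1$ and neglects the finite-measurement-time truncation (both simplifying assumptions), yet lands within a fraction of a percent of the quoted values:

```python
import math

def visibility(g, kappa, Gamma, f_cav, alpha=2.1, r_bs2=0.001, I_sL=1.0):
    """V = exp(-F |alpha|^2 / (2 t_BS^2)); rates in MHz (only ratios enter).
    I_sL ~ 1 and no pulse truncation are simplifying assumptions."""
    C = g ** 2 / (Gamma * kappa)                     # cooperativity
    F = f_cav + I_sL * (1 + f_cav) / (4 * C) + r_bs2
    t_bs2 = 1 - r_bs2
    return math.exp(-F * alpha ** 2 / (2 * t_bs2))

print(round(visibility(34, 4.1, 2.6, 14 / 100), 3))  # 0.728 (table: 72.7%)
print(round(visibility(34, 4.1, 2.6, 4 / 100), 3))   # 0.909 (table: 90.8%)
```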
\section*{Introduction} Light in vacuum is quantized as massless photons, which in equilibrium obey Bose-Einstein statistics. If the photons were massive, they could break the gauge symmetry under certain conditions and condense to the lowest energy state, sharing a single wave function. However, light as we know it is massless, so can one expect photons to form a Bose-Einstein condensate? The short answer is yes. To observe it, the group of Weitz \cite{Klaers2010a,Klaers2010b} used an optical microcavity, where the spectra of light modes have a cut-off due to a geometrical constraint. This cut-off acts as an effective mass for a two-dimensional photon. Dimensionality here refers to the motional degree of freedom of photons. Although the 2D photons now possess a mass, this is not enough: there is no BEC transition in a uniform two-dimensional system. The condition of uniformity is, however, broken by the slight curvature of the cavity walls, so the trapped light can be mapped onto a 2D field of massive nonrelativistic quasiparticles in a harmonic potential \cite{Klaers2010b} -- a system that is known to undergo the BEC transition. In the experiments \cite{Klaers2010b}, the controllable thermalization process \cite{Klaers2010a,Klaers2011} picks up a single light mode, and small photon losses are compensated by a weak external laser pumping. It was thus shown that the number of photons is conserved on average, and the researchers can keep the system close to its thermodynamic equilibrium. The quasiequilibrium BEC of photons is observed at room temperature \cite{Klaers2010b,Marelic2015}. The system, argued to be different from conventional lasers, has become of interest for various theoretical \cite{Sobyanin2012,Sobyanin2013,Kruchkov2013,Kirton2013,Leeuw2013, Kruchkov2014,Nyman2014,Leeuw2014a,Leeuw2014b,Leeuw2014c, Strinati2014,Klaers2012,Weitz2013,Sela2014,Kirton2015} and experimental studies \cite{Schmitt2014,Marelic2015,Schmitt2015}.
The growing experimental and theoretical interest in the topic requires broadening the variety of systems for which the condensation of photons could be observed. In particular, it was explicitly discussed for dimensionalities $D=2$ (see e.g. \cite{Klaers2011,Sobyanin2013,Kruchkov2014,Chiao1999}) and in different contexts for $D=3$ (see e.g. \cite{Kruchkov2013,Kuzmin1978,Zeldovich1969}), but never in one dimension. Therefore, there is a need to complete the study for photons with a one-dimensional degree of freedom. The theoretical methods applied to the system vary widely. Some authors use a phenomenological nonlinear Schr\"{o}dinger equation (Gross-Pitaevskii equation) in different forms \cite{Klaers2010b,Nyman2014,Strinati2014}. The non-interacting $T\ne 0$ theory is also in use in different forms \cite{Klaers2010b,Sobyanin2013,Kruchkov2013,Kruchkov2014}. The fully off-equilibrium condensate is studied either with an effective kinetic equation with a Jaynes-Cummings interaction \cite{Kirton2013,Kirton2015} in the approximation with real-time propagators, or with the off-equilibrium $T \ne 0$ Green's function formalism (Schwinger-Keldysh formalism, complex-time propagators, Matsubara's frequencies, etc.) \cite{Leeuw2013,Leeuw2014a,Leeuw2014b,Leeuw2014c}. In my opinion, the Schwinger-Keldysh formalism is the most general and the most powerful approach here. This paper introduces the one-dimensional quasiequilibrium condensation of photons in a microtube. In my opinion, Matsubara's Green's function formalism is appropriate here for the near-equilibrium system. Of course, the Schwinger-Keldysh formalism may be used here, but only once the near-equilibrium properties of the system are well understood. Matsubara's formalism describes finite-temperature close-to-equilibrium systems, and it is valid for $T \ge T_C$. The main advantage of this approach for the present study is that one can calculate the critical parameters of the interacting system.
In this paper, I write down the Hamiltonian, which takes into account one-photon and two-photon processes of interaction with atoms, and treat them perturbatively. As a result, I can describe the influence of indirect photon-photon interactions on the critical parameters. There are some limitations of the model that I use. First, I do not study the thermalization process, nor the time evolution of the system in general, restricting myself to the steady state only. Second, I restrict myself to the (first two) leading corrections to the self-energy, which in terms of direct photon-photon interactions, if they were present, would correspond to the Hartree-Fock mean-field theory. The system under study is a bit more subtle, and to obtain these effectively mean-field contributions one needs to go to the fourth order in perturbation theory. For the same reasons, optical collisions, i.e., two-atom mechanical collisions leading to the creation of photons, are not taken into account even though the model in use can handle them. Other conditions of validity, which shape the model, are discussed in the main text of the paper as they appear. This paper is organized into three sections followed by appendices. In the first section, the effective mass of light modes is introduced in tubes with varying geometry; then, I discuss the conditions for condensation in 1D and estimate the critical parameters. The second section deals with the interacting theory at $T=T_C$, where the effective Hamiltonian of photon-photon interaction is derived; this section is the heart of the paper and is organized into subsections for easier reading: it describes the perturbation theory both for the uniform case and for the trapped case. The summarizing section is organized more like an outline and discussion, yet a deeper study of the problem is still needed.
\section{ \label{Section1} General idea: light trapping and condensation} To condense photons in a cavity means to force them into the lowest-energy thermodynamical state of the system \cite{Klaers2010b,Klaers2011,Sobyanin2013,Kruchkov2014}. In this section I skip the details of the thermalization process because they are studied sufficiently well \cite{Klaers2010a,Klaers2011,Kirton2013,Leeuw2013}, restricting myself to mentioning three important ideas. First, the losses of photons are compensated by a weak external pumping, so the number of photons is conserved on average. Second, the cavity gives a discrete set of light modes with different cut-offs. Third, it is possible to thermalize one of the modes, hence ensuring a single cut-off frequency for all the thermalized photons. As a result, by supporting only one of the modes, we ascribe an effective mass to a photon, as described in the first subsection of this section. The second challenge for condensing photons in 1D is to choose the shape of the waveguide (microtube) where the condensation is possible. This choice is made in the context of the non-interacting model in the second subsection. At the end of this section, we discuss the condition for condensation and estimate the critical number of photons for a set of parameters similar to those from Refs.~\cite{Klaers2010a,Klaers2010b,Klaers2011,Marelic2015}. \subsection{ \label{Subsection1.1} Light trapping and effective mass of photons} For simplicity, we consider here waveguides made of a microtube with axial symmetry, as sketched in Fig.~\ref{scheme}. The shape of the tube in the general case is given by a rather smooth function $\rho(z)$ (see Fig.~\ref{scheme}).
Due to the cylindrical symmetry, a photon's energy $\hbar \omega$ is described by two quantum numbers $k_z$ and $k_\rho$, \begin{equation}\begin{split}\begin{gathered} \label{spectrum} \hbar \omega ({\vec k}) = \hbar \tilde c |{\vec k}| = \hbar \tilde c \left( k_z^2 + k_\rho^2 \right) ^{1/2}, \end{gathered}\end{split}\end{equation} \noindent where $\omega$ is the frequency of a photon with the momentum ${{\vec k}}$ decomposed for convenience into longitudinal $k_z$ and polar $k_\rho$ components. In the microtube the polar wave number $k_\rho$ is strongly discrete, while the longitudinal component $k_z$ can be taken continuous because of the strong inequality $R_0 \ll l$. In the general case, the set of $k_\rho$'s follows from Maxwell's equations in the microtube shaped as $\rho (z)$ with the boundary conditions on its walls. For the mirror walls one gets \begin{equation}\begin{split}\begin{gathered} \label{k rho} {k_\rho (z) } = \frac{{{q_{mn}}}}{{\rho \left( z \right)}}, \end{gathered}\end{split}\end{equation} \noindent where $q_{mn}$ is the $n$-th root of the Bessel function of the $m$-th order, $J_{m} \left( q_{mn} \right) = 0$ (see e.g. \cite{Morse Feshbach}). The formula \eqref{k rho} was obtained in the approximation of a tube with a slightly changing cross-section radius, \begin{figure}[t] \includegraphics[width=1.0 \columnwidth]{scheme.eps} \caption{\label{scheme} A scheme of a microtube waveguide for trapping photons. The shape of the tube is determined by the relative deviation $\varphi(z)$ of the inner radius. To ease visual presentation the function $\varphi(z)$ is taken linear.} \end{figure} \begin{equation}\begin{split}\begin{gathered} \label{form} \rho \left( z \right) = {R_0}\left[ {1 - \varphi ( z ) } \right], \ \ \ \ \ \varphi \left( z \right) \ll 1, \end{gathered}\end{split}\end{equation} \noindent where $R_0$ is the radius of the tube at $z=0$, see Fig.~\ref{scheme}.
The dimensionless quantity $\varphi \left( z \right)$ describes the small relative deviation of the tube's radius. Due to the strong requirement $R_{0} \ll l$, where $l$ is the half-length of the microtube (see the scheme in Fig.~\ref{scheme}), one expects ${\overline{k_z}} \ll k_0$, where ${k_0} = {q_{mn}}/{R_0}$ is the minimal polar wave number. As a consequence, the expression \eqref{spectrum} can be asymptotically expanded, \begin{equation}\begin{split}\begin{gathered} \label{expansion} \hbar \omega \simeq \hbar \tilde c{k_0} \left( 1 + \frac{{k_z^2}}{{2k_0^2}} + \varphi ( z ) \right). \end{gathered}\end{split}\end{equation} \noindent This expression can be rewritten in a more intuitive way, \begin{equation}\begin{split}\begin{gathered} \label{energy} \hbar \omega \simeq \hbar \omega_{0} + \frac{{{\hbar ^2}k_z^2}}{{2{m^*}}} + V(z), \end{gathered}\end{split}\end{equation} \noindent which resembles the energy of a particle with mass $m^{*}$ and a one-dimensional degree of freedom $k_z$ in an external potential $V(z)$. In our case, the effective mass of a photon, as follows from comparison between Eqs. \eqref{energy} and \eqref{expansion}, is defined as \begin{equation}\begin{split}\begin{gathered} \label{mass} m^{*} = \frac{\hbar \, q_{mn}}{ \tilde{c} \, R_{0} } , \end{gathered}\end{split}\end{equation} \noindent and is related to the cut-off frequency $\omega_{0}$ as $\hbar \omega_{0} =m^{*} \tilde{c}^{2}$, which is a measure of the minimum energy of photons. The trapping (pseudo)potential, caused by the geometry of the reflective inner surface, is \begin{equation}\begin{split}\begin{gathered} \label{kappa} V(z) = \frac{q_{mn}\hbar \, \tilde{c} }{R_{0} } \varphi(z). \end{gathered}\end{split}\end{equation} \noindent Thus, both the effective mass of a photon inside the tube and the effective potential originate from the specific geometry under consideration or, strictly speaking, from the $k_\rho \left( z \right)$ component of a photon's wave vector. 
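To give a feeling for the scales involved, the sketch below evaluates Eq.~\eqref{mass} numerically; the values of $q_{mn}$, $\tilde c$ and $R_0$ are illustrative choices of the same order as those used in the estimates at the end of this section, not values fixed by the derivation.

```python
import math

hbar = 1.054571817e-34  # J*s
q01 = 2.405             # first zero of the Bessel function J0 (assumed mode)
c_med = 2.2e8           # m/s, light speed in the medium (illustrative value)
R0 = q01 * 1.0e-6       # m, radius chosen so the cut-off matches lambda_bar ~ 1 um

# Effective photon mass, Eq. (mass): m* = hbar * q_mn / (c~ * R0)
m_eff = hbar * q01 / (c_med * R0)

# Cut-off frequency from hbar * omega_0 = m* c~^2
omega0 = m_eff * c_med ** 2 / hbar

print(f"m* = {m_eff:.2e} kg  (~{m_eff / 9.109e-31:.1e} electron masses)")
print(f"omega_0 = {omega0:.2e} rad/s")
```

For these parameters the effective mass comes out many orders of magnitude lighter than an electron, which is why the condensation threshold remains accessible at room temperature.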
Summarizing the main idea of the subsection, one can say that the system of photons trapped inside the microtube can be considered as an ensemble of quasiparticles with the mass $m^*$ and the one-dimensional degree of freedom $\kappa = k_z$ placed in the potential $V(z)$. The form of the potential $V(z)$ is determined by the shape of the microtube waveguide $\varphi(z)$. Therefore, changing the shape of this waveguide, one can change the trapping potential. \subsection {\label{Subsection1.2} Non-interacting theory and critical number of photons} The non-interacting model is suitable for a first estimate. In this model, the photons in the microtube are considered as noninteracting particles with the one-dimensional degree of freedom. The total number of photons is given by integrating the Bose-Einstein distribution over the configurational space. The condensation condition can be expressed as follows: the chemical potential of photons $\mu_0$ at the critical point reaches the minimum energy of photons in the system, i.e. $\mu_0 = \hbar \omega_0$ (see \cite{Klaers2010b,Kruchkov2014,Kruchkov2013,Sobyanin2013}). For the ideal photon gas with the one-dimensional degree of freedom, the critical number of particles for Bose-Einstein condensation can be estimated in the Wigner approximation, \begin{equation}\begin{split}\begin{gathered} \label{Ncrit} N_{0} = \int \frac{ d{k_z} d{z}} {2\pi} \text{g}^* \left\{ {\exp\left[ \frac{\hbar ^2 k_z^2/2 m^* +V(z)} { T } \right] - 1} \right\}^{-1}, \end{gathered}\end{split}\end{equation} \noindent where $\text{g}^*$ takes into account the possible degeneracy in photon energy (see e.g. \cite{Kruchkov2013,Kruchkov2014}). The integral in \eqref{Ncrit} may or may not converge; this is related to the Bogoliubov theorem, which states, in particular, that there is no BEC below three dimensions in a uniform system. 
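To see explicitly where the restriction on the trapping potential comes from, note that for $V(z) \propto |z|^{\alpha}$ the infrared behavior of the integrand in \eqref{Ncrit}, written in dimensionless variables $\xi \propto k_z$ and $\eta \propto |z|$, is $\left( \xi^2 + \eta^{\alpha} \right)^{-1}$. Rescaling $\xi = \eta^{\alpha/2} u$ yields \begin{equation}\begin{split}\begin{gathered} \int_{0} d\eta \int_{0}^{\infty} du \, \frac{\eta^{\alpha/2}}{\eta^{\alpha} \left( u^{2} + 1 \right)} = \int_{0} d\eta \, \eta^{-\alpha/2} \int_{0}^{\infty} \frac{du}{u^{2}+1}, \end{gathered}\end{split}\end{equation} \noindent which converges at the lower limit $\eta \to 0$ precisely for $\alpha < 2$, while confinement at large $|z|$ requires a growing potential, $\alpha > 0$.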
However, the presence of the external potential introduces a nonhomogeneity, and the integral in \eqref{Ncrit} is convergent for certain types of potentials; for example, in the case of single-minimum symmetric potentials, the singularity is integrable provided the dimensionless potential $\varphi(z)=V(z)/\hbar \omega_0$ grows more slowly than a parabola, \begin{equation}\begin{split}\begin{gathered} \label{condition} \varphi \left( z \right) = {\left| z / L_0 \right|^\alpha }, \ \ \ \alpha \in \left( {0,2} \right), \end{gathered}\end{split}\end{equation} \noindent where $L_0$ is a parameter in units of length. Even though one can imagine more sophisticated potentials (for example, multiple-minimum potentials), for simplicity, we restrict ourselves to the case of the single-minimum potential of the form \eqref{condition}. To calculate the integral \eqref{Ncrit}, we introduce the new variables $\xi$, $\eta$, such that \begin{equation}\begin{split}\begin{gathered} \label{new variables} {\xi ^2} = \frac{{{\hbar ^2}k_z^2}}{{2{m^*}T}}, \ \ \ \ \ {\eta ^\alpha } = \frac{\hbar \omega_0 }{T}{\left| {\frac{z}{{{L_0}}}} \right|^\alpha }, \end{gathered}\end{split}\end{equation} \noindent and after straightforward algebra obtain the critical number of photons needed to observe the phase transition at temperature $T$, \begin{equation}\begin{split}\begin{gathered} \label{N_C} N_0 (T; \alpha) = \text{g}^* \frac{ 2 \sqrt{2} }{\pi } \frac{ L_0 \omega_0}{\tilde c} \left( \frac{T}{\hbar \omega_0} \right)^{\frac{1}{2}+\frac{1}{\alpha}} I\left( \alpha \right), \end{gathered}\end{split}\end{equation} \noindent where we have introduced the dimensionless normalization integral, \begin{equation}\begin{split}\begin{gathered} \label{normalization integral} I\left( \alpha \right) = \int\limits_0^\infty {\int\limits_0^\infty {\frac{{d\xi d\eta }}{{\exp \left( {{\xi ^2} + {\eta ^\alpha }} \right) - 1}}} }. 
\end{gathered}\end{split}\end{equation} \begin{figure} \includegraphics[width=0.9\columnwidth]{normintegral.eps} \caption{\label{normintegral} Normalization integral $I(\alpha)$ as a function of the trapping parameter $\alpha$, $V(z) \propto |z|^\alpha$. The local minimum is situated at $\alpha_{\text{min}} = 0.71$. The two asymptotes (not shown) are $I(\alpha) \to + \infty$ as $\alpha \to 0$ and $\alpha \to 2$, limiting the region of desirable trapping parameters to $\alpha \in (0,2)$, where the condensation of an ideal gas of photons is allowed in one dimension. } \end{figure} \noindent The normalization integral remains finite as long as the trapping parameter is $\alpha \in \left( 0, 2 \right)$. The dependence $ I \left( \alpha \right)$ is shown in Fig.~\ref{normintegral}. Notably, there is a minimum at the trapping parameter $\alpha_{\text{min}} = 0.71$, $ I \left( \alpha_{\text{min}} \right) = 1.9$. However, a more practical trapping parameter is $\alpha =1$, i.e. $\varphi (z) \propto \left| z \right| $, where the value of the normalization integral (as well as other quantities of the non-interacting and interacting theories) can be calculated analytically, $I(1) = \Gamma(3/2) \zeta(3/2)$. In this case, the expression \eqref{N_C} for the critical number of photons simplifies, \begin{equation}\begin{split}\begin{gathered} \label{N crit linear 1} N_0 = \sqrt{\frac{2}{\pi}} \, \text{g}^{*} \zeta \left(3/2 \right) \frac{L_0 \omega_0}{\tilde c} \left(\frac{T}{\hbar \omega_0} \right)^{3/2}. 
\end{gathered}\end{split}\end{equation} \noindent Taking into account the explicit expression for $\omega_0$, and $\zeta (3/2) \approx 2.6 $, $\text{g}^{*} \approx 3$, the formula \eqref{N crit linear 1} can be simplified and given in terms of direct experimental parameters, \begin{equation}\begin{split}\begin{gathered} \label{N crit linear 2} N_0 \approx \left( \frac{T}{\hbar \tilde c k_{\Lambda}} \right)^{3/2}, \ \ \ \ \ k_{\Lambda} = \left( \frac{q_{mn}}{4 \pi^2 R_0 L_0^2 } \right)^{1/3} . \end{gathered}\end{split}\end{equation} \noindent The formula \eqref{N crit linear 2} defines the critical number of photons in a tube with a V-like trapping profile, $\varphi (z ) \propto \left| z \right|$. Such a biconical waveguide can indeed be manufactured \cite{Vogl2011}. It is remarkable that the $T^{3/2}$ dependence, as in formula \eqref{N crit linear 2} for a 1D gas of particles in a V-like potential, also holds for a uniform 3D gas of bosons. This similarity arises from the structure of the Wigner integral \eqref{Ncrit}. An estimate for the biconical waveguide gives the following. For definiteness, we take the lowest Bessel root $q_{01} \approx 2.4$. The radius of the waveguide should be chosen such that the cut-off frequency is close to the atomic transition frequency, giving $R_0 \approx q_{01} \lambdabar_{\text{at}}$, where $\lambdabar_{\text{at}}$ is the atomic transition wavelength divided by $2 \pi$. Taking now, for the estimate, $\lambdabar_{\text{at}} \sim 10^{-6} \, \text{m} $, $\tilde c \approx 2.2 \cdot 10^8 \, \text{m/s}$, $L_0 \sim 10^{-2} \, \text{m}$, and the room temperature $T=300 \, \text{K}$, the threshold number of photons to trigger condensation is $N_0 \sim 10^4$, which is even smaller than the one reported for the 2D condensation, $N_0 \sim 10^5$. Thus, one may conclude that even at room temperature the one-dimensional condensation of photons is possible. 
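The numbers above are easy to reproduce. Expanding $\left[ e^{\xi^2+\eta^\alpha} - 1 \right]^{-1} = \sum_{j \ge 1} e^{-j (\xi^2 + \eta^\alpha)}$ in \eqref{normalization integral} and integrating term by term gives the closed form $I(\alpha) = \tfrac{\sqrt{\pi}}{2} \, \Gamma(1+1/\alpha) \, \zeta(1/2+1/\alpha)$, which reduces to $\Gamma(3/2)\zeta(3/2)$ at $\alpha=1$. The sketch below (all numerical inputs are the illustrative values quoted in the text) checks the minimum of $I(\alpha)$ and the room-temperature estimate of $N_0$:

```python
import math

def zeta(s, N=200):
    """Riemann zeta for s > 1: truncated sum plus integral tail correction."""
    head = sum(n ** -s for n in range(1, N + 1))
    return head + N ** (1 - s) / (s - 1) - 0.5 * N ** -s + s * N ** (-s - 1) / 12

def norm_integral(alpha):
    """Closed form of I(alpha) = (sqrt(pi)/2) Gamma(1 + 1/alpha) zeta(1/2 + 1/alpha),
    obtained by expanding 1/(e^x - 1) in a geometric series and integrating."""
    return 0.5 * math.sqrt(math.pi) * math.gamma(1 + 1 / alpha) * zeta(0.5 + 1 / alpha)

print(f"I(1) = {norm_integral(1.0):.4f}")   # Gamma(3/2)*zeta(3/2), about 2.32

# Minimum of I(alpha) on (0, 2): the text quotes alpha_min ~ 0.71, I ~ 1.9
alphas = [0.3 + 0.001 * i for i in range(1500)]
a_min = min(alphas, key=norm_integral)
print(f"alpha_min = {a_min:.2f}, I(alpha_min) = {norm_integral(a_min):.2f}")

# Room-temperature estimate of N_0 for the V-shaped trap, Eq. (N crit linear 2)
hbar, kB = 1.054571817e-34, 1.380649e-23           # SI units
q01, c_med, L0, T = 2.405, 2.2e8, 1.0e-2, 300.0    # parameters from the text
R0 = q01 * 1.0e-6                                  # cut-off tuned to a 1-um transition
k_lam = (q01 / (4 * math.pi ** 2 * R0 * L0 ** 2)) ** (1 / 3)
N0 = (kB * T / (hbar * c_med * k_lam)) ** 1.5
print(f"N0 ~ {N0:.0e}")                            # of order 1e4 photons
```

The threshold comes out a few thousand photons, consistent with the $N_0 \sim 10^4$ quoted above.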
\section{\label{Section2} Interacting theory} As we have seen in the previous section, the non-interacting model works for a range of trapping potentials and predicts a BEC transition of photons in one dimension. The non-interacting model would be exact for photons in a vacuum, where their scattering cross-section is negligible. However, the photons in the system under study do interact with each other, albeit indirectly. These interactions arise from the multiple acts of scattering, absorption and emission of photons. One can classify all these processes by the number of photons involved in a single act, and then construct a hierarchy of irreducible acts (events that cannot be represented as a product of two simpler acts). This hierarchy defines the form of an effective interacting Hamiltonian, which is then treated perturbatively. To build up the perturbation theory, first I write down the Hamiltonian of the irreducible interactions, which is done in the first subsection. In the second subsection, the renormalized Green's functions of photons are derived for the uniform (non-trapped) case. The effect of the trapping potential is considered in the third subsection. Finally, the last subsection gives the contributions of all the one-photon and two-photon processes. This section is rich in physics and is accordingly longer. \subsection{ \label{Subsection2.1} Interacting Hamiltonian and observables} The interacting Hamiltonian ${\mathcal H}$ should include processes of absorption, emission and scattering of photons on atoms. It can be naturally written in a second-quantized form. For this, we introduce the operators of creation $\phi^{\dag}_{\kappa_{\vec k}}$ and annihilation $\phi^{}_{\kappa_{\vec k}}$ of a photon as a massive quasiparticle with the one-dimensional degree of freedom $\kappa_{\vec k} = ({{\vec k}^2-k_{0}^2})^{1/2} \equiv \kappa$. 
In the cylindrical microtube, as we have already shown, these quasiparticles have the quadratic dispersion law $\hbar \omega_\kappa = \hbar \omega_0 + \hbar^2 \kappa^2/2m^*$ and are placed in the field of the trapping potential. Consequently, the second-quantized Hamiltonian is given in a general form as \begin{equation}\begin{split}\begin{gathered} {\mathcal H} = \sum_{{\vec k}} \hbar \omega_{\kappa_{\vec k}} \phi^{\dag}_{\kappa_{\vec k}} \phi^{}_{\kappa_{\vec k}} +\sum_{{\vec k},{\vec q}} V_{{\vec q}} \, \phi^{\dag}_{\kappa_{{\vec k}+{\vec q}}} \phi^{}_{\kappa_{\vec k}}+{\mathcal H}_{I}, \end{gathered}\end{split}\end{equation} \noindent where $V_{{\vec q}} $ is the Fourier transform of the trapping potential $V(z)$ and ${\mathcal H}_{I}$ reflects the photon-atom interactions. Here are some examples of the elementary interaction acts: a photon can be absorbed by an atom in the ground state; a photon can be emitted by an atom in an excited state; a photon can be scattered by an atom (or, in second-quantized language, destroyed and then created again). To describe these events quantum-mechanically, an adequate atomic model is the two-level model, where an atom can be in two states: the ground state $\left| {E_{\sigma_1}({\vec p})} \right\rangle$ and the excited state $\left|{E_{\sigma_2}({\vec p}) }\right\rangle$. The validity of this model for the present study rests mainly on two reasons: first, the cut-off of photons is set close to the chosen atomic transition, $\hbar \omega_0 \approx \hbar \omega_{\text{at}}$, and the other transitions are energetically separated; second, the number of photons is small in comparison to the number of atoms, $N_\phi \ll N_{\text{at}}$. As a consequence, the probability of exciting the higher states is strongly suppressed and can be neglected in the main approximation. 
\begin{figure*} \includegraphics[width=0.98 \textwidth]{diagrams.eps} \caption{\label{diagrams} Hierarchy of the interaction processes: a)-c) one-photon processes; d)-i) two-photon processes. Notation: single line - ground-state atom; double line - excited-state atom; curly line - photon. Time flows from bottom to top of each diagram.} \end{figure*} We introduce the operators of creation and annihilation of atoms in the ground state $a^{\dag}_{{\vec p}}$, $a^{}_{{\vec p}}$ and in the excited state $\tilde a^{\dag}_{{\vec p}}$, $\tilde a^{}_{{\vec p}}$, where ${\vec p}$'s label the atomic momenta. The photon-atom interactions are now described as all the possible combinations of $\phi$'s, $a$'s and $\tilde a$'s (times the complex-valued coupling vertices), and the number of these combinations is, in principle, infinite. Fortunately, one can build up a hierarchy of irreducible processes based on the number of photons involved in a process. ``Irreducible'' stands here for a process which cannot be decomposed into two (or more) simpler processes. The important one-photon and two-photon processes are sketched in Fig.~\ref{diagrams}. For example, the left diagram in subfigure (a) of Fig.~\ref{diagrams} shows the simplest one-photon process: a ground-state atom absorbs a photon and goes to the excited state. In the conjugate process, shown in the right diagram of subfigure (a), the excited atom emits a photon and goes to the ground state. 
Hence, the interacting Hamiltonian can be expressed as \begin{equation}\begin{split}\begin{gathered} \label{hierarchy} {\mathcal H}_{I} = {\mathcal H}_{I}^{11} + {\mathcal H}_{I}^{12} + {\mathcal H}_{I}^{13} + {\mathcal H}_{I}^{21} + {\mathcal H}_{I}^{22} + {\mathcal H}_{I}^{23} +\dots , \end{gathered}\end{split}\end{equation} \noindent where the one-photon processes are described by \begin{equation}\begin{split}\begin{gathered} \label{one-photon} {\mathcal H}_{I}^{11} = \frac{1}{\sqrt{N_{\text{at}}}} \sum_{{\vec p}, {\vec k}} \Gamma^{11}_{{\vec k}} \ \tilde a^{\dag} _{{\vec p} + {\vec k}} \, a^{}_{{\vec p}} \, \phi^{}_{\kappa_{\vec k}} +H.c. , \\ {\cal H}_{I}^{12}= \frac{1}{\sqrt{N_{\text{at}}}} \sum_{{\vec p}, {\vec k}} \Gamma^{12}_{{\vec k}} \ a_{{\vec p} + {\vec k}}^{\dag} \, a^{}_{{\vec p}} \, \phi^{}_{\kappa_{\vec k}} + H.c. , \\ {\cal H}_{I}^{13}= \frac{1}{\sqrt{N_{\text{at}}}} \sum_{{\vec p}, {\vec k}} \Gamma^{13}_{{\vec k}} \ \tilde a_{{\vec p} + {\vec k}}^{\dag} \, \tilde a^{}_{{\vec p}} \, \phi^{}_{\kappa_{\vec k}} + H.c. 
, \end{gathered}\end{split}\end{equation} \noindent the two-photon processes are described by \begin{equation}\begin{split}\begin{gathered} \label{two-photon} {\mathcal H}_{I}^{21} = \frac{1}{\sqrt{N_{\text{at}}}} \sum_{{\vec p}, {\vec k},{\vec q}} \Gamma^{21}_{{\vec k} {\vec q}} \ \tilde a^{\dag} _{{\vec p} + {\vec k} - {\vec q}} \, \phi^{\dag}_{\kappa_{\vec q}} \, a^{}_{{\vec p}} \, \phi^{}_{\kappa_{\vec k}} + H.c., \\ {\mathcal H}_{I}^{22} = \frac{1}{\sqrt{N_{\text{at}}}} \sum_{{\vec p}, {\vec k},{\vec q}} \Gamma^{22}_{{\vec k} {\vec q}} \ a^{\dag} _{{\vec p} + {\vec k}-{\vec q}} \, \phi^{\dag}_{\kappa_{\vec q}} \, a^{}_{{\vec p}} \, \phi^{}_{\kappa_{\vec k}} + H.c., \\ {\mathcal H}_{I}^{23} = \frac{1}{\sqrt{N_{\text{at}}}} \sum_{{\vec p}, {\vec k},{\vec q}} \Gamma^{23}_{{\vec k} {\vec q}} \ \tilde a^{\dag} _{{\vec p} + {\vec k}-{\vec q}} \, \phi^{\dag}_{\kappa_{\vec q}} \, \tilde a^{}_{{\vec p}} \, \phi^{}_{\kappa_{\vec k}} + H.c., \\ {\mathcal H}_{I}^{24} = \frac{1}{\sqrt{N_{\text{at}}}} \sum_{{\vec p}, {\vec k},{\vec q}} \Gamma^{24}_{{\vec k} {\vec q}} \ \tilde a^{\dag} _{{\vec p} + {\vec k} + {\vec q}} \, \phi^{}_{\kappa_{\vec q}} \, a^{}_{{\vec p}} \, \phi^{}_{\kappa_{\vec k}} + H.c., \\ {\mathcal H}_{I}^{25} = \frac{1}{\sqrt{N_{\text{at}}}} \sum_{{\vec p}, {\vec k},{\vec q}} \Gamma^{25}_{{\vec k} {\vec q}} \ a^{\dag} _{{\vec p} + {\vec k}+{\vec q}} \, \phi^{}_{\kappa_{\vec q}} \, a^{}_{{\vec p}} \, \phi^{}_{\kappa_{\vec k}} + H.c., \\ {\mathcal H}_{I}^{26} = \frac{1}{\sqrt{N_{\text{at}}}} \sum_{{\vec p}, {\vec k},{\vec q}} \Gamma^{26}_{{\vec k} {\vec q}} \ \tilde a^{\dag} _{{\vec p} + {\vec k} + {\vec q}} \, \phi^{}_{\kappa_{\vec q}} \, \tilde a^{}_{{\vec p}} \, \phi^{}_{\kappa_{\vec k}} + H.c., \end{gathered}\end{split}\end{equation} \noindent and so on. 
For brevity, I set $\hbar=1$ in index labels; for example, $\tilde a^{\dag} _{{\vec p} + {\vec k}} \equiv \tilde a^{\dag} _{{\vec p} + \hbar {\vec k}}$ reads as an excitation of an atom with the momentum ${\vec p} +\hbar {\vec k} $. In general, I tend to keep ${\vec p}$ and ${\vec p}'$ for atomic momenta and ${\vec k}$ and ${\vec q}$ for photon wave vectors, so they are easy to distinguish. The coupling parameters $\Gamma_{{\vec k}}$ should be also read as $\Gamma_{{\vec k}} = \Gamma( \omega_{{\vec k}})$ due to their scalar nature. In the present paper, we neglect the contributions from optical collisions, i.e. processes of the form $a^{\dag}_{{\vec p}_1} \, a^{\dag}_{{\vec p}_2} \, \phi^{\dag}_{\kappa_{\vec q}} \, a^{}_{{\vec p}'_1} \, a^{}_{{\vec p}'_2}$ (with ${\vec q} = {\vec p}_1+{\vec p}_2 - {\vec p}'_1 - {\vec p}'_2$) and others, even though these processes can be taken into account in this model by writing down their one-photon and two-photon Hamiltonians in the second-quantized form. For simplicity, we keep $\hbar = 1$ until the end of the section, where it is restored. We define the Matsubara Green's function of a photon as \begin{equation}\begin{split}\begin{gathered} {\mathcal G} ( \kappa, \tau, \tau_0) = - \left \langle \, \text{T}_{\tau} \, \phi_{\kappa} (\tau) \, \phi^{\dag}_{\kappa} (\tau_0) \right \rangle_{\text{th}}, \end{gathered}\end{split}\end{equation} \noindent where all the operators are in the (imaginary-time) Heisenberg representation, and $\text{T}_{\tau} $ stands for Matsubara time ordering. 
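As a consistency check of this definition, the free propagator of a photon mode with energy $\omega_{\kappa}$ (at $\mu=0$) is ${\mathcal G}_0(\kappa,\tau) = -(1+f_0)\, e^{-\omega_\kappa \tau}$ for $\tau > 0$ and $-f_0 \, e^{-\omega_\kappa \tau}$ for $\tau \le 0$, with $f_0$ the Bose-Einstein occupation; its transform over $\tau \in (0,\beta)$ must reproduce the free-particle result $(i\omega_n - \omega_\kappa)^{-1}$ at the bosonic frequencies $\omega_n = 2\pi n/\beta$. A small numerical sketch with arbitrary test values:

```python
import cmath, math

beta, omega = 1.0, 0.9                  # arbitrary test values, hbar = 1
f0 = 1.0 / (math.exp(beta * omega) - 1.0)

def G0(tau):
    """Free Matsubara propagator of a boson mode of energy omega."""
    if tau > 0:
        return -(1.0 + f0) * math.exp(-omega * tau)
    return -f0 * math.exp(-omega * tau)

def transform(n, steps=50000):
    """Midpoint-rule transform of G0 over tau in (0, beta) at frequency w_n."""
    wn = 2.0 * math.pi * n / beta
    d = beta / steps
    return sum(cmath.exp(1j * wn * (j + 0.5) * d) * G0((j + 0.5) * d) * d
               for j in range(steps))

errors = []
for n in (0, 1, 3):
    wn = 2.0 * math.pi * n / beta
    errors.append(abs(transform(n) - 1.0 / (1j * wn - omega)))
print(errors)   # all tiny: the transform matches 1/(i w_n - omega)
```

The $\tau \to 0^{-}$ limit of `G0` returns $-f_0$, anticipating the occupation-number relation used below.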
We now introduce the Fourier-transformed Green's functions, \begin{equation}\begin{split}\begin{gathered} \label{Fourier transf} {\mathcal G}(\kappa, i \omega_n) = \int \limits_{0}^{\beta} d\tau e^{i \omega_n \tau} {\mathcal G} (\kappa, \tau) , \end{gathered}\end{split}\end{equation} \noindent where $i \omega_n$ are Matsubara frequencies, which are discrete, $\omega_n = 2 \pi n/\beta$ for bosons, with $\beta=1/T$ the inverse temperature as usual. We also introduce the atomic Green's functions as \begin{equation}\begin{split}\begin{gathered} {\mathcal G}_{\text{at}} ( {\vec p}, \tau, \tau_0) = - \left \langle \, \text{T}_{\tau} \, a_{{\vec p}} (\tau) \, a^{\dag}_{{\vec p}} (\tau_0) \right \rangle _{\text{th}}, \\ \tilde {\mathcal G}_{\text{at}} ( {\vec p}, \tau, \tau_0) = - \left \langle \, \text{T}_{\tau} \, \tilde a_{{\vec p}} (\tau) \, \tilde a^{\dag}_{{\vec p}} (\tau_0) \right \rangle_{\text{th}}. \end{gathered}\end{split}\end{equation} \noindent Some properties of the atomic Green's functions are given in Appendix A. Matsubara Green's functions link observables at finite temperatures with the equal-time response of quantum operators. For instance, the occupation number of photons with one-dimensional degree of freedom $\kappa$ is given by \begin{equation}\begin{split}\begin{gathered} \label{occ number photons} f_{\kappa} = \langle \phi^{\dag}_{\kappa} \phi_{\kappa} \rangle = - \lim_{\tau \to 0^{-} } {\mathcal G} (\kappa,\tau, 0), \end{gathered}\end{split}\end{equation} \noindent and similar expressions hold for the occupation numbers of atoms in the three-dimensional reciprocal space ${\vec p}$, \begin{equation}\begin{split}\begin{gathered} n_{{\vec p}} = \langle a^{\dag}_{{\vec p}} a^{}_{{\vec p}} \rangle = - \lim_{\tau \to 0^{-} } {\mathcal G}_{\text{at}} ({\vec p},\tau, 0), \\ \tilde n_{{\vec p}} = \langle \tilde a^{\dag}_{{\vec p}} \tilde a^{}_{{\vec p}} \rangle = - \lim_{\tau \to 0^{-} } \tilde {\mathcal G}_{\text{at}} ({\vec p},\tau, 0). 
\end{gathered}\end{split}\end{equation} \noindent One may notice that the photon occupation number $f_{\kappa}$ and the atomic occupation numbers $n_{{\vec p}}$, $\tilde n_{{\vec p}}$ are denoted by different letters. This is intentional: first, it distinguishes between the two without additional indices; second, the two quantities live in different configuration spaces and represent different physical notions. \subsection{Perturbation theory in a uniform medium} In this subsection, we derive the renormalized photon propagator and the corresponding self-energies in the absence of the trapping potential. In the Matsubara formalism, the perturbed Green's function is given by a series expansion \begin{equation}\begin{split}\begin{gathered} \label{perturbation series} {\mathcal G}(\kappa, \tau, \tau_0 ) = - \sum \limits_{n=0}^{\infty} \frac{(-1)^n}{n!} \int \limits_{0}^{\beta} d\tau_1 ... \int \limits_{0}^{\beta} d\tau_n \ \\ \times { \left \langle \text{T}_{\tau} \ \phi^{}_{\kappa}(\tau) \ \phi^{\dag}_{\kappa}(\tau_0) \ {\mathcal H}_{I} (\tau_1) ... {\mathcal H}_{I} (\tau_n) \, \right \rangle}_{0} , \end{gathered}\end{split}\end{equation} \noindent where the thermal averaging includes only connected diagrams, and the subscript $0$ denotes averaging over the non-interacting eigenstates. To simplify the discussion, in this subsection I consider a single process only, namely the first one, \begin{equation}\begin{split}\begin{gathered} \label{11 Ham} {\mathcal H}_{I}^{11} = \frac{1}{\sqrt{N_{\text{at}}}} \sum_{{\vec p}, {\vec k}} \Gamma^{11}_{{\vec k}} \, \tilde a^{\dag} _{{\vec p} + {\vec k}} \, a^{}_{{\vec p}} \, \phi^{}_{\kappa_{\vec k}} + \Gamma^{11*}_{{\vec k}} \, \phi^{\dag}_{\kappa_{\vec k}} \, a^{\dag}_{{\vec p}} \, \tilde a^{} _{{\vec p} + {\vec k}} . \end{gathered}\end{split}\end{equation} \noindent This choice is not arbitrary. Indeed, it is ${\mathcal H}_{I}^{11}$ that gives one of the most significant contributions to the self-energy. 
At the end of this section, the contributions from all the other one-photon and two-photon processes are taken into account. To simplify the relevant formulas further, I drop in this subsection all the ``$11$'' superscripts of expression \eqref{11 Ham}. According to formula \eqref{perturbation series}, the first non-vanishing correction, arising from acts of absorption and re-emission, is \begin{equation}\begin{split}\begin{gathered} \delta {\mathcal G}^{(1)}(\kappa, \tau) = - \frac{1}{2!} \int \limits_{0}^{\beta} d\tau_1 \int \limits_{0}^{\beta} d\tau_2 \\ \times { \left \langle \text{T}_{\tau} \, \phi^{}_{\kappa}(\tau) \, \phi^{\dag}_{\kappa}(0) \, {\mathcal H}_{I} (\tau_1) \, {\mathcal H}_{I} (\tau_2) \, \right \rangle}_0 . \end{gathered}\end{split}\end{equation} \noindent Using the explicit expression \eqref{11 Ham}, and decoupling the time-ordered average by means of Wick's theorem, the contributions of the connected diagrams are determined as \begin{equation}\begin{split}\begin{gathered} \label{G1} \delta {\mathcal G}^{(1)}(\kappa_{\vec k}, \tau) = \frac{|\Gamma_{{\vec k}}|^2}{N_{\text{at}}} \int \limits_{0}^{\beta} d\tau_1 \int \limits_{0}^{\beta} d\tau_2 \ {\mathcal G} (\kappa_{{\vec k}}, \tau - \tau_2 ) \, {\mathcal G} (\kappa_{{\vec k}}, \tau_1 ) \\ \times \sum_{{\vec p}} {\mathcal G}_{\text{at}} ({\vec p},\tau_1 - \tau_2) \, \tilde {\mathcal G}_{\text{at}} ({\vec p} + {\vec k}, \tau_2 - \tau_1) . \end{gathered}\end{split}\end{equation} \noindent Going to the $i \omega_n$-representation of formula \eqref{G1}, one finds that the non-vanishing contribution corresponds to $\tau_1 = \tau_2$. 
For the non-degenerate ensemble of atoms, the product of the atomic propagators is given as ${\mathcal G}_{\text{at}} ({\vec p},\tau_1 - \tau_2) \tilde {\mathcal G}_{\text{at}} ({\vec p} + {\vec k}, \tau_2 - \tau_1) = n_{{\vec p}} \tilde n_{{\vec p} + {\vec k}}$. Thus, after Fourier-transforming Eq.~\eqref{G1}, one obtains the first renormalization to the Green's function \begin{equation}\begin{split}\begin{gathered} \delta {\mathcal G}^{(1)}(\kappa,i \omega_n) = \Sigma^{(1)} (\kappa, i \omega_n) \, {\mathcal G}^2_{0}(\kappa,i \omega_{n}), \end{gathered}\end{split}\end{equation} \noindent where the first (on-shell) self-energy $\Sigma^{(1)} (\kappa, i \omega_n) = \Sigma^{(1)}_\kappa$ is contributed by one-photon emission/absorption, \begin{equation}\begin{split}\begin{gathered} \Sigma^{(1)}_{\kappa_{\vec k}} = \frac{|\Gamma_{{\vec k}}|^2}{N_{\text{at}}} \sum_{{\vec p}} n_{{\vec p}} \tilde n_{{\vec p} + {\vec k}}. \end{gathered}\end{split}\end{equation} \noindent To calculate the density of particles, we go back to the $\tau$-representation, namely considering the perturbed Green's function as \begin{equation}\begin{split}\begin{gathered} {\mathcal G}(\kappa,\tau ) \approx \frac{1}{\beta} \sum_{i \omega_n} e^{- i \omega_n \tau } {\mathcal G}_{0}(\kappa,i \omega_{n} ) \\ + \frac{1}{\beta} \sum_{i \omega_n} e^{- i \omega_n \tau } {\mathcal G}^2_{0}(\kappa,i \omega_{n} ) \Sigma^{(1)}_{\kappa} . 
\end{gathered}\end{split}\end{equation} \noindent As a result, the occupation number of photons, renormalized due to interactions, is obtained through the $\tau \to 0^{-}$ limit; in the leading order one obtains \begin{equation}\begin{split}\begin{gathered} \label{first order occ number} f_{\kappa} = - \frac{1}{\beta} \sum_{i \omega_n} {\mathcal G}_{0}(\kappa,i \omega_{n} ) - \frac{1}{\beta} \sum_{i \omega_n} {\mathcal G}^{2}_{0}(\kappa,i \omega_{n}) \, \Sigma^{(1)}_{\kappa} \\ = - \frac{1}{\beta} \sum_{i \omega_n} \frac{1}{i \omega_n - \omega_{\kappa}} - \frac{1}{\beta} \sum_{i \omega_n} \frac{\Sigma^{(1)}_{\kappa} }{(i \omega_n - \omega_{\kappa})^2}. \end{gathered}\end{split}\end{equation} \noindent Using now the Matsubara frequency summation rules (see Ref.~\cite{Mahan}), one derives \begin{equation}\begin{split}\begin{gathered} \label{first order occ number 2} f_{\kappa} = \frac{1}{e^{\beta \omega_{\kappa}}-1} - \frac{ \beta \Sigma_{\kappa}^{(1)} e^{\beta \omega_{\kappa}} }{(e^{\beta \omega_{\kappa} }-1)^2 }. \end{gathered}\end{split}\end{equation} \noindent So far, for simplicity, the non-zero chemical potential $\mu$ has been omitted in \eqref{first order occ number 2}; it is easily included by the replacement $\omega_{\kappa} \to \omega_{\kappa} - \mu$ in the Green's functions of photons. For the non-zero chemical potential, the renormalized occupation number in the first order is \begin{equation}\begin{split}\begin{gathered} \label{first occ number} f (\omega_{\kappa} +\Sigma_{\kappa}^{(1)} - \mu) \approx \frac{1}{e^{\beta (\omega_{\kappa}- \mu)}-1} - \frac{ \beta \Sigma_{\kappa}^{(1)} e^{\beta (\omega_{\kappa}- \mu)} }{\left(e^{\beta (\omega_{\kappa}- \mu) }-1 \right)^2 }. \end{gathered}\end{split}\end{equation} \noindent The first term in \eqref{first occ number} is intuitively understandable, as it describes the occupation number for the non-interacting photon gas, while the second term gives the first interaction correction. 
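The structure of \eqref{first occ number} has a simple interpretation: the interaction correction is exactly the first Taylor term of the free distribution expanded around the shifted spectrum, since $f_0'(x) = -\beta e^{\beta x} f_0^2(x)$. A quick numerical illustration with arbitrary test values:

```python
import math

def f0(x, beta):
    """Bose-Einstein distribution."""
    return 1.0 / (math.exp(beta * x) - 1.0)

beta, omega, mu = 1.0, 0.8, 0.1        # arbitrary test values (hbar = kB = 1)
x = omega - mu

def corrected(sigma):
    """First-order occupation number of Eq. (first occ number)."""
    return f0(x, beta) - beta * sigma * math.exp(beta * x) * f0(x, beta) ** 2

errs = []
for sigma in (1e-2, 1e-3, 1e-4):
    exact = f0(omega + sigma - mu, beta)   # distribution on the shifted spectrum
    errs.append(abs(exact - corrected(sigma)))
print(errs)  # residuals shrink as sigma^2: the correction is the linear Taylor term
```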
We still keep here $\hbar=1$. The next non-vanishing correction is given by the fourth order of the perturbation theory, \begin{equation}\begin{split}\begin{gathered} \label{Gf2} \delta {\mathcal G}^{(2)}(\kappa, \tau) = - \frac{1}{4!} \int \limits_{0}^{\beta} \int \limits_{0}^{\beta} \int \limits_{0}^{\beta} \int \limits_{0}^{\beta} d\tau_1 d\tau_2 d\tau_3 d\tau_4 \\ \times { \left \langle \text{T}_{\tau} \ \phi^{}_{\kappa}(\tau) \phi^{\dag}_{\kappa}(0) {\mathcal H}_{I} (\tau_1) {\mathcal H}_{I} (\tau_2) {\mathcal H}_{I} (\tau_3) {\mathcal H}_{I} (\tau_4) \right \rangle}. \end{gathered}\end{split}\end{equation} \noindent The unique connected diagrams are given, for example, by $\tau_4 = \tau_1$, $\tau_3 = \tau_2$. Decoupling operators with the use of Wick's theorem, one finds the quantity under averaging $\langle \text{T}_{\tau} \dots \rangle$ in Eq.\eqref{Gf2} containing the following combination of the atomic Green's functions, \begin{equation}\begin{split}\begin{gathered} {\mathcal G}_{\text{at}} ({\vec p}, \tau_{12}) {\mathcal G}_{\text{at}}({\vec p}', \tau_{21}) \tilde {\mathcal G}_{\text{at}} ({\vec p} + {\vec k}, \tau_{21}) \tilde {\mathcal G}_{\text{at}} ({\vec p}' + {\vec q}, \tau_{12}) \\ + {\mathcal G}_{\text{at}} ({\vec p}, 0 ) {\mathcal G}_{\text{at}}({\vec p}', 0) \tilde {\mathcal G}_{\text{at}} ({\vec p} + {\vec k}, 0) \tilde {\mathcal G}_{\text{at}} ({\vec p}' + {\vec k}, 0) \\ + {\mathcal G}_{\text{at}} ({\vec p}, \tau_{12}) {\mathcal G}_{\text{at}}({\vec p}+{\vec k} - {\vec q}, \tau_{21}) \tilde {\mathcal G}_{\text{at}}^2({\vec p} + {\vec k}, 0) \\ + {\mathcal G}_{\text{at}}^2 ({\vec p}, 0 ) \tilde {\mathcal G}_{\text{at}} ({\vec p} + {\vec k}, \tau_{21}) \tilde {\mathcal G}_{\text{at}} ({\vec p} + {\vec q}, \tau_{12}), \end{gathered}\end{split}\end{equation} \noindent where we have introduced $\tau_{12} = \tau_1 - \tau_2$ and $\tau_{21} = \tau_2 - \tau_1$ for brevity. 
Using the properties of the Matsubara Green's functions, this expression can be transformed to \begin{equation}\begin{split}\begin{gathered} {\mathcal G}_{\text{at}} ({\vec p}, \tau_{12}) {\mathcal G}_{\text{at}}({\vec p}', \tau_{21}) \tilde {\mathcal G}_{\text{at}} ({\vec p} + {\vec k}, \tau_{21}) \tilde {\mathcal G}_{\text{at}} ({\vec p}' + {\vec q}, \tau_{12}) \\ \times \left( 1 + \delta_{{\vec p}, {\vec p}'} + \delta_{{\vec k}, {\vec q}}+ \delta_{{\vec p} + {\vec k}, {\vec p}' + {\vec q}} \right). \end{gathered}\end{split}\end{equation} \noindent This, in turn, gives the second perturbative correction to the photon Green's function as \begin{equation}\begin{split}\begin{gathered} \delta {\mathcal G}^{(2)}(\kappa_{\vec k}, \tau) = \frac{|\Gamma_{{\vec k}}|^2}{N_{\text{at}}} \sum_{{\vec q}} \frac{ |\Gamma_{{\vec q}}|^2} {N_{\text{at}}} \sum_{{\vec p},{\vec p}'} \int \limits_{0}^{\beta} \int \limits_{0}^{\beta} d\tau_1 d\tau_2 \, \\ \times {\mathcal G} \left(\kappa_{\vec k}, \tau- \tau_2 \right) \, {\mathcal G} \left( \kappa_{\vec q}, \tau_2 - \tau_1 \right) \, {\mathcal G} \left(\kappa_{\vec k}, \tau_1 \right) \\ \times {\mathcal G}_{\text{at}} ({\vec p}, \tau_1 - \tau_2) {\mathcal G}_{\text{at}}({\vec p}', \tau_2 - \tau_1) \tilde {\mathcal G}_{\text{at}} ({\vec p} + {\vec k}, \tau_2 - \tau_1) \\ \times \tilde {\mathcal G}_{\text{at}} ({\vec p}' + {\vec q}, \tau_1 - \tau_2) \left( 1 + \delta_{{\vec p}, {\vec p}'} + \delta_{{\vec k}, {\vec q}}+ \delta_{{\vec p} + {\vec k}, {\vec p}' + {\vec q}} \right). \end{gathered}\end{split}\end{equation} \noindent Now, we use again the condition that the atomic ensemble is non-degenerate, yielding a quasiclassical propagator $ {\mathcal G}_{\text{at}} ({\vec p}, \tau) = - e^{- E_{{\vec p}} \tau} n_{{\vec p}} $, and a similar expression for $ \tilde {\mathcal G}_{\text{at}} ({\vec p}, \tau)$. 
Therefore, one obtains \begin{equation}\begin{split} \begin{gathered} \label{prod at} {\mathcal G}_{\text{at}} ({\vec p}, \tau_{12}) \tilde {\mathcal G}_{\text{at}} ({\vec p} + {\vec k}, \tau_{21}) {\mathcal G}_{\text{at}}({\vec p}', \tau_{21}) \tilde {\mathcal G}_{\text{at}} ({\vec p}' + {\vec q}, \tau_{12}) \\ = n_{{\vec p}} \tilde n_{{\vec p}+ {\vec k}} \ n_{{\vec p}'} \tilde n_{{\vec p}'+ {\vec q}} \ e^{(\omega_{{\vec q}} - \omega_{{\vec k}}) \tau_{21} }, \end{gathered} \end{split}\end{equation} \noindent where we have used the energy conservation laws, $ E_{{\vec p}} + \omega_{{\vec k}} = E_{{\vec p}+{\vec k}} + \Delta$, where $\Delta = \omega_{\text{at}}$ is the energy separation between the ground state and the excited state, and a similar expression for ${\vec q}$. The vector labelling of photon energies is used here to emphasize that the expression \eqref{prod at} holds in general. For definiteness, we consider $\tau_2 > \tau_1$ in the present calculation. The next step is to pass to the Matsubara representation by Fourier transforming, \begin{equation}\begin{split} \begin{gathered} \delta {\mathcal G} ^{(2)}(\kappa_{\vec k}, i \omega_n) = \frac{|\Gamma_{{\vec k}}|^2}{N^2_{\text{at}}} \sum_{{\vec q}} |\Gamma_{{\vec q}}|^2 \sum_{{\vec p},{\vec p}'} n_{{\vec p}} \tilde n_{{\vec p}+ {\vec k}} \ n_{{\vec p}'} \tilde n_{{\vec p}'+ {\vec q}} \\ \times {\mathcal G} (\kappa_{\vec k},i\omega_{n} ) \, {\mathcal G} (\kappa_{\vec k}, i\omega_{n} ) \, {\mathcal G} (\kappa_{\vec q}, i \omega_n + \omega_{{\vec q}} - \omega_{{\vec k}}) \\ \times \left( 1 + \delta_{{\vec p}, {\vec p}'} + \delta_{{\vec k}, {\vec q}}+ \delta_{{\vec p} + {\vec k}, {\vec p}' + {\vec q}} \right). 
\end{gathered} \end{split}\end{equation} \noindent We introduce the effective interaction coupling parameter \begin{equation}\begin{split} \begin{gathered} F({\vec k};{\vec q}) = \frac{1}{N^2_{\text{at}}} \sum_{{\vec p},{\vec p}'} \left( 1 + \delta_{{\vec p}, {\vec p}'} + \delta_{{\vec k}, {\vec q}}+ \delta_{{\vec p} + {\vec k}, {\vec p}' + {\vec q}} \right) \\ \times n_{{\vec p}} \tilde n_{{\vec p}+ {\vec k}} \ n_{{\vec p}'} \tilde n_{{\vec p}'+ {\vec q}}, \end{gathered} \end{split} \end{equation} \noindent which simplifies the formula for the perturbed Green's function \begin{equation}\begin{split}\begin{gathered} \delta {\mathcal G} ^{(2)}(\kappa, i \omega_n) = |\Gamma_{{\vec k}}|^2 \, {\mathcal G}^3(\kappa, i \omega_n) \sum_{{\vec q}} |\Gamma_{{\vec q}}|^2 \, F({\vec k};{\vec q}). \end{gathered}\end{split}\end{equation} \noindent The function $F({\vec k};{\vec q})$ can be calculated explicitly (see Appendix C). As the next step, we sum over Matsubara frequencies to obtain the equal-time response \begin{equation}\begin{split} \begin{gathered} \delta {\mathcal G}^{(2)}(\kappa,\tau = 0 ) = |\Gamma_{{\vec k}}|^2 \sum_{{\vec q}} |\Gamma_{{\vec q}}|^2 F({\vec k};{\vec q}) \ \frac{1} {\beta} \sum_{i \omega_n} {\mathcal G}^3(\kappa, i \omega_n) . \end{gathered} \end{split}\end{equation} \noindent To proceed in the on-shell approximation, we use non-interacting propagators. The sum can be calculated using the standard Matsubara machinery \cite{Mahan}, yielding the result \begin{equation}\begin{split} \begin{gathered} \frac{1} {\beta} \sum_{i \omega_n} {\mathcal G}^3_0(\kappa, i \omega_n) = \frac{\beta^2 e^{\beta \omega_{\kappa}}}{2} f^2_0(\omega_{\kappa}) - \beta^2 e^{2 \beta \omega_{\kappa}} f^3_0(\omega_{\kappa}), \end{gathered} \end{split}\end{equation} \noindent where again $f_0(\omega) = \left\{ e^{\beta \omega} - 1 \right\}^{-1}$ is the Bose-Einstein distribution for photons.
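The quoted frequency sum can be checked by brute force: with the free propagator ${\mathcal G}_0(\kappa, i\omega_n) = (i\omega_n - \omega_\kappa)^{-1}$ and bosonic Matsubara frequencies $\omega_n = 2\pi n/\beta$, a direct truncated summation reproduces the closed form above. A minimal numerical sketch (the values of $\beta$ and $\omega_\kappa$ are illustrative):

```python
import numpy as np

beta, omega = 1.3, 0.7            # inverse temperature and mode energy (illustrative units)
n = np.arange(-100000, 100001)    # truncated set of bosonic Matsubara indices
iwn = 2j * np.pi * n / beta       # i * omega_n

# Left-hand side: (1/beta) * sum_n G0(i omega_n)^3 with G0 = 1/(i omega_n - omega);
# the sum converges absolutely, since the terms fall off as 1/n^3
lhs = (np.sum(1.0 / (iwn - omega) ** 3)).real / beta

# Right-hand side: the closed form quoted in the text
f0 = 1.0 / (np.exp(beta * omega) - 1.0)
rhs = 0.5 * beta**2 * np.exp(beta * omega) * f0**2 \
      - beta**2 * np.exp(2.0 * beta * omega) * f0**3

assert abs(lhs - rhs) < 1e-6
```

The same brute-force comparison works for the sums over ${\mathcal G}^2_0$ encountered below.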
In the leading order, the perturbed Green's function is given by ($\mu=0$) \begin{equation}\begin{split}\begin{gathered} \label{perturbed nontrapped 2} {\mathcal G}(\kappa,\tau = 0 ) = {\mathcal G}_0(\kappa,\tau = 0 ) + \beta e^{\beta \omega_{\kappa}} \left(\Sigma^{(1)}_{\kappa} + \Sigma^{(2)}_{\kappa} \right) f^2_0(\omega_{\kappa}) \\ +{\mathcal{O}}\left[ f^3_0(\omega_{\kappa}) \right], \end{gathered}\end{split}\end{equation} \noindent where the second self-energy contribution is determined as \begin{equation}\begin{split}\begin{gathered} \label{self-energy 2} \Sigma^{(2)}_{\kappa_{\vec k}} = \frac{\beta}{2} |\Gamma_{\vec k}|^2 \sum_{{\vec q}} |\Gamma_{{\vec q}}|^2 F({\vec k};{\vec q}). \end{gathered}\end{split}\end{equation} \noindent The formulas \eqref{perturbed nontrapped 2}-\eqref{self-energy 2} completely describe the renormalized equal-time response for the one-photon process ${\cal{H}}^{11}$ in the leading orders for a uniform system at $T \ne0$. The interactions thus modify the photon's spectrum, \begin{equation}\begin{split}\begin{gathered} \tilde \omega_{\kappa} = \omega_{\kappa} +\Sigma^{(1)}_{\kappa} +\Sigma^{(2)}_{\kappa}, \ \ \ \ \ \omega_{\kappa} = \omega_0 + \frac{\kappa^2}{2 m^*}. \end{gathered}\end{split}\end{equation} \noindent Including now the non-vanishing chemical potentials $\mu$, and keeping only linear terms in $\delta \mu =\mu - \mu_0 = \lim_{\kappa \to 0}[ \Sigma^{(1)}_{\kappa} +\Sigma^{(2)}_{\kappa} ]$, one obtains the distribution function of photons, modified by interactions, as \begin{equation}\begin{split}\begin{gathered} \label{47} f(\tilde \omega_\kappa - \mu ) \approx f_0(\omega_\kappa - \mu_0 ) \\ - \beta e^{\beta ( \omega_{\kappa} - \mu_0) } f^2_0 (\omega_\kappa - \mu_0) \left[ \Sigma^{(1)}_{\kappa} +\Sigma^{(2)}_{\kappa} - \delta \mu \right] . \end{gathered}\end{split}\end{equation} \noindent If integrated over $\kappa$, Eq.\eqref{47} gives the critical number of photons at temperature $T$. 
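The expansion \eqref{47} is simply the first-order Taylor expansion of the Bose-Einstein distribution, using $d f_0(x)/dx = -\beta e^{\beta x} f_0^2(x)$. A minimal numerical sketch of this step (illustrative values; the shift $s$ plays the role of $\Sigma^{(1)}_{\kappa} + \Sigma^{(2)}_{\kappa} - \delta\mu$):

```python
import numpy as np

beta = 1.3
x = 0.45       # omega_kappa - mu_0 (illustrative)
s = 1e-4       # small energy shift standing in for Sigma - delta_mu (illustrative)

f0 = lambda y: 1.0 / (np.exp(beta * y) - 1.0)

exact = f0(x + s)                                          # f0 at the shifted argument
linear = f0(x) - beta * np.exp(beta * x) * f0(x) ** 2 * s  # first-order form, as in Eq. (47)

assert abs(exact - linear) < 1e-6   # agreement to first order in the small shift s
```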
Unfortunately, in one dimension the Bose-Einstein singularity is not integrable, so we need to modify the formalism by considering the trapping potential. \subsection{Perturbation theory in the trapping potential} In this subsection, we calculate the renormalization of the photon propagators ${\mathcal G}$ as the photons are trapped in the microtube waveguide. This is done again by means of perturbation theory. First, we compute the lowest-order corrections to the non-trapped propagator and then sum the corrections in all orders. This procedure gives a new propagator $G$, describing non-interacting but trapped photons. At the end of this subsection, this result is merged with the outcome of the previous subsection, giving the interacting trapped propagator in the leading order. Consider non-interacting photons trapped in the potential $V({\vec r})$ with its Fourier image $V_{{\vec k}}$. The Hamiltonian responsible for this reads \begin{equation}\begin{split} \begin{gathered} \label{48} {\cal H} = \sum_{{\vec k}} \omega_{\kappa_{\vec k}} \, \phi^{\dag}_{\kappa_{\vec k}} \phi^{}_{\kappa_{\vec k}} + \sum_{{\vec k},{\vec q}} V_{{\vec q}} \, \phi^{\dag}_{\kappa_{{\vec k}+{\vec q}}} \phi^{}_{\kappa_{\vec k}}. \end{gathered} \end{split}\end{equation} \noindent The perturbation theory is given by the same formalism [see \eqref{perturbation series}]. The renormalized photon propagator $G$ originates from the non-trapped propagator $G_0 \equiv {\mathcal G}$ and is augmented by the series of perturbative corrections, \begin{equation}\begin{split} \begin{gathered} \label{series trapped} G(\kappa, \tau) = {\mathcal G}(\kappa, \tau) + \delta G^{(1)} (\kappa, \tau) + \delta G^{(2)} (\kappa, \tau) + ... , \end{gathered} \end{split}\end{equation} \noindent as sketched in Fig.~\ref{trapped_propagator}.
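The series \eqref{series trapped} has the standard Dyson structure $G = G_0 + G_0 V G_0 + G_0 V G_0 V G_0 + \dots$, i.e. it is the expansion of the resolvent of the perturbed Hamiltonian in powers of the potential. A minimal numerical sketch with a small toy Hamiltonian (the mode energies and potential matrix below are illustrative, not the actual photon dispersion):

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.0, 1.3, 1.7, 2.2])            # unperturbed mode energies (illustrative)
V = 0.01 * rng.standard_normal((4, 4))
V = (V + V.T) / 2                              # small Hermitian perturbation
z = 0.1 + 0.4j                                 # probe frequency off the real axis

G0 = np.diag(1.0 / (z - w))                                   # bare resolvent
G_exact = np.linalg.inv(z * np.eye(4) - np.diag(w) - V)       # full resolvent
G_series = G0 + G0 @ V @ G0 + G0 @ V @ G0 @ V @ G0            # first two corrections

# the truncation error is of third order in the small perturbation V
assert np.max(np.abs(G_exact - G_series)) < 1e-3
```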
The first correction to the propagator due to the interaction with the external potential (second diagram in Fig.~\ref{trapped_propagator}) is expressed as \begin{figure*} \includegraphics[width=1.0\textwidth]{trapped_propagator.eps} \caption{\label{trapped_propagator} Renormalization of a photon's propagator in the external field. Wavy propagators stand for one-dimensional photon Green's functions ${\mathcal G} (\kappa, \tau)$, whereas dashed lines denote interaction with the external potential. The series converges to the trapped photon propagator ${\text{G}}(\kappa,\tau )$. } \end{figure*} \begin{equation}\begin{split}\begin{gathered} \delta G^{(1)}(\kappa_{\vec k}, \tau ) = \sum_{{\vec q},{\vec k}'} V_{{\vec q}} \int \limits_{0}^{\beta} d\tau_1 \\ \times \langle \text{T}_{\tau} \, \phi^{}_{\kappa_{\vec k}}(\tau) \, \phi^{\dag}_{\kappa_{\vec k}}(0) \, \phi^{\dag}_{\kappa_{{\vec k}'+{\vec q}}} (\tau_1) \, \phi^{}_{\kappa_{{\vec k}'}} (\tau_1) \rangle . \end{gathered}\end{split}\end{equation} \noindent Using Wick's theorem, one obtains: \begin{equation}\begin{split}\begin{gathered} \delta G^{(1)}\left(\kappa_{\vec k}, \tau \right) = V_0 \int \limits_{0}^{\beta} d\tau_1 \, {\mathcal G} \left(\kappa_{\vec k}, \tau-\tau_1 \right) \, {\mathcal G} \left( \kappa_{\vec k}, \tau_1 \right) , \end{gathered}\end{split}\end{equation} \noindent where $V_0 \equiv V_{\kappa_{\vec k}=0}$. The convolution turns into a product under the Fourier transform, which gives \begin{equation}\begin{split}\begin{gathered} \delta G^{(1)}(\kappa, i \omega_n) = V_0 \ {\mathcal G}^2 (\kappa, i \omega_n) . \end{gathered}\end{split}\end{equation} \noindent As in the previous calculations, the equal-time response is extracted by summing over Matsubara frequencies, \begin{equation}\begin{split}\begin{gathered} \delta G^{(1)}(\kappa, \tau = 0) = V_0 \ \frac{1}{\beta} \sum_{i \omega_n} {\mathcal G}^2 (\kappa, i \omega_n) = V_0 \, \frac {\beta e^{\beta \omega_{\kappa}}}{(e^{\beta \omega_{\kappa}}-1)^2} .
\end{gathered}\end{split}\end{equation} \noindent The long-wavelength asymptote $\kappa \to 0$ of the Fourier image $V_{\kappa}$ of the potential $V(z)$ is finite and given by $V_0 = V(l)/(1+\alpha)$ (see Appendix B). The theoretical model presented in this section is not limited to $V(z) \propto |z|^{\alpha}$ with $\alpha = 1$; however, for the sake of simplicity and beauty of the expressions we choose the linearly growing trapping potential, $\alpha = 1$, which gives \begin{equation}\begin{split}\begin{gathered} \label{54} \delta G^{(1)}(\kappa, \tau = 0) = \frac{e^{\beta \omega_{\kappa}} f_0^2(\omega_{\kappa}) }{2 } \, \beta V(l) . \end{gathered}\end{split}\end{equation} The next correction is given by the third diagram in Fig.~\ref{trapped_propagator}, which is the only connected diagram one can build with the perturbed Hamiltonian \eqref{48}. This diagram is expressed mathematically as \begin{equation}\begin{split}\begin{gathered} \delta G^{(2)}(\kappa_{\vec k}, \tau ) = \sum_{{\vec q}} V_{{\vec q}} V_{-{\vec q}} \int \limits_{0}^{\beta} \int \limits_{0}^{\beta} d\tau_1 d\tau_2 \\ \times {\mathcal G} (\kappa_{\vec k}, \tau_1 ) \ {\mathcal G}(\kappa_{{\vec k}+ {\vec q}}, \tau_2 - \tau_1) \ {\mathcal G}(\kappa_{\vec k}, \tau - \tau_2), \end{gathered}\end{split}\end{equation} \noindent which upon the Fourier transform yields \begin{equation}\begin{split}\begin{gathered} \delta G^{(2)} (\kappa_{\vec k}, i \omega_n ) = {\mathcal G}^2( \kappa_{\vec k}, i \omega_n) \sum_{{\vec q}} | V_{{\vec q}} |^2 {\mathcal G}( \kappa_{{\vec k}+ {\vec q}}, i \omega_n) .
\end{gathered}\end{split}\end{equation} \noindent Therefore, the equal-time response is expressed in terms of the Matsubara frequency sum, \begin{equation}\begin{split}\begin{gathered} \delta G^{(2)} (\kappa_{\vec k},\tau = 0 ) = \frac{1}{\beta} \sum_{{\vec q}} | V_{{\vec q}} |^2 \sum_{i \omega_n} {\mathcal G}^2(\kappa_{\vec k}, i \omega_n) {\mathcal G}(\kappa_{{\vec k}+ {\vec q}}, i \omega_n) \\ = \sum_{{\vec q}} | V_{{\vec q}} |^2 \ \frac{1}{\beta} \sum_{i \omega_n} \frac{1}{(i \omega_n - \omega_{\kappa_{\vec k}})^2 (i \omega_n - \omega_{\kappa_{{\vec k} + {\vec q}}})}. \end{gathered}\end{split}\end{equation} \noindent Using the rules of summation, one obtains the correction to the density of photons due to the interaction with external potential as \begin{equation}\begin{split}\begin{gathered} \label{58} \delta G^{(2)}(\kappa_{\vec k},\tau = 0 ) = - \sum_{{\vec q}} | V_{{\vec q}} |^2 \\ \times \left\{ \frac{f_0(\omega_{\kappa_{{\vec k} + {\vec q}}}) - f_0(\omega_{\kappa_{\vec k}}) }{(\omega_{\kappa_{{\vec k} + {\vec q}}} - \omega_{\kappa_{\vec k}})^2} + \frac{\beta e^{\beta \omega_{\kappa_{\vec k}}} \, f_0^2(\omega_{\kappa_{\vec k}}) }{ \omega_{\kappa_{{\vec k}+{\vec q}}} - \omega_{\kappa_{\vec k}}} \right\}. \end{gathered}\end{split}\end{equation} \noindent It's clear that the main contribution to the sum in Eq.\eqref{58} is given by the region where $\omega_{\kappa_{{\vec k} + {\vec q}}} \approx \omega_{\kappa_{\vec k}}$. Therefore, we can expand the numerators in the Taylor series around $\omega_{\kappa_{\vec k}}$. It is important to go up to the third order because some terms get canceled. This in turn leads to \begin{equation}\begin{split}\begin{gathered} \delta G^{(2)}(\kappa,\tau = 0 ) = - \frac{e^{\beta \omega_{\kappa}}(1+ e^{\beta \omega_{\kappa}} ) }{2} \beta^2 f_0^3 (\omega_{\kappa}) \sum_{{\vec q}} | V_{{\vec q}} |^2 . 
\end{gathered}\end{split}\end{equation} \noindent Now one can calculate the sum here explicitly, which gives $\sum_{{\vec q}} | V_{{\vec q}} |^2 = V^2(l)/(1+2 \alpha)$ (see Appendix B). Taking again the linearly growing potential, $\alpha=1$, one therefore obtains the expression \begin{equation}\begin{split}\begin{gathered} \label{60} \delta G^{(2)} (\kappa,\tau =0 ) = - \frac{e^{\beta \omega_{\kappa}} (1+ e^{\beta \omega_{\kappa}} ) f_0^3 (\omega_{\kappa}) }{6} \, \beta^2 V^2(l) , \end{gathered}\end{split}\end{equation} \noindent which is the second-order correction to the free propagator due to the light trapping. Therefore, the first two corrections are given by Eq.\eqref{54} and Eq.\eqref{60}. For this study, it is important to proceed to higher orders. One can verify that in all orders the series \eqref{series trapped} converges to \begin{equation}\begin{split}\begin{gathered} \label{trapped propagator} G(\kappa, \tau = 0) = - \frac{1}{\beta V(l)} \ln \frac{1- e^{\beta \omega_\kappa} e^{\beta V(l)} } {(1- e^{\beta \omega_\kappa}) e^{\beta V(l)}}, \end{gathered}\end{split}\end{equation} \noindent which describes the propagator in the external potential $V(z) = u |z|$. Again, the results can be obtained in the $l/R_0 \to \infty$ approximation and by proceeding to the continuous spectrum via $\sum_{{\vec k}} \to 2l \int \frac{d\kappa}{2 \pi} $. Therefore, throughout the machinery of the previous derivations, one needs to replace the non-trapped propagators ${\mathcal G}(\kappa, \tau)$ by the propagators $G(\kappa, \tau)$ of trapped photons. Up to the first order in photon-atom interactions (i.e. neglecting all the higher-order terms), one obtains \begin{equation}\begin{split}\begin{gathered} \label{62} \tilde f (\tilde \omega_{\kappa} - \mu) \approx -G(\kappa, \tau = 0) - \beta e^{\beta(\omega_{\kappa} - \mu)} G^2(\kappa, \tau = 0) \\ \times \left(\Sigma_{\kappa} - \delta \mu \right) .
\end{gathered}\end{split}\end{equation} \noindent To calculate the critical photon number, one needs to sum over momenta. However, a photon in the system under study is described by the motional degree of freedom $\kappa=k_z$, but there could also be other degrees of freedom, for example, polarization. Recall that it was taken into account in formula \eqref{Ncrit} by introducing $\text{g}=\text{g}(k)$, which describes the degeneracy of the photon energy levels. For the estimate in \eqref{Ncrit}, we took an effective $\text{g}^{*} \approx 3$, as a massive boson can exist in three polarization states even in the on-shell approximation. Therefore, in this approximation the continuous limit is introduced as $\sum_{{\vec k}} \to \frac{ \text{g} l}{\pi} \int_{-\infty}^{\infty} d \kappa$. This essentially leads to multiplying by $\text{g}^{*}$ each time we perform a wave-vector summation. Therefore, taking into account Eq.\eqref{62}, one obtains \begin{equation}\begin{split}\begin{gathered} \label{Ncrit_int} N_{\phi} \approx \frac{2 \text{g}^{*} l }{\pi \beta V(l)} \int \limits_{0}^{\infty} d\kappa \, \ln \left[ \frac{1- e^{\beta (\omega_\kappa - \mu_0)} e^{\beta V(l)} } {(1- e^{\beta (\omega_\kappa - \mu_0)}) \, e^{\beta V(l)}} \right] \\ - \frac{2 \text{g}^{*} l }{\pi \beta V^2(l)} \int \limits_{0}^{\infty} d \kappa \, e^{\beta (\omega_{\kappa} - \mu_0)} \left( \Sigma_\kappa- \delta \mu \right) \\ \times \ln^{2} \left[ \frac{1- e^{\beta (\omega_\kappa - \mu_0) } \, e^{\beta V(l)} } {(1- e^{\beta ( \omega_\kappa - \mu_0) }) \, e^{\beta V(l)}} \right] . \end{gathered}\end{split}\end{equation} \noindent One can verify -- either analytically or numerically -- that the first term is exactly the non-interacting result we discussed before [see the expression \eqref{N crit linear 1}].
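The key step in that check is the elementary identity $\int_{-\infty}^{\infty} dz\, \left[e^{a + b|z|} - 1\right]^{-1} = (2/b)\ln\left[1 + f_0(a)\right]$ for $a > 0$ and $b = u/T$, which converts the double $(\kappa, z)$ phase-space integral into the single-$\kappa$ form. A minimal numerical verification (dimensionless values are illustrative):

```python
import numpy as np

a, b = 0.6, 2.0                       # a = beta*(omega - mu_0) > 0 and b = u/T (illustrative)
z, h = np.linspace(0.0, 40.0, 400001, retstep=True)
integrand = 1.0 / (np.exp(a + b * z) - 1.0)

# trapezoidal rule on [0, 40], doubled for the full z-axis by symmetry of |z|
lhs = 2.0 * h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
rhs = (2.0 / b) * np.log(1.0 + 1.0 / (np.exp(a) - 1.0))

assert abs(lhs - rhs) < 1e-6
```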
Indeed, taking $l \to \infty$, and $\mu_0 = \hbar \omega_0$, the first term turns into \begin{equation}\begin{split}\begin{gathered} N_{0} = \frac{2 \text{g}^{*} T}{\pi u} \int \limits_{0}^{\infty} d\kappa \, \ln \left[1+ \frac{1} { \exp( \hbar^2 \kappa^2/2 m^{*} T)-1 } \right] \\ = \int \limits_{-\infty}^{+\infty} \int \limits_{-\infty}^{+\infty} \frac{d \kappa d z}{2 \pi} \frac{ \text{g}^{*}} { \exp\left[ \frac{\hbar^2 }{2 m^{*}T} \kappa^2 + \frac{u}{T}|z| \right]- 1 }, \end{gathered} \end{split}\end{equation} \noindent where $V(z) = u |z|$, i.e. $u = \hbar \omega_0/L_0$. One notices that this is exactly formula \eqref{Ncrit} after relabelling the motional degree of freedom $\kappa$ as $k_z$. \subsection{Contributions from one-photon and two-photon processes} Finally, in this subsection we list the contributions to the self-energies from one-photon and two-photon processes \eqref{hierarchy}-\eqref{two-photon} without a detailed derivation. The derivation procedure is the same as for the process ${\cal{H}}^{11}$ in the two previous subsections. The critical number of photons in the interacting case involves the self-energies from the processes ${\cal{H}}^{11}$, ${\cal{H}}^{12}$, ${\cal{H}}^{13}$, ${\cal{H}}^{21}$, ${\cal{H}}^{22}$, ${\cal{H}}^{23}$, ${\cal{H}}^{24}$, ${\cal{H}}^{25}$, ${\cal{H}}^{26}$, which are given by \begin{equation}\begin{split}\begin{gathered} \Sigma_{\kappa} \approx \Sigma_{\phi} (\kappa) +\Sigma_{\phi \phi} (\kappa), \end{gathered}\end{split}\end{equation} \noindent where the number of ``$\phi$'' symbols in the subscript stands for the number of photons in an irreducible process; in this study it is either one or two. The self-energies like $\Sigma_{\phi \phi \phi} (\kappa)$ and of higher orders are neglected.
The contributions from one-photon processes are given by \begin{equation}\begin{split}\begin{gathered} \label{self-energy one-photon} \Sigma^{(1)}_{\phi}(\kappa_{\vec k}) = \sum_{{\vec p}} \gamma^{11}_{{\vec k}} \, n_{{\vec p}} \tilde n_{{\vec p} + {\vec k}} + \gamma^{12}_{{\vec k}} \, n_{{\vec p}} n_{{\vec p} + {\vec k}} + \gamma^{13}_{{\vec k}} \, \tilde n_{{\vec p}} \tilde n_{{\vec p} + {\vec k}}, \\ \Sigma^{(2)}_{\phi }(\kappa_{{\vec k}}) = \frac{\beta}{2} \sum_{{\vec q}} \gamma^{11}_{{\vec k}} \gamma^{11}_{{\vec q}} F_1({\vec k};{\vec q}) + \gamma^{12}_{{\vec k}} \gamma^{12}_{{\vec q}} F_2({\vec k};{\vec q}) \\ + \gamma^{13}_{{\vec k}} \gamma^{13}_{{\vec q}} F_3({\vec k};{\vec q}), \end{gathered}\end{split}\end{equation} \noindent where $\gamma^{11}_{{\vec k}} \approx |\Gamma^{11}_{{\vec k}}|^2$, $\gamma^{12}_{{\vec k}} = |\Gamma^{12}_{{\vec k}}|^2$, $\gamma^{13}_{{\vec k}} = |\Gamma^{13}_{{\vec k}}|^2$ are positive factors. In the main approximation the effective interaction parameters (see Appendix C) are given by $F_{\alpha}({\vec k};{\vec q}) \approx \sigma_{\alpha} ({\vec k}) \, \sigma_{\alpha} ({\vec q})$, where the quantities \begin{equation}\begin{split}\begin{gathered} \sigma_1({\vec k}) = \frac{1}{N_{\text{at}}} \sum_{{\vec p}} n_{{\vec p}} \tilde n_{{\vec p}+{\vec k}}, \\ \sigma_2({\vec k}) = \frac{1}{N_{\text{at}}} \sum_{{\vec p}} n_{{\vec p}} n_{{\vec p}+{\vec k}}, \\ \sigma_3({\vec k}) = \frac{1}{N_{\text{at}}} \sum_{{\vec p}} \tilde n_{{\vec p}} \tilde n_{{\vec p}+{\vec k}} \end{gathered}\end{split}\end{equation} \noindent can be calculated analytically (see Appendix C).
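On a finite momentum grid the quantities $\sigma_\alpha({\vec k})$ are plain correlations of the occupation numbers and are cheap to evaluate directly. A minimal sketch with assumed Boltzmann-like occupations (illustrative placeholders only, not the actual Appendix C distributions):

```python
import numpy as np

M = 201
p = np.arange(M) - M // 2                      # 1-D momentum grid (illustrative)
n = np.exp(-0.02 * p.astype(float) ** 2)       # ground-state occupations n_p (assumed)
nt = 0.1 * n                                   # excited-state occupations (assumed)
N_at = n.sum() + nt.sum()                      # total number of atoms on the grid

def sigma(occ_a, occ_b, k):
    """(1/N_at) * sum_p occ_a(p) * occ_b(p + k); periodic wrap stands in for p -> p + k."""
    return np.sum(occ_a * np.roll(occ_b, -k)) / N_at

sigma_1 = [sigma(n, nt, k) for k in range(5)]
# the correlation is maximal at k = 0 and decays with |k|
assert sigma_1[0] > sigma_1[1] > sigma_1[4] > 0
```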
The contributions of the two-photon processes are given by \begin{equation}\begin{split}\begin{gathered} \label{self-energy two-photon} \Sigma^{(1)}_{\phi \phi} (\kappa_{{\vec k}})= \frac{1}{N_{\text{at}}} \sum_{{\vec p}} r^{22}_{{\vec k}} \, n_{{\vec p}} + r^{23}_{{\vec k}} \, \tilde n_{{\vec p}} = r^{22}_{{\vec k}} \frac{N_a}{N_{\text{at}}} + r^{23}_{{\vec k}} \frac{\tilde N_{a}}{N_{\text{at}}}, \\ \Sigma^{(2)}_{\phi \phi}(\kappa) \approx \frac{\beta}{2} \sum_{{\vec q}} \gamma^{21}_{{\vec k}{\vec q}} \, \sigma_1 ({\vec k}- {\vec q}) +\gamma^{22}_{{\vec k} {\vec q}} \, \sigma_2({\vec k}- {\vec q}) \\ + \gamma^{23}_{{\vec k} {\vec q}} \, \sigma_3 ({\vec k} - {\vec q}), \end{gathered}\end{split}\end{equation} \noindent where $r^{ab}_{{\vec k}} =\Re \, \Gamma^{ab}_{{\vec k}}$, and $\gamma^{2 a}_{{\vec k} {\vec q}} = |\Gamma^{2 a}_{{\vec k}}|^2$. The expressions \eqref{self-energy one-photon}-\eqref{self-energy two-photon} describe the one-photon and two-photon processes without taking into account optical collisions of atoms (or molecules). Therefore, the critical number of photons needed to observe the Bose-Einstein condensation at temperature $T$ is defined by Eq.\eqref{Ncrit_int}, with the self-energies given by expressions \eqref{self-energy one-photon} - \eqref{self-energy two-photon} and the renormalization of the non-interacting chemical potential, $\delta \mu = \lim _{\kappa \to 0} \Sigma_{\kappa}$. An important simplification comes in the limit $l \to \infty$.
In this case, the critical number of photons is given by \begin{equation}\begin{split}\begin{gathered} \label{69} N_{C} \approx N_0 - \frac{2 \text{g}^{*} L_0^2}{l} \frac{T}{\hbar \omega_0} \int \limits_{0}^{\infty} d \kappa \exp \left(\frac{\hbar^2 \kappa^2}{2 m^{*} T} \right) \frac{ \Sigma_\kappa - \delta \mu }{\hbar \omega_0} \\ \times \ln^{2} \left[ 1+f_0 \left(\frac{\hbar^2 \kappa^2}{2 m^{*}T} \right) \right] , \end{gathered}\end{split}\end{equation} \noindent where, as before, $f_0 (x) = \left(e^x - 1\right)^{-1}$; the noninteracting critical number $N_0$ is given by formulas \eqref{N crit linear 1} and \eqref{N crit linear 2}. Note that in the limit $l \to \infty$ the contributions from the lowest mean-field self-energies $\Sigma^{(1)}_{\phi}$ and $\Sigma^{(1)}_{\phi \phi}$ vanish, but the contributions from $\Sigma^{(2)}_{\phi}$ and $\Sigma^{(2)}_{\phi \phi}$ remain finite, as they involve continuous-limit summation over photon momenta, returning the factor $l$. \section{Discussion and Outline} The main goal of this paper was to introduce the condensation of photons in one dimension: Find the necessary conditions, estimate the critical parameters, and look for the role of light-matter interactions. However, it was not my goal to plan a particular experiment, nor was it to plot the observables, since such calculations make sense only after (and only if) the experiment succeeds. The analysis, presented in Sec.~\ref{Section1}, shows that in the weakly-interacting case, the condensation is possible if the light is trapped in an elongated microtube, $l \gg R_0$, which is slowly narrowing towards the ends as a power-law function weaker than parabolic. The analysis has not been done for the strongly varying shape, $l \sim R_0$, as the quantization procedure in that case is not straightforward. However, I would not be surprised if a similar phenomenon, yet less distinct, could be observed for $l \sim R_0$.
The experiments on Bose-Einstein condensation of photons \cite{Klaers2010a,Klaers2010b,Klaers2011,Marelic2015} have an interesting distinguishing feature: the temperature of the setup is fixed, and the number of particles is tuned by external pumping. Therefore, one of the natural observables is $N_C(T)$ (compare to $T_C(N)$ in atomic BECs). In this study, the noninteracting model yields the critical number of photons as $N_0 \propto T^{3/2}$, whereas the first perturbative corrections contribute in a more complicated manner (see formulas \eqref{Ncrit_int},\eqref{69} and the linked expressions in Appendix C). Again, as in the case of the two-dimensional BEC of photons \cite{Klaers2010b,Kruchkov2014}, the geometry is important for tuning the system, since the parameters $R_0$, $l$, $L_0$ appear both in the non-interacting and interacting context. The other interesting feature here is the sharp response to the atomic frequency resonance. As already mentioned, for the thermalization process based on the repeated processes of absorption and emission, it is important to ensure the closeness of the cut-off frequency $\omega_{0}$ and the main atomic transition frequency $\omega_{\text{at}}$, so that the absorption processes are favorable enough compared to the scattering processes.
Even though the thermalization processes were not considered explicitly in this study, and we only referred to the earlier studies, the importance of the relation $\omega_{0} \approx \omega_{\text{at}}$ is apparent, as it appears throughout the paper, both in the noninteracting and interacting cases: The quantity $\Theta = e^{(\hbar \omega_0 - \hbar \omega_{\text{at}} )/T}$ reflects the strength of this resonance for this system (see, for example, Appendix C); moreover, the coupling parameters $\Gamma$ introduced in the Hamiltonian \eqref{hierarchy}-\eqref{two-photon} have local extrema for photon momenta satisfying the relation $\hbar \omega( {\vec k}) \simeq \hbar \omega_{\text{at}}$. Finally, for the completeness of this consideration, one should also take into account the average number of photons that are coupled with atoms, which also depends on the closeness to the atomic resonance. In equilibrium this quantity is proportional to the number of atoms and is given by $N_{\text{at}}\left[ 1+ \text{g}_{12} \exp \left( \frac{\hbar \omega_{\text{at}}- \hbar \omega_0 }{T}\right) \right]^{-1} $, where $\text{g}_{12}$ is the ratio between the degeneracies of the atomic ground state and the first excited state; see Refs.\cite{Kruchkov2013,Kruchkov2014} for details. The influence of indirect photon-photon interactions, mediated through the different processes of absorption, emission, and scattering, was studied in terms of an effective Hamiltonian, taking into account the hierarchy of multi-photon processes. Because the photon number in the system under study is significantly smaller than the number of atoms, the hierarchy graph can be truncated at the one-photon and two-photon processes, which give the leading contributions to the self-energy, corresponding effectively to Hartree-Fock terms, as if direct photon-photon scattering were present.
The temperature-dependent perturbation theory, represented here by the Matsubara formalism, is valid in the symmetrical phase ($N \le N_C$), thus allowing us to calculate the critical parameters of the system. I should make two important remarks here. First, the different combinations of one-photon and two-photon processes can give interfering terms which, of course, will contribute to the self-energy; however, this contribution appears to be significantly smaller. The lowest contributions of the three-photon processes are of higher order (at least eight photon operators), which is beyond the present study. Second, there could also be present different one-photon and two-photon processes involving more than a pair of atoms, for example, optical collisions of the form $a^{\dag} a^{\dag} \phi^{\dag} a^{} \, a^{}$, $a^{\dag} a^{\dag} \phi^{\dag} \tilde a^{} \, \tilde a^{}$, etc. Even though formally these processes contribute effectively to Hartree-Fock decouplings, at least for the values of the parameters used in the present paper the corresponding self-energies are negligible compared to the self-energy contributions given by formulas \eqref{self-energy one-photon} - \eqref{self-energy two-photon}. The problem, however, requires further study. For example, for the photons in the Bose-Einstein condensate, a more general formalism, allowing for broken symmetry, is required. Suitable machinery is given by the Popov approximation; I am currently working on it, and the results will be published elsewhere.
\section{Introduction}\label{sec:intro} Fractional calculus has been around for hundreds of years and came around the same time as classical calculus. After years of development, fractional calculus has been widely applied in control theory, image processing, elastic mechanics, fractal theory, energy, medicine, and other fields \cite{1,2,3,4,5,6,7,8}. Fractional calculus is an extension of the integer-order calculus and the common fractional derivatives include Grunwald-Letnikov (GL) \cite{9}, Riemann-Liouville (RL) \cite{10}, Caputo \cite{11}, and so on. Although continuous fractional-order grey models have been applied in various fields, it is seldom used in the grey systems, while discrete fractional-order difference is mostly used at present. The grey model was first proposed by Professor Deng. It solves the problem of small sample modeling, and the grey model does not need to know the distribution rules of data \cite{GM}. The potential rules of data can be fully mined through sequence accumulation, which has a broad application \cite{GM}. With the development of grey theory during several decades, grey prediction models have been developed very quickly and have been applied to all walks of life. For example, Li et al. \cite{13} used a grey prediction model to predict building settlements. Cao et al. \cite{14} proposed a dynamic Verhulst model for commodity price and demand prediction. Zhang et al. \cite{15} applied a grey prediction model and neural network model for stock prediction. Ma et al. \cite{16} presented a multi-variable grey prediction model for China's tourism revenue forecast. Wu et al. \cite{17} proposed a fractional grey Bernoulli model to forecast the renewable energy of China. Zeng et al. \cite{18} used a new grey model to forecast natural gas demand. Wu et al. \cite{19} put forward a fractional grey model for air quality prediction. Ding et al. \cite{20} presented a multivariable grey model for the prediction of carbon dioxide in China. 
Modeling backgrounds in the real world become more and more complex, which puts forward higher requirements for modeling. Many scholars have improved various grey prediction models. For example, Xie et al. \cite{21} proposed a grey model whose prediction formula was derived directly from the difference equation, which improved the prediction accuracy. Cui et al. \cite{22} presented a grey prediction model that can fit an inhomogeneous sequence, which broadened the range of application of the model. Chen et al. \cite{23} put forward a nonlinear Bernoulli model, which can capture nonlinear characteristics of data. Wu et al. \cite{FGM} proposed a fractional grey prediction model that successfully extended the integer order to the fractional order; at the same time, they proved that the fractional-order grey model has smaller perturbation bounds than the integer-order model. Ma et al. \cite{25} put forward a fractional-order grey prediction model that is computationally simple and easy to popularize and apply in engineering. Zeng et al. \cite{26} proposed an adaptive grey prediction model based on fractional-order accumulation. Wei et al. \cite{27} presented a method for optimizing the polynomial model and obtained the expected results. Liu et al. \cite{28} proposed a grey Bernoulli model based on the Weibull Cumulative Distribution, which improved the modeling accuracy. In \cite{29}, a mathematical programming model was established to optimize the parameters of the grey Bernoulli model. Although the above models have achieved good results, they all use continuous integer-order derivatives. In fact, the continuous derivative has many excellent characteristics, such as heritability \cite{30}. At present, there is little work on grey prediction models based on continuous fractional derivatives, and the corresponding research is still in its early stage. In recent years, a new limit-based fractional-order derivative was introduced by Khalil et al.
in 2014 \cite{CF_define}, which is called ``the conformable fractional derivative''. It is simpler than the previous fractional-order derivatives, such as the Caputo derivative and the Riemann-Liouville derivative, so many problems can be solved more easily than with derivatives that have complex definitions. In 2015, Abdeljawad \cite{develop_com} developed this new fractional-order derivative and obtained many useful and valuable results, such as Taylor power series expansions and Laplace transforms based on this novel fractional-order derivative. Atangana et al. \cite{properties_cf} introduced new properties of the conformable derivative and proved some valuable theorems. In 2017, Al-Rifae and Abdeljawad \cite{Complexity_com} proposed a regular fractional generalization of the Sturm-Liouville eigenvalue problems and obtained some important results. Yavuz and Ya\c{s}k\i ran \cite{equation_com} suggested a new method for the approximate-analytical solution of the fractional one-dimensional cable differential equation (FCE) by employing the conformable fractional derivative. In this paper, we propose a new grey model based on the conformable fractional derivative, which has the advantage of simplicity and efficiency. The organization of this paper is as follows: In the second section, we introduce a few kinds of fractional-order derivatives. In the third section, we show a grey model with the Caputo fractional derivative, and in the fourth section, we present a new grey prediction model containing the conformable derivative. In the fifth section, we give the optimization methods for the order and the background-value coefficient. In the sixth section, two practical cases are used to verify the validity of the model, and the seventh section is a summary of the whole paper. \section{Fractional-order derivative} Fractional derivatives have rich forms; three common forms are Grunwald-Letnikov (GL), Riemann-Liouville (RL), and Caputo \cite{31}.
\begin{definition}[See \cite{31}] The GL derivative of order $\alpha$ of a function $f(t)$ is defined as \begin{equation} _a^{GL}D_t^\alpha f(t) = \sum\limits_{k = 0}^n {\frac{{{f^{(k)}}(a){{(t - a)}^{ - \alpha + k}}}}{{\Gamma ( - \alpha + k + 1)}}} + \frac{1}{{\Gamma (n - \alpha + 1)}}\int_a^t {{{(t - \tau )}^{n - \alpha }}} {f^{(n + 1)}}(\tau )d\tau \end{equation} where $_a^{GL}D_t^\alpha$ denotes the GL fractional derivative, $\alpha > 0, n - 1 < \alpha < n, n \in N$, $[a,t]$ is the integration interval of $f(t)$, and $\Gamma ( \cdot )$ is the Gamma function, defined by $\Gamma (\alpha ) = \int_0^\infty {{t^{\alpha - 1}}} {e^{ - t}}dt$. \end{definition} \begin{definition}[See \cite{31}] The RL derivative of order $\alpha$ of a function $f(t)$ is defined as \begin{equation} _a^{RL}D_t^\alpha f(t){\rm{ }} = {\frac{{{d^n}}}{{d{t^n}}}_a}D_t^{ - (n - \alpha )}f(t) = \frac{1}{{\Gamma (n - \alpha )}}\frac{{{d^n}}}{{d{t^n}}}\int_a^t {{{(t - \tau )}^{n - \alpha - 1}}} f(\tau )d\tau \end{equation} where $_a^{RL}D_t^\alpha f(t)$ is the RL fractional derivative, $a$ is the initial value, $\alpha$ is the order, and $\Gamma (\cdot)$ is the Gamma function. \end{definition} \begin{definition}[See \cite{31}] The Caputo derivative of order $\alpha$ of a function $f(t)$ is defined as \begin{equation} _a^{C}D_t^\alpha f(t) = \frac{1}{{\Gamma (n - \alpha )}}\int_a^t {{{(t - \tau )}^{n - \alpha - 1}}} {f^{(n)}}(\tau )d\tau \end{equation} where $a$ is the initial value, $\alpha$ is the order, and $\Gamma (\cdot)$ is the Gamma function.
In particular, if the order ranges from 0 to 1, the Caputo derivative can be written as \begin{equation} _a^{C}D_t^\alpha f(t) = \frac{1}{{\Gamma (1 - \alpha )}}\int_a^t {{{(t - \tau )}^{ - \alpha }}} {f^\prime }(\tau )d\tau \end{equation} \end{definition} Although the above derivatives have been applied successfully in various fields, their complicated definitions make them difficult to use in engineering practice. In recent years, some scholars have proposed a simpler fractional derivative, called the conformable derivative \cite{32}, defined as follows. \begin{definition}[See \cite{32}] Assume ${T_\alpha }(f)(t)$ is the derivative operator of $f$: $[0,\infty) \to R$, $t > 0$, $\alpha \in (0,1)$; then ${T_\alpha }(f)(t)$ is defined as \begin{equation} {T_\alpha }(f)(t) = \mathop {\lim }\limits_{\varepsilon \to 0} \frac{{f\left( {t + \varepsilon {t^{1 - \alpha }}} \right) - f(t)}}{\varepsilon } \end{equation} When $\alpha \in (n,n + 1]$ and $f$ is differentiable at $t$ $(t >0)$, the $\alpha$-order derivative of the function $f$ is \begin{equation} {T_\alpha }(f)(t) = \mathop {\lim }\limits_{\varepsilon \to 0} \frac{{{f^{(\lceil \alpha \rceil - 1)}}\left( {t + \varepsilon {t^{(\lceil \alpha \rceil - \alpha )}}} \right) - {f^{(\lceil \alpha \rceil - 1)}}(t)}}{\varepsilon } \end{equation} where $\lceil \alpha \rceil$ is the smallest integer greater than or equal to $\alpha$. \end{definition} The conformable derivative satisfies the following properties. \begin{definition}[See \cite{32}] Let $\alpha \in (0,1]$ and let $f$, $g$ be $\alpha$-differentiable at a point $t > 0$; then (1) ${T_\alpha}(af + bg) = a{T_\alpha }(f) + b{T_\alpha }(g)$ for all $a, b \in R$. (2) ${T_\alpha}\left( {{t^p}} \right) = p{t^{p - \alpha }}$ for all $p \in R$. (3) ${T_\alpha}(\lambda) = 0$ for all constant functions $f(t) = \lambda$. (4) ${T_\alpha} (fg) = f{T_\alpha }(g) + g{T_\alpha }(f)$.
(5) ${T_\alpha}\left( {\frac{f}{g}} \right) = \frac{{g{T_\alpha }(f) - f{T_\alpha }(g)}}{{{g^2}}}$, where $T_\alpha$ is the $\alpha$-order conformable derivative. \end{definition} \begin{theorem}[See \cite{32}] Let $\alpha \in (0,1]$ and let $f$ be $\alpha$-differentiable at a point $t > 0$. Then \begin{equation} \label{pro} {T_\alpha }(f)(t) = {t^{1 - \alpha }}\frac{{df}}{{dt}}(t) \end{equation} \end{theorem} {\it\textbf{ Proof.}} Let $h = \varepsilon {t^{1 - \alpha}}$; then $\begin{array}{l} {T_\alpha }(f)(t) = \mathop {\lim }\limits_{\varepsilon \to 0} \frac{{f\left( {t + \varepsilon {t^{1 - \alpha }}} \right) - f(t)}}{\varepsilon }= {t^{1 - \alpha }}\mathop {\lim }\limits_{h \to 0} \frac{{f(t + h) - f(t)}}{h}= {t^{1 - \alpha }}\frac{{df(t)}}{{dt}} \end{array}$, where $\frac{{df}}{{dt}}$ is the ordinary first-order derivative and ${T_\alpha }(f)(t)$ is the $\alpha$-order conformable derivative. \begin{definition}[See \cite{32}] $I_\alpha ^a(f)(t) = I_1^a\left( {{t^{\alpha - 1}}f} \right) = \int_a^t {\frac{{f(x)}}{{{x^{1 - \alpha}}}}} dx$, where the integral is the usual Riemann improper integral and $\alpha \in (0,1)$. \end{definition} Based on the above definitions, we now give the definitions of the conformable fractional accumulation and difference. \begin{definition}[See \cite{33}] The conformable fractional accumulation (CFA) of $f$ with order $\alpha$ is expressed as \begin{equation} \label{cfade} \begin{array}{l} {\nabla ^\alpha }f(k) = \nabla \left( {{k^{\alpha - 1}}f(k)} \right) = \sum\limits_{i = 1}^k {\frac{{f(i)}}{{{i^{1 - \alpha }}}}} ,\alpha \in (0,1],k \in {N^ + }\\ {\nabla ^\alpha }f(k) = {\nabla ^{(n + 1)}}\left( {{k^{\alpha - [\alpha ]}}f(k)} \right),\alpha \in (n,n + 1],k \in {N^ + }.
\end{array} \end{equation} The conformable fractional difference (CFD) of $f$ with order $\alpha$ is given by \begin{equation} \begin{array}{*{20}{l}} {{\Delta ^\alpha }f(k) = {k^{1 - \alpha }}\Delta f(k) = {k^{1 - \alpha }}[f(k) - f(k - 1)],\alpha \in (0,1],k \in {N^ + }}\\ {{\Delta ^\alpha }f(k) = {k^{[\alpha ] - \alpha }}{\Delta ^{n + 1}}f(k),\alpha \in (n,n + 1],k \in {N^ + }}. \end{array} \end{equation} \end{definition} In the next section, we give a brief introduction to the fractional grey model with the Caputo derivative. This model uses a continuous fractional derivative for modeling for the first time and achieves good results. \section{Grey model with Caputo fractional derivative} Most previous grey models were based on integer-order derivatives. Wu first proposed a grey prediction model based on the Caputo fractional derivative, in which the time response sequence is derived directly from the Caputo fractional derivative; the model achieved good results \cite{34}. In this section, we introduce the modeling mechanism of this model.
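As a quick numerical illustration of the CFA and CFD operators defined in the previous section, the case $\alpha \in (0,1]$ can be sketched in a few lines of Python/NumPy. This is our own illustrative sketch, not part of \cite{33}; the function names and the convention $f(0)=0$ for the first difference are assumptions made so that the difference exactly inverts the accumulation.

```python
import numpy as np

def cfa(x, alpha):
    """Conformable fractional accumulation (CFA), alpha in (0, 1]:
    x^(alpha)(k) = sum_{i=1}^{k} x(i) / i^(1 - alpha), with 1-based i."""
    i = np.arange(1, len(x) + 1)
    return np.cumsum(np.asarray(x, dtype=float) / i ** (1.0 - alpha))

def cfd(x, alpha):
    """Conformable fractional difference (CFD), alpha in (0, 1]:
    Delta^alpha x(k) = k^(1 - alpha) * (x(k) - x(k-1)); here x(0) := 0,
    a convention chosen so that cfd exactly undoes cfa."""
    x = np.asarray(x, dtype=float)
    k = np.arange(1, len(x) + 1)
    return k ** (1.0 - alpha) * np.diff(x, prepend=0.0)

# alpha = 1 recovers the classical first-order accumulation of grey models
print(cfa([1, 2, 3], 1.0))             # [1. 3. 6.]
# CFD inverts CFA for any alpha in (0, 1] (up to rounding)
print(cfd(cfa([2, 5, 3], 0.5), 0.5))
```

With $\alpha = 1$ the weights $i^{1-\alpha}$ all equal one, so the CFA reduces to the ordinary cumulative sum used in GM(1,1).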
\begin{definition}[See \cite{34}] Assume ${X^{(0)}} = \left\{ {{x^{(0)}}(1),{x^{(0)}}(2), \cdots ,{x^{(0)}}(n)} \right\}$ is a non-negative sequence; the univariate grey model of order $p$ $(0 < p < 1)$ is \begin{equation} \label{gmp_b1} {\alpha ^{(1)}}{x^{(1 - p)}}(k) + a{z^{(0)}}(k) = b \end{equation} \end{definition} where ${z^{(0)}}(k) = \frac{{{x^{(1 - p)}}(k) + {x^{(1 - p)}}(k - 1)}}{2}$, ${\alpha ^{(1)}}{x^{(1 - p)}}(k)$ is the $p$-order difference of ${x^{(0)}}(k)$, and the least squares estimate of $GM(p,1)$ satisfies $\left[ {\begin{array}{*{20}{l}} a\\ b \end{array}} \right] = {\left( {{B^{\rm{T}}}B} \right)^{ - 1}}{B^{\rm{T}}}Y$, where \begin{equation} B = \left[ {\begin{array}{*{20}{c}} { - {z^{(0)}}(2)}&1\\ { - {z^{(0)}}(3)}&1\\ \vdots & \vdots \\ { - {z^{(0)}}(n)}&1 \end{array}} \right],Y = \left[ {\begin{array}{*{20}{c}} {{\alpha ^{(1)}}{x^{(1 - p)}}(2)}\\ {{\alpha ^{(1)}}{x^{(1 - p)}}(3)}\\ \vdots \\ {{\alpha ^{(1)}}{x^{(1 - p)}}(n)} \end{array}} \right] \end{equation} The whitening equation of $GM(p,1)$ is \begin{equation} \label{GMP} \frac{{{{\rm{d}}^p}{x^{(0)}}(t)}}{{{\rm{d}}{t^p}}} + a{x^{(0)}}(t) = b. \end{equation} Assuming ${\hat x^{(0)}}(1) = {x^{(0)}}(1) $, the solution of the fractional equation obtained by the Laplace transform is \begin{equation} {x^{(0)}}(t) = \left( {{x^{(0)}}(1) - \frac{b}{a}} \right)\sum\limits_{k = 0}^\infty {\frac{{{{\left( { - a{t^p}} \right)}^k}}}{{\Gamma (pk + 1)}}} + \frac{b}{a} \end{equation} Then the restored values can be obtained as \begin{equation} \label{resGMP} {x^{(0)}}(k) = \left( {{x^{(0)}}(1) - \frac{b}{a}} \right)\sum\limits_{i = 0}^\infty {\frac{{{{\left( { - a{k^p}} \right)}^i}}}{{\Gamma (pi + 1)}}} + \frac{b}{a} \end{equation} Although many fractional grey models have achieved good results, most of them use fractional difference and fractional accumulation while still relying on integer-order derivatives.
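The Mittag-Leffler-type series in the restored-value formula (\ref{resGMP}) can be evaluated numerically by truncation. The sketch below is our own illustration (the helper name, truncation length, and tolerance are assumptions, not part of \cite{34}); the sanity check uses the fact that for $p = 1$ the series collapses to the exponential of the classical GM(1,1) response.

```python
from math import gamma, exp

def gmp_restored(x1, a, b, k, p, max_terms=100, tol=1e-12):
    """Truncated series for the GM(p,1) restored value:
    x(k) = (x(1) - b/a) * sum_i (-a*k^p)^i / Gamma(p*i + 1) + b/a."""
    z = -a * k ** p
    s = 0.0
    for i in range(max_terms):
        try:
            term = z ** i / gamma(p * i + 1)
        except OverflowError:   # Gamma grows fast; remaining terms are negligible
            break
        s += term
        if abs(term) < tol:
            break
    return (x1 - b / a) * s + b / a

# Sanity check: p = 1 gives the classical form (x(1) - b/a) e^{-a k} + b/a
print(round(gmp_restored(10.0, 0.5, 2.0, 3, 1.0), 6))   # 5.338781
print(round((10.0 - 4.0) * exp(-0.5 * 3) + 4.0, 6))     # 5.338781
```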
Although there are some studies on grey models with fractional derivatives, they are more complicated to compute than previous grey models. In order to simplify the calculation, we propose a novel fractional prediction model with the conformable derivative. \section{Grey system model with conformable fractional derivative} In this section, based on the conformable derivative, we propose a simpler grey model, named the continuous conformable fractional grey model and abbreviated as CCFGM(1,1). Wu et al. \cite{35} first gave the unified form of the conformable fractional accumulation, Eq. (\ref{cfade}). On this basis, we use a matrix method to give an equivalent form of the unified conformable fractional accumulation. \begin{theorem} \label{CFA} The conformable fractional accumulation is \begin{equation} {x^{(\alpha )}}(k) = \sum\limits_{i = 1}^k {\left[ {\begin{array}{*{20}{c}} { \lceil \alpha \rceil }\\ {k - i} \end{array}} \right]} \frac{{{x^{(0)}}(i)}}{{{i^{\lceil \alpha \rceil - \alpha }}}},\alpha \in {R^ + } \end{equation} where $\lceil \alpha \rceil$ is the smallest integer greater than or equal to $\alpha$, and $\left[ {\begin{array}{*{20}{c}} { \lceil \alpha \rceil }\\ {k - i} \end{array}} \right] = \frac{{ \lceil \alpha \rceil ( \lceil \alpha \rceil + 1) \cdots ( \lceil \alpha \rceil + k - i - 1)}}{{\left( {k - i} \right)!}} = \left( {\begin{array}{*{20}{c}} {k - i + \lceil \alpha \rceil - 1}\\ {k - i} \end{array}} \right) = \frac{{(k - i + \lceil \alpha \rceil - 1)!}}{{(k - i)!( \lceil \alpha \rceil - 1)!}}$. Here $\alpha$ is the order of the model. Theoretically, the order of the grey model can be any positive number; to simplify the calculation, we restrict the order to between 0 and 1 in the later modeling.
\end{theorem} {\it\textbf{ Proof.}} If $\alpha \in (0,1]$, then $\lceil \alpha \rceil = 1$ and $\begin{array}{l} {x^{(\alpha )}}(k) = \sum\limits_{i = 1}^k {\frac{{{x^{(0)}}(i)}}{{{i^{1 - \alpha }}}}} = \left[ {{x^{(0)}}(1),{x^{(0)}}(2), \cdots ,{x^{(0)}}(n)} \right]\left( {\begin{array}{*{20}{c}} 1&1& \cdots &1&1\\ 0&{\frac{1}{{{2^{1 - \alpha }}}}}& \cdots &{\frac{1}{{{2^{1 - \alpha }}}}}&{\frac{1}{{{2^{1 - \alpha }}}}}\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0&0& \cdots &{\frac{1}{{{{\left( {n - 1} \right)}^{1 - \alpha }}}}}&{\frac{1}{{{{\left( {n - 1} \right)}^{1 - \alpha }}}}}\\ 0&0& \cdots &0&{\frac{1}{{{n^{1 - \alpha }}}}} \end{array}} \right)\\ {\rm{ }}\\ = \left[ {{x^{(0)}}(1),{x^{(0)}}(2), \cdots ,{x^{(0)}}(n)} \right]\left( {\begin{array}{*{20}{c}} 1&0& \cdots &0&0\\ 0&{\frac{1}{{{2^{1 - \alpha }}}}}& \cdots &0&0\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0&0& \cdots &{\frac{1}{{{{\left( {n - 1} \right)}^{1 - \alpha }}}}}&0\\ 0&0& \cdots &0&{\frac{1}{{{n^{1 - \alpha }}}}} \end{array}} \right)\left( {\begin{array}{*{20}{c}} 1&1& \cdots &1&1\\ 0&1& \cdots &1&1\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0&0& \cdots &1&1\\ 0&0& \cdots &0&1 \end{array}} \right)\\ = \sum\limits_{i = 1}^k {\left( {\begin{array}{*{20}{c}} {k - i}\\ {k - i} \end{array}} \right)} \frac{{{x^{(0)}}(i)}}{{{i^{1 - \alpha }}}},\quad k = 1,2, \cdots, n. \end{array}$ If $\alpha \in (1,2]$, then $\lceil \alpha \rceil = 2$.
$\begin{array}{l} {x^{(\alpha)}}(k) = \sum\limits_{j = 1}^k {\sum\limits_{i = 1}^j {\frac{{{x^{(0)}}(i)}}{{{i^{2 - \alpha }}}}} } \\ = {\left[ \begin{array}{l} {x^{(0)}}(1)\\ {x^{(0)}}(2)\\ \cdots \\ {x^{(0)}}(n) \end{array} \right]^{\rm{T}}}\left( {\begin{array}{*{20}{c}} 1&0& \cdots &0&0\\ 0&{\frac{1}{{{2^{2 - \alpha }}}}}& \cdots &0&0\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0&0& \cdots &{\frac{1}{{{{\left( {n - 1} \right)}^{2 - \alpha }}}}}&0\\ 0&0& \cdots &0&{\frac{1}{{{n^{2 - \alpha }}}}} \end{array}} \right)\left( {\begin{array}{*{20}{c}} 1&1& \cdots &1&1\\ 0&1& \cdots &1&1\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0&0& \cdots &1&1\\ 0&0& \cdots &0&1 \end{array}} \right)\left( {\begin{array}{*{20}{c}} 1&1& \cdots &1&1\\ 0&1& \cdots &1&1\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0&0& \cdots &1&1\\ 0&0& \cdots &0&1 \end{array}} \right)\\ = {\left[ \begin{array}{l} {x^{(0)}}(1)\\ {x^{(0)}}(2)\\ \cdots \\ {x^{(0)}}(n) \end{array} \right]^{\rm{T}}}\left( {\begin{array}{*{20}{c}} 1&0& \cdots &0&0\\ 0&{\frac{1}{{{2^{2 - \alpha }}}}}& \cdots &0&0\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0&0& \cdots &{\frac{1}{{{{\left( {n - 1} \right)}^{2 - \alpha }}}}}&0\\ 0&0& \cdots &0&{\frac{1}{{{n^{2 - \alpha }}}}} \end{array}} \right)\left( {\begin{array}{*{20}{c}} 1&2& \cdots &{n - 1}&n\\ 0&1& \cdots &{n - 2}&{n - 1}\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0&0& \cdots &1&2\\ 0&0& \cdots &0&1 \end{array}} \right)\\ = {\left[ \begin{array}{l} {x^{(0)}}(1)\\ {x^{(0)}}(2)\\ \cdots \\ {x^{(0)}}(n) \end{array} \right]^{\rm{T}}}\left( {\begin{array}{*{20}{c}} 1&0& \cdots &0&0\\ 0&{\frac{1}{{{2^{2 - \alpha }}}}}& \cdots &0&0\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0&0& \cdots &{\frac{1}{{{{\left( {n - 1} \right)}^{2 - \alpha }}}}}&0\\ 0&0& \cdots &0&{\frac{1}{{{n^{2 - \alpha }}}}} \end{array}} \right)\left( {\begin{array}{*{20}{c}} 1&{\left( {\begin{array}{*{20}{c}} 2\\ 1 \end{array}} \right)}& \cdots &{\left( {\begin{array}{*{20}{c}} {n - 1}\\ {n
- 2} \end{array}} \right)}&{\left( {\begin{array}{*{20}{c}} n\\ {n - 1} \end{array}} \right)}\\ 0&1& \cdots &{\left( {\begin{array}{*{20}{c}} {n - 2}\\ {n - 3} \end{array}} \right)}&{\left( {\begin{array}{*{20}{c}} {n - 1}\\ {n - 2} \end{array}} \right)}\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0&0& \cdots &1&{\left( {\begin{array}{*{20}{c}} 2\\ 1 \end{array}} \right)}\\ 0&0& \cdots &0&1 \end{array}} \right)\\ = \sum\limits_{i = 1}^k {\left( {\begin{array}{*{20}{c}} {k - i + 1}\\ {k - i} \end{array}} \right)} \frac{{x(i)}}{{{i^{2 - \alpha }}}}$, $k = 1,2, \cdots, n. \end{array}$ Assuming that the equation holds when $\alpha \in (m - 1,m]$, then $\lceil \alpha \rceil =m$, ${x^{(\alpha )}}(k) = \sum\limits_{i = k}^n {\left[ {\begin{array}{*{20}{c}} m\\ {k - i} \end{array}} \right]} \frac{{{x^{(0)}}(i)}}{{i \lceil \alpha \rceil - \alpha }},\alpha \in {R^ + }$, when $\alpha \in (m,m+1]$, $\lceil \alpha \rceil = m+1$,\\ let $\left( {\begin{array}{*{20}{c}} 1&0& \cdots &0&0\\ 0&{\frac{1}{{{2^{m + 1 - \alpha }}}}}& \cdots &0&0\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0&0& \cdots &{\frac{1}{{{{\left( {n - 1} \right)}^{m + 1 - \alpha }}}}}&0\\ 0&0& \cdots &0&{\frac{1}{{{n^{m + 1 - \alpha }}}}} \end{array}} \right){\rm{ = A}}$, we have $\begin{array}{l} {x^\alpha }(k) = {\left[ \begin{array}{l} {x^{(0)}}(1)\\ {x^{(0)}}(2)\\ \cdots \\ {x^{(0)}}(n) \end{array} \right]^{\rm{T}}}{\rm{A}}{\left[ {\begin{array}{*{20}{c}} 1&0& \cdots &0&0\\ 1&1& \cdots &0&0\\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 1&1& \cdots &1&0\\ 1&1& \cdots &1&1 \end{array}} \right]^{m + 1}}\\ {\rm{ = }}{\left[ \begin{array}{l} {x^{(0)}}(1)\\ {x^{(0)}}(2)\\ \cdots \\ {x^{(0)}}(n) \end{array} \right]^{\rm{T}}}{\rm{A}}\left( {\begin{array}{*{20}{c}} 1&{\left( {\begin{array}{*{20}{c}} m\\ 1 \end{array}} \right)}& \cdots &{\left( {\begin{array}{*{20}{c}} {m + n - 2}\\ {n - 1} \end{array}} \right)}\\ 0&1& \cdots &{\left( {\begin{array}{*{20}{c}} {m + n - 3}\\ {n - 2} \end{array}} \right)}\\ 
\vdots & \vdots & \vdots & \vdots \\ 0&0& \cdots &{\left( {\begin{array}{*{20}{c}} m\\ 1 \end{array}} \right)}\\ 0&0& \cdots &1 \end{array}} \right)\left( {\begin{array}{*{20}{c}} 1&1& \cdots &1&1\\ 0&1& \cdots &1&1\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0&0& \cdots &1&1\\ 0&0& \cdots &0&1 \end{array}} \right)\\ = {\left[ \begin{array}{l} {x^{(0)}}(1)\\ {x^{(0)}}(2)\\ \cdots \\ {x^{(0)}}(n) \end{array} \right]^{\rm{T}}}{\rm{A}}\left( {\begin{array}{*{20}{c}} 1&{\left( {\begin{array}{*{20}{c}} m\\ 0 \end{array}} \right) + \left( {\begin{array}{*{20}{c}} m\\ 1 \end{array}} \right)}& \cdots &{\sum\limits_{i = 0}^{n - 3} {\left( {\begin{array}{*{20}{c}} {m + i}\\ {i + 1} \end{array}} \right)} }&{\sum\limits_{i = 0}^{n - 2} {\left( {\begin{array}{*{20}{c}} {m + i}\\ {i + 1} \end{array}} \right)} }\\ 0&1& \cdots &{\sum\limits_{i = 0}^{n - 4} {\left( {\begin{array}{*{20}{c}} {m + i}\\ {i + 1} \end{array}} \right)} }&{\sum\limits_{i = 0}^{n - 3} {\left( {\begin{array}{*{20}{c}} {m + i}\\ {i + 1} \end{array}} \right)} }\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0&0& \cdots &1&{\left( {\begin{array}{*{20}{c}} m\\ 0 \end{array}} \right) + \left( {\begin{array}{*{20}{c}} m\\ 1 \end{array}} \right)}\\ 0&0& \cdots &0&{} \end{array}} \right)\\ = {\left[ \begin{array}{l} {x^{(0)}}(1)\\ {x^{(0)}}(2)\\ \cdots \\ {x^{(0)}}(n) \end{array} \right]^{\rm{T}}}{\rm{A}}\left( {\begin{array}{*{20}{c}} 1&{\left( {\begin{array}{*{20}{c}} {m + 1}\\ 1 \end{array}} \right)}& \ldots &{\left( {\begin{array}{*{20}{c}} {m + n - 2}\\ {n - 2} \end{array}} \right)}&{\left( {\begin{array}{*{20}{c}} {m + n - 1}\\ {n - 1} \end{array}} \right)}\\ 0&1& \cdots &{\left( {\begin{array}{*{20}{c}} {m + n - 3}\\ {n - 3} \end{array}} \right)}&{\left( {\begin{array}{*{20}{c}} {m + n - 2}\\ {n - 2} \end{array}} \right)}\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0&0& \cdots &1&{\left( {\begin{array}{*{20}{c}} {m + 1}\\ 1 \end{array}} \right)}\\ 0&0& \cdots &0&1 \end{array}} \right)\\ = 
\sum\limits_{i = 1}^k {\left( {\begin{array}{*{20}{c}} {k - i + m}\\ {k - i} \end{array}} \right)} \frac{{{x^{(0)}}(i)}}{{{i^{m + 1 - \alpha }}}} \end{array}$. So the result is proved. \begin{remark} Similarly, Refs. \cite{25,35} give two other methods that lead to the same result, and it can be proved that these definitions of accumulation are essentially the same. The matrix method helps us to better understand the fractional accumulation, and it also makes it easier to write computer programs. \end{remark} Next, we derive the grey differential equation with a continuous conformable fractional derivative. \begin{definition} Assume ${X^{(0)}} = \left\{ {{x^{(0)}}(1),{x^{(0)}}(2), \cdots ,{x^{(0)}}(n)} \right\}$ is a non-negative sequence; the $r$-order $(0 < r < 1)$ whitening equation can be defined as \begin{equation}\label{CCFGM} \frac{{{{\rm{d}}^r}{x^{(q)}}(t)}}{{{\rm{d}}{t^r}}} + a{x^{(q)}}(t) = b, \end{equation} where ${X^{(q)}} = \left( {{x^{(q)}}(1),{x^{(q)}}(2), \cdots ,{x^{(q)}}(n)} \right)$ is the $q$-order $(0<q<1)$ accumulated sequence of ${X^{(0)}}$, and $\frac{{{{\rm{d}}^r}{x^{(q)}}(t)}}{{{\rm{d}}{t^r}}} = {T_r}\left( {{x^{(q)}}} \right)(t)$ is the continuous conformable fractional-order derivative. \end{definition} \begin{remark} If $r=1$ and $q=1$, equation (\ref{CCFGM}) is equivalent to GM(1,1) (see \cite{GM}); if $r \in \left[ {0,1} \right]$ and $q=0$, equation (\ref{CCFGM}) is equivalent in form to equation (\ref{GMP}); if $r=0$ and $q \in \left[ {0,1} \right]$, equation (\ref{CCFGM}) is equivalent in form to FGM(1,1) (see \cite{FGM}).
\end{remark} \begin{theorem} The exact solution of the conformable fractional-order differential equation is \begin{equation} \label{CCF_time_response} {\hat x^{(q)}}(k) = \frac{{\hat b + \left( {\widehat a{x^{(0)}}(1) - \hat b} \right)e{}^{\frac{{\widehat a\left( {1 - {k^r}} \right)}}{r}}}}{{\widehat a}},k = 1,2,3,...,n{\rm{ }}(n > 4) \end{equation} \end{theorem} {\it\textbf{Proof.}} Using equation (\ref{pro}) to convert the fractional derivative into an integer-order derivative, we can find the exact solution of equation (\ref{CCFGM}). From $\frac{{{{\rm{d}}^r}{x^{(q)}}(t)}}{{{\rm{d}}{t^r}}} + a{x^{(q)}}(t) = b$ we obtain ${t^{1 - r}}\frac{{{\rm{d}}{x^{(q)}}(t)}}{{{\rm{d}}t}} + a{x^{(q)}}(t) = b$. Integrating both sides, we have $\int {\frac{{{\rm{d}}{x^{(q)}}(t)}}{{(b{\rm{ - }}a{x^{(q)}}(t))}}} = \int {\frac{{dt}}{{{t^{1 - r}}}}}$, so $\ln \left| {b - a{x^{(q)}}(t)} \right| = \left( {\frac{{ - a}}{r}} \right){t^r} + {C_{\rm{1}}}$ and $b - a{x^{(q)}}(t) = \pm {e^{{C_{\rm{1}}}}}e{}^{\left( {\frac{{ - a}}{r}} \right){t^r}}$. It follows that ${x^{(q)}}(t) = \frac{{b{\rm{ + }}Ce{}^{\left( {\frac{{ - a}}{r}} \right){t^r}}}}{a}$. Let $\hat a$, $\hat b$ be the estimated parameters, let ${\hat x^{(q)}}(k)$ be the estimated value of ${x^{(q)}}(k)$, and let $k$ be the discrete variable corresponding to $t$. With ${\hat x^{(q)}}(1) = {x^{(0)}}(1)$, we get $C = \left( {\widehat a{x^{(0)}}(1) - \hat b} \right){e^{\frac{{\widehat a}}{r}}}$, so the time response function of the CCFGM model is Eq. (\ref{CCF_time_response}).
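The time response function above is straightforward to evaluate numerically. The following sketch is our own illustration (the function name is an assumption); it also checks two limiting behaviors: at $k=1$ the initial condition is reproduced, and at $r=1$ the formula collapses to the classical GM(1,1) response $(x^{(0)}(1) - b/a)e^{-a(k-1)} + b/a$.

```python
import numpy as np

def ccfgm_response(x1, a, b, r, k):
    """Time response of CCFGM(1,1):
    x_hat(k) = (b + (a*x(1) - b) * exp(a*(1 - k^r)/r)) / a."""
    k = np.asarray(k, dtype=float)
    return (b + (a * x1 - b) * np.exp(a * (1.0 - k ** r) / r)) / a

# k = 1 reproduces the initial condition x_hat(1) = x(1)
print(ccfgm_response(5.0, 0.5, 2.0, 0.8, 1))    # 5.0

# r = 1 collapses to the GM(1,1) response
k = np.arange(1, 6)
print(np.allclose(ccfgm_response(5.0, 0.5, 2.0, 1.0, k),
                  (5.0 - 2.0 / 0.5) * np.exp(-0.5 * (k - 1)) + 2.0 / 0.5))  # True
```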
\begin{remark} If $r=1$ and $q=1$, equation (\ref{CCF_time_response}) is equivalent to the response function of GM(1,1) (see \cite{GM}); if $r \in \left[ {0,1} \right]$ and $q=0$, equation (\ref{CCF_time_response}) is equivalent in form to equation (\ref{resGMP}) (the Mittag-Leffler function is a direct generalization of the exponential function); if $r=0$ and $q \in \left[ {0,1} \right]$, equation (\ref{CCF_time_response}) is equivalent in form to the response function of FGM(1,1) (see \cite{FGM}). \end{remark} Next, we derive the discrete form of the CCFGM(1,1) model. Through the discrete difference equation, we can use the least squares algorithm to obtain the parameters of the model. The predicted values of the original sequence are then obtained by taking the $q$-order difference of the predicted accumulated values, as follows: ${{\hat x}^{(0)}}(k) = \Delta {\nabla ^{1 - q}}{{\hat x}^{(q)}}(k)$, where ${\nabla ^{1-q}}{{\hat x}^{(q)}}(k)$ is the $(1-q)$-order accumulation of ${{\hat x}^{(q)}}(k)$, $\Delta$ is the first-order difference, and $q \in \left[ {0,1} \right]$. \begin{theorem} The difference equation of the continuous conformable grey model is \begin{equation} \label{dccfgm} {x^{(q - r)}}(k) + a\frac{{\rm{1}}}{{\rm{2}}}\left[ {{x^{(q)}}(k - 1) + {x^{(q)}}(k)} \right] = b, q \in \left[ {0,1} \right], r \in \left[ {0,1} \right]. \end{equation} \end{theorem} {\it\textbf{ Proof.}} Apply the $r$-order integral to both sides of Eq.
(\ref{CCFGM}): \begin{equation} \iint \cdots \int_{k - 1}^k {\frac{{{d^r}{x^{(q)}}}}{{d{t^r}}}} d{t^r} + a\iint \cdots \int_{k - 1}^k {{x^{(q)}}} (t)d{t^r} = b\iint \cdots \int_{k - 1}^k d {t^r} \end{equation} where \begin{equation}\label{dccfgm1} \iint \cdots \int_{k - 1}^k {\frac{{{d^r}{x^{(q)}}(t)}}{{d{t^r}}}} d{t^r} \approx {\Delta ^r}{x^{(q)}}(k) = {x^{(q - r)}}(k) \end{equation} Here ${x^{(q - r)}}(k)$ stands for the $(q-r)$-order accumulation, which is equal to ${\Delta ^r}{\nabla ^q}{x^{(0)}}(k)$; ${\nabla ^q}{x^{(0)}}(k)$ is the $q$-order accumulation of ${x^{(0)}}(k)$, ${\Delta ^r}{x^{(q)}}(k)$ is the $r$-order difference of ${x^{(q)}}(k)$, and $q \in \left[ {0,1} \right]$, $r \in \left[ {0,1} \right]$. According to the generalized trapezoid formula (see \cite{mao}), we have \begin{equation}\label{dccfgm2} \iint \cdots \int_{k - 1}^k {{x^{(q)}}} (t)d{t^r} \approx \frac{{\rm{1}}}{{\rm{2}}}\left[ {{x^{(q)}}(k - 1) + {x^{(q)}}(k)} \right] \end{equation} According to equation (\ref{gmp_b1}) and equation (\ref{GMP}), we have \begin{equation}\label{dccfgm3} \iint \cdots \int_{k - 1}^k b d {t^r}=b \iint \cdots \int_{k - 1}^k d {t^r}\approx \int_{k - 1}^k {bd} t \approx b. \end{equation} By equations (\ref{dccfgm1}), (\ref{dccfgm2}), and (\ref{dccfgm3}), the basic form of CCFGM(1,1) can be written as equation (\ref{dccfgm}).
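The least squares estimation implied by the difference equation (\ref{dccfgm}) can be sketched as follows, assuming the $q$-order accumulated series $x^{(q)}$ and the $(q-r)$-order series $x^{(q-r)}$ have already been computed (the function name is ours; this is an illustration, not the authors' code):

```python
import numpy as np

def estimate_ab(xq, xqr):
    """Least-squares estimate of (a, b) from the difference equation
    x^(q-r)(k) + a*z(k) = b, z(k) = (x^(q)(k-1) + x^(q)(k))/2, k = 2..n."""
    xq = np.asarray(xq, dtype=float)
    z = 0.5 * (xq[:-1] + xq[1:])                   # background values z(2..n)
    B = np.column_stack([-z, np.ones(len(z))])     # design matrix
    Y = np.asarray(xqr, dtype=float)[1:]           # x^(q-r)(2..n)
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]    # solves (B^T B)^{-1} B^T Y
    return a, b

# Exact-fit check: data generated with a = 0.2, b = 3 are recovered
xq = np.array([1.0, 2.0, 4.0, 7.0])
z = 0.5 * (xq[:-1] + xq[1:])
xqr = np.concatenate([[0.0], 3.0 - 0.2 * z])       # x^(q-r)(1) is unused
print(estimate_ab(xq, xqr))                        # ~ (0.2, 3.0)
```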
Through the least squares method, the parameters of CCFGM(1,1) are obtained as \begin{equation} \label{parameter_model} \hat{a} = \left[ {\begin{array}{*{20}{l}} a\\ b \end{array}} \right] = {\left( {{B^{\rm{T}}}B} \right)^{ - 1}}{B^{\rm{T}}}Y \end{equation} where \begin{equation}\label{B_para} B = \left[ {\begin{array}{*{20}{c}} { - \frac{{\rm{1}}}{{\rm{2}}}\left[ {{x^{(q)}}(1) + {x^{(q)}}(2)} \right]}&1\\ { - \frac{{\rm{1}}}{{\rm{2}}}\left[ {{x^{(q)}}(2) + {x^{(q)}}(3)} \right]}&1\\ \vdots & \vdots \\ { - \frac{{\rm{1}}}{{\rm{2}}}\left[ {{x^{(q)}}(n - 1) + {x^{(q)}}(n)} \right]}&1 \end{array}} \right],Y = \left[ {\begin{array}{*{20}{c}} {{x^{_{(q - r)}}}(2)}\\ {{x^{_{(q - r)}}}(3)}\\ \vdots \\ {{x^{_{(q - r)}}}(n)} \end{array}} \right] \end{equation} Let $\varepsilon = Y - B\hat a$ be the error sequence and $s=\varepsilon^{\mathrm{T}} \varepsilon=(Y-B \hat{a})^{\mathrm{T}}(Y-B \hat{a})=\sum\limits_{k = 2}^n {{{\left\{ {{x^{(q - r)}}(k) + a\frac{1}{2}\left[ {{x^{(q)}}(k - 1) + {x^{(q)}}(k)} \right] - b} \right\}}^2}}$. When $s$ is minimized, the values of $a$ and $b$ satisfy \begin{equation} \left\{ {\begin{array}{*{20}{l}} {\frac{{\partial s}}{{\partial a}} = \sum\limits_{k = 2}^n {\left\{ {{x^{(q - r)}}(k) + a\frac{1}{2}\left[ {{x^{(q)}}(k - 1) + {x^{(q)}}(k)} \right] - b} \right\}} \left[ {{x^{(q)}}(k - 1) + {x^{(q)}}(k)} \right] = 0}\\ {\frac{{\partial s}}{{\partial b}} = - 2\sum\limits_{k = 2}^n {\left\{ {{x^{(q - r)}}(k) + a\frac{1}{2}\left[ {{x^{(q)}}(k - 1) + {x^{(q)}}(k)} \right] - b} \right\}} = 0} \end{array}} \right., \end{equation} where ${\hat a}$ is defined in Eq. (\ref{parameter_model}), and $B$ and $Y$ are defined in Eq. (\ref{B_para}). \section{Optimization of the difference order $r$ and accumulation order $q$} The accumulation order is usually given by default, but in fact the difference order $r$ and the accumulation order $q$, as parts of the model, greatly affect its accuracy.
Their values can be adjusted dynamically according to the modeling content, so choosing the correct orders of the model is particularly important. In the following, we establish the mathematical programming model below to optimize these two hyperparameters, and we use the whale optimization algorithm \cite{36} to solve it. \begin{equation} \begin{array}{l} {\min _{r,q}}\frac{1}{n}\sum\limits_{i = 1}^n {\left| {\frac{{{{\hat x}^{(0)}}\left( {{k_i}} \right) - {x^{(0)}}\left( {{k_i}} \right)}}{{{x^{(0)}}\left( {{k_i}} \right)}}} \right|} \times 100\% \\ {\rm{s}}{\rm{.t}}\left\{ \begin{array}{l} r \in \left[ {0,1} \right],q \in \left[ {0,1} \right]\\ {x^{(q)}}(k) = \sum\limits_{i = 1}^k {\left[ {\begin{array}{*{20}{c}} { \lceil q \rceil }\\ {k - i} \end{array}} \right]} \frac{{{x^{(0)}}(i)}}{{{i^{ \lceil q \rceil - q}}}},q > 0\\ B = \left[ {\begin{array}{*{20}{c}} { - \frac{{\rm{1}}}{{\rm{2}}}\left[ {{x^{(q)}}(1) + {x^{(q)}}(2)} \right]}&1\\ { - \frac{{\rm{1}}}{{\rm{2}}}\left[ {{x^{(q)}}(2) + {x^{(q)}}(3)} \right]}&1\\ \vdots & \vdots \\ { - \frac{{\rm{1}}}{{\rm{2}}}\left[ {{x^{(q)}}(n - 1) + {x^{(q)}}(n)} \right]}&1 \end{array}} \right],Y = \left[ {\begin{array}{*{20}{c}} {{x^{_{(q - r)}}}(2)}\\ {{x^{_{(q - r)}}}(3)}\\ \vdots \\ {{x^{_{(q - r)}}}(n)} \end{array}} \right]\\ {{\hat x}^{(q)}}(k) = \frac{{\hat b + \left( {\widehat a{x^{(0)}}(1) - \hat b} \right)e{}^{\frac{{\widehat a\left( {1 - {k^r}} \right)}}{r}}}}{{\widehat a}},k = 2,3,4,...,n{\rm{ }}(n > 4)\\ {{\hat x}^{(0)}}(k) = \Delta {\nabla ^{1 - q}}{{\hat x}^{(q)}}(k) \end{array} \right.{\rm{ }} \end{array} \end{equation} \section{Application} In order to verify the validity of the model, we test it on two actual cases and compare it with other forecasting models.
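The objective minimized in the programming model above is the mean absolute percentage error (MAPE), which is also the accuracy metric reported in the case studies that follow. A minimal implementation (the helper name is ours):

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error in percent:
    (1/n) * sum |(x_hat(k) - x(k)) / x(k)| * 100."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((predicted - actual) / actual)) * 100.0)

print(mape([100.0, 200.0], [110.0, 180.0]))   # 10.0
```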
\textbf{Case 1.} Prediction of domestic energy consumption in China (Ten thousand ton standard coal) {In this case, we select the data of domestic energy consumption in China from 2005 to 2015 for fitting and the data from 2016 to 2017 for testing. The corresponding results are shown in Table \ref{tcase1} and Figure 1.} \begin{figure}[H] \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[height=5cm,width=8cm]{case1a.eps} \caption{Test results of five models.} \end{minipage} \hfill \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[height=5cm,width=8cm]{case1_b.eps} \caption{Error comparison of five grey models.} \end{minipage} \label{fc1} \end{figure} \begin{table}[H] \caption{Comparison of test results of five grey models.}\centering \begin{tabular}{p{12mm}p{18mm}p{18mm}p{18mm}p{18mm}p{18mm}p{25mm}} \hline Year &Raw data &FGM &PR(2) &ANN &SVR &CCFGM\\ \hline 2005 &27573 &27573.00 &27461.01 &27576.24 &27572.90 &27573.00\\ 2006 &27765 &28776.69 &28414.85 &28801.47 &29574.73 &27207.68\\ 2007 &30814 &30529.07 &29920.42 &30403.47 &31576.57 &28992.48\\ 2008 &31898 &32510.82 &31879.33 &32409.59 &33578.40 &31373.76\\ 2009 &33843 &34650.97 &34193.17 &34790.27 &35580.23 &33965.07\\ 2010 &36470 &36925.23 &36763.53 &37441.64 &37582.07 &36671.07\\ 2011 &39584 &39324.40 &39492.00 &40193.56 &39583.90 &39459.34\\ 2012 &42306 &41845.55 &42280.18 &42848.65 &41585.73 &42317.23\\ 2013 &45531 &44488.93 &45029.65 &45235.79 &43587.57 &45239.62\\ 2014 &47212 &47256.57 &47642.01 &47249.65 &45589.40 &48224.63\\ 2015 &50099 &50151.64 &50018.85 &48859.33 &47591.23 &51271.90\\ \hline MAPE & & 1.4358 &0.9604 &1.8158 &3.6857 &1.5942\\ \hline 2016 &54209 &52721.73 &53852.78 &50091.35 &49593.07 &54381.79\\ 2017 &57620 &55350.93 &57254.38 &51003.38 &51594.90 &57555.00\\ \hline MAPE & & 3.3408 &0.6458 & 9.5395 &9.4858 &0.2158\\ \hline \end{tabular} \label{tcase1} \end{table} {The test errors of five grey models are shown in Figure 2. 
The experimental results show that the fitting error and test error of the proposed model are 1.5942\% and 0.2158\%, respectively, while those of the FGM model are 1.4358\% and 3.3408\%. The fitting and test errors of PR(2) are 0.9604\% and 0.6458\%, those of ANN are 1.8158\% and 9.5395\%, and those of SVR are 3.6857\% and 9.4858\%, respectively. The fitting error of PR(2) is slightly lower than ours; however, the test error of our model is smaller than that of all the other models.} \textbf{Case 2.} Prediction of domestic coal consumption in China (ten thousand tons). {Coal consumption is related to the sustainable development of society. Accurate and effective prediction of coal consumption can contribute to effective decision-making and early warning.} \begin{figure}[H] \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[height=5cm,width=8cm]{case2a.eps} \caption{Test results of five models.} \end{minipage} \hfill \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[height=5cm,width=8cm]{case2_b.eps} \caption{Error comparison of five grey models.} \end{minipage} \end{figure} \begin{table}[H] \caption{Comparison of test results of five grey models.}\centering \begin{tabular}{p{12mm}p{18mm}p{18mm}p{18mm}p{18mm}p{18mm}p{25mm}} \hline Year &Raw data &FGM &PR(2) &ANN &SVR &CCFGM\\ \hline 2005 &10039.00 &10039.00 &10039.00 &10031.67 &9917.90 &10039.00\\ 2006 &10036.00 &9633.62 &9600.63 &10029.69 &9839.40 &9687.54\\ 2007 &9761.00 &9464.86 &9545.90 &9755.68 &9760.90 &9434.76\\ 2008 &9148.00 &9372.94 &9491.48 &9232.88 &9682.40 &9326.97\\ 2009 &9122.00 &9319.89 &9437.36 &9225.60 &9603.90 &9274.31\\ 2010 &9159.00 &9290.99 &9383.56 &9225.57 &9525.40 &9248.88\\ 2011 &9212.00 &9279.07 &9330.07 &9225.57 &9446.90 &9238.87\\ 2012 &9253.00 &9280.18 &9276.87 &9225.57 &9368.40 &9238.35\\ 2013 &9290.00 &9291.95 &9223.99 &9225.57 &9289.90 &9243.99\\ 2014 &9253.00 &9312.86 &9171.40 &9225.57 &9211.40 &9253.82\\ 2015 &9347.00
&9341.96 &9119.11 &9225.57 &9132.90 &9266.55\\ \hline MAPE & &1.4856 &2.1776 &0.5641 &2.3623 &1.3237\\ \hline 2016 &9492.00 &9378.60 &9620.06 &9225.57 &9054.40 &9281.34\\ 2017 &9283.00 &9422.38 &9860.18 &9225.57 &8975.90 &9297.62\\ \hline MAPE & &1.3481 &3.7834 &1.7128 &3.9592 &1.1884\\ \hline \end{tabular} \label{tcase2} \end{table} {Table \ref{tcase2}, Figure 3 and Figure 4 show the prediction of domestic coal consumption with our model. From Table \ref{tcase2}, we can see that the fitting error and test error of our model are 1.3237\% and 1.1884\%, respectively. The fitting and test errors of the FGM model are 1.4856\% and 1.3481\%, those of PR(2) are 2.1776\% and 3.7834\%, those of ANN are 0.5641\% and 1.7128\%, and those of SVR are 2.3623\% and 3.9592\%, respectively. Our model has the smallest test error among the compared models, which indicates that it generalizes better to the test period. } \section{Conclusion} In this paper, we propose a grey forecasting model with a conformable fractional derivative. Compared with integer-order derivatives, continuous fractional derivatives have been proved to have many excellent properties; however, most existing grey models are built on integer-order derivatives. Moreover, since the integer-order derivative cannot simulate some special development laws in nature, the grey model can be further improved by extending it from the integer order to the fractional order. The existing grey model with a continuous fractional-order derivative achieves good results, but its calculation is complicated. This paper therefore proposes a new grey model with a conformable fractional-order derivative to simplify the calculation further. Two actual cases show that our model has high precision and can easily be promoted in engineering.
The contributions of this paper are as follows: (1) We constructed a fractional-order differential equation with a conformable derivative as the whitening form of our model. (2) We built a mathematical programming model to optimize the order of CCFGM(1,1) with the whale optimizer, which further improved the prediction accuracy of the model. (3) We verified the validity of the proposed model through two actual cases; with a simpler structure, it achieves similar or even better accuracy than other models. Although the model in this paper has some advantages, it can be further improved in the following aspects: (1) To improve the modeling accuracy, a more efficient optimization algorithm can be used to optimize the parameters. (2) The model proposed in this paper is linear and cannot capture the nonlinear characteristics of the data. Accordingly, nonlinear extensions can be studied to establish a more universal and robust grey prediction model. \section {Conflicts of Interest} No potential conflict of interest was reported by the authors. \section {Acknowledgement} The work was supported by grants from the Postgraduate Research \& Practice Innovation Program of Jiangsu Province [Grant Nos. KYCX19\_0733 and KYCX20\_1144].
\section{Conclusion} In this work, we proposed a novel subspace sparse coding framework for data clustering. Our non-negative local subspace sparse clustering (NLSSC) benefits from a novel locality objective in its formulation which focuses on improving the separability of data points in the coding space. In addition, NLSSC obtains low-rank and affine sparse codes for the representation of the data. Experiments on real clustering benchmarks showed that this locality constraint is effective when performing clustering based on the obtained representation graph. In addition, the kernel extension of the algorithm (NLKSSC) is provided in order to benefit from kernel-based representations of the data. Furthermore, we introduced the link-restore algorithm as an effective remedy for the sparse coding redundancy issue when it has negative effects on clustering performance. This post-processing algorithm, which is suitable for non-negative sparse representations, corrects broken links between close data points in the representation graph. Empirical evaluations demonstrated that link-restore can act as an effective post-processing step for different types of SSC methods that use non-negative sparse coding models. As a future step, we are interested in combining our framework with dimension reduction strategies to better deal with multi-dimensional data types. \section{Experiments}\label{secexp} For the empirical evaluation of our proposed NLSSC and NLKSSC algorithms, we implement them on four widely-used clustering benchmarks: \begin{itemize} \item Hopkins155 \cite{tron2007benchmark}: Segmentation of 156 video sequences with a setup similar to \cite{elhamifar2013sparse}. \item COIL-20 \cite{nene1996columbia}: A dataset of 1440 gray-scale images of 20 different objects with the pixel size of $32\times 32$. \item Extended YaleB \cite{georghiades2001few}: Contains frontal face images taken from 38 subjects with an average of 64 samples per subject.
Feature extraction is done based on \cite{vidal2014low}. \item AR-Face \cite{martinez1998ar}: An image dataset including more than 4000 frontal faces of 126 different subjects. We use 2600 images from 100 subjects and follow the pre-processing procedure from \cite{xiao2016robust}. \end{itemize} The basis of evaluation is the clustering error $CE=\frac{\text{\# of misclustered samples}}{\text{\# of data samples}}$, computed using the posterior labeling of the clusters, and the normalized mutual information ($NMI$) \cite{ana2003robust}. For each method, an average $CE$ is calculated over 10 runs of the algorithm. $NMI$ measures the amount of information shared between the clustering result and the ground truth; it lies in the range $[0,1]$ with an ideal score of 1. Following the common practice in the literature, we report the average $CE$ along with its median value for the Hopkins155 dataset. We compare our algorithms' performance to the baseline methods {SSC}~\cite{elhamifar2013sparse}, {LRSC} \cite{vidal2014low}, {SSOMP}~\cite{you2016scalable}, {S$^3$C} \cite{li2015structured}, {GNLMF} \cite{li2017graph}, {KSSC}~\cite{patel2014kernel}, {KSSR} \cite{bian2016kernelized} and {RKNNLRS} \cite{xiao2016robust}. These algorithms are selected from the major sparse coding-based clustering approaches, among which {KSSC}, {KSSR}, and {RKNNLRS} are kernel-based methods. The spectral clustering step of the baselines is performed using the correct number of clusters. To compute the kernels required for the kernel-based methods, we use the Histogram Intersection Kernel (HIK) as in \cite{wu2009beyond} for the AR dataset, as it is a proper choice regarding its frequency-based features \cite{xiao2016robust}. For the implementations on the rest of the datasets we adopt the Gaussian kernel ${\mathcal K} (x,y)=\exp(-\frac{\| x-y \|^2}{\sigma})$, where $\sigma$ is the average of $\| {\vec{x}}_i-{\vec{x}}_j \|^2$ over all data samples.
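The two evaluation measures above can be sketched in a few lines. The following is a minimal illustration (our own helper names, brute-force cluster matching rather than an efficient assignment solver, so not the exact evaluation code used here) of computing $CE$ via the best posterior labeling and of the Gaussian kernel with the averaged-distance bandwidth described in the text:

```python
from itertools import permutations
import numpy as np

def clustering_error(y_true, y_pred):
    """CE = (# misclustered samples) / (# samples), after the best
    posterior matching of predicted clusters to ground-truth classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes, clusters = np.unique(y_true), np.unique(y_pred)
    best = max(sum(np.sum((y_true == c) & (y_pred == k))
                   for c, k in zip(classes, perm))
               for perm in permutations(clusters))
    return 1.0 - best / len(y_true)

def gaussian_kernel(X):
    """K(x, y) = exp(-||x - y||^2 / sigma), with sigma set to the average
    squared pairwise distance, as in the setup above. Rows of X are samples."""
    sq = np.sum(X ** 2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-D2 / D2.mean())
```

The permutation search is exponential in the number of clusters and only serves as a readable stand-in; practical implementations use the Hungarian algorithm for the matching step.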
\subsection{Parameter Settings} In order to tune the parameters $\lambda,\mu,k$ we utilize a grid search. We search for $\lambda$ in the range $\{1,1.5,...,7\}$, for $\mu$ in the range $\{0.1,0.2,...,1\}$ and for $k$ in $\{3,4,...,8\}$. We perform a similar parameter search for the baselines to find their best settings. For the link-restore parameter, $\tau=0.2$ generally works well, although one can also run a separate grid search over $\tau$. \begin{table*}[!b] \caption{Average clustering error ($CE$) and $NMI$ for YALE, COIL20, AR datasets. $CE$ and its median value for Hopkins155-(2 motions and 3 motions) datasets.} \vspace{-0.5cm} \label{tab:result} \LARGE \begin{center} \resizebox{1\textwidth}{!}{ \begin{tabular}{|l|c|c||c|c||c|c||c|c||c|c|} \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c||}{YALE B} & \multicolumn{2}{c||}{COIL20} & \multicolumn{2}{c||}{AR-Face}& \multicolumn{2}{c||}{Hopkins-2m} & \multicolumn{2}{c|}{Hopkins-3m}\\ \cline{2-11} & $CE$ & $NMI$ & $CE$ & $NMI$ & $CE$ & $NMI$ & $CE$ & med.
& $CE$ & med.\\ \hline {SSC}~\cite{elhamifar2013sparse} &0.1734&0.8902&0.1737&0.9104&0.1065&0.9103& 0.0289 &0 &0.0663 &0.0114 \\ \hline {LRSC}\cite{vidal2014low} &0.3136 &0.7340&0.2943&0.7838&0.0938&0.9037 & 0.0369&0.2127 &0.0746 &0.0245 \\ \hline {SSOMP}\cite{you2016scalable} & 0.3214&0.6792&0.7652&0.5274&0.1012&0.8353 & 0.1432 & 0.0328& 0.1973& 0.1504 \\ \hline {S$^3$C}\cite{li2015structured} &0.1565 &0.9104&0.1635&0.9063&0.0897&0.9117 & 0.0263& 0& 0.0527& 0.0089\\ \hline {GNLMF}\cite{li2017graph} & 0.3074&0.4172&0.3972&0.6421&0.1544&0.8769 &0.1052 & 0.0216& 0.1239& 0.0841 \\ \hline {KSSC}\cite{patel2014kernel} & 0.1504&0.8907&0.1833&0.9039&0.0678&0.9241 & 0.0275& 0& 0.0584& 0.0096\\ \hline {KSSR}\cite{bian2016kernelized} &0.1598&0.8864&0.1983&0.9027&0.0742&0.8983 & 0.0437& 0.6121& 0.0756& 0.0151\\ \hline {RKNNLRS}\cite{xiao2016robust} &0.1493&0.9035&0.1672&0.9126&0.0886&0.9131 & 0.0254& 0& 0.0512& 0.0087\\ \hline \hline \textbf{NLSSC}\large(Proposed) &0.1242&0.9146&\textbf{0.1409}&\textbf{0.9254}&0.0832&0.9125& 0.0189& 0& 0.0427 & 0.0079\\ \hline \textbf{NLKSSC}\large(Proposed) &\textbf{0.1107}&\textbf{0.9163}&0.1528&0.9147&\textbf{0.0542}&\textbf{0.9364} & \textbf{0.0122}& 0& \textbf{0.0331}& \textbf{0.0065}\\ \hline \end{tabular} } \footnotesize The best result (\textbf{bold}) is according to a two-tailed t-test at a $5\%$ significance level. \end{center} \end{table*} \subsection{Clustering Results} According to the results summarized in Table \ref{tab:result}, the proposed methods outperform the baselines in terms of clustering error. Comparing NLKSSC to NLSSC, the kernel-based variant results in a smaller $CE$ (except on COIL20), which shows that the kernel-based framework is able to better represent the cluster distributions.
Regarding the COIL20 dataset, comparing the kernel-based methods to the other baselines suggests that the utilized kernel function was not particularly effective for a cluster-based representation of this dataset. However, NLKSSC still outperforms the other baselines due to the effectiveness of its sparse subspace model. Among the other methods, S$^3$C, RKNNLRS, and KSSC have comparable results, especially on the Hopkins dataset. This suggests that, although KSSC and RKNNLRS benefit from kernel representations, the S$^3$C algorithm is also relatively effective at capturing the data structure. However, KSSR shows low performance even in comparison to vectorial methods such as SSC and LRSC. This behavior is due to the lack of a strong regularization term in its model regarding the subspace structure of the data. Among the non-negative methods, the performance of GNLMF is relatively below average, which may suggest that its NMF-based structure is less suitable for capturing the cluster distribution than self-representative methods. On the other hand, the performance of RKNNLRS shows that its non-negative model is more effective for clustering purposes than NMF-based models. Comparing NLSSC (the proposed algorithm) to the other baselines with low-rank regularizations in their models, we can conclude that the proper combination of the locality term and the affine constraint helped NLSSC obtain higher performance. The same conclusion can be drawn by comparing NLSSC/NLKSSC to KSSC as an affine subspace clustering algorithm.
\begin{table*}[!b] \caption{Application of the link-restore method on the non-negative approaches.} \label{tab:link} \vspace{-0.5cm} \LARGE \begin{center} \resizebox{1\textwidth}{!}{ \begin{tabular}{|l|c|c||c|c||c|c||c|c||c|c|} \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c||}{YALE} & \multicolumn{2}{c||}{COIL20} & \multicolumn{2}{c||}{AR}& \multicolumn{2}{c||}{Hopkins-2m} & \multicolumn{2}{c|}{Hopkins-3m}\\ \cline{2-11} & $CE$ & $NMI$ & $CE$ & $NMI$ & $CE$ & $NMI$ & $CE$ & median & $CE$ & median\\ \hline {GNLMF-link}\cite{li2017graph} &0.2514& 0.6564&0.2674&0.7161&0.1251&0.8846&0.0793&0.0147&0.1025&0.0649\\ \hline {RKNNLRS-link}\cite{xiao2016robust} &0.1237&0.9103&0.1602&0.9137&0.0823&0.9135&0.0230&{0}&0.0469&0.0081\\ \hline \hline \textbf{NLSSC-link} &0.1027&0.9182&\textbf{0.1409}&\textbf{0.9254}&0.0776&0.9153&0.0189&{0}&0.0392&0.0064\\ \hline \textbf{NLKSSC-link} &\textbf{0.0842}&\textbf{0.9326}&0.1523&0.9148&\textbf{0.0482}&\textbf{0.9381}&\textbf{0.0122}&{0}&\textbf{0.0301}&\textbf{0.0054}\\ \hline \end{tabular} } \footnotesize The best result (\textbf{bold}) is according to a two-tailed t-test at a $5\%$ significance level. \end{center} \end{table*} \subsection{Effect of Link-Restore} To investigate the effect of the proposed link-restore algorithm we apply it to GNLMF, RKNNLRS, NLSSC, and NLKSSC as a post-processing step. This selection is based on the fact that link-restore relies on the non-negativity of the columns of $\mathbf{\Gamma}$. For its application to GNLMF and NLSSC, we use the kernel matrix ${\mathcal K}(\mathbf{X},\mathbf{X})$ associated with the kernel-based baselines. According to Table \ref{tab:link}, the application of link-restore was effective in almost all cases. It reduced the clustering error of all the relevant methods to some extent, which demonstrates its ability to correct broken links in the representation graph $\MC{G}$.
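For reference, the correction evaluated above (Algorithm \ref{alg:graph}) can be sketched as follows for a single non-negative sparse code. This is our own minimal NumPy paraphrase of the described steps, with `self_idx` marking the coded sample itself; it is not the exact implementation used in the experiments:

```python
import numpy as np

def link_restore(gamma, X, self_idx, tau=0.2):
    """Restore broken links for one non-negative sparse code `gamma`
    (columns of X are data points; gamma reconstructs X[:, self_idx]).
    Weight is redistributed so that sum(gamma) is preserved."""
    gamma = gamma.astype(float).copy()
    G = X.T @ X                              # all pairwise dot products <x_i, x_j>
    for i in np.flatnonzero(gamma):          # initial support I = {i | gamma_i != 0}
        # bar_I: points close to x_i (||x_i - x_s||^2 < tau ||x_i||^2) with gamma_s = 0
        close = (np.diag(G) - 2.0 * G[i]) < (tau - 1.0) * G[i, i]
        close[self_idx] = False              # never link the coded sample to itself
        bar_I = np.flatnonzero(close & (gamma == 0))
        if bar_I.size == 0:
            continue
        new_gi = gamma[i] * G[i, i] / (G[i, i] + G[i, bar_I].sum())
        gamma[bar_I] = new_gi * G[i, bar_I] / G[i, i]   # spread weight over bar_I
        gamma[i] = new_gi
    return gamma
```

Because the redistributed weights sum back to the original $\gamma_i$, the affine constraint on the code survives the correction, which is why the post-processing does not disturb the rest of the pipeline.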
Nevertheless, the amount of improvement from NLSSC/NLKSSC varies among the datasets. For the 2-motions subset of Hopkins and for the COIL20 dataset it did not add any important links to the graph $\MC{G}$, and consequently did not change the value of $CE$. However, for the YALE and AR datasets the decrease in $CE$ shows the effectiveness of link-restore in correcting the missing connections in $\MC{G}$. \begin{figure}[tb] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=.9\linewidth]{link_orig.eps} \caption{} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=.9\linewidth]{link_restore.eps} \caption{} \end{subfigure} \caption{A subset of the affinity matrix resulting from the implementation of NLKSSC on the AR dataset: (a) Before application of link-restore. (b) After application of link-restore.} \label{fig:link} \end{figure} Figure \ref{fig:link} visualizes the affinity matrix for the implementation of NLKSSC on the AR dataset. The figure is zoomed in on two of the clusters, showing that the representation graph contains more intra-cluster connections after applying link-restore (Figure \ref{fig:link}-b). \subsection{Sensitivity to the Parameter Settings} Due to space limits, we study the sensitivity of NLKSSC to the choice of parameters only for the AR dataset, considering 3 different experiments. In each experiment, we fix two of $\lambda,\mu,k$ and vary the other one, and study the effect of this variation on the clustering error ($CE$). Based on Figure \ref{fig:sens}, the algorithm's sensitivity to $\lambda$ is acceptable when $2\le\lambda\le4.5$. Having $\lambda \ge 6$ does not change $CE$ since it makes the loss term $\MC{E}_{rep}:=\| \Phi(\mathbf{X})-\Phi(\mathbf{X})\mathbf{\Gamma}\|_F^2$ more dominant in the optimization problem of Eq. \ref{eq:nklssc}. By choosing $0.25 \le \mu \le 0.5$, the algorithm's performance does not change drastically.
However, NLKSSC shows considerable sensitivity if $\mu$ goes beyond 0.6. High values of $\mu$ weaken the role of $\MC{E}_{rep}$ (the main loss term) in the sparse coding model. Regarding the sensitivity curve of $k$, its starting point has a $CE$ similar to the start of the $\mu$ sensitivity curve, as in both cases the effect of $\MC{E}_{lsp}$ vanishes in the optimization. Figure \ref{fig:sens}-b shows that $k\in\{3,4,5\}$ is a good choice. However, with $k<3$ the objective term $\MC{E}_{lsp}$ is not effective enough, and with $k \ge 10$ the $CE$ curve does not follow any constant pattern but generally becomes worse, because a large $k$ increases $\frac{\|\MB{W}_m\|_0}{\|\MB{W}_c\|_0}$ and may violate the pre-assumption of Proposition \ref{prop1}. It is important to note that even a small neighborhood radius (e.g. $k=4$) can have a wide impact on the global representation if the local neighborhoods overlap. Generally, similar sensitivity behaviors are also observed for the other datasets. \begin{figure}[!b] \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=.95\linewidth]{sens_lam.eps} \caption{} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=.95\linewidth]{sens_kb.eps} \caption{} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=0.95\linewidth]{sens_mu.eps} \caption{} \end{subfigure} \caption{Sensitivity analysis of NLKSSC to the selection of the parameters (a) $\lambda$, (b) $k$ and (c) $\mu$ for the AR dataset.} \label{fig:sens} \end{figure} \section{Acknowledgment} This research was supported by the Cluster of Excellence Cognitive Interaction Technology 'CITEC' (EXC 277) at Bielefeld University, which is funded by the German Research Foundation (DFG).
\bibliographystyle{IEEEtran} \section{Introduction} Clustering is one of the challenging problems in the area of machine learning and data analysis~\cite{xu2005survey}, for which unsupervised methods try to discover the hidden structure of the data. On the other hand, sparse coding algorithms aim to find a latent representation of data points based on a weighted combination of sparsely selected base vectors \cite{rubinstein2008efficient}. Such a sparse representation has the potential to capture the essential characteristics of the data, including its hidden structure \cite{kim2010sparse}. Therefore, in recent years, several studies have successfully used sparse coding models for clustering purposes~\cite{liu2013robust,you2016scalable,xu2016novel}. Calling the weighting coefficients sparse codes, the clustering phase is applied to the learned sparse codes using common clustering methods such as spectral clustering~\cite{yang2014data}. An important group of sparse coding methods for clustering is the family of sparse subspace clustering (SSC) algorithms \cite{elhamifar2013sparse}. Assuming the data is distributed on a union of linear subspaces, SSC methods focus on obtaining self-expressive representations, such that each data point can be represented by other samples from its own cluster (subspace)~\cite{Cheng:2010:LLG:1820776.1820778,liu2013robust}. There are considerable variations in the structure of existing SSC algorithms \cite{vidal2014low,Gao2012,you2016scalable}, which lead to different optimization schemes. From another point of view, some sparse coding approaches restrict the sparse codes to non-negative values to obtain a more interpretable representation of the data, especially when the data is related to biological models~\cite{hoyer2003modeling}. Such non-negativity also often results in a better construction of the subsequent clustering graph~\cite{xiao2016robust,zhuang2012non}.
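The self-expressive principle can be illustrated with a small sketch (a ridge-regularized least-squares stand-in for the $\ell_1$/low-rank solvers cited above; all names are ours): each sample is reconstructed from the remaining samples, and the resulting coefficients define the affinity graph passed to spectral clustering.

```python
import numpy as np

def self_expressive_codes(X, reg=0.1):
    """Gamma[:, i] reconstructs sample x_i from the other columns of X:
    min_g ||x_i - X_{-i} g||^2 + reg * ||g||^2, with the self-weight fixed to 0.
    (Ridge stand-in for the sparse/low-rank objectives discussed in the text.)"""
    N = X.shape[1]
    Gamma = np.zeros((N, N))
    for i in range(N):
        idx = [j for j in range(N) if j != i]   # exclude self-representation
        Xi = X[:, idx]
        g = np.linalg.solve(Xi.T @ Xi + reg * np.eye(N - 1), Xi.T @ X[:, i])
        Gamma[idx, i] = g
    return Gamma
```

When the subspaces are well separated, the dominant weights in each column fall on samples from the same subspace, so the symmetrized affinity $A = |\Gamma| + |\Gamma|^{\top}$ is approximately block-diagonal, which is what the spectral clustering step exploits.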
By benefiting from kernel functions, data can be implicitly mapped to a high-dimensional space in which the clusters are more separable. Hence, a subset of SSC algorithms has focused on developing kernel-based SSC methods \cite{patel2014kernel,bian2016kernelized,xiao2016robust}, which typically achieve higher clustering accuracies in comparison to their vectorial versions. \subsubsection*{Contributions:} In this work, we propose a non-negative SSC algorithm with a unique structure. The method combines a nuclear-norm regularizer with a local-separability objective term. In addition, it preserves the affine representation of the data in the latent space, in accordance with an affine assumption about the underlying subspaces. Accordingly, our explicit contributions are as follows: \begin{itemize} \item We introduce a novel objective term which focuses on increasing the local separability of the data. This term is used in an unsupervised way, and it shapes the sparse representation of the data toward better cluster separability. \item An efficient post-processing method is introduced to mitigate the negative effect of sparse coding redundancies on clustering performance. \item Our algorithm is also extended to a nonlinear version by incorporating a kernel function into its framework. \end{itemize} In the next section, we briefly review SSC algorithms. Afterward, we present our proposed approaches in Sec. \ref{sec:method} and the optimization procedure in Sec. \ref{sec:optim}. Then, we carry out empirical evaluations in Sec.~\ref{secexp}, and conclude in the final section. \section{Proposed Non-Negative SSC algorithm}\label{sec:method} In this section, we introduce our proposed SSC algorithms NLSSC and NLKSSC. Although they are explained in individual subsections, NLKSSC is the kernel extension of NLSSC and is optimized in the same way.
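One practical note on the kernel extension: the feature map $\Phi$ is never needed explicitly, since the feature-space reconstruction error satisfies $\| \Phi(\mathbf{X})-\Phi(\mathbf{X})\mathbf{\Gamma}\|_F^2 = \Tr\big((\MB{I}-\mathbf{\Gamma})^{\top}{\mathcal K}(\mathbf{X},\mathbf{X})(\MB{I}-\mathbf{\Gamma})\big)$. A quick numeric check of this identity with a linear kernel, where $\Phi$ is the identity so both sides are computable (our own verification code, not part of the algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))            # 8 samples in R^5 (columns)
Gamma = 0.1 * rng.standard_normal((8, 8))  # an arbitrary coefficient matrix
K = X.T @ X                                # linear kernel: Phi = identity

direct = np.linalg.norm(X - X @ Gamma, 'fro') ** 2
via_gram = np.trace((np.eye(8) - Gamma).T @ K @ (np.eye(8) - Gamma))
assert np.isclose(direct, via_gram)        # reconstruction loss from K alone
```

For a nonlinear kernel, the right-hand side is the only computable form, which is what makes the optimization of the kernel variant tractable.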
\subsection{Non-Negative Local Subspace Sparse Clustering} We formulate our non-negative local SSC algorithm (NLSSC) using the following self-representative framework: \begin{equation} \begin{array}{ll} \underset{\mathbf{\Gamma}}{\min}&\|\mathbf{\Gamma}\|_*+\frac{\lambda }{2} \| \mathbf{X}-\mathbf{X}\mathbf{\Gamma}\|_F^2 +\mu\mathcal{E}_{lsp}(\mathbf{\Gamma},\mathbf{X})\\ \mathrm{s.t} & ~ \mathbf{\Gamma}^{\top}\vec{\MB{1}}=\vec{\MB{1}}, \gamma_{ij} \ge 0 , ~~\gamma_{ii}=0~,~\forall ij \label{eq:nnlssc} \end{array} \end{equation} where $\gamma_{ii}=0$ prevents data points from being represented by their own contributions. The constraint $\mathbf{\Gamma}^{\top}\vec{\MB{1}}=\vec{\MB{1}}$ enforces the affine reconstruction of data points, which coincides with the assumption that the data lies in a union of affine subspaces $\MC{S}_l$. The nuclear-norm regularization term $\|\mathbf{\Gamma}\|_{*}=\mathrm{trace}(\sqrt{\mathbf{\Gamma}^*~\mathbf{\Gamma}})$ is employed to encourage low-rank sparse coding representations, which helps the sparse model better capture the global structure of the data distribution. The non-negativity constraint on $\gamma_{ij}$ is employed to enforce that the data combinations happen mostly between similar samples. The novel term $\mathcal{E}_{lsp}(\mathbf{\Gamma},\mathbf{X})$ is a loss function which focuses on the local separability of data points in the coding space based on the values of $\mathbf{\Gamma}$. The scalars $\lambda$ and $\mu$ are constants which control the contributions of the objective terms. The goal of having $\mathcal{E}_{lsp}(\mathbf{\Gamma},\mathbf{X})$ in the SSC model is to reduce the intra-cluster distances and increase the inter-cluster distances.
To do so in an unsupervised way, we define \begin{equation} \mathcal{E}_{lsp}(\mathbf{\Gamma},\mathbf{X}):=\frac{1}{2}\sum_{i,j} \big[ w_{ij}\|{\vec{\gamma}}_i-{\vec{\gamma}}_j\|_2^2 + b_{ij} ({{\vec{\gamma}}_i}^\top {\vec{\gamma}}_j)\big] \label{eq:olsp} \end{equation} in which the binary regularization weighting matrices $\MB{W}$ and $\MB{B}$ are computed as \begin{equation} w_{ij}= \begin{cases} 1, & \text{if } {\vec{x}}_j \in \MC{N}_i^k\\ 0, & \text{otherwise} \end{cases} ,\qquad b_{ij}= \begin{cases} 1, & \text{if } {\vec{x}}_j \in \MC{F}_i^{k}\\ 0, & \text{otherwise} \end{cases} \label{eq:weight} \end{equation} The two sets $\MC{N}_i^k$ and $\MC{F}_i^k$ refer to the $k$-nearest and $k$-farthest data points to ${\vec{x}}_i$, and are determined by computing the Euclidean distance $\|{\vec{x}}_i-{\vec{x}}_j\|_2$ between each pair $({\vec{x}}_i,{\vec{x}}_j)$. Defining $\MC{D}(\MB{W},\mathbf{\Gamma}):=\sum_{i,j} w_{ij}\|{\vec{\gamma}}_i-{\vec{\gamma}}_j\|_2^2$ and $\MC{H}(\MB{B},\mathbf{\Gamma}):=\sum_{i,j} b_{ij} ({{\vec{\gamma}}_i}^\top {\vec{\gamma}}_j)$, the first part reduces the distance between $({\vec{\gamma}}_i,{\vec{\gamma}}_j)$ if they belong to $\mathcal{N}_i^k$, while the latter enforces the incoherency of each pair $({\vec{\gamma}}_i,{\vec{\gamma}}_j)$ if they are members of $\mathcal{F}_i^k$. The following explains the effect of $\mathcal{E}_{lsp}$ on the separability of the clusters in the coding space. Assuming there exist labeling scalars $\{l_i\}_{i=1}^N \in \MBB{R}$, we prefer ${\vec{x}}_i$ and the members of $\MC{N}_i^k$ to belong to the same class, while the set $\MC{F}_i^k$ should contain data from other clusters. We define $\MB{W}_c$ and $\MB{W}_m$ such that $\MB{W}=\MB{W}_c+\MB{W}_m$, where they respectively denote the correct and wrong assignments regarding the label information ${\vec{l}}$. More precisely, if $w(i,j)=1$ then in case $l_i=l_j$ we have ${w_c}(i,j)=1$, otherwise ${w_m}(i,j)=1$.
The rest of the entries in $({\MB{W}_c},{\MB{W}_m})$ are set to zero. \begin{definition} The neighborhoods in $\mathbf{X}$ are cluster representative to the order of $o_r$, if $\exists k\in \mathbb{N}:\|{\MB{W}}_m\|_0/\|\MB{W}_c\|_0=o_r,~\text{and}~ o_r<1$. \label{def:cr} \end{definition} Definition \ref{def:cr} means that in the neighborhoods of the data samples in $\mathbf{X}$ there are more points of the same class than of different ones. \begin{proposition} Minimizing $\mathcal{E}_{lsp}$ in Eq. (\ref{eq:olsp}) makes the columns of $\mathbf{\Gamma}$ better locally separable regarding the underlying classes, if the neighborhoods in $\mathbf{X}$ are cluster representative with a sufficiently small $o_r$. \begin{proof}[\textit{Sketch}] Eq.~\ref{eq:olsp} can be rewritten as $$\mathcal{E}_{lsp}=\DW{W}{c}{}+\DW{W}{m}{}+\MC{H}(\MB{B},\mathbf{\Gamma})$$ Therefore, $\mathbf{\Gamma}^* = \underset{\mathbf{\Gamma}}{\argmin}~\MC{E}_{lsp}$ generally works in favor of decreasing $\DW{W}{c}{}$ and $\MC{H}(\MB{B},\mathbf{\Gamma})$ compared to an initial $\mathbf{\Gamma}^0$. \\Consequently, a small $\DW{W}{c}{}$ leads to compact same-label neighborhoods in $\mathbf{\Gamma}^*$, and decreasing $\MC{H}(\MB{B},\mathbf{\Gamma})$ generally increases $\DW{B}{}{}$ and provides a more localized structure for $\mathbf{\Gamma}^*$. \\Denoting $\Delta\DW{W}{}{*}:=\DW{W}{}{*}-\DW{W}{}{0}$, according to Definition~\ref{def:cr}, ${\Delta\DW{W}{m}{*}}/{\Delta\DW{W}{c}{*}}$ is a decreasing function of $1/o_r$. \\Hence, the smaller $o_r$ becomes, the better the columns of $\mathbf{\Gamma}^*$ can be locally separated from the data samples of other classes ($\MB{W}_m$) in their neighborhoods.
\end{proof} \label{prop1} \end{proposition} Proposition \ref{prop1} shows the effect of minimizing the loss term $\mathcal{E}_{lsp}$ on having localized and compact neighborhoods in the sparse codes $\mathbf{\Gamma}$, by making the sparse codes of neighboring samples more similar (identical in the ideal case) while making those of faraway points incoherent (orthogonal in the ideal case). It also provides the desired condition under which the local neighborhoods in $\mathbf{\Gamma}$ better respect the class labels $\vec{l}$, leading to a better alignment between $\mathbf{\Gamma}$ and the underlying subspaces. \textbf{Note:} Here we referred to ${\vec{l}}$ only to explain the reason behind our specific model design; however, the algorithm does not need the labeling information in any of its steps. \subsection{Clustering based on $\mathbf{\Gamma}$} Similar to other SSC algorithms, the resulting sparse coefficients are used to construct an adjacency matrix $\MB{A}=\mathbf{\Gamma}+\mathbf{\Gamma}^{\top}$ defining a sparse representation graph $\MC{G}$. This undirected graph consists of weighted connections between pairs $({\vec{x}}_i,{\vec{x}}_j)$. Accordingly, $\MB{A}$ is used as the affinity matrix in the spectral clustering algorithm \cite{yang2014data} to find the data clusters. \subsection{Link-Restore} After constructing the affinity matrix based on the resulting $\mathbf{\Gamma}$, it is desired to have positive weights in the representation graph $\MC{G}$ between every two points of a cluster. However, in practice, it is possible to see non-connected nodes (broken links) even inside dense clusters. This happens due to the redundancy issue related to sparse coding algorithms. In Eq. \ref{eq:nnlssc}, $\mathbf{X}$ is used as an over-complete dictionary for the reconstruction of each ${\vec{x}}_i$, therefore we can assume ${\vec{x}}_i\approx \mathbf{X}{\vec{\gamma}}_i$.
Nevertheless, as commonly observed in sparse coding models, the solution for ${\vec{\gamma}}_i$ is suboptimal because of the $\|{\vec{\gamma}}_i\|_p$ relaxations used. Thus, for a data point ${\vec{x}}_s$ close to ${\vec{x}}_i$, it is possible to have ${\vec{x}}_s\approx \mathbf{X} {\vec{\gamma}}_s$ but with a small ${{\vec{\gamma}}}_i^\top{{\vec{\gamma}}}_s$, meaning that ${\vec{\gamma}}_i$ and ${\vec{\gamma}}_s$ are not similar in their entries. Consequently, $a_{is}$ can be small as a result of the dissimilar ${\vec{\gamma}}_i$ and ${\vec{\gamma}}_s$, albeit ${\vec{x}}_i$ and ${\vec{x}}_s$ are very similar. \begin{algorithm}[!b] \small \SetAlgoLined \KwInput{Sparse code ${\vec{\gamma}}$, data matrix $\mathbf{X}$, threshold $\tau \in [0,1]$} \KwOutput{Corrected ${\vec{\gamma}}$ by restoring its connections to other data points} \Kwinit{$I=\{i\mid \gamma_i\neq 0\}$ \scriptsize (except index of ${\vec{x}}$)} \KwProc{$\{$over all elements $i \in I$ $\}$} \quad $\hat{{\vec{\gamma}}}={\vec{\gamma}}$, \quad $\bar{I}:=\{s\mid ({\vec{x}}_s^\top{\vec{x}}_s-2{\vec{x}}_i^\top{\vec{x}}_s)<(\tau -1){\vec{x}}_i^\top{\vec{x}}_i ~,~ \gamma_s=0 \}$\\ \label{line:is} \quad $\hat{\gamma}_i={\gamma}_i ({\vec{x}}_i^\top{\vec{x}}_i/{\sum_{s\in \{\bar I \cup i\}}{{\vec{x}}_i^\top{\vec{x}}_s}})$\\ \label{line:yi} \quad $\hat{\gamma}_s=\hat{\gamma}_i({\vec{x}}_i^\top{\vec{x}}_s/{{\vec{x}}_i^\top{\vec{x}}_i})~,~\forall s\in \bar{I}$\\ \label{line:ys} \quad ${\vec{\gamma}}=\hat{{\vec{\gamma}}}$,\quad $I=I\backslash\{i\}$\\ \caption{Link-Restore post-processing} \label{alg:graph} \end{algorithm} As a workaround for the mentioned issue, we propose the {Link-Restore} method (Algorithm~\ref{alg:graph}) as an effective step for these situations. It acts as a post-processing step on the obtained $\mathbf{\Gamma}$ before the application of spectral clustering.
Link-restore corrects the entries of each ${\vec{\gamma}}$ by restoring the broken connections between ${\vec{x}}$ and other points in the dataset. To do so, it first obtains the current set of data points connected to ${\vec{x}}$ as $I=\{i\mid \gamma_i\neq 0\}$, where $\gamma_i$ denotes the $i$-th entry of the vector ${\vec{\gamma}}$. Then, for each $i \in I$, the algorithm collects the indices $\bar{I}$ of the data points which are close to ${\vec{x}}_i$ but not used in the sparse code of ${\vec{x}}$ (line~\ref{line:is}). To that aim, for each ${\vec{x}}_s,~s \in \bar{I}$, the criterion ${\|{\vec{x}}_i-{\vec{x}}_s\|^2_2}/{\|{\vec{x}}_i\|^2_2}<\tau$ should be fulfilled, where $0\leq \tau \leq 1$. Then, in order to incorporate the members of $\bar{I}$ into ${\vec{\gamma}}$, the entry $\gamma_i$ is projected onto $\bar{I}\cup i$ based on the value of ${{\vec{x}}_i^\top{\vec{x}}_s}/{{\vec{x}}_i^\top{\vec{x}}_i} ~~\forall s\in \bar{I}$, while also maintaining the affine constraint on ${\vec{\gamma}}$ (lines~\ref{line:yi}-\ref{line:ys}). It is important to point out that the pre-assumption for the above is that $\gamma_{i} \ge 0~~\forall i$; therefore, link-restore is a suitable post-processing method for \textit{non-negative} subspace clustering algorithms. \subsection{Kernel Extension of NLSSC} Assume $\Phi:\MBB{R}^d\rightarrow\MBB{R}^m$ is an implicit nonlinear mapping to a Reproducing Kernel Hilbert Space (RKHS) such that $m \gg d$. Thus, there exists a kernel function ${\mathcal K}({\vec{x}}_i,{\vec{x}}_j)=\Phi({\vec{x}}_i)^{\top}\Phi({\vec{x}}_j)$. In doing so, we can benefit from the nonlinear characteristics of this implicit mapping to obtain a better representation of the data. Accordingly, we can reformulate our NLSSC method (Eq.
\ref{eq:nnlssc}) into its kernel extension, the non-negative local kernel SSC algorithm (NLKSSC): \begin{equation} \begin{array}{ll} \underset{\mathbf{\Gamma}}{\min}&\|\mathbf{\Gamma}\|_*+\frac{\lambda }{2} \| \Phi(\mathbf{X})-\Phi(\mathbf{X})\mathbf{\Gamma}\|_F^2 +\mu\mathcal{E}_{lsp}(\mathbf{\Gamma},\Phi(\mathbf{X}))\\ \mathrm{s.t} & ~ \mathbf{\Gamma}^{\top}\vec{\MB{1}}=\vec{\MB{1}}, ~\gamma_{ij} \ge 0 , ~~\gamma_{ii}=0~,~\forall ij \label{eq:nklssc} \end{array} \end{equation} Compared to Eq. \ref{eq:nnlssc}, the second term in the objective of Eq. \ref{eq:nklssc} corresponds to self-representation in the feature space, and the local-separability term ($\MC{E}_{lsp}$) is equivalent to the one used in Eq. \ref{eq:nnlssc}. However, $\MB{W}$ and $\MB{B}$ in $\MC{E}_{lsp}$ are computed based on the entries ${\mathcal K}({\vec{x}}_i,{\vec{x}}_j)$, which directly indicate the pairwise similarity of each data point ${\vec{x}}_i$ to its surrounding neighborhood. The benefit of having a kernel representation of $\mathbf{X}$ is that a proper kernel function facilitates the validity of the pre-assumption of Proposition \ref{prop1}, which makes the role of $\MC{E}_{lsp}$ more effective. As shown in Sec. \ref{sec:optim}, we can use the same optimization regime for both NLSSC and NLKSSC. In addition, lines~\ref{line:is}-\ref{line:ys} of the link-restore algorithm can be implemented using the above dot-product rule. \section{Optimization Scheme of Proposed Methods}\label{sec:optim} Putting Eq. \ref{eq:olsp} into Eq.
\ref{eq:nnlssc}, the following optimization framework is derived: \begin{equation} \begin{array}{ll} \underset{\mathbf{\Gamma}}{\min}&\|\mathbf{\Gamma}\|_*+\frac{\lambda }{2} \| \mathbf{X}-\mathbf{X}\mathbf{\Gamma}\|_F^2 +\frac{\mu}{2}\sum_{i,j} \big[ w_{ij}\|{\vec{\gamma}}_i-{\vec{\gamma}}_j\|_2^2 + b_{ij} ({{\vec{\gamma}}_i}^\top{\vec{\gamma}}_j)\big]\\ \mathrm{s.t} & ~ \mathbf{\Gamma}^{\top}\vec{\MB{1}}=\vec{\MB{1}}, ~\gamma_{ij} \ge 0 , ~~\gamma_{ii}=0~,~\forall ij \label{eq:optim} \end{array} \end{equation} To simplify the third loss term in (\ref{eq:optim}), we symmetrize $\MB{W}\rightarrow \frac{\MB{W}+\MB{W}^{\top}}{2}$ and do the same for $\MB{B}$. Then, according to \cite{von2007tutorial}, we compute the Laplacian matrix $\MB{L}=\MB{D}-\MB{W}$, where $\MB{D}$ is a diagonal matrix such that $d_{ii}=\sum_{j}w_{ij}$. Then, with simple algebraic operations we can rewrite $\mathcal{E}_{lsp}(\mathbf{\Gamma},\mathbf{X})=\Tr(\mathbf{\Gamma} \MB{L} \mathbf{\Gamma}^\top) +\frac{1}{2}\Tr(\mathbf{\Gamma} \MB{B} \mathbf{\Gamma}^\top)$, and reformulate Eq.~\ref{eq:optim} as: \begin{equation} \begin{array}{ll} \underset{\mathbf{\Gamma}}{\min}&\|\mathbf{\Gamma}\|_*+\frac{\lambda }{2} \| \mathbf{X}-\mathbf{X}\mathbf{\Gamma}\|_F^2 +\mu \Tr(\mathbf{\Gamma}\hat{\MB{L}}\mathbf{\Gamma}^{\top}) \\ \mathrm{s.t} & ~ \mathbf{\Gamma}^{\top}\vec{\MB{1}}=\vec{\MB{1}}, ~\gamma_{ij} \ge 0 , ~~\gamma_{ii}=0~,~\forall ij \label{opttr} \end{array} \end{equation} where $\Tr(.)$ is the trace operator and $\hat{\MB{L}}=(\MB{L}+\frac{1}{2} \MB{B})$. The objective of Eq. \ref{opttr} is a sum of convex functions (trace, inner product, and convex norms); therefore, the optimization problem is a constrained convex problem and can be solved using the alternating direction method of multipliers (ADMM) \cite{boyd2011distributed}, as presented in Algorithm \ref{alg:admm}.
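The rewriting of $\mathcal{E}_{lsp}$ into the trace form can also be checked numerically; a small sketch with random symmetric $\MB{W}$ and $\MB{B}$ (our own verification code, not part of the solver):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
Gamma = rng.standard_normal((N, N))        # columns are the codes gamma_i
W = rng.integers(0, 2, (N, N)).astype(float)
B = rng.integers(0, 2, (N, N)).astype(float)
W = (W + W.T) / 2.0                        # symmetrize, as done before building L
B = (B + B.T) / 2.0
np.fill_diagonal(W, 0.0)

# E_lsp = 1/2 sum_ij [ w_ij ||gamma_i - gamma_j||^2 + b_ij <gamma_i, gamma_j> ]
direct = 0.5 * sum(
    W[i, j] * np.sum((Gamma[:, i] - Gamma[:, j]) ** 2)
    + B[i, j] * Gamma[:, i] @ Gamma[:, j]
    for i in range(N) for j in range(N))

L = np.diag(W.sum(axis=1)) - W             # graph Laplacian L = D - W
trace_form = np.trace(Gamma @ L @ Gamma.T) + 0.5 * np.trace(Gamma @ B @ Gamma.T)
assert np.isclose(direct, trace_form)      # Tr(G L G^T) + 1/2 Tr(G B G^T)
```

The trace form is what makes the $\mathbf{\Gamma}$-update below a linear (Sylvester-type) system rather than a sum over all pairs.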
Optimizing Eq.~\ref{opttr} amounts to minimizing the following augmented Lagrangian, which is derived by adding its constraints as penalty terms to the objective function. \begin{equation} \begin{array}{ll} \MC{L}_{\rho}&(\mathbf{\Gamma},\mathbf{\Gamma}_+,\MB{U},\alpha_+,\alpha_U,\vec{\alpha_\MB{1}}) =\|\MB{U}\|_*+\lambda \MC{E}_{rep}(\mathbf{X},\mathbf{\Gamma})+\mu\MC{E}_{lsp}(\mathbf{X},\mathbf{\Gamma})\\ &+\frac{\rho}{2}\|\mathbf{\Gamma}-\mathbf{\Gamma}_+ \|_F^2+\Tr(\alpha_{+}^\top(\mathbf{\Gamma}-\mathbf{\Gamma}_+)) +\frac{\rho}{2}\|\mathbf{\Gamma}-\MB{U} \|_F^2\\ &+\Tr(\alpha_{U}^\top(\mathbf{\Gamma}-\MB{U})) +\frac{\rho}{2}\|\mathbf{\Gamma}^\top\vec{\MB{1}}-\vec{\MB{1}} \|_2^2 +\langle\vec{\alpha_\MB{1}},\mathbf{\Gamma}^\top\vec{\MB{1}}-\vec{\MB{1}}\rangle \end{array} \label{eq:lagran} \end{equation} in which $\MC{E}_{rep}:=\frac{1}{2} \| \mathbf{X}-\mathbf{X}\mathbf{\Gamma}\|_F^2$, and the matrices $(\mathbf{\Gamma}_+,\MB{U})$ are auxiliary variables related to the non-negativity constraint and the term $\|\mathbf{\Gamma}\|_*$, respectively. Eq.~\ref{eq:lagran} contains the Lagrangian multipliers $\alpha_+,\alpha_U \in \MBB{R}^{N\times N}$ and $\vec{\alpha_\MB{1}} \in \MBB{R}^{N}$, and the penalty parameter $\rho \in \MBB{R}^+$. Minimizing $\MC{L}_{\rho}$ in Eq.~\ref{eq:lagran} is carried out in an alternating optimization framework, such that at each step of the optimization all of the parameters $\{\mathbf{\Gamma},\mathbf{\Gamma}_+,\MB{U},\alpha_+,\alpha_U,\vec{\alpha_\MB{1}}\}$ are fixed except one. The updating steps are described as follows.
\newline \textit{\textbf{Updating $\mathbf{\Gamma}$}}: At iteration $t$ of ADMM, by fixing $\mathbf{\Gamma}_+^t,\MB{U}^t,\alpha_+^t,\alpha_U^t,\vec{\alpha}_\MB{1}^t$, the matrix $\mathbf{\Gamma}^{t+1}$ is updated as the solution to the following Sylvester linear system of equations \cite{kirrinnis2001fast} \begin{equation} [2\lambda\mathbf{X}^\top\mathbf{X}+2\rho \MB{I}+\vec{\MB{1}}\vec{\MB{1}}^\top]\mathbf{\Gamma}^{t+1}+\mathbf{\Gamma}^{t+1}[2\mu\hat{\MB{L}}] =\rho[\mathbf{\Gamma}_+^t+\MB{U}^t+\vec{\MB{1}}\vec{\MB{1}}^\top]-\alpha_{\MB{U}}^t-\alpha_{+}^t -\vec{\MB{1}}{\vec{\alpha^t}_{\MB{1}}}^\top \label{eq:up_x} \end{equation} \textit{\textbf{Updating $\MB{U}$}}: Updating $\MB{U}^{t+1}$, which is associated with $\|\mathbf{\Gamma}\|_*$, can be done by fixing the other parameters and using the singular value thresholding method \cite{cai2010singular} as $\MB{U}^{t+1}=\MC{T}_{1/\rho}(\mathbf{\Gamma})$, where $\MC{T}(.)$ is the thresholding operator from \cite{cai2010singular}(Eq. 2.2). \newline \begin{algorithm}[!b] \caption{Optimization Scheme of NLSSC} \label{alg:admm} \LinesNumberedHidden \KwInput{$\mathbf{X},\lambda,\mu,k,\Delta_\rho=0.1,\rho_{max}=10^6$} \KwOutput{Sparse coefficient matrix $\mathbf{\Gamma}$} \Kwinit{Compute $\{\MB{W},\MB{B},\hat{\MB{L}}\}$. Set all $\{\mathbf{\Gamma}_+,\mathbf{\Gamma},\MB{U},\alpha_+,\alpha_U,\vec{\alpha_\MB{1}}\}$ to zero} \Repeat{Convergence Criteria is fulfilled} { Updating $\mathbf{\Gamma}$ by solving Eq.~\ref{eq:up_x}\\ Updating $\MB{U}$ based on \cite{cai2010singular}(Eq.
2.2)\\ Updating $\mathbf{\Gamma}_+,\alpha_+,\alpha_U,\vec{\alpha_\MB{1}}$ based on Eq.~\ref{eq:up_rest} } \end{algorithm} \textit{\textbf{Updating $\mathbf{\Gamma}_+,\alpha_+,\alpha_U,\vec{\alpha_\MB{1}},\rho$}}: The matrix $\mathbf{\Gamma}_+$ and the multipliers are updated using the following projected gradient descent and gradient ascent steps, respectively: \begin{equation} \begin{array}{ll} \mathbf{\Gamma}_+^{t+1}=\max(\mathbf{\Gamma}+\frac{1}{\rho} \alpha_+,0),& \qquad\alpha_+^{t+1}=\alpha_+^{t}+\rho(\mathbf{\Gamma}-\mathbf{\Gamma}_+)\\ \alpha_U^{t+1}=\alpha_U^{t}+\rho(\mathbf{\Gamma}-\MB{U}),& \qquad\vec{\alpha}_{\MB{1}}^{t+1}=\vec{\alpha}_{\MB{1}}^{t}+\rho(\mathbf{\Gamma}^\top\vec{\MB{1}}-\vec{\MB{1}})\\ \rho^{t+1}=\min(\rho^t(1+\Delta_\rho),\rho_{max}) & \end{array} \label{eq:up_rest} \end{equation} in which $(\Delta_\rho,\rho_{max})$ are the update step and upper bound of $\rho$, respectively. \newline \textit{\textbf{Convergence Criteria}}: The algorithm is considered converged when, for a fixed $\epsilon>0$, $\|\mathbf{\Gamma}^t-\mathbf{\Gamma}^{t-1}\|_\infty \leq \epsilon$, $\|\mathbf{\Gamma}_+^t-\mathbf{\Gamma}^t\|_\infty \leq \epsilon$, $\|\MB{U}^t-\mathbf{\Gamma}^t\|_\infty \leq \epsilon$, and $\|{\mathbf{\Gamma}^t}^\top \vec{\MB{1}}-\vec{\MB{1}}\|_\infty \leq \epsilon$. \subsubsection*{Optimizing NLKSSC:} The kernel-based algorithm (NLKSSC) is also optimized using Algorithm \ref{alg:admm}, where the kernel trick $\Phi({\vec{x}}_i)^\top\Phi({\vec{x}}_j)={\mathcal K}({\vec{x}}_i,{\vec{x}}_j)$ is applied to replace $\mathbf{X}^\top\mathbf{X}$ by ${\mathcal K}(\mathbf{X},\mathbf{X})$ in Eq. \ref{eq:up_x}, and to kernelize the link-restore algorithm as well. \section{Related Works} Consider the data matrix $\mathbf{X}=[{\vec{x}}_1,...,{\vec{x}}_N] \in \mathbb{R}^{d\times N}$ which lies in the union of $n$ linear subspaces $\cup_{l=1}^n \MC{S}_l$ with dimensions $\{d_l\}_{l=1}^n$.
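A toy instance of this union-of-subspaces model is easy to generate; the dimensions and sample counts below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 20             # ambient dimension
dims = [2, 3, 2]   # subspace dimensions d_l
pts_per = 15       # samples drawn from each subspace

blocks = []
for dl in dims:
    # orthonormal basis of a random dl-dimensional subspace S_l of R^d
    basis, _ = np.linalg.qr(rng.standard_normal((d, dl)))
    coeffs = rng.standard_normal((dl, pts_per))
    blocks.append(basis @ coeffs)          # points lying exactly in S_l
X = np.concatenate(blocks, axis=1)         # d x N data matrix
```

Each block of columns has rank $d_l$, so the generated data lies exactly in the corresponding union of low-dimensional subspaces.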
Subspace clustering aims to partition the data such that each cluster $i$ contains samples lying in one individual subspace $\MC{S}_i$. Therefore, each data point ${\vec{x}}_i$ can be represented by other data points in $\mathbf{X}$ as a linear combination ${\vec{x}}_i \approx \mathbf{X}{\vec{\gamma}}_i$. Focusing on the sparseness of the coding vectors ${\vec{\gamma}}_i$, sparse subspace clustering \cite{elhamifar2013sparse} can be formulated as \begin{equation} \begin{array}{ll} \underset{\mathbf{\Gamma}} {\min} \|\mathbf{\Gamma}\|_0\quad s.t. ~~\mathbf{X}=\mathbf{X}\mathbf{\Gamma}, \gamma_{ii}=0~,~\forall i \label{eq:ssc} \end{array} \end{equation} where $\mathbf{\Gamma}$ is the matrix of sparse codes, $\gamma_{ii}$ denotes the diagonal elements of $\mathbf{\Gamma}$, and $\|.\|_0$ denotes the cardinality norm. It is assumed that each ${\vec{\gamma}}_i$ resulting from Eq.~\ref{eq:ssc} represents ${\vec{x}}_i$ using only data points from the same subspace in which ${\vec{x}}_i$ lies. In that case, computing an affinity matrix $\MB{A}=\left|\mathbf{\Gamma}\right|^{\top}+\left|\mathbf{\Gamma}\right|$, which represents the pairwise similarities of data points, and using it in graph-based methods such as spectral clustering should identify the clusters. However, the problem in Eq.~\ref{eq:ssc} is NP-hard to solve \cite{elhamifar2013sparse} in its original form. As a solution, $\|.\|_0$ can be relaxed into other norms. For instance, \cite{elhamifar2013sparse,patel2014kernel,bian2016kernelized,Gao2012} use the $l_1$-norm to achieve a sparse $\mathbf{\Gamma}$, while \cite{you2016scalable} seeks an approximate solution of Eq.~\ref{eq:ssc} subject to $\|{\vec{\gamma}}_i\|_0 \leq T_0$. Another group of SSC methods \cite{vidal2014low,xiao2016robust,liu2013robust,zhuang2012non} focuses on shrinking the nuclear norm $\|\mathbf{\Gamma}\|_*$ and making $\mathbf{\Gamma}$ low-rank to better represent the global structure of data.
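To make the affinity step concrete, the sketch below forms $\MB{A}=\left|\mathbf{\Gamma}\right|^{\top}+\left|\mathbf{\Gamma}\right|$ from an idealized block-diagonal coefficient matrix (two clusters with synthetic values, standing in for a solution of the sparse coding step) and shows that the block structure carries over to the affinity, so spectral clustering on $\MB{A}$ trivially separates the groups:

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2 = 6, 8
# idealized sparse codes: each point represented only by points of its own subspace
G = np.zeros((n1 + n2, n1 + n2))
G[:n1, :n1] = rng.random((n1, n1))
G[n1:, n1:] = rng.random((n2, n2))
np.fill_diagonal(G, 0.0)            # the gamma_ii = 0 constraint

A = np.abs(G).T + np.abs(G)         # symmetric affinity matrix
off_block = A[:n1, n1:]             # zero if the codes respect the subspaces
```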
Among SSC algorithms, \cite{elhamifar2013sparse,patel2014kernel} enforced $\mathbf{\Gamma}$ to provide affine representations by using the constraint $\mathbf{\Gamma}^\top \vec{\MB{1}}=\vec{\MB{1}}$, based on the idea that the data points lie in a union of affine subspaces. Despite continuous improvements in the clustering results of the aforementioned SSC methods, there is no direct link between the quality of the sparse coding part and the subsequent clustering goal. Consequently, they suffer from performance variations across different datasets, and their results are highly sensitive to the choice of parameters. On the other hand, another group of algorithms, called Laplacian sparse coding, encourages the sparse coefficient vectors ${\vec{\gamma}}_i$ related to each cluster to be as similar as possible~\cite{Gao2012,yang2014data}. In their SSC formulation (Eq. \ref{optlap}), they employ a similarity matrix $\MB{W}$ in which each $w_{ij}$ measures the pairwise similarity of the pair $({\vec{x}}_i,{\vec{x}}_j)$. \begin{equation} \begin{array}{ll} \underset{\mathbf{\Gamma}}{\min}&\|\mathbf{X}-\mathbf{X}\mathbf{\Gamma}\|_F^2+\lambda \|\mathbf{\Gamma}\|_1+\frac{1}{2}\sum_{i,j} w_{ij}\|{\vec{\gamma}}_i-{\vec{\gamma}}_j\|_2^2 \qquad s.t. ~\gamma_{ii}=0 ~,~ \forall i \label{optlap} \end{array} \end{equation} Nevertheless, such optimization frameworks suffer from two issues: \begin{enumerate} \item Columns of $\mathbf{\Gamma}$ are forced to become similar to each other, with the similarity matrix acting as weighting coefficients. Hence, at best the sparse codes ${\vec{\gamma}}_i$ obtain a distribution similar to the neighborhoods in $\MB{W}$. Consequently, their performance is comparable to kernel-based clustering with direct use of the kernel information. \item Although Eq.
\ref{optlap} tries to decrease the intra-cluster distances, the inter-cluster structure of the data is ignored in such frameworks; however, typically both of these terms have to be considered when focusing on the separability of clusters. \end{enumerate} In contrast to the previous works, our algorithm benefits from a clustering-based objective term in its framework. Therefore, its resulting sparse codes are better suited to the clustering purpose. In addition, our post-processing technique can be combined with non-negative SSC methods such as \cite{li2017graph,xiao2016robust,zhuang2012non} to improve their latent representations.
\section{Introduction} Providing \emph{predictable} performance is an important ongoing challenge for distributed computing systems. In distributed settings, a \emph{job} is split into multiple smaller \emph{tasks}, which get spread over separate resources for parallel execution. Task execution times in modern systems are known to exhibit significant runtime variability due to many factors such as power management, software or hardware failures, maintenance, and most importantly, resource sharing \cite{Dryad:IsardBY07, MapReduce:DeanG08, Mantri:AnanthanarayananKG10, ResilientDistributedDatasets:ZahariaCD12, TailAtScale:DeanB13, StragglerRootCauseAnalysisInDatacenters:OuyangGY16, RootCauseAnalysisOfStragglersInBigDataSystem:ZhouLY18}. Runtime variability may cause some tasks to \emph{straggle} and take much longer to complete than the other tasks in the job. Since a distributed job completes only when all its tasks complete, straggler tasks significantly delay the job completion. As the number of tasks in a job increases, so does the chance that at least one of them will be a straggler; thus, the impact of stragglers on the job completion time is greater at scale \cite{AchievingRapidResponseTimesInLargeOnlineServices:Dean12, TailAtScale:DeanB13}. The straggler problem has received significant attention from the systems research community.
Existing solution techniques fall into two categories: i) squashing runtime variability via preventive actions such as blacklisting faulty machines that frequently exhibit high variability \cite{MapReduce:DeanG08, AchievingRapidResponseTimesInLargeOnlineServices:Dean12} or learning the characteristics of task-to-node assignments that lead to high variability and avoiding such problematic task-node pairings \cite{ProactiveStragglerAvoidance:YadwadkarC12}; ii) speculative execution by launching the tasks together with replicas and waiting only for the fastest copy to complete \cite{ImprovingMapReducePerformance:ZahariaKJ08, Mantri:AnanthanarayananKG10, Dremel:MelnikGL10, AttackOfClones:AnanthanarayananGS13, LowLatencyviaRed:VulimiriGM13}. Because runtime variability has intrinsically complex causes, preventive measures have not fully solved the problem, and runtime variability has continued plaguing compute workloads \cite{AchievingRapidResponseTimesInLargeOnlineServices:Dean12, AttackOfClones:AnanthanarayananGS13}. Speculative task execution, on the other hand, has proved to be an effective remedy, and indeed the most widely deployed solution for stragglers \cite{TailAtScale:DeanB13, DecentralizedSpeculationAwareClusterScheduling:RenAW15}. For instance, with task replication, the median runtime slowdown experienced by the tasks within a job is brought down from 8 (and 7) to 1.08 (and 1.1) in Facebook's production Hadoop cluster (and Bing's Dryad cluster) \cite{DecentralizedSpeculationAwareClusterScheduling:RenAW15}. Executing tasks with a greater number of copies will surely reduce the chance of having to wait for a straggler. However, task replicas occupy system resources that could otherwise be used to execute other tasks. Furthermore, if task replicas are employed excessively, they can overburden the system and further aggravate the runtime variability, given that the primary cause of runtime variability is resource sharing.
Therefore, replicas are employed with care in practice, e.g., replica tasks are used only for jobs with a few tasks \cite{AttackOfClones:AnanthanarayananGS13}, or only tasks that straggle beyond some threshold are replicated \cite{ImprovingMapReducePerformance:ZahariaKJ08}. More recently, replicas have been proposed to be dispatched for single-task jobs only if any server is found idle, which is shown, with a queueing theoretic analysis, not to drive the system to instability by dispatching an excessive number of replicas \cite{DecouplingSlowdownJobsize:GardnerHS17}. This paper focuses on two important performance metrics for distributed job execution: 1) \emph{Latency,} measuring the time to complete the job, and 2) \emph{Cost,} measuring the total resource time spent to execute the job. Job execution is desired to be fast and low-cost, but these are often conflicting objectives. The cost of executing a job depends on the number of tasks\footnote{Resource usage of tasks varies across different jobs or might vary even within the same job in practice \cite{Kubernetes:BurnsGO16}. We abstract this complexity by assuming that each task uses one unit of resource per unit time.} and the time each task takes to finish. Executing a job with task replicas is expected to reduce the time spent by the tasks in the system, while also increasing the total number of tasks involved in completing the job, which is likely to increase the cost. It is important to understand the effect of added redundancy not only on the latency but also on the cost, because the load exerted on the system by a job execution is determined by its cost (as elaborated in Sec.~\ref{subsec:subsec_on_the_cost}).
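For a single execution, the two metrics reduce to a max and a sum over task lifetimes. A minimal numerical sketch (hypothetical lifetimes, one replica per task, all copies launched together, and a simplified accounting in which a task's copies stop as soon as that task completes):

```python
# Each sublist holds the lifetimes of all copies of one task; the task
# completes when its fastest copy finishes, and its slower copy is
# cancelled at that moment (illustrative numbers, k = 3 tasks, c = 1).
task_copies = [[3.0, 7.5], [2.0, 2.5], [9.0, 4.0]]

completion = [min(c) for c in task_copies]  # per-task completion times
latency = max(completion)                   # job finishes when the last task does

# cost: every copy of a task runs until that task's fastest copy is done
cost = sum(min(c) * len(c) for c in task_copies)
```

Here the job's latency is 4.0 time units while its cost is 18.0 resource-time units, twice what the three fastest copies alone would have consumed.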
Erasure coding implements a more general form of redundancy than simple replication, and has been considered for straggler mitigation both in data download \cite{Codes&Qs:JoshiLS12, Codes&Qs:HuangPZ12, CodesQs:KadheSS15_Allerton} and, more recently, in the distributed computing context \cite{ShortDot:DuttaCG16, MachineLearningWithCodes:LeeLP17, GradientCoding:TandonLD16, CodedGradientDescent:LiKA17, StragglerMitigationWithDataEncoding:KarakusSD17, CodedMatrixMultiplication:YuMA18, CodedGradientDescent:HalbawiAS18}. With coding, a job of $k$ tasks is expanded into a job of $n$ tasks with $n-k$ \textit{parity} tasks. Parity tasks are constructed by encoding the initial $k$ tasks, which is done by embedding redundancy either in the computational procedure collaboratively implemented by the tasks (e.g., \cite{ShortDot:DuttaCG16}) or in the data the tasks consume during execution (e.g., \cite{StragglerMitigationWithDataEncoding:KarakusSD17}). If the coded tasks are created with an \textit{MDS} code, the most commonly used encoding model, any $k$ of the $n$ tasks are sufficient to recover the desired outcome of the job; thus, the job completes as soon as its fastest $k$ tasks finish. Modeling task execution times and the variability they exhibit is crucial for the theoretical analysis of straggler mitigation techniques to match experimental measurements. In the analysis of straggler mitigation techniques, variability in execution times is commonly expressed with a fixed straggling factor. The straggling factor for each task is typically assumed to be independently drawn from a fixed random variable, which we also adopt in this paper.
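Under the MDS model described above, the job execution time is the $k$-th order statistic of the $n$ i.i.d. task times, whereas without redundancy it is the maximum of $k$. A quick Monte Carlo comparison (unit-rate exponential task times, an illustrative choice; the theory values in the comments are the standard harmonic-number expressions for exponential order statistics):

```python
import numpy as np

rng = np.random.default_rng(4)
k, n, trials = 10, 12, 200_000

# i.i.d. unit-rate exponential task execution times
samples = rng.exponential(1.0, size=(trials, n))

# no redundancy: the job waits for all k initial tasks (max of k)
T_plain = samples[:, :k].max(axis=1)
# (n, k) MDS code: the fastest k of n tasks suffice (k-th order statistic)
T_coded = np.sort(samples, axis=1)[:, k - 1]

mean_plain = T_plain.mean()   # theory: H_10 ~ 2.93
mean_coded = T_coded.mean()   # theory: H_12 - H_2 ~ 1.60
```

Even two parity tasks roughly halve the expected latency here, since they remove the need to wait for the two slowest stragglers.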
However, runtime variability is known to be caused to a large part by resource sharing in practice \cite{TailAtScale:DeanB13, StragglerRootCauseAnalysisInDatacenters:OuyangGY16, RootCauseAnalysisOfStragglersInBigDataSystem:ZhouLY18}, and the redundant tasks added into the system exert additional load on the system resources, which is expected to aggravate the runtime variability. Therefore, we believe that the model of variability should account for the redundancy added into the system. In Sec.~\ref{sec:sec_when_red_changes_tail}, we consider a model where the tail of task execution times changes with the level of redundancy added into the system, and we study the cost and latency of redundancy under this model. There are various decisions to make while employing redundancy for straggler mitigation. The first natural step is to decide whether to add replicated or coded tasks, and how many of them. Secondly, waiting for some time before launching the replica tasks has been considered as a way to reduce the cost of redundancy \cite{RepedComputing:WangJW15}. A natural question is whether waiting before launching the redundant (replicated or coded) tasks helps in general to reduce the cost. In this paper, we analyze the cost vs.\ latency tradeoff to find out the best practice in making these decisions. As an alternative to adding redundancy, cancelling and relaunching the tasks that appear to be straggling after waiting some time has been considered \cite{ImprovingMapReduceInHeteroEnvironments:Zaharia08}. This is justified by the heavy tailed nature of task execution times as observed in practice \cite{TailAtScale:DeanB13, GoogleTraceAnalysis:ReissTG12, GRASS:AnanthanarayananHR14}. We quantify the effect of straggler relaunch on the cost vs.\ latency tradeoff in terms of the tail heaviness of the service time variability. We also consider employing straggler relaunch together with redundancy, and analyze its effects on cost and latency.
Parts of the results presented in this paper were published in \cite{MAMA:AktasPS17, IFIP:AktasPS18}. This paper is structured as follows. In Sec.~\ref{sec:sec_system_model}, we explain the system model that is used for the presented analysis, and formally define the cost and latency of distributed job execution. In Sec.~\ref{sec:sec_coding_vs_rep}, we examine the effect of the type and level of redundancy, and the launch time of redundant tasks, on the cost vs.\ latency tradeoff. In Sec.~\ref{sec:sec_when_red_changes_tail}, we evaluate the performance of job execution with redundancy when the redundant tasks added into the system change the tail of the service time variability. In Sec.~\ref{sec:sec_straggler_relaunch}, we study straggler relaunch and investigate its impact on the cost and latency. In Sec.~\ref{sec:sec_red_togetherwith_relaunch}, we consider employing straggler relaunch together with redundancy. In Sec.~\ref{sec:sec_conclusions}, we summarize our key findings and discuss the shortcomings of our analysis and possible future directions. \vspace{0.5em} \noindent \textbf{Summary of observations.} Coding allows increasing the level of added redundancy in finer steps than replication, which translates into a larger achievable cost vs.\ latency region. Waiting for some time before launching the redundant tasks is not effective in trading off latency for reduced cost when the employed redundancy is coding; that is, one can obtain lower latency for the same cost by launching fewer coded tasks rather than delaying their launch time. When the employed redundancy is replication, some cost reduction is possible by launching the replica tasks after waiting some time. Coding is more efficient than replication in the cost vs.\ latency tradeoff; adding coded tasks into a job execution yields a higher reduction in latency per incurred cost (hence per incurred additional load on the system) compared to adding replicated tasks.
Execution with redundancy reduces the cost and latency together when the service time variability exhibits enough tail heaviness. The required tail heaviness is smaller when coding is employed compared to replication. The advantage of coding over replication becomes greater when the job is executed at higher scale, i.e., when the job consists of a greater number of parallel tasks. Relaunching tasks that appear to be straggling after some time reduces the cost and latency when relaunching is performed at the right time and the service time variability exhibits enough tail heaviness. Redundancy and straggler relaunch serve the same purpose of mitigating stragglers; hence, employing both together requires greater tail heaviness in the service time variability in order to reduce the cost and latency. \section{System Model} \label{sec:sec_system_model} We adopt a system model that extends the one in \cite{StragglerRep:WangJW15}; the execution time of each task (the duration from its launch to its completion) is modeled with a single random variable. All $k$ tasks of a job are launched simultaneously, and their execution times are assumed to be independently and identically distributed (i.i.d.). We use two canonical distributions to model task execution times: 1) shifted exponential $\mathrm{SExp}(s, \mu)$ with a positive minimum value $s$ and a tail decaying exponentially at rate $\mu$, and 2) $\mathrm{Pareto}(s, \alpha)$ with a positive minimum value $s$ and a power law tail with index $\alpha$. The minimum value of the distribution models the minimum service time of the tasks (i.e., the task size), while the tail of the distribution models the slowdown due to runtime variability; smaller $\mu$ or $\alpha$ implies a greater chance of stragglers. Task execution times in modern compute systems are known to exhibit heavy tails \cite{TailAtScale:DeanB13, GoogleTraceAnalysis:ReissTG12, GRASS:AnanthanarayananHR14}.
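Both canonical distributions are straightforward to sample via their inverse CDFs, and their means follow directly from the parameters ($s + 1/\mu$ for $\mathrm{SExp}(s,\mu)$, and $s\alpha/(\alpha-1)$ for $\mathrm{Pareto}(s,\alpha)$ with $\alpha > 1$). A small sketch, with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_sexp(s, mu, size):
    # SExp(s, mu): minimum value s plus an exponential tail with rate mu
    return s + rng.exponential(1.0 / mu, size)

def sample_pareto(s, alpha, size):
    # Pareto(s, alpha) via inverse CDF: F(t) = 1 - (s/t)^alpha for t >= s
    return s * (1.0 - rng.random(size)) ** (-1.0 / alpha)

n = 400_000
t_sexp = sample_sexp(s=1.0, mu=2.0, size=n)         # mean: s + 1/mu = 1.5
t_pareto = sample_pareto(s=1.0, alpha=3.0, size=n)  # mean: s*alpha/(alpha-1) = 1.5
```

Both samplers respect the positive minimum $s$, which is what keeps job execution times bounded away from zero regardless of the redundancy level.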
In Fig.~\ref{fig:figs/plot_google_empiricaltail}, we plot the tail distribution of the task execution times\footnote{Task execution times are calculated as the difference between the timestamps for SCHEDULE and FINISH events for each task as given in \cite{GoogleTraceAnalysis:ReissTG12}.} that we extracted from a Google trace data set for jobs with $15$, $400$, or $1050$ tasks \cite{GoogleTraceAnalysis:ReissTG12}. Note that both axes in the plots are in log scale; hence an exponential tail would appear as a curve bending downward increasingly steeply, while a true power law tail (e.g., the tail of $\mathrm{Pareto}$) would appear as a line decaying at a constant rate \cite{FundamentalsOfHeavyTails:NairWZ13}. The tail distributions shown in the figure exhibit exponential decay at small values and a linearly decaying trend at larger values, which indicates a heavy tailed runtime variability \cite{PerfEvalWithHeavyTails:Crovella01}. Note that the steep decay of the tail at the far right edge is due to the bounded support of the empirical distributions. Assuming execution times to be identically distributed for tasks within the same job is appropriate, since jobs in practice are known to be a collection of one or more usually identical tasks \cite{GoogleClusterTrace:ReissWH11, GoogleTraceAnalysis:ReissTG12}. However, when task execution times are modeled using a distribution with a minimum value of zero (e.g., the exponential distribution), assuming independent execution times across the initial and redundant tasks proves to be problematic, because added redundancy in this case can make the job execution time arbitrarily small. This is in contrast to reality, where tasks have an inherent size and, due to this, job execution times are lower bounded by a positive value regardless of the level of added redundancy. Modeling task execution times with a minimum value of zero has previously led to theoretical results that are at odds with experimental measurements.
Implications of this are discussed in detail in \cite{DecouplingSlowdownJobsize:GardnerHS17}, and the authors propose a better model for service times in which the time due to task size is decoupled from the time due to runtime variability. Specifically, service times are modeled as $s \times Sl$, where $s$ represents the inherent task size and $Sl$ is the slowdown factor, which is assumed to be i.i.d. across tasks and servers with a minimum value of $1$. The distributions that we adopt for modeling task execution times can be expressed using the decoupling method introduced in \cite{DecouplingSlowdownJobsize:GardnerHS17}; $\mathrm{SExp}(s, \mu)$ can be written as $s \times \mathrm{SExp}(1, \mu)$ or $\mathrm{Pareto}(s, \alpha)$ as $s \times \mathrm{Pareto}(1, \alpha)$. In our model, redundant tasks are added into the execution only if the job does not complete within some time $\Delta$. Redundancy is introduced either in the form of task replicas or coded parity tasks. When replication is employed, $c$ replicas are launched for every remaining task at time $\Delta$. When coding is employed, $n-k$ MDS coded parity tasks are launched at time $\Delta$ (see Fig.~\ref{fig:fig_delayed_red}). When straggler relaunch is implemented, tasks (initial or redundant) remaining at time $\Delta$ are canceled and fresh replacements are immediately launched in their place. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth, keepaspectratio=true]{figs/fig_delayed_red.pdf} \caption{A job of four tasks is executed by launching replicated (Left) or coded (Right) tasks, some time $\Delta$ after launching the job's initial tasks. Check marks represent task completions while crosses represent cancellations.
With replication, exact clones of the remaining tasks are launched, while with coding, parity tasks can be used as a ``clone'' for any task; therefore, stragglers do not have to be tracked down.} \label{fig:fig_delayed_red} \end{figure} \begin{figure*}[t] \centering \begin{subfigure}[]{.25\textwidth} \centering \includegraphics[width=1\textwidth, keepaspectratio=true]{figs/pplot_task_lifetime_hist_k_15.png} \end{subfigure} \hspace{1em} \begin{subfigure}[]{.25\textwidth} \centering \includegraphics[width=1\textwidth, keepaspectratio=true]{figs/pplot_task_lifetime_hist_k_400.png} \end{subfigure} \hspace{1em} \begin{subfigure}[]{.25\textwidth} \centering \includegraphics[width=1\textwidth, keepaspectratio=true]{figs/pplot_task_lifetime_hist_k_1050.png} \end{subfigure} \caption{Empirical tail distribution of task execution times for Google cluster jobs with number of tasks $k=15, 400, 1050$.} \label{fig:figs/plot_google_empiricaltail} \end{figure*} We define the cost of executing a job as the sum of the lifetimes of all the tasks (including the redundant ones) involved in its execution. The lifetime of a task is the duration from its launch to its completion or cancellation. Depending on the application domain, there are two possible cost definitions: 1) \textit{Cost with task cancellation}; outstanding redundant tasks are canceled as soon as the job completes (as illustrated in Fig.~\ref{fig:fig_delayed_red}), which is a viable option for distributed job execution, 2) \textit{Cost without task cancellation}; outstanding redundant tasks are left to run until they complete, which for instance is the only option for routing messages with redundancy in an opportunistic network \cite{ErasureCodingBasedRoutingForOpportunisticNetworks:WangJM05}. We assume that task cancellation takes place instantly and does not incur any delay. In the following subsection, we elaborate on the meaning and consequences of the job execution cost.
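The two cost definitions can be contrasted on a simulated run of delayed replication (exponential task times, $c=1$, and a simplified accounting in which a straggling task's copies stop as soon as that task completes; all parameter values are illustrative). By memorylessness of the exponential distribution, a task still running at time $\Delta$ has a fresh $\mathrm{Exp}(\mu)$ remaining time:

```python
import numpy as np

rng = np.random.default_rng(6)
k, c, delta, mu, trials = 10, 1, 0.5, 1.0, 100_000

X = rng.exponential(1.0 / mu, size=(trials, k))   # initial task times
late = X > delta                                  # tasks still running at delta
# fresh Exp(mu) draws: remaining time of a straggler, and its replica's time
rem = rng.exponential(1.0 / mu, size=(trials, k))
rep = rng.exponential(1.0 / mu, size=(trials, k))

# a straggling task completes at delta + min(remaining, replica)
done = np.where(late, delta + np.minimum(rem, rep), X)

# with cancellation: both copies of a straggling task stop at its completion
cost_cancel = np.where(late, delta + 2 * (done - delta), X).sum(axis=1)
# without cancellation: the slower copy runs until its own completion
cost_nocancel = np.where(late, delta + (done - delta) + np.maximum(rem, rep), X).sum(axis=1)
```

Per sample, the cost with cancellation never exceeds the cost without it, and the gap is exactly the extra time the slower copies keep running.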
\subsection{On the cost of job execution} \label{subsec:subsec_on_the_cost} Cost, as defined here, reflects the total resource time spent while executing a job. Lower cost translates into executing the same job by occupying less area in the $\textnormal{system capacity}\times\textnormal{time}$ space. Thus, reducing the cost of job executions allows fitting more jobs per area and leads to higher system throughput \cite{BoostingThroughput:Joshi17}. As we show in the following sections, adding redundant tasks into a job execution can increase or \emph{decrease} the cost, depending on the variability pronounced in task execution times and the type and level of introduced redundancy. Since redundancy can lead to higher cost, it should be employed with care. Executing jobs at a higher cost implies occupying a greater portion of the system's overall capacity per job, which increases the load on the system. This may translate into greater congestion in the system resources, hence aggravate job slowdowns or even drive the system to instability. For instance, \cite{DecouplingSlowdownJobsize:GardnerHS17} shows, with a queueing theoretic analysis, how excessive replication of single-task job arrivals can drive the system to instability. As another example, \cite{AttackOfClones:AnanthanarayananGS13} introduces a system named \emph{Dolly}, which launches replicas only for small jobs that consist of $\leq 10$ tasks. This is shown to achieve significant reduction in latency without overburdening the system, according to the traces collected on two clusters at Facebook and Microsoft Bing. The underlying reason for the success of their replication scheme is the workload characteristics; small jobs were observed to have short durations, so replicating them did not introduce substantial cost overhead in the system, while returning a substantial reduction in the latency of short jobs.
The workload and system characteristics considered in \cite{AttackOfClones:AnanthanarayananGS13} are not universal; execution with redundancy is relevant in general not only for small jobs but also for jobs that run at larger scale and for longer durations. Job slowdowns due to stragglers are an emerging problem for future high performance computing (HPC) systems. Exascale computing is expected to be implemented by systems that are much larger in size and will enable execution at unprecedented levels of parallelism. These future systems are anticipated to be prone to much higher node level runtime variability \cite{ExascaleDOE:BrownMB10}. Moreover, to implement high resource utilization, resource scheduling in these systems is suggested to be realized with time-sharing rather than today's de facto batch scheduling \cite{TimeSharingInHPC:HofmeyrIC16}. As resource sharing is pointed out as the primary cause of stragglers in data centers \cite{TailAtScale:DeanB13}, the performance of future HPC systems is likely to greatly suffer from stragglers. Simulations over the traces collected on the Edison Supercomputer demonstrate that jobs with a larger number of tasks and shorter duration experience higher slowdowns due to runtime variability under batch scheduling, while under time-sharing based resource scheduling, slowdowns are observed to be relatively uniform regardless of the number of tasks or the job duration \cite{TimeSharingInHPC:HofmeyrIC16}. \subsection{Notation and Tools for Analysis} Expected cost ($C$) and latency ($T$) are the two metrics that we use to quantify the pain and gain of distributed execution of jobs with redundancy and/or straggler relaunch. Thus, throughout the paper, cost and latency by themselves refer to their expected values, and any other associated quantity is stated explicitly.
Note that the cost and latency depend on the number of tasks $k$ that constitute the job, the task size $s$, the runtime variability (determined by $\mu$ or $\alpha$), the level of redundancy added into the job ($c$ task replicas or $n-k$ coded tasks), as well as the time $\Delta$ at which redundant tasks are launched and/or straggler relaunch is performed. Derivations of the cost and latency expressions make frequent use of the law of total probability, since we consider adding redundancy and/or performing straggler relaunch after waiting some time $\Delta$. Results from order statistics are essential for the derivations, since only a subset of the launched tasks is necessary for job completion when redundancy is employed. Derivations presented in the paper require tedious algebra at times, and the expressions involve some special functions that commonly appear while working with order statistics. For completeness, we keep every non-trivial step in the proofs. This makes some proofs lengthy, so we place them in the Appendix, which is made available as a supplement to this paper. Here we give an overview of the notation and special functions that appear throughout the paper. For their detailed definitions and interesting properties, we refer the reader to \cite{NIST:DLMF}. $X_{n:i}$ denotes the $i$th order statistic of $n$ i.i.d. samples drawn from a random variable $X$. $H_n$, the $n$th harmonic number, is defined as $\sum_{i=1}^n 1/i$ for $n \in \mathbb{Z}^+$ or as $\int_0^1 (1-x^n)/(1-x) \dx{x}$ for $n \in \mathbb{R}$. $H_{n^2}$, the $n$th generalized harmonic number of order two, is defined as $\sum_{i=1}^n 1/i^2$. The incomplete Beta function $B(q;m,n)$ is defined for $q \in [0,1]$, $m, n \in \mathbb{R}^+$ as $\int_0^q u^{m-1}(1-u)^{n-1} \dx{u}$, the Beta function $B(m,n)$ as $B(1;m,n)$, and its regularized form $I(q;m,n)$ as $B(q;m,n)/B(m,n)$.
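All of these special functions are available in standard numerical libraries; in particular, the real-argument harmonic number follows from the digamma identity $H_x = \psi(x+1) + \gamma$. A brief sketch using SciPy (illustrative only):

```python
import numpy as np
from scipy.special import digamma, betainc, beta as beta_fn

def harmonic(x):
    # H_x for real x >= 0 via the digamma identity H_x = psi(x + 1) + gamma
    return digamma(x + 1.0) + np.euler_gamma

def harmonic2(n):
    # generalized harmonic number of order two: sum_{i<=n} 1/i^2
    return sum(1.0 / i**2 for i in range(1, n + 1))

def inc_beta(q, m, n):
    # unregularized incomplete Beta function B(q; m, n)
    return betainc(m, n, q) * beta_fn(m, n)

h5 = harmonic(5)                  # 1 + 1/2 + 1/3 + 1/4 + 1/5 = 137/60
I_half = betainc(2.0, 2.0, 0.5)   # regularized I(1/2; 2, 2) = 1/2 by symmetry
```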
Gamma function $\Gamma(x)$ is defined as $\int_0^{\infty} u^{x-1}e^{-u} \dx{u}$ for $x \in \mathbb{R}$ or as $(x-1)!$ for $x \in \mathbb{Z}^+$. \section{Coding vs.\ Replication} \label{sec:sec_coding_vs_rep} In this section, we study the cost vs.\ latency tradeoff in executing a distributed job by adding task replicas or coded tasks after waiting some time $\Delta$. Note that we do not consider straggler relaunch until Sec.~\ref{sec:sec_straggler_relaunch}. Theorems given below firstly present expressions for the cost and latency assuming exponential task execution times, which we then use to derive the cost and latency for shifted-exponential task execution times. \vspace{2ex} \noindent \textbf{Consider executing a job of $k$ tasks by adding $c$ replicas for each remaining task after waiting some time $\Delta$.} \begin{theorem} Suppose task execution times are i.i.d. with $\mathrm{Exp}(\mu)$. Distribution of job execution time is given as \begin{equation} \begin{split} \Pr\{T \leq t\} &= \Bigl(1 - \mathbbm{1}(t \leq \Delta)(e^{-\mu t} - e^{-\mu\Delta}) \\ &\qquad - \mathbbm{1}(t > \Delta)e^{-\mu\left((c+1)(t-\Delta) + \Delta\right)} \Bigr)^k \end{split} \label{eqn:eq_k_cd_Exp_tail} \end{equation} Latency is well approximated as \begin{equation} \mathbb{E}[T] \approx \frac{1}{\mu}\left(H_k - \frac{c}{c+1}H_{k-kq}\right). \label{eqn:eq_k_cd_Exp_ET} \end{equation} Cost with ($C^c$) or without ($C$) task cancellation is given as \begin{equation} \mathbb{E}[C^c] = \frac{k}{\mu}, \quad\quad \mathbb{E}[C] = \left(c(1-q) + 1\right)\frac{k}{\mu}. \label{eqn:eq_k_cd_Exp_EC} \end{equation} where $q = 1 - e^{-\mu\Delta}$. \label{thm_k_cd_Exp_T_C} \end{theorem} \begin{theorem} Suppose task execution times are i.i.d. with $\mathrm{SExp}(s, \mu)$. Distribution and the expected value of job execution time are given as \begin{equation} \begin{split} \Pr\{T > t\} &= \Pr\{T_e > t - s\}, \\ \mathbb{E}[T] &= s + \mathbb{E}[T_e]. 
\end{split} \label{eqn:eq_k_cd_SExp_tail__ET} \end{equation} where $T_e$ is the job execution time when task execution times are distributed as $\mathrm{Exp}(\mu)$, for which the distribution and expected value are given in Thm.~\ref{thm_k_cd_Exp_T_C}. Cost with task cancellation is given as \begin{equation} \mathbb{E}[C^c] = \begin{cases} \begin{aligned} & k(c+1)\Biggl(s + \frac{1}{\mu} \\ &~ \times \left(1 - \frac{c}{c+1}(e^{-\mu\Delta} + \mu\Delta) \right)\Biggr) \end{aligned} & \Delta \leq s, \\ k\left(s + \frac{1}{\mu}\left(1 + c\left(1-q-e^{-\mu\Delta}\right)\right)\right) & o.w. \end{cases} \label{eqn:eq_k_cd_SExp_ECwcancel} \end{equation} Cost without task cancellation is given as \begin{equation} \mathbb{E}[C] = k\left(c(1-q)+1\right)(s + 1/\mu). \label{eqn:eq_k_cd_SExp_EC} \end{equation} where $q = \mathbbm{1}(\Delta > s)\left(1 - e^{-\mu(\Delta - s)}\right)$. \label{thm_k_cd_SExp_T_C} \end{theorem} \vspace{2ex} \noindent \textbf{Consider executing a job of $k$ tasks by adding $n-k$ coded tasks after waiting some time $\Delta$.} \begin{theorem} Suppose task execution times are i.i.d. with $\mathrm{Exp}(\mu)$. Distribution and the expected value of job execution time are well approximated as \begin{longaligned}[\label{eqn:eq_k_nd_Exp_tail__ET}] & \Pr\{T > t\} \approx \mathbbm{1}(t \leq \Delta)\left(q^k - (1-e^{-\mu t})^k\right) \tag{\ref{longaligned@\thelongaligned}} \\ &\qquad\qquad\quad + I\left(\mathbbm{1}(t > \Delta)e^{-\mu(t-\Delta)}; n-k+1, k(1-q)\right) \\ &\qquad\qquad\quad - q^k I\left(\mathbbm{1}(t > \Delta)e^{-\mu(t-\Delta)}; n-k+1, 0\right), \\ & \mathbb{E}[T] \approx \Delta - \frac{1}{\mu}\left(B(q;k+1,0) + H_{n-kq} - H_{n-k}\right). \end{longaligned} Cost with ($C^{c}$) or without ($C$) task cancellation is given as \begin{equation} \begin{split} \mathbb{E}[C^c] = \frac{k}{\mu}, \qquad \mathbb{E}[C] = \frac{k}{\mu}q^k + \frac{n}{\mu}\left(1-q^k\right), \end{split} \label{eqn:eq_k_nd_Exp_EC} \end{equation} where $q = 1 - e^{-\mu\Delta}$. 
\label{thm_k_nd_Exp_T_C} \end{theorem} \begin{theorem} Suppose task execution times are i.i.d. with $\mathrm{SExp}(s, \mu)$. Distribution and the expected value of job execution time are given as \begin{equation} \begin{split} \Pr\{T > t\} &= \Pr\{T_e > t - s\}, \\ \mathbb{E}[T] &= s + \mathbb{E}[T_e], \end{split} \label{eqn:eq_k_nd_SExp_tail__ET} \end{equation} where $T_e$ is the job execution time when task execution times are distributed as $\mathrm{Exp}(\mu)$, for which the distribution and the expected value are given in Thm.~\ref{thm_k_nd_Exp_T_C}. Cost with ($C^{c}$) or without ($C$) task cancellation is given as \begin{equation*} \mathbb{E}[C] = \begin{cases} n\left(s + 1/\mu\right) & \Delta \leq s, \\ \left(k + (1-\tilde{q}^k)(n-k)\right)\left(s + 1/\mu\right) & o.w. \end{cases} \end{equation*} \begin{equation*} \mathbb{E}[C^c] = \begin{cases} \begin{aligned} & k/\mu + ns - (n-k)q^k \\ &~ \times \left(\Delta + k\mu\left(\frac{\zeta}{\mu q^k} - \Delta\left(\frac{1}{q}-1\right)\right)\right) \end{aligned} & \Delta \leq s, \\ \begin{aligned} & (\approx)~ \mathbb{E}[C] - \frac{n-k}{\mu}\Bigl(1-q^k + \zeta^{-k(1-q)} \\ &~ \times B(\zeta; k-kq+1, 0)\left(\tilde{q}^k-q^k\right) \Bigr) \end{aligned} & o.w. \end{cases} \end{equation*} where $q = \mathbbm{1}(\Delta > s)\left(1-e^{-\mu(\Delta-s)}\right)$, $\tilde{q} = 1-e^{-\mu\Delta}$ and $\zeta = 1-e^{-\mu s}$. 
\label{thm_k_nd_SExp_T_C} \end{theorem} \begin{figure}[ht] \centering \begin{subfigure}[]{.4\textwidth} \centering \includegraphics[width=1\textwidth, keepaspectratio=true]{figs/plot_EC_vs_ET_wdelay_SExp_k_10.pdf} \label{fig:plot_EC_vs_ET_wdelay_SExp_k_10} \end{subfigure} \begin{subfigure}[]{.4\textwidth} \centering \includegraphics[width=1\textwidth, keepaspectratio=true]{figs/plot_EC_vs_ET_wdelay_Pareto_k_10.pdf} \label{fig:figs/plot_EC_vs_ET_wdelay_Pareto_k_10} \end{subfigure} \caption{Achievable cost (with task cancellation) vs.\ latency in executing a job of $k$ tasks with replicated ($c=1,2$) or coded ($n \in [k+1, 3k]$) redundancy. Each cost vs.\ latency curve is drawn for a fixed number of redundant tasks by varying the launch time $\Delta$ of redundant tasks. Task execution times are i.i.d. with $\mathrm{SExp}$ (Top) and $\mathrm{Pareto}$ (Bottom).} \label{fig:figs/plot_EC_vs_ET_wdelay} \end{figure} In distributed computing systems, outstanding redundant tasks can be canceled by signaling the computing nodes as soon as the job completes, hence we always refer to the cost with task cancellation in the discussions throughout the paper. \begin{figure*}[t] \centering \begin{subfigure}[]{.32\textwidth} \centering \includegraphics[width=1\textwidth, keepaspectratio=true]{figs/plot_zerodelay_reped_vs_coded_SExp_k_10.pdf} \end{subfigure} \begin{subfigure}[]{.32\textwidth} \centering \includegraphics[width=1\textwidth, keepaspectratio=true]{figs/plot_zerodelay_reped_vs_coded_Pareto_k_10_a_1_5.pdf} \end{subfigure} \begin{subfigure}[]{.32\textwidth} \centering \includegraphics[width=1\textwidth, keepaspectratio=true]{figs/plot_zerodelay_reped_vs_coded_Pareto_k_10_a_1_2.pdf} \end{subfigure} \caption{Cost vs.\ latency of executing a job of $k=10$ tasks by employing zero-delay replicated or coded redundancy. The level of employed redundancy, $c$ for replication and $n$ for coding, varies along each curve. 
Tail heaviness of task execution times increases from left to right.} \label{fig:plot_zerodelay_reped_vs_coded_k_10} \end{figure*} When task execution times are exponentially distributed, expressions in Thm.~\ref{thm_k_cd_Exp_T_C} and \ref{thm_k_nd_Exp_T_C} tell us that job execution cost depends neither on the time $\Delta$ at which redundant tasks are launched nor on the level of employed replicated ($c$) or coded ($n$) redundancy. Therefore, according to our model with exponentially distributed task execution times, launching all the available redundant tasks at the beginning (i.e., $\Delta=0$) achieves the minimum latency with zero penalty in cost. Recent research proposes waiting for some time before replicating the tasks to reduce the cost of redundancy \cite{StragglerRep:WangJW15}. Using the expressions given in Thm.~\ref{thm_k_cd_SExp_T_C} and \ref{thm_k_nd_SExp_T_C}, Fig.~\ref{fig:figs/plot_EC_vs_ET_wdelay} plots the cost vs.\ latency tradeoff in executing the same job with different levels of replicated or coded redundancy, by varying the launch time $\Delta$ of the redundant tasks between $0$ and $\infty$. First, let us focus on the case with $\mathrm{SExp}$ task execution times as shown in Fig.~\ref{fig:figs/plot_EC_vs_ET_wdelay} (Top). Cost monotonically decreases while latency monotonically increases with $\Delta$. Let $C(c, \Delta)$, $T(c, \Delta)$ be the cost and latency when $c$ replicas are added for each remaining task after waiting some time $\Delta$. Increasing $\Delta$ initially allows a significant reduction in cost while causing only a slight increase in latency. However, as soon as $T(c, \Delta)$ exceeds $T(c-1, 0)$ (the plot shows this for $c=2$), increasing $\Delta$ further does not pay off; one can achieve lower cost for the same latency by reducing $c$ rather than increasing $\Delta$. This behavior of the cost vs.\ latency tradeoff is even more apparent when coded redundancy is employed.
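The replication curves just described can be traced directly from the approximations in Thm.~\ref{thm_k_cd_Exp_T_C}. A sketch follows (the helper names are ours), using the integral form of $H_n$ for real arguments since $k - kq$ is generally non-integer.

```python
import math

def harmonic(n, steps=20000):
    """H_n = int_0^1 (1 - x^n)/(1 - x) dx for real n >= 0 (midpoint rule)."""
    return sum((1 - ((j + 0.5) / steps) ** n) / (1 - (j + 0.5) / steps)
               for j in range(steps)) / steps

def latency_rep(k, c, mu, delta):
    """E[T] ~= (H_k - c/(c+1) * H_{k - k q}) / mu, with q = 1 - exp(-mu * delta)."""
    q = 1 - math.exp(-mu * delta)
    return (harmonic(k) - c / (c + 1) * harmonic(k - k * q)) / mu

def cost_rep(k, c, mu, delta):
    """E[C] without cancellation = (c (1 - q) + 1) k / mu."""
    q = 1 - math.exp(-mu * delta)
    return (c * (1 - q) + 1) * k / mu
```

Evaluating these for increasing $\Delta$ reproduces the behavior discussed above: latency monotonically increases while cost monotonically decreases.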
Increasing $\Delta$ from zero initially yields no visible cost reduction while incurring a significant increase in latency; cost starts to decrease visibly only after $\Delta$ reaches a certain value. In other words, adding coded tasks with delay can yield significant cost reduction only after a significant sacrifice in latency. Consider the cost and latency at a sufficiently large value of $\Delta$ on a curve with $n-k = r > 1$ coded tasks; the curve below it, with $n-k = r-1$, attains the same latency at lower cost and at a smaller value of $\Delta$. Thus, given a job execution with a sufficiently large value of $\Delta$ and $r > 1$ coded tasks, the same latency can be achieved at lower cost by reducing $\Delta$ and decrementing $r$. The same conclusions hold for the case with $\mathrm{Pareto}$ task execution times (shown in Fig.~\ref{fig:figs/plot_EC_vs_ET_wdelay} (Bottom)). The remainder of this section is concerned with the cost vs.\ latency tradeoff when redundant tasks are launched together with the original tasks (i.e., $\Delta = 0$), which we refer to as \emph{zero-delay redundancy}. For the case with $\Delta > 0$, we could derive the cost and latency expressions only when the distribution of task execution times, which we refer to as $X$ here, has an exponential tail. This is because in the absence of the memoryless property (e.g., when $X$ is heavy tailed), derivations require working with the order statistics of samples drawn from two different distributions\footnote{This issue disappears when the remaining tasks at time $\Delta$ are relaunched.
This is why in Sec.~\ref{sec:sec_straggler_relaunch}, we will be able to derive the cost and latency expressions for the case of jointly performing straggler relaunch and launching redundant tasks after waiting some time $\Delta$.}; the residual execution time of the remaining task copies after time $\Delta$ is distributed as $X|X > \Delta$, while the execution time of copies that are newly launched at time $\Delta$ is distributed as $X$. Order statistics of independent but non-identical random variables have been studied in the literature \cite{OrderStatisticsForINID:BapatB89}. Using the results available in the literature, cost and latency expressions in the case with non-exponential $X$ could be written out; however, the expressions are unwieldy (relatively bearable for job execution with replicas compared to execution with coded tasks). We did not pursue deriving such cumbersome expressions because our purpose was to observe the effect of $\Delta$ on the cost vs.\ latency tradeoff, which is well served by the expressions we derived for the case with shifted-exponential $X$. When $\Delta = 0$, cost and latency expressions can be derived with fairly tractable steps when $X$ has either an exponential or a heavy tail. \begin{theorem} Let the cost (with task cancellation) and latency of executing a job of $k$ tasks be $C_c$, $T_c$ when each task is launched together with $c$ replicas, and let them be $C_n$, $T_n$ when the job is launched with $n-k$ additional coded tasks. When task execution times are i.i.d. with $\mathrm{SExp}(s, \mu)$, \begin{equation*} \begin{split} \mathbb{E}[T_c] &= s + \frac{H_k}{(c+1)\mu}, \qquad \mathbb{E}[C_c] = k\left((c+1)s + \frac{1}{\mu}\right), \\ \mathbb{E}[T_n] &= s + \frac{1}{\mu}(H_n-H_{n-k}), \quad \mathbb{E}[C_n] = ns + \frac{k}{\mu}. \end{split} \label{eqn:eq_k_c_SExp_ET__EC} \end{equation*} When task execution times are i.i.d.
with $\mathrm{Pareto}(s, \alpha)$, \begin{equation*} \begin{split} \mathbb{E}[T_c] &= s k!\frac{\Gamma\left(1-1/\left((c+1)\alpha\right)\right)}{\Gamma\left( k+1-1/\left((c+1)\alpha\right)\right)}, \\ \mathbb{E}[C_c] &= s k(c+1)\frac{\alpha}{\alpha-1/(c+1)}, \\ \mathbb{E}[T_n] &= s\frac{n!}{(n-k)!}\frac{\Gamma(n-k+1-1/\alpha)}{\Gamma(n+1-1/\alpha)}, \\ \mathbb{E}[C_n] &= s\frac{n}{\alpha-1}\left(\alpha - \frac{\Gamma(n)}{\Gamma(n-k)}\frac{\Gamma(n-k+1-1/\alpha)}{\Gamma(n+1-1/\alpha)}\right). \end{split} \label{eqn:eq_k_c_Pareto_ET__EC} \end{equation*} \label{thm_k_cn_ET__EC} \end{theorem} Using the expressions given in Thm.~\ref{thm_k_cn_ET__EC}, Fig.~\ref{fig:plot_zerodelay_reped_vs_coded_k_10} plots the cost vs.\ latency tradeoff in executing the same job by introducing varying levels of zero-delay replicated or coded redundancy. Under both $\mathrm{SExp}$ and $\mathrm{Pareto}$ task execution times, coding always achieves lower latency for the same cost than replication. This observation is formally stated in Thm.~\ref{thm_coding_vs_rep_less_latency_cost}. \begin{theorem} Consider launching a job of $k$ tasks with redundant tasks. Cost and latency are lower when $kc$ MDS coded tasks are added compared to adding $c$ replicas for each task. \label{thm_coding_vs_rep_less_latency_cost} \end{theorem} When task execution times are light tailed, adding redundant tasks into the job reduces its latency but increases its cost. In \cite{RepedComputing:WangJW15}, replication is demonstrated to reduce the cost and latency together when task execution times are heavy tailed. Using the exact expressions in Thm.~\ref{thm_k_cn_ET__EC}, Fig.~\ref{fig:plot_zerodelay_reped_vs_coded_k_10} illustrates that redundancy can reduce cost and latency together when the tail of task execution times is \textit{heavy enough}. The reduction in cost and latency is greater with coding than with replication.
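The Pareto expressions in Thm.~\ref{thm_k_cn_ET__EC} are straightforward to evaluate numerically via log-Gamma, which also makes it easy to check Thm.~\ref{thm_coding_vs_rep_less_latency_cost} for concrete parameters. A sketch (the function names are ours):

```python
import math

def lg(x):
    """Shorthand for log-Gamma; working in log space avoids overflow of k!."""
    return math.lgamma(x)

def latency_rep0(k, c, s, alpha):
    """E[T_c] = s * k! * Gamma(1 - 1/((c+1)a)) / Gamma(k + 1 - 1/((c+1)a))."""
    a = 1.0 / ((c + 1) * alpha)
    return s * math.exp(lg(k + 1) + lg(1 - a) - lg(k + 1 - a))

def cost_rep0(k, c, s, alpha):
    """E[C_c] = s * k * (c+1) * a / (a - 1/(c+1))."""
    return s * k * (c + 1) * alpha / (alpha - 1.0 / (c + 1))

def latency_cod0(k, n, s, alpha):
    """E[T_n] = s * n!/(n-k)! * Gamma(n-k+1-1/a) / Gamma(n+1-1/a)."""
    return s * math.exp(lg(n + 1) - lg(n - k + 1)
                        + lg(n - k + 1 - 1 / alpha) - lg(n + 1 - 1 / alpha))

def cost_cod0(k, n, s, alpha):
    """E[C_n] for n > k, following the Gamma-ratio expression."""
    ratio = math.exp(lg(n) - lg(n - k)
                     + lg(n - k + 1 - 1 / alpha) - lg(n + 1 - 1 / alpha))
    return s * n / (alpha - 1) * (alpha - ratio)
```

For instance, with $k=10$, $s=1$, $\alpha=2$, adding $kc = 10$ coded tasks ($n=20$) yields both lower latency and lower cost than adding $c=1$ replica per task, in line with the theorem.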
We elaborate at the end of this section on the tail heaviness required to achieve reduction in the cost and latency together. Although closed form expressions are formidable to derive, second moments of the cost and latency can be exactly computed as described in Thm.~\ref{thm_k_cn_ET_2__EC_2}, which enables us to compute the standard deviation of the cost and latency. Fig.~\ref{fig:plot_zerodelay_reped_vs_coded_wstdev_k_10} plots the expected cost and latency values with error bars of width equal to the standard deviation in respective dimensions. Variability in the cost and latency naturally decreases with increasing levels of redundancy. Fixing the number of added redundant tasks, coding achieves less variability compared to replication. \begin{theorem} Consider launching a job of $k$ tasks with redundant tasks. Let us denote the cost and latency as $C_c$, $T_c$ when $c$ replicas are added for each task, and as $C_n$, $T_n$ when $n-k$ coded tasks are added. For $X \sim \mathrm{Exp}(\mu)$ and $j \geq i$, we have \begin{equation*} \begin{split} \mathbb{E}[X_{n:i}X_{n:j}] &= \frac{1}{\mu^2}\bigl(H_{n^2} - H_{(n-i)^2} \\ &\qquad\quad + (H_n - H_{n-i})(H_n - H_{n-j})\bigr). \end{split} \label{eqn:eq_Exp_orderstat_joint_moment} \end{equation*} as given in \cite[Pg.~73]{OrderStat:Arnold08}. Let $Y \sim \mathrm{Exp}((c+1)\mu)$. When task execution times are i.i.d. 
with $\mathrm{SExp}(s, \mu)$, second moments of the cost and latency are given as \begin{equation*} \begin{split} \mathbb{E}[T_c^2] &= \left(s + \frac{H_k}{(c+1)\mu}\right)^2 + \frac{H_{k^2}}{(c+1)^2\mu^2}, \\ \mathbb{E}[C_c^2] &= \left(k(c+1)s\right)^2 + 2k(c+1)s\frac{k}{\mu} \\ &\quad + (c+1)^2 \sum_{i,j=1}^k \mathbb{E}[Y_{n:i}Y_{n:j}] \\ \mathbb{E}[T_n^2] &= \frac{H_{n^2} - H_{(n-k)^2}}{\mu^2} + \left(s + \frac{H_n - H_{n-k}}{\mu}\right)^2, \\ \mathbb{E}[C_n^2] &= \left(n s\right)^2 + 2ns\frac{k}{\mu} + (n-k)^2 \mathbb{E}[X_{n:k}^2] \\ &\quad + 2(n-k)\sum_{i=1}^k \mathbb{E}[X_{n:i}X_{n:k}] + \sum_{i,j=1}^k \mathbb{E}[X_{n:i}X_{n:j}]. \end{split} \label{eqn:eq_k_cn_SExp_ET_2__EC_2} \end{equation*} For $X \sim \mathrm{Pareto}(s, \alpha)$, given $\alpha > \max\{2/(n-i+1), 1/(n-j+1)\}$ and $j \geq i$, we have \begin{equation*} \begin{split} \mathbb{E}[X_{n:i}X_{n:j}] = &s^2\frac{n!}{\Gamma(n+1-2/\alpha)} \\ &\times \frac{\Gamma(n-i+1-2/\alpha)}{\Gamma(n-i+1-1/\alpha)} \frac{\Gamma(n-j+1-2/\alpha)}{\Gamma(n-j+1)}. \end{split} \label{eqn:eq_Pareto_orderstat_joint_moment} \end{equation*} as given in \cite[Pg.~62]{Pareto:Arnold15}. Let $Y \sim \mathrm{Pareto}(s, (c+1)\alpha)$. When task execution times are distributed as $\mathrm{Pareto}(s, \alpha)$, second moments of the cost and latency are given as \begin{equation*} \begin{split} \mathbb{E}[T_c^2] &= \mathbb{E}[Y_{k:k}^2], \\ \mathbb{E}[C_c^2] &= (c+1)^2 \sum_{i,j=1}^k \mathbb{E}[Y_{k:i}Y_{k:j}], \\ \mathbb{E}[T_n^2] &= \mathbb{E}[X_{n:k}^2] \\ \mathbb{E}[C_n^2] &= (n-k)^2 \mathbb{E}[X_{n:k}^2] + 2(n-k)\sum_{i=1}^k \mathbb{E}[X_{n:i}X_{n:k}] \\ &\quad + \sum_{i,j=1}^k \mathbb{E}[X_{n:i}X_{n:j}]. \end{split} \label{eqn:eq_k_cn_Pareto_ET_2__EC_2} \end{equation*} \label{thm_k_cn_ET_2__EC_2} \end{theorem} \begin{IEEEproof}[Proof Sketch] Derivations follow from the cost and latency formulation given in the proof of Thm.~\ref{thm_k_cn_ET__EC}. 
\end{IEEEproof} When task execution times are heavy tailed, it is possible to reduce latency by adding redundant tasks and still pay for the baseline cost of executing the job with no redundancy (cf. Fig.~\ref{fig:plot_zerodelay_reped_vs_coded_k_10}). We refer to this as \textit{latency reduction at no cost}. \begin{figure}[t] \centering \includegraphics[width=0.36\textwidth, keepaspectratio=true]{figs/plot_zerodelay_reped_vs_coded_Pareto_k_10_a_3.pdf} \caption{Cost vs.\ latency for zero-delay redundancy systems. The width of horizontal error bars is equal to the standard deviation of latency and the width of vertical bars is equal to the standard deviation of cost.} \label{fig:plot_zerodelay_reped_vs_coded_wstdev_k_10} \end{figure} \begin{corollary} Suppose task execution times are i.i.d. with $\mathrm{Pareto}(s, \alpha)$. Launching a job of $k$ tasks by adding $c$ replicas for each task can reduce its latency up to a minimum value $\mathbb{E}[T_{\min}]$ without incurring any additional cost if and only if $\alpha < 1.5$, and for $c_{\max} = \max\Set{\floor*{1/(\alpha-1)}-1, 0}$, we have \begin{equation} \mathbb{E}[T_{\min}] = s k!\frac{\Gamma\left(1-1/\left(\alpha(c_{\max}+1)\right)\right)}{\Gamma\left(k+1-1/\left(\alpha(c_{\max}+1)\right)\right)}. \label{eqn:eq_k_c_Pareto_reduc_in_ET_for_base_EC} \end{equation} A sufficient condition to reduce latency with no additional cost by adding $n-k$ coded tasks is given as \begin{equation} \alpha^{\alpha} \leq \frac{n}{n-k+1}, \label{eqn:eq_k_n_suffcond_on_a} \end{equation} a necessary condition is given as \begin{equation} \alpha^{\alpha} \leq \frac{n+1}{n-k}, \label{eqn:eq_k_n_neccond_on_a} \end{equation} the minimum latency at no additional cost is given as \begin{equation} \begin{split} \mathbb{E}[T_{\min}] = f(n_{\max}). 
\end{split} \label{eqn:eq_k_n_Pareto_reduc_in_ET_for_base_EC} \end{equation} such that \begin{equation*} \begin{split} & f(n) = s\frac{n!}{(n-k)!}\frac{\Gamma(n-k+1-1/\alpha)}{\Gamma(n+1-1/\alpha)}, \\ & n_{\max} = \max\Set{n~|~f(n) - \frac{f(k)}{(n-k)} - \alpha \leq 0}, \end{split} \end{equation*} or it is bounded as follows \begin{equation} \mathbb{E}[T_{\min}] < s\left(\alpha + k!\frac{\Gamma(1-1/\alpha)}{\Gamma(k+1-1/\alpha)}\right). \label{eqn:eq_k_n_Pareto_reduc_in_ET_for_base_EC__ineq} \end{equation} \label{cor_k_cn_Pareto_reduc_in_ET_for_baseline_EC} \end{corollary} Fig.~\ref{plot_k_cn_pareto_reduc_in_ET_for_baseline_EC} plots the maximum relative latency reduction at no cost in executing the same job under varying degree of tail heaviness in task execution times. Maximum relative latency reduction at no cost is defined as $\left(\mathbb{E}[T_0]-\mathbb{E}[T_{\min}]\right)/\mathbb{E}[T_0]$ where $\mathbb{E}[T_{\min}]$ is the minimum possible latency at no cost, and $\mathbb{E}[T_0]$ is the baseline latency of executing the job with no redundancy. As stated in Cor.~\ref{cor_k_cn_Pareto_reduc_in_ET_for_baseline_EC}, when the employed redundancy is replication, latency reduction at no cost is possible only when the tail index of task execution times is less than $1.5$, that is, only when the tail of task execution times is quite heavy. Employing coded redundancy relaxes this requirement on the tail heaviness, as also shown in the plot. When the employed redundancy is replication, the tail heaviness requirement is independent of the number of tasks $k$ that constitute the job, while employing coded redundancy relaxes the requirement on the tail index further at larger $k$, i.e., the upper threshold on the tail index increases with $k$. This can be explained as follows. A task replica can only replace its original copy, while a coded task can replace any of the $k$ initial tasks. 
Thus, coded tasks can mitigate stragglers more effectively when the job is executed at higher scale (larger $k$), while the effectiveness of task replicas is not associated with the scale of execution. Consequently for jobs that run at higher scale, coding can reduce latency at no cost even under lighter tailed task execution times, while the scale of execution does not change the tail heaviness requirement for replication. \begin{figure}[ht] \centering \includegraphics[width=0.4\textwidth, keepaspectratio=true]{figs/plot_reduct_in_ET_atnocost_wred.pdf} \caption{Maximum relative latency reduction at no cost by employing replicated or coded redundancy vs. the tail of task execution times.} \label{plot_k_cn_pareto_reduc_in_ET_for_baseline_EC} \end{figure} \begin{figure*}[t] \centering \begin{subfigure}[]{.32\textwidth} \centering \includegraphics[width=1\textwidth, keepaspectratio=true]{figs/plot_zerodelay_reped_vs_coded_Google_k_15.pdf} \end{subfigure} \begin{subfigure}[]{.32\textwidth} \centering \includegraphics[width=1\textwidth, keepaspectratio=true]{figs/plot_zerodelay_reped_vs_coded_Google_k_400.pdf} \end{subfigure} \begin{subfigure}[]{.32\textwidth} \centering \includegraphics[width=1\textwidth, keepaspectratio=true]{figs/plot_zerodelay_reped_vs_coded_Google_k_1050.pdf} \end{subfigure} \caption{Simulated cost vs.\ latency curves for executing jobs with $k=15, 400, 1050$ tasks by employing zero-delay replicated or coded redundancy. Task execution time distributions used in the simulations are extracted from a Google Cluster Trace data \cite{GoogleTraceAnalysis:ReissTG12}.} \label{fig:figs/plot_reped_vs_coded_Google} \end{figure*} \vspace{1ex} \noindent\textbf{Demonstration using Google cluster data.} We simulated job executions with replicated or coded redundancy by using task execution time distributions extracted from a Google Cluster data \cite{GoogleClusterTrace:ReissWH11}. 
Google released this data from a cluster running a mixed workload of short- or long-running MapReduce batch jobs, services, and interactive queries \cite{GoogleTraceAnalysis:ReissTG12}. Fig.~\ref{fig:figs/plot_reped_vs_coded_Google} plots the cost vs.\ latency curves using the three empirical distributions for jobs with $k=15, 400, 1050$ tasks that were previously illustrated in Fig.~\ref{fig:figs/plot_google_empiricaltail}. In all three cases, coding does better than replication in the cost vs.\ latency tradeoff. Execution with redundancy could reduce the cost and latency together because each of these distributions exhibits a heavy tail at large values (cf. Fig.~\ref{fig:figs/plot_google_empiricaltail}). In the execution of jobs with $15$ or $1050$ tasks, employing replication does not allow for latency reduction at no cost but coding does. In the execution of the job with $400$ tasks, although replication seems to achieve lower cost and latency at first, coding outperforms replication beyond a certain level of redundancy. \section{When redundancy changes the tail} \label{sec:sec_when_red_changes_tail} So far we have ignored the impact of redundancy on the system. Redundant tasks exert extra load on the system, which is likely to aggravate the existing contention in the system resources. Given that resource contention is the primary cause of runtime variability \cite{TailAtScale:DeanB13}, the added redundant tasks are likely to increase the variability in task execution times. Compute servers are typically shared by the tasks of jobs that simultaneously execute on the cluster \cite{Kubernetes:BurnsGO16}. Two canonical server sharing strategies are 1) Processor sharing: tasks time-share the server according to round-robin scheduling; 2) Queueing: tasks wait in a queue and are accepted into service one at a time.
Modern operating systems implement a mix of processor sharing and First-come First-served (FCFS) queueing to host multiple processes on a server, e.g., scheduling classes SCHED\_FIFO and SCHED\_RR in the Linux kernel \cite{UnderstandingLinuxKernel:BovetC05}. A compute server in reality hosts several shared resources (e.g., CPU, memory, I/O bus), each with its own scheduling scheme. For simplicity, we here model servers as hosting only a CPU. We adopt the \emph{limited processor sharing} model in which tasks are allowed to time-share the server (while being served over multiple CPU cores or threads) until a limited number of them accumulate, beyond which the remaining tasks wait in an FCFS queue. Limited processor sharing has been shown to provide robust performance (in terms of the tail of response time) for both heavy and light tailed task sizes \cite{LimitedPS:Nair10}. In order to understand the impact of added redundancy on the system's runtime variability, we simulated a cluster of servers, each implementing a limited processor sharing queue. Jobs with varying numbers of tasks and task sizes (minimum task execution time) arrive to the cluster according to a Poisson process. Distributions of task sizes and of the number of tasks within real compute jobs are known to exhibit heavy tails \cite{HeavyTailedJobs:Leland86, HeavyTailedJobs:Harchol97, GoogleClusterDataAnalysis:ChenGG10, GoogleTraceAnalysis:ReissTG12}. Therefore, in our simulation: i) The task size for each arriving job is independently sampled from a Truncated-Pareto (a canonical continuous heavy tailed) distribution with a minimum value of $1$, a maximum value of $10^{10}$, and a tail index of $1.1$. The choice of Truncated-Pareto distribution and the values for its parameters come from the distribution of real compute task sizes presented in \cite{UnfairnessSRPT:BansalH01}. ii) The number of tasks that constitute each arriving job is independently sampled from a Zipf (a canonical discrete heavy tailed) distribution.
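The two sampling steps above can be sketched with inverse-CDF sampling. The Truncated-Pareto parameters below are the ones used in our simulation, while the Zipf exponent and support are illustrative choices only (the paper does not fix them); the helper names are ours.

```python
import random

def tpareto_inv_cdf(u, s=1.0, ub=1e10, alpha=1.1):
    """Inverse CDF of Truncated-Pareto on [s, ub] with tail index alpha,
    where F(x) = (1 - (s/x)^alpha) / (1 - (s/ub)^alpha)."""
    return s * (1.0 - u * (1.0 - (s / ub) ** alpha)) ** (-1.0 / alpha)

def sample_task_size(rng):
    """Task size for an arriving job (simulation parameters: s=1, ub=1e10, alpha=1.1)."""
    return tpareto_inv_cdf(rng.random())

def sample_num_tasks(rng, a=2.0, kmax=1000):
    """Number of tasks per job: Zipf(a) truncated to {1, ..., kmax}, via CDF inversion.
    (Exponent a and support kmax are illustrative assumptions.)"""
    weights = [i ** (-a) for i in range(1, kmax + 1)]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights, start=1):
        r -= w
        if r <= 0:
            return i
    return kmax
```

Feeding $u=0$ and $u=1$ into the inverse CDF recovers the two endpoints $s$ and the truncation point $10^{10}$, which is a quick way to check the transform.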
Each arriving job is expanded at the same rate $r > 1$; a job of $k$ tasks gets expanded into $n = \floor{rk}$ tasks by adding $\floor{rk} - k$ coded tasks, and the resulting $n$ tasks are dispatched to the $n$ servers with the fewest tasks in the cluster. As soon as any $k$ of the $n$ tasks of a job are completed, the job completes and its remaining $n-k$ outstanding tasks (either in service or waiting in a queue) are immediately removed from the cluster. Expanding jobs at the same rate $r$ ensures fairness by introducing redundancy in proportion to the scale $k$ at which a job is executed. Cost and latency values for a particular type of job with a fixed number of tasks of unit size are plotted in Fig.~\ref{fig:plot_EC_vs_ET_red_affects_load} for increasing values of $r$. Each simulated server in the cluster implements a limited processor sharing queue with a limit of 8 tasks. The latency of the job is the time span between its arrival to and departure from the system. The cost of the job is the sum of the service time of every task involved in its execution. The simulated curve shows that redundancy initially reduces latency significantly with little change in cost, then reduces latency at increased cost, and finally, beyond a certain level, increases latency with little change in cost. In order to evaluate the appropriateness of modeling task execution times with canonical heavy tailed distributions, we first fitted Pareto and Truncated-Pareto distributions to the task execution times sampled from the simulation, then substituted these fitted models into the analytical cost and latency expressions. We presented cost and latency expressions for Pareto task execution times in Thm.~\ref{thm_k_cn_ET__EC}. Cost and latency are formidable to derive in closed form for Truncated-Pareto task execution times, but their computation involves a single integral which we evaluate numerically (refer to \cite[Pg.~63]{Pareto:Arnold15}).
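That single-integral computation amounts to integrating the tail of the $k$th order statistic over the bounded support. A simplified numerical sketch with illustrative parameters (the helper names are ours; the simulation itself uses an upper bound of $10^{10}$, shrunk here to keep the grid small):

```python
import math

def tpareto_cdf(x, s=1.0, ub=100.0, alpha=1.1):
    """CDF of Truncated-Pareto on [s, ub] with tail index alpha."""
    if x < s:
        return 0.0
    if x >= ub:
        return 1.0
    return (1.0 - (s / x) ** alpha) / (1.0 - (s / ub) ** alpha)

def order_stat_sf(t, k, n, cdf):
    """Pr{X_{n:k} > t} = sum_{j<k} C(n,j) F(t)^j (1 - F(t))^(n-j)."""
    F = cdf(t)
    return sum(math.comb(n, j) * F ** j * (1.0 - F) ** (n - j) for j in range(k))

def expected_kth(k, n, cdf, upper, steps=20000):
    """E[X_{n:k}] = int_0^upper Pr{X_{n:k} > t} dt (midpoint rule; support is bounded)."""
    h = upper / steps
    return sum(order_stat_sf((j + 0.5) * h, k, n, cdf) * h for j in range(steps))
```

With $k=n=1$ this reduces to the mean of the Truncated-Pareto distribution, which has a simple closed form that can be used to validate the quadrature.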
Parameters of the Pareto (minimum value and tail index) and Truncated-Pareto (minimum and maximum values, and tail index) models are estimated using the unbiased MLE estimators that are respectively presented in \cite{ParetoEstimation:Rytgaard90} and \cite[Thm.~1]{TParetoEstimation:AbanMP06}. The comparison given in Fig.~\ref{fig:plot_EC_vs_ET_red_affects_load} between the simulated and fitted values of cost and latency shows that modeling task execution times with a Pareto distribution is fairly appropriate for studying the cost vs.\ latency tradeoff. This is not surprising since the asymptotic approximations of the tail of waiting times in FCFS or processor sharing queues have demonstrated that heavy tailed task sizes result in heavy tailed delay~\cite{QueueingWithHeavyTails:Zwart01, MG1AsymTail:Sakurai04, MG1HeavyTailAsymp:OlveraBG11}. However, one caveat of the model is that it cannot capture the case we observe in (the top right of) Fig.~\ref{fig:plot_EC_vs_ET_red_affects_load} in which adding more redundancy increases latency with little change in cost. In the remainder of this section, we study the cost vs.\ latency tradeoff by adopting a Pareto task execution time model that depends on the rate $r$ at which redundancy is added into all the jobs executing in the system. Expansion of a job with task replicas at an (integer) rate $r$ refers to launching $r-1$ replicas for each of the $k$ tasks within the job. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth, keepaspectratio=true]{figs/plot_EC_vs_ET_red_affects_load.pdf} \caption{Cost vs. latency for a particular type of job with 20 tasks of unit size. Arriving jobs are expanded with coded tasks at a multiplicative factor of $r$.
Simulated values are given for increasing values of $r$; we start with $r=1$ (no redundancy) and then increase $r$ by $0.1$ at each step.} \label{fig:plot_EC_vs_ET_red_affects_load} \end{figure} Redundant tasks added into the system are expected to aggravate resource contention, and consequently increase the variability in task execution times. Therefore, the impact of redundant load exerted on the system should be incorporated in the $\mathrm{Pareto}(s, \alpha)$ distribution that we use to model task execution times. Under stability, an arriving job, with nonzero probability, can find the system empty and complete execution without having to share servers with any other job. Thus, we assume that the minimum task execution time $s$ solely reflects the task size and is not affected by resource contention. Then, the impact of added redundant load should be captured by the only remaining parameter, the tail index $\alpha$. Smaller $\alpha$ implies greater variability (i.e., a greater chance and impact of resource contention), so $\alpha$ is expected to get smaller as more redundancy is added into the jobs, which is indeed what we observe in the simulations. We directly use the job expansion rate $r$ to quantify the level of added redundancy and model $\alpha$ as a function of $r$. Note that we do not study the exact trend describing how $\alpha$ changes with $r$, but rather try to understand the requirements on the relationship between $\alpha$ and $r$ that lead to gain or pain in the cost vs.\ latency tradeoff. We first present sufficient conditions in terms of $\alpha$ and $r$ to yield a reduction or incur an increase in latency. \begin{theorem} Suppose that task execution times are i.i.d. with $\mathrm{Pareto}$ with tail index $\alpha_i$ when jobs arriving to the system are expanded with redundant tasks by a multiplicative factor of $r_i > 1$. Consider increasing $r_i$ to $r_j$.
If jobs are expanded with coded tasks, a sufficient condition to reduce the latency of a job of $k$ tasks by the change $r_i \to r_j$ is \begin{equation} \alpha_i/\alpha_j \leq \log\left(\frac{n_i}{n_i-k+1}\right)/\log\left(\frac{n_j+1}{n_j-k}\right), \label{eq:eq_suffcond_ETred_coding} \end{equation} and a sufficient condition to incur an increase in the job's latency is \begin{equation} \alpha_i/\alpha_j \geq \log\left(\frac{n_i+1}{n_i-k}\right)/\log\left(\frac{n_j}{n_j-k+1}\right), \label{eq:eq_suffcond_ETinc_coding} \end{equation} where $n_i = \floor{k r_i}$ and $n_j = \floor{k r_j}$. If jobs are expanded with task replicas, a necessary and sufficient condition for the change $r_i \to r_j$ to reduce latency is given for any job as \begin{equation} \alpha_i/\alpha_j < r_j/r_i. \label{eq:eq_necessandsuffcond_ETred_rep} \end{equation} \label{thm_suffcond_ETgainpain} \end{theorem} Condition \eqref{eq:eq_suffcond_ETred_coding} for the case of expanding jobs with coded tasks is sufficient to reduce latency, but it may not give tight guarantees. It can be made easier to interpret by expressing the expansion rate $r$ as $n/k$ for a given job of $k$ tasks. Increasing the rate from $n/k$ to $(n+1)/k$, the condition \eqref{eq:eq_suffcond_ETred_coding} becomes \[ \frac{\alpha_n}{\alpha_{n+1}} \leq \log\left(\frac{n}{n-k+1}\right)/\log\left(\frac{n+2}{n-k+1}\right) < 1. \] This says that if the tail heaviness of task execution times (or $\alpha$) stays the same or becomes lighter as $r$ increases, increasing $r$ reduces latency for all jobs regardless of $k$. This is not informative since we already know that latency monotonically decreases in $n$ when the tail heaviness of task execution times stays the same, let alone when it gets lighter (cf.\ Thm.~\ref{thm_k_cn_ET__EC}). Next we derive an \emph{approximate} necessary and sufficient condition to reduce the latency of a particular job by increasing $r$, in the case where jobs are expanded with coded tasks.
The approximation presented below yields close estimates for large enough values of $r$, in particular when $r > 2$. Approximating the quotient of Gamma functions with Stirling's approximation \cite{AsymptoticApproxOfQuotientOfGamma:Tricomi51}, the latency of executing a job of $k$ tasks in a system with coded expansion rate $r = n/k$ is approximately given as \[ \mathbb{E}[T_n] \approx s\left(1 + \frac{k}{n-k+1}\right)^{1/\alpha_n}, \] which gives us the following approximation for the ratio \[ \frac{\mathbb{E}[T_{n+1}]}{\mathbb{E}[T_n]} \approx \left(1 + \frac{k}{n-k+2}\right)^{1/\alpha_{n+1}}\left(1 + \frac{k}{n-k+1}\right)^{-1/\alpha_n}. \] This in turn yields the following approximate necessary and sufficient condition on the growth of the tail index to reduce the latency for jobs of $k$ tasks by increasing $r$ from $n/k$ to $(n+1)/k$, \[ \frac{\mathbb{E}[T_{n+1}]}{\mathbb{E}[T_n]} \lesssim 1 \iff \frac{\alpha_{n+1}}{\alpha_n} \gtrsim \frac{\log(1 + k/(n-k+2))}{\log(1 + k/(n-k+1))}. \] The condition above and the ones given in Thm.~\ref{thm_suffcond_ETgainpain} are quantitative expressions of our intuition; when the redundant load exerted on the system increases the runtime variability, it gets harder to reduce latency by executing jobs with more redundancy as the level of employed redundancy gets higher. When jobs are expanded with task replicas, the condition to reduce latency with more redundancy does not depend on the number of tasks $k$ (scale) within the job; a higher level of replication achieves lower latency as long as the relative growth in the job expansion rate $r$ is larger than the relative reduction in the tail index $\alpha$ (i.e., the relative growth in tail heaviness) of task execution times. In Sec.~\ref{sec:sec_coding_vs_rep}, coded redundancy is shown to be more effective for jobs that run at greater scale.
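As a numerical aside, the Gamma-ratio approximation above can be checked against the exact order-statistic latency $s\,\frac{n!}{(n-k)!}\frac{\Gamma(n-k+1-1/\alpha)}{\Gamma(n+1-1/\alpha)}$ and against simulation. The sketch below is minimal and the parameter values ($k=20$, $n=26$, $\alpha=2$) are illustrative, not taken from our plots:

```python
import math
import random

def exact_latency(k, n, s, alpha):
    # Closed form for the k-th order statistic of n i.i.d. Pareto(s, alpha)
    # task times: E[T] = s * n!/(n-k)! * Gamma(n-k+1-1/a) / Gamma(n+1-1/a).
    lg = math.lgamma
    return s * math.exp(lg(n + 1) - lg(n - k + 1)
                        + lg(n - k + 1 - 1 / alpha) - lg(n + 1 - 1 / alpha))

def simulated_latency(k, n, s, alpha, trials=20000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Inverse-CDF sampling of Pareto(s, alpha) task times.
        draws = sorted(s * (1 - rng.random()) ** (-1 / alpha) for _ in range(n))
        total += draws[k - 1]  # job completes at the k-th task completion
    return total / trials

# Illustrative values: a job of k = 20 tasks expanded at rate r = n/k = 1.3.
k, n, s, alpha = 20, 26, 1.0, 2.0
exact = exact_latency(k, n, s, alpha)
approx = s * (1 + k / (n - k + 1)) ** (1 / alpha)  # Gamma-ratio approximation
sim = simulated_latency(k, n, s, alpha)
```

For these values the approximation lands within a few percent of the exact latency, and its accuracy improves as $r$ grows, consistent with the $r > 2$ remark above.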
Similarly here, when jobs are expanded with coded tasks, the increased runtime variability due to redundant load can be better compensated for by jobs that run at greater scale. In addition, the threshold for redundancy to start incurring higher latency grows at a slower rate in $r$ when coded tasks are used compared to using task replicas. This is due to the fact that coded redundancy is more efficient; it yields a greater reduction in latency per introduced redundant task compared to replication. \section{Straggler Relaunch} \label{sec:sec_straggler_relaunch} Throughout this section, we assume task execution times are heavy tailed. There are two properties of heavy tailed task execution times that greatly affect distributed job execution \cite{PerfEvalWithHeavyTails:Crovella01}. Firstly, the longer a task has taken to execute, the longer its average residual lifetime is expected to be. Secondly, the majority of the mass in a set of sample observations drawn from a heavy tailed distribution is contributed by a few samples. This suggests that among all tasks within a job, a few of them are expected to be stragglers with much longer completion times compared to the non-stragglers. After launching a job, let us wait for a reasonably large $\Delta$ amount of time and check whether the job is completed or not. If the job is still running, we expect only a few tasks to remain, which we refer to as stragglers. The heavy tailed nature of the task execution times suggests that the tasks straggling beyond time $\Delta$ are expected to take at least $\Delta$ more to complete on average. It also suggests that if a fresh copy is launched at time $\Delta$ for each straggling task, fresh copies are likely to complete before their corresponding old copies.
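Both properties are easy to observe in simulation. The sketch below uses illustrative parameters (for a $\mathrm{Pareto}(s, \alpha)$ task time, the mean residual life of a copy that survived past $\Delta > s$ is exactly $\Delta/(\alpha-1)$, so it grows with $\Delta$ and exceeds $\Delta$ itself when $\alpha \leq 2$); it conditions on copies still running at time $\Delta$ and checks the residual lifetime and how often a fresh copy beats the old one:

```python
import random

def pareto(rng, s, alpha):
    # Inverse-CDF sampling of Pareto(s, alpha); 1 - random() lies in (0, 1].
    return s * (1 - rng.random()) ** (-1 / alpha)

# Illustrative parameters (not tied to the figures in the text).
s, alpha, delta = 1.0, 2.5, 8.0
rng = random.Random(0)

stragglers = []  # completion times of copies still running at time delta
while len(stragglers) < 10000:
    x = pareto(rng, s, alpha)
    if x > delta:
        stragglers.append(x)

# Mean residual life of a straggler; for Pareto it equals delta/(alpha-1),
# i.e., the longer we have already waited, the longer the expected remainder.
mean_residual = sum(x - delta for x in stragglers) / len(stragglers)

# Fraction of stragglers whose fresh copy, launched at time delta,
# finishes before the old copy would have.
fresh_wins = sum(pareto(rng, s, alpha) < x - delta
                 for x in stragglers) / len(stragglers)
```

With these values the fresh copy wins well over half the time, in line with the discussion above; for $\Delta$ close to $s$ the advantage shrinks, since the fresh copy still needs at least $s$ to complete.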
In this section, we show that \textit{straggler relaunch}, that is, replacing the straggling tasks with fresh copies after waiting for some time, can yield a significant reduction in cost and latency when the task execution times are \textit{heavy tailed enough}. We investigate the level of tail heaviness required for straggler relaunch to be effective. The selection of the tasks to be relaunched is decided by the time $\Delta$ we wait before relaunching the remaining tasks. An untimely relaunch might be too late, causing delayed cancellation of the stragglers, or too early, killing non-straggler tasks as well. We find an approximation for the optimal time to perform straggler relaunch, which turns out to have a simple and insightful form. Lastly, we consider performing straggler relaunch jointly with adding redundant tasks into the job execution. Exact expressions for the cost and latency of job execution with straggler relaunch are given in Thm.~\ref{thm_k_wrelaunch_T_C}. Note that we assume relaunching tasks takes place instantly and does not incur any additional delay. Performing straggler relaunch before the minimum task completion time $s$ causes pointless work loss and further delays the job completion, while performing straggler relaunch at the right time significantly reduces the latency. The cost is a direct function of the latency in the absence of redundant tasks, hence reduced latency implies reduced cost as well (as illustrated in Fig.~\ref{fig:fig_k_wrelaunch_ET__EC}). \begin{theorem} Suppose task execution times are i.i.d. with $\mathrm{Pareto}(s, \alpha)$. Consider executing a job of $k$ tasks by relaunching all the remaining tasks after waiting some time $\Delta$.
Then, the distribution of job completion time is given as \begin{longaligned}[\label{eqn:eq_k_wrelaunch_tail}] \Pr\{T &> t\} = 1 - \left(\mathbbm{1}(t > s)\left(1 - (s/t)^{\alpha}\right)\right)^k \tag{\ref{longaligned@\thelongaligned}} \\ &+ \left(q + \mathbbm{1}(t > \Delta)\left(1 - (\Delta/t)^{\alpha}\right)(1-q)\right)^k \\ &- \left(q + \mathbbm{1}(t > \Delta+s)\left(1 - \left(s/(t-\Delta)\right)^{\alpha}\right)(1-q)\right)^k. \end{longaligned} Latency is given as \begin{equation} \mathbb{E}[T] = \begin{cases} \Delta + L & \Delta \leq s, \\ \begin{aligned} & \Delta(1-q^k) + L\bigl((s/\Delta-1) \\ &\qquad \times I(1-q; 1-1/\alpha, k) + 1\bigr) \end{aligned} & o.w. \end{cases} \label{eqn:eq_k_wrelaunch_tail__ET} \end{equation} Cost with ($C^c$) or without ($C$) task cancellation is given as \begin{longaligned}[\label{eqn:eq_k_wrelaunch_EC}] &\mathbb{E}[C^c] = \begin{cases} k\Delta + \frac{1}{\alpha-1}\left(ks\alpha - L\right) + k(1-q)\Delta & \Delta \leq s, \\ \frac{\alpha}{\alpha-1}\left(k(1-q)(s-\Delta) + ks\right) & o.w. \end{cases} \\ &\mathbb{E}[C] = \begin{cases} k\Delta + ks\frac{\alpha}{\alpha-1} & \Delta \leq s, \tag{\ref{longaligned@\thelongaligned}} \\ \frac{\alpha}{\alpha-1}\left(ks(2-q)\right) - \frac{k\Delta(1-q)}{\alpha-1} & o.w. \end{cases} \end{longaligned} where $q = \mathbbm{1}(\Delta > s)\left(1 - (s/\Delta)^{\alpha}\right)$, and $L = s k!\Gamma(1-1/\alpha)/\Gamma(k+1-1/\alpha)$ is the baseline latency of executing the job without straggler relaunch. \label{thm_k_wrelaunch_T_C} \end{theorem} \captionsetup[subfigure]{labelformat=empty} \begin{figure}[t] \centering \includegraphics[width=.4\textwidth, keepaspectratio=true]{figs/plot_k_wrelaunch_EC_vs_ET_Pareto_k_100.pdf} \caption{Cost vs. latency of executing a job of $100$ tasks by relaunching the remaining tasks after waiting some time $\Delta$.
Relaunch time $\Delta$ is varied from $0$ to $\infty$ along the curve.} \label{fig:fig_k_wrelaunch_ET__EC} \end{figure} \begin{lemma} Suppose task execution times are distributed as $\mathrm{Pareto}(s, \alpha)$, and let $T_{no rel}$ denote the baseline completion time for executing a job of $k$ tasks without straggler relaunch. A sufficient condition for reducing the cost and latency of job execution by performing straggler relaunch is given by \begin{equation} \mathbb{E}[T_{no rel}] > 4 s. \label{eqn:eq_k_wrelaunch_suffcond} \end{equation} This gives a looser sufficient condition on the tail index as \begin{equation} \alpha < \ln(k)/\ln(4). \label{eqn:eq_k_wrelaunch_suffcond_a} \end{equation} The optimal relaunch time to execute the job with minimum cost and latency is approximately given as \begin{equation} \Delta^* \approx \sqrt{s \mathbb{E}[T_{no rel}]} = s\sqrt{\frac{k!\Gamma(1-1/\alpha)}{\Gamma(k+1-1/\alpha)}}. \label{eqn:eq_k_wrelaunch_approx_optd} \end{equation} This implies that the average fraction of the tasks that are relaunched by the optimal strategy is approximately given as \begin{equation} p^* \approx \left(s/\mathbb{E}[T_{no rel}]\right)^{\alpha/2} \approx \frac{\Gamma(1-1/\alpha)^{-\alpha/2}}{\sqrt{k+1}}. \label{eqn:eq_k_wrelaunch_optimal_fraction} \end{equation} Sufficient conditions and the approximations given above are asymptotic and become exact in the limit $k \to \infty$. \label{lm_k_wrelaunch_ET__opt_d_suff_a} \end{lemma} An approximate expression for the relaunch time $\Delta^*$ that minimizes the cost and latency of job execution is given in Lemma~\ref{lm_k_wrelaunch_ET__opt_d_suff_a}. The given approximation converges to the true optimal as $k$ gets larger, e.g., the approximate $\Delta^*$ is very close to the true optimal for the case shown in Fig.~\ref{fig:fig_k_wrelaunch_ET__EC} with $k=100$. The optimal relaunch time is an increasing function of the number of tasks $k$ and the task sizes $s$, which intuitively makes sense.
It is also a decreasing function of $\alpha$, meaning that it is better to relaunch earlier when the tail of task execution times is lighter, while for a heavier tail, delaying the relaunch helps to identify the stragglers better. This is because relaunching tasks amounts to canceling work that is already completed in the hope of getting lucky and executing the replacement copies much faster. When the task execution times are heavier in tail, the residual lifetime of the straggler tasks is expected to be much larger, and the gain of relaunching stragglers can compensate for the work loss. However, with lighter tailed task execution times, it is better to take our chance by relaunching earlier and keep the work loss limited. As discussed above, $\Delta^*$ gets smaller as $\alpha$ increases, but this does not imply relaunching a larger fraction of the job's tasks. When relaunching is performed after waiting $\Delta^*$, the fraction $p^*$ of the tasks that are relaunched monotonically decreases\footnote{$p^*$ is a monotonically decreasing function of $\alpha$. As the tail of task execution times becomes very heavy; $\lim_{\alpha \rightarrow 1} \Gamma(1-1/\alpha)^{-\alpha/2} = 1$, and as the tail becomes very light; $\lim_{\alpha \rightarrow \infty} \Gamma(1-1/\alpha)^{-\alpha/2} \approx 0.749$.} with $\alpha$, that is, fewer tasks get relaunched on average by the optimal strategy as the tail gets lighter. In addition, $p^*$ decreases with $k$, which means that for jobs with a larger number of tasks, the optimal strategy dictates relaunching a smaller fraction of the tasks. For instance, suppose $\alpha=2$ and $k=10$; then $p^* \approx 0.17$, which implies $17\%$ of the tasks would need to be relaunched on average with the optimal strategy, while if $k=100$, then only $6\%$ of the tasks would need to be relaunched on average. We assume relaunching tasks does not introduce any additional delay.
Given that, the cost of job execution changes directly with the latency; thus, the optimal relaunch time that minimizes latency also minimizes the cost. Note that the cost of relaunching may not be negligible in practice, which is why the results presented here on the performance of straggler relaunch can only be taken as optimistic guidelines. \begin{figure}[b] \centering \includegraphics[width=0.35\textwidth, keepaspectratio=true]{figs/plot_k_wrelaunch_Pareto__reduc_in_ET_vs_a.pdf} \caption{Maximum reduction in the latency of executing a job of $k$ tasks with straggler relaunch (relative to the baseline without relaunch) depends on the tail of the task execution times. Vertical dashed lines indicate the sufficient condition given on $\alpha$ in Lemma~\ref{lm_k_wrelaunch_ET__opt_d_suff_a}.} \label{fig:fig_k_wrelaunch__reduc_in_ET_vs_a} \end{figure} For relaunching to be effective, the work loss due to the cancellation of already running tasks should be compensated by the gain of not having to wait very long for the stragglers. In other words, straggler relaunch is effective only if the tail of task execution times is heavy beyond a level. Otherwise, relaunching tasks hurts performance; it incurs additional cost and latency in the job execution. For instance, relaunching always hurts when task execution times have a light tail, i.e., when the tail decays at least exponentially fast. Lemma~\ref{lm_k_wrelaunch_ET__opt_d_suff_a} presents an asymptotic sufficient condition for straggler relaunch to be effective, which has a particularly nice form; if the baseline latency without relaunching is greater than 4 times the minimum task completion time $s$, then relaunching stragglers at the right time will reduce the cost and latency of job execution.
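The quantities in Lemma~\ref{lm_k_wrelaunch_ET__opt_d_suff_a}, including the worked $p^*$ figures quoted above, can be reproduced with a few lines of arithmetic. The choices $\alpha = 2$ and $k \in \{10, 100\}$ follow the example in the text; $s = 1$ is an arbitrary unit:

```python
import math

def baseline_latency(k, s, alpha):
    # E[T_norel] = s * k! * Gamma(1 - 1/alpha) / Gamma(k + 1 - 1/alpha)
    lg = math.lgamma
    return s * math.exp(lg(k + 1) + lg(1 - 1 / alpha) - lg(k + 1 - 1 / alpha))

def optimal_relaunch_time(k, s, alpha):
    # Approximate optimum: Delta* ~ sqrt(s * E[T_norel])
    return math.sqrt(s * baseline_latency(k, s, alpha))

def relaunched_fraction(k, alpha):
    # p* ~ Gamma(1 - 1/alpha)^(-alpha/2) / sqrt(k + 1)
    return math.gamma(1 - 1 / alpha) ** (-alpha / 2) / math.sqrt(k + 1)

s, alpha = 1.0, 2.0
# Sufficient condition for relaunch to help: E[T_norel] > 4 s.
helps = baseline_latency(10, s, alpha) > 4 * s
p10 = relaunched_fraction(10, alpha)     # ~0.17, as in the text
p100 = relaunched_fraction(100, alpha)   # ~0.06
```

For $k=10$ and $\alpha=2$ the baseline latency is about $5.7s > 4s$, so relaunch helps even though the coarser condition $\alpha < \ln(k)/\ln(4) \approx 1.66$ does not hold, illustrating that \eqref{eqn:eq_k_wrelaunch_suffcond} is the sharper of the two conditions.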
Reformulation of this condition in terms of the tail index $\alpha$ suggests that straggler relaunch is effective as long as $\alpha$ is less than a threshold, which is the same as saying the tail of task execution times should be heavy beyond a level. Note that this condition on the tail index $\alpha$ does not depend on the minimum task completion time $s$ and is proportional only to the logarithm of the scale $k$ of job execution, which we also validate by numerically computing the exact necessary and sufficient condition on $\alpha$ (see Fig.~\ref{fig:fig_k_wrelaunch__reduc_in_ET_vs_a}). \section{Redundancy together with Relaunch} \label{sec:sec_red_togetherwith_relaunch} In this section, we consider employing redundant tasks and straggler relaunch jointly for straggler mitigation. \subsection{Zero-delay redundancy with relaunch} Firstly, we consider launching the redundant tasks ($c$ replicas for each task or $n-k$ coded tasks) together with the $k$ initial tasks of a job, then relaunching each remaining task (initial or redundant) after waiting some time $\Delta$. Thm.~\ref{thm_k_cn_wrelaunch_ET} gives exact expressions for the latency, and Lemma~\ref{lm_k_cn_wrelaunch_ET__opt_d_suff_a} presents an asymptotic sufficient condition for relaunching to be effective in reducing latency, and also presents an approximate value for the optimal relaunch time. The relaunch time $\Delta$ does not affect the level of added redundancy, hence the optimal relaunch time that minimizes latency also minimizes cost. \begin{theorem} Suppose task execution times are i.i.d. with $\mathrm{Pareto}(s, \alpha)$. Consider launching a job of $k$ tasks together with redundant tasks, then relaunching all remaining tasks after waiting some time $\Delta$. Let $\mathbb{E}[T_{norel}]$ denote the baseline latency without straggler relaunch as given in Thm.~\ref{thm_k_cn_ET__EC}.
When job is launched by adding $c$ replicas for each task, \begin{longaligned}[\label{eqn:eq_k_c_wrelaunch_ET}] \mathbb{E}[T] = \begin{cases} \Delta + \mathbb{E}[T_{norel}] & \Delta \leq s, \tag{\ref{longaligned@\thelongaligned}} \\ \begin{aligned} & \Delta(1 - q^k) + \mathbb{E}[T_{norel}]\Bigl(1 + \\ &~ (s/\Delta - 1)I(1-q; 1-1/\alpha, k) \Bigr) \end{aligned} & o.w. \end{cases} \end{longaligned} where $q = \mathbbm{1}(\Delta > s)\left(1 - (s/\Delta)^{(c+1)\alpha}\right)$. When job is launched by adding $n-k$ coded tasks, \begin{longaligned}[\label{eqn:eq_k_n_wrelaunch_ET}] \mathbb{E}[T] = \begin{cases} \Delta + \mathbb{E}[T_{norel}] & \Delta \leq s, \tag{\ref{longaligned@\thelongaligned}} \\ \begin{aligned} & \Delta I(1-q; n-k+1, k) + \mathbb{E}[T_{norel}]\Bigl(1 + \\ &~ (s/\Delta-1)I(1-q; n-k+1-1/\alpha, k) \Bigr) \end{aligned} & o.w. \end{cases} \end{longaligned} where $q = \mathbbm{1}(\Delta > s)\left(1 - (s/\Delta)^{\alpha}\right)$. \label{thm_k_cn_wrelaunch_ET} \end{theorem} \begin{lemma} Suppose task execution times are i.i.d. with $\mathrm{Pareto}(s, \alpha)$. Let $\mathbb{E}[T_{no rel}]$ denote the latency for a job of $k$ tasks that is launched together with task replicas or coded tasks (without straggler relaunch), which is given in Thm.~\ref{thm_k_cn_ET__EC}. A sufficient condition that guarantees reduction in cost and latency by also performing straggler relaunch is given as \begin{equation} \mathbb{E}[T_{no rel}] > 4s. \label{eqn:eq_k_cn_wrelaunch_suffcond} \end{equation} A looser sufficient condition is, when task replicas are used \begin{equation} \alpha < \frac{\ln(k)}{(c+1)\ln(4)}, \label{eqn:eq_k_c_wrelaunch_suffcond_a} \end{equation} or when coded tasks are used \begin{equation} \alpha < \frac{\ln\left(n/(n-k+1)\right)}{\ln(4)}. 
\label{eqn:eq_k_n_wrelaunch_suffcond_a} \end{equation} Optimal relaunch time for minimum cost and latency (either when task replicas or coded tasks are used) is approximately \begin{equation} \Delta^* \approx \sqrt{s \mathbb{E}[T_{no rel}]}. \label{eqn:eq_k_cn_wrelaunch_approx_optd} \end{equation} Sufficient conditions and the approximations given above are asymptotic and become exact in the limit $k \to \infty$. \label{lm_k_cn_wrelaunch_ET__opt_d_suff_a} \end{lemma} \begin{proof}[Proof Sketch] Very similar to the proof of Lemma~\ref{lm_k_wrelaunch_ET__opt_d_suff_a}. \end{proof} \begin{figure*}[t] \centering \begin{subfigure}[]{.32\textwidth} \centering \includegraphics[width=1\textwidth, keepaspectratio=true]{figs/plot_k_n_wrelaunch_EC_vs_ET_Pareto_k_100_n+1.pdf} \end{subfigure} \begin{subfigure}[]{.32\textwidth} \centering \includegraphics[width=1\textwidth, keepaspectratio=true]{figs/plot_k_n_wrelaunch_EC_vs_ET_Pareto_k_100_n+10.pdf} \end{subfigure} \begin{subfigure}[]{.32\textwidth} \centering \includegraphics[width=1\textwidth, keepaspectratio=true]{figs/plot_k_c_wrelaunch_EC_vs_ET_Pareto_k_100.pdf} \end{subfigure} \caption{Cost vs.\ latency curves for executing a job of $100$ tasks by adding redundancy and performing straggler relaunch after time $\Delta$. Each curve is plotted by interpolating between the incremental steps of time $\Delta$.} \label{fig:fig_k_nc_wrelaunch_EC_vs_ET} \end{figure*} Launching a job with redundant tasks mitigates the effect of stragglers, and so does relaunching the stragglers after waiting some time. Therefore, the relative latency and cost reduction harvested from straggler relaunch decreases when it is used jointly with redundancy. Straggler relaunch is effective only when the effect of stragglers is significant, i.e., when the tail of task execution times is heavy beyond a level (cf.\ Lemma~\ref{lm_k_wrelaunch_ET__opt_d_suff_a}).
Adding redundant tasks into the job execution already ``cuts'' some of the tail, hence the initial tail heaviness that is required for relaunching to be effective increases with the level of added redundancy. Sufficient conditions \eqref{eqn:eq_k_c_wrelaunch_suffcond_a} and \eqref{eqn:eq_k_n_wrelaunch_suffcond_a} given on the tail heaviness are asymptotic representations of this observation. The upper threshold given on the tail index as the sufficient condition decays (i.e., the required tail heaviness increases) with the level of added redundancy faster when task replicas are used (decays as $1/(c+1)$) compared to using coded tasks (decays as $\ln\left(n/(n-k+1)\right)$). \subsection{Delayed redundancy with relaunch} Secondly, we consider adding redundant tasks and performing straggler relaunch jointly after waiting some time $\Delta$. Cost and latency of job execution in this case are presented in Thm.~\ref{thm_k_cnd_wrelaunch_ET__EC}. Using these expressions, Fig.~\ref{fig:fig_k_nc_wrelaunch_EC_vs_ET} plots the cost vs.\ latency tradeoff by varying $\Delta$ from $0$ to $\infty$ for different levels of added redundancy. When the number of added coded tasks is low, there exists an optimal time $\Delta$ that minimizes the cost and latency of job execution (Left, Fig.~\ref{fig:fig_k_nc_wrelaunch_EC_vs_ET}). This is the same observation that we previously made for the case of performing straggler relaunch without adding any redundancy (cf.\ Fig.~\ref{fig:fig_k_wrelaunch_ET__EC}). As the number of added coded tasks increases, redundancy has a greater effect on the cost and latency than straggler relaunch, hence waiting for some time before adding redundant tasks becomes ineffective in reducing the cost (Middle, Fig.~\ref{fig:fig_k_nc_wrelaunch_EC_vs_ET}). This is the same observation that we made previously for the case of employing delayed redundancy without straggler relaunch (cf.\ Fig.~\ref{fig:fig_delayed_red}).
When task replicas are used rather than coded tasks, regardless of the number of added replicas, delaying the redundancy by $\Delta$ is ineffective in reducing the cost (Right, Fig.~\ref{fig:fig_k_nc_wrelaunch_EC_vs_ET}). This is because replicating each remaining task even once after some time $\Delta$ is enough to dominate the effect of straggler relaunch on the cost vs.\ latency tradeoff. \begin{theorem} Suppose task execution times are i.i.d. with $\mathrm{Pareto}(s, \alpha)$. Let $\mathbb{E}[T_{no red}]$ denote the latency of executing a job of $k$ tasks by relaunching each remaining task after some time $\Delta$ (without adding redundant tasks) as given in Thm.~\ref{thm_k_wrelaunch_T_C}. Consider relaunching and adding $c$ replicas for each remaining task after some time $\Delta$. Then, latency is given as \begin{longaligned}[\label{eqn:eq_k_cd_wrelaunch_ET}] \mathbb{E}[T] \approx \begin{cases} \Delta + s k! \frac{\Gamma(1-1/\tilde{\alpha})}{\Gamma(k+1-1/\tilde{\alpha})} & \Delta \leq s, \\ \begin{aligned} & \mathbb{E}[T_{no red}] + f(\tilde{\alpha}) - f(\alpha). \end{aligned} & o.w. \tag{\ref{longaligned@\thelongaligned}} \end{cases} \end{longaligned} for $f(\alpha) = s\;\frac{\Gamma(1-1/\alpha)}{\Gamma(-1/\alpha)}B(k-kq+1, -1/\alpha)$. Cost with ($C^c$) or without ($C$) task cancellation is given as \begin{equation*} \begin{split} \mathbb{E}[C^c] &= \begin{cases} k\Delta + ks(c+1)\frac{\tilde{\alpha}}{\tilde{\alpha}-1} & \Delta \leq s, \\ \begin{split} & \frac{k\alpha}{(\alpha-1)}(s - \Delta(1-q)) \\ &+ k(1-q)\Delta + ks(c+1)(1-q)\frac{\tilde{\alpha}}{\tilde{\alpha}-1} \end{split} & o.w. \end{cases} \\ \mathbb{E}[C] &= \begin{cases} k\Delta + ks(c+1)\frac{\alpha}{\alpha-1} & \Delta \leq s, \\ \begin{split} & \frac{k\alpha}{(\alpha-1)}(s - \Delta(1-q)) \\ &+ k(1-q)\Delta + ks(c+1)(1-q)\frac{\alpha}{\alpha-1} \end{split} & o.w.
\end{cases} \end{split} \label{eqn:eq_k_cd_wrelaunch_EC} \end{equation*} where $\tilde{\alpha} = (c+1)\alpha$ and $q = \mathbbm{1}(\Delta > s)(1 - (s/\Delta)^{\alpha})$. Consider adding $n-k$ coded tasks instead of task replicas. Then, latency is given as \begin{equation} \begin{split} \mathbb{E}[T] \approx \begin{cases} \Delta + s \frac{n!}{(n-k)!}\frac{\Gamma(n-k+1-1/\alpha)}{\Gamma(n+1-1/\alpha)} & \Delta \leq s, \\ \begin{split} & \Delta(1-q^k) + s\Bigl(\frac{B(n-kq+1,-1/\alpha)}{B(n-k+1,-1/\alpha)} \\ &+ kB(q;k,1-1/\alpha) - q^k\Bigr) \end{split} & o.w. \end{cases} \end{split} \label{eqn:eq_k_nd_wrelaunch_ET} \end{equation} Cost with ($C^c$) or without ($C$) task cancellation is given as \begin{longaligned}[\label{eqn:eq_k_nd_wrelaunch_EC}] \mathbb{E}[C^c] &= \begin{cases} k\Delta + s\frac{n}{\alpha-1}\left(\alpha - \frac{\Gamma(n)}{\Gamma(n-k)}\frac{\Gamma(n-k+1-1/\alpha)}{\Gamma(n+1-1/\alpha)}\right) & \Delta \leq s, \\ \begin{aligned} & \frac{\alpha}{\alpha-1}(k(1-q)(s-\Delta) + ns) \\ &+ k(1-q)\Delta - s(n-k)q^k \\ &- \frac{s}{\alpha-1}(n-k)\frac{B(n-kq+1, -1/\alpha)}{B(n-k+1, -1/\alpha)}. \end{aligned} & o.w. \end{cases} \\ \mathbb{E}[C] &= \begin{cases} k\Delta + ns/(1-1/\alpha) & \Delta \leq s, \\ \begin{aligned} & \frac{\alpha}{\alpha-1}\bigl(ks(1-q+q^k) \\ &\quad + ns(1-q^k)\bigr) - \frac{k\Delta(1-q)}{\alpha-1} \end{aligned} & o.w. \tag{\ref{longaligned@\thelongaligned}} \end{cases} \end{longaligned} where $q = \mathbbm{1}(\Delta > s)(1 - (s/\Delta)^{\alpha})$. 
\label{thm_k_cnd_wrelaunch_ET__EC} \end{theorem} \section{Conclusions} \label{sec:sec_conclusions} This paper presented a theoretical performance evaluation of the two most widely deployed straggler mitigation techniques for distributed job execution: i) adding redundant tasks (together with the original tasks or after waiting some time) into the job and waiting only for a sufficient subset of all launched tasks for job completion, ii) waiting for some time after launching the job and relaunching its remaining tasks. We derived the cost and latency expressions for executing the job by applying either one of these techniques or both jointly. Using the derived expressions, we found the following guidelines for the application of these techniques: i) Waiting for some time before launching redundant tasks is not effective in reducing the cost of redundancy. ii) Launching a job with redundant tasks can reduce not only its latency but also its cost. iii) Launching a job with MDS coded tasks achieves lower cost (hence incurs less additional load on the system) and latency than using task replicas. iv) Relaunching remaining tasks after waiting some time is effective only if the tail of task execution times is heavy beyond a level, and employing redundant tasks together with straggler relaunch increases this tail heaviness requirement. In our system model, we abstract away the job dispatching and resource sharing dynamics by modeling execution times of tasks within a job as i.i.d. random variables. This rather lumped model allows deriving insightful expressions that allow evaluating and comparing widely deployed straggler mitigation techniques. However, application of these techniques modifies the system dynamics, and it is necessary to augment the model to reflect the impact of this modification on task execution times. This is an ongoing challenge for us, and Sec.~\ref{sec:sec_when_red_changes_tail} presented a simulation-driven attempt in this direction.
\bibliographystyle{IEEEtran}
\section{Introduction} \label{se:intro} The progress in micro--structure technology of semi--conductors has made it possible to fabricate a pair of parallel two dimensional (2D) electronic layers that are spatially close to each other. Experimental \cite{DR:Solomon89,DR:Gramila91,DR:Eisenstein92,DR:Gramila93,DR:Gramila94,DR:Sivan92,DR:Giordano94} and theoretical efforts \cite{DR:Laikhtman90,DR:Jauho93,DR:Zheng93,DR:Kamenev95,DR:Flensberg95,DR:Ussishkin97,DR:Kim96,DR:Oreg98} are being made to understand the behavior of these systems. The physics of bi--layer systems is interesting in its own right, but it also serves as a tool to test the internal layer properties. In a typical experimental set up, a current $I$ is sent through one of the layers (layer 1) and a voltage drop, $V_t$, is measured by separate contacts on the other one (layer 2) (see Fig.~\ref{fg:setup}). The ratio between $V_t$ and $I$ defines the transresistance, $R_t$. In a similar way we can define a Hall transresistance $R_t^H$, when a magnetic field is applied. [For precise definitions see Eqs.~(\ref{def:Rt}) and (\ref{def:RtH}) in Sec.~\ref{se:setup}.] The behavior of the transresistances is rather rich due to the possibility to control the layer areas, the electron density in the 2D layers, their mobility, the interlayer tunneling rate, and external parameters like the temperature and a magnetic field. In this work we concentrate on the corner of the parameter space where the mobility is relatively low (disorder is large), tunneling between the layers through local pinholes (or bridges) is possible, and a weak magnetic field can be turned on. This situation occurs when the average distance between the layers, $d$, is relatively small, so we expect the Coulomb forces to be dominant\cite{DR:Bonsager98}.
[We consider here only the cases of a weak or zero magnetic field, with weak interlayer tunneling and weak interlayer interaction; generalizations to strong magnetic fields and nonperturbative tunneling and interactions are subjects for future studies.] In the absence of tunneling, a transresistance arises from frictional (drag) forces due to the Coulomb repulsion between electrons in the two layers. This mechanism involves thermal density fluctuations around a mean value of the electron density and therefore vanishes at low temperatures. In the remainder of the article, this effect will be referred to as the {\em classical drag} mechanism. In the presence of tunneling through pinholes there are additional physical processes that lead to a finite transresistance. The first has to do with the fact that, in the presence of pinholes, current can flow from layer 1 to layer 2 through a pinhole close to the source lead and flow back through another pinhole close to the drain lead. This purely classical effect leads to a net current flow in layer 2, to a potential drop, and finally to a finite transresistance, which we refer to as the {\em leakage contribution}. The potential due to this mechanism depends on the distribution of the pinholes in the sample and, through the weak temperature dependence of the pinholes' conductance, on the temperature. In addition, there is a more subtle mechanism due to the interplay between Coulomb repulsion, tunneling and disorder\cite{DR:Oreg98}. We shall see that the main part of this effect arises from frequencies that are larger than the temperature [see discussion after Eq.~(\ref{eq:Fa})]. In that sense, the effect involves virtual processes of quantum origin, and will be called the {\em quantum mechanism}. The quantum effect is a generalization of the intralayer interaction correction in a single layer disordered system \cite{DS:Altshuler79} to the geometry of bi--layer systems.
Unlike the classical drag mechanism, which vanishes at low temperature, and the leakage contribution, which depends weakly on the temperature, {\em the quantum mechanism increases in a singular way when the temperature decreases}, owing to the diffusive motion of the electrons in layer 1 and layer 2. Without pinholes the Hall transresistance vanishes rapidly ($\propto T^4$) at low temperatures \cite{DR:Kamenev95,DR:Kuang97}. (We do not discuss the case where there are strong correlations between the layers \cite{DR:Yang98}.) We will see below, however, that in the presence of tunneling, there are nonvanishing leakage and quantum contributions to the Hall transresistance. Thus, a measurement of the Hall transresistance is a direct measurement of the leakage and quantum effects. In most geometries, it might be difficult experimentally to separate the quantum and the leakage contributions because the temperature dependence of the transresistance has to be measured very accurately \cite{DR:Oreg98}. However, we argue below that in a geometry where the pinhole distribution is deliberately concentrated near the middle of the sample, at low enough temperature the quantum mechanism is larger than both the classical drag mechanism and the leakage contribution. While the standard drag measurement in the absence of tunneling gives information on thermal fluctuations and interlayer interactions of the system, in the presence of tunneling through pinholes the transresistance measurements can provide interesting information about quantum processes that involve an interplay between disorder and interaction. \begin{figure}[h] \vglue 0cm \hspace{0.05\hsize} \epsfxsize=0.8\hsize \epsffile{transresistance.eps} \refstepcounter{figure} \label{fg:setup} \\ {\small FIG.\ \ref{fg:setup} Geometry for a drag experiment. The lightly shaded areas denote the overlapping regions of the 2D electron gases.
In a typical transresistance measurement a current $I$ is sent through the 2D layer 1 and a (trans)potential $V_t$ develops in the 2D layer 2. In the absence of tunneling between the layers, the Hall transpotential $V_t^H$ is zero at a weak magnetic field $\vec H$ perpendicular to the layers. In the text we discuss the influence of tunneling through local pinholes (or bridges) between the layers on the transresistance and the Hall transresistance [defined in Eqs.~(\ref{def:Rt}) and (\ref{def:RtH})], in the presence of Coulomb repulsion. The dark shaded regions denote the area where pinholes exist and tunneling between the layers is possible. We consider cases where the tunneling region is much smaller than the overlapping region and where the two regions are equal.} \end{figure} For simplicity we quote here results for 2D layers that have identical properties; i.e., they have the same sheet resistance $R_\square$, diffusion constant $D$, Fermi energy $E_F$, Fermi momentum $k_F$, mean free path $l$ and mean free time $\tau$, total density of states (including spin) $\nu$ and inverse Thomas-Fermi screening length $\kappa=2\pi e^2 \nu$. Since the measured transvoltages $V_t$ and $V_t^H$ depend on the precise location of the voltage contacts, we use here an average over the voltages along the appropriate edges in the definition of the transresistances; e.g., for a rectangular geometry we integrate the potential along the boundary and divide by its length. [For a precise definition of the transresistances see Sec.~\ref{se:defs}.] We assume that the length $L_1$ of layer 1 is large compared to its width $W_1$, so that the current density is uniform through the interaction region. A discussion of the actual voltage distribution is given in Sec.~\ref{se:V2} below, for several cases of interest.
In the case where $L_2 \gg W_2$ and the measuring probes for the longitudinal transresistance $R_t$ are very far from the tunneling places, the potential in layer 2, $V^{(2)}(x,y)$, is practically independent of $y$ near the points of measurement. In that case the average over $y$ in the definition of $R_t$ [see Eq.~(\ref{def:Rt})] is not needed. However, in this geometry the Hall transvoltage must be measured close to the tunneling point and is sensitive to the distance from it along the $x$ axis. [In fact, as shown in Sec.~\ref{se:middle}, the Hall transvoltage falls off exponentially with the distance from the tunneling points.] To avoid this complication, it is preferable to measure the Hall transvoltage in a ``cross'' geometry where $L_2 \approx W_1 \ll W_2$. In that case the Hall transresistance is measured far from the tunneling points, the potential $V^{(2)}(x,y)$ depends weakly on $x$, and the average over $x$ in Eq.~(\ref{def:RtH}) is not needed. We shall see below that $R_t$ and $R_t^H$ may be written as the $x$ and $y$ components of a vector $\vec R_t$, which has the form \begin{equation} \label{eq:Rtint} \vec R_t = \frac{1}{I W_2} \left[ {\vec P} + {\vec F} \right], \end{equation} where $I$ is the total current flowing in layer 1, and $W_2$ is the width of layer 2 (see a generalization for nonrectangular configurations in Sec.~\ref{se:defs}). The components of the vector $\vec P$, arising from the leakage contribution, are given by the product of the resistivity tensor of layer 2 and the dipole moment of the tunneling current distribution. The vector $\vec F$ is due to the momentum transferred from layer 1 to layer 2; it includes both the classical drag contribution and the quantum effect in the presence of tunneling between the layers. If the tunneling between the layers occurs uniformly all over the sample then the dipole moment $\vec P$ is large and the quantum mechanism is a small correction to the leakage contribution to the transresistances.
However, when the pinholes are concentrated in the middle of the sample the dipole moment is small and the quantum contribution is dominant. When layer 1 and layer 2 have rectangular shapes of sizes $L_1 \times W_1$ and $L_2 \times W_2$, respectively, and the pinhole distribution is concentrated in a rectangle of dimensions $a \times b$ near the middle of the sample, we find: \begin{eqnarray} R_t= - \frac{S_{\rm int}}{W_1 W_2} \rho_D +\left(\frac{ a^2}{W_1 W_2}\left[1+ \alpha_t t_\square \log\left( \frac{1}{T \tau} \right) \right] \right. \nonumber \\ \label{eq:Rt} \left. \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+ t_\square \pi \frac{\log(\kappa d)}{ \kappa d} \frac{L_T^2}{W_1 W_2} \right) \frac{R_\square^2}{12 R_\perp}, \end{eqnarray} and \begin{eqnarray} R_t^H= \left(\frac{ a^2+b^2}{2 W_1 L_2}\left[1 + \alpha_t^H t_\square\log\left(\frac{1}{ T \tau} \right) \right] \right. \nonumber \\ \label{eq:RtH} \left. \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + t_\square \pi \frac{\log(\kappa d)}{ \kappa d} \frac{L_T^2}{W_1 L_2} \right) \frac{ R_\square R_H}{6 R_\perp}. \end{eqnarray} where $\rho_D \propto T^2 $ is the bulk drag coefficient [see the precise expression for it in Eq.~(\ref{eq:rhoD})], $t_\square = R_\square e^2 / 2 \pi^2 \hbar$, $S_{\rm int}= L \times W$ is the area of the layers' overlapping region, with $L=\min \{L_1,L_2\}$ and $W=\min \{ W_1, W_2 \}$, $R_H$ is the Hall resistivity of a single layer, and $R_\perp$ is the total resistance between the layers. The term proportional to $a^2$ (or to $a^2+b^2$ for the Hall transresistance) is the leakage contribution and the term proportional to $L_T^2 = D/T$ is the quantum contribution. The coefficients $\alpha_t$ and $\alpha_t^H$ are numbers of order unity, and the corresponding terms are due to the zero bias anomaly correction to $R_\perp$ and to intralayer interaction and weak localization corrections to the conductivity within each layer \cite{RFS:Altshuler85} (see also Sec.~\ref{sse:rev}).
Expressions (\ref{eq:Rt}) and (\ref{eq:RtH}) are valid for \begin{equation} \label{eq:validity} T > \max \{ \frac{D}{L^2_{\min}}, \frac{1}{\tau} e^{-\pi R_\perp/R_\square} \} \mbox{ and } H \le \frac{1}{\mu}, \end{equation} where $L_{\min} = \min \{L,W \}$, $H$ is the external magnetic field perpendicular to the layers, and $\mu$ is the sample mobility. If $L_2$, $W_2$ and $W_1$ are all comparable to each other, then the quantum contribution simply flattens out and becomes temperature independent for $T< D/L_{\min}^2$. On the other hand, if $L_2$ is much larger than $W_1$ and $W_2$, then there could be an intermediate temperature range $D/L_2^2 \ll T \ll D/L_{\rm min}^2$ where the system is quasi one--dimensional, and the temperature dependence of the quantum contribution to $R_t$ may be even more singular than in Eq.~(\ref{eq:Rt}). If $R_\perp$ is not very large, so that the energy scale $(1/\tau) \exp\left[-\pi R_\perp/R_\square\right]$ may be larger than $D/L^2_{\min}$, then effects which are nonlinear in $R_\perp^{-1}$ may need to be taken into account at low temperatures. If $H$ exceeds $1/\mu$ then effects nonlinear in the magnetic field become important. We have also assumed throughout the paper that the current in layer 1 is so low that the cutoff of the quantum process is determined by the temperature and not by the voltage difference between layer 1 and layer 2. This assumption should hold as long as $J R_\square L_T, J R_\square b \ll T/e$, where $J$ is the current density in layer 1. Let us now examine expressions (\ref{eq:Rt}) and (\ref{eq:RtH}) for the transresistances. If we further assume that $a,b \ll L_T \ll L_{\min}$, then the leakage contribution is suppressed and the quantum contribution dominates it.
On the other hand, the classical drag contribution could be larger than the quantum contribution at finite temperatures, even though $\rho_D$ vanishes as $T^2$, because the classical drag is effective over the entire area of the overlap region, as reflected in the prefactor $S_{\rm int}$. In order to minimize the classical drag contribution, one should choose the dimensions of the sample as small as possible, consistent with the requirement that $L_{\min}$ remain larger than $L_T$ at the lowest achievable temperatures. By contrast, there is no contribution from the classical Coulomb drag to the Hall transvoltage. This happens because in the absence of tunneling no current flows in layer 2, there is no Lorentz force to be compensated, and as a result no Hall transvoltage develops at low temperatures \cite{DR:Kuang97}. In the case when $a,b$ are comparable to the layer size, i.e., when the pinhole distribution is uniform, the quantum contribution is a small correction to the leakage contribution, similar to the small interaction corrections in single-layer samples \cite{RFS:Altshuler85}. In that case the temperature dependence is mainly determined by the intralayer interaction and weak localization corrections to $R_\square$ and by the zero bias anomaly correction to $R_\perp$. Examining Eq.~(\ref{eq:Rt}) we see that at low temperatures the contribution from the classical drag vanishes and the contribution from the leakage term $\propto a^2$ is a temperature--independent constant. The quantum contribution increases as $1/T$. Thus, with the right choice of parameters and at low enough temperatures, the quantum contribution is dominant. We note that while the usual drag mechanism leads to a potential drop in layer 2 that is opposite to the voltage drop in layer 1, both the leakage and the quantum mechanisms give rise to a potential drop in the same direction as in layer 1.
Therefore $\rho_D$ has a sign opposite to that of the leakage and quantum contributions. Thus we expect \cite{DR:Raichev97} that at a temperature $T^*$ the transresistance will change sign. The Hall transresistance has no contribution from the classical drag mechanism \cite{DR:Kamenev95}; hence, it gives a direct measurement of the leakage and the quantum contributions. For a GaAs/AlGaAs rectangular sample of dimensions $20 \mu m \times 5\mu m$, mobility $\mu= 5 \times 10^4 {cm^2}/{Vs}$, electron density $n=4 \times 10^{10} cm^{-2}$, $R_\perp=20 k\Omega$ and $a=0.1\mu m, b=0.1\mu m$, yielding $R_\square \cong 3 k\Omega$, we find (with $\kappa d \cong 3$) that the total contributions of the classical, leakage and quantum mechanisms (neglecting zero bias anomaly, weak localization and intralayer interaction corrections) are: \begin{equation} \label{eq:Rtest} R_t (m \Omega) \cong -300 T^2 + 16 + \frac{4}{T}, \end{equation} with $T$ measured in Kelvin, for $T > 2 mK$. At $T^* \cong 0.3 K $ the transresistance vanishes; below $T^*$ the leakage and quantum contributions win over the classical drag, and at $T < 0.2 K$ the quantum contribution is larger than the leakage one. At $T\cong 2 mK$ the system becomes quasi--$1D$ and the behavior of the quantum correction is even more singular, $\propto 1/ T^{3/2}$ [see Eq.~(\ref{eq:Q1D})]; eventually, at $T \cong 0.1mK$, the quantum contribution becomes temperature independent. The Hall transresistance in the ``cross'' geometry, i.e., when the dimensions of layer 1 are $20 \mu m \times 5\mu m$ and those of layer 2 are $5 \mu m \times 20 \mu m$, with the same 2D electron gas parameters as above, is found to be \begin{equation} \label{eq:RtHest} R_t^H (m \Omega) \cong (160 + \frac{40}{T}) H, \end{equation} with $H$ measured in Tesla, for $T> 2mK$, $H< 0.2 {\rm T}$. The last condition ensures that we are in the linear regime with respect to the magnetic field.
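As an illustrative consistency check of the estimate in Eq.~(\ref{eq:Rtest}), the short script below (our sketch, not part of the original analysis; it assumes a GaAs effective mass $m^*=0.067\,m_e$ and $D=v_F^2\tau/2$ for a spin-degenerate 2DEG) recovers the quasi--1D crossover scale $T\sim\hbar D/k_B L_{\min}^2$ and locates the sign-change temperature $T^*$ by bisection:

```python
import math

hbar, kB, e, me = 1.0546e-34, 1.3807e-23, 1.6022e-19, 9.1094e-31

# sample parameters quoted in the text (SI units)
m_eff = 0.067 * me            # GaAs effective mass (assumption)
mu    = 5.0                   # mobility: 5e4 cm^2/Vs = 5 m^2/Vs
n     = 4e14                  # density: 4e10 cm^-2 in m^-2
L_min = 5e-6                  # smallest sample dimension, 5 um

tau = mu * m_eff / e                    # transport time
kF  = math.sqrt(2 * math.pi * n)        # Fermi momentum, spin-degenerate 2DEG
vF  = hbar * kF / m_eff
D   = 0.5 * vF**2 * tau                 # 2D diffusion constant

# crossover below which the quantum term saturates: T ~ hbar D / (kB L_min^2)
T_qd = hbar * D / (kB * L_min**2)

# sign change of R_t(T) = -300 T^2 + 16 + 4/T  [mOhm, T in K], Eq. (eq:Rtest)
f = lambda T: -300.0 * T**2 + 16.0 + 4.0 / T
lo, hi = 0.1, 1.0                       # f(lo) > 0 > f(hi)
for _ in range(60):                     # bisection for T*
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
T_star = 0.5 * (lo + hi)
```

With these inputs $T^*$ comes out close to the quoted $0.3\,K$ and the quasi--1D crossover near $2\,mK$, consistent with the numbers in the text.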
Since in the absence of tunneling the Hall transresistance is zero, this finite Hall transresistance is a direct measurement of the leakage and quantum corrections. We note that the tunneling region in the middle of the sample does not have to be a square. If it has the shape of a slit, e.g., $a=0.01 \mu m$ and $b=1 \mu m$, then the leakage term in $R_t$ is even smaller. One should bear in mind, though, that in this case the slit has to be aligned very precisely perpendicular to the current in layer 1, in order to keep the leakage term small. In the slit geometry it is easier to make $R_\perp$ larger. The structure of the remainder of the paper is as follows: in Sec.~\ref{se:setup} we discuss the formulation of the problem in terms of the resistivity tensor $\rho_{ij}^{\alpha \beta}({\vec r}, {\vec r'})$, the appropriate boundary conditions and the continuity equation. This leads to a set of integro--differential equations (\ref{eq:con}), (\ref{eq:ohmslawf}) and (\ref{eq:bc12}). Their solution, combined with the appropriate generalization of Gauss's law, gives the transresistances in terms of the conductivity tensor $\sigma_{ij}^{\alpha \beta}({\vec r}, {\vec r'})$, the inverse of the resistivity tensor, which can be determined using a Kubo formalism. In Sec.~\ref{se:micro} a microscopic analysis of the different parts of $\sigma_{ij}^{\alpha \beta}({\vec r}, {\vec r'})$ is performed using the linear response formalism (which we generalize to include the tunneling through local pinholes) for dirty materials. We then discuss in more detail the potential distribution in \mbox{layer~2.} In Sec.~\ref{se:middle} we solve the integro--differential equations in the case where the tunneling occurs only in the middle. We find that the current flow lines in layer 2 are similar to the field lines of a dipole in 2D. In Sec.~\ref{se:uni} we discuss the case when the pinhole distribution is uniform.
We solve Eqs.~(\ref{eq:con}), (\ref{eq:ohmslawf}) and (\ref{eq:bc12}) perturbatively and find an expression for the potential in \mbox{layer 2.} Finally, after a concluding section, appendices with details of the calculations are presented. \section{Macroscopic Equations} \label{se:setup} To be specific we will analyze a system with the geometry depicted in Fig.~\ref{fg:setup}. We use a notation where the indices $i,j,k$ run over the directions $x,y$ and $\alpha,\beta=1,2$ are layer indices. Summation over repeated indices is understood. The local current (per unit length) in layer $\alpha$ and direction $i$, $J_{i}^{\alpha}$, is related to the potential difference between the layers by the continuity equation: \begin{equation} \label{eq:con} \nabla_i J_i^\alpha({\vec r})=(-1)^\alpha J_T\left({\vec r}\right)= g^t({\vec r}) A^{\alpha \beta}V^{\beta}({\vec r}),\;\;\; \end{equation} where $J_T\left({\vec r}\right)$ is the tunneling current density between the layers, the matrix $A^{\alpha \beta}$ equals $-1$ if $ \alpha = \beta $ and $ 1 $ if $ \alpha \ne \beta$, $$ g^t({\vec r}) = \sum_l \delta ({\vec r} - {\vec R}^l) g^l, \;\;\;\; g^l= \frac{e^2}{2 \pi \hbar} |t^l|^2, $$ and $t^l$ is the transmission amplitude of the $l$th pinhole, located at ${\vec R}^l$. In addition, the current at point ${\vec r}$ is related to the electric fields via a generalized Ohm's law that includes the momentum transfer from the other layer: \begin{equation} \label{eq:ohmslawf} J_i^\alpha({\vec r}) = \sigma^{(\alpha)}_{ik} \left[ -\bbox{\nabla}_k V^{(\alpha)} ({\vec r}) + f^{(\alpha)}_k({\vec r}) \right], \end{equation} where $\sigma_{ij}^{(\alpha)}$ are the conductivities of layer $\alpha$ in the absence of the other layer and the vector $\vec f^{(\alpha)}$, describing momentum transfer from layer $\beta \ne \alpha$ to layer $\alpha$, is defined below. To complete the set of differential equations (\ref{eq:con}), (\ref{eq:ohmslawf}) we impose the following boundary conditions.
(i) No current can flow perpendicular to the lateral edges of layer 1 or to the boundaries of layer 2; (ii) the current enters and leaves layer $1$ with a uniform current density $J$. When layer 1 has the shape of a rectangle of length $L_1$ and width $W_1$, the boundary conditions on $J^\alpha_i({\vec r})$ are \begin{mathletters} \label{eq:bc12} \begin{equation} \label{eq:bc1} J^{(1)}_{x}(\pm L_1/2,y)=J, \;\;\;\; J^{(1)}_{y}(x,\pm W_1/2)=0, \end{equation} and \begin{equation} \label{eq:bc2} \left. n_i J_i^{(2)} \right|_{\partial S_2} = 0, \end{equation} \end{mathletters} where $S_2$ is the region of layer $2$, $\partial S_2$ its boundary and $\vec n$ is a vector normal to $\partial S_2$. Notice that, since charge cannot accumulate in layer 2, (\ref{eq:bc2}) is possible only if \mbox{$\int\!\!\!\int_{S_2} d^2 r \nabla \cdot \vec J^{(2)}=0$}. A solution of (\ref{eq:con})-(\ref{eq:bc12}) gives the transresistances in terms of $\vec f^{(\alpha)}$. [Different boundary conditions, reflecting different experimental configurations, can be analyzed in a way similar to the one discussed below; they will change the results for the measured transvoltages.] The components of the vector $\vec f^{(\alpha)}$ are given by \begin{equation} \label{def:f} f^{(\alpha)}_k({\vec r}) \equiv \int\!\!\!\!\int_{S_\beta} d^2 r' \tilde \rho^{\alpha \beta}_{kj}({\vec r}, {\vec r'}) J_j^{\beta}({\vec r'}), \end{equation} where \begin{equation} \label{def:tilderho} \tilde \rho^{\alpha \beta}_{kj}({\vec r}, {\vec r'}) \equiv \rho^{(\alpha)}_{kj} \delta_{\alpha \beta} \delta({\vec r}-{\vec r'}) - \rho^{\alpha \beta}_{kj}({\vec r}, {\vec r'}). \end{equation} The symbol $\rho^{\alpha \beta}_{kj}({\vec r}, {\vec r'})$ is a matrix in the variables $\vec r$ and $ \vec r'$, and in the layer and Cartesian indices, which is the matrix--inverse of the conductivity tensor $\sigma_{i j}^{\alpha \beta} ({\vec r}, {\vec r'})$ that can be found from a microscopic treatment (see Sec.~\ref{se:micro}).
The tensor $\rho^{(\alpha)}_{kj} \delta({\vec r}-{\vec r'}) \equiv \rho^{\alpha \alpha}_{kj}({\vec r}, {\vec r'})$ is the resistivity tensor of layer $\alpha$ in the absence of the other layer (the inverse of $\hat \sigma^{(\alpha)}$), and $S_\beta$ denotes the region of layer $\beta$. The electric field $f^{(\alpha)}_k({\vec r)}$ describes the field formed in layer $\alpha$ due to processes that involve the other layer. For the case of weak coupling between the layers, which we consider here, only the elements of $\tilde \rho$ which are off--diagonal in the layer index need be included, as the diagonal elements are negligible. There are two essential contributions to $\tilde \rho$: one is due to frictional forces (the standard classical drag) and the other is related to the quantum mechanism mentioned earlier. [The leakage contribution is captured by the continuity equation~(\ref{eq:con}).] The drag mechanism does not involve any tunneling through local pinholes between the layers. In this mechanism the electrons in layer 2 interact, via Coulomb forces, with thermal fluctuations of the electron current density in layer 1. They rectify these fluctuations and are thereby dragged in the direction of the current in layer 1. This process leads to an accumulation of charges at the edges of layer 2. As in the case of the standard Hall effect, the charge accumulated at the edges develops an electric field that opposes and cancels the drag force on the electrons. Therefore the voltage drop in layer 2 is opposite to the direction of the current in layer 1, i.e., the drag contribution, $\rho_D$, to the transresistance, $R_t$, has a negative sign. Since the amplitude of the fluctuations is proportional to the temperature, $T$, and, due to the exclusion principle, the average energy of the particle--hole excitations participating in the rectification is proportional to $T$ as well, $\rho_D \propto T^2$~\cite{DR:Zheng93}, i.e., it vanishes as $T \rightarrow 0$.
The second mechanism contributing to $\tilde \rho$ is the quantum process. In this process, an electron--hole pair created in one of the two layers tunnels into the second layer and is annihilated there. The creation and annihilation processes occur due to Coulomb interactions with the other electrons in the system, which do not tunnel between the layers, but which take part in a density fluctuation that is shared between the two layers as a result of the interlayer Coulomb interaction. As we shall see in Sec.~\ref{sse:calQ} below, this process gives rise to correlations between the momenta of the two layers in the absence of an applied electric field, and hence to a mechanism for momentum transfer when a field is applied to one of the layers. To solve Eqs.~(\ref{eq:con})-(\ref{eq:bc12}) perturbatively we define: \begin{eqnarray} \label{def:V1} V^{\alpha}(\vec r)& =& V^{\alpha}_{(0)}({\vec r})+V^{\alpha}_{(1)}({\vec r}), \nonumber \\ \vec J^{\alpha}(\vec r)& =&\vec J^{\alpha}_{(0)}({\vec r})+\vec J^{\alpha}_{(1)}({\vec r}) \end{eqnarray} where $V_{(0)}^{\alpha}({\vec r}) \; \left(\vec J_{(0)}^{\alpha}({\vec r})\right)$ is the potential (current) in the absence of coupling between the layers and $V_{(1)}^{\alpha}({\vec r})\; \left(\vec J_{(1)}^{\alpha}({\vec r})\right)$ is first order in the coupling between the layers. In the absence of coupling to the second layer, Eq.~(\ref{eq:ohmslawf}) reduces to \begin{equation} \label{eq:J0} {J_i}_{(0)}^\alpha ({\vec r}) = -\sigma^{(\alpha)}_{ik} \nabla_k V^{(\alpha)}_{(0)}(\vec r). \end{equation} Substituting Eq.~(\ref{eq:J0}) into Eq.~(\ref{eq:con}) (with vanishing right-hand side in this case) and using the Onsager relations [see details in Eq.~(\ref{eq:conten})], we find the Laplace equation $\Delta V_{(0)}^\alpha=0$ with the boundary conditions (\ref{eq:bc12}).
The solution is straightforward and is given by \begin{equation} \label{sol:V0} V_{(0)}^{(1)}= - J \rho^{(1)}_{xx} x - J \rho^{(1)}_{xy} y,\;\;\; V_{(0)}^{(2)}= \mbox{ const } \end{equation} \begin{equation} \label{sol:J0} {J_x}_{(0)}^{(1)}= J,\; {J_y}_{(0)}^{(1)}= 0; \;\;\; {J_x}_{(0)}^{(2)}={J_y}_{(0)}^{(2)}=0. \end{equation} Treating the coupling between the layers perturbatively, the current and the potential in layer 1 are unaffected to first order, while \begin{equation} \label{eq:J2} {J_i}_{(1)}^{(2)}({\vec r}) = \sigma^{(2)}_{ik} \left[ -\nabla_k V^{(2)}_{(1)}({\vec r})+ f_k^{(2)} ({\vec r}) \right] \end{equation} where \begin{equation} \label{eq:f2} f^{(2)}_k ({\vec r}) = \int d^2 r' \tilde \rho_{kj}(\vec r, \vec r') {J_j}_{(0)}^{(1)}(\vec r'). \end{equation} The potential pattern in the second layer depends on the location of the pinholes, and the measured resistances generally depend on the locations of the voltage contacts. Therefore we define integrated potentials by: \begin{equation} \label{def:U} U_i = -\oint_{\partial S_2} V^{(2)}(x,y) n_i dl = - \int \!\!\!\! \int_{S_2} \nabla_i V^{(2)}(\vec r) d^2 r. \end{equation} The last equality in~(\ref{def:U}) follows from Gauss's theorem $\int\!\!\! \int_{S_2} \nabla \cdot \vec G dx dy = \oint_{\partial S_2} \vec G \cdot \vec n d l$ with $\vec G= \vec w V^{(2)} $, where $\vec w$ is an arbitrary constant nonzero vector. If $S_2$ has the shape of a rectangle of length $L_2$ and width $ W_2$ then $U_x= \int dy \left[ V^{(2)}(-L_2/2,y) - V^{(2)}(L_2/2,y)\right]$ and $U_y = \int dx \left[V^{(2)}(x,-W_2/2) - V^{(2)}(x,W_2/2)\right]$. Thus, the $x$ and $y$ components of $\vec U$ are generalizations of the integrated transvoltage and Hall transvoltage, respectively. Using (\ref{eq:J2}) we find, in matrix notation: \begin{equation} \label{eq:U1} \vec U = \int\!\!\!\!\int_{S_2} d^2 r \hat \rho^{(2)} \vec J^{(2)}_{(1)}(\vec r) - \int \!\!\!\! \int_{S_2} d^2 r \vec f^{(2)}(\vec r).
\end{equation} To evaluate the first term we use the identity $J_i^{(2)}=\nabla_j (r_i J_j^{(2)})- r_i \nabla_j J^{(2)}_j$ and Gauss's theorem with $G_j=r_i J^{(2)}_j$. Since $\hat \rho^{(2)}$ does not depend on position, we find $$ \int\!\!\!\!\int_{S_2} d^2 r \hat \rho^{(2)} \vec J^{(2)}\left({\vec r}\right) = $$ $$ \hat \rho^{(2)} \int_{\partial S_2} \vec r \left( \vec n \cdot \vec J^{(2)} \right) dl - \hat \rho^{(2)} \int\!\!\!\! \int_{S_2} d^2 r \vec r \vec \nabla\cdot J^{(2)}_{(1)}(\vec r). $$ The first term vanishes due to the boundary condition (\ref{eq:bc2}) and therefore we find: \begin{equation} \label{eq:U} \vec U= \vec P + \vec F, \end{equation} where, using Eq.~(\ref{eq:con}), we have: \begin{equation} \label{def:P} \vec P = -\hat \rho^{(2)} \int \!\!\!\! \int_{S_2} d^2 r \vec r J_T(\vec r) = - \hat \rho^{(2)} \int\!\!\!\! \int_{S_2} d^2 r \vec r g^t V^{(1)}_{(0)}\left({\vec r}\right), \end{equation} and the vector $\vec F$ is given by \begin{equation} \label{def:F} \vec F = -\int \!\!\!\! \int_{S_2} \hat {\tilde \rho}(\vec r, \vec r') \vec J^{(1)} (\vec r') d^2r d^2 r'. \end{equation} To summarize, we have found in this section that the $x$ and $y$ components of $\vec U$ are related to the integrated transvoltage and Hall transvoltage and have two contributions. The first term, $\vec P$, describes the leakage contribution to the transvoltages. The second term, $\vec F$, describes the momentum transfer from the first layer to the second due to the classical drag effect and due to the quantum contribution that exists only in the presence of tunneling. We have not actually assumed in this section that the magnetic field is weak. However, when the Hall angle becomes very large, the assumption that the current density is uniform in layer 1 may become inappropriate for a fixed experimental geometry.
In particular, if the length $L$ is not much larger than $W$, the current may be concentrated near the edges of the sample when the Hall angle is very large. \section{Definitions and calculation of the transresistance and the Hall transresistance} \label{se:transresistances} \subsection{$\vec F$: Momentum transfer due to frictional forces and quantum effects} \label{se:F} In the solution of Eqs.~(\ref{eq:con})-(\ref{eq:bc12}) given in Eqs.~(\ref{def:U})-(\ref{def:F}) we have assumed that the couplings between the layers (due to frictional forces and/or tunneling) are weak and treated them perturbatively. In that approximation $\tilde \rho$ has only off--diagonal elements (in the layer index), given by \begin{equation} \label{eq:rhoapp} \tilde \rho^{\alpha \beta}_{kj}({\vec r}, {\vec r'}) \approx \rho_{kl}^{(\alpha)} \sigma^{(\alpha \beta)}_{li}({\vec r}, {\vec r'}) \rho^{(\beta)}_{ij}\left(1-\delta_{(\alpha \beta)} \right). \end{equation} Notice that in the last equation the layer indices are in parentheses to emphasize that there is no summation over these repeated indices. For simplicity we assume henceforth that the layers are identical and isotropic; the extension to nonidentical layers is straightforward.
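The structure of Eq.~(\ref{eq:rhoapp}) can be checked symbolically by inverting the full two-layer conductivity matrix to first order in the interlayer coupling. The snippet below is an illustrative check we add here, with the per-layer tensors reduced to scalars for brevity:

```python
import sympy as sp

# per-layer conductivity as a scalar sigma; weak interlayer coupling s
sigma, s = sp.symbols('sigma s', positive=True)

# full conductivity matrix in the layer indices and its exact inverse
Sigma = sp.Matrix([[sigma, s], [s, sigma]])
Rho = Sigma.inv()

rho = 1 / sigma                                   # single-layer resistivity
rho12 = sp.series(Rho[0, 1], s, 0, 2).removeO()   # interlayer resistivity, O(s)

# Eq. (def:tilderho): tilde-rho^{12} = -rho^{12};  Eq. (eq:rhoapp): rho * s * rho
check = sp.simplify(-rho12 - rho * s * rho)       # should vanish
```

To first order in the coupling the off-diagonal resistivity is indeed $-\rho\,\sigma^{12}\rho$, so $\tilde\rho^{12}=\rho\,\sigma^{12}\rho$ as stated.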
We may then use the following notation for the different parts of the conductance tensor $\sigma_{ij}^{\alpha \beta}({\vec r}, {\vec r'})$: \begin{mathletters} \label{eq:conten} \begin{equation} \label{eq:contena} \sigma_{ij}^{\alpha \beta} ({\vec r}, {\vec r'})= \sigma_{ij} \delta_{\alpha \beta}+ \tilde \sigma_{ij} \left(1-\delta_{\alpha \beta} \right), \end{equation} where the intralayer conductivity tensor may be approximated as local and independent of position: \begin{equation} \label{eq:contenb} \hat \sigma= \pmatrix{ \sigma & \sigma_H \cr -\sigma_H & \sigma } \,\delta ({\vec r} - {\vec r'}),\;\; \end{equation} and \begin{equation} \label{eq:contenc} {\tilde \sigma}_{ij}(\vec r, \vec r') \equiv \sigma_{ij}^{12}( \vec r, \vec r') = \sigma_{ij}^{21}( \vec r, \vec r') \end{equation} \end{mathletters} The conductance $\sigma$ is given by the inverse of the (bare) sheet resistance $R_\square= 1/e^2 \nu D$. The Hall conductance coefficient $\sigma_H$ is essentially given by $ R_H /R_\square^2$, where $R_H=H/ne $, $n$ is the carrier density, $e$ is the carrier charge, and $H$ is the magnetic field. There are corrections to $\sigma$ and $\sigma_H$ due to weak localization and to Coulomb repulsion in combination with disorder \cite{RFS:Altshuler85} that should be included. The tunneling conductance $g^l$ has, similarly to $\sigma$, corrections due to the interplay between Coulomb repulsion and disorder (see details in Sec.~\ref{se:micro}). [The corrections to $\sigma$ and $\sigma_H$ due to drag and interlayer tunneling are small and will be ignored.] We now discuss the behavior of $\tilde \sigma_{ij} \left({\vec r}, {\vec r'}\right)$.
In the approximations we use, both the frictional forces and the tunneling amplitudes are small, and we obtain, to first order in the tunneling rate and frictional forces: \begin{eqnarray} \label{def:tilsigma} \tilde \sigma_{ij}({\vec r}, {\vec r'})&=& \sum_l {\cal Q}^l_{ij} ({\vec r} - {\vec R}^l, {\vec r'} - {\vec R}^l) + \sigma^D_{ij} \delta ( {\vec r} - {\vec r'}) \nonumber \\ &&+\; O[(\sigma^D/\sigma)^2, |t^l|^4, |t^l|^2 (\sigma^D/\sigma)]. \end{eqnarray} The coefficient $\sigma^D_{ij}$ is related to the drag coefficient $\rho_D$ in clean systems ($d \ll l$) \cite{DR:Zheng93,DR:Kamenev95} by: \begin{eqnarray} \label{eq:sigmaD} \sigma^D_{xx}&\equiv&\sigma^D \cong \rho_D/R_\square^2, \nonumber \\ \sigma^D_{xy}&\equiv&\sigma^D_H= 2\sigma_H \sigma^D/ \sigma \cong 2 R_H \rho_D / R_\square^3 \end{eqnarray} where \begin{equation} \label{eq:rhoD} \rho_D = \frac{\hbar}{e^2} \frac{\zeta(3) \pi^2}{16} \frac{1}{(\kappa d)^2 (k_F d)^2} \left(\frac{T}{E_F}\right)^2. \end{equation} [In Eq.~(\ref{eq:sigmaD}) we have neglected weak localization corrections. If $d > l$ then $ \zeta(3) / [16 (k_F d)^2] \rightarrow \log[D \kappa /2 d T] / [12 (k_F l)^2]$. When correlations between the disorder in the layers are included, $\rho_D$ may be multiplied by an $O(1)$ factor \cite{DR:Gornyi98}.] The term ${\cal Q}$ in Eq.~(\ref{def:tilsigma}) arises due to the interplay between the interlayer tunneling and the Coulomb interaction. We will see that the essential contributions to this term are from frequencies larger than the temperature, i.e., it is of quantum origin (therefore we use the letter ${\cal Q}$ to denote it). We find in Sec.~\ref{se:micro} that ${\cal Q}_{jk}^{l}({\vec r}, {\vec r'})$ has a range $\propto L_T=\sqrt{D/T}$, which increases as $T \rightarrow 0$.
In the expression for $\vec F$ we are interested only in the integral of $\tilde \sigma (\vec r, \vec r')$, i.e., in the matrix \begin{equation} \label{eq:Qij0} {\sf Q}_{ij} =\sum_l{\sf Q}^l_{ij}= \frac{1}{L_T^2} \sum_l \int d {\vec s} d{\vec s'} {\cal Q}^l_{ij}({\vec s}, {\vec s'}). \end{equation} The diagonal part of this matrix was calculated in Ref.~\cite{DR:Oreg98} (see also Sec.~\ref{se:Qii}) and is given (to first order in tunneling) by: \begin{equation} \label{eq:rQl} {\sf Q}\equiv{\sf Q}_{xx} = \frac{e^2}{ \hbar} \frac{\log(\kappa d)}{\kappa d} \frac{1}{24 \pi} \frac{R_\square}{R_\perp}. \end{equation} (Notice that the expression here differs from that of Ref.~\cite{DR:Oreg98} by a factor of the sample area divided by $L_T^2$; we introduce it to make the temperature dependence clearer.) This expression is valid as long as higher order corrections are small and $L_T \ll L_{\min}$. In Sec.~\ref{se:Qxy}, in Eq.~(\ref{eq:sQH}), we show that for a weak magnetic field \begin{equation} \label{eq:QH} {\sf Q}_{H}\equiv{\sf Q}_{xy}=-{\sf Q}_{yx}= 0. \end{equation} Using now the definition of ${\vec F}$ in~(\ref{def:F}) with (\ref{sol:J0}), we find, to first order in tunneling and frictional forces at weak magnetic field, \begin{equation} \label{eq:F} \begin{array}{ccccc} F_x &=& -J S_{\rm int} \rho_D &+& J R_\square^2 {\sf Q} L_T^2 \\ F_y &=& & & 2 J R_\square R_H {\sf Q} L_T^2, \end{array} \end{equation} where $S_{\rm int}= S_1 \cap S_2$ is the overlap of the region of layer 1, $S_1$, with the region of layer 2, $S_2$. Notice that in the absence of tunneling $F_y=0$. The quantum corrections $\propto {\sf Q}$ to $F_x$ and $F_y$ are related by a factor $2 R_\square/ R_H$; this is the analog of the ``rule of two'' corrections to the Hall resistance in single-layer samples \cite{RFS:Altshuler85}. \subsection{$\vec P$: The leakage contribution} \label{se:P} The leakage contribution to the transvoltages depends crucially on the distribution of the pinholes.
When the pinholes are distributed uniformly inside a rectangle of size $a \times b$ centered at the origin we find $$ {\vec P} = \frac{J}{R_\perp a b} \int_{-a/2}^{a/2} dx \int_{-b/2}^{b/2} dy \hat \rho \vec r \left ( x/ \sigma + y \sigma_H/\sigma^2 \right) \Rightarrow $$ \begin{equation} \label{eq:P} {\vec P} = \frac{ J}{12 R_\perp} \left[ \frac{a^2}{ \sigma^2} \hat x + \left(a^2 + b^2 \right) \frac{\sigma_H}{\sigma^3} \hat y \right]. \end{equation} [Notice that to fulfill the requirement $\int \!\!\! \int_{S_2} d^2 r \nabla \cdot \vec J^{(2)} =0$ we have, for a symmetric distribution of pinholes around the origin, $V^{(2)}_{(0)} = {\rm const} = 0 $ in Eq.~(\ref{sol:V0}).] The quantum contribution depends on the temperature. Thus, to make the analysis complete, we should also include in the leakage contribution the temperature dependent corrections to $R_\perp$, $\sigma$ and $\sigma_H$ arising from intralayer electron--electron interactions and from weak localization (see Sec.~\ref{sse:rev}). Including these corrections we find: \begin{equation} \label{eq:Pc} \begin{array}{ccc} {\vec P}&=& \frac{ J a^2 R_\square^2}{12 R_\perp} \left[1+ \alpha_t t_\square \log\frac{ 1}{T \tau } \right] \hat x \\ &&+ \frac{ J \left(a^2 + b^2 \right) R_H R_\square}{12 R_\perp} \left[1 + \alpha_t^H t_\square\log\frac{1} {T \tau} \right] \hat y , \end{array} \end{equation} where $\alpha_t = 2 \alpha_{\rm wl}+ 2 \alpha_{\rm in} - \beta_{\rm zba}$, $\alpha_t^H= 3 \alpha_{\rm in} + \alpha_{\rm wl} - \beta_{\rm zba} $ and $t_\square = R_\square e^2/ 2 \pi^2 \hbar$. The coefficients $\alpha_{\rm wl}, \alpha_{\rm in}, \beta_{\rm zba}$, defined in Eqs.~(\ref{eq:sig}) and (\ref{eq:tl}), are associated with weak localization, intralayer Coulomb repulsion, and Coulomb blocking (zero bias anomaly) of the tunneling between the layers, respectively. In zero magnetic field the weak localization corrections are cut off by the dephasing time, $\tau_\phi$ (which is proportional to $1/T$ in 2D).
In the case where $H \gg \hbar c /(e D \tau_\phi)$ we have to cut off the weak localization corrections [$\propto \alpha_{\rm wl}$ in expression (\ref{eq:RtH})] at $T_H= e H D /\hbar c$. \subsection{Definition of transresistances} \label{se:defs} Now we are in a position to define the transresistances. As was demonstrated, $U_x$ and $U_y$ represent generalizations of the transvoltage and the Hall transvoltage, integrated over the boundary. In order to define the transresistances properly we divide $U_i$ by the current injected from the leads into layer 1 and by the appropriate circumference. This gives the following definition: \begin{equation} \label{def:RtxRty} R_{ti} = \frac{U_i}{ I \oint _{\partial S_2} \left|n_i\right| dl/2} = \frac{-\int \!\!\! \int_{S_2} \nabla_i V^{(2)}(x,y) dx dy}{J W_1 \oint_{\partial S_2} | n_i| dl/2}. \end{equation} A rectangular shape of both layers (of size $L_1 \times W_1$ and $L_2 \times W_2$) reduces these definitions to: \begin{eqnarray} \label{def:Rt} R_{tx} \equiv R_t=\frac{U_x}{J W_1 W_2} \;\;\;\;\;\;\;\;\;\;\;\;\;\nonumber\\ = \frac{1}{W_2 I} \int_{-W_2/2}^{W_2/2} dy \left[ V^{(2)}(-L_2/2,y)- V^{(2)}(L_2/2,y) \right] \end{eqnarray} and \begin{eqnarray} \label{def:RtH} R_{ty} \equiv R_t^H=\frac{U_y}{J W_1 L_2} \;\;\;\;\;\;\;\;\;\;\;\;\; \nonumber \\ =\frac{1}{L_2 I} \int_{-L_2/2}^{L_2/2} dx\left[ V^{(2)}(x,-W_2/2)- V^{(2)}(x,W_2/2) \right]. \end{eqnarray} (The integration over the edges can be done experimentally by connecting metallic measuring probes through high barriers.) Using the relation ${\vec U}={\vec P}+{\vec F}$ [see Eq.~(\ref{eq:U})] and expressions (\ref{eq:Pc}) for ${\vec P}$ and (\ref{eq:F}) for ${\vec F}$, we find the expressions for the transresistance (\ref{eq:Rt}) and the Hall transresistance (\ref{eq:RtH}) cited in the Introduction.
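The edge integrations in Eqs.~(\ref{def:Rt}) and (\ref{def:RtH}) are straightforward to carry out numerically for any candidate potential. A minimal sketch, with a toy uniform-field potential standing in for $V^{(2)}$ (this stand-in is our assumption, not the drag solution of the text):

```python
import numpy as np

def _trapz(f, x):
    # simple trapezoidal rule, independent of the numpy version
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def transresistances(V2, L2, W2, I, n=801):
    """Edge-integrated transresistances of Eqs. (def:Rt) and (def:RtH)
    for a given layer-2 potential V2(x, y)."""
    y = np.linspace(-W2 / 2, W2 / 2, n)
    Rt = _trapz(V2(-L2 / 2 + 0 * y, y) - V2(L2 / 2 + 0 * y, y), y) / (W2 * I)
    x = np.linspace(-L2 / 2, L2 / 2, n)
    RtH = _trapz(V2(x, -W2 / 2 + 0 * x) - V2(x, W2 / 2 + 0 * x), x) / (L2 * I)
    return Rt, RtH

# sanity check: a uniform field V2 = -E*x gives Rt = E*L2/I and RtH = 0
E, L2, W2, I = 2.0, 1.0, 0.5, 1e-3
Rt, RtH = transresistances(lambda x, y: -E * x + 0 * y, L2, W2, I)
```

For a potential that is constant along each edge (as happens far from the pinholes in the strip geometry) the integration reduces to a simple difference of two edge values.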
\section{Microscopic theory of $\bbox{\sigma}$, $\bbox{\sigma_H}$, $\bbox{\sigma^D}$, $\bbox{\sigma^D_H}$, \lowercase{$\bbox{g^l}$}, $\bbox{{\sf Q}^{\lowercase{l}}}$ and $\bbox{{\sf Q}^{\lowercase{l}}_H}$} \label{se:micro} In this section we review the dependence of the microscopic, geometry--independent parameters $\sigma$, $\sigma_H$, $\sigma^D$, $\sigma^D_H$ and $g^l$ on the temperature, the interlayer tunneling amplitude, the level of disorder in each layer, and the strength of the intralayer and interlayer interactions. We also derive the behavior of ${\cal Q}_{ij}^{\lowercase{l}} ({\vec r}, {\vec r'})$, introduced in Eq.~(\ref{def:tilsigma}), and of ${\sf Q}^{\lowercase{l}}_{ij}$, defined in Eq.~(\ref{eq:Qij0}). \subsection{Review: \bbox{$g^l$, $\sigma$, $\sigma_H$} and \bbox{$\sigma^D$}} \label{sse:rev} The effect of disorder and Coulomb interaction in a single 2D layer was studied intensively at the beginning of the 1980s \cite{DS:Altshuler82}. The interplay between weak disorder, interference and Coulomb interaction leads to various logarithmic corrections to the sheet conductance $\sigma$ and to the Hall conductance $\sigma^H$. It was found that \cite{DS:Altshuler82}: \begin{equation} \label{eq:sig} \sigma = \left(1/R_\square \right) \left[ 1 - t_\square (\alpha_{\rm int}+ \alpha_{\rm wl}) \log( 1/ \tau T) \right], \end{equation} where $t_\square = R_\square e^2/2 \pi^2 \hbar$, and the $O(1)$ dimensionless coefficients $\alpha_{\rm int}$ and $\alpha_{\rm wl}$ describe the corrections due to interaction and weak localization, respectively. The parameters $g^l$ describe tunneling between the two layers. The finite conductance inside each layer reduces the speed of the charge spreading after an electron tunnels, and leads to a suppression of the tunneling rate \cite{DS:Altshuler79,DS:Levitov96}.
To first order in the intralayer $e$--$e$ interaction \begin{equation} \label{eq:tl} g^l \Rightarrow g^l \left[ 1 - t_\square \beta_{\rm zba} \log( 1/ \tau T) \right], \end{equation} where $\beta_{\rm zba} = \log(\kappa d)$ and $\kappa= 2 \pi e^2 \nu $ is the inverse Thomas--Fermi screening radius. There are no interaction corrections to $\sigma^H$; however, weak localization corrections exist and they are twice as large as the weak localization corrections to the conductance, \begin{equation} \label{eq:sigH} \sigma^H \Rightarrow \sigma^H \left[ 1 - 2 t_\square \alpha_{ \rm wl} \log(1 / \tau T)\right]. \end{equation} They are suppressed by a weak magnetic field; however, in the limit when the magnetic length $\sqrt{\hbar c / 2 e H }$ is larger than the dephasing length (the dephasing time is proportional to $1/T$ in 2D) they should be included. The expressions for $\sigma^D$ and $\sigma_H^D=2 \sigma_H \sigma^D/ \sigma$ are given in Eqs.~(\ref{eq:sigmaD}) and (\ref{eq:rhoD}). They were derived in this form in Eqs.~(35) and (26) of Ref.~\cite{DR:Kamenev95}. \subsection{The matrix ${\cal Q}^{l}_{ij} ({\vec r}, {\vec r'})$} \label{sse:calQ} By definition, the matrix ${\cal Q}^{l}_{ij} ({\vec r}, {\vec r'})$ is given (in a linear response formalism) by the correlation function of the currents $J^{(1)}_i({\vec r})$ and $J^{(2)}_j({\vec r'})$. The Hamiltonian of the system is \cite{DR:Oreg98} \begin{equation} H=H_{1}+H_{2}+H_{\mbox{\footnotesize int}}+H_{T}\,, \label{H} \end{equation} where $H_{1(2)}$ is the Hamiltonian of the isolated layer 1 (2), including elastic disorder, and $H_{\mbox{\footnotesize int}}$ includes interlayer as well as intralayer Coulomb interactions. The first three terms on the r.h.s. of Eq.~(\ref{H}) are the ones traditionally involved in the description of the drag effect \cite{DR:Zheng93,DR:Kamenev95,DR:Flensberg95}.
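To gauge the size of the logarithmic corrections of Eqs.~(\ref{eq:sig})--(\ref{eq:sigH}), one can evaluate $t_\square \log(1/\tau T)$ numerically. A rough sketch, assuming GaAs-like parameters (the effective mass, and hence $\tau$, are our assumptions, not values quoted in the text):

```python
import math

# SI constants
e, hbar, kB, m_e = 1.602176634e-19, 1.054571817e-34, 1.380649e-23, 9.1093837e-31

# assumed GaAs-like sample parameters (illustrative only)
R_square = 3e3            # sheet resistance, ohm
mu = 5.0                  # mobility, m^2/Vs (i.e. 5e4 cm^2/Vs)
m_eff = 0.067 * m_e       # GaAs effective mass (assumption)
T = 0.5                   # temperature, K

t_square = R_square * e**2 / (2 * math.pi**2 * hbar)
tau = mu * m_eff / e                     # elastic scattering time
log_factor = math.log(hbar / (tau * kB * T))
correction = t_square * log_factor       # relative size of one log term
print(f"t_square = {t_square:.3f}, log(1/tau T) = {log_factor:.2f}, "
      f"product = {correction:.3f}")
```

For these numbers the corrections are at the level of a few percent, small enough for the perturbative treatment but not negligible at sub-kelvin temperatures.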
We add a term describing pointlike tunneling processes through pinholes \cite{DR:foot5}: \begin{equation} H_{T} = \sum\limits_{l=1}^{N} V^l \sum\limits_{{\vec k},{\vec p}} e^{i \vec R^l \cdot ({\vec k}-{\vec p})} a_{\vec k}^{\dagger}b_{\vec p} + \mbox{h.c.} \, , \label{HT} \end{equation} where $a\,(a^\dagger)$ and $b\,(b^\dagger)$ are the annihilation (creation) operators of electrons in the first and second layers, respectively, and ${\vec R}^l$, $l=1\ldots N$, are the positions of the $N$ local pinholes. The coupling energy $V^l$ is related to the tunneling amplitude $t^l$ by $t^l = \pi \nu \sqrt {2 L_1 W_1 L_2 W_2} V^l$. The actual calculation of the current--current correlation functions is very similar to the calculation of the interaction corrections to the conductivity and the Hall coefficient in a single layer. In describing the calculation here step by step we will refer to the equivalent steps of the single layer calculation, which are described in some detail in Ref.~\cite{DS:Altshuler80}. \subsubsection{Diagonal elements: ${\cal Q}^{l}_{xx} ({\vec r}, {\vec r'})$} \label{se:Qii} The largest contribution to ${\cal Q}^{l}_{xx} ({\vec r}, {\vec r'})$ (to second order in the tunneling amplitude, and first order in the interlayer interaction) comes from the diagrams depicted in Fig.~\ref{fg:Qrr} (two additional diagrams, in which the direction of the arrows is inverted, should be added). These diagrams are equivalent to the ``three diffuson'' diagrams contributing to the first order intralayer interaction corrections to the conductivity of a single layer [diagrams 5 (d) and (e) of Ref.~\cite{DS:Altshuler80}]. In the present case, due to the tunneling between the layers, the corrections (to second order in the tunneling amplitude) are due to the interlayer interaction; there are two diffusons in each layer, i.e., four diffusons in total, and we expect a more singular temperature dependence.
\begin{figure}[h] \vglue 0cm \hspace{0.1\hsize} \epsfxsize=1 \hsize \epsffile{Qrr.eps} \refstepcounter{figure} \label{fg:Qrr} \\ {\small FIG.\ \ref{fg:Qrr} Two diagrams describing the transconductivity matrix ${\cal Q}^{l}_{ii}$ that are second order in interlayer tunneling (denoted by $\times$); two additional diagrams with opposite arrows should be included. Solid lines with arrows are electron propagators. Full circles denote current vector vertexes at point $\vec r$ in layer 1, where the external electric field is applied, and at point $\vec r'$ in layer 2, where the current is measured. The crosses denote tunneling through a pinhole located at ${\vec R}^l$. Since the impurity (dashed) lines are local on the scale of the mean free path, the interlayer interaction (wavy) line has a range of the order of the interlayer distance $d$, and the relevant frequencies in the diffuson (group of dashed lines) are of the order of the temperature $T$, ${\cal Q}_{xx}^l( {\vec r}, {\vec r'})$ decays strongly when the distance between ${\vec R}^l$ and ${\vec r}$ or ${\vec r'}$ is larger than the thermal length $L_T= \sqrt{ D/ T}$.} \end{figure} These diagrams are calculated using the Matsubara frequency formalism \cite{RFS:Mahan90}. The analytical structure of the frequencies in them is identical to the structure in the case of the single layer interaction corrections to the conductivity \cite{DS:Altshuler80}. In the case of a single layer there are, in addition to diagrams equivalent to the ones in Fig~\ref{fg:Qrr}, ``two diffuson'' diagrams that eventually cancel each other out [diagrams 5 (a), (b) and (c) in Ref.~\cite{DS:Altshuler80}]. This cancellation involves a single impurity line [diagram 5 (c) in Ref.~\cite{DS:Altshuler80}] that is not present here, because by assumption there are no impurities (besides the tunneling impurity) common to the two layers.
However, in the present case, due to the integration over the different fast momenta in the current vertexes and the singular behavior of the four diffuson diagrams, the contribution of the two diffuson diagrams is less singular and can be neglected. After integration over the fast momenta, the formal expression for the longitudinal term ${\cal Q}^l_{xx}(\vec r - \vec R^l, \vec r'- \vec R^l)$ associated with the diagrams in Fig.~\ref{fg:Qrr} reduces to \cite{DR:Oreg98}: \begin{eqnarray} {\cal Q}^l_{xx}(\vec r - \vec R^l, \vec r' - \vec R^l)= \;\;\;\;\;\;\;\;\;\;\;\; \nonumber \\ i \frac{\sigma}{4\pi} \int\limits_{-\infty }^{\infty}\!\!\!\!d\omega \frac{\partial }{\partial \omega } \left[ \omega \coth \frac{\omega }{2T} \right] F_{xx}(\vec r- \vec R^l,\vec r'-\vec R^l;\omega )\,. \label{AA1rr} \end{eqnarray} The function $F_{xx}(\vec r - \vec R^l ,\vec r' - \vec R^l;\omega )$ to second order in the tunneling amplitude is given by: \begin{eqnarray} \label{eq:Fa} F^{\rm (a)}_{xx}(\vec r - \vec R^l ,\vec r' - \vec R^l;\omega )= 8 \frac{R_\square}{R^l_\perp} \frac{D}{(L_1 W_1 L_2 W_2)} \times \nonumber \\ \sum_{Qkq}D (Q_x + q_x/2)(Q_x + k_x/2) U_{12}(Q,\omega) \times \nonumber \\ {(DQ^{2}-i\omega)^{-2}}[(D(\vec Q + \vec k)^{2}-i\omega)(D(\vec Q + \vec q)^{2}-i\omega)]^{-1} \times \\ e^{i\vec q \cdot (\vec r - \vec R^l)} e^{-i \vec k \cdot (\vec r' - \vec R^l)} \, , \nonumber \end{eqnarray} where $R_\perp^l=2\pi \hbar/ e^2 |t^l|^2$ is the resistance due to tunneling through the pinhole located at ${\vec R}^l$, and $U_{12}(Q,\omega)$ is the screened interlayer Coulomb interaction, which is given, in the diffusive case, by \cite{DR:Kamenev95}: \begin{equation} \label{eq:intDif} U_{12}(\vec q, \omega )= \frac{1}{S_{\rm int}}\frac{\pi e^2 q}{\kappa^2 \, \sinh qd} \left(\frac{D q^2 - i \omega}{D q^2}\right)^2, \end{equation} where the divergences at small $q$ are cut off at $D q^2 \approx \omega/(\kappa d)$.
Performing the integration over $Q$ we find \begin{equation} \label{eq:intQ} \int d^2 s d^2 s' {\cal Q}^l_{xx}(\vec s, \vec s') = -\frac{e^2}{\hbar} \frac{1}{24 \pi}\frac{\ln (\kappa d) }{\kappa d} \frac{R_\square}{R_\perp^l} L_T^2. \end{equation} On the other hand, since the frequencies are of the order of the temperature and the momenta $Q$, $q$ and $k$ are of the order of $\sqrt {\omega/ D}$, the function $F^{\rm (a)}$, and therefore ${\cal Q}^l$, decays over a range of order $L_T$. Physically, the relevant time for the quantum phenomena under discussion has to be smaller than $\hbar /T$. In a time $\propto \hbar/T$ the electron diffuses over a length $L_T$; hence the quantum correction has a range $L_T$. This allows us to write ${\cal Q}^l$ as \begin{equation} \label{eq:Ql} {\cal Q}^l_{ij}({\vec r}, {\vec r'}) \cong {\sf Q}^l_{ij} \delta({\vec r} -{\vec r'}) S_T^l(|{\vec r'}|), \end{equation} where $S_T^l(\vec r)$ is a function of range $L_T$ around ${\vec R}^l$. We normalize it in such a way that $\int d^2 r S_T^l(\vec r) = L_T^2$. Integrating the right hand side of Eq.~(\ref{eq:Ql}) with respect to $\vec r$ and $\vec r'$ over all space we find that: \begin{equation} \label{eq:sfQ} \int {\sf Q}_{ij}^l \delta (\vec s - \vec s') S^l_T(|\vec s|) d^2 s d^2 s' = L_T^2 {\sf Q}_{ij}^l. \end{equation} Using Eqs.~(\ref{eq:intQ})--(\ref{eq:sfQ}) we find: \begin{eqnarray} \label{eq:sfQa} {\sf Q}=\sum_l{\sf Q}_{xx}^l= \sum_l \frac{1}{L_T^2} \int d^2 s d^2 s' {\cal Q}^l_{xx}(\vec s, \vec s') = \nonumber \\ -\frac{e^2}{\hbar} \frac{1}{24 \pi}\frac{\ln (\kappa d) }{\kappa d} \frac{R_\square}{R_\perp}, \end{eqnarray} where $1/R_\perp = \sum_l 1/R_\perp^l$. These formulas assume that $L_T$ is small compared to the system dimensions $L$ and $W$. When the temperature $T$ becomes so low that $L_T$ is greater than both $L$ and $W$, the transresistance should flatten out and become independent of temperature.
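For orientation, Eq.~(\ref{eq:sfQa}) and the thermal length are easy to evaluate numerically. The sketch below uses the illustrative sample parameters quoted later in the caption of Fig.~\ref{fg:v2con}; the effective mass, and hence $\tau$, $D$ and $L_T$, are our assumptions:

```python
import math

e, hbar, kB, m_e = 1.602176634e-19, 1.054571817e-34, 1.380649e-23, 9.1093837e-31

# assumed GaAs-like parameters (illustrative)
R_square, R_perp, kappa_d = 3e3, 20e3, 3.0
n, mu, m_eff = 4e14, 5.0, 0.067 * m_e      # density (m^-2), mobility (m^2/Vs)

# magnitude of the first-order quantum transconductance, Eq. (eq:sfQa)
Q = (e**2 / hbar) * math.log(kappa_d) / kappa_d / (24 * math.pi) * R_square / R_perp

# diffusion constant and thermal length L_T = sqrt(hbar*D / kB*T)
tau = mu * m_eff / e
kF = math.sqrt(2 * math.pi * n)
vF = hbar * kF / m_eff
D = 0.5 * vF**2 * tau

def L_T(T):
    return math.sqrt(hbar * D / (kB * T))

# at T = 0.5 K this gives L_T ~ 0.3 micron: larger than the pinhole region
# (a = b = 0.1 micron below) but still well below the sample width W = 5 micron
print(f"|Q| = {Q:.2e} S, L_T(0.5 K) = {L_T(0.5) * 1e6:.2f} um")
```

The numbers confirm that, for these parameters, the sample sits in the regime $a \ll L_T \ll L, W$ assumed throughout the derivation.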
For a system where $L \gg W$, there should also be an intermediate temperature regime $L > L_T > W$, in which the system becomes quasi--one--dimensional; the integration over the momenta in Eq.~(\ref{eq:Fa}) is then one dimensional and the divergences are more singular in temperature. In that case the integration over the momenta in Eq.~(\ref{eq:Fa}) [assuming that $L_{\min} > d$, so that the screening properties retain a $2D$ character] gives: \begin{equation} \label{eq:Q1D} {\sf Q} = - c \frac{e^2}{2 \pi \hbar} \frac{1}{\sqrt {\kappa d}} \frac{R_\square}{R_\perp} \frac{L_T}{L_{\min}}, \end{equation} where $c= 1/\left(2 \pi^{7/2} \right) \zeta\left(5/2 \right) \sim 1/13$. In addition to the exchange diagrams depicted in Fig.~\ref{fg:Qrr}, diagrams of the Hartree type should be included. However, unlike the situation in a single layer system \cite{DS:AltLee80}, the contribution of the Hartree type diagrams is not as singular as the contribution of the exchange diagrams of Fig.~\ref{fg:Qrr}. Formally this happens because the Coulomb line in a Hartree diagram does not transfer momentum between the layers. Let us now examine what happens for higher order terms, i.e., when there is more than a single interlayer tunneling event. For two interlayer tunneling events (fourth order in the tunneling amplitude), at points ${\vec R}^l$ and ${\vec R}^{l'}$, we have to consider several diagrams.
A typical contribution is the term: \begin{eqnarray} \label{eq:Fb} F^{\rm (b)}_{xx}({\vec r} - {\vec R}^l ,{\vec r}'- {\vec R}^l;{\vec r} - {\vec R}^{l'} ,{\vec r}' - {\vec R}^{l'};\omega ) =\nonumber \\ 8 \left[\frac{R_\square}{R_\perp} \frac{D}{\sqrt{L_2 W_2 L_1 W_1}}\right]^2 \frac{1}{L_2 W_2} \times \nonumber \\ \sum_{PQkq}D (Q_x + q_x/2)(P_x + k_x/2) U_{11}(Q,\omega) \times \nonumber \\ {(DQ^{2}-i\omega)^{-2}} [(D({\vec Q} + {\vec q})^{2}-i\omega) (DP^{2}-i\omega)]^{-1} \times \\ (D(\vec P + \vec k)^{2}-i\omega)^{-1} \times \nonumber \\ e^{i {\vec q} \cdot ({\vec r} - {\vec R}^l)} e^{-i {\vec k} \cdot (\vec r' - {\vec R}^l)}e^{ -i ({\vec Q} - {\vec P}) \cdot {\vec R}^l} e^{ i (\vec Q - {\vec P}) \cdot {\vec R}^{l'}} \, . \nonumber \end{eqnarray} If we assume that the tunneling occurs at a single point, then the integration over the direction of the momentum $\vec P$ causes the integral to vanish. When we average over the positions of the tunneling points in an area $a \times b$, we find for $a \ll L_T$ that the result is less singular than $1/T$, while for $a> L_T$ we recover the results 3(b) and (15) of Ref.~\cite{DR:Oreg98}, in which ${\sf Q} \propto (1/T^2) \log T$. Other diagrams give similar results. Practically, if we are interested in situations where the quantum contribution is dominant, we are interested in the case $a \ll L_T$, and we can ignore the second order contribution. \begin{figure}[h] \vglue 0cm \hspace{0.1\hsize} \epsfxsize=1 \hsize \epsffile{Qrr3.eps} \refstepcounter{figure} \label{fg:Qrr3} \\ {\small FIG.\ \ref{fg:Qrr3} The most divergent diagrams contributing to the third order tunneling (6th order in the tunneling amplitude) transconductance. Two additional diagrams with opposite arrows should be included.} \end{figure} The third order contributions, involving three tunneling events (6th order in the tunneling amplitudes), may also be computed.
The most divergent contributions, depicted in Fig.~\ref{fg:Qrr3}, are found by inserting four additional crosses into the diagrams of Fig.~\ref{fg:Qrr} in a way that gives the maximal number of diffusons. For the case $a \ll L_T$ we find that their contribution to ${\sf Q}$ is \begin{equation} \label{eq:3order} {\sf Q}^{\rm (c)}= {\sf Q}^{\rm (a)} \left(\frac{R_\square} {\pi R_\perp} \log {\frac{1}{T \tau}} \right)^2, \end{equation} where ${\sf Q}^{\rm (a)}$ is the first order contribution to ${\sf Q}$ given by (\ref{eq:sfQa}). Thus, the first order results are valid as long as $[R_\square/ (\pi R_\perp)] \log (1/T \tau) < 1$. In the quasi 1D case the condition is $(R_\square/ R_\perp)(L_T/ L_{\min}) \ll 1$. By carefully examining the analytical structure of the diagrams in Fig.~\ref{fg:Qrr}, we can try to understand the physics leading to the singular quantum contribution (\ref{eq:sfQa}) to the transconductance. The ground state of the interacting system may be built up from virtual particle--hole ($p$--$h$) excitations, relative to the Fermi sea of the non--interacting disordered system. The $e$--$e$ interaction, at some instant of time, can create two $p$--$h$ pairs, which propagate forward in time; it can annihilate two $p$--$h$ pairs which originated at an earlier time; it can annihilate one $p$--$h$ pair and create another; it can lead to scattering among existing holes and/or particles; or it can create or annihilate a single pair while scattering an existing particle or hole. The most commonly considered correction to the ground state energy of the Fermi liquid, beyond Hartree--Fock, is the RPA correction, which is represented diagrammatically by closed chains of two or more $p$--$h$ bubbles. In these diagrams, each particle is annihilated by the same hole that was created with it originally, and there are no scattering processes for existing electrons or holes.
If one ignores the momentum vertices, the diagram in Fig.~\ref{fg:Qrr} is a contribution to the RPA ground state energy in which one of the $p$--$h$ pairs, initially created by the Coulomb interaction at a time $t_0$, tunnels from one layer to the other before being annihilated at a later time $t_f$. The second $p$--$h$ pair does not tunnel between the two layers, but forms part of a polarization cloud, represented by the screened interaction propagator, which propagates in both layers because of the interlayer Coulomb interaction. Now let us insert momentum vertices into the diagram, as shown in Fig.~\ref{fg:Qrr}, so that we can compute the time-dependent correlation function between the momenta in the two different layers. From this correlation function, using the Kubo formula, one may compute the transconductance. If the $p$--$h$ pair is created at time $t_0$ with total momentum $q$, the polarization propagator will carry wavevector $-q$. Although the system contains impurities, so that the electron momentum is not conserved, the diagrams which include averaging over impurity positions preserve the total momentum of the $p$--$h$ pair, and of the screened interaction propagator. The tunneling processes randomize the momenta of the particle and hole when they cross from one layer to the other. Consequently, in the absence of Coulomb interactions, there would be no correlation between the total momenta of the two layers. Now, however, there is an induced correlation: in order for the $p$--$h$ pair to be annihilated at time $t_f$ by the polarization propagator created at time $t_0$, it must have the same total momentum $q$ as the original pair. The overall sign of the correlation depends on the sign of the interlayer $e$--$e$ interaction. Indeed, if one changes the sign of the interlayer $e$--$e$ interaction, while keeping fixed the interaction between electrons in the same layer, the contribution of the diagrams in Fig.~\ref{fg:Qrr} will simply change sign.
(This follows from the fact that the bare interlayer interaction appears an odd number of times in the screened interaction propagator for this case.) For the actual situation, where the interlayer interaction is repulsive, we find a negative contribution to the transconductance. When the layers are in the diffusive regime, the particle and hole of an excitation stay closer to each other in position space than in the ballistic case, and stay closer to their initial position. This increases the probability that they tunnel through the same pinhole, and that they eventually annihilate the polarization cloud that was originally created together with them. In the diagrams, this effect is represented by the dressing of the vertexes by the diffuson propagators. Our analysis shows that the correlation of the momentum fluctuations, at $T=0$, extends over very long times, giving rise to a singular contribution to the transconductance. At finite temperatures, when the discontinuity at the Fermi energy is rounded, the contribution of the virtual processes is cut off at times of order $1/T$, and the singular contribution is reduced. \subsubsection{The Hall coefficient ${\cal Q}^{l}_{xy} ({\vec r}, {\vec r'})$} \label{se:Qxy} In the absence of a magnetic field and without any $e$--$e$ interactions, the wave front of the electrons propagating in layer 2 (after tunneling from layer 1) is circularly symmetric~\cite{DR:foot5}. This argument is not changed in the presence of an external magnetic field (though the current distribution now has a component perpendicular to the radial direction), and it leads us to the conclusion that {\it without} $e$--$e$ interaction the tunneling between the layers does not by itself lead to a finite Hall transconductivity coefficient. Generalizing the calculation of the Hall coefficient in a single layer [see Figs.
2 (a) and (b) in Ref.~\cite{DS:Altshuler80}] to the case of two layers, we find that the diagrams describing the Hall transconductivity coefficient, ${\cal Q}^l_{xy}={\cal Q}^l_{H}$, without $e$--$e$ interactions, are the ones depicted in Fig.~\ref{fg:Hdni}. An average over the current direction at the vertexes (full dots in Fig.~\ref{fg:Hdni}) proves that this contribution vanishes. The insertion of Cooperons [compare with Fig.~3 in Ref.~\cite{DS:Altshuler80}] does not change this result. \begin{figure}[h] \vglue 0cm \hspace{0.05\hsize} \epsfxsize=1\hsize \epsffile{Hdni.eps} \refstepcounter{figure} \label{fg:Hdni} \\ {\small FIG.\ \ref{fg:Hdni} Diagrams describing the Hall transconductivity that are second order in interlayer tunneling (denoted by $\times$) and do not include the $e$--$e$ interaction. The dashed vector signifies the external magnetic field, solid lines are electron propagators, and the full dots are current vertexes. An average over the current direction at the vertexes shows that this contribution vanishes.} \end{figure} We shall now show that the inclusion of the interlayer Coulomb interaction does not affect the result that the tunneling gives no contribution to the Hall transconductivity. As in the case without interaction, we insert a magnetic vertex in all possible ways into the diagrams contributing to the transconductivity. Concentrating first on the part related to layer 1 of the diagrams depicted in Fig~\ref{fg:Qrr}, we find three possible ways to insert the magnetic vertex, as shown in Fig.~\ref{fg:Hdi}. \begin{figure}[h] \vglue 0cm \hspace{0.05\hsize} \epsfxsize=1\hsize \epsffile{Hdi.eps} \refstepcounter{figure} \label{fg:Hdi} \\ {\small FIG.\ \ref{fg:Hdi} The different ways to insert a magnetic vertex (dashed line with an arrow) into the left part of the conductivity corrections depicted in \protect{\ref{fg:Qrr}}. These diagrams are similar to those of Fig.~6 in \protect{\cite{DS:Altshuler80}}. The sum of these diagrams is zero.
The contribution of the ``two diffuson'' diagrams (without the diffuson connecting the two sides of the current vertex) vanishes as well.} \end{figure} These diagrams are similar to those of the interaction corrections to the Hall coefficient in a single layer [Fig.~6 of Ref.~\cite{DS:Altshuler80}], and it was shown in Ref.~\cite{DS:Altshuler80} that they cancel each other out. We have thus shown that there is no noninteracting Hall transconductivity and that the most singular terms contributing to the transconductivity do not give rise to any Hall transconductivity either. Since we look for a term different from zero, we also have to check that the ``two diffuson'' diagrams do not contribute to the Hall transconductivity. The insertion of a magnetic field vertex into the two diffuson diagrams gives three diagrams similar to the ones depicted in Fig.~\ref{fg:Hdi}. For the ``two diffuson'' diagrams the diffuson connecting the two parts of the current vertex is missing, and the Green functions on the two sides of the current vertex have opposite imaginary parts. It can be shown, however, that in this case, as for the diagrams in Fig.~\ref{fg:Hdi}, the sum of the three vanishes. The calculation is almost identical to the calculation in Ref.~\cite{DS:Altshuler80} and we will not repeat it here. This leads us to the conclusion that, to first order in the interaction, \begin{equation} \label{eq:sQH} {\cal Q}_{xy}^l={\cal Q}^l_H={\sf Q}^l_H=0. \end{equation} This implies, in turn, that the quantum process gives no contribution to the Hall trans{\it conductance}. When the conductivity matrix is inverted, however, this leads to a {\it nonvanishing} contribution to the Hall trans{\it resistance}, following the analysis of Sec.~\ref{se:F}. \section{The potential in layer $\bbox{2}$: $\bbox{V^{(2)}\lowercase{(x,y)}}$} \label{se:V2} In the previous sections we have discussed the relation between the integrated potentials measured on the edges of layer 2 and the applied current in layer 1.
We have used Gauss's theorem and did not need to calculate the potential in layer 2, $V^{(2)}(x,y)$, in detail. Nevertheless, it is instructive to understand, in a few simple examples, the detailed behavior of $V^{(2)}(x,y)$. In principle, with advanced technology, $V^{(2)}(x,y)$ can be measured directly \cite{QHE:Ashoori98}. We will discuss the two situations mentioned in the Introduction: the ``parallel strip'' and ``cross'' geometries with a pinhole distribution that is concentrated in the middle of the sample, and identical layers with pinholes that are distributed uniformly. In all cases we assume that the 2D gases have rectangular shapes of sizes $L_1 \times W_1$ and $L_2 \times W_2$. \subsection{Tunneling through pinholes in the middle of the sample} \label{se:middle} We now assume that there are $N$ pinholes distributed inside a rectangle of size $a \times b$ centered at the origin, with $a, b \ll L_1,L_2,W_1,W_2$. For simplicity, as before, we assume that the frictional forces and the tunneling between the layers are weak, i.e., $\rho_D \ll R_\square$ and $R_\square \ll R_\perp $. In that case we can solve equations (\ref{eq:con}) and (\ref{eq:ohmslawf}) with the boundary condition (\ref{eq:bc12}) perturbatively in $\rho_D/R_\square$ and $R_\square/R_\perp$. If $W_2 \le W_1$ and $L_2 \le L_1$, then the classical drag is present all over layer $2$ and its contribution to the potential in layer $2$ can readily be found by changing Eq.~(\ref{eq:J0}) to: \begin{equation} \label{eq:J0c} {J_i}_{(0)}^\alpha ({\vec r}) = -\left[\sigma_{ij} \delta_{\alpha \beta} + \sigma_{ij}^D X^{\alpha \beta} \right] \nabla_j V_{(0)}^\beta(\vec r) ,\;\;\; \end{equation} where the diagonal elements of the matrix $X^{\alpha \beta}$ are zero and the off diagonal elements are $1$.
Substituting in Eq.~(\ref{eq:con}) (whose right hand side vanishes in the case discussed) and using (\ref{eq:contenb}), we find the Laplace equation $\Delta V_{(0)}^\alpha=0$ with the boundary condition (\ref{eq:bc12}). The solution is straightforward and in the limit $\sigma^D \ll \sigma$ is given by: \begin{equation} \label{sol:V0D} V_{(0)}^{(1)}= - J\frac{\sigma}{\sigma^2+\sigma_H^2} x - J \frac{\sigma_H}{\sigma^2+\sigma_H^2} y,\;\;\; V_{(0)}^{(2)}= J \rho_D x. \end{equation} Notice that the Hall transvoltage vanishes. This happens due to the relation (\ref{eq:sigmaD}) \cite{DR:Kamenev95}. In order to find (perturbatively) the voltage in layer 2 in the presence of tunneling, we substitute Eqs.~(\ref{def:V1}) and (\ref{eq:Ql}) in Eq.~(\ref{eq:ohmslawf}); keeping terms up to first order in $R_\square / R_\perp$ and $\rho_D/R_\perp$ we find: \begin{eqnarray} \label{eq:J1} {J}_i^\alpha ({\vec r}) = {J_{(0)}}_i^\alpha(\vec r) + & \sigma_{ij} \delta_{\alpha \beta} & \bbox{\nabla}_j V_{(1)}^\beta({\vec r}) \nonumber \\ +& \sum_l{\sf Q}^l_{ij} S_T^l(|{\vec r}|)X^{\alpha \beta } & \bbox{\nabla}_j V_{(0)}^\beta ({\vec r}). \end{eqnarray} By assumption the corrections to the currents in layer 1 are small and we neglect them. Substitution in the continuity equation (\ref{eq:con}) and use of (\ref{eq:contenb}) yield the following equation for $V_{(1)}^{(2)}$, representing the correction to the voltage in layer 2 due to tunneling: \begin{eqnarray} \label{eq:V1} -(\sigma+\sigma^D) \Delta V_{(0)}^{(2)} -\bbox{\nabla}_i \sigma_{ij} \bbox{\nabla}_j V_{(1)}^{(2)} = \nonumber\\ -g^t V^{(1)}_{(0)} - \bbox{\nabla}_i \sum_l{\sf Q}_{ij}^l S_T^l(|{\vec r}|) \bbox{\nabla}_j V^{(1)}_{(0)}. \end{eqnarray} The first term on the left hand side of this equation vanishes, and we are left with an equation for $V^{(2)}_{(1)}$.
The boundary condition for $V^{(2)}_{(1)}$ should be such that the residual current due to the potential $V^{(2)}_{(1)}$ vanishes on all boundaries, i.e., \begin{eqnarray} \label{eq:bcv12} {J_{(1)}^{(2)}}_x &=& \sigma \nabla_x V_{(1)}^{(2)}(\pm L_2/2,y) \nonumber \\ & &+\sigma_H \nabla_y V_{(1)}^{(2)} (\pm L_2/2,y) = 0, \nonumber \\ {J_{(1)}^{(2)}}_y&=& -\sigma_H \nabla_x V_{(1)}^{(2)} (x,\pm W_2/2) \nonumber \\ & &+ \sigma \nabla_y V_{(1)}^{(2)} (x,\pm W_2/2)= 0. \end{eqnarray} Using (\ref{eq:contenb}) for the intralayer conductivities we end up with a Poisson equation: \begin{equation} \label{eq:So} \sigma \Delta V^{(2)}_{(1)} = {S}, \end{equation} where the source term, ${S}$, is minus the right hand side of Eq.~(\ref{eq:V1}). The Green function of this equation is given by the solution of Eq.~(\ref{eq:So}) with the boundary condition (\ref{eq:bcv12}) and a source term equal to $ \delta(\vec r-\vec r')$. In the absence of a magnetic field the boundary conditions can easily be satisfied by introducing a series of image charges. Since the boundary conditions are of Neumann type, all the charges have the same sign. In the presence of a magnetic field we should add to the image charges a source of circulation that cancels the twist of the current field due to the magnetic field. The exact forms of the Green function $G_{L_2,W_2}(\vec r, \vec r')$ in the cases $W_2 \ll L_2$ and $W_2 \gg L_2$ are given in Eq.~(\ref{eq:Gsol}). For a general source term we find: $$ V_{(1)}^{(2)} (\vec r) = \int d^2 r' S(\vec r') G_{L_2,W_2}(\vec r, \vec r'). $$ By assumption, the source term in Eq.~(\ref{eq:V1}) is local and concentrated near the origin, and we are interested in the voltage at distances much larger than $a$, $b$, and the thermal length $L_T$.
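The same-sign image construction can be checked directly. A sketch for an infinitely long strip $|y|<W_2/2$ at zero magnetic field (a simplification of the rectangular geometry of the text): reflecting the source repeatedly across the walls, with a symmetrically truncated sum, produces a Green function whose normal derivative vanishes on the walls.

```python
import math

def G_strip(x, y, x0, y0, W, n_img=100):
    """Neumann Green function of the strip |y| < W/2 built from the same-sign
    image charges described in the text: the source at (x0, y0) is reflected
    repeatedly across the walls y = +/- W/2 (sum truncated symmetrically)."""
    g = 0.0
    for n in range(-n_img, n_img + 1):
        for ys in (y0 + 2 * n * W, -y0 + (2 * n + 1) * W):
            g -= math.log(math.hypot(x - x0, y - ys)) / (2 * math.pi)
    return g

W, x0, y0, h = 1.0, 0.0, 0.13, 1e-5
# normal derivative dG/dy on the wall y = W/2: vanishes by construction,
# since the truncated image set is symmetric about the wall
dG_wall = (G_strip(0.05, W / 2 + h, x0, y0, W)
           - G_strip(0.05, W / 2 - h, x0, y0, W)) / (2 * h)
# for comparison, the same derivative at an interior point is of order one
dG_bulk = (G_strip(0.05, 0.3 + h, x0, y0, W)
           - G_strip(0.05, 0.3 - h, x0, y0, W)) / (2 * h)
```

Since the strip is open at $x \to \pm\infty$, the flux of the single positive source escapes through the ends and a purely same-sign image set is consistent with the Neumann walls.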
In addition, due to the boundary condition and conservation of charge, $\int d^2 r' S(\vec r') =0$, so we can use a dipole approximation to find: $$ V_{(1)}^{(2)} (\vec r) = \vec P \cdot \nabla' G (\vec r), \;\;\; \vec P= \int d^2 r \vec r S (\vec r), $$ \begin{equation} \label{eq:V1a} \nabla' G(\vec r) = \left. \nabla' G_{L_2,W_2}(\vec r, \vec r') \right|_{\vec r'=0}. \end{equation} If we assume that the pinholes are distributed uniformly over a rectangle of size $a \times b$, the dipole moment associated with $g^t$ is given by: $$ {\vec P}_{\rm g} = \frac{J}{R_\perp a b} \int_{-a/2}^{a/2} dx \int_{-b/2}^{b/2} dy \left( x \; \hat x + y \; \hat y \right) \left( R_\square x + R_H y\right) \Rightarrow $$ \begin{equation} \label{eq:Pg} {\vec P}_{\rm g} = \frac{ J}{12 R_\perp} \left( a^2 R_\square \hat x + b^2 R_H \hat y \right). \end{equation} The dipole moment arising from the quantum corrections can be larger than the leakage contribution, since at low temperatures $L_T$ can be larger than both $a$ and $b$. Using (\ref{eq:contenb}) and the solution (\ref{sol:V0D}) for $V_0$ we arrive at: \begin{eqnarray} \label{eq:PQ} {\vec P}_{\sf Q} = \int dx dy \times \nonumber \\ \sum_l\Bigl[ x{\sf Q}^l_{xx} \nabla_x \left( S^l_T(\vec r ) \nabla_x V_0^{(1)} \right) \;\; \hat x + \nonumber \\ y {\sf Q}^l_{xx} \nabla_y \left( S^l_T(\vec r )\nabla_x V_0^{(1)} \right) \;\; \hat y \Bigr] = \nonumber\\ - {\sf Q} J L_T^2 \left(R_\square \hat x + R_H \hat y \right), \end{eqnarray} where in the last equality we have used integration by parts and the normalization condition $\int d^2 r S^l_T(|\vec r|)=L_T^2 $. Even in the dipole approximation, the exact form of $V^{(2)}(x,y)$ depends on $W_2$ and $L_2$, since the information about the magnetic field enters through the boundary conditions.
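The competition between the leakage dipole of Eq.~(\ref{eq:Pg}) and the quantum dipole of Eq.~(\ref{eq:PQ}) can be made quantitative. A sketch with the illustrative parameters of Fig.~\ref{fg:v2con} (the effective mass, and hence the diffusion constant and $L_T$, are our assumptions):

```python
import math

e, hbar, kB, m_e = 1.602176634e-19, 1.054571817e-34, 1.380649e-23, 9.1093837e-31

# assumed illustrative parameters
R_square, R_perp, kappa_d, a = 3e3, 20e3, 3.0, 1e-7   # a = 0.1 micron
n, mu, m_eff = 4e14, 5.0, 0.067 * m_e

Q = (e**2 / hbar) * math.log(kappa_d) / kappa_d / (24 * math.pi) * R_square / R_perp
tau = mu * m_eff / e
vF = hbar * math.sqrt(2 * math.pi * n) / m_eff
D = 0.5 * vF**2 * tau

def dipole_ratio(T):
    """|P_Q| / |P_g| along x: (|Q| L_T^2) / (a^2 / 12 R_perp),
    cf. Eqs. (eq:Pg) and (eq:PQ)."""
    LT2 = hbar * D / (kB * T)   # L_T^2
    return Q * LT2 * 12 * R_perp / a**2

# the ratio scales as 1/T, so the quantum dipole dominates at low T
print(f"ratio at 0.5 K: {dipole_ratio(0.5):.2f}, at 0.1 K: {dipole_ratio(0.1):.2f}")
```

For these numbers the two dipoles are already comparable at half a kelvin, and the quantum contribution dominates below a few tenths of a kelvin.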
For the ``parallel strip'' configuration, i.e., $L_2 \gg W_2$, using Eq.~(\ref{eq:Gstrip}) we finally find (keeping only terms linear in $H$): \begin{eqnarray} \label{sol:V2} V^{(2)}(x,y) = J \rho_D x \;\;\;\;\;\;\;\;\;\;\;\;\; \nonumber \\ - \frac{J}{2W_2}\left( \frac{ a^2}{12 R_\perp}+|{\sf Q}| L_T^2 \right) \times \;\;\;\;\;\;\;\;\;\;\;\;\; \nonumber \\ \frac{\sinh(2 \pi x /W_2)}{ \cosh(2 \pi x /W_2)- \cos(2 \pi y /W_2)} R_\square^2 \nonumber \\ - \frac{J}{W_2} \left( \frac{ a^2+b^2}{12 R_\perp} + 2|{\sf Q}| L_T^2 \right) \times \;\;\;\;\;\;\;\;\;\;\;\;\; \nonumber \\ \frac{ \cosh(\pi x/W_2) \sin(\pi y /W_2)}{ \cosh(2 \pi x /W_2)- \cos(2 \pi y /W_2)} R_\square R_H \nonumber \\ +\frac{J}{2W_2}\frac{ a^2}{12 R_\perp} \frac{ \sin(2 \pi y /W_2)}{ \cosh(2 \pi x /W_2)- \cos(2 \pi y /W_2)} R_\square R_H \end{eqnarray} A contour plot of $V^{(2)}(x,y)$ with typical parameters is depicted in Fig.~\ref{fg:v2con}. Since the potentials depend both on $x$ and $y$, the transresistances depend on the locations where the potentials are measured. Therefore we have defined the integrated transresistances in Eqs.~(\ref{eq:Rt}) and (\ref{eq:RtH}). In the ``parallel strip'' configuration far away from the tunneling points, e.g., at $x \rightarrow \pm L_2/2$, the potential practically does not depend on $y$. Thus, in the ``parallel strip'' configuration, the integration over the potential is unnecessary when measuring $R_t$. In the ``cross'' configuration, where $L_2 \ll W_2$, we use Eq.~(\ref{eq:GL}) with $W_2 \rightarrow L_2$; now $x$ and $y$ exchange their roles. Far away from the tunneling points, e.g., at $y \rightarrow \pm W_2/2$, the potential practically does not depend on $x$. In this ``cross'' geometry, integration of the potential over the boundary is not needed when measuring $R_t^H$.
\begin{figure}[h] \vglue 0cm \hspace{0.05\hsize} \epsfxsize=1\hsize \epsffile{v2con.eps} \refstepcounter{figure} \label{fg:v2con} \\ {\small FIG.\ \ref{fg:v2con} A contour plot of $V^{(2)}(x,y)$ according to Eq.~(\ref{sol:V2}) at $T=0.5\,{\rm K}$. We have used the same sample parameters as in the introduction: $W_2= 5 \mu m$, mobility $\mu= 5 \times 10^4 {cm^2}/{Vs}$, electron density $n=4 \times 10^{10} cm^{-2}$, $R_\perp=20 k\Omega$, $a=0.1\mu m, b=0.1\mu m$, $R_\square \cong 3 k\Omega$, $\kappa d \cong 3$, with total current $I=0.1 \mu A$ and magnetic field $H=0.05 {\rm T}$.} \end{figure} \subsection{Tunneling through uniformly distributed pinholes} \label{se:uni} Now we consider the case where the pinholes are distributed uniformly in the sample. In this situation, we do not restrict ourselves to first order in the tunneling conductance $R_\perp^{-1}$. However, we assume that $W_1=W_2=W,\; L_1=L_2=L$. In that case we can approximate $\sum_l \left( g^l \cdots \right) \approx \int dx dy \left( 1/ R_\perp L W \cdots \right)$, and substitute the expression due to the quantum correction in Eq.~(\ref{eq:J1}) by: \begin{equation} \label{eq:sQ} \sigma^Q_{ij} \equiv \sum_l {\sf Q}^l_{ij} S_T^l(r)= {\sf Q}_{ij} \frac{L_T^2}{ L W} \end{equation} In this case Eq.~(\ref{eq:ohmslawf}) should read: \begin{equation} \label{eq:Ju} {J}_i^\alpha ({\vec r}) = \left[\sigma_{ij} \delta_{\alpha \beta} + \left( \sigma_{ij}^D + \sigma_{ij}^Q \right)X^{\alpha \beta} \right] \bbox{\nabla}_j V_0^\beta ({\vec r}) \end{equation} It is now convenient to introduce upper- and lower-case variables: \begin{eqnarray} \label{eq:var} J_i = J^{(1)}_i + J^{(2)}_i, &\;\;\;\; &j_i =J^{(1)}_i - J^{(2)}_i \nonumber\\ \hat S = \hat \sigma + (\hat \sigma^D+\hat \sigma^Q), &\;\;\;\;& \hat s= \hat \sigma-(\hat \sigma^D+\hat \sigma^Q), \\ V= V^{(1)} + V^{(2)},&\;\;& v= V^{(1)} - V^{(2)} \nonumber \end{eqnarray} Using now Eqs.~(\ref{eq:con}), (\ref{eq:Ju}), boundary conditions (\ref{eq:bc12}), definitions
(\ref{eq:var}) and the antisymmetry of $\hat S$ and $ \hat s$ we arrive at the boundary problem: \begin{eqnarray} \label{EQ:V} & &\;\; \Delta V = 0 \nonumber \\ \mbox { at } x=& \pm L/2 : &\;\; -S_{xx} \bbox{\nabla}_x V - S_{xy} \bbox{\nabla}_y V =J \\ \mbox { at } y=& \pm W /2: &\;\; -S_{yx} \bbox{\nabla}_x V - S_{yy} \bbox{\nabla}_y V =0 \nonumber \end{eqnarray} \begin{eqnarray} \label{eq:v} &&\Delta v = q^2v \nonumber \\ \mbox { at } x=& \pm L/2 :& -s_{xx} \bbox{\nabla}_x v - s_{xy} \bbox{\nabla}_y v =J \\ \mbox { at } y=& \pm W/2 :& -s_{yx} \bbox{\nabla}_x v - s_{yy} \bbox{\nabla}_y v = 0 \nonumber \end{eqnarray} where the characteristic momentum is given by $q^2=2/(R_\perp L W s_{xx})$. Eq.~(\ref{eq:v}) is a partial differential equation with boundary conditions that are neither of Dirichlet nor of Neumann type. A simple, straightforward solution is not available for general values of the magnetic field, indicating the complexity of the current flow in layer 2 in the presence of a magnetic field and uniform tunneling. The detailed solution of Eqs.~(\ref{eq:v}) and (\ref{EQ:V}) (for a weak magnetic field) is given in App.~\ref{app:Vv}. We then find \begin{eqnarray} \label{sol:V2uni} V^{(2)}(x,y)= -\frac{J}{2} \left\{ \frac{1}{S_{xx}} x + \frac{S_{xy}}{S_{xx}^2} y + \frac{1}{s_{xx} q} \times \right. \;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \nonumber \\ \left. \left[ \frac{\sinh(q x)}{\cosh(q L/2)}+\frac{s_{xy}}{s_{xx}} \mbox{ thc } ( qL ) g \left(\frac{x}{L}, \frac{y}{W}; q L, q W\right) \right] \right\}, \!\!\! \end{eqnarray} where $\mbox{thc} (x)$ and $g(u,z;a,b)$ are defined in Eqs.~(\ref{def:thc}) and (\ref{def:g}), respectively, in App.~\ref{app:Vv}. \section{Conclusions} To summarize, we have studied the effect of local pinholes on the drag coefficient, $R_t$, and the Hall drag coefficient, $R_t^H$, in bi--layer dirty 2D systems at weak or vanishing magnetic field. We found that there are three contributions to $R_t$: the classical drag, the leakage contribution and the quantum contribution.
The last two exist only when there is tunneling through local pinholes between the layers. At low temperature the classical drag vanishes. If $a$, the characteristic size of the region where pinholes exist, is smaller than the thermal length, $L_T$, but $L_T$ is small compared to the overall system size, then the quantum contribution $\propto 1/T$ is dominant. The measurement of voltage in layer 2 in a typical drag experiment does not allow current to flow perpendicular to its edges. Therefore, the classical drag does not lead to any Hall drag coefficient at low temperatures. In contrast, in the presence of pinholes, we find contributions to $R_t^H$ due to the leakage and the quantum contributions. As for $R_t$, if $a< L_T$ the quantum contribution $\propto 1/T$ is dominant at low temperatures. The ``topographic map'' of the actual voltage in layer 2 is rather complicated. We therefore suggest studying $R_t$ in a ``parallel strip'' geometry (where the width of layer 1 and layer 2 is smaller than their length) and studying $R_t^H$ in a ``cross'' geometry, where the width of layer 2 is larger than its length. In these geometries, the voltage measurements are made far from the tunneling region and the measured potentials do not depend on the precise position of the probe along the boundary. The drag and Hall drag measurements give direct information on the quantum corrections arising from the interplay between disorder, interaction, and tunneling. In this work, we have studied the quantum contribution up to first order in the measuring current, the tunneling conductance $R_\perp^{-1}$, the magnetic field, and the interlayer interaction. We have also estimated the limits of validity of the calculations. \acknowledgments It is our pleasure to thank A.~Kamenev for useful discussions. YO is thankful for the support by the Rothschild fund. The work was supported by the NSF under grants no.\ DMR 94-16910, DMR 96-30064, DMR 97-14725, and DMR 98-09363.
\section{Background and Related Work} \textbf{Neural ranking models.} These models focus on measuring the relevance of sentence pairs. A common practice is to map each sentence to a dense vector separately, and then measure their relevance with a similarity function \citep{DBLP:conf/cikm/HuangHGDAH13, karpukhin-etal-2020-dense, ren-etal-2021-rocketqav2}. These models are known as dual-encoder models. Dual-encoder models can pre-compute the candidate representations offline, since the candidate encoding is conducted independently of the query. Recently, pre-trained Transformer-based models (cross-encoders) have achieved great success on many sentence pair tasks \citep{li-etal-2022-miner, guo2022semantic}. These models take the concatenation of one sentence pair as input and perform cross-attention at each layer. This brings deep interactions between the input query and the candidate. Despite the promising performance, cross-encoder models will face significant latency in online inference since all the candidates are encoded online. \noindent \textbf{Late-interaction models.} Various late interaction models have been proposed to combine the advantages of the dual-encoder and the cross-encoder. Specifically, these models disentangle the sentence pair modeling into separate encoding followed by a late interaction. They can pre-compute candidate representations offline, and model the relationship of query-candidate pairs by cross-attention online. For instance, the late-interaction models Deformer and PreTTR \citep{DBLP:conf/sigir/MacAvaneyN0TGF20} are based on a decomposed Transformer, where low-level layers encode the query and candidate separately and the higher-level layers process them jointly. As shown in Figure \ref{fig:paradigm}(c), given $N$ candidates, the late Transformer layers have to encode the query $N$ times. This results in extensive computation costs.
Other models propose to adopt a light-weight interaction mechanism, such as poly-attention \citep{DBLP:conf/iclr/HumeauSLW20} and MaxSim \citep{DBLP:conf/sigir/KhattabZ20}, instead of Transformer layers to speed up the online inference. Our MixEncoder\xspace can behave as a late interaction model by replacing the upper Transformer layers of a dual-encoder with our interaction layer. The novelty of MixEncoder\xspace lies in the light-weight cross-attention mechanism and pre-computed context embeddings. \section{Conclusion} In this paper, we propose MixEncoder\xspace, which provides a good trade-off between performance and efficiency. MixEncoder\xspace involves a light-weight cross-attention mechanism which allows us to encode the query once and process all the candidates in parallel. We evaluate MixEncoder\xspace on four datasets. The results demonstrate that MixEncoder\xspace can speed up sentence pair modeling by over 113x while achieving performance comparable to the more expensive cross-attention models. \section{EXPERIMENTS}\label{sec:setup} \subsection{Datasets} To fully evaluate the proposed MixEncoder\xspace, we conduct an empirical evaluation on four paired-input datasets, covering natural language inference (NLI), information retrieval, and utterance selection for dialogue. \textbf{MNLI} (Multi-Genre Natural Language Inference) \citep{williams-etal-2018-broad} is a crowd-sourced classification dataset. It contains sentence pairs annotated with textual entailment information. \textbf{MS MARCO Passage Reranking} \citep{DBLP:conf/nips/NguyenRSGTMD16} is a large collection of passages collected from Bing search logs. Given a query, the goal is to rank the provided 1000 passages. We use a subset of the training data\footnote{https://github.com/UKPLab/sentence-transformers/tree/master}. Following previous work, we evaluate the models on the 6980 development queries \citep{DBLP:conf/sigir/KhattabZ20, gao-etal-2020-modularized}.
\textbf{DSTC7} \citep{DBLP:journals/corr/abs-1901-03461} is a chat log corpus contained in the DSTC7 challenge (Track 1). It consists of multi-turn conversations where one partner seeks technical support from the other. \textbf{Ubuntu V2} \citep{DBLP:conf/sigdial/LowePSP15} is a popular corpus similar to DSTC7. It was proposed earlier and contains more data than DSTC7. These four datasets share the same form: every sample contains one query text and several candidates. The statistics of these datasets are detailed in Table~\ref{table:dataset}. We use $accuracy$ to evaluate the classification performance on MNLI. For the other datasets, MRR and $recall$ are used as evaluation metrics. \begin{table}[thbp] \caption{Statistics of experimental datasets. } \label{table:dataset} \centering \aboverulesep=0ex \belowrulesep=0ex \resizebox{\linewidth}{!}{ \begin{tabular}{clcccc} \toprule \multicolumn{2}{c}{Dataset} & MNLI & MS MARCO & DSTC7 & Ubuntu V2 \\ \midrule \multirow{3}{*}{Train} & \# of queries & 392,702 & 498,970 & 200,910 & 500,000 \\ & Avg length of queries & 27 & 9 & 153 & 139 \\ & Avg length of candidates & 14 & 76 & 20 & 31 \\ \midrule \multirow{4}{*}{Test} & \# of queries & 9,796 & 6,898 & 1,000 & 50,000 \\ & \# of candidates per query & 1 & 1000 & 100 & 10 \\ & Avg length of queries & 26 & 9 & 137 & 139 \\ & Avg length of candidates & 14 & 74 & 20 & 31 \\ \bottomrule \end{tabular} } \end{table} \subsection{Baselines} MixEncoder\xspace is compared to the following baselines: \textbf{Cross-BERT} refers to the original BERT \citep{devlin-etal-2019-bert}. We take the output at the CLS token as the representation of the pair. This embedding is fed into a feedforward network to generate logits for either classification tasks or matching tasks. \textbf{Dual-BERT (Sentence-BERT)} was proposed by Reimers et al. \citep{reimers-gurevych-2019-sentence}. This model uses a siamese architecture and encodes text pairs separately.
\textbf{Deformer} \citep{cao-etal-2020-deformer} is a decomposed Transformer, which utilizes lower layers to encode the query and candidates separately and then uses upper layers to encode text pairs together. We followed the settings reported in the original paper and split BERT-base into nine lower layers and three upper layers. \textbf{Poly-Encoder} \cite{DBLP:conf/iclr/HumeauSLW20} encodes the query and its candidates separately and performs a light-weight late interaction. Before the interaction layer, the query is compressed into several context vectors. We set the number of these context vectors to 64 and 360, respectively. \textbf{ColBERT} \cite{DBLP:conf/sigir/KhattabZ20} is a late interaction model for information retrieval. It adopts the MaxSim operation to obtain relevance scores after encoding the sentence pairs separately. Note that the design of ColBERT prohibits its use on classification tasks. \subsection{Training Details} While training models on MNLI, we follow the conventional practice that uses the labels provided in the dataset. While training models on the other three datasets, we use in-batch negatives \citep{karpukhin-etal-2020-dense, qu-etal-2021-rocketqa}, which treats the positive candidates of the other queries in a training batch as negative candidates. For Cross-BERT and Deformer, which require exhaustive computation, we set the batch size to 16 due to the limitation of computation resources. For the other models, we set the batch size to 64. All the models use one BERT (base, uncased) with 12 layers and fine-tune it for up to 50 epochs with a learning rate of 1e-5 and linear scheduling. All experiments are conducted on a server with four Nvidia Tesla A100 GPUs, each with 40 GB of graphics memory.
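The in-batch negative scheme above can be sketched in a few lines (an illustrative sketch of ours, not the authors' implementation; function and variable names are hypothetical): the query-candidate scores within a batch form a $B \times B$ matrix whose diagonal entries correspond to the positive pairs.

```python
import numpy as np

def in_batch_negative_loss(query_emb, cand_emb):
    """Cross-entropy with in-batch negatives.

    query_emb, cand_emb: (B, d) arrays; row i of cand_emb is the positive
    candidate for query i, and the remaining B-1 rows act as negatives.
    """
    scores = query_emb @ cand_emb.T                      # (B, B) score matrix
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    # The gold candidate for query i sits on the diagonal.
    return -np.mean(np.diag(log_probs))
```

With well-separated embeddings the diagonal dominates and the loss approaches zero; with uninformative (all-equal) embeddings it equals $\log B$.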
\section{Introduction} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/paradigm_v3.0.pdf} \caption{Architecture illustration of three popular sentence pair approaches and the proposed MixEncoder\xspace, where $N$ denotes the number of candidates and $s$ denotes the relevance score of candidate-query pairs. The cache is used to store the pre-computed embeddings. } \label{fig:paradigm} \end{figure*} Transformer-based models \citep{DBLP:conf/nips/VaswaniSPUJGKP17, devlin-etal-2019-bert} have shown promising performance on sentence pair modeling tasks, such as natural language inference, question answering, information retrieval, etc.\ \citep{DBLP:journals/corr/abs-1901-04085, qu-etal-2021-rocketqa, zhao-etal-2021-sparta}. Most pair modeling tasks can be depicted as a procedure of scoring the candidates given a query. A fundamental component of these models is the pre-trained cross-encoder, which models the interaction between the query and the candidates. As shown in Figure \ref{fig:paradigm}(a), the cross-encoder takes a pair of query and candidate as input, and calculates the interaction between them at each layer by the input-wide self-attention mechanism. This interaction will be calculated $N$ times if there are $N$ candidates. Despite its effective text representation power, this leads to an exhaustive computation cost, especially when the number of candidates is very large. This computation cost therefore restricts the use of these cross-encoder models in many real-world applications \citep{chen-etal-2020-dipair}. Extensive studies, including dual-encoder \citep{DBLP:conf/cikm/HuangHGDAH13, reimers-gurevych-2019-sentence} and \textit{late interaction} models \citep{DBLP:conf/sigir/MacAvaneyN0TGF20, gao-etal-2020-modularized, chen-etal-2020-dipair, DBLP:conf/sigir/KhattabZ20}, have been proposed to accelerate the Transformer inference on sentence pair modeling tasks.
As shown in Figure \ref{fig:paradigm}(b), the query and candidates are processed separately in dual-encoders, thus the candidates can be pre-computed and cached for online inference, resulting in fast inference speed. However, this speedup is built upon sacrificing the expressiveness of cross-attention \citep{luan-etal-2021-sparse, hu-etal-2021-context}. Alternatively, late-interaction models adjust dual-encoders by appending an interaction component, such as a stack of Transformer layers \citep{cao-etal-2020-deformer, DBLP:conf/sigir/NieZGRSJ20}, for modelling the interaction between the query and the cached candidates, as illustrated in Figure \ref{fig:paradigm}(c). Although these interaction components better preserve the effectiveness of cross-attention than dual-encoders, they still suffer from the heavy costs of the interaction component. Clearly, the computation cost of late-interaction models still increases dramatically as the number of candidates grows \citep{chen-etal-2020-dipair, zhang-etal-2021-embarrassingly}. To tackle the above issues, we propose a new paradigm named MixEncoder\xspace to speed up the inference while maintaining the expressiveness of cross-attention. In particular, MixEncoder\xspace involves a light-weight cross-attention mechanism which mostly disentangles the query encoding from query-candidate interaction. Specifically, MixEncoder\xspace encodes the query along with pre-computed candidates during runtime, and conducts the light-weight cross-attention at each interaction layer (named the \textit{interaction layer\xspace}), as illustrated in Figure~\ref{fig:paradigm}(d). This design of light-weight cross-attention allows the interaction layer to process all the candidates in parallel. Thus, MixEncoder\xspace is able to encode the query only once, regardless of the number of candidates. MixEncoder\xspace accelerates the online inference from two aspects.
Firstly, MixEncoder\xspace processes each candidate into $k$ dense context embeddings offline and caches them, where $k$ is a hyper-parameter. This setup speeds up the online inference using pre-computed representations. Secondly, our interaction layer performs attention only from candidates to the query. This disentangles the query encoding from query-candidate interaction, thus avoiding repeated query encoding and supporting processing multiple candidates in parallel. We evaluate the capability of MixEncoder\xspace for sentence pair modeling on four benchmark datasets, related to tasks of natural language inference, dialogue and information retrieval. The results demonstrate that MixEncoder\xspace better balances effectiveness and efficiency. For example, MixEncoder\xspace achieves a substantial speedup of more than 113x over the cross-encoder and provides competitive performance. Our main contributions can be summarized as follows: \begin{itemize} \item A novel framework MixEncoder\xspace is proposed for fast and accurate sentence pair modeling. MixEncoder\xspace involves a light-weight cross-attention mechanism which allows us to encode the query once and process all the candidates in parallel. \item Extensive experiments on four public datasets demonstrate that the proposed MixEncoder\xspace provides better trade-offs between effectiveness and efficiency than state-of-the-art models. \end{itemize} \section{Limitations} Although MixEncoder\xspace has been demonstrated to be effective, we recognize that it does not perform well on MS MARCO. This indicates that our MixEncoder\xspace falls short of detecting token overlap, since it may lose token-level semantics of candidates during pre-computation.
Moreover, MixEncoder\xspace is not evaluated on a large-scale evaluation dataset, such as an end-to-end retrieval task, which requires models to retrieve the top-$k$ candidates from millions of candidates \citep{qu-etal-2021-rocketqa, DBLP:conf/sigir/KhattabZ20}. \section{Method}\label{sec:method} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/overview_v6.0.pdf} \caption{Overview of the proposed MixEncoder\xspace. } \label{fig:overview} \end{figure*} In this section, we first introduce the details of the proposed MixEncoder\xspace, which mainly includes two stages, i.e., the candidate pre-computation stage and the query encoding stage. Figure \ref{fig:overview} provides the architecture of MixEncoder\xspace. We then describe how to apply MixEncoder\xspace to different tasks, such as the classification task and the ranking task. \subsection{Problem Statement} Given a sentence pair, models are required to generate either a prediction or a ranking score. The former is known as a linear-probe classification task \citep{DBLP:conf/lrec/ConneauK18} and the latter is a multi-candidate ranking task \citep{DBLP:conf/nips/NguyenRSGTMD16}. For the classification task, the training set consists of paired samples, $\{q_i, p_i, y_i\}^N_{i=1}$, where $y_i$ is the label of the sentence pair, $N$ is the size of the dataset, and $q_i$, $p_i$ denote the query and the candidate, respectively. For the ranking task, the samples in the training set can be denoted as $\{q_i, p_i, C_i\}^N_{i=1}$, where $p_i$ is the positive candidate for $q_i$ while $C_i$ is a set of negative candidates. \subsection{Candidate Pre-computation} \label{sec:pre-computation} We describe how MixEncoder\xspace pre-computes each existing candidate into several context embeddings offline. Let the token embeddings of one candidate be $T_i = [t_1, \cdots, t_d]$.
We experiment with two strategies to obtain $k$ context embeddings from these token embeddings: (1) prepending $k$ special tokens $\{S_i\}^k_{i=1}$ to $T_i$ before feeding $T_i$ into a Transformer encoder \citep{DBLP:conf/nips/VaswaniSPUJGKP17, devlin-etal-2019-bert}, and using the output at these special tokens as context embeddings ($S$-strategy); (2) maintaining $k$ context codes \cite{DBLP:conf/iclr/HumeauSLW20} to extract global features from the last-layer output of the encoder by an attention mechanism ($C$-strategy). The default configuration is the $S$-strategy, as it provides slightly better performance. Suppose there are $N$ candidates; we use $E_0 \in {\mathbb{R}}^{N \times k \times d}$ to denote the pre-computed context embeddings of these candidates, where $d$ indicates the embedding size. \subsection{Query Encoding} During the online inference stage, for a query with $N$ candidates, models have to measure the relevance of $N$ query-candidate pairs. A typical cross-encoder repeatedly concatenates the query with each candidate and encodes it $N$ times. This leads to prohibitive computation costs. One of the most effective ways to reduce the computation is to reduce the number of times the query is encoded. In this section, we first give an overview of the query encoder. Then, we introduce the core component of our MixEncoder\xspace: the \textit{interaction layer}. It performs a light-weight candidate-to-query cross-attention to estimate relevance scores in a single pass of the query encoding, no matter how many candidates the query has. \subsubsection{Overview of Encoder} Take an encoder that consists of five Transformer layers $L_1, L_2, \dots, L_5$ as an example. When encoding the incoming query online, we replace the second and fifth Transformer layers $L_2, L_5$ with two interaction layers, denoted as $I_2^1, I_5^2$. Now the encoder can be depicted as $\{ L_1, I_2^1, L_3, L_4, I_5^2\}$, as shown in Figure \ref{fig:overview}(b).
These layers are applied to the incoming query sequentially to produce contextualized representations of the query and the candidates. Formally, each Transformer layer $L_i(\cdot)$ takes the query token representations $q_{i-1} \in {\mathbb{R}}^{m \times d}$ from the previous layer and produces a new representation matrix $q_i = L_i(q_{i-1})$, where $m$ denotes the query length and $q_i \in {\mathbb{R}}^{m \times d}$. Each interaction layer\xspace $I_i^j(\cdot)$ takes the query token representations $q_{i-1}$ from the previous layer as input, along with the context embeddings $E_{j-1}$ and a set of \textit{state vectors} $H_{j-1} \in {\mathbb{R}}^{N \times d}$ from the previous interaction layer\xspace (or the cache): \begin{align} q_i, E_j, H_j = I_i^j(q_{i-1}, E_{j-1}, H_{j-1}). \end{align} The outputs $E$ and $H$ of the last interaction layer\xspace are fed into a classifier to generate predictions for each query-candidate pair. \subsubsection{Interaction Layer} \label{sec:enricher} This section describes the details of how the interaction layer generates candidate and query representations. \textbf{Candidate Representation.} Given $q_{i-1}$ and $E_{j-1}$, layer $I_i^j$ performs a self-attention over $q_{i-1}$, and a candidate-to-query cross-attention over $q_{i-1}, E_{j-1}$ simultaneously, as shown in Figure \ref{fig:overview}(b). Formally, the query self-attention is conducted as \begin{align} &Q_{i-1}, K_{i-1}, V_{i-1} = \textrm{LN}(q_{i-1}), \label{eq:query-qkv} \\ &q_i = \textrm{FFN}(\textrm{Att}(Q_{i-1}, K_{i-1}, V_{i-1})), \label{eq:query-encoding} \end{align} where we write $\textrm{LN}(\cdot)$ for a linear transformation, $\textrm{FFN}(\cdot)$ for a feed-forward network and $\textrm{Att}(Q, K, V)$ for a self-attention operation \cite{DBLP:conf/nips/VaswaniSPUJGKP17}.
The cross-attention is formulated as \begin{align} &Q^\prime_{j-1}, K^\prime_{j-1}, V^\prime_{j-1} = \textrm{LN}(E_{j-1}), \\ &E_j = \textrm{FFN}(\textrm{Att}(Q^\prime_{j-1}, [K^\prime_{j-1};K_{i-1}], [V^\prime_{j-1};V_{i-1}])). \label{eq:cross-attention} \end{align} By simply concatenating $K_{i-1}, V_{i-1}$ generated from the query with $K^\prime_{j-1}, V^\prime_{j-1}$ generated from the candidates, the cross-attention operation dominated by $Q^\prime_{j-1}$ aggregates the semantics for each query-candidate pair and produces new context embeddings $E_j \in {\mathbb{R}}^{N \times k \times d}$. As shown in Eqs.~(\ref{eq:query-encoding}) and (\ref{eq:cross-attention}), the interaction layer\xspace separates the query encoding and the cross-attention, thus the candidate embeddings are transparent to the query. This design allows encoding the query only once regardless of the number of its candidates. \textbf{Query Representation.} As shown in Eq.~(\ref{eq:cross-attention}), the context embedding matrix $E$ contains the semantics from both the query and the candidates. It can be used to estimate the relevance score of candidate-query pairs as \begin{align} s=\textrm{LN}(\textrm{Avg}(E)), \label{eq:only_e} \end{align} where $s \in \mathbb{R} ^N$. Since $E$ may not be sufficient to represent the semantics of each candidate-query pair, we choose to maintain a separate embedding $h$ to represent the query. Concretely, we conduct an attention operation at each interaction layer and obtain a unique query state for each candidate. We first employ a pooling operation followed by a linear transformation on $E_{j-1}$ and obtain $Q^* \in {\mathbb{R}}^{N \times d}$. Then, the query semantics w.r.t. the candidates are extracted as \begin{equation} H^* = \textrm{FFN}(\textrm{Att}(Q^*, K_{i-1}, V_{i-1})), \end{equation} where $K_{i-1}, V_{i-1}$ are generated by Eq.~(\ref{eq:query-qkv}).
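The structure of this candidate-to-query attention can be sketched as follows (a single-head numpy sketch of ours, with the linear maps and FFNs taken as the identity for brevity; all names are illustrative). The key point is that the query keys and values are computed once and shared by all $N$ candidates:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def candidate_to_query_attention(E, K_q, V_q):
    """E: (N, k, d) pre-computed context embeddings.
    K_q, V_q: (m, d) keys/values derived once from the query tokens.
    Each candidate attends to its own k context embeddings concatenated
    with the shared query keys/values (the concatenated-K,V attention)."""
    N, k, d = E.shape
    out = np.empty_like(E)
    for n in range(N):                           # parallelizable over candidates
        K = np.concatenate([E[n], K_q], axis=0)  # (k + m, d)
        V = np.concatenate([E[n], V_q], axis=0)
        att = softmax(E[n] @ K.T / np.sqrt(d))   # (k, k + m) attention weights
        out[n] = att @ V                         # updated context embeddings
    return out
```

Because `K_q` and `V_q` depend only on the query, they are computed in a single query-encoding pass and reused across candidates, which is the source of the claimed speedup.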
Next, the gate proposed by \citet{cho-etal-2014-properties} is utilized to fuse $H^*$ with the query states $H_{j-1}$: \begin{align} H_j = \textrm{Gate}(H^*, H_{j-1}), \end{align} where $H_j \in {\mathbb{R}}^{N \times d}$. Each row of $H_j$ stands for the representation of the incoming query with respect to one candidate. \begin{table*}[thbp] \caption{Time Complexity of the attention module in MixEncoder\xspace, Dual-BERT and Cross-BERT. We use $q$, $d$ to denote the query and candidate length, respectively. $h$ indicates the hidden layer dimension, $N_{c}$ indicates the number of candidates for each query and $k$ indicates the number of context embeddings for each candidate.} \label{table:complexity} \centering \aboverulesep=0ex \belowrulesep=0ex \resizebox{\linewidth}{!}{ \begin{tabular}{cccc} \toprule Model & Total ($N_c = 1$) & Pre-computation ($N_c = 1$) & Online\\ \specialrule{0.05em}{1pt}{1pt} Dual-BERT & $h(d^2 + q^2) + h^2(d+q)$ & $hd^2 + h^2d$ & $hq^2 + h^2q$ \\ Cross-BERT & $h(d+q)^2 + h^2(d+q)$ & $0$ & $N_{c}(h(q+d)^2 + h^2(q+d))$ \\ \specialrule{0.05em}{1pt}{1pt} \multirow{2}{*}{MixEncoder\xspace} & \multirow{2}{*}{$h(d^2 + q(q+k) + k^2) + h^2(d+q+k)$} & \multirow{2}{*}{$hd^2 + h^2d$} & $h(q(q + kN_{c}) + k^2N_{c}) + h^2(q+kN_{c})$ \\ & & & $ = hq^2 + h^2q + N_c(k + q + h)hk$\\ \bottomrule \end{tabular} } \end{table*} \subsection{Classifier} Let $H$ and $E$ denote the query states and the candidate context embeddings generated by the last interaction layer\xspace, respectively. For the $i$-th candidate, the representation of the query is the $i$-th row of $H$, denoted as $h_i$. The representation of this candidate is the mean of the $i$-th row of context embeddings $E$, denoted as $e_i$.
\textbf{Classification Task}: For a classification task such as NLI, we concatenate the embeddings $h_i$ and $e_i$ with the element-wise difference $|h_i - e_i|$ \citep{reimers-gurevych-2019-sentence} and feed them into a feed-forward network: \begin{equation} \textrm{logits} = \textrm{FFN}(h_i, e_i, |h_i - e_i|). \end{equation} The network is trained to minimize a cross-entropy loss. \textbf{Ranking Task}: For ranking tasks such as passage retrieval, we estimate the relevance score of candidate-query pairs as: \begin{equation} s_i = h_i \cdot e_i, \end{equation} where $\cdot$ denotes the dot product. The network is optimized by minimizing a cross-entropy loss in which the logits are $s_1, \cdots, s_N$. \subsection{Time Complexity} Table \ref{table:complexity} presents the time complexity of Dual-BERT, Cross-BERT, and our proposed MixEncoder\xspace. We can observe that the dual-encoder and MixEncoder\xspace support offline pre-computation to reduce the online time complexity. During the online inference, the query encoding cost term ($hq^2 + h^2q$) of both Dual-BERT and MixEncoder\xspace does not increase with the number of candidates, since they conduct query encoding only once. Moreover, MixEncoder\xspace's query-candidate term $N_c(k + q + h)hk$ can be reduced by setting $k$ to a small value, which can further speed up the inference. \section{Results} \begin{table}[thbp] \caption{Time to evaluate 100 queries with 1k candidates.
The space used to cache the pre-computed embeddings for 1k candidates is also shown.} \label{table:speedup} \centering \aboverulesep=0ex \belowrulesep=0ex \resizebox{\linewidth}{!}{ \begin{tabular}{ccc} \toprule \multirow{2}{*}{Model} & \multicolumn{1}{c}{Time (ms)} & Space (GB)\\ & 1k & 1k\\ \midrule Dual-BERT & $7.2$ & $0.3$\\ PolyEncoder-64 & $7.3$ & $0.3$\\ PolyEncoder-360 & $7.5$ & $0.3$\\ ColBERT & $27.0$ & $8.6$\\ Deformer & $488.7$ & $52.7$\\ Cross-BERT & $949.4$ & -\\ \hline MixEncoder\xspace-a & $8.4$ & $0.3$\\ MixEncoder\xspace-b & $10.6$ & $0.3$\\ MixEncoder\xspace-c & $11.2$ & $0.6$\\ \bottomrule \end{tabular} } \end{table} \begin{table*}[thbp] \caption{Performance of Dual-BERT, Cross-BERT, late interaction baselines, and three variants of MixEncoder\xspace on four datasets.} \label{table:performance} \centering \aboverulesep=0ex \belowrulesep=0ex \resizebox{\linewidth}{!}{ \begin{tabular}{ccccccccc} \toprule \multirow{2}{*}{Model} & MNLI & \multicolumn{2}{c}{Ubuntu} & \multicolumn{2}{c}{DSTC7} & \multicolumn{2}{c}{MS MARCO} & Speedup \\ & Accuracy & R1@10 & MRR & R1@100 & MRR & R1@1000 & MRR(dev) & Times\\ \midrule Cross-BERT & \bm{$83.7$} & $83.1$ & $89.4$ & $66.0$ & $75.2$ & \bm{$23.3$} & \bm{$36.0$} & $1.0$x\\ Dual-BERT & $75.2$ & $81.6$ & $88.5$ & $65.8$ & $73.7$ & $20.3$ & $32.2$ & \bm{$132$}x \\ PolyEncoder-64 & $76.8$ & $82.3$ & $88.9$ & \bm{$67.5$} & $75.2$ & $20.3$ & $32.3$ & $130$x\\ PolyEncoder-360 & $77.3$ & $81.8$ & $88.6$ & $65.7$ & $73.4$ & $20.5$ & $32.4$ & $127$x\\ ColBERT & $\times$ & $82.8$ & $89.3$ & $67.2$ & $74.8$ & $22.8$ & $35.4$ & $35.2$x\\ Deformer & \bm{$82.0$} & \bm{$83.2$} & \bm{$89.5$} & $66.3$ & \bm{$75.3$} & \bm{$23.0$} & \bm{$35.7$} & $1.9$x\\ \midrule MixEncoder\xspace-a & $77.5$ & $83.2$ & \bm{$89.5$} & $67.3$ & $74.7$ & $19.8$ & $31.6$ & $113$x\\ MixEncoder\xspace-b & $77.8$ & $83.2$ & \bm{$89.5$} & \bm{$68.7$} & \bm{$76.1$} & $20.7$ & $32.5$ & $89.6$x \\ MixEncoder\xspace-c & $78.4$ & \bm{$83.3$} & \bm{$89.5$} & $66.8$ & $74.9$ & $19.3$ & $31.0$ &
$84.8$x\\ \bottomrule \end{tabular} } \end{table*} \begin{table}[thbp] \caption{Ablation analysis for MixEncoder\xspace-a and MixEncoder\xspace-b (MRR).} \label{table:ablation} \centering \aboverulesep=0ex \belowrulesep=0ex \begin{tabular}{ccccc} \toprule & \multicolumn{2}{c}{Ubuntu} & \multicolumn{2}{c}{DSTC7} \\ Variants & -a & -b & -a & -b \\ \midrule Original & \bm{$89.5$} & \bm{$89.5$} & $74.7$ & \bm{$76.1$} \\ w/o $H$ & $88.9$ & $89.1$ & $74.0$ & $73.9$ \\ w/o $E$ & $89.2$ & $89.3$ & \bm{$74.8$} & $75.2$\\ Eq. \ref{eq:only_e} & $89.1$ & $89.2$ & $72.3$ & $74.4$ \\ \bottomrule \end{tabular} \end{table} Table~\ref{table:performance} shows the experimental results of Dual-BERT, Cross-BERT, existing late interaction models and three variants of MixEncoder\xspace on four datasets. We measure the inference time of all the baseline models and present the results in Table~\ref{table:speedup}. \subsection{Performance Comparison} \textbf{Variants of MixEncoder\xspace.} To study the effect of the number of interaction layers and of the number of context embeddings per candidate, we present three variants in the tables, denoted as MixEncoder\xspace-a, -b and -c. Specifically, MixEncoder\xspace-a and -b set $k$ to $1$. The former has the single interaction layer $I_{12}^1$ and the latter has layers $\{I_{10}^1, I_{11}^2, I_{12}^3\}$. MixEncoder\xspace-c has the same layers as MixEncoder\xspace-b but with $k=2$. \noindent \textbf{Inference Speed.} We conduct speed experiments to measure the online inference speed of all the baselines. Concretely, we sample 100 queries from MS MARCO, each with roughly 1000 candidates. We measure the time for computations on the GPU and exclude the time for text preprocessing and for moving data to the GPU. \noindent \textbf{Dual-BERT and Cross-BERT.} The performance of Dual-BERT and Cross-BERT is reported in the first two rows of Table \ref{table:performance}.
We can observe that MixEncoder\xspace consistently outperforms Dual-BERT. The variants with more interaction layers or more context embeddings generally yield larger improvements. For example, on DSTC7, MixEncoder\xspace-a and MixEncoder\xspace-b achieve absolute improvements of $1.0\%$ and $2.4\%$ over Dual-BERT, respectively. Moreover, MixEncoder\xspace-a provides performance comparable to Cross-BERT on both Ubuntu and DSTC7. MixEncoder\xspace-b can even outperform Cross-BERT on DSTC7 ($+0.9$), since MixEncoder\xspace can benefit from a large batch size \citep{DBLP:conf/iclr/HumeauSLW20}. On MNLI, MixEncoder\xspace-a retains $92.6\%$ of the effectiveness of Cross-BERT, and MixEncoder\xspace-c retains $93.7\%$. However, the improvement of MixEncoder\xspace on MS MARCO is slight. We also find that the difference in inference time between Dual-BERT and MixEncoder\xspace for processing samples with 1k candidates is minimal, whereas Cross-BERT is two orders of magnitude slower than these models. \noindent \textbf{Late Interaction Models.} From Tables~\ref{table:speedup} and~\ref{table:performance}, we can make the following observations. First, among all the late interaction models, Deformer, which adopts a stack of Transformer layers as the late interaction component, consistently shows the best performance on all the datasets. This demonstrates the effectiveness of cross-attention in Transformer layers. In exchange, Deformer achieves only a limited speedup ($1.9$x) over Cross-BERT. Second, our MixEncoder\xspace outperforms ColBERT and Poly-Encoder on all datasets except MS MARCO. Although ColBERT consumes more computation than MixEncoder\xspace, it shows worse performance than MixEncoder\xspace on DSTC7 and Ubuntu. This demonstrates the effectiveness of the light-weight cross-attention, which achieves a good trade-off between efficiency and effectiveness.
However, on MS MARCO, our MixEncoder\xspace and Poly-Encoder lag behind ColBERT by a large margin. We conjecture that our MixEncoder\xspace falls short of handling token-level matching; we elaborate on this in Section~\ref{sec:error}. \begin{figure*}[t] \centering \begin{subfigure}[h]{0.47\textwidth} \centering \includegraphics[width=1 \textwidth]{figures/param_1_new_2.pdf} \caption{$i$: using interaction layers\xspace $I_{i}^1$ and $I_{12}^2$.} \end{subfigure} \hfill \begin{subfigure}[h]{0.47\textwidth} \centering \includegraphics[width=1 \textwidth]{figures/param_2_new.pdf} \caption{$i$: replacing Transformer layers above the $i$-th layer with interaction layers\xspace.} \end{subfigure} \begin{subfigure}[h]{0.47\textwidth} \vspace*{0.15cm} \centering \includegraphics[width=1 \textwidth]{figures/param_3.pdf} \caption{$k$: number of context embeddings per candidate.} \end{subfigure} \hfill \begin{subfigure}[h]{0.47\textwidth} \vspace*{0.15cm} \centering \includegraphics[width=1 \textwidth]{figures/param_4_new.pdf} \caption{batch size of in-batch negative training.} \end{subfigure} \caption{Parameter analysis on the number and position of interaction layers, the number and pre-computation strategy of context embeddings, and the batch size.} \label{fig:param_analysis} \end{figure*} \subsection{Effectiveness of Interaction Layer} \textbf{Representations.} We conduct ablation studies to quantify the impact of the two key components ($E$ and $H$) utilized in MixEncoder\xspace. The results are shown in Table~\ref{table:ablation}. Each component yields a performance gain over Dual-BERT, demonstrating that our simplified cross-attention can produce effective representations for both the candidate and the query. An interesting observation is that removing $E$ can lead to a slight improvement on DSTC7. Moreover, we also implement MixEncoder\xspace based on Eq.
\ref{eq:only_e}, in which a linear transformation is applied to $E$ to estimate relevance scores; this leads to a drop in performance. \noindent \textbf{Varying the Interaction Layers.} To verify the impact of the interaction layer\xspace, we perform ablation studies varying the number and position of the layers. First, we use two interaction layers $\{I_i^1, I_{12}^2\}$ and choose $i$ from the set $\{1, 2, 4, 6, 8, 10, 11\}$. The results are shown in Figure \ref{fig:param_analysis}(a). We find that MixEncoder\xspace on Ubuntu is insensitive to $i$, while MixEncoder\xspace on DSTC7 is enhanced with $i=11$. Moreover, Figure \ref{fig:param_analysis}(b) shows the results when MixEncoder\xspace has interaction layers $\{I_i^1, I_{i+1}^2, \cdots, I_{12}^{13-i}\}$. Adding interaction layers does not always improve the ranking quality. On Ubuntu, replacing all the Transformer layers provides performance close to that with only the last layer replaced. On DSTC7, the performance of MixEncoder\xspace peaks with the last three layers replaced by our interaction layers. \subsection{Candidate Pre-computation} We study the effect of the number of candidate embeddings, denoted as $k$, and of the pre-computation strategies introduced in section \ref{sec:pre-computation}. Specifically, we choose the value of $k$ from the set $\{1, 2, 3, 10\}$ with one interaction layer $I_{12}^1$. From Figure \ref{fig:param_analysis}(c), we observe that as $k$ gets larger, the performance of MixEncoder\xspace first increases and then declines. Moreover, the two pre-computation strategies have different impacts on model performance: the S-strategy generally outperforms the C-strategy for the same $k$. \subsection{In-batch Negative Training} We vary the batch size and show the results in Figure \ref{fig:param_analysis}(d). It can be observed that increasing the batch size contributes to good performance. Moreover, we observe that models may fail to converge with a small batch size.
Due to limited computational resources, we set the batch size to 64 for our training. \subsection{Error Analysis} \label{sec:error} In this section, we take a sample from MS MARCO to analyze our errors. We observe that MixEncoder\xspace falls short of detecting token overlap. Given the query ``foods and supplements to lower blood sugar'', MixEncoder\xspace fails to pay attention to the keyword ``supplements'', which appears in both the query and the positive candidate. We conjecture that this drawback is due to the pre-computation that compresses each candidate into $k$ context embeddings, which loses the token-level features of the candidates. On the contrary, ColBERT caches all the token embeddings of the candidates and estimates relevance scores based on token-level similarity.
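The contrast drawn here between pooled pre-computed embeddings and ColBERT-style token-level matching can be sketched in a few lines (our toy illustration; the function names and array shapes are not from the paper):

```python
import numpy as np

def pooled_score(query_vec, cand_embs):
    # Pre-computation-style scoring: each candidate is reduced to k context
    # embeddings; relevance is the best dot product with the query vector.
    # Token-level detail inside the candidate is lost at this point.
    return float(np.max(cand_embs @ query_vec))

def maxsim_score(query_toks, cand_toks):
    # ColBERT-style late interaction: keep every token embedding, match each
    # query token to its best candidate token, and sum the maxima, so a
    # shared keyword contributes to the score directly.
    sim = query_toks @ cand_toks.T        # (n_query_tokens, n_cand_tokens)
    return float(sim.max(axis=1).sum())
```

Caching every token embedding rather than $k$ vectors per candidate is also why ColBERT's cache in Table~\ref{table:speedup} is so much larger.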
\section{Introduction} It has been established at lower energies that the ${\rm J}/\psi$s created in nuclear collisions are suppressed when the system size and the centrality of the collision increase~\cite{SPS}. For heavy enough nuclei the suppression exceeds the {\em normal} nuclear absorption observed with light nuclei. Several mechanisms, including color screening in the quark gluon plasma, have been proposed to explain this {\em abnormal} suppression observed at low energy. Measurements performed at higher energy are therefore crucial to constrain the models and discriminate between them. Moreover, they may uncover new mechanisms such as quark recombination. \section{Experimental setup and datasets} The PHENIX experiment measures the ${\rm J}/\psi\rightarrow e^+e^-$ decay at mid rapidity ($|\eta|<0.35$) and the ${\rm J}/\psi\rightarrow \mu^+\mu^-$ decay at forward rapidity ($|\eta|\in[1.2,2.2]$). Electrons are identified using RICH detectors and by matching their momentum measured in drift chambers with the energy deposited in electromagnetic calorimeters. Muons are selected using a thick absorber located close to the interaction point, tracked using cathode strip chambers and triggered on using a succession of Iarocci tube planes and steel walls. The results presented here correspond to 240~$\mu$b$^{-1}$ of Au+Au collisions collected in 2004 and 3.1~nb$^{-1}$ of Cu+Cu collisions collected in 2005, both at $\sqrt{s_{NN}}=200$~GeV. For each centrality, rapidity or transverse momentum bin, the ${\rm J}/\psi$ yield is obtained by counting the number of unlike-sign dilepton pairs in a mass window centered on the ${\rm J}/\psi$ mass, after the background has been subtracted using either an event-mixing technique or the mass distribution of the like-sign pairs. It is corrected for the acceptance and efficiency of the spectrometer as well as the trigger efficiency, and normalized to the number of recorded collisions.
This yield enters the numerator of the nuclear modification factor, which is the ratio of the measured yield in A+B collisions to the yield expected assuming binary scaling from p+p collisions. The ${\rm J}/\psi$ yield measured in p+p collisions is taken from the published PHENIX 2003 results~\cite{PHENIX_RUN3}. Systematic errors have been assigned to each contribution. For Au+Au collisions, the dominant systematic error comes from the background subtraction, due to the poor signal-to-background ratio, especially for central collisions. \section{Results} Figures~\ref{fig_vogt} and~\ref{fig_theo} show the ${\rm J}/\psi$ nuclear modification factor as a function of the number of participants for Au+Au and Cu+Cu collisions, both at mid and forward rapidity, together with the published measurement from d+Au~\cite{PHENIX_RUN3}. A suppression of about a factor of 3 is observed for the most central collisions. In figure~\ref{fig_vogt}, the data are compared to the prediction of R. Vogt~\cite{VOGT}, assuming 3~mb nuclear absorption on top of EKS98 shadowing. The prediction misses the most central points. Moreover, from~\cite{PHENIX_RUN3} it appears that 3~mb is an upper bound for the nuclear absorption cross-section in d+Au collisions. \begin{figure}[h] \vspace*{-5mm} \begin{center} \includegraphics[width=75mm,clip=true]{eps/r_aa_n_part_vogt_bw.eps} \end{center} \vspace*{-15mm} \caption{\label{fig_vogt}${\rm J}/\psi$ nuclear modification factor as a function of the number of participants, compared to the predictions of R. Vogt for normal nuclear absorption. Vertical bars are statistical errors, brackets are point-to-point systematics and boxes are global systematics.} \vspace*{-5mm} \end{figure} In figure~\ref{fig_theo} the same data are compared to two sets of models. They all involve additional final-state interactions and reproduce both the CERN SPS results~\cite{SPS} and the low-statistics PHENIX results from 2002~\cite{PHENIX_RUN2}.
The predictions plotted in the left panel (\cite{CAPELLA,GRANDCHAMP,KOSTYUK}) overestimate the ${\rm J}/\psi$ suppression. The predictions plotted in the right panel show better agreement with the data or even underestimate the suppression. The predictions on the right involve either quark recombination mechanisms \cite{GRANDCHAMP,KOSTYUK,ANDRONIC,BRATKOVSKAYA} or detailed ${\rm J}/\psi$ transport in the medium \cite{ZHU}. Note however that these predictions differ in the way the cold nuclear absorption is accounted for, in the p+p ${\rm J}/\psi$ production cross-section used for normalization, and in the open charm production cross-section entering the recombination mechanism. \begin{figure}[h] \vspace*{-5mm} \begin{center} \begin{tabular}{cc} \includegraphics[width=75mm,clip=true]{eps/r_aa_n_part_extreme_bw.eps}& \includegraphics[width=75mm,clip=true]{eps/r_aa_n_part_moderate_bw.eps}\\ \end{tabular} \end{center} \vspace*{-15mm} \caption{\label{fig_theo}${\rm J}/\psi$ nuclear modification factor as a function of the number of participants, compared to various models of final-state interaction in the medium.} \vspace*{-5mm} \end{figure} Figure~\ref{fig_y} shows the ${\rm J}/\psi$ invariant yield $BdN/dy$ as a function of the ${\rm J}/\psi$ rapidity for different centrality bins in Au+Au, Cu+Cu and p+p collisions. The vertical lines are statistical errors, the bands are point-to-point systematics. Within the large error bars, no significant change in the shape of the distribution is observed from p+p collisions to the most central Au+Au collisions. \begin{figure}[h] \vspace*{-5mm} \begin{center} \includegraphics[width=75mm,clip=true]{eps/y_bw.eps} \end{center} \vspace*{-15mm} \caption{\label{fig_y}${\rm J}/\psi$ yield as a function of rapidity and centrality.} \vspace*{-5mm} \end{figure} Figure~\ref{fig_pt} shows the ${\rm J}/\psi$ mean square transverse momentum as a function of the number of collisions for Au+Au, Cu+Cu, d+Au and p+p collisions.
The predictions of Thews~\cite{THEWS} without (dashed lines) and with recombination (solid lines) are also shown. \begin{figure}[h] \vspace*{-5mm} \begin{center} \begin{tabular}{cc} \includegraphics[width=75mm,clip=true]{eps/mean_pt_electron_bw.eps}& \includegraphics[width=75mm,clip=true]{eps/mean_pt_muon_bw.eps}\\ \end{tabular} \end{center} \vspace*{-15mm} \caption{\label{fig_pt}${\rm J}/\psi$ mean square transverse momentum as a function of the number of collisions, at mid rapidity (left panel) and forward rapidity (right panel).} \vspace*{-5mm} \end{figure} \section{Summary} PHENIX measured ${\rm J}/\psi$ production in Au+Au and Cu+Cu collisions at $\sqrt{s_{NN}}=200$~GeV at both mid and forward rapidity. The dependence of the ${\rm J}/\psi$ nuclear modification factor on the number of participants makes it possible to discriminate between some of the models capable of reproducing the suppression seen at lower energy. The ${\rm J}/\psi$ rapidity spectra show no obvious variation in shape. Interpretation of the ${\rm J}/\psi$ mean square transverse momentum is unclear due to the large errors.
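The nuclear modification factor used throughout reduces to a one-line computation; a minimal sketch (the inputs below are hypothetical, not PHENIX data, and error propagation is omitted):

```python
def nuclear_modification_factor(yield_ab, n_coll, yield_pp):
    """R_AB = (measured yield in A+B) / (<N_coll> * p+p yield).

    R_AB = 1 corresponds to binary scaling from p+p collisions;
    R_AB < 1 indicates suppression.
    """
    return yield_ab / (n_coll * yield_pp)

# Illustration: a suppression by about a factor of 3, as observed for the
# most central collisions, corresponds to R_AB ~ 1/3.
```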
\section{Introduction} The mapping of natural language text to a defined dictionary of concepts is one of the signature tasks of information extraction, and a variety of software has been written to address this problem. We informally assessed a small set of open-source and freely available software for this task before selecting MetaMap \cite{Aronson:2010kv} and YTEX \cite{garla:2011ml} for a more formal evaluation. Our objective was to find ``off the shelf'' software that could find disease mentions in clinical text and be robust with regard to clinical domain as well as document structure and syntax. Although MetaMap was designed for publications, it continues to be used on clinical text, often as a standard of comparison. Although YTEX has received comparatively little attention, it is based on the more popular cTAKES \cite{Savova:2010hy} system developed for the Mayo Clinic and is specifically designed for clinical text. We did download and informally evaluate cTAKES, but found that its dictionary lookup annotator component works by matching tokenized, Lucene (http://lucene.apache.org/) indexed dictionary entries and was therefore unable to distinguish different concepts sharing the same lexical tokens. Recently, YTEX has improved on cTAKES dictionary lookup by adding a sense disambiguation component \cite{garla2012knowledge} that selects the most appropriate concept for a given span of text. It uses the adapted Lesk method \cite{banerjee2002adapted} to compute semantic similarity over a context window, whereas MetaMap uses a series of weighted heuristics to select the appropriate candidate \cite{Aronson:2001jv}. As part of our evaluation, we hoped to learn the relative strengths and weaknesses of these two very different approaches. \section{Methodology} The CORAL.1 system utilizes the 2012 version of MetaMap for concept recognition whereas the CORAL.2 system utilizes YTEX 0.8.
MetaMap was called using the MetaMap UIMA annotator to allow for integration into our NLP framework, which is also UIMA \cite{Ferrucci:2004kt} based. YTEX was run as a standalone system, and a custom-written UIMA annotator was then used to transfer results from the YTEX database into a format compatible with our system. Hits from both systems were then processed in the same fashion, going through an identical set of annotators. Processing included filtering to remove high-level stop concepts (20 in total) not typically used or useful for fine-grained concept recognition. Two such examples are `Disease' (C0012634) and `Injury' (C0175677). Our system also removed concepts whose names contained text matching `M/mouse' and `M/mice', as earlier work revealed that some animal models of disease were being mapped to inappropriate UMLS semantic types. Finally, we restricted the resulting hits to those matching at least one of the requisite UMLS semantic types for this contest. The filtered hits often failed to cover the full span of the annotation when the annotated text started with acronyms or adjectives. For example, for concepts such as `LA enlargement' and `lower abdominal tenderness' present in the training data, our systems were able to capture `enlargement' and `abdominal tenderness' but left the words `LA' and `lower' uncaptured. Thus, an additional post-processing step was added to improve results for the strict task. During this post-processing step, we checked whether the concepts annotated by our models were preceded by any of the words `LV', `MCA', `LA', `abd', `PEA', `LE', `LGI', `ICA', `C2', `B12', `RCA', `RUQ', `GI', `VF', `lower', `chronic' that were not themselves captured by our models. If so, the concept boundaries were expanded to include these words.
These missing abbreviations were identified as the largest and most readily correctable error classes for both systems based on training-data performance. It should be emphasized that the CORAL system framework built around both MetaMap and YTEX does not do any concept prediction itself; it simply refines predicted concept boundaries and removes concepts predicted by MetaMap and YTEX that it determines are incorrect. \subsection{Parameter Settings} Default settings were used for YTEX, including a concept window of length 10 and the default INTRINSIC setting as a semantic similarity metric. MetaMap was run with the included word sense disambiguation server (-y option), restricting allowable concepts to the SNOMED CT\textregistered and RXNORM vocabularies. This restriction is implicit in the default YTEX configuration, which indexes only these two vocabularies. The 2012AB distribution of UMLS was used by both programs. \subsection{Patient Tracking List (PTL) Data Set} Prior to running the ShareClef task data, we evaluated both MetaMap and YTEX on PTL notes from the University of Alabama at Birmingham Health System. PTL documents consist of a summary of the patient's condition, with the majority of text in point-form format along with some full sentences; a precise breakdown was not calculated. The documents were irregularly formatted but highly structured, so in contrast to the ShareClef analysis, only disease mentions in the appropriate ``problem'' sections of the PTL document were analyzed. Document segmentation was done with a complex, manually derived regular expression that identifies the start and end of the problem section; the same expression was used for both YTEX and MetaMap. In total, there are 68 such annotated PTL documents with 603 annotated entities, of which 223 are problems. Annotation was performed by two annotators (including one physician) and observed agreement was 91.9\% (uncorrected Cohen's kappa) \cite{carletta1996assessing}.
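The boundary-expansion post-processing described in the Methodology can be sketched as follows (a simplification of the UIMA pipeline; `expand_span` and its character-offset interface are illustrative, not the actual implementation):

```python
import re

# Words that, when immediately preceding a predicted concept span, are
# folded into the span (identified from training-data errors).
PREFIXES = ['LV', 'MCA', 'LA', 'abd', 'PEA', 'LE', 'LGI', 'ICA',
            'C2', 'B12', 'RCA', 'RUQ', 'GI', 'VF', 'lower', 'chronic']

def expand_span(text, start, end):
    """Extend [start, end) leftward if the preceding word is a known prefix."""
    prefix_pat = re.compile(
        r'(?:\b(' + '|'.join(map(re.escape, PREFIXES)) + r')\s+)$',
        re.IGNORECASE)
    m = prefix_pat.search(text[:start])
    return (m.start(1), end) if m else (start, end)
```

For example, with the span covering only `enlargement' in ``LA enlargement'', the function widens the boundaries to include the preceding `LA'.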
\section{Results} \subsection{Concept Boundary Detection Results} Results for concept boundary detection (Task 1a) are shown in Table 1 and Table 2 for training and test data, respectively. \begin{table} \caption{System Boundary Detection Results on Training Data} \begin{center} \begin{tabular}{| l | l | l | l | l | } \hline System & Task & Precision & Recall & Score \\ \hline TeamCORAL.1 (YTEX) & Strict & 0.512 & 0.440 & 0.473 \\ \hline TeamCORAL.2 (MetaMap) & Strict & 0.722 & 0.460 & 0.562 \\ \hline TeamCORAL.1 (YTEX) & Relaxed & 0.915 & 0.639 & 0.752 \\ \hline TeamCORAL.2 (MetaMap) & Relaxed & 0.875 & 0.556 & 0.680 \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{System Boundary Detection Results on Test Data} \begin{center} \begin{tabular}{| l | l | l | l | l | } \hline System & Task & Precision & Recall & Score \\ \hline TeamCORAL.1 (YTEX) & Strict & 0.584 & 0.446 & 0.505 \\ \hline TeamCORAL.2 (MetaMap) & Strict & 0.796 & 0.487 & 0.604 \\ \hline TeamCORAL.1 (YTEX) & Relaxed & 0.942 & 0.601 & 0.734 \\ \hline TeamCORAL.2 (MetaMap) & Relaxed & 0.909 & 0.554 & 0.688 \\ \hline \end{tabular} \end{center} \end{table} Boundary detection was difficult for both systems. This was in part because neither MetaMap nor YTEX has the ability to annotate discontinuous concept boundaries, limiting the effectiveness of both systems and effectively capping the maximum achievable performance. Additionally, MetaMap tended to include additional text (mostly prepositions and modifiers) that the ShareClef annotators did not. YTEX precision was significantly reduced by the inclusion of simple nouns where a compound noun was expected by the annotators. As a result, YTEX performed poorly relative to MetaMap on the strict task. Results were harder to compare for Task 1b, the mapping of text to CUIs.
The PTL results (shown in Table 3) do not include accuracy data, since false negatives or false negative frames are not annotated in the PTL data set as they are in the ShareClef data set (Table 4). Instead, results for precision, recall and F-score are shown for the PTL documents. \begin{table} \caption{CUI Mapping Results on PTL Datasets} \begin{center} \begin{tabular}{| l | l | l | l | l | } \hline System & Precision & Recall & Score & Accuracy \\ \hline TeamCORAL.1 (YTEX) & 55.72 & 68.33 & 61.38 & NA \\ \hline TeamCORAL.2 (MetaMap) & 80.28 & 79.19 & 79.73 & NA \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{CUI Mapping Results on Training Data} \begin{center} \begin{tabular}{| l | l | l | } \hline System & Task & Accuracy \\ \hline TeamCORAL.1 (YTEX) & Strict & 0.414 \\ \hline TeamCORAL.2 (MetaMap) & Strict & 0.422 \\ \hline TeamCORAL.1 (YTEX) & Relaxed & 0.939 \\ \hline TeamCORAL.2 (MetaMap) & Relaxed & 0.916 \\ \hline \end{tabular} \end{center} \end{table} \section{Discussion} Strict boundary detection was difficult for both systems, and a varied set of error classes was generated. The biggest problem for strict boundary detection, however, was the exclusion of adjacent relevant text (modifiers), a deficiency that was partly overcome by our boundary extension rules. Abbreviations were also particularly and consistently difficult; both systems labelled text such as `BACTERIA OCC' as Osteochondritis dissecans. In general, the MetaMap-based system performed better than the YTEX-based system on the strict task. The relaxed task showed significantly higher scores for both systems, but both still failed to identify some problems, particularly phrases containing common polysemous words such as ``inability'', as in ``inability to walk''.
Overall, YTEX performed slightly better than MetaMap at this relaxed task, due to the inclusion of partially mapped annotations as fully scoring (YTEX had many such annotations) and the superior ability of YTEX to correctly identify polysemous text. In Task 1b, a formatting error prevented our results from being processed, but we show results for the training data and the PTL documents here. Results differed only slightly for YTEX and MetaMap. Common sources of error in both systems stemmed from unrecognized abbreviations and low-frequency concepts that neither the semantic distributional approaches used by YTEX nor the heuristics and word sense disambiguation server employed by MetaMap could overcome. For example, YTEX identified the physician abbreviation ``Dr.'' as diabetic retinopathy, and MetaMap identified the word call in ``Call or return immediately'' as ``c-ALL'', a precursor B-cell lymphoblastic leukemia. Another small class of errors may be due to problems in the ShareClef annotation. For example, the ShareClef annotation identifies fever as CUI-less instead of pyrexia, a synonym for fever. Results for both systems were significantly worse on the PTL data set (Table 3) than on the ShareClef data set (Table 4). The difference can be explained in large part by the annotation of the PTL data set, where the annotation guidelines specify that only the most precise concept possible should be annotated, thus penalizing YTEX, which generates a larger number of more general (false positive) concepts. As described in the methodology section, the PTL document set is an order of magnitude smaller than the ShareClef data set and thus less reliable. Nonetheless, it underscores the fact that small differences in annotation guidelines can have a large impact on clinical information extraction evaluation. One system that was also given serious consideration for a more formal evaluation was the NCBO annotator.
However, due to the difficulty of the VM setup (including the loading of source vocabularies) and concerns about being banned for sending thousands of queries to the NCBO web service, we declined to investigate this option further. Finally, higher performance for YTEX could likely have been gained by parameter tuning. An earlier evaluation of YTEX sense disambiguation \cite{garla2012knowledge} revealed that no single semantic similarity metric performed best on all datasets, and this parameter could have been tuned on the training set. Additionally, a larger context window size for YTEX gave higher performance (at the cost of run time) and could have been adjusted upward to improve our performance here. MetaMap performance could also have been improved by taking advantage of the scoring information it returns (not reported by YTEX) to select a more optimal cutoff level. In conclusion, given a choice between YTEX and MetaMap, our results suggest that YTEX would be the better system for ``off the shelf'' concept mapping. Other factors, such as active development and the ability to scale, also favor YTEX. However, MetaMap may be a better choice for precisely identifying concept boundaries. \section{Acknowledgements} This project was supported by the UAB Center for Clinical and Translational Science (grant number UL1 RR025777 from the NIH National Center for Research Resources) and the UAB Office of the Vice President for Information Technology.
\section*{ {{ Dominance of the light-quark condensate \\ in the heavy-to-light exclusive decays}}} \vspace*{1.5cm} {\bf S. Narison} \\ \vspace{0.3cm} Theoretical Physics Division, CERN\\ CH - 1211 Geneva 23\\ and\\ Laboratoire de Physique Math\'ematique\\ Universit\'e de Montpellier II\\ Place Eug\`ene Bataillon\\ F-34095 - Montpellier Cedex 05\\ \vspace*{2.0cm} {\bf Abstract} \\ \end{center} \vspace*{2mm} \noindent Using the QCD {\it hybrid} (moments-Laplace) sum rule, we show $semi$-$analytically$ that, in the limit $M_b \rightarrow \infty$, the $q^2$ and $M_b$ behaviours of the heavy-to-light exclusive ($\bar B\rightarrow \rho~(\pi)$ semileptonic as well as the $B\rightarrow \rho\gamma$ rare) decay form factors are $universally$ dominated by the contribution of the soft light-quark condensate rather than by that of the hard perturbative diagram. The QCD-analytic $q^2$ behaviour of the form factors is a polynomial in $q^2/M^2_b$, which mimics quite well the usual pole parametrization, except in the case of the $A_1^B$ form factor, where there is a significant deviation from this polar form. The $M_b$-dependence of the form factors expected from HQET and lattice results is recovered. We extract with good accuracy the ratios $V^B(0)/A^B_1(0) \simeq A^B_2(0)/A^B_1(0) \simeq 1.11\pm 0.01$ and $A^B_1(0)/F^B_1(0) \simeq 1.18 \pm 0.06$; combined with the ``world average'' value of $f^B_+(0)$ and/or $F^B_1(0)$, these ratios lead to the decay rates $\Gamma_{\bar B\rightarrow \pi e\bar \nu} \simeq (4.3 \pm 0.7) \times|V_{ub}|^2 \times 10^{12}$ s$^{-1}$, $\Gamma_{\bar B\rightarrow \rho e\bar \nu}/ \Gamma_{\bar B\rightarrow \pi e\bar \nu} \simeq 0.9 \pm 0.2$, and to the ratios of the $\rho$-polarised rates $\Gamma_+/\Gamma_- \simeq 0.20 \pm 0.01$, $\alpha \equiv 2\Gamma_L/\Gamma_T-1 \simeq -(0.60 \pm 0.01)$.
\vspace*{2.0cm} \noindent \begin{flushleft} CERN-TH.7237/94 \\ \today \end{flushleft} \vfill\eject \pagestyle{empty} \setcounter{page}{1} \pagestyle{plain} \section{Introduction} In previous papers \cite{SN1,SN2}, we have introduced the {\it hybrid} (moments-Laplace) sum rule (HSR), which is more appropriate than the {\it popular} double exponential Laplace (Borel) sum rule (DLSR) for studying the form factors of a heavy-to-light quark transition; indeed, the {\it hybrid} sum rule has a well-defined behaviour when the heavy quark mass tends to infinity. In \cite{SN2}, we studied analytically with the HSR the $M_b$-dependence of the $B\rightarrow K^* \gamma$ form factor and found that it is dominated by the light-quark condensate and behaves like $\sqrt{M_b}$ at $q^2=0$. We have also noticed in \cite{SN1} that the light-quark condensate effect is important in the numerical evaluation of the $\bar B \rightarrow \rho~(\pi)~ $ semileptonic form factors, while it has been noticed numerically in \cite{DOSCH} using the DLSR that, for the $\bar B \rightarrow \rho$ semi-leptonic decays, the $q^2$ behaviour of the $A^B_1$ form factor in the time-like region is very different from the one expected from the $standard$ pole representation. In this paper, we shall study analytically the $M_b$-behaviour of the different form factors for a better understanding of the previous numerical observations. As a consequence, we shall re-examine with our analytic expression the validity of the $q^2$-dependence obtained numerically in \cite{DOSCH}, although we shall mainly concentrate our analysis on the Euclidean region ($q^2 \le 0$). There, the QCD calculations of the three-point function are reliable; also, the lattice results have more data points.
For this purpose, we shall analyse the form factors of the $\bar B \rightarrow \pi (\rho)~ $ semileptonic and $B\rightarrow \rho\gamma$ rare processes defined in a standard way as: \begin{eqnarray} \langle\rho(p')\vert \bar u \gamma_\mu (1-\gamma_5) b \vert B(p)\rangle &=&(M_B+M_\rho)A_1 \epsilon^*_\mu -\frac{A_2}{M_B+M_\rho}\epsilon^*p'(p+p')_\mu \nonumber \\ &&+\frac{2V}{M_B+M_\rho} \epsilon_{\mu \nu \rho \sigma}\epsilon^{*\nu}p^\rho p'^\sigma , \nonumber \\ \langle\pi(p')\vert \bar u\gamma_\mu b\vert B(p)\rangle &=& f_+(p+p')_{\mu}+f_-(p-p')_\mu , \nonumber \\ \langle\rho(p')\vert \bar u \sigma_{\mu \nu}\left( \frac{1+\gamma_5}{2}\right) q^\nu b\vert B(p)\rangle &=& i\epsilon_{\mu \nu \rho \sigma}\epsilon^{*\nu}p^\rho p'^\sigma F^{B\rightarrow\rho}_1 \nonumber \\ &&+ \left\{ \epsilon^*_\mu(M^2_B-M^2_{\rho})-\epsilon^*q(p+p')_{\mu} \right\} \frac{F^{B\rightarrow \rho}_1}{2}. \end{eqnarray} In the QCD spectral sum rules (QSSR) evaluation of the form factors, we shall consider the generic three-point function: \begin{equation} V(p,p',q) = -\int d^4x \int d^4y\, \mbox{exp} (ip'x-ipy) \langle 0\vert TJ_L(x)O(0)J_b(y)\vert 0\rangle, \end{equation} whose Lorentz decompositions are analogous to the previous hadronic amplitudes. Here $J_L \equiv \bar u\gamma_\mu d {}~~(J_L\equiv (m_u+m_d) \bar u i\gamma_5 d)$ is the bilinear quark current having the quantum numbers of the $\rho~(\pi) $ mesons; $J_b \equiv (M_b +m_d) \bar d i\gamma_5 b$ is the quark current associated with the $B$-meson; $O\equiv \bar b\gamma_\mu u$ is the charged weak current for the semileptonic transition, while $O\equiv \bar b\frac{1}{2}\sigma^{\mu\nu}q_\nu$ is the penguin operator for the rare decay. The vertex function obeys the double dispersion relation: \begin{equation} V(p^2,p'^2,q^2)= \frac{1}{4\pi^2}\int_{M^2_b}^{\infty}\frac{ds}{s-p^2} \int_{0}^{\infty}\frac{ ds'}{s'-p'^2} \, \mbox{Im}\, V(s,s',q^2)+...
\end{equation} As already emphasized in \cite{SN2}, we shall work with the HSR: \begin{eqnarray} {\cal H}(n,\tau') &\equiv & \frac{1}{n!}\left(\frac{\partial}{\partial p^2}\right)^n _{p^2=0}{\cal L}\left( V(p^2,p'^2,q^2) \right) \nonumber \\ &=& \frac{1}{\pi^2}\int_{M^2_b}^{\infty}\frac{ds}{s^{n+1}} \int_{0}^{\infty} ds' \, \mbox{exp}(-\tau' s') \mbox{Im}\, V(s,s',q^2), \end{eqnarray} rather than with the DLSR (${\cal L}$ is the Laplace transform operator). This sum rule guarantees that terms of the type: \begin{equation} \frac{M^{2l}_b}{\left( M^2_b-p^2 \right)^{k} p'^{2k'}}, \end{equation} which appear in the successive evaluation of the Wilson coefficients of high-dimension operators, will not spoil the OPE for $M_b \rightarrow \infty$, unlike the double Laplace transform sum rule, which blows up in this limit for some of its applications to heavy-to-light transitions. \noindent In order to make contact with observables, we insert intermediate states between the charged weak and hadronic currents in (2), while we smear the higher-states effects with the discontinuity of the QCD graphs from a threshold $t_c$ ($t'_c$) for the heavy (light) mesons. Therefore, we have the sum rule: \begin{eqnarray} {\cal H}_{res}& \simeq& 2C_L f_B \frac{F(q^2)}{M_B^{2n}} \mbox{exp} \, (-M^2_L\tau') \nonumber \\ &\simeq& \frac{1}{4\pi^2}\int_{M^2_b}^{t_c}\frac{ds}{s^{n+1}} \int_{0}^{t'_c} ds' \, \mbox{exp}(-\tau' s') \mbox{Im}\, V_{{PT}}(s,s',q^2) + {NPT}. \end{eqnarray} $PT~ (NPT)$ refers to perturbative (non-perturbative) contributions; $C_L \equiv f_P M_P^2$ for light pseudoscalar mesons, while $C_L\equiv M_V^2/(2\gamma_V)$ for light vector mesons; $M_L$ is the light-meson mass. The decay constants are normalized as: \begin{eqnarray} & & (m_q+M_Q)\langle 0\vert \bar q (i\gamma_5)Q\vert P\rangle= \sqrt 2 M^2_Pf_P \nonumber \\ & &\langle 0\vert \bar q \gamma_\mu Q\vert V\rangle =\epsilon^*_\mu \sqrt 2 \frac{M^2_V}{2 \gamma_V}.
\end{eqnarray} $F(q^2)$ is the form factor of interest. For our purpose, we shall consider the expression of the decay constant $f_B$ from the moments sum rule at the same order ({\it i.e.}~to leading order) \cite{SN3}: \begin{equation} \frac{2f_B^2}{{(M_B^2)}^{n_2-1}} \simeq \frac{3}{8\pi^2}M_b^2 \int_{M^2_b}^{t_c} \frac{ds}{s^{n_2+1}} \; \frac{(s-M_b^2)^2}{s} -\frac{\langle\bar qq\rangle}{M_b^{2n_2-1}}\left\{ 1-\frac{n_2(n_2+1)}{4} \; \left( \frac{M_0^2}{M_b^2}\right)\right\} . \end{equation} For convenience, we shall work with the non-relativistic energy parameters $E$ and $\delta M_{(b)}$: \begin{equation} s \equiv (M_b+E)^2 ~~~~~~~~~\mbox{and}~~~~~~~~~ \delta M_{(b)} \equiv M_B-M_b, \end{equation} where, as we saw in the analysis of the two-point correlator, the continuum energy $E_c$ is \cite{SN3}: \begin{eqnarray} E^D_c &\simeq& (1.08 \pm 0.26)~\mbox{GeV} \nonumber \\ E^B_c &\simeq& (1.30 \pm 0.10)~\mbox{GeV} \nonumber \\ E^{\infty}_c &\simeq& (1.5 \sim 1.7)~\mbox{GeV}. \end{eqnarray} In terms of these continuum energies, and at large values of $M_b$, the decay constant reads \cite{SN3}: \begin{eqnarray} f^2_B &\simeq &\frac{1}{\pi^2} \frac{\left( E^B_c \right) ^3}{M_b}\left( \frac{M_B}{M_b} \right)^{2n_2-1} \Bigg\{ 1-\frac{3}{2}(n_2+1)\left( \frac{E^B_c}{M_b}\right) \nonumber \\ &&+\frac{3}{5}\left( (2n_2+3)(n_2+1)+ \frac{1}{4} \right) \left( \frac{E^B_c}{M_b} \right) ^2 -\frac{\pi^2}{2}\frac{\langle {\bar qq} \rangle}{\left( E^B_c\right) ^3} \left( 1- \frac{n_2(n_2+1)}{4} \frac{M_0^2}{M^2_b}\right) \Bigg\}. \nonumber \\ \end{eqnarray} \section{The \boldmath{$\bar B \rightarrow \rho$} semileptonic decay} The corresponding form factors defined in (1) have been estimated with the HSR \cite{SN1} and the DLSR \cite{SN1,DOSCH}.
Instead of taking the average values from the two methods as was done in \cite{SN1}, we shall only consider the HSR estimates, because of the drawbacks previously found in the DLSR approach: \begin{equation} A^B_1(0) \simeq 0.16 - 0.41, \; \; \; A^B_2(0) \simeq 0.26 - 0.58, \; \; \; V^B(0) \simeq 0.28 - 0.61. \end{equation} The errors in these numbers are large, as the HSR has no $n$-stability. In the following, we derive semi-analytic formulae for the form factors. Using the leading-order (in $\alpha_s$) QCD results for the three-point function, and including the effect of the dimension-5 operators as given in \cite{OVI}, one deduces the sum rule ($q^2 \le 0$): \begin{equation} A^B_1(q^2) \simeq -\frac{1}{2}~{\langle \bar qq \rangle }~ \frac{ \rho_1}{f_B}\left( \frac{M_B}{M_b}\right)^{2n} \left\{ 1-\frac{q^2}{M^2_b} +\delta^{(5)}+\frac{\cal{I}_1}{M^2_b} \right\}, \end{equation} with: \begin{eqnarray} \rho_1 &\equiv& \left( \frac{\gamma_\rho}{M^2_\rho}\right) \frac{M_b}{(M_B+M_\rho)} \mbox{exp}(M^2_\rho \tau') \nonumber \\ \delta^{(5)}&\equiv& \frac{\tau'M^2_0}{6}\Bigg\{ n-\frac{1}{\tau'M^2_b} \left( 1-\frac{3}{4}n-\frac{3}{4}n^2\right) \nonumber \\ &&-\frac{q^2}{M^2_b}\left( (n+1)\left(\frac{3}{2}n-1\right) +2\tau' M^2_b(1+2n)+2(n+1)q^2\tau'\right) \Bigg\} \nonumber \\ \end{eqnarray} where $\cal{I}_1$ is the integral from the perturbative expression of the spectral function. It is constant for $M_b \rightarrow \infty$. Its value and behaviour at finite values of $M_b$ and for $q^2=0$ are given in Fig. 1. At $M_b=4.6$ GeV, it reads $\cal{I}_1 \simeq (3.6 \pm 1.2)~\mbox{GeV}^2$, and it behaves to leading order in $1/M_b$ as $t'^2_cE_c/\langle \bar qq \rangle$, which is reassuring, as it gives a clear meaning to the expansion in (13). The other values of the QCD parameters are \cite{SN4}: $\langle \bar qq \rangle =-(189~\mbox{MeV})^3 \left(\log {M_b/\Lambda}\right)^{12/23}$ and $M^2_0 = (0.80\pm 0.01)~ \mbox{GeV}^2$ from the analysis of the $B,B^*$ sum rules.
The $\rho$-meson coupling is $\gamma_\rho \simeq 2.55$. \noindent One can deduce from the previous expression that $A^B_1$ is dominated by the light-quark condensate in the $1/M_b$-expansion counting rule. Moreover, the perturbative contribution is also numerically small at the $b$-mass. The absence of $n$-stability is explicit in our formula, due to the meson-quark mass difference entering the overall factor. This effect could, however, be minimized by using the expression of $f_B$ in (11) and by imposing that the effects due to the meson-quark mass differences from the three- and two-point functions compensate each other to leading order. This is realized by choosing: \begin{equation} 2n=n_2-\frac{1}{2}, \end{equation} which fixes $n$ to be about 2, given that the two-point function stabilizes for $n_2 \simeq 4$--5. In this way, one would obtain the leading-order result in $\alpha_s$: \begin{equation} A_1^B \simeq 0.3 - 0.6, \end{equation} where we have used the leading-order value $f^{L.O}_B \simeq 1.24 f_\pi$. However, although this result is consistent with the previous numerical fits in (12) and in \cite{DOSCH}, we only consider it as an indication of consistency rather than a safe estimate, because of the previous drawbacks concerning the $n$-stability. One should also keep in mind that the values given in (12) correspond to the value $f_B \simeq 1.6 f_\pi$, which includes the radiative corrections to the two-point correlator and which corresponds to smaller values of $n$. Improving the result in (16) needs, of course, an evaluation of the radiative corrections to the three-point function. The $q^2$-dependence of $A^B_1$ can be obtained with good accuracy, without imposing the previous constraint. We obtain the numerical result in Fig. 2, which is well approximated by the effect from the light-quark condensate alone: \begin{equation} R^B_1(q^2) \equiv \frac{A_1^B(q^2)}{A_1^B(0)} \simeq 1-\frac{q^2}{M^2_b}.
\end{equation} Performing an analytic continuation of this result into the time-like region, we reproduce the numerical result from the DLSR \cite{DOSCH} (see Fig. 2), which indicates that the result is independent of the form of the sum rule used, while in the time-like region the perturbative contribution still remains a small correction to the light-quark condensate one. This result is clearly in contradiction with the $standard$ pole-dominance parametrization as, indeed, the form factor decreases for increasing $q^2$-values. A test of this result needs lattice measurements improved over the ones available in \cite{SACH}. From the previous expressions, and using the fact that $f_B$ behaves as $1/\sqrt{M_b}$, one can also predict the $M_b$-behaviour of the form factor at $q^2_{max} \simeq M^2_b+2M_\rho M_b$: \begin{equation} A^B_1(q^2_{max}) \sim \frac{1}{\sqrt{M_b}}, \end{equation} in accordance with the expectations from HQET \cite{HQET} and the lattice results \cite{SACH}. The analysis of the $V^B$ and $A^B_2$ form factors proceeds in the same way. Here, one can realize that the inclusion of the higher dimension-5 and -6 condensates tends to destabilize the results, although these still remain small corrections to the leading-order ones. Then, neglecting these destabilizing terms, one has: \begin{eqnarray} V^B(q^2) \simeq -\frac{1}{2}~\langle\bar qq\rangle~\frac{\rho_V}{f_B}~ \left( \frac{M_B}{M_b} \right)^{2n} \left\{ 1 + \frac{\cal{I}_V}{M^2_b}+... \right\} \nonumber \\ A^B_2(q^2) \simeq -\frac{1}{2}~\langle\bar qq\rangle~ \frac{\rho_{2}}{f_B}~ \left( \frac{M_B}{M_b} \right)^{2n} \left\{ 1 + \frac{\cal{I}_2}{M^2_b}+... \right\} \end{eqnarray} with: \begin{eqnarray} \rho_V &\equiv& \left( \frac{\gamma_\rho}{M^2_\rho}\right) \frac{M_b(M_B+M_\rho)}{M^2_B} \mbox{exp}(M^2_\rho \tau') \nonumber \\ \rho_2 &\equiv& \left( \frac{\gamma_\rho}{M^2_\rho}\right) \frac{(M_B+M_\rho)}{M_b} \mbox{exp}(M^2_\rho \tau').
\end{eqnarray} $\cal{I}_{V,2}$ are integrals from the perturbative spectral functions, which also behave like $\cal{I}_1$ to leading order in $1/M_b$. They are given in Fig. 1 for $q^2=0$ and for different values of $M_b$. As expected, they are constant when $M_b \rightarrow \infty$, although, as in the previous case, the asymptotic limit is reached very slowly. Here, the $n$-stability of the analysis is also destroyed by the overall $(M_B/M_b)^{2n}$ factor, which fortunately disappears when we work with ratios of form factors. We show in Fig. 2 the $q^2$-dependence of the normalized $V^B$ and $A^B_2$, which is very weak, since the dominant light-quark condensate contribution has no $q^2$-dependence. The small increase with $q^2$ is due to the $q^2$-dependence of the small and non-leading contribution from the perturbative graph. Lattice points in the Euclidean $q^2$-region \cite{SACH} agree with our results. An analytic continuation of our results to time-like $q^2$ agrees qualitatively with the one in \cite{DOSCH}. The numerical difference in this region is due to the relative increase of the perturbative contribution in the time-like region, caused by the additional non-Landau-type singularities. However, this effect does not influence the $M_b$-behaviour of the form factors at $q^2_{max}$, which can be safely obtained from the leading-order expression given by the light-quark condensate. One can deduce: \begin{equation} V^B(q^2_{max}) \sim \sqrt{M_b},~~~~~~~ A^B_2(q^2_{max}) \sim \sqrt{M_b}. \end{equation} This result is in agreement with HQET and the lattice data points. Finally, we can also extract the ratios of form factors. At the $\tau'$-maxima and at the $n$-maxima or inflexion point, we obtain from Fig. 3: \begin{equation} r_2 \equiv \frac{A^B_2(0)}{A^B_1(0)} \simeq r_V \equiv \frac{V^B(0)}{A^B_1(0)} \simeq 1.11 \pm 0.01, \end{equation} where the accuracy is obviously due to the cancellation of systematics in the ratios.
This result is again consistent with the lattice results \cite{SACH}, but more accurate. \section{The \boldmath{$\bar B \rightarrow \pi$} semileptonic decay} The relevant form factor defined in (1) has been numerically estimated within the HSR with the value \cite{SN1}: \begin{equation} f^B_+(0) \simeq 0.20 \pm 0.05, \end{equation} where the contribution of the $\pi'$(1.3) meson has been included in order to improve the stability of the result in the sum-rule variables. In this paper, we propose to explain this numerical result by means of an analytic expression of the sum rule. Using the QCD expression given in \cite{OVI}, we obtain for a pseudoscalar current describing the pion: \begin{equation} f^B_+(q^2) \simeq -\frac{(m_u+m_d)\langle \bar qq \rangle}{4f_\pi m^2_\pi} \frac{1}{f_B} \left(\frac{M_B}{M_b}\right)^{2n} \left\{ 1+\delta^{(5)}+\frac{\cal{I}_\pi}{M^2_b} \right\} , \end{equation} where $\cal{I}_\pi$ is the spectral integral coming from the perturbative graph. Its value at $q^2=0$ for different values of $M_b$ is shown in Fig. 1. It indicates that at $M_b=4.6$ GeV, the perturbative contribution, although large, still remains a correction compared with the light-quark condensate term; $\delta^{(5)}$ is the correction due to the dimension-5 condensate and reads: \begin{equation} \delta^{(5)} \simeq -\frac{\tau'M^2_0}{6} \left\{ 2n+ \frac{\tau'^{-1}}{4M^2_b}(n+1) \left( \frac{3}{2}n-1\right) \right\} . \end{equation} One can insert the well-known PCAC relation \begin{equation} (m_u+m_d) \langle \bar qq \rangle = -m^2_\pi f^2_\pi, ~~~~~~~~~~~~~f_\pi=93.3~\mbox{MeV} \end{equation} into the previous sum rule in order to express $f^B_+$ in terms of the meson couplings. Unlike the case of the $B\rightarrow \rho$ form factors, where the scale dependence is contained in $\langle \bar qq \rangle$, $f^B_+$ is manifestly renormalization-group-invariant.
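\noindent A one-line way to make this manifest (our paraphrase of the argument): substituting the PCAC relation (26) into (24) trades the scale-dependent combination $(m_u+m_d)\langle \bar qq \rangle$ for physical quantities,
\[
f^B_+(q^2) \simeq \frac{f_\pi}{4}\, \frac{1}{f_B}
\left(\frac{M_B}{M_b}\right)^{2n}
\left\{ 1+\delta^{(5)}+\frac{\cal{I}_\pi}{M^2_b} \right\},
\]
so that no explicit renormalization scale survives in the sum rule.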
It should be noted, as in the case of the sum rule determination of the $\omega\rho\pi$ coupling \cite{SN4}, that the $f_\pi$-dependence appears indirectly via (26) in a correlator evaluated in the deep Euclidean region, while the pion is off shell, which is quite different from soft-pion techniques with an on-shell Goldstone boson. One can also deduce from (24) that for large $M_b$, $f^B_+$ behaves like $\sqrt{M_b}$. In this limit the $q^2$-dependence is rather weak, as it comes only from the non-leading $1/M_b$ contributions; we therefore have, to a good accuracy: \begin{equation} f^B_+(q^2_{max}) \simeq f^B_+(0) \sim \sqrt{M_b}. \end{equation} As in the previous case, the slight difference between our $q^2$-behaviour in the time-like region and the one obtained in \cite{DOSCH}, at a finite value of $M_b$ ($=4.6$ GeV), is only due to a numerical enhancement caused by the non-Landau singularities of the perturbative contribution in this region, and it does not disturb the $M_b$-behaviour of the form factor. Finally, we extract the ratio of form factors: \begin{equation} r_\pi \equiv \frac{A^B_1(0)}{f^B_+(0)}. \end{equation} Unfortunately, there is no common stability region here, as the stability points are different for each form factor, mainly owing to the huge mass difference between the $\rho$ and $\pi$ mesons. \section{The \boldmath{$B \rightarrow \rho \gamma$} rare decay} We can use the previous results in the HQET \cite{HQET} relation among the different form factors of the rare $B \rightarrow \rho \gamma$ decay ($F_1^{B}\equiv F_1^{B \rightarrow \rho}$) and the semileptonic ones. This relation reads around $q^2_{max}$: \begin{equation} F_1^{B}(q^2) = \frac{q^2+M^2_B-M^2_{\rho}}{2M_B} \frac{V^B(q^2)}{M_B+M_{\rho}}+\frac{M_B+M_{\rho}}{2M_B}A^B_1(q^2), \end{equation} from which we deduce: \begin{equation} F_1^{B}(q^2_{max}) \sim \sqrt{M_b}. \end{equation} However, we can also study, directly from the sum rule, the $q^2$-dependence of $F_1^{B}$.
Using the fact that the corresponding sum rule is also dominated by the light-quark condensate for $M_b \rightarrow \infty$ \cite{SN2}, an evaluation of this contribution at $q^2 \not= 0$ shows that the light-quark condensate effect has no $q^2$-dependence to leading order. Then, we can deduce, to a good accuracy: \begin{equation} F_1^B(q^2_{max}) \simeq F_1^B(0) \sim \sqrt{M_b}. \end{equation} Let us now come back to the parametrization of the form factor at $q^2=0$. We have given in \cite{SN2} an expanded interpolating formula that involves $1/M_b$ and $1/M^2_b$ corrections due to the meson-quark mass difference, to $f_B$ and to higher-dimension condensates. Here, we present a slightly modified expression, which is: \begin{equation} F_1^B(0) \simeq -\frac{1}{2}~\langle\bar qq\rangle~\frac{\rho_\gamma}{f_B}~ \left( \frac{M_B}{M_b} \right)^{2n} \left\{ 1 + \frac{\cal{I}_\gamma}{M^2_b}+... \right\}, \end{equation} with: \begin{eqnarray} \rho_\gamma &\equiv& \left( \frac{\gamma_\rho}{M^2_\rho}\right) \mbox{exp}(M^2_\rho \tau'), \nonumber \\ \cal{I}_\gamma &\simeq& (20\pm 4) \mbox{ GeV}^2 {}~~~~~ \mbox{for}~ M_b \ge 4.6~ \mbox{GeV}, \end{eqnarray} where we have neglected the effects of higher-dimension condensates; $\cal{I}_\gamma$ is the perturbative spectral integral. One should notice that, unlike the other spectral integrals in Fig. 1, $\cal{I}_\gamma$ quickly reaches its asymptotic limit when $M_b \rightarrow \infty$. Using the estimated value of $F^B_1(0)$ in \cite{SN2}, we obtain, in units of GeV: \begin{equation} F^B_1= \frac{1.6\times 10^{-2}}{f_B}\left( 1+\frac{20\pm 4}{M^2_b} \right), \end{equation} which of course leads to the same formula at large $M_b$ as in \cite{SN2}. However, due to the large coefficient of the perturbative contribution, it indicates that an extrapolation of the result obtained at low values of $M_c$ is quite dangerous, as it may lead to a wrong $M_b$-behaviour of the form factor at large mass.
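To make this warning quantitative, here is a minimal numerical sketch (outside the paper's own formalism); the charm-mass value $M_c \simeq 1.4$~GeV is our illustrative assumption, while $M_b = 4.6$~GeV and the coefficient $20~\mbox{GeV}^2$ are taken from (35):

```python
# Size of the perturbative term I_gamma / M^2 ~ (20 GeV^2) / M^2 in Eq. (35),
# evaluated at the bottom mass and at an assumed charm mass.
I_GAMMA = 20.0   # GeV^2, central value from Eq. (35)
M_B = 4.6        # GeV, bottom-quark mass used in the text
M_C = 1.4        # GeV, illustrative charm-quark mass (our assumption)

def correction_factor(m):
    """Bracket {1 + I_gamma / M^2} of Eq. (35) at quark mass m (in GeV)."""
    return 1.0 + I_GAMMA / m**2

print(round(correction_factor(M_B), 2))  # -> 1.95
print(round(correction_factor(M_C), 2))  # -> 11.2
```

At the bottom mass the perturbative term is already a correction of order one; at the charm mass it would dominate by an order of magnitude, which is the sense in which the extrapolation is dangerous.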
One should notice that (34) and the one in \cite{SN2} lead to the same numerical value of $F^D_1(0)$. Proceeding as for the former cases, we can also extract the ratio: \begin{equation} r_\gamma \equiv \frac{A_1^B(0)}{F_1^B(0)} \simeq 1.18\pm 0.06, \end{equation} from the analysis of the $\tau'$- and $n$-stability shown in Fig. 3. \section{Values of the \boldmath{$B$}-form factors} The safest prediction of the absolute value of the form factors available at present, where the different versions of the sum rules and the lattice calculations have reached a consensus, is the one for $f^B_+(0)$: \begin{eqnarray} f^B_+(0) \simeq & &0.26 \pm 0.12 \pm 0.04~~~~~~\mbox{Lattice}~ \cite{SACH}\nonumber \\ & &0.26 \pm 0.03 ~~~~~~~~~~~~~~~ \mbox{DLSR} ~\cite{DOSCH} (\mbox{see also} \cite{OVCH}) \nonumber \\ & &0.23 \pm 0.02 ~~~~~~~~~~~~~~~ \mbox{HSR+DLSR}~\cite{SN1} \nonumber \\ & &0.27 \pm 0.03 ~~~~~~~~~~~~~~~ \mbox{Light-cone}~\cite{RUCKL},\nonumber \\ \end{eqnarray} from which one can deduce the ``world average'': \begin{equation} f^B_+(0) \simeq 0.25 \pm 0.02 . \end{equation} For estimating $A^B_1(0)$, one can use the most reliable present estimate of $F^B_1$ \cite{SN2,ALI}: \begin{equation} F^B_1(0) \simeq 0.27 \pm 0.03, \end{equation} where we have used the strength of the $SU(3)$-breaking obtained in \cite{SN2} in order to convert the $B\rightarrow K^*\gamma$ result of \cite{ALI} into the $B\rightarrow \rho\gamma$ one of interest here. Then, we deduce: \begin{equation} A_1^B(0) \simeq 0.32 \pm 0.02, \end{equation} which is consistent with the direct estimates \cite{SN1,ALI}, but again more accurate. \section{\boldmath{$B$}-semileptonic-decay rates} We are now in a good position to predict the different decay rates. In so doing, we shall use the pole parametrization, except for the $A^B_1$ form factor. For the $B\rightarrow \pi$ channel, we shall use the experimental value 5.32 GeV of the $B^*$ mass.
For the $B\rightarrow \rho$ channel, we shall use the fitted value ($6.6\pm 0.6$) GeV \cite{DOSCH} for the pole mass associated to $A^B_2$ and $V^B$. For $A^B_1$, we use the linear form suggested by (13), with an effective mass of ($5.3\pm 0.7$) GeV, which we have adjusted from the numerical behaviour given in \cite{DOSCH} (we have not tried to reproduce the change of behaviour for $t\simeq (0.76-0.95)M^2_b$ obtained in \cite{DOSCH}, which is a minor effect). Using the standard definitions and notations, we obtain: \begin{equation} \Gamma_{\bar B\rightarrow \pi e\bar \nu} \simeq (3.6 \pm 0.6)\times |V_{ub}|^2\times 10^{12}~\mbox{s}^{-1}. \end{equation} We also obtain the following ratios: \begin{equation} \frac{\Gamma_{\bar B\rightarrow \rho e\bar \nu}} {\Gamma_{\bar B\rightarrow \pi e\bar \nu}} \simeq 0.9 \pm 0.2,~~~~~ \frac{\Gamma_+}{\Gamma_-} \simeq 0.20 \pm 0.01,~~~~~ \alpha \equiv 2\frac{\Gamma_L}{\Gamma_T}-1 \simeq -(0.60 \pm 0.01). \end{equation} Thanks to a better control of the ratios of form factors, the ratio of the $\bar B$ decay rates into $\rho$ and $\pi$ can be predicted to a good accuracy. It becomes compatible with the prediction obtained by only retaining the contribution of the vector component of the form factors. Our predictions are compatible with the ones in \cite{DOSCH}, except for $\Gamma_+/\Gamma_-$, where the one in \cite{DOSCH} is about one order of magnitude smaller. The difference between two of these three quantities and the ones in \cite{SN1} (the large branching ratio into $\rho$ over $\pi$, and the positive value of the asymmetry $\alpha$, in \cite{SN1} and in most other pole-dominance models for $A^B_1$) is mainly due to the different $q^2$-behaviour of $A^B_1$ used here. \section{Conclusions} We have studied, using the QCD $hybrid$ sum rule, the $M_b$- and $q^2$-behaviours of the heavy-to-light transition form factors. We find that these quantities are dominated in a $universal$ way by the light-quark condensate contribution.
\noindent The $M_b$-dependence obtained here is in perfect agreement with the HQET and lattice expectations. \noindent The $q^2$-dependence of the $A^B_1$ form factor, which is mainly due to the one from the light-quark condensate contribution, is in clear contradiction with the one expected from a pole parametrization. The other form factors can mimic $numerically$ this pole parametrization. Our QCD-analytic $q^2$-behaviours confirm the previous numerical results given in \cite{DOSCH}. \noindent We have also shown that it can be incorrect to derive the $M_b$-behaviour of the form factors at $q^2=0$ by combining the HQET result at $q^2_{max}$ with the pole parametrization. \noindent We have also shown that the unusual $q^2$-behaviour of the $A^B_1$ form factor strongly affects the branching ratio of $B \rightarrow \rho$ over $B\rightarrow \pi$ and the $\rho$-polarisation parameter $\alpha$. A measurement of these quantities, complemented by that of the $q^2$-behaviour of the form factor, should provide a good test of the sum-rules approach. \noindent We also want to stress that the extrapolation of the results obtained in this paper to the case of the $D$-meson would be too audacious: the use of the HSR in that case cannot be $rigorously$ justified, since the value of the $c$-quark mass is smaller, although it may lead to acceptable phenomenological results. We are investigating this point at present. \section*{Acknowledgements} I thank Olivier P\`{e}ne and Chris Sachrajda for discussions of the lattice results. \noindent \section*{Figure captions} \noindent {\bf Fig. 1}~~~~ $M_b$-dependence of the perturbative spectral integrals at $q^2$ = 0. \vspace*{0.5cm} \noindent {\bf Fig. 2}~~~~$q^2$-behaviour of the normalized form factors: $R_1 \equiv A_1^B(q^2)/A_1^B(0)$, $R_2 \equiv A_2^B(q^2)/A_2^B(0)$, $R_V \equiv V^B(q^2)/V^B(0)$ and $R_\pi \equiv f_+^B(q^2)/f_+^B(0)$. The squared points in the time-like region are from \cite{DOSCH}.
\vspace*{0.5cm} \noindent {\bf Fig. 3}~~~~$\tau'$- and $n$-dependences of the ratios of form factors at $q^2$ = 0: $r_2 \equiv A_2^B(0)/A_1^B(0)$, $r_V \equiv V^B(0)/A_1^B(0)$ and $r_\gamma \equiv A_1^B(0)/F_1^B(0)$. \vfill\eject \noindent
\section{Introduction} \subsection{The PIR Model} In the classical model for private information retrieval (PIR) due to Chor, Goldreich, Kushilevitz and Sudan~\cite{CGKS98}, a database $\mathbf{X}$ is replicated across $n$ servers $S_1,S_2,\dotsc,S_n$. A user wishes to retrieve one bit of the database, so sends a query to each server and downloads its reply. The user should be able to deduce the bit from the servers' replies. Moreover, no single server should gain any information on which bit the user wishes to retrieve (assuming the servers do not collude). The resulting protocol is known as an (information-theoretic) \emph{PIR scheme}; there are also computational variants of the security model~\cite{KuOs97}. The goal of PIR is to minimise the total communication between the user and the servers. In practice, the assumption that the user only wishes to retrieve one bit of the database, and the assumption that there is no shortage of server storage, seem unrealistic. Because of this, many recent papers assume that the database $\mathbf{X}$ consists of $k$ {\em records}, each of which is $R$ bits in length, so that the number of possible databases is $2^{kR}$. We denote the value of Record $i$ by $X_i$, and we write $X_{ij}$ for the $j$th bit of $X_i$. The aim of the protocol is for the user to retrieve the whole of $X_i$, rather than a single bit. We also, following Shah, Rashmi and Ramchandran~\cite{SRR14}, drop the assumption that the whole database is replicated across the $n$ servers $S_1,S_2,\ldots,S_n$ and so, for example, there is the possibility of using techniques from coding theory in general and from distributed storage codes in particular to reduce the total storage of the scheme. No restrictions are made on the particular encoding used to distribute the database across the servers, other than to assume it is deterministic, {\it i.e.}~that there is a unique way to encode each database.
This important generalisation of the model has led to very interesting recent work which we discuss in Subsection~\ref{subsec:context} below. More combinatorially, we define a private information retrieval scheme as follows. \begin{definition}[PIR scheme] Suppose a database $\mathbf{X}$ is distributed across $n$ servers $S_1,S_2,\dotsc,S_n$. A user who wishes to learn the value $X_\ell$ of Record $\ell$ submits a {\em query} $(q_1,q_2,\dotsc,q_n)$. For each $r\in \{1,2,\dotsc, n\}$, server $S_r$ receives $q_r$ and responds with a value $c_r$ that depends on $q_r$ and on the information stored by $S_r$. The user receives the {\em response} $(c_1,c_2,\dotsc,c_n)$. This system is a {\em private information retrieval (PIR) scheme} if the following two properties are satisfied: \begin{description} \item \textbf{(Privacy)} For $r=1,2,\dotsc,n$ the value $q_r$ received by server $S_r$ reveals no information about which record is being sought. \item \textbf{(Correctness)} Given a response $(c_1,c_2,\dotsc,c_n)$ to a query $(q_1,q_2,\dotsc,q_n)$ for Record $\ell$, the user is unambiguously able to recover the value $X_\ell$ of this record. \end{description} \end{definition} Note that while the query is drawn randomly according to a pre-specified distribution on a set of potential queries, the response is assumed to be deterministic. \begin{example} \label{ex:1server} In the case of a single server, a trivial method for achieving PIR is for the user to download the entire $kR$-bit database. \end{example} Chor, Goldreich, Kushilevitz and Sudan showed that in the case of single-bit records ($R=1$), if there is a single server then PIR is only possible if the total communication is at least $k$ bits ({\it i.e.}~the size of the entire database) \cite{CGKS98}, and so the solution above is best possible. We are interested in finding solutions such as the scheme below, which transmit significantly fewer than $kR$ bits.
\begin{example}\cite{CGKS98} \label{ex:chor} Suppose there are two servers, each storing the entire database. Suppose $R=1$. \begin{itemize} \item A user who requires Record $\ell$ chooses a $k$-bit string $(\alpha_1,\alpha_2,\dotsc, \alpha_k)$ uniformly at random. \item Server 1 is requested to return the value $c_1=\bigoplus_{i=1}^k \alpha_i X_i$, and Server 2 is requested to return $ c_2=\left(\bigoplus_{i=1}^k \beta_i X_i\right)$, where \[ \beta_i=\begin{cases} \alpha_i\oplus 1&\text{when }i=\ell,\\ \alpha_i&\text{otherwise.} \end{cases} \] \item The user computes $c_1\oplus c_2$ to recover the value $X_\ell$ of Record $\ell$. \end{itemize} \end{example} The strings $(\alpha_1,\alpha_2,\dotsc, \alpha_k)$ and $(\beta_1,\beta_2,\dotsc, \beta_{k})$ are both uniformly distributed, and are independent of the choice of $\ell$, hence neither server receives any information as to which record is being recovered by the user. We note that the scheme above works unchanged when the records are $R$-bit strings rather than single bits. The download complexity of the scheme, in other words the total number of bits downloaded from the servers, is $2R$. The upload complexity is $2k$, since each server receives a $k$-bit string from the user. Thus the total communication of the scheme is $2R+2k$ bits, which is significantly less than $kR$ bits for most parameters. Note that the upload complexity of this scheme does not depend on $R$, and so is an insignificant proportion of the total communication when $R$ is large. This is a general phenomenon: Chan, Ho and Yamamoto~\cite[Remark 2]{CHY15} observe the following. Let $m>1$ be an integer. Suppose we have an $n$-server PIR scheme for a database of $k$ records, each $R$ bits long. Suppose the scheme requires $u$ upload bits and $d$ download bits. Then we can construct an $n$-server PIR scheme for a database of $k$ records, each $mR$ bits long, which requires $md$ bits of download but still needs just $u$ bits to be uploaded. 
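As a sanity check, Example~\ref{ex:chor} (applied, as noted above, to $R$-bit records) is easy to simulate. The following Python sketch, whose function and variable names are ours, verifies the recovery equation $c_1\oplus c_2 = X_\ell$ bitwise, with one $k$-bit query per server ($2k$ upload bits) and one $R$-bit answer per server ($2R$ download bits):

```python
import secrets

def retrieve(X, ell):
    """Two-server XOR scheme of Example 2 for R-bit records.

    X is the database: a list of k records, each a list of R bits.
    Returns the recovered record X[ell].  Each server sees a marginally
    uniform k-bit query, so it learns nothing about ell on its own.
    """
    k, R = len(X), len(X[0])
    alpha = [secrets.randbelow(2) for _ in range(k)]                # query to Server 1
    beta = [a ^ 1 if i == ell else a for i, a in enumerate(alpha)]  # query to Server 2
    # Each server XORs together, bitwise, the records selected by its query;
    # this is one R-bit response per server (2R download bits in total).
    c1 = [0] * R
    c2 = [0] * R
    for i in range(k):
        for j in range(R):
            c1[j] ^= alpha[i] & X[i][j]
            c2[j] ^= beta[i] & X[i][j]
    # Since alpha and beta differ exactly in position ell, c1 XOR c2 = X[ell].
    return [a ^ b for a, b in zip(c1, c2)]

X = [[1, 0, 1], [0, 0, 1], [1, 1, 0], [0, 1, 1]]
assert all(retrieve(X, ell) == X[ell] for ell in range(len(X)))
```

The same simulation also illustrates the observation of Chan et al.: the single $k$-bit query per server is reused for every one of the $R$ bit positions, so lengthening the records increases the download but not the upload.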
Note that when $m$ is large (so records are long) the communication complexity of the new scheme is dominated by the download complexity of the given scheme. Because of the observation of Chan et al., it is vital to find PIR schemes with low download complexity. We formalise the download complexity as follows. \begin{definition} A PIR scheme \emph{uses binary channels} if the response $c_j$ sent by server $S_j$ is a binary string of length $d_j$, where $d_j$ depends only on the query $q_j$ it receives. The \emph{download complexity} is the maximum of the sum $\sum_{j=1}^n d_j$ over all possible queries $(q_1,q_2,\dotsc,q_n)$. \end{definition} So the download complexity is the number of bits downloaded in the worst case. We emphasise that the length $d_j$ in the definition above does not depend on the database $\mathbf{X}$, but could depend on the query $q_j$ received by server $S_j$. We note that we allow for the possibility that $d_j=0$, so the server does not reply to the query. Finally, we note that if we know that there are more than $2^x$ distinct possibilities for $c_j$ as the database varies, we may deduce that $d_j\geq x+1$. Although it is possible to use non-binary channels for PIR, we restrict our exposition to PIR schemes using binary channels in this paper (as is standard in the literature). We should comment that, despite the observation of Chan et al., we should not ignore upload complexity completely, as there are scenarios (for example, when $R$ is not so large) in which it might be dominant. Moreover, we cannot compare $2k$ bits of upload with $2R$ bits of download just by comparing $k$ and $R$, since we know (at least currently) that in practice it takes much more time to upload a bit than to download one. Of course, we don't know how the speed of downloading and uploading will change over time.
But the obvious consequence of the current situation and of future developments is to consider the upload and download complexities separately, and not to ignore one of them completely. This is something that will be done in this paper, although the download complexity will be the main target for optimisation, since we generally assume that the size of the database is (considerably) larger than the one bit of the classical PIR model. We continue to another important measure, introduced by Shah et al.~\cite{SRR14}, that has motivated many papers in the last three years: \begin{definition} Suppose server $S_r$ stores $s_r$ bits of information about the database $\mathbf{X}$. \begin{itemize} \item The \emph{per-server storage} of the scheme is $\max\{ s_r\mid r=1,2,\ldots ,n\}$. \item The \emph{total storage} of the scheme is $\sum_{r=1}^ns_r$. \item The \emph{storage overhead} of the scheme is the ratio between the total storage and the size of the database, {\it i.e.}~$kR$. \end{itemize} \end{definition} The classical model of PIR ignored storage issues: it was assumed that there is enough storage to allow the replication of the database at each server. But, with the quantity of information stored in data centres today, storage is already an issue and might be an important barrier in the future. Thus, it is important to reduce the storage overhead as much as possible, while keeping reliability, fast access, fast upload and fast download at reasonable levels. This is the perspective of the current paper, which is concerned with schemes whose download complexity is as small as possible whilst keeping the total storage at reasonable levels.
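The three storage measures are straightforward to compute; the following Python sketch (names are ours) evaluates them for the full-replication setting of Example~\ref{ex:chor}, where each of $n=2$ servers stores all $kR$ bits:

```python
def storage_measures(s, k, R):
    """Per-server storage, total storage and storage overhead for servers
    storing s = [s_1, ..., s_n] bits about a database of k records of
    R bits each, as in the definition above."""
    per_server = max(s)
    total = sum(s)
    overhead = total / (k * R)
    return per_server, total, overhead

# Full replication over two servers, with k = 3 records of R = 2 bits:
# each server stores kR = 6 bits, so the storage overhead equals n = 2.
print(storage_measures([6, 6], 3, 2))  # -> (6, 12, 2.0)
```

For fully replicated schemes the storage overhead is always the number of servers $n$; the coded schemes discussed below aim to do better.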
\subsection{Our contributions} In Section~\ref{sec:lower_bounds}, we provide combinatorial results on the structure of a PIR scheme with small download complexity: \begin{itemize} \item We generalise (Theorem~\ref{thm:download}) a key theorem in the foundational paper of Chor et al.~\cite{CGKS98}, and use this result to generalise the lower bound of $R+1$ on download complexity in~\cite{SRR14}. The results imply (Theorem~\ref{thm:tdownload}) that an $n$-server PIR scheme must have download complexity at least $\frac{n}{n-1}R$ when $k>\lceil R/(n-1)\rceil$. (This last result can also be obtained as a corollary of a recent bound due to Sun and Jafar~\cite{SuJa16b}.) These results provide a bridge between classical PIR and the new models that assume the retrieval of long records. \item We provide (Corollary~\ref{cor:onebit}, Theorem~\ref{thm:almostall}) information on the structure of a PIR scheme with minimal download complexity $R+1$. In particular, Theorem~\ref{thm:almostall} provides a rigorous statement of~\cite[Theorem~1]{SRR14}. \end{itemize} In Section~\ref{sec:constructions}, we provide various constructions for PIR schemes with low download complexity: \begin{itemize} \item In Subsection~\ref{subsec:rplusone}, we provide two simple $(R+1)$-server PIR schemes with download complexity $R+1$. Both schemes have total storage which is quadratic in $R$. The first scheme is a natural generalisation of the scheme of Chor et al.\ given above. The second scheme is a close variant of the quadratic total storage PIR scheme in~\cite{SRR14}, which avoids having to design slightly different schemes depending on the parity of $R$. This second scheme is to be preferred due to its lower upload complexity. (Another, more complex, PIR scheme with download complexity $R+1$ is considered in detail in~\cite{SRR14}. This scheme has small per-server storage, but requires an exponential (in $R$) number of servers, and so has exponential total storage.) 
\item In Subsection~\ref{subsec:nserver}, we describe an $n$-server PIR scheme with download complexity $\frac{n}{n-1}R$. The total storage of the scheme is linear in $R$. This shows that for any $\epsilon>0$ there exists a PIR scheme with linear total storage and download complexity at most $(1+\epsilon)R$. (Schemes with linear total storage, but with download complexity between $2R$ and $4R$, are given in~\cite{SRR14}.) \item We describe (Subsection~\ref{subsec:perserver}) schemes that provide trade-offs between increasing the number of servers and reducing the per-server storage of the scheme in Subsection~\ref{subsec:nserver}. \item In Subsection~\ref{subsec:sunjafar}, we provide explicit schemes that achieve optimal asymptotic download cost. The performance of these schemes is equal to that of the inductively defined schemes in Sun and Jafar~\cite{SuJa16b}, but the description of these schemes is more concise, and the proof that they are indeed PIR schemes is much more straightforward. \item Finally, in Subsection~\ref{subsec:averaging}, we explain an averaging technique that allows a PIR scheme with good average download complexity to be transformed into a scheme with good download complexity in the worst case. \end{itemize} \subsection{Context} \label{subsec:context} We end this introduction with a discussion of some of the related literature. (Many of these papers appeared after the conference version of our paper~\cite{BEM17} was posted.) Private information retrieval was introduced in~\cite{CGKS98}, and has been an active area ever since. See, for example, Yekhanin~\cite{Y10} for a fairly recent survey. The papers by Shah et al.~\cite{SRR14} and (independently) by Augot, Levy-dit-Vehel, and Shikfa~\cite{ALS14} are the first to consider PIR models where the information stored in the servers could be coded using techniques from distributed storage. 
Whereas~\cite{SRR14} is mainly concerned with download complexity, and also with total storage (with per-server storage and query size also being relevant parameters), the authors of~\cite{ALS14} emphasise measures of robustness against malicious servers, namely decoder locality and PIR locality. More recently, the literature has addressed several parallel and related issues, which can be categorised as follows: \begin{enumerate} \item Papers dealing with the download complexity, rate, and capacity of PIR schemes. \item Research which attempts to reduce the storage overhead of PIR schemes. \item Papers which present coding techniques, based on various error-correcting codes, e.g.\ MDS codes, to store the database in a distributed fashion. \item Papers which consider PIR schemes in the presence of unreliable servers. Servers might be colluding (so they have access to more than one query $q_r$), they might fail (and so do not reply with a value $c_r$), they might be adversarial (replying with incorrect values $c_r$), they might be unsynchronised (storing slightly different copies of the database) and so on. \item Research which aims to build PIR schemes into previously known architectures for distributed storage. \item Papers dealing with other PIR models, for example allowing broadcasting of some information, or allowing the user to possess side information such as the value of some records. \end{enumerate} Clearly, these issues are related, and a given paper might address aspects of more than one of these topics. In early papers, Fanti and Ramchandran~\cite{FaRa14,FaRa15} considered unsynchronised databases; the results are the same as for synchronised PIR at the expense of probabilistic success for information retrieval, and the use of two rounds of communication. 
We are not aware of recent work in this model, but we mention in this context the work of Tajeddine and El Rouayheb~\cite{TaElR17}, which considers PIR schemes in the presence of some servers which do not respond to a query. In a sequence of papers, Sun and Jafar~\cite{SuJa16b,SuJa16c,SuJa16d,SuJa16e,SuJa16f} consider the capacity of the channels related to PIR codes in various models. (The \emph{rate} of a PIR scheme is the ratio of $R$ and the download complexity, and the \emph{capacity} is the supremum of achievable rates.) In the model for PIR we consider, they use information-theoretic techniques to show~\cite{SuJa16b} that an $n$-server PIR scheme on a $k$-message database has rate at most \[ \left(1-\frac{1}{n}\right)\left(1-\left(\frac{1}{n}\right)^k\right)^{-1}. \] (Their model is restricted to the special case of replication, but it is easy to see that this restriction is not needed for this result to hold.) They also provide a scheme that attains this rate. The messages in their scheme are extremely long for most values of $n$ and $k$: the message length must be a multiple of $n^k$. Because of this, the scheme can be thought of as being tailored for the situation when $R\rightarrow\infty$. Their results show that when $R\rightarrow\infty$ with $n$ and $k$ fixed, there are schemes whose download complexity (and so whose communication complexity) has a leading term of the form \[ \frac{n}{n-1}\left(1-\left(\frac{1}{n}\right)^k\right)R, \] and that this term is best possible. (We give an explicit scheme with the same download complexity in Subsection~\ref{subsec:sunjafar}.) The results in~\cite{SuJa16b} have been generalised to the case when some of the servers collude. Sun and Jafar~\cite{SuJa16c} find the capacity of the channel in this more general case. (The results in~\cite{SuJa16b} can be thought of as the special case where each server can collude only with itself.) 
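For concreteness, the following sketch (our own; the parameter values are arbitrary examples) evaluates the Sun--Jafar rate bound above and the matching leading term of the optimal download complexity, and checks numerically that the two agree:

```python
# Numerical check (illustrative, not from the papers cited above) of the
# Sun--Jafar rate bound for an n-server, k-message PIR scheme, and the
# corresponding leading term of the optimal download complexity.

def sj_capacity(n, k):
    """Maximum achievable rate (1 - 1/n) * (1 - (1/n)**k)**(-1)."""
    return (1 - 1 / n) / (1 - (1 / n) ** k)

def optimal_download_leading_term(n, k, R):
    """Leading term n/(n-1) * (1 - (1/n)**k) * R of the download complexity."""
    return n / (n - 1) * (1 - (1 / n) ** k) * R

n, k, R = 2, 2, 1024                       # arbitrary example parameters
rate = sj_capacity(n, k)                   # 2/3 when n = k = 2
download = optimal_download_leading_term(n, k, R)

# The rate is the ratio of R to the download complexity, so these agree:
assert abs(R / download - rate) < 1e-12

# As k grows with n fixed, the capacity decreases towards 1 - 1/n:
print(round(rate, 4), round(sj_capacity(2, 20), 4))   # 0.6667 0.5
```

The final line illustrates why, for a fixed number of servers, adding records pushes the best achievable download complexity towards $\frac{n}{n-1}R$.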
The capacity for the symmetric PIR model, where the user who retrieves a message will get no information about the other messages in the database, is determined in~\cite{SuJa16d}. The optimal download complexity in the situation when the messages in the database might be of different lengths is considered in~\cite{SuJa16e}. The most recent in this sequence of papers considers an interactive model, where a user can have several rounds of queries and the queries in a given round are allowed to depend on answers from previous rounds. Moreover, colluding servers are considered in this model. It is proved~\cite{SuJa16f} that for this case there is no change in the capacity, but that the storage overhead can sometimes be improved. Banawan and Ulukus~\cite{BaUl16} also generalise the results of Sun and Jafar~\cite{SuJa16b}, finding the capacity of the PIR scheme when the database is encoded with a linear code. Another generalisation due to Banawan and Ulukus~\cite{BaUl17,BaUl17a} is to the scenario in which the user is allowed to request several records in one round of queries. They provide capacity computations and schemes for this scenario. A similar case was also discussed in~\cite{ZhXu17}. Finally, Banawan and Ulukus~\cite{BaUl17b} consider the capacity of PIR schemes in the scenario where servers might not be synchronised, there might be adversarial errors, and some servers might collude. They compute the capacity when some or all of these events might occur. Wang and Skoglund~\cite{WaSk16} consider the capacity of a symmetric PIR scheme when the database is stored in a distributed fashion using an MDS code. Chan, Ho, and Yamamoto~\cite{CHY14,CHY15} consider the trade-off between the total storage and the download complexity when the size of a record is large; the trade-off depends on the number of records in the system. They also consider the case where the database is encoded with an MDS code. 
Fazeli, Vardy, and Yaakobi~\cite{FVY15,FVY15a} give a method to reduce the storage overhead based on any known PIR scheme which uses replication. Their method reduces the storage overhead considerably, without affecting the order of the download complexity or upload complexity of the overall scheme, by simulating the original scheme on a larger number of servers. Their key concept is an object they call a $\kappa$-PIR code (more generally, a $\kappa$-PIR array code), where $\kappa$ is the number of servers used in the original PIR scheme; the code controls how a database can be divided into parts and encoded across servers, allowing a trade-off between the number of servers and the storage overhead. In particular, for all $\epsilon>0$, they show that there exist good schemes (in terms of communication requirements) where the amount of information stored in a server is bounded but the total storage is at most $(1+\epsilon)$ times the database size. Rao and Vardy~\cite{RaVa16} study PIR codes further, establishing the asymptotic behaviour of $\kappa$-PIR codes. Vajha, Ramkumar, and Kumar~\cite{VRK17,VRK17a} find the redundancy of such codes for $\kappa =3,~4$ by using Reed--Muller codes. Lin and Rosnes~\cite{LiRo17} show how to shorten and lengthen PIR codes, and find the redundancy of such codes for $\kappa =5,~6$. Blackburn and Etzion~\cite{BE16,BlEt17} consider the optimal ratios between $\kappa$-PIR array codes and the actual number of servers used in the system. Zhang, Wang, Wei, and Ge~\cite{ZWWG16} consider these ratios further, and improve some of the results from~\cite{BE16,BlEt17}. We remark that though it is possible to reduce the storage overhead using the techniques of PIR array codes, it seems impossible to reduce the download complexity of the resulting schemes below $(3/2)R$ (and most codes give download complexity close to $2R$) because of restrictions on the PIR rate of such codes. 
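As a toy illustration of the emulation idea (our own sketch, not one of the constructions from the papers cited above): split the database into two halves $W_1,W_2$ and store $W_1$, $W_2$ and $W_1\oplus W_2$ on three servers, so that the classical 2-server XOR scheme can be run with storage overhead $3/2$ instead of $2$:

```python
# Toy illustration (our own sketch) of the PIR-code idea: emulate the
# classical 2-server XOR scheme on three servers storing w1, w2 and
# w1 XOR w2, where w1, w2 are the two halves of the database. Each server
# holds half the data, so the storage overhead is 3/2 rather than 2.
import secrets

def inner(u, v):
    """Inner product of two bit vectors over GF(2)."""
    return sum(a & b for a, b in zip(u, v)) % 2

def retrieve(db, i):
    """Privately retrieve bit i of db (a list of bits of even length)."""
    m = len(db) // 2
    w1, w2 = db[:m], db[m:]
    w3 = [a ^ b for a, b in zip(w1, w2)]   # the third server's contents
    j = i if i < m else i - m

    # Queries of the 2-server scheme on the relevant half: a uniformly
    # random vector q, and q with position j flipped.
    q = [secrets.randbelow(2) for _ in range(m)]
    qj = list(q)
    qj[j] ^= 1

    if i < m:
        # Recovery sets for w1: {S1} and {S2, S3}, since w1 = w2 XOR w3.
        c = inner(q, w1)
        cj = inner(qj, w2) ^ inner(qj, w3)
    else:
        # Recovery sets for w2: {S2} and {S1, S3}.
        c = inner(q, w2)
        cj = inner(qj, w1) ^ inner(qj, w3)
    return c ^ cj

db = [1, 0, 1, 1, 0, 0, 1, 0]
assert all(retrieve(db, i) == db[i] for i in range(len(db)))
```

Each physical server sees a single vector that is uniformly distributed whatever $i$ is, so the privacy of the underlying 2-server scheme is preserved; the download grows from $2$ to $3$ bits, consistent with the remark above that PIR array codes trade download complexity for storage.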
It is interesting to note that Augot, Levy-dit-Vehel, and Shikfa~\cite{ALS14} constructed PIR schemes by partitioning the database into smaller parts, as done later in~\cite{FVY15,FVY15a}, to reduce the storage overhead. But they applied this technique only to a certain family of multiplicity codes, and the parts of the partition were not encoded as in~\cite{FVY15,FVY15a}. Fazeli, Vardy, and Yaakobi~\cite{FVY15a} remark that the concept of a $\kappa$-PIR code is closely related to codes with locality and availability. Such codes were studied first by Rawat, Papailiopoulos, Dimakis, and Vishwanath~\cite{RPDV14b,RPDV16} and later also by others, for example~\cite{FFGW17,HYUP15}. A new subspace approach for such codes was given recently in~\cite{SES17,SES17a}. Another related family of codes is that of batch codes, which were first defined by Ishai, Kushilevitz, Ostrovsky, and Sahai~\cite{IKOS04} and were recently studied by many others, for example~\cite{AsYa17,RSDG16}. All of these codes play an important role in the theory of distributed storage codes. The connection between the concepts of locality and PIR codes is explored in~\cite{FFGW17}. Error-correcting codes, and in particular maximum distance separable (MDS) codes, have been considered by many authors in various PIR models. It is natural to consider MDS codes, as they are very often used in various types of distributed storage codes (especially for locally repairable codes~\cite{GHSY12} and regenerating codes~\cite{DGWWR10,DRWS11}), and we expect that the servers in our PIR scheme will be part of a distributed storage system. We will now mention various examples. Colluding or malicious servers in PIR have been much studied over the last two years. Tajeddine and El Rouayheb~\cite{TaElR16,TaElR16a} consider PIR schemes where the information is stored using MDS codes. 
They describe schemes which have optimal download complexity in this model, as they attain the bounds in~\cite{CHY14,CHY15}, in the situation when one or two `spies' (colluding and/or malicious servers) are present. In the case of one spy (no collusion), a generalisation to any linear code with rate greater than half was given in~\cite{KRA16}. Freij-Hollanti, Gnilke, Hollanti, and Karpuk~\cite{FGHK16} give a PIR scheme coded with an MDS code which can be adjusted (by varying the rate of the MDS code) to protect against larger numbers of colluding servers. This scheme also attains the asymptotic bound on the related capacity of such a PIR scheme. This idea is generalised in~\cite{TGKFHR17}. The results in the latter paper are analysed (and one conjecture disproved) by Sun and Jafar~\cite{SuJa17,SuJa17a}. Another scheme based on MDS codes which can tolerate a large number of colluding servers is given by Zhang and Ge~\cite{ZhGe17a}. A generalisation to the case where the user wants to retrieve several files is given by the same authors in~\cite{ZhGe17b}. Wang and Skoglund~\cite{WaSk17a} consider a symmetric PIR scheme using an MDS code, in which the user can retrieve the information about the file he wants, but can gain no information about the other files. This scheme attains the bound on the capacity which they derive earlier in~\cite{WaSk16}. They have extended their work to accommodate colluding servers in~\cite{WaSk17b}. PIR can be combined with other applications in storage and communication in many ways. One example is a related broadcasting scheme in~\cite{KSCF17}. Another example is cache-aided PIR, considered by Tandon~\cite{Tan17}. In this setup the user is equipped with a local cache which is formed from an arbitrary function of the whole set of messages, and this local cache is known to the servers. The situation when this cache is not known to the servers is considered by Kadhe, Garcia, Heidarzadeh, El Rouayheb, and Sprintson~\cite{KGHRS17}. 
Since the user has side information in these models, the problem is closely related to index coding~\cite{BBJK11}, a topic which is also of great interest. \section{Optimal download complexity} \label{sec:lower_bounds} In this section, we give structural results for PIR schemes with optimal download complexity, given that the database consists of $k$ records of length $R$. For some of the results, we also assume that the PIR scheme involves $n$ servers, where $n$ is fixed. In Subsection~\ref{subsec:lower_bounds} we generalise a classical result in Private Information Retrieval due to Chor et al. We use this result to provide an alternative proof of the theorem of Shah, Rashmi and Ramchandran~\cite{SRR14} that a PIR scheme must have download complexity at least $R+1$ when $k\geq 2$, and to prove a lower bound of $\frac{n}{n-1}R$ for the download complexity of an $n$-server PIR scheme whenever $k$ is sufficiently large. In Subsection~\ref{subsec:R_plus_one} we present more precise structural results when the download complexity of a PIR scheme attains the optimal value of $R+1$ bits. \begin{definition} We say that a response $(c_1,c_2,\dotsc,c_n)$ is {\em possible} for a query $(q_1,q_2,\dotsc,q_n)$ if there exists a database $\mathbf{X}$ for which $(c_1,c_2,\dotsc,c_n)$ is returned as the response to the query $(q_1,q_2,\dotsc,q_n)$ when $\mathbf{X}$ is stored by the servers. \end{definition} \subsection{Lower bounds on the download complexity} \label{subsec:lower_bounds} We aim to generalise the following theorem, which was proved by Chor et al.\ in the very first paper on PIR~\cite[Theorem~5.1]{CGKS98}: \begin{theorem} \label{thm:chor} A PIR scheme that uses a single server for a database with $k$ records of size one bit is not possible unless the number of possible responses from the server to any given query is at least $2^k$. 
\end{theorem} Our generalisation shows that a server must reply with at least $k(R-d)$ bits of download, if no more than a total of $d$ bits (where $0\leq d \leq R$) are downloaded from the other servers. We state our generalisation as follows. Without loss of generality we will focus on server $S_1$, so for ease of notation we will denote the tuple $(q_1,q_2,\dotsc,q_n)$ by $(q_1,q_\ensuremath{\mathrm{other}})$, and $(c_1,c_2,\dotsc, c_n)$ by $(c_1,c_\ensuremath{\mathrm{other}})$. \begin{theorem}\label{thm:download} Suppose $0\leq d \leq R$. Let $q_1$ be fixed. Suppose we have a PIR scheme with the property that for any query of the form $(q_1,q_\ensuremath{\mathrm{other}})$, we have \begin{equation*} |\{c_\ensuremath{\mathrm{other}}\mid\text{$\exists c_1$ such that $(c_1,c_\ensuremath{\mathrm{other}})$ is possible for $(q_1,q_\ensuremath{\mathrm{other}})$}\}|\leq 2^{d}. \end{equation*} Then for any query $(q_1,q'_\ensuremath{\mathrm{other}})$ we have \begin{equation*} |\{c_1\mid\text{$\exists c_\ensuremath{\mathrm{other}}$ such that $(c_1,c_\ensuremath{\mathrm{other}})$ is possible for $(q_1,q'_\ensuremath{\mathrm{other}})$}\}|\geq 2^{k(R-d)}. \end{equation*} \end{theorem} We remark that Theorem~\ref{thm:chor} is the case $d=0$ and $R=1$ of Theorem~\ref{thm:download}. \begin{proof} Let $q_1$ be fixed, and suppose we have a PIR scheme with the property that for any query $(q_1,q_\ensuremath{\mathrm{other}})$ \begin{equation*} |\{c_\ensuremath{\mathrm{other}}\mid\text{$\exists c_1$ such that $(c_1,c_\ensuremath{\mathrm{other}})$ is possible for $(q_1,q_\ensuremath{\mathrm{other}})$}\}|\leq 2^{d}. \end{equation*} Assume, for a contradiction, that there exists a query $(q_1,q_\ensuremath{\mathrm{other}}^\ast)$ corresponding to the $i$th record for which \begin{equation*} |\{c_1\mid\text{$\exists c_\ensuremath{\mathrm{other}}$ such that $(c_1,c_\ensuremath{\mathrm{other}})$ is possible for $(q_1,q_\ensuremath{\mathrm{other}}^\ast)$}\}|<2^{k(R-d)}. 
\end{equation*} There are $2^{kR}$ databases, and fewer than $2^{k(R-d)}$ possibilities for the reply $c_1$ of $S_1$ to the query $(q_1,q_\ensuremath{\mathrm{other}}^\ast)$. So by the pigeonhole principle, there is a value $c_1^\ast$ for which there exists a set $T$ of databases with $|T|>2^{kR}/2^{k(R-d)}=2^{kd}$ having the property that for each $\mathbf{X}\in T$, the response of $S_1$ to $(q_1,q_\ensuremath{\mathrm{other}}^\ast)$ when the servers store $\mathbf{X}$ is $c_1^\ast$. If server $S_1$ receives the query $q_1$, it will thus return $c_1^\ast$ whenever a database in $T$ is being stored. Since the databases consist of $k$ records, the fact that $|T|>2^{kd}$ implies the existence of a record, say Record $\ell$, for which the number of distinct values $X_\ell$ that appear among the databases in $T$ is greater than $2^d$. Thus we can choose a set $W\subseteq T$ of $2^d+1$ databases such that no two databases in $W$ have the same value $X_\ell$. The requirement for privacy against server $S_1$ implies that if $(q_1,q_\ensuremath{\mathrm{other}}^\ast)$ is a query for Record $i$, then there exists a query for Record $\ell$ of the form $(q_1,q_\ensuremath{\mathrm{other}}^\ell)$, since otherwise $S_1$ could distinguish between queries for Record $i$ and Record $\ell$. If query $(q_1,q_\ensuremath{\mathrm{other}}^\ell)$ is made when a database in $T$ is stored, then server $S_1$ receives $q_1$ and responds $c_1^\ast$ as before. Now consider the databases in $W$. As there are $2^d+1$ of them, yet there are at most $2^d$ values for $c_\ensuremath{\mathrm{other}}$ for which there is a possible response $(c_1^\ast,c_\ensuremath{\mathrm{other}})$ to $(q_1,q_\ensuremath{\mathrm{other}}^\ell)$, it follows that there must be some value $c_\ensuremath{\mathrm{other}}^\ell$ for which there are two databases $\mathbf{X},\mathbf{Y}\in W$ such that the response to $(q_1,q_\ensuremath{\mathrm{other}}^\ell)$ is $(c_1^\ast,c_\ensuremath{\mathrm{other}}^\ell)$ when either of those databases is stored. 
This contradicts the correctness of our scheme, since the values of Record $\ell$ in $\mathbf{X}$ and $\mathbf{Y}$ are not equal yet the response $(c_1^\ast,c_\ensuremath{\mathrm{other}}^\ell)$ to the query $(q_1,q_\ensuremath{\mathrm{other}}^\ell)$ does not allow the user to distinguish between them. \end{proof} The following theorem is a key consequence of Theorem~\ref{thm:download}. \begin{theorem}\label{thm:binarydownload} Let $x$ be non-negative, and suppose we have a PIR scheme that has download complexity at most $R+x$. If the database contains $k$ records, where $k\geq x+2$, then the number of bits downloaded from any server is at most~$x$. \end{theorem} \begin{proof} Without loss of generality, consider the server $S_1$. Suppose for a contradiction that there exists a query $q_1$ so that at least $x+1$ bits are downloaded from $S_1$ (and so at most $(R+x)-(x+1)=R-1$ bits are downloaded from the other servers). Suppose that a total of $d$ bits are downloaded from the other servers in the worst case when $S_1$ receives~$q_1$. So $d\leq R-1$. Theorem~\ref{thm:download} implies that at least $k(R-d)$ bits are downloaded from $S_1$, and so at least $k(R-d)+d$ bits are downloaded from the servers in the worst case. But $d\leq R-1$ and $k\geq x+2$, so \[ k(R-d)+d=kR-(k-1)d \geq kR-(k-1)(R-1)=R+k-1\geq R+(x+2)-1 =R+x+1, \] which is impossible as the scheme has total download complexity $R+x$. This contradiction establishes the theorem. \end{proof} We are now in a position to provide a new short proof of the following corollary. The corollary is due to Shah et al.~\cite{SRR14}. \begin{corollary} \label{cor:rplusone} Let the database contain $k$ records with $k\geq 2$. Any PIR scheme requires a total download of at least $R+1$ bits. \end{corollary} \begin{proof} Suppose we have a scheme with total download of $R$ or fewer bits. 
Theorem~\ref{thm:binarydownload} with $x=0$ implies that $0$ bits are downloaded from each server, and so the user receives no information about the desired record. Hence such a scheme cannot exist. \end{proof} The following theorem improves the bound of Corollary~\ref{cor:rplusone} when $n< R+1$ and $k$ is sufficiently large. \begin{theorem}\label{thm:tdownload} Suppose a PIR scheme involves $n$ servers, where $n\geq 2$. Suppose the database contains $k$ records, where $k\geq \lceil \frac{1}{n-1}R\rceil+1$. Then the download complexity of the scheme is at least $\frac{n}{n-1}R$ bits. \end{theorem} \begin{proof} Assume for a contradiction that the scheme has download complexity $R+x$, where $x$ is an integer such that $x<\frac{1}{n-1}R$. Since $x\leq \lceil \frac{1}{n-1}R\rceil -1$, we see that $k\geq x+2$, and so Theorem~\ref{thm:binarydownload} implies that the number of bits downloaded from any server is at most $x$. Since we have $n$ servers, the total number of bits downloaded is always at most $xn$. Since our scheme has download complexity $R+x$, there is a query where a total of $R+x$ bits are downloaded from the servers. Hence we must have that $nx\geq R+x$, which implies that $x\geq \frac{1}{n-1}R$. This contradiction establishes the result. \end{proof} \subsection{Download complexity $R+1$} \label{subsec:R_plus_one} The final two results of this section concentrate on the extreme case when the download complexity is exactly $R+1$. Recall that the download complexity is a worst-case measure, so $R+1$ is an upper bound on the number of bits downloaded for any query, and there exists a query where $R+1$ bits are downloaded. \begin{corollary} \label{cor:onebit} Let the database contain $k$ records with $k\geq 3$. Any PIR scheme with a total download of exactly $R+1$ bits requires 1 bit to be downloaded from each of $R$ or $R+1$ different servers in response to any query. 
\end{corollary} \begin{proof} The special case of Theorem~\ref{thm:binarydownload} when $x=1$ shows that no server replies with more than $1$ bit. For the download complexity to be $R+1$, no more than $R+1$ servers can respond non-trivially. Since the user deduces the value of an $R$-bit record from the bits it has downloaded, at least $R$ servers must reply to any query. \end{proof} One might hope that Corollary~\ref{cor:onebit} could be strengthened to the statement that exactly $R+1$ servers must respond non-trivially. However, examples show that this is not always the case: see the comments after Construction~\ref{con:manyserver} below. Shah et al.\ state~\cite[Theorem~1]{SRR14} that, in the situation above, ``for almost every PIR operation'' $R+1$ servers must respond, and they provide a heuristic argument to support this statement. The following result makes this rigorous, with a precise definition of `almost every'. \begin{theorem} \label{thm:almostall} Let the database contain $k$ records with $k\geq 3$. Suppose we have a PIR scheme with a total download of exactly $R+1$ bits (in the worst case). Suppose a user retrieves a record chosen with a uniform probability distribution on $\{1,2,\ldots,k\}$. Let $\alpha$ be the probability that only $R$ bits are downloaded. Then \[ \alpha\leq \frac{R+1}{kR+1}. \] \end{theorem} \begin{proof} By Corollary~\ref{cor:onebit}, each server replies to any query with at most one bit. We may assume, without loss of generality, that if a server replies with one bit then this bit must depend on the database in some way (since otherwise we may modify the scheme so that this server does not reply and the probability $\alpha$ will increase). Let $(q_1,q_2,\ldots,q_n)$ be a query for the $\ell$th record where only $R$ servers reply non-trivially. Since only $R$ servers reply, there are at most $2^R$ possible replies to the query (over all databases). 
But the value $X_\ell$ of the record is determined by the reply, and there are $2^R$ possible values of $X_\ell$. So in fact there must be exactly $2^R$ possible replies, and there is a bijection between possible replies and possible values $X_\ell$. We claim that the replies of each of these $R$ servers can only depend on $X_\ell$, not on the rest of the database. To see this, suppose a server $S_r$ replies non-trivially, and let $f:\{0,1\}^{kR}\rightarrow\{0,1\}$ be the function mapping each possible value of the database to the reply of $S_r$ to query $q_r$. Suppose $f$ is not a function of $X_\ell$ alone, so there are two databases $\mathbf{X}$ and $\mathbf{X}'$ whose $\ell$th records are equal and such that $f(\mathbf{X})\not=f(\mathbf{X}')$. Let $\rho$ be the common value of the $\ell$th record in both $\mathbf{X}$ and $\mathbf{X}'$. When $X_\ell=\rho$ there are at least two possible replies to the query, depending on the value of the remainder of the database. But this contradicts the fact that we have a bijection between possible replies and possible values $X_\ell$. So our claim follows. Let $A$ be the event that exactly $R$ servers reply, and for $r=1,2,\ldots,n$ let $B_r$ be the event that server $S_r$ replies non-trivially. Let $D_r$ be the indicator random variable for the event $B_r$. So $D_r$ is equal to $1$ when $S_r$ responds non-trivially and $0$ otherwise. Note that $D_r$ is always equal to the number of bits downloaded from $S_r$; thus the expected value of the sum of these variables satisfies \begin{equation} \label{eqn:deqn} \mathrm{E}\left(\sum_{r=1}^n D_r\right)=\alpha R + (1-\alpha)(R+1)=R+1-\alpha. \end{equation} Let $D'_r$ be the indicator random variable for the event $A\wedge B_r$. When $A$ does not occur, all the variables $D'_r$ are equal to $0$. When $A$ occurs, $D'_r$ is the number of bits downloaded from server $S_r$ and a total of $R$ bits are downloaded. 
So \begin{equation} \label{eqnddasheqn} \mathrm{E}\left(\sum_{r=1}^n D'_r\right)=(1-\alpha)0+\alpha R=\alpha R. \end{equation} Suppose a server $S_r$ uses the following strategy to guess the value of $\ell$ from the query $q_r$ it receives. If the server replies non-trivially using a function $f$ that depends on only one record, say Record $\ell'$, it guesses that $\ell=\ell'$. Otherwise, the server guesses a value uniformly at random. The server guesses correctly with probability $1/k$ when it responds trivially. The argument in the paragraph above shows that the server always guesses correctly if it responds non-trivially and only $R$ servers reply. Thus the server is correct with probability at least $(1/k)\Pr(\overline{B_r})+\Pr(A\wedge B_r)$. The privacy requirement of the PIR scheme implies that the server's probability of success can be at most $1/k$, and so we must have that $\Pr(A\wedge B_r)\leq (1/k)\Pr(B_r)$. Hence \[ \mathrm{E}(D'_r)\leq (1/k)\mathrm{E}(D_r). \] By linearity of expectation, we see that \[ \mathrm{E}\left(\sum_{r=1}^n D'_r\right)=\sum_{r=1}^n\mathrm{E}(D'_r)\leq \frac{1}{k}\sum_{r=1}^n\mathrm{E}(D_r)=\frac{1}{k}\,\mathrm{E}\left(\sum_{r=1}^n D_r\right). \] So, using~\eqref{eqn:deqn} and~\eqref{eqnddasheqn}, we see that \[ \alpha R\leq \frac{1}{k}(R+1-\alpha). \] Rearranging this inequality gives $\alpha(kR+1)\leq R+1$, and the theorem follows. \end{proof} \section{Constructions} \label{sec:constructions} Recall the notation from the introduction: we are assuming that our database $\mathbf{X}$ consists of $k$ records, each of $R$ bits, and we write $X_{ij}$ for the $j$th bit of the $i$th record. \subsection{Two schemes with download complexity $R+1$} \label{subsec:rplusone} This section describes two schemes with download complexity $R+1$. Recall that this download complexity is optimal, by Corollary~\ref{cor:rplusone}. 
The first scheme is included because of its simplicity; it can be thought of as a variation of the scheme of Chor et al.\ described in Example~\ref{ex:chor}, and achieves optimal download complexity using only $R+1$ servers. It has a total storage requirement which is quadratic in $R$. But the scheme has high upload complexity: $kR(R+1)$. The second scheme is very closely related to a scheme mentioned in an aside in Shah et al.~\cite[Section IV]{SRR14}. This scheme has the same properties as the first scheme, except that the upload complexity is improved to just $(R+1)k\lceil\log (R+1)\rceil$. We note that the main scheme described in Shah et al.~\cite[Section IV]{SRR14} also has optimal download complexity of $R+1$. Each server stores just $R$ bits, and so the storage per server is low. However, their scheme uses an exponential (in $R$) number of servers, and so has exponential total storage. \begin{construction}\label{con:manyserver} Suppose there are $R+1$ servers, each storing the whole database. \begin{itemize} \item A user who requires Record $\ell$ creates a $k\times R$ array of bits by drawing its entries $\alpha_{ij}$ uniformly and independently at random. \item Server $S_{R+1}$ is requested to return the bit $c_{R+1}=\bigoplus_{i=1}^k\bigoplus_{j=1}^R\alpha_{ij}X_{ij}$. \item For $r=1,2,\dotsc, R$, server $S_r$ is requested to return the bit $c_r=\bigoplus_{i=1}^k\bigoplus_{j=1}^R\beta_{ij}X_{ij}$, where \[ \beta_{ij}=\begin{cases} \alpha_{ij}\oplus 1&\text{if }i=\ell\text{ and }j=r,\\ \alpha_{ij}&\text{otherwise}. \end{cases} \] \item To recover $X_{\ell r}$, namely bit $r$ of record $X_\ell$, the user computes $c_r\oplus c_{R+1}$. \end{itemize} \end{construction} \begin{theorem} \label{thm:construction1} Construction~\ref{con:manyserver} is an $(R+1)$-server PIR scheme with download complexity $R+1$. The scheme has upload complexity $kR(R+1)$ and total storage $(R+1)Rk$ bits. 
\end{theorem} \begin{proof} We note that \[ \alpha_{ij}\oplus\beta_{ij}=\begin{cases} 1&\text{if }i=\ell\text{ and }j=r,\\ 0&\text{otherwise}. \end{cases} \] Hence \begin{align*} c_r\oplus c_{R+1}&=\bigoplus_{i=1}^k\bigoplus_{j=1}^R(\alpha_{ij}\oplus \beta_{ij})X_{ij}\\ &=X_{\ell r}. \end{align*} So the user recovers the bit $X_{\ell r}$ correctly for any $r$ with $1\leq r\leq R$. This proves correctness. For privacy, we note that $S_{R+1}$ receives a uniformly distributed vector $q_{R+1}=(\alpha_{ij})\in\{0,1\}^{kR}$ in all circumstances. Since the distribution of $q_{R+1}$ does not depend on~$\ell$, no information about~$\ell$ is received by $S_{R+1}$. Similarly, for any $1\leq r\leq R$, the query $q_r=(\beta_{ij})\in\{0,1\}^{kR}$ is uniformly distributed irrespective of the value of $\ell$, and so no information about $\ell$ is received by $S_r$. We note that each query $q_r$ is $kR$ bits long (for any $r\in\{1,2,\ldots,R+1\}$) and so the upload complexity of the scheme is $kR(R+1)$. Each server replies with a single bit, and so the download complexity is $R+1$. The database is $kR$ bits long, and so (since each server stores the whole database) the total storage is $(R+1)Rk$ bits. \end{proof} We note that there are situations where one of the servers is asked for an all-zero linear combination of bits from the database. In this case, that server need not reply. So the number of bits downloaded in Construction~\ref{con:manyserver} is sometimes $R$ (though usually $R+1$ bits are downloaded). See the comment following Corollary~\ref{cor:onebit}. We now describe a second construction with improved upload complexity. The construction can be thought of as a variant of Construction~\ref{con:manyserver} where the rows of the array $\alpha$ are all taken from a restricted set $\{e_0,e_1,\ldots,e_{R}\}$ of size $R+1$. A similar idea is used in the constructions in~\cite{SRR14}. For $i=1,2,\ldots,R$, let $e_i$ be the $i$\textsuperscript{th} unit vector of length $R$. 
Let $e_0$ be the all zero vector. For binary vectors $\mathbf{x}$ and $\mathbf{y}$ of length $R$, write $\mathbf{x}\cdot \mathbf{y}$ for their inner product; so $\mathbf{x}\cdot \mathbf{y}=\bigoplus_{j=1}^Rx_jy_j$. \begin{construction}\label{con:lowerupload} Suppose there are $R+1$ servers, each storing the whole database. \begin{itemize} \item A user who requires Record $\ell$ chooses $k$ elements $a_1,a_2,\ldots,a_k\in\mathbb{Z}_{R+1}$ uniformly and independently at random. For $r=1,\ldots,R+1$, server $S_r$ is sent the vector $q_r=(b_{1r},b_{2r},\ldots,b_{kr})\in\mathbb{Z}_{R+1}^k$, where \[ b_{ir}=\begin{cases} a_i+r\bmod (R+1)&\text{if }i=\ell,\\ a_i&\text{otherwise}. \end{cases} \] \item Server $S_r$ returns the bit $c_{r}=\bigoplus_{i=1}^k e_{b_{ir}}\cdot X_i$. \item To recover the $j$\textsuperscript{th} bit of $X_\ell$, the user finds the integers $r$ and $r'$ such that $b_{\ell r}=0$ and $b_{\ell r'}=j$. The user then computes $c_r\oplus c_{r'}$. \end{itemize} \end{construction} \begin{theorem} \label{thm:construction2} Construction~\ref{con:lowerupload} is an $(R+1)$-server PIR scheme with download complexity $R+1$. The scheme has upload complexity $k(R+1)\log (R+1)$ and total storage $(R+1)Rk$ bits. \end{theorem} \begin{proof} For correctness, we first note that $r$ and $r'$ exist since $b_{\ell r}\in\{0,1,\ldots,R\}$ takes on each possible value once as $r\in\{1,2,\ldots,R+1\}$ varies. Also note that \[ e_{b_{ir}}\oplus e_{b_{ir'}}=\begin{cases} e_j&\text{if }i=\ell,\\ e_0&\text{otherwise}. \end{cases} \] So, since $e_0=0$, \[ c_r\oplus c_{r'}=\bigoplus_{i=1}^k(e_{b_{ir}}\oplus e_{b_{ir'}})\cdot X_{i}=e_j\cdot X_\ell =X_{\ell j}. \] So the user recovers the bit $X_{\ell j}$ correctly for any $j$ with $1\leq j\leq R$. For privacy, we note that $S_r$ receives a uniformly distributed vector $q_{r}\in(\mathbb{Z}_{R+1})^{k}$ in all circumstances. Since the distribution of $q_{r}$ does not depend on~$\ell$, no information about~$\ell$ is received by $S_r$. 
The calculations of the total storage and download complexity are identical to those in the proof of Theorem~\ref{thm:construction1}. For the upload complexity, note that it takes just $\log (R+1)$ bits to specify an element of $\mathbb{Z}_{R+1}$. Since each server receives $k$ elements from $\mathbb{Z}_{R+1}$, and since there are $R+1$ servers, the upload complexity of the scheme is $k(R+1)\log (R+1)$ as claimed. \end{proof} \subsection{Optimal download complexity for a small number of servers} \label{subsec:nserver} For an integer $n$ such that $(n-1)\mid R$, we now describe an $n$-server PIR scheme with download complexity $\frac{n}{n-1}R$ bits. By Theorem~\ref{thm:tdownload}, this construction provides schemes with an optimal download complexity for $n$ servers, provided the number $k$ of records is sufficiently large. This construction is closely related to Construction~\ref{con:lowerupload} above. Indeed, the construction below is a generalisation of Construction~\ref{con:lowerupload} where we work with strings rather than single bits. We first define an analogue of the bits $e_b\cdot X_i$ computed by servers in Construction~\ref{con:lowerupload}. We divide an $R$-bit string $X$ into $n-1$ blocks, each of size $R/(n-1)$. For $b\in\{1,2,\ldots,{n-1}\}$ we write $\pi_b(X)$ for the $b$th block (so $\pi_b(X)$ is an $R/(n-1)$-bit string). We write $\pi_0(X)$ for the all-zero string $0^{R/(n-1)}$ of length $R/(n-1)$. \begin{construction} \label{con:smallnumservers} Let $n$ be an integer such that $(n-1)\mid R$. Suppose there are $n$ servers, each storing the entire database.\begin{itemize} \item A user who requires Record $\ell$ chooses $k$ elements $a_1,a_2,\ldots,a_k\in\mathbb{Z}_{n}$ uniformly and independently at random. For $r=1,\ldots,n$, server $S_r$ is sent the vector $q_r=(b_{1r},b_{2r},\ldots,b_{kr})\in\mathbb{Z}_{n}^k$, where \[ b_{ir}=\begin{cases} a_i+r\bmod n&\text{if }i=\ell,\\ a_i&\text{otherwise}. 
\end{cases} \] \item Server $S_r$ returns the $R/(n-1)$-bit string $c_{r}=\bigoplus_{i=1}^k \pi_{b_{ir}}(X_i)$. \item To recover the $j$\textsuperscript{th} block of $X_\ell$, the user finds the integers $r$ and $r'$ such that $b_{\ell r}=0$ and $b_{\ell r'}=j$. The user then computes $c_r\oplus c_{r'}$. \end{itemize} \end{construction} \begin{theorem} \label{thm:construction3} Construction~\ref{con:smallnumservers} is an $n$-server PIR scheme with download complexity $\frac{n}{n-1}R$. The scheme has upload complexity $nk\log n$ and total storage $nkR$ bits. \end{theorem} \begin{proof} Exactly as in the proof of Theorem~\ref{thm:construction2}, we first note that $r$ and $r'$ exist since $b_{\ell r}\in\{0,1,\ldots,n-1\}$ takes on each possible value once as $r\in\{1,2,\ldots,n\}$ varies. Also note that when $i\not=\ell$ \[ \pi_{b_{ir}}(X_i)\oplus \pi_{b_{ir'}}(X_i)=\pi_{a_{i}}(X_i)\oplus \pi_{a_{i}}(X_i)=0^{R/(n-1)}, \] but when $i=\ell$ \[ \pi_{b_{ir}}(X_i)\oplus \pi_{b_{ir'}}(X_i)=\pi_{0}(X_i)\oplus \pi_j(X_i)=\pi_j(X_i)= \pi_j(X_\ell). \] Hence \[ c_r\oplus c_{r'}=\bigoplus_{i=1}^k(\pi_{b_{ir}}(X_i)\oplus \pi_{b_{ir'}}(X_{i}))=\pi_j(X_{\ell}). \] So the user recovers the $j$th block of $X_{\ell}$ correctly for any $j$ with $1\leq j\leq (n-1)$. For privacy, we note that $S_r$ receives a uniformly distributed vector $q_{r}\in(\mathbb{Z}_n)^{k}$ in all circumstances. Since the distribution of $q_{r}$ does not depend on~$\ell$, no information about~$\ell$ is received by $S_r$. The total storage is $nkR$, since each of $n$ servers stores the entire $kR$-bit database. Each query $q_r$ is $k\log n$ bits long, since an element of $\mathbb{Z}_n$ may be specified using $\log n$ bits. Hence the upload complexity is $nk\log n$. Since each server returns an $R/(n-1)$-bit string, the download complexity is $\frac{n}{n-1}R$. \end{proof} Shah et al.~\cite[Section~V]{SRR14} provide PIR schemes with linear (in $R$) total storage and with download complexity between $2R$ and $4R$. 
Their scheme requires a number of servers which is independent of $R$ (but is linear in $k$). The construction above (taking $n$ to be fixed but sufficiently large) shows that for any fixed positive $\epsilon$ a PIR scheme with linear total storage exists with download complexity of $(1+\epsilon)R$: this is within a factor arbitrarily close to optimal. Moreover, the number of servers in our construction is independent of both $k$ and $R$. However, note that in our scheme each server stores the whole database, whereas the per-server storage of the scheme of Shah et al.\ is a fixed multiple of $R$. This issue is addressed in Construction~\ref{con:smallperserver} below. \subsection{Schemes with small per-server storage} \label{subsec:perserver} We observe that the last construction may be used to give families of schemes with lower per-server storage; see~\cite[Section~V]{SRR14} for similar techniques. The point here is that we never XOR the first bit (say) from one block with the second bit (say) of any other block, so we can store these bits in separate servers without causing problems. More precisely, let $s$ be a fixed integer such that $s\mid R$ and let $t$ be a fixed integer such that $(t-1)\mid s$. We divide each record $X_i$ into $R/s$ blocks $\pi_1(X_i), \pi_2(X_i),\ldots,\pi_{R/s}(X_i)$, each $s$ bits long. We then divide each block $\pi_j(X_i)$ into $(t-1)$ sub-blocks $\pi_{j,1}(X_i)$, $\pi_{j,2}(X_i),\ldots ,\pi_{j,t-1}(X_i)$, each $s/(t-1)$ bits long. For any $i\in\{1,2,\ldots,k\}$ and any $j\in\{1,2,\ldots,R/s\}$, we define $\pi_{j,0}(X_i)$ to be the all zero string $0^{s/(t-1)}$ of length $s/(t-1)$. \begin{construction} \label{con:smallperserver} Let $s$ be a fixed integer such that $s\mid R$. Let $t$ be a fixed integer such that $(t-1)\mid s$. Let $n=t(R/s)$. Suppose there are $n$ servers. Each server will store just $ks$ bits. 
\begin{itemize} \item Index the $t(R/s)$ servers by pairs $(u,r)$, where $1\leq r\leq t$ and where $1\leq u\leq R/s$. Server $S_{(u,r)}$ stores the $u$th block of every record: that is, $S_{(u,r)}$ stores the sub-blocks $\pi_{u,j}(X_i)$ where $1\leq i\leq k$ and $1\leq j\leq t-1$. Note that each server stores $k(t-1)s/(t-1)=ks$ bits. \item A user who requires Record $\ell$ chooses $k$ elements $a_1,a_2,\ldots ,a_k\in\mathbb{Z}_t$ uniformly and independently at random. Each server $S_{(u,r)}$ is sent the query $q_r=(b_{1r},b_{2r},\ldots,b_{kr})\in\mathbb{Z}_{t}^k$, where \[ b_{ir}=\begin{cases} a_i+r\bmod t&\text{if }i=\ell,\\ a_i&\text{otherwise}. \end{cases} \] (Note that many servers receive the same query.) \item Server $S_{(u,r)}$ returns the $s/(t-1)$-bit string $c_{(u,r)}=\bigoplus_{i=1}^k \pi_{u,b_{ir}}(X_i)$. \item To recover the $j$th sub-block of the $u$th block of $X_\ell$, the user finds integers $r$ and $r'$ such that $b_{\ell r}=0$ and $b_{\ell r'}=j$ and computes $c_{(u,r)}\oplus c_{(u,r')}$. \end{itemize} \end{construction} \begin{theorem} \label{thm:construction4} Construction~\ref{con:smallperserver} is a PIR scheme with download complexity $t\frac{R}{s}\cdot\frac{s}{t-1}=\frac{t}{t-1}R$. The scheme has upload complexity $nk\log t=(tkR/s)\log t$ and total storage $nks=tkR$ bits. \end{theorem} \begin{proof} As in the proofs of Theorems~\ref{thm:construction2} and~\ref{thm:construction3}, privacy follows since $S_{(u,r)}$ always receives a uniformly distributed vector $q_r\in\mathbb{Z}_t^k$ as a query. For correctness, observe that when $i\not=\ell$ \[ \pi_{u,b_{ir}}(X_i)\oplus \pi_{u,b_{ir'}}(X_i)=\pi_{u,a_{i}}(X_i)\oplus \pi_{u,a_{i}}(X_i)=0^{s/(t-1)}, \] but when $i=\ell$ \[ \pi_{u,b_{ir}}(X_i)\oplus \pi_{u,b_{ir'}}(X_i)=\pi_{u,0}(X_i)\oplus \pi_{u,j}(X_i)=\pi_{u,j}(X_i)= \pi_{u,j}(X_\ell). \] Hence \[ c_{(u,r)}\oplus c_{(u,r')}=\bigoplus_{i=1}^k(\pi_{u,b_{ir}}(X_i)\oplus \pi_{u,b_{ir'}}(X_{i}))=\pi_{u,j}(X_{\ell}). 
\] So the user can indeed compute the $j$-th sub-block of the $u$-th block as claimed. It is easy to calculate the upload complexity, download complexity and total storage as before, remembering that each server stores $ks$ bits rather than the entire database.\end{proof} By fixing $t$ and $s$ to be sufficiently large integers, we can see that for all positive $\epsilon$ we have a family of schemes with download complexity at most $(1+\epsilon)R$, with total storage linear in the database size, with a linear (in $R$) number of servers, and where the per-server storage is independent of $R$. So this family of schemes has a better download complexity and per-server storage than Shah et al.~\cite[Section~V]{SRR14}, and is comparable in terms of both the number of servers and total storage. The servers may be divided into $t$ classes $\mathcal{S}_1,\mathcal{S}_2,\ldots,\mathcal{S}_{t}$, where \[ \mathcal{S}_r=\{S_{(1,r)},S_{(2,r)},\ldots,S_{(R/s,r)}\}. \] Since servers in the same class receive the same query, the above construction still works if some of the servers within a class are merged. If this is done, the storage requirement of each merged server is increased, the download complexity and total storage are unaffected, and the number of servers required and upload complexity are reduced. So various trade-offs are possible using this technique. \subsection{An explicit asymptotically optimal scheme} \label{subsec:sunjafar} Sun and Jafar~\cite{SuJa16d} describe a PIR scheme that has the best possible asymptotic download complexity, as $R\rightarrow\infty$. Their scheme is constructed in a recursive fashion. In this subsection, we describe an explicit, non-recursive, scheme with the same parameters as the Sun and Jafar scheme. Our scheme has the advantages of a more compact description, and (we believe) a proof that is significantly more transparent. Our scheme is described in detail in Construction~\ref{con:sunjafar} below. 
But, to aid understanding, we first provide an overview of the scheme. Suppose that $n^k$ divides $R$. We split an $R$-bit string $X$ into $n^k$ blocks, each of length $R/n^k$. For $j\in\{1,2,\ldots,n^k\}$ we write $\pi_j(X)$ for the $j$-th block of $X$, and we write $\pi_0(X)$ for the all zero block $0^{R/n^k}$. Let $\mathcal{V}$ be the set of all non-zero strings $\mathbf{v}=v_1v_2v_3\ldots v_k\in\{0,1,2,\ldots,n-1\}^k$ such that $\sum_{i=1}^k v_i\equiv 0\bmod n-1$. (Note that our sum is taken modulo $n-1$, not modulo $n$.) Let $\mathcal{W}=\{1,2,\ldots,n\}\times \mathcal{V}$. There are $n$ servers in the scheme, each storing the whole database. Server $S_r$ receives a query consisting of integers $b_i(r,\mathbf{v})\in\{1,2,\ldots,n^k\}$ where $i\in\{1,2,\ldots,k\}$ and $\mathbf{v}\in \mathcal{V}$. The server replies with $|\mathcal{V}|$ strings, each of length $R/n^k$. Each string is a linear combination of blocks, at most one block from each record (the choice of each block being determined by an integer $b_i(r,\mathbf{v})$: see~\eqref{eqn:sj_combination} below). From the perspective of $S_r$, the distribution of the integers $b_i(r,\mathbf{v})$ does not depend on $\ell$, enabling us to attain privacy. However, the user chooses these integers so that $b_i(r,\mathbf{v})$ and $b_{i}(r',\mathbf{v}')$ are sometimes constrained to be equal when $r\not=r'$. This is done in such a way that the user can reconstruct Record $\ell$ from the replies of the servers. To describe the constraints the user imposes, which of course depend on $\ell$, we define a graph $\Gamma^{[\ell]}$ on the set $\mathcal{W}$, and constrain $b_i(r,\mathbf{v})$ and $b_{i}(r',\mathbf{v}')$ to be equal when $(r,\mathbf{v})$ and $(r',\mathbf{v}')$ lie in the same component of $\Gamma^{[\ell]}$. 
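To make the index set $\mathcal{V}$ concrete, the following Python sketch (purely illustrative; the function name is ours and is not part of the scheme) enumerates $\mathcal{V}$ for small parameters and checks that $|\mathcal{V}|=(n^k-1)/(n-1)$, the count established in the proof of Theorem~\ref{thm:construction5}.

```python
from itertools import product

def index_set_V(n, k):
    """All non-zero strings v in {0,...,n-1}^k with v_1 + ... + v_k = 0 (mod n-1)."""
    return [v for v in product(range(n), repeat=k)
            if any(v) and sum(v) % (n - 1) == 0]

# |V| should equal (n^k - 1)/(n - 1) for every n >= 2.
for n in range(2, 6):
    for k in range(1, 5):
        assert len(index_set_V(n, k)) == (n**k - 1) // (n - 1)
```

For instance, with $n=k=3$ (the parameters of Figure~\ref{fig:sjgraph}) this gives $|\mathcal{V}|=13$.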
\begin{figure} \begin{tikzpicture}[scale=0.7] \draw (0,0) node[left]{$(1,101)$}-- (10,0.5); \draw (0,1) node[left]{$(3,202)$} -- (10,0.5) node[right]{$(2,002)$}; \draw (0,2) node[left]{$(1,110)$} -- (10,2.5); \draw (0,3) node[left]{$(3,220)$} -- (10,2.5) node[right]{$(2,020)$}; \draw (0,4) node[left]{$(1,112)$} -- (10,4.5); \draw (0,5) node[left]{$(3,222)$} -- (10,4.5) node[right]{$(2,022)$}; \draw (0,6) node[left]{$(1,211)$} -- (10,6.5); \draw (0,7) node[left]{$(2,121)$} -- (10,6.5) node[right]{$(3,011)$}; \draw (0,8) node[left]{$(2,211)$} -- (10,8.5); \draw (0,9) node[left]{$(3,121)$} -- (10,8.5) node[right]{$(1,011)$}; \draw (-1,5) ellipse (3 and 8); \draw (11,5) ellipse (3 and 6.5); \filldraw (0,0) circle (0.08); \filldraw (0,2) circle (0.08); \filldraw (0,4) circle (0.08); \filldraw (0,6) circle (0.08); \filldraw (0,8) circle (0.08); \filldraw (0,1) circle (0.08); \filldraw (0,3) circle (0.08); \filldraw (0,5) circle (0.08); \filldraw (0,7) circle (0.08); \filldraw (0,9) circle (0.08); \filldraw (10,0.5) circle (0.08); \filldraw (10,2.5) circle (0.08); \filldraw (10,4.5) circle (0.08); \filldraw (10,6.5) circle (0.08); \filldraw (10,8.5) circle (0.08); \filldraw (0,10) node[left]{$(1,200)$} circle (0.08); \draw (-1,-0.5) node{$\vdots$}; \draw (11,0) node{$\vdots$}; \draw (-1,14) node{$\mathcal{W}_1^{[\ell]}$}; \draw (11,12.5) node{$\mathcal{W}_2^{[\ell]}$}; \end{tikzpicture} \caption{Part of the graph $\Gamma^{[\ell]}$ when $n=k=3$ and $\ell=1$.} \label{fig:sjgraph} \end{figure} We now give details of the scheme. We begin by describing the graph $\Gamma^{[\ell]}$ (see Figure~\ref{fig:sjgraph}) and by detailing some of its structure. Let $\ell\in\{1,2,\ldots,k\}$. Let $\mathcal{W}_1^{[\ell]}$ consist of those elements $(r,\mathbf{v})\in \mathcal{W}$ such that $v_\ell\not=0$, and let $\mathcal{W}_2^{[\ell]}$ consist of the elements such that $v_\ell=0$. 
The graph $\Gamma^{[\ell]}$ is defined on the vertex set $\mathcal{W}$, and is bipartite with parts $\mathcal{W}_1^{[\ell]}$ and $\mathcal{W}_2^{[\ell]}$. We draw at most one edge from each element $(r,\mathbf{v})\in \mathcal{W}_1^{[\ell]}$ into $\mathcal{W}_2^{[\ell]}$ as follows. If $v_\ell$ is the only non-zero component of $\mathbf{v}$, we draw no edge from $(r,\mathbf{v})$, so we have an isolated vertex. Suppose two or more components of $\mathbf{v}$ are non-zero. We define $\ell_2\in\{1,2,\ldots,k\}$ to be the index of the next non-zero component of $\mathbf{v}$ after the $\ell$th, taken cyclically. Let $w\in\{1,2,\ldots,n-1\}$ be such that $w\equiv v_\ell+v_{\ell_2}\bmod n-1$. Define $\mathbf{v}'=v'_1v'_2\cdots v'_k$ by \[ v'_i=\begin{cases} v_i&\text{if }i\in\{1,2,\ldots,k\}\setminus\{\ell,\ell_2\},\\ 0&\text{if }i=\ell,\\ w&\text{if }i=\ell_2. \end{cases} \] Let $r'\in\{1,2,\ldots,n\}$ be such that $r'\equiv r+v_\ell\bmod n$. We join $(r,\mathbf{v})$ to $(r',\mathbf{v}')$. Let $\mathcal{C}^{[\ell]}$ be the set of components of $\Gamma^{[\ell]}$. We note that $\Gamma^{[\ell]}$ has exactly $n$ isolated vertices, namely the vertices of the form $(r,\mathbf{v})$ where $r\in\{1,2,\ldots,n\}$ and where $\mathbf{v}$ is the single vector defined by \[ v_i=\begin{cases} 0&\text{if }i\not=\ell,\\ n-1&\text{if }i=\ell. \end{cases} \] The remaining components in $\mathcal{C}^{[\ell]}$ are stars consisting of a central vertex in $\mathcal{W}^{[\ell]}_2$ and $n-1$ other vertices all lying in $\mathcal{W}^{[\ell]}_1$. Moreover, we note that if $(r,\mathbf{v})$ and $(r',\mathbf{v}')$ are distinct vertices in the same component of $\Gamma^{[\ell]}$ then $r\not=r'$. We claim that the number of vertices $(r,\mathbf{v})\in\mathcal{W}^{[\ell]}_1$ is $n^k$. To see this, we note that there are $n$ choices for $r$, and then $n^{k-1}$ choices for $v_1,v_2,\ldots,v_{\ell-1},v_{\ell+1},\ldots,v_k$. 
Once these choices are made, $v_\ell\in\{0,1,\ldots,n-1\}$ is determined, since $v_\ell\not=0$ and $\sum_{i=1}^kv_i\equiv 0\bmod n-1$. This establishes our claim. Since every component of $\Gamma^{[\ell]}$ contains a vertex in $\mathcal{W}^{[\ell]}_1$, we see that $|\mathcal{C}^{[\ell]}|\leq |\mathcal{W}^{[\ell]}_1|=n^k$. Indeed, the number of components of $\Gamma^{[\ell]}$ is: \[ |\mathcal{C}^{[\ell]}|=n+(|\mathcal{W}^{[\ell]}_1|-n)/(n-1)=n(1+(n^{k-1}-1)/(n-1)). \] \begin{construction} \label{con:sunjafar} Suppose that $n^k\mid R$. Suppose there are $n$ servers, each storing the whole database. \begin{itemize} \item A user who requires Record $\ell$ proceeds as follows. In the notation defined above, for each $i\in\{1,2,\ldots,k\}\setminus\{\ell\}$ the user chooses (uniformly and independently) a random injection $f_i:\mathcal{C}^{[\ell]} \rightarrow \{1,2,\ldots,n^{k}\}$. The user chooses (again uniformly and independently) a random bijection $\psi: \mathcal{W}^{[\ell]}_1\rightarrow \{1,2,\ldots,n^{k}\}$. Define integers $b_i(r,\mathbf{v})\in\{0,1,\ldots ,n^k\}$ for $(r,\mathbf{v})\in\mathcal{W}$ and $i\in\{1,2,\ldots,k\}$ as follows. If $i\not=\ell$, define \[ b_i(r,\mathbf{v})=\begin{cases} 0&\text{if }v_i=0,\\ f_i(C)&\text{if }v_i\not=0 \text{ and $(r,\mathbf{v})$ lies in the component $C\in\mathcal{C}^{[\ell]}$}. \end{cases} \] Note that when $i=\ell$ we have that $v_i\not=0$ if and only if $(r,\mathbf{v})\in\mathcal{W}_1^{[\ell]}$. So when $i=\ell$ we may define \[ b_i(r,\mathbf{v})=\begin{cases} 0&\text{if }v_i=0,\\ \psi((r,\mathbf{v}))&\text{if }v_i\not=0. \end{cases} \] For $r=1,2,\ldots,n$, server $S_r$ is sent the vector $q_r=(b_i(r,\mathbf{v}):\mathbf{v}\in\mathcal{V}, i\in\{1,2,\ldots,k\})$. \item The server $S_r$ replies with the blocks \begin{equation} \label{eqn:sj_combination} s_{(r,\mathbf{v})}=\sum_{i=1}^{k}\pi_{b_i(r,\mathbf{v})}(X_i) \end{equation} for all $\mathbf{v}\in\mathcal{V}$. 
\item To recover block $j$ of $X_\ell$, the user finds $(r,\mathbf{v})=\psi^{-1}(j)\in\mathcal{W}_1^{[\ell]}$. Let $C\in\mathcal{C}^{[\ell]}$ be the component containing $(r,\mathbf{v})$. If $|C|>1$, let $(r',\mathbf{v}')\in C\cap \mathcal{W}_2^{[\ell]}$. Then (see below for justification) \[ \pi_j(X_\ell)=\begin{cases} s_{(r,\mathbf{v})}&\text{ if }|C|=1,\text{ and}\\ s_{(r,\mathbf{v})}\oplus s_{(r',\mathbf{v}')}&\text{ if }|C|>1. \end{cases} \] \end{itemize} \end{construction} \begin{theorem} \label{thm:construction5} Construction~\ref{con:sunjafar} is an $n$-server PIR scheme with download complexity $(1-1/n^k)(n/(n-1))R$. The total storage of the scheme is $nkR$. The upload complexity of the scheme is $k^2n^{k}\log n$ bits. \end{theorem} \begin{proof} We begin by establishing correctness of the scheme. Let $(r,\mathbf{v})=\psi^{-1}(j)$ and let $C\in \mathcal{C}^{[\ell]}$ be the component containing $(r,\mathbf{v})$. When $|C|=1$ we have $v_i\not=0$ if and only if $i=\ell$ and so \[ s_{(r,\mathbf{v})}=\sum_{i=1}^{k}\pi_{b_i(r,\mathbf{v})}(X_i)=\pi_{b_\ell(r,\mathbf{v})}(X_\ell)=\pi_j(X_\ell), \] the last equality following since $b_\ell(r,\mathbf{v})=j$. Hence the user recovers the $j$th block $\pi_j(X_\ell)$ of $X_\ell$ correctly in this case. Suppose now that $C$ contains two or more vertices, so there exists $(r',\mathbf{v}')\in C\cap \mathcal{W}_2^{[\ell]}$. When $i\not=\ell$, the values of $b_i(r,\mathbf{v})$ and $b_i(r',\mathbf{v}')$ are equal, since $(r,\mathbf{v})$ and $(r',\mathbf{v}')$ lie in the same component $C$ of $\Gamma^{[\ell]}$ and since $v_i=0$ if and only if $v'_i=0$. Moreover, $v_\ell\not=0$ and $v'_\ell=0$. Hence \begin{align*} s_{(r,\mathbf{v})}\oplus s_{(r',\mathbf{v}')}&=\sum_{i=1}^k \left(\pi_{b_i(r,\mathbf{v})}(X_i)\oplus\pi_{b_i(r',\mathbf{v}')}(X_i)\right)\\ &=\pi_{b_\ell(r,\mathbf{v})}(X_\ell)\oplus \pi_{b_\ell(r',\mathbf{v}')}(X_\ell)\\ &=\pi_{\psi((r,\mathbf{v}))}(X_\ell)\oplus\pi_{0}(X_\ell)\\ &=\pi_{j}(X_\ell). 
\end{align*} So the user recovers the $j$th block $\pi_j(X_\ell)$ of $X_\ell$ correctly in this case also. We have established correctness. We now establish the privacy of the scheme. Let $\mathcal{A}$ be the set of integer vectors $(a_i(\mathbf{v})\in\{0,1,\ldots,n^k\}:i\in\{1,2,\ldots,k\}, \mathbf{v}\in\mathcal{V})$ with the restrictions that $a_i(\mathbf{v})=0$ if and only if $v_i=0$, and that for any fixed $i\in\{1,2,\ldots,k\}$ the integers $a_i(\mathbf{v})$ with $v_i\not=0$ are distinct. Let $r\in \{1,2,\ldots,n\}$ be fixed. The query $q_r=(b_i(r,\mathbf{v}):\mathbf{v}\in \mathcal{V},i\in\{1,2,\ldots,k\})$ lies in $\mathcal{A}$, since the functions $f_i$ and $\psi$ are injective and since (whether or not $i=\ell$) we have $b_i(r,\mathbf{v})=0$ if and only if $v_i=0$. Indeed, the query is uniformly distributed in $\mathcal{A}$. To see this, first note that the functions $f_i$ (for $i\not=\ell$) and $\psi$ are chosen independently. The values $b_\ell(r,\mathbf{v})$ for $v_\ell\not=0$ are uniform subject to being distinct since $\psi$ is a randomly chosen bijection. For $i\not=\ell$, the values $b_i(r,\mathbf{v})$ for $v_i\not=0$ are uniform subject to being distinct, since $f_i$ is a uniformly chosen injection from $\mathcal{C}^{[\ell]}$, and since at most one vertex in any component $C\in\mathcal{C}^{[\ell]}$ has its first entry equal to $r$. Hence the distribution of query $q_r$ is uniform on $\mathcal{A}$ as claimed. Since this distribution does not depend on $\ell$, privacy follows. Each server replies with $|\mathcal{V}|$ strings, each string of length $R/n^k$. Since there are $n$ servers, the download complexity is $nR|\mathcal{V}|/n^k$. So it remains to determine $|\mathcal{V}|$. 
For $0\leq s\leq k-1$, there are $n^{k-s-1}$ elements $v_1v_2\cdots v_k\in\mathcal{V}$ that begin with exactly $s$ zeros, since we may choose $v_{s+2},v_{s+3},\ldots,v_{k}\in\{0,1,\ldots,n-1\}$ arbitrarily and then $v_{s+1}$ is determined by the fact it is non-zero and $\sum_{j=1}^kv_j\equiv 0 \bmod n-1$. So \[ |\mathcal{V}|=\sum_{s=0}^{k-1}n^{k-s-1}=(n^k-1)/(n-1) \] and the download complexity is $(1-1/n^k)(n/(n-1))R$, as required. We may argue that the total upload complexity is $k^2n^{k}\log n$ as follows. Consider Server $S_r$. The integers $b_i(r,\mathbf{v})$ with $v_i=0$ are zero, and so do not need to be sent. There are exactly $kn^{k-1}$ integers $b_i(r,\mathbf{v})\in\{1,2,\ldots,n^k\}$ with $i\in\{1,2,\ldots,k\}$ and $\mathbf{v}\in\mathcal{V}$ with $v_i\not=0$. (To see this, note that there are $k$ choices for $i$, and $n$ choices for each component of $\mathbf{v}$ except the $i$th. But then $v_i$ is determined by the fact that it is non-zero and $\sum_{j=1}^kv_j\equiv 0\bmod n-1$.) Each integer can be specified using $k\log n$ bits, and so the query $q_r$ is $k^2n^{k-1}\log n$ bits long. Since there are $n$ servers, the total upload complexity is $k^2n^{k}\log n$ bits, as required. \end{proof} \subsection{An averaging technique} \label{subsec:averaging} The download complexity of both the PIR scheme due to Sun and Jafar~\cite{SuJa16d} and the scheme in Construction~\ref{con:sunjafar} above is $(1-1/n^k)(n/(n-1))R$. This is only slightly smaller than that of the more practical scheme in Construction~\ref{con:smallnumservers}, which has download complexity $(n/(n-1))R$. In fact, the \emph{expected} number of bits downloaded in Construction~\ref{con:smallnumservers} is $(1-1/n^k)(n/(n-1))R$, since a server is asked for an all-zero linear combination of blocks with probability $1/n^k$ and need not reply in this case. 
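The retrieval procedure of Construction~\ref{con:smallnumservers} is straightforward to simulate. The following Python sketch (an illustration with our own function names, not part of the formal scheme) runs one retrieval and checks that the user recovers the requested record; records are 0-indexed for convenience.

```python
import random

def make_queries(n, k, ell):
    """User side: choose a_1,...,a_k uniformly from Z_n, and shift the
    ell-th entry by r in the query sent to server r (r = 1,...,n)."""
    a = [random.randrange(n) for _ in range(k)]
    return [[(a[i] + r) % n if i == ell else a[i] for i in range(k)]
            for r in range(1, n + 1)]

def pi(x_bits, b, n):
    """pi_b(X): the b-th of the n-1 blocks of X, with pi_0 the all-zero block."""
    size = len(x_bits) // (n - 1)
    return [0] * size if b == 0 else x_bits[(b - 1) * size : b * size]

def server_reply(db, q, n):
    """Server side: XOR together one block per record, as selected by q."""
    size = len(db[0]) // (n - 1)
    reply = [0] * size
    for i, b in enumerate(q):
        reply = [x ^ y for x, y in zip(reply, pi(db[i], b, n))]
    return reply

def recover(queries, replies, ell, n):
    """User side: pair the reply with b_{ell,r} = 0 against the reply with
    b_{ell,r'} = j to recover block j, for j = 1, ..., n-1."""
    r0 = next(r for r in range(n) if queries[r][ell] == 0)
    record = []
    for j in range(1, n):
        rj = next(r for r in range(n) if queries[r][ell] == j)
        record += [x ^ y for x, y in zip(replies[r0], replies[rj])]
    return record

# One retrieval: n = 3 servers, k = 4 records of R = 6 bits each, (n-1) | R.
n, k, R = 3, 4, 6
db = [[random.randrange(2) for _ in range(R)] for _ in range(k)]
queries = make_queries(n, k, ell=2)
replies = [server_reply(db, q, n) for q in queries]
assert recover(queries, replies, 2, n) == db[2]
```

One can also observe the averaging phenomenon directly in such a simulation: whenever a query vector is all-zero the corresponding reply is the all-zero string, and that server's reply carries no information.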
This section describes an `averaging' technique which transforms Construction~\ref{con:smallnumservers} into a scheme with good (worst case) download complexity, at the price of a much stronger divisibility constraint on the record length. This technique will work for a wide range of PIR schemes, but in the case of Construction~\ref{con:smallnumservers} it produces a scheme with optimal download complexity $(1-1/n^k)(n/(n-1))R$. Moreover, the upload complexity is considerably smaller than that of the schemes described in~\cite{SuJa16d} and Construction~\ref{con:sunjafar}. Before giving the details, we describe the general idea. Chan, Ho and Yamamoto~\cite[Remark 2]{CHY15} observed that a PIR scheme with good upload complexity (but long record lengths) can be constructed by dividing each record into blocks, then using copies of a fixed PIR scheme for shorter records operating on each block in parallel. Crucially, the same randomness (and so the same queries) can be used for each parallel copy of the scheme, and so upload complexity is low. The `averaging' construction operates in a similar way. However, rather than using the same randomness we use different but predictably varying randomness for each parallel copy. The server can calculate queries for each copy of the scheme from just one query, so upload complexity remains low. But (because queries vary over all possibilities) the resulting scheme has (worst case) download complexity equal to the average number of bits downloaded in the Chan, Ho and Yamamoto construction. In more detail, we modify Construction~\ref{con:smallnumservers} as follows. Suppose that $n^k(n-1)\mid R$. We divide an $R$-bit string $X$ into $n^k(n-1)$ blocks, each of size $R/(n^k(n-1))$. We index these blocks by pairs $(b,\mathbf{x})$ where $b\in\{1,2,\ldots,n-1\}\subseteq\mathbb{Z}_n$ and $\mathbf{x}\in\mathbb{Z}_n^k$. We write $\pi_{(b,\mathbf{x})}(X)$ for the block of $X$ that is indexed by $(b,\mathbf{x})$. 
For any $\mathbf{x}\in \mathbb{Z}_n^k$, we write $\pi_{(0,\mathbf{x})}(X)$ for the all-zero string $0^{R/(n^k(n-1))}$ of length $R/(n^k(n-1))$. \begin{construction} \label{con:averaging} Let $n$ be an integer such that $n^k(n-1)\mid R$. Suppose there are $n$ servers, each storing the entire database. \begin{itemize} \item A user who requires Record $\ell$ chooses $k$ elements $a_1,a_2,\ldots,a_k\in\mathbb{Z}_{n}$ uniformly and independently at random. For $r=1,\ldots,n$, server $S_r$ is sent the vector $q_r=(b_{1r},b_{2r},\ldots,b_{kr})\in\mathbb{Z}_{n}^k$, where \[ b_{ir}=\begin{cases} a_i+r\bmod n&\text{if }i=\ell,\\ a_i&\text{otherwise}. \end{cases} \] \item For $r\in\{1,2,\ldots,n\}$ and $\mathbf{x}\in\mathbb{Z}_n^k$, define the string $c_{(r,\mathbf{x})}$ of length $R/(n^k(n-1))$ by \[ c_{(r,\mathbf{x})}=\bigoplus_{i=1}^k \pi_{(b_{ir}+x_i,\mathbf{x})}(X_i). \] The server $S_r$ returns the string $c_{(r,\mathbf{x})}$, for all $\mathbf{x}=(x_1,x_2,\ldots,x_k)\in\mathbb{Z}_n^k$ such that $\mathbf{x}+q_r\not=\mathbf{0}$. So $S_r$ returns $n^k-1$ strings. \item To recover the block of $X_\ell$ indexed by a pair $(j,\mathbf{x})$, the user finds the integers $r$ and $r'$ such that $b_{\ell r}+x_\ell=0$ and $b_{\ell r'}+x_{\ell}=j$. The user then computes $c_{(r,\mathbf{x})}\oplus c_{(r',\mathbf{x})}$. \end{itemize} \end{construction} \begin{theorem} \label{thm:construction6} Construction~\ref{con:averaging} is an $n$-server PIR scheme with download complexity $(1-1/n^k)\frac{n}{n-1}R$. The scheme has upload complexity $nk\log n$ and total storage $nkR$ bits. \end{theorem} \begin{proof} We begin with the correctness of the scheme. Exactly as in the proof of Theorem~\ref{thm:construction3}, we note that $r$ and $r'$ exist since $b_{\ell r}+x_\ell\in\{0,1,\ldots,n-1\}$ takes on each possible value once as $r\in\{1,2,\ldots,n\}$ varies. 
Moreover, we note that the string $c_{(r,\mathbf{x})}$ is all zero if $\mathbf{x}+q_r=\mathbf{0}$ (and similarly the string $c_{(r',\mathbf{x})}$ is all zero if $\mathbf{x}+q_{r'}=\mathbf{0}$) and so the user always receives enough information to calculate $c_{(r,\mathbf{x})}\oplus c_{(r',\mathbf{x})}$. Let $\mathbf{x}=(x_1,x_2,\ldots,x_k)$. When $i\not=\ell$ \[ \pi_{(b_{ir}+x_i,\mathbf{x})}(X_i)\oplus \pi_{(b_{ir'}+x_i,\mathbf{x})}(X_i)=\pi_{(a_{i}+x_i,\mathbf{x})}(X_i)\oplus \pi_{(a_{i}+x_i,\mathbf{x})}(X_i)=0^{R/(n^k(n-1))}. \] When $i=\ell$ \[ \pi_{(b_{ir}+x_i,\mathbf{x})}(X_i)\oplus \pi_{(b_{ir'}+x_i,\mathbf{x})}(X_i)=\pi_{(0,\mathbf{x})}(X_i)\oplus \pi_{(j,\mathbf{x})}(X_i)=\pi_{(j,\mathbf{x})}(X_i)= \pi_{(j,\mathbf{x})}(X_\ell). \] Hence \[ c_{(r,\mathbf{x})}\oplus c_{(r',\mathbf{x})}=\bigoplus_{i=1}^k(\pi_{(b_{ir}+x_i,\mathbf{x})}(X_i)\oplus \pi_{(b_{ir'}+x_i,\mathbf{x})}(X_{i}))=\pi_{(j,\mathbf{x})}(X_{\ell}). \] So the user recovers the block of $X_{\ell}$ indexed by $(j,\mathbf{x})$ correctly. Privacy follows from the privacy of Construction~\ref{con:smallnumservers}, as the method for generating queries is identical. The total storage is $nkR$, since each of $n$ servers stores the entire $kR$-bit database. Each query $q_r$ is $k\log n$ bits long, since an element of $\mathbb{Z}_n$ may be specified using $\log n$ bits. Hence the upload complexity is $nk\log n$. Since there are $n$ servers, and each server returns $n^k-1$ strings of length $R/(n^k(n-1))$, the download complexity is $(1-1/n^k)\frac{n}{n-1}R$. \end{proof} \section{Conclusions and future work} \label{sec:conclusion} In this paper, we have used classical PIR techniques to prove bounds on the download complexity of PIR schemes in modern models, and we have presented various constructions for PIR schemes which are either simpler or perform better than previously known schemes. Various interesting problems remain in this area. 
We first consider schemes with optimal download complexity: \begin{problem} Are there PIR schemes with fewer than $R+1$ bits of download complexity? \end{problem} Our paper, like the rest of the literature, only considers PIR schemes over binary channels, and in this model the answer is `no'. But the proofs of this fact in this paper and in Shah et al.~\cite{SRR14} both rely on the channel being binary: more than $R$ bits of download implies that at least $R+1$ bits are downloaded. So this problem is still open if we extend the model to schemes that do not necessarily use binary channels. We now return to the standard binary channel model. \begin{problem} \label{prob:Shah} Are there PIR schemes with download complexity $R+1$ and total storage linear in $R$? \end{problem} This result was claimed by Shah et al.~\cite{SRR14}, but we believe that a proof is still not known. A proof of this result might depend on a more detailed structural analysis of PIR schemes with $R+1$ bits of download. As a first step, we believe the following to be of interest: \begin{problem} Theorem~\ref{thm:almostall} bounds the probability that only $R$ bits are downloaded in a PIR scheme with (worst case) download complexity $R+1$. Is this bound tight? \end{problem} We conjecture that the bound could be significantly improved in some cases. We now consider families of schemes that have good asymptotic complexity as $R\rightarrow\infty$. \begin{problem} Does there exist a family of schemes with download complexity $(1+o(1))R$ and linear total storage? \end{problem} Note that an affirmative solution to Problem~\ref{prob:Shah} would imply an affirmative solution to this problem. \begin{problem} Are there practical PIR schemes that approach asymptotic capacity as $R$ grows? \end{problem} The schemes by Sun and Jafar~\cite{SuJa16b} and the schemes presented in this paper have the strong restriction that $n^k$ must divide $R$.
This makes the schemes impractical for many parameters. \begin{problem} Is there a combinatorial proof that provides a tight upper bound on the asymptotic capacity as $R\rightarrow\infty$? \end{problem} We comment that the proof in Sun and Jafar~\cite{SuJa16b} uses information-theoretic techniques. A combinatorial proof might give extra structural information for schemes meeting the bound, and might improve the bound in non-asymptotic cases. Finally, we turn to larger questions. \begin{problem} Can we find better constructions for PIR schemes? \end{problem} Schemes are of interest if they improve per-server storage, total storage, or upload or download complexity, if they reduce the number of servers needed, or if the divisibility conditions on parameters such as $R$ are weakened. \begin{problem} Can the techniques from this paper be applied to establish bounds or give constructions in other models, such as those discussed in Subsection~\ref{subsec:context}? \end{problem} \paragraph{Acknowledgement} The authors would like to thank Doug Stinson for comments on an earlier draft.
\section{Introduction} Finite strongly coupled systems of charged particles in external traps are of high interest in many fields. Examples include ion crystals~\cite{Wineland1987,drewsen1998}, quantum dots~\cite{Alex2001} and dusty plasma crystals~\cite{Arp2004,bonitz2008pop}. Dusty plasmas allow for an easy realization of strong coupling in laboratory experiments. They typically consist of $\mu$m-sized particles in an rf discharge. Due to their high mass, their motion occurs on a macroscopic timescale, which makes them an ideal system for studying dynamical properties in the strong coupling limit. In the case of an isotropic parabolic confinement and (screened) Coulomb interaction the ground states are nested spherical shells (3D) or concentric rings (2D). For classical systems the ground states are found by minimizing the potential energy $U$ with respect to all particle positions. This can be a difficult task since, in general, $U$ has many minima which may be energetically very close to each other, particularly in 3D. To find the lowest energy configuration one has to avoid trapping in a metastable state, which can be a serious problem for numerical computations. A detailed analysis of the ground states of 3D Coulomb clusters was presented in~\cite{Patrick2005, Arp2005}. The ground states of small spherical Yukawa clusters for a wide range of the screening parameter can be found in~\cite{Henning}. Besides the ground state, metastable states were also found in the simulations~\cite{Patrick2005, Arp2005, Apolinario}. Furthermore, a fine structure was observed, i.e. states with the same number of particles on each shell but with a different arrangement on the same shell~\cite{Patrick2005}. Coulomb or Yukawa balls have been produced in dusty plasma experiments~\cite{Arp2004}. They are well explained by a simple model of harmonically confined particles interacting by a Yukawa potential for $N=100\dots 500$~\cite{bonitz2006}.
Recently, metastable states of Yukawa balls have been investigated in~\cite{block2008} for small particle numbers $N=27$ and $N=31$. It was found that metastable states often occurred with a higher probability than the ground state. This was confirmed by MD simulations, but no theoretical explanation was given. Providing such an explanation is the goal of the present paper. We apply Monte Carlo (MC) simulations as well as extensive molecular dynamics (MD) simulations with a broader parameter range than before, confirming the main results of~\cite{block2008}. For a theoretical explanation we apply an analytical method based on the classical canonical partition function~\cite{baletto2005}. This paper is organized as follows. In Sec.~\ref{sec:model} we present the Hamiltonian and explain our simulation methods. Results of the MD simulations are given in Sec.~\ref{sec:MDresults}. In Sec.~\ref{sec:analyt} we introduce an analytical method for the probabilities of stationary states in thermodynamic equilibrium. The results are compared to MC simulations. Section~\ref{sec:comp} compares the theoretical results with the experiments. The last section summarizes our findings and discusses the applicability range of our models. \section{Model and simulation idea}\label{sec:model} \subsection{\label{ssec:model}Hamiltonian} The system of $N$ identical particles with charge $Q$ and mass $m$ in an isotropic, parabolic confinement \begin{equation}\label{eqn:Vext} V_{ext}(r)=\frac{m}{2}\omega_0^2 {r}^2 \end{equation} ($r=|\bm{r}|$) is described by the Hamiltonian \begin{equation}\label{eqn:Hamiltonian} H=\sum_{i=1}^{N}\left\{\frac{p_i^2}{2m}+V_{ext}({r}_i)\right\} + \sum_{i>j}^N V(|\bm{r}_i-\bm{r}_j|). \end{equation} The interaction is assumed to be a shielded Coulomb potential of the form \begin{equation}\label{eqn:Yukawa} V(r)=\frac{Q^2}{r}e^{-\kappa r}, \end{equation} where the range of the potential is controlled by the screening parameter $\kappa$.
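In the dimensionless units used throughout this paper (lengths in $r_0=({2Q^2}/{m\omega_0^2})^{1/3}$, energies in $E_0=Q^2/r_0$, introduced below), the confinement energy of a particle becomes $r^2$ and the pair interaction $e^{-\kappa r}/r$, with $\kappa$ measured in units of $r_0^{-1}$. As a minimal numerical illustration (ours, not part of the model description), the total potential energy can be evaluated as follows:

```python
import numpy as np

def potential_energy(pos, kappa):
    """Total potential energy U in reduced units: pos is an (N, 3) array of
    positions in units of r0, kappa is in units of 1/r0, and the result is in
    units of E0, i.e. U = sum_i r_i^2 + sum_{i>j} exp(-kappa d_ij) / d_ij."""
    confinement = np.sum(pos ** 2)                 # parabolic trap term
    diff = pos[:, None, :] - pos[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(pos), k=1)            # each pair i > j counted once
    interaction = np.sum(np.exp(-kappa * d[iu]) / d[iu])
    return confinement + interaction
```

For two particles at $\pm(1/2,0,0)$ and $\kappa=0$ this gives $2\cdot\tfrac14+1=\tfrac32$, in agreement with a hand calculation.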
Despite its simplicity this model is of relevance for many systems, such as colloids, and has proven to accurately describe the spherical dust crystals (Yukawa balls) observed in experiments~\cite{bonitz2006}. In dusty plasmas $\kappa$ is given by the inverse Debye screening length. In the following we will treat it as a free parameter and focus on the general behavior of the model~(\ref{eqn:Hamiltonian}). Using, as a particular example, typical dusty plasma parameters will allow us to make comparisons with the experimental observations of Ref.~\cite{block2008}. Results will be given in units of the distance $r_0=({2Q^2}/{m\omega_0^2})^{1/3}$ and the corresponding Coulomb energy $E_0={Q^2}/{r_0}$. Frequencies and forces are given in units of $\omega_0$ and $m\omega_0^2 r_0$, respectively. The ground (metastable) states are the global (local) minima of the potential energy $U$, \begin{equation} U(\bm{r}_1,\dots,\bm{r}_N)=\sum_{i=1}^{N}V_{ext}(r_i) + \sum_{i>j}^N V(|\bm{r}_i-\bm{r}_j|). \end{equation} In both cases the total force on all particles vanishes and the system is in a stable configuration, i.e. stable against small perturbations. \subsection{\label{ssec:MC}Monte Carlo (MC)} The MC simulations use the standard Metropolis algorithm~\cite{computational} with the Hamiltonian~(\ref{eqn:Hamiltonian}), but without the kinetic energy part. Starting from the classical ground state at $T=0$, the system is given a finite temperature. For a fixed temperature we performed $10^7$ MC steps and determined the configuration every $10^4$-th step. The temperature is then increased and the same procedure repeated. Ergodicity of the procedure was checked by using different initial configurations. Following this method we calculate the probability as a function of $T$ from the number of occurrences of the different states. \subsection{\label{ssec:MD}Molecular dynamics (MD)} In the MD simulations we follow a different approach.
Here we solve the equations of motion for particles in a parabolic trap interacting through the Yukawa potential~(\ref{eqn:Yukawa}), but include an additional damping term to simulate the annealing process the way it occurs in the experiment, as explained in~\cite{block2008}. This is different from the MC simulations, where the particles are in contact with a heat bath and maintain a constant temperature. It also differs from the MD simulations in~\cite{block2008}, which were performed at finite temperature. Here, we perform substantially larger simulations and systematically scan a broad parameter range. For the $i$-th particle the equation of motion we solve is \begin{equation}\label{eqn:EOM} m\ddot{\bm{r}}_i=-\nabla_i U(\bm{r}_1,\dots,\bm{r}_N)-\nu m \dot{\bm{r}}_i, \end{equation} where $\nu$ is the collision frequency, which will be given in units of $\omega_0$. In dusty plasmas friction is mainly due to the neutral gas. The simulation is initialized with random particle positions and velocities in a square box. To stop the simulation and determine the configuration we use two similar, but not equivalent, conditions: \begin{enumerate} \item[(A)] The particles' mean kinetic energy drops below a value $\left< E_{kin}^{min}\right>$ of typically $10^{-6}-10^{-8}$. \item[(B)] The force on each particle due to the confinement and the other particles decreases below $10^{-4}$. \end{enumerate} It is tempting to regard (A) as a proper condition, but although the two look equivalent at first glance, we will show that (B) has to be used. The difference lies in the definition of a stable configuration. If the particles lose their initial kinetic energy before they have reached a local minimum, the simulation could be stopped before the particle motion has effectively ended. This problem is circumvented by condition (B), which makes direct use of the definition of a stable state, namely that the force on each particle due to $U$ vanishes.
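A minimal sketch of this relaxation procedure (our own illustration, in reduced units with $m=1$, using a simple semi-implicit Euler integrator; step sizes, the initial-separation safeguard and all parameter names are choices of ours, not values from the experiment) reads:

```python
import numpy as np

def forces(pos, kappa):
    """-grad U in reduced units: trap force -2 r_i plus the pairwise Yukawa
    repulsion (1 + kappa d) exp(-kappa d) / d^3 * (r_i - r_j), summed over j."""
    diff = pos[:, None, :] - pos[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(d, 1.0)                       # placeholder, zeroed below
    w = (1.0 + kappa * d) * np.exp(-kappa * d) / d ** 3
    np.fill_diagonal(w, 0.0)                       # exclude self-interaction
    return -2.0 * pos + np.einsum('ij,ijk->ik', w, diff)

def anneal(N=5, kappa=1.0, nu=3.2, dt=2e-3, fmax=1e-4, seed=1, max_steps=500_000):
    """Integrate the damped equation of motion until condition (B) holds:
    the force on every particle has dropped below fmax."""
    rng = np.random.default_rng(seed)
    while True:                                    # random start in a box, but
        pos = rng.uniform(-1.0, 1.0, (N, 3))       # avoid numerically stiff
        d = np.linalg.norm(pos[:, None] - pos[None], axis=-1)
        if d[np.triu_indices(N, 1)].min() > 0.3:   # near-overlaps in this sketch
            break
    vel = rng.normal(0.0, 0.1, (N, 3))
    for _ in range(max_steps):
        F = forces(pos, kappa)
        if np.linalg.norm(F, axis=1).max() < fmax: # stopping condition (B)
            return pos
        vel += dt * (F - nu * vel)                 # semi-implicit Euler step
        pos += dt * vel
    raise RuntimeError("condition (B) not reached within max_steps")
```

Since the stopping test uses the forces rather than the kinetic energy, the returned configuration is guaranteed to be (numerically) stationary, which is exactly the point of condition (B).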
The screening parameter, the friction coefficient as well as the lower limit for the mean kinetic energy are varied. For each parameter setting the simulation is repeated $3000$--$5000$ times to obtain accurate statistics. We consider systems with 31 and 27 particles, as was done in the experiment. As another example we used a cluster with 40 particles, because here the ground state shell configuration abruptly changes from (34,6) to (32,8) at $\kappa=0.415$ as the screening parameter is increased, without the configuration (33,7) ever being the ground state~\cite{Henning}. This gives rise to the question of how often this configuration can actually occur in experiments. \section{\label{sec:MDresults}MD simulation results} In this section we present the results of our first-principle MD simulations. The main parameters determining the occurrence frequencies of different metastable states for a given $N$ are the screening parameter $\kappa$ and the friction $\nu$. We therefore discuss the dependence on $\kappa$ and $\nu$ in detail. As an example of particular interest we will consider the parameter values of dusty plasma experiments, which are in the range of $\kappa\approx0.4\dots 1.0$~\cite{block2008}. This case will be dealt with in Section~\ref{sec:comp}. We first discuss the effect of the damping rate on the occurrence probabilities. It will turn out that with a properly chosen rate we can produce very general results for different screening lengths which do not depend on the exact damping coefficient chosen and hold for any rate in the overdamped limit. The effect of screening will then be examined in the following section. \subsection{Effect of friction}\label{ssec:friction} A typical simulation result is shown in Fig.~\ref{fig:fricruns}. For slow cooling ($\nu=0.05$) the particles are not hindered by friction and can move according to the interparticle and confinement forces.
They continuously lose kinetic energy until they are trapped in a local minimum of the potential energy $U$. There they are damped further until the simulation is stopped. It is interesting to see that there exist more metastable states than different shell occupations, as was first observed in~\cite{Patrick2005}, see also~\cite{Apolinario}. Details are given in Table~\ref{tab:fricstates}. In the case of strong damping ($\nu=5.3$) the situation is different. Here the particles are readily slowed down after the initialization process in the box. Their motion is strongly affected by friction and may be interrupted even before they are trapped in a local minimum. If condition~(A) is used to stop the simulation, it is not clear whether the particles are in a stable state. The reason is that, due to the rapid damping, they can be sufficiently slowed down even though they are not in a potential minimum but on a descending path, and would reach the stable configuration only at a later time. \begin{figure} \includegraphics[width=0.45\textwidth]{kappa1.4-nu.eps} \caption{(Color online) Stationary states observed in the MD-simulations for $N=31,\,\kappa=1.4$ and $\left< E_{kin}^{min}\right>=10^{-8}$. The runs are sorted by the energy of the stationary state, see also Table~\ref{tab:fricstates}. For slow cooling (black bars, $\nu=0.05$) one can clearly see distinct states which correspond to the horizontal lines. The length of the bold lines is proportional to the occurrence probabilities. In the case of strong friction (red, dashed line, $\nu=5.3$) the particles often lose their kinetic energy before they can settle into the equilibrium positions and the fine structure (different states with the same shell configuration) cannot be resolved.\label{fig:fricruns}} \end{figure} \begin{table} \caption{Energy difference between metastable states and the ground state (the ground state and its energy are given by italic numbers) as seen in Fig.~\ref{fig:fricruns}.
States with the same shell configuration but different energy differ only by the arrangement of the particles on the same shell (fine structure).} \begin{ruledtabular}\label{tab:fricstates} \begin{tabular}{cc|cc} $\Delta E/N$ & config. & $\Delta E/N$ & config.\\ \hline ${\textsl{3.030266}}$ & {\textsl{(27,4)}}& $0.000479$ & (26,5)\\ $0.000006$ & (27,4) & $0.000499$ & (26,5)\\ $0.000009$ & (27,4) & $0.000530$ & (26,5)\\ $0.000291$ & (26,5) & $0.000656$ & (25,6)\\ $0.000372$ & (26,5) & $0.000669$ & (25,6) \end{tabular} \end{ruledtabular} \end{table} \begin{figure} \centering \includegraphics*[width=0.42\textwidth]{N27kappa0.6MD.eps}\vspace{0.13cm} \includegraphics*[width=0.42\textwidth]{N31kappa0.8MD.eps}\vspace{0.13cm} \includegraphics*[width=0.42\textwidth]{N40kappa1.0MD.eps} \caption{(Color online) Effect of friction on the occurrence probabilities obtained with condition~(B) for three different numbers of particles. In a) and b) horizontal solid and dashed lines indicate experimental mean and standard deviation, respectively~\cite{block2008}. For $N=27$ the experimental values for the clusters (23,4) and (24,3) are the same. In c) solid lines indicate Yukawa interaction with $\kappa=1.0$ [ground state (32,8)] whereas dashed lines show results for Coulomb interaction [ground state (34,6)]. In all cases slow cooling favors the ground states over metastable states.\label{fig:friction}} \end{figure} Fig.~\ref{fig:friction} shows the influence of friction on the occurrence probabilities in more detail. For fixed screening the probability of finding the ground state configuration increases when the friction coefficient is decreased. Here the particles are cooled down more slowly and it is more likely that they reach the system's true ground state. During the cooling process they still have a sufficiently high kinetic energy and time to escape from a local minimum until the force on each particle vanishes. 
In the case of strong friction the particles can fall into a nearby minimum, and leaving it becomes more difficult due to the rapid loss of kinetic energy. The typical simulation time until the forces are small enough is longer than for intermediate friction strength. Once cooled down, the particles are pushed along the gradient of the potential energy surface until they reach a stable state. Thus the results can depend on how far the system's temperature is decreased. One can see that for $\nu>2$, i.e. in the overdamped regime, the probabilities have practically saturated. For fast cooling, i.e. large friction, metastable states can occur with a comparable or even higher probability than the ground state. The $N=40$ cluster shows a qualitatively different behavior compared to the $N=27,\,31$ clusters. In the case of $\kappa=1.0$ the lines corresponding to different configurations do not intersect and the ground state is the most probable state regardless of the damping coefficient. In contrast, in the Coulomb limit, $\kappa=0$, the most probable state is always a metastable state, except for very small friction, $\nu \le 0.01$. Dusty plasma experiments are performed in the overdamped regime, i.e. here $\nu$ is of the order of $3-6$~\cite{block2008}. Since in this limit the probabilities depend only very weakly on the damping rate, the results presented in the next section for $\nu=3.2$ should hold for any such damping coefficient. Even though this was shown only for a few examples, we believe that it also holds for other particle numbers and screening lengths. \begin{figure} \centering {\includegraphics*[width=0.42\textwidth]{N27nu3.2.eps}}\vspace{0.13cm} {\includegraphics*[width=0.42\textwidth]{N31nu3.2.eps}}\vspace{0.13cm} {\includegraphics*[width=0.42\textwidth]{N40nu3.2.eps}} \caption{(Color online) Effect of screening for $\nu=3.2$.
Solid lines show the results obtained with condition~(B) while dotted and dashed lines indicate use of condition~(A) with $\left< E_{\text{kin}}^{\text{min}}\right>=10^{-8},\,10^{-7}$, respectively. Arrows show the ground state configuration to the left or right from the vertical line. Where available horizontal solid and dashed lines indicate experimental mean and standard deviation~\cite{block2008}. For the $N=27$ cluster the experimental values for the configurations (23,4) and (24,3) are the same.\label{fig:kappa}} \end{figure} \subsection{Effect of screening}\label{ssec:screening} The screening dependence of the ground state shell configurations of spherical Yukawa clusters in the absence of damping has been analyzed in Ref. \cite{bonitz2006}. The general trend is that increased screening favors ground state configurations with more particles on the inner shell(s). A systematic analysis in a large range of particle numbers and screening parameters \cite{Henning} confirms this trend. Here we extend this analysis to spherical crystals in the presence of damping and also consider the screening dependence of the occurrence probability of metastable states. For a fixed friction coefficient in the overdamped limit the effect of screening is shown in Fig.~\ref{fig:kappa}. The different ground state configurations are indicated by the numbers with arrows in the figures. As in the undamped case, at some finite value of $\kappa$, a configuration with an additional particle on the inner shell becomes the ground state. Consider now the probability to observe the ground and metastable states. For weak screening the ground states (27,4) and (24,3) are the most probable states in the cases $N=31$ and $N=27$, respectively. At the same time in both cases, the probability of the configuration with one more particle on the inner shell grows with $\kappa$, until it eventually becomes even more probable than the ground state. 
Note that this occurs much earlier (at a significantly smaller value of $\kappa$) than the ground state change. For $N=31$ this trend is observed twice: the probability of the configuration (26,5) first increases with $\kappa$ and reaches a maximum around $\kappa\approx 1$. For $\kappa>2$ this configuration becomes less probable than the configuration (25,6), i.e. again a configuration with an additional particle on the inner shell becomes more probable with increased screening. Different behavior is observed for the $N=40$ cluster, where the ground state for weak screening (34,6) is never the most probable state. For large screening, $\kappa\ge 0.6$, the new ground state (32,8) has the highest probability, but this happens only substantially later (for larger $\kappa$) after this state has become the energetically lowest one. This is due to the existence of a third state (33,7) which has the highest probability for $\kappa\le 0.6$ although it is never the energetically lowest one. Summarizing the above observations, we confirm that in spherical Yukawa clusters the ground state is not necessarily the most probable state. Often, a metastable state with more particles on the inner shell is observed substantially (in some cases up to five times) more frequently. Further, increased screening tends to favor states with more particles on the inner shell. We will give an explanation for this behavior in the next section by using an analytical model for the partition function. Before doing this we comment on some technical details which are important in the present MD simulations. For certain intervals of the screening parameter the results for the probabilities depend on how far the system is cooled down. The smaller $\left< E_{kin}^{min}\right>$ is chosen, the more one state (generally the ground state) is favored over another. Choosing a smaller value also means increasing the mean simulation time.
As discussed before, the particles are heavily damped and lose their initial kinetic energy on a short timescale. Their motion is then determined by the shape of the energy surface. Using condition~(B) to terminate the simulation, we obtain converged results where the particles have reached a local minimum. Thus, if the simulation were continued, the configuration would remain the same. \section{Analytical Theory of stationary state probabilities}{\label{sec:analyt}} \subsection{{\label{ssec:approx}}Harmonic approximation} The analytical approach to calculating the occurrence probabilities is based on the classical canonical partition function $Z(T,\omega_0,N)$. Instead of the dependence on volume (or density) as in a homogeneous system, here thermodynamic quantities depend on the confining strength $\omega_0$. The partition function can be evaluated analytically in the harmonic approximation, see e.g. Ref.~\cite{baletto2005}. Here the potential energy of a given state is expanded around a local minimum with energy $E_s^0$, where $s$ denotes the ground or a metastable state. It can be written as \[ U_s\approx E_s^0+\frac{1}{2}\sum_{i,j=1}^N\sum_{\alpha,\beta=1}^{3}\left .\frac{\partial^2 U(\bm{r})}{\partial \bm{r}_{i,\alpha} \partial\bm{r}_{j,\beta}}\right|_{\bm{r}=\bm{r}^{0s}} \delta \bm{r}_{i,\alpha}\delta\bm{r}_{j,\beta}, \] where $\bm{r}^{0s}=(\bm{r}_{1}^{0s},\dots,\bm{r}_{N}^{0s})$ denotes the $3N$-dimensional vector of the particles' equilibrium positions and $\delta\bm{r}_{i,\alpha}=\bm{r}_{i,\alpha} - \bm{r}_{i,\alpha}^{0s}$ the displacement vector.
Transforming to normal coordinates $\xi_{s,i}$ this turns into a sum of decoupled harmonic oscillators \begin{equation}\label{eqn:expansion} U_s\approx E_s^0+\frac{1}{2}\sum_{i=1}^{f}m\omega_{s,i}^2\xi_{s,i}^2, \hspace{0.25cm} f=3N-3, \end{equation} with eigenfrequencies $\omega_{s,i}=\sqrt{\lambda_{s,i}/m}$, where $\lambda_{s,i}$ are the eigenvalues of the Hessian \[ U_{i,\alpha,j,\beta}=\left .\frac{\partial^2 U(\bm{r})}{\partial \bm{r}_{i,\alpha} \partial\bm{r}_{j,\beta}}\right|_{\bm{r}=\bm{r}^{0s}}. \] The expansion~(\ref{eqn:expansion}) includes the particles' three center of mass oscillations in the trap with $\omega=1$ (in units of $\omega_0$). Furthermore, we assume that the vibrational and the three rotational modes of the whole system ($\omega=0$) are decoupled; the latter are therefore eliminated from the sum~(\ref{eqn:expansion}). In the principal axes frame the rotational kinetic energy can then be expressed as \[ T_s^{rot}=\sum_{i=1}^3 \frac{L_{s,i}^2}{2I_{s,i}}, \] with angular momenta $L_{s,i}$ and constant principal moments of inertia $I_{s,i}$. In this approximation the full energy of the state $s$ is, to second order in the displacements, \begin{eqnarray}\label{eqn:energyapprox} E_s=E_s^0+\sum_{i=1}^{f}\left\{\frac{p_{\xi_{s,i}}^2}{2m}+\frac{m}{2}\omega_{s,i}^2\xi_{s,i}^2\right\}+\sum_{i=1}^3 \frac{L_{s,i}^2}{2I_{s,i}}. \end{eqnarray} The first term in the braces denotes the vibrational kinetic energy $T_s^{vib}$. The harmonic approximation is only applicable for low temperatures (or strong coupling), when the particles oscillate around the equilibrium positions with small amplitudes. \subsection{{\label{ssec:partition}}Partition function} The general form of the classical canonical partition function is \begin{equation}{\label{eqn:classical}} Z_s=\frac{1}{(2\pi\hbar)^{3N}}\int_{-\infty}^{\infty}dp^{3N} dq^{3N} e^{-\beta H^s(p_i,q_i)}.
\end{equation} Here it is written for a general Hamiltonian $H^s(p_i,q_i)$ with $3N$ degrees of freedom, generalized coordinates $q_i$ and conjugate momenta $p_i$. Since in our case the energy contributions are independent, it can be factorized according to \begin{equation}\label{eqn:PartitionFunction} Z_s=Z_s^{int} Z_s^{vib} Z_s^{rot} \end{equation} with the internal partition function \begin{equation} Z_s^{int}=e^{-\beta E_s^0}. \end{equation} In addition, each state carries the degeneracy factor $n_s$, calculated as \begin{equation}\label{eqn:deg} n_s=\frac{N!}{\prod_{i=1}^L N_i^s!}, \end{equation} where $L$ is the number of shells and $N_i^s$ the occupation number of shell $i$, with $\sum_{i=1}^L N_i^s=N$. The degeneracy factor $n_s$ denotes the number of possibilities to form a configuration with shell occupation $(N_1,N_2,\dots,N_L)$ from distinguishable particles. $Z_s^{vib}$ is the partition function for $f$ independent harmonic oscillators, while $Z_s^{rot}$ is related to the rotational degrees of freedom. The results for our specific case with the energy given by Eq.~(\ref{eqn:energyapprox}) can be found in~\cite{baletto2005} and read \begin{subequations}\label{eqn:twopartition} \begin{align} Z_s^{vib}(T)&=\left(\frac{k_B T}{\hbar \Omega_{s}}\right)^f,\\ Z_s^{rot}(T)&=\left(\frac{2\pi k_B T \bar{I} _s}{\hbar^2}\right)^{3/2}. \end{align} \end{subequations} The expressions include the mean geometric eigenfrequency $\Omega_s=(\prod_{i=1}^f \omega_{s,i})^{1/f}$ and the mean moment of inertia $\bar{I} _s=(I_{s,1} I_{s,2} I_{s,3})^{1/3}$. To obtain the total partition function $Z(T,\omega_0,N)$ the contributions of all $M$ (metastable) states are summed up, i.e. \[ Z=\sum_{\sigma=1}^{M} n_{\sigma} Z_{\sigma}. \] \subsection{Probability of stationary states} Collecting the results of subsection~\ref{ssec:partition}, the stationary state probabilities are given by \begin{equation}\label{eqn:Probability} P_s=\frac{n_s Z_s}{Z}=\frac{n_s Z_s}{\sum_{\sigma=1}^{M} n_{\sigma} Z_{\sigma}}.
\end{equation} For our clusters of interest with $27-40$ particles the moments of inertia for different states are equal to a good approximation~(cf.~Table~\ref{tab:inertia} for $N=27$) and can be canceled. Similar behavior is observed for $N=31,\,40$. For low particle numbers, $N\lesssim 10$, they should be included, since here a slight change of the configuration can alter the moment of inertia by a significant amount, but this is not of importance for the present analysis. \begin{table} \caption{Mean shell radii $R_1,\,R_2$ of first and second shell for states observed in the MD simulations for $N=27$ and $\kappa=0.6$. The relative statistical weight $\tilde{q}_s=(\bar{I}_s/\bar{I}_1)^{3/2}$ caused by different moments of inertia can be neglected in the computation of the probabilities since $\tilde{q}_s\approx 1$ for all states.} \begin{ruledtabular}\label{tab:inertia} \begin{tabular}{cccccc} state $s$& configuration & $R_2$ & $R_1$ & $\tilde{q}_s$ \\ \hline 1 & (24,3) & $1.6175$ & $0.5977$ & $1$\\ 2 & (23,4) & $1.6413$ & $0.6963$ & $1.0009$\\ 3 & (23,4) & $1.6413$ & $0.6957$ & $1.0009$\\ 4 & (25,2) & $1.5935$ & $0.4542$ & $1.0004$\\ 5 & (25,2) & $1.5934$ & $0.4543$ & $1.0004$\\ \end{tabular} \end{ruledtabular} \end{table} Using Eqs.~(\ref{eqn:twopartition}) we obtain from Eq.~(\ref{eqn:Probability}) \begin{equation}\label{eqn:prob} P_s\approx \frac{n_s e^{-\beta E_s^0} \Omega_s^{-f}}{\sum_{\sigma=1}^{M}n_{\sigma} e^{-\beta E_{\sigma}^0} \Omega_{\sigma}^{-f}}. \end{equation} To avoid computation of the full partition function [denominator of Eq.~(\ref{eqn:prob})] it is advantageous to compute probability ratios of two states $s$ and $s'$ \begin{eqnarray}\label{eqn:ratioprob} \frac{P_s}{P_{s'}}&=&\frac{n_s}{n_{s'}}\left(\frac{\Omega_{s'}}{\Omega_s}\right)^f\left(\frac{\bar{I}_s}{\bar{I}_{s'}}\right)^{3/2}e^{-\beta(E_s^0-E_{s'}^0)}\nonumber\\ &\approx&\frac{n_s}{n_{s'}}\left(\frac{\Omega_{s'}}{\Omega_s}\right)^fe^{-\beta(E_s^0-E_{s'}^0)}. 
\end{eqnarray} Thus the probability ratio of two states depends on three factors: their energy difference $E_s^0-E_{s'}^0$, the ratio of degeneracy factors $n_s/n_{s'}$ and the ratio of mean eigenfrequencies $\Omega_{s'}/\Omega_s$. The Boltzmann factor $e^{-\beta(E_s^0-E_{s'}^0)}$ gives preference to states with a low energy. For low temperatures it will be the dominant factor, but it becomes less important for higher temperatures, when $k_BT\gg |E_s^0-E_{s'}^0|$ and $e^{-\beta(E_s^0-E_{s'}^0)}\approx 1$. According to Eq.~(\ref{eqn:deg}) the degeneracy factor assigns a large statistical weight to states with more particles on inner shells. As an example, for $N=27$, we obtain $n_{(25,2)}/n_{(23,4)}=\frac{23!4!}{25!2!}=1/50$. One can see that the configuration with only 2 particles on the inner shell is suppressed due to a lower degeneracy factor, in contrast to the states with an inner shell consisting of 4 particles, see also Table~\ref{tab:statesN27}. The reason is that there exist more combinatorial possibilities to construct configurations when the difference between the single shell occupation numbers is small. For $N=40$ (Table~\ref{tab:statesN40}) this ratio can be even larger. This shows that (even for low temperatures) this factor can strongly influence the occurrence probabilities. In the MD simulations we observe several states with the same shell configuration but different energies. Their energy difference can be as large as between states with different configurations (cf. Table~\ref{tab:fricstates}). In Eq.~(\ref{eqn:Probability}) all states with the same shell configuration are added with the same degeneracy factor. Let us now consider the effect of the mean eigenfrequency, i.e. the effect of the local curvature of the potential energy surface.
Written out explicitly, using Eq.~(\ref{eqn:ratioprob}), this factor reads \begin{equation} \left(\frac{\Omega_{s'}}{\Omega_s}\right)^f=\frac{\prod_{i=1}^f \omega_{s',i}}{\prod_{i=1}^f \omega_{s,i}}, \end{equation} i.e. it is the inverse ratio of the products of the eigenfrequencies. The main contribution here usually arises from the lowest eigenfrequencies. This can be seen in Fig.~\ref{fig:spectrum} showing the spectrum for the states of the cluster with $N=31,\,\kappa=0.8$. State \#7 has two very low eigenfrequencies~[cf. Fig.~\ref{fig:spectrum}, red arrow] which strongly increase its statistical weight (see also Table~\ref{tab:statesN31}). \begin{figure}[h] {\includegraphics*[width=0.42\textwidth]{eigen2.eps}} {\includegraphics*[width=0.42\textwidth]{eigen.eps}} \caption{Spectrum of the eigenfrequencies for the 9 states shown in Table~\ref{tab:statesN31}. The top figure shows the lowest modes in more detail.}{\label{fig:spectrum}} \end{figure} For two states with the same shell configuration we have $n_s=n_{s'}$, and the probability ratio is only determined by their energy difference and eigenfrequencies. Even though a state has a higher energy it can have a higher probability provided it has a lower mean eigenfrequency. Fig.~\ref{fig:ratio} shows the effect for $N=27$, for the states listed in Table~\ref{tab:statesN27}. The physical explanation of the eigenfrequency factor is very simple: states with low eigenfrequencies have a broad (flat) potential energy minimum and thus a larger phase space volume of attraction for the trajectories of $N$ particles. Thus initially randomly distributed particles will have a higher probability to settle in a minimum with small $\Omega_s$ compared to another minimum (when the energies and degeneracy factors are similar). 
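In practice, the evaluation of Eq.~(\ref{eqn:prob}) is straightforward once $\Delta E_s$, $\tilde{n}_s$ and $\tilde{w}_s$ are known. The following sketch (our illustration, with $k_B=1$ and energies in units of $E_0$) computes the relative probabilities for the five $N=27$ states listed in Table~\ref{tab:statesN27}:

```python
import math

def state_probabilities(dE_per_N, n_rel, w_rel, N, T):
    """P_s proportional to n_s * exp(-beta E_s^0) * Omega_s^{-f}, evaluated
    relative to the ground state via the tabulated weights dE_s/N,
    n~_s = n_s/n_1 and w~_s = (Omega_1/Omega_s)^f, with k_B = 1."""
    q = [n * w * math.exp(-dE * N / T)
         for dE, n, w in zip(dE_per_N, n_rel, w_rel)]
    Z = sum(q)                                 # relative partition function
    return [x / Z for x in q]

# N = 27, kappa = 0.6: states 1-5 of the table (tab:statesN27)
dE = [0.0, 0.001622, 0.001870, 0.004993, 0.004997]   # Delta E_s / N
nr = [1.0, 6.0, 6.0, 3 / 25, 3 / 25]                 # degeneracy ratios n~_s
wr = [1.0, 0.24, 0.67, 14.0, 3.3]                    # eigenfrequency weights w~_s
```

At low temperatures the Boltzmann factor makes the ground state (24,3) dominant, while once $k_BT$ is large compared to the energy differences the products $\tilde{n}_s\tilde{w}_s$ take over and state 3, with configuration (23,4), becomes the most probable one.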
\begin{table}[h] \caption{Energy differences between the metastable states and the ground state (ground-state energy given in italics) that were used to compute the partition function for $N=27$ and $\kappa=0.6$. Also shown is the relative statistical weight $\tilde{n}_s=n_s/n_{1}$ and the statistical weight due to the eigenfrequencies $\tilde{w}_s=\left(\Omega_{1}/\Omega_s\right)^f$ compared to the ground state.\label{tab:statesN27}} \begin{ruledtabular} \begin{tabular}{ccrrr} state $s$& configuration & $\Delta E_s^0/N$ & $\tilde{n}_s$ & $\tilde{w}_s$ \\ \hline 1 & (24,3) & $\textsl{4.732856(4)}$ & $1$ & $1$\\ 2 & (23,4) & $0.001622(1)$ & $6$ & $0.24$\\ 3 & (23,4) & $0.001870(5)$ & $6$ & $0.67$\\ 4 & (25,2) & $0.004993(0)$ & $3/25$ & $14$\\ 5 & (25,2) & $0.004997(3)$ & $3/25$ & $3.3$\\ \end{tabular} \end{ruledtabular} \end{table} Because the harmonic approximation describes only the local neighborhood of a minimum, it may overestimate the weight of states with broad minima and low escape paths~\cite{baletto2005}, which are not captured by this approximation. This could be improved by changing the limits of the position integration in Eq.~(\ref{eqn:classical}) according to the potential barrier height and the temperature. This was done for 2D clusters in~\cite{schweigert1995} but requires knowledge of the barrier heights; such a refinement is not essential for the present analysis. Finally, we note that the value of $\tilde{w}_s$ is sensitive to numerical errors in the computation of the eigenvalues of the Hessians, since the mean eigenfrequency involves a product of $3N-3$ individual values. In the present results we estimate the error not to exceed $5\,\%$, which is sufficient for our analysis. \begin{figure} {\includegraphics*[width=0.45\textwidth]{ratio.eps}} \caption{Probability of the two metastable states with configuration (23,4) compared to the ground state (24,3) for the Yukawa ball with $N=27$.
The inset shows the ratio of the probabilities for states 2 and 3 from Table~\ref{tab:statesN27} at low temperatures. Although state 3 has the same configuration and a higher energy the probability of finding state 3 is higher for $T\ge 0.007$ due to the effect of the eigenfrequencies.}{\label{fig:ratio}} \end{figure} \begin{table} \caption{Same as Table~\ref{tab:statesN27} for $N=31$ and $\kappa=0.8$.\label{tab:statesN31}} \begin{ruledtabular} \begin{tabular}{ccrrr} state $s$& configuration & $\Delta E_s^0/N$ & $\tilde{n}_s$ & $\tilde{w}_s$ \\ \hline 1 & (27,4) & $\textsl{4.397858(8)}$ & $1$ & $1$\\ 2 & (27,4) & $0.000008(7)$ & $1$ & $0.82$\\ 3 & (27,4) & $0.000035(8)$ & $1$ & $1.7$\\ 4 & (26,5) & $0.001810(1)$ & $27/5$ & $0.84$\\ 5 & (26,5) & $0.001850(9)$ & $27/5$ & $1.4$\\ 6 & (26,5) & $0.002000(0)$ & $27/5$ & $5.3$\\ 7 & (26,5) & $0.002091(6)$ & $27/5$ & $9.7$\\ 8 & (25,6) & $0.003583(7)$ & $117/5$ & $1.4$\\ 9 & (25,6) & $0.003586(7)$ & $117/5$ & $1.1$\\ \end{tabular} \end{ruledtabular} \end{table} \begin{table} \caption{Same as Table~\ref{tab:statesN27} for $N=40$ and $\kappa=0$ (Coulomb interaction).\label{tab:statesN40}} \begin{ruledtabular} \begin{tabular}{ccrrr} state $s$& configuration & $\Delta E_s^0/N$ & $\tilde{n}_s$ & $\tilde{w}_s$\\ \hline 1 & (34,6) & $\textsl{12.150162(9)}$ & $1$ & $1$\\ 2 & (33,7) & $0.001143(4)$ & $34/7$ & $2.3$\\ 3 & (33,7) & $0.001190(3)$ & $34/7$ & $2.8$\\ 4 & (33,7) & $0.001236(9)$ & $34/7$ & $8.3$\\ 5 & (32,8) & $0.001862(8)$ & $561/28$ & $13$\\ 6 & (32,8) & $0.001863(1)$ & $561/28$ & $3.5$\\ 7 & (32,8) & $0.003482(4)$ & $561/28$ & $6.7$\\ 8 & (35,5) & $0.004201(7)$ & $6/35$ & $5.2$\\ 9 & (35,5) & $0.004392(7)$ & $6/35$ & $32$\\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure} \centering {\includegraphics*[width=0.42\textwidth]{N27kappa0.6MC.eps}}\vspace{0.13cm} {\includegraphics*[width=0.42\textwidth]{N31kappa0.8MC.eps}}\vspace{0.13cm} {\includegraphics*[width=0.42\textwidth]{N40kappa0.0MC.eps}} \caption{(Color 
online) Analytical theory compared to MC results. The solid lines show the probabilities as obtained from Eq.~(\ref{eqn:Probability}). The dashed lines neglect the statistical weight factor caused by the eigenfrequencies, i.e. here $\Omega_s\equiv 1$ for all states. For $N=27$ the dashed/dotted lines indicate the results of Langevin dynamics simulations. Analytical results for the configuration (22,5) are not available.}{\label{fig:MC_Analyt}} \end{figure} \subsection{{\label{ssec:model_mc}}Analytical results and comparison with Monte Carlo simulations} Let us now come to the results of the analytical model and compare them to the Monte Carlo simulations explained above in Section~\ref{ssec:MC}. The MC results have first-principles character; in particular, they are not restricted to the harmonic approximation and fully include all anharmonic corrections. For $N=27$ we additionally verified the MC results by a Langevin dynamics simulation using the SLO algorithm of~\cite{Langevin}. Here the probabilities were obtained in an equilibrium calculation with a simulation time $t=10^5\,\omega_0^{-1}$ by determining the configurations at fixed time intervals. Results for three representative examples are shown in Fig.~\ref{fig:MC_Analyt}. We chose $N=27,\;\kappa=0.6$ and $N=31,\;\kappa=0.8$ since these will turn out to be close to the situation in the dusty plasma experiments, see Sec.~\ref{sec:comp}. As a third example we present data for $N=40$ with Coulomb interaction. The input parameters of the analytical model, i.e. details on the (metastable) states, are summarized in Tables~\ref{tab:statesN27}-\ref{tab:statesN40}. In Fig.~\ref{fig:MC_Analyt} we plot the occurrence probabilities as a function of temperature. This allows us to specifically study the effect of the depth of the potential energy minimum $E_s$. This effect should be dominant at low temperature, leading to a relatively high probability of the ground state.
In contrast, this effect should become less important at high temperature, where the degeneracy factors and the eigenfrequency ratio should play a decisive role for the probabilities. This general trend is indeed observed in all three cases. For $N=27$, top part of Fig.~\ref{fig:MC_Analyt}, the effects of the degeneracy factor and the mean eigenfrequencies act in opposite directions. While the state with $4$ particles on the inner shell gains statistical weight by having a high degeneracy, this effect is almost compensated by narrow minima and, consequently, a low $\tilde{w}_s$, cf.~Tab.~\ref{tab:statesN27}. Therefore, this state achieves a probability comparable to that of the ground state (24,3) only at high temperature, $T \ge 0.03$ (in the MC simulation this is observed only for $T \ge 0.045$). For the configuration (25,2) the opposite is true. Here, the degeneracy is low and the minima are broad, but due to its high energy this configuration has a nonvanishing probability only at high temperatures, $T \ge 0.03$. We did not find a stable state with configuration (22,5). The situation for $N=31$, central part of Fig.~\ref{fig:MC_Analyt}, is different. Here all metastable states have a higher degeneracy factor than the ground state configuration. In addition, all states except state $s=4$ gain further statistical weight because of broad minima, cf.~Tab.~\ref{tab:statesN31}. Thus one should expect that metastable states have a high probability even at low temperatures. This is indeed observed in the model and the MC simulations already below $T=0.02$. In the third case, $N=40$, bottom part of Fig.~\ref{fig:MC_Analyt}, we generally see the same trend. The metastable state (33,7) has a high degeneracy and frequency factor, cf.~Tab.~\ref{tab:statesN40}, and thus it becomes more probable than the ground state already for $T\ge 0.01$ ($0.015$ in the MC simulations). Let us now compare the analytical and MC results in more detail.
Good agreement is found for $N=31$ up to $T\sim 0.02$, cf. solid lines and symbols. For $N=27$ we find good agreement between MC and the analytical theory for $T<0.012$, but only if the effect of the eigenfrequencies is neglected, cf. dashed lines. With eigenfrequencies included, the theory shows deviations at low temperatures but better agreement at higher temperatures. For the cluster with $40$ particles we observe moderate agreement for the configurations (34,6) and (33,7) up to $T=0.015$, whereas the deviations from MC for the remaining two configurations are rather large. This overall agreement is quite satisfactory, keeping in mind that the melting temperature of these clusters is typically below $T=0.015$ \cite{vova2006,Apolinario07,Jens2008}. These discrepancies are due to the limitations of our simple harmonic model [the good agreement between the completely independent MC and Langevin MD results for $N=27$, cf. top part of Fig.~\ref{fig:MC_Analyt}, confirms the reliability of the simulations]. Since the discrepancies grow with temperature, the main reason is probably the neglect of anharmonic effects. In some cases, when the barriers of the potential energy surface are low, these effects may already appear at low temperatures. Changing the limits of allowed particle motion in the integration of Eq.~(\ref{eqn:classical}) may help to reduce the deviations. A further reason for deviations from the MC results could be that an insufficient number of stationary states was taken into account. It is not clear whether all stationary states were found (they were pre-computed with MD simulations) and used in the partition function. To make the probability of finding all states high, we performed more than $10^4$ independent runs. For example, for the cluster with 40 particles we observe 9 states, but it was difficult to identify the states with 5 particles on the inner shell because they were found only a few times and were energetically close.
The larger number of states given in~\cite{Apolinario} also suggests that we missed a few. Nevertheless, the effect originating from these states should give only a small statistical contribution to the probabilities. \section{Comparison with dusty plasma experiments}\label{sec:comp} To compare with experiments on metastable states in spherical dusty plasma crystals (Yukawa balls) we first need to establish the relation of our system of units to the experimental parameters. We use the temperature unit $k_B T_0=E_0=(\alpha Q^4/2)^{1/3}$ [in SI units $E_0=(\alpha Q^4/32\pi^2\epsilon_0^2)^{1/3}$] which depends on the trap parameter $\alpha=m\omega_0^2$ and the dust charge. Since the charge is not known very accurately the errors could be rather large. With $Z=2000\,e$ and $\alpha=5.2\times 10^{-11}\,\text{kg}\,\text{s}^{-2}$ given in~\cite{block2008}, room temperature ($300\,\text{K}$) corresponds to $T_{room}\approx 0.0015$. Also, the experimental screening parameter is known only approximately. From previous comparisons with simulations \cite{bonitz2006} it is expected to be in the range of $0.5 < \kappa < 1$. Reference~\cite{block2008} reported measurements on the probability of metastable states for two clusters with $N=27$ and $N=31$ which we now use for comparison with the MD and MC simulations and the analytical model. \subsection{{\label{ssec:md_exp}}MD results vs. experiment} We start with the molecular dynamics simulations since they model a situation which is closest to the experiment. In contrast to the experiment which is performed at room temperature, our simulations correspond to a Langevin dynamics simulation at $T=0$ (the system is cooled to almost zero kinetic energy). We have verified the influence of the final temperature by performing additional Langevin simulations for the cluster with 27 particles and $\kappa=0.6$ with temperatures up to $T=0.0035$ (Fig.~\ref{fig:Tfinite}) which is more than twice the experimental temperature. 
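The unit conversions quoted above are easy to check numerically. The sketch below (Python) uses the SI form of $E_0$ given in the text together with the values $Z=2000$, $\alpha=5.2\times 10^{-11}\,\text{kg}\,\text{s}^{-2}$ and $m=3.3\times 10^{-14}\,\text{kg}$ from~\cite{block2008}:

```python
import math

e    = 1.602176634e-19    # elementary charge (C)
eps0 = 8.8541878128e-12   # vacuum permittivity (F/m)
kB   = 1.380649e-23       # Boltzmann constant (J/K)

Z     = 2000              # dust charge number
alpha = 5.2e-11           # trap parameter alpha = m*omega_0^2 (kg/s^2)
m     = 3.3e-14           # dust particle mass (kg)

Q  = Z * e
# energy unit E0 = (alpha Q^4 / (32 pi^2 eps0^2))^(1/3), SI form from the text
E0 = (alpha * Q**4 / (32 * math.pi**2 * eps0**2)) ** (1 / 3)

T_room = kB * 300 / E0         # room temperature in reduced units
omega0 = math.sqrt(alpha / m)  # trap frequency (1/s)
t_end  = 400 / omega0          # 400 omega_0^{-1} in seconds

print(f"T_room = {T_room:.4f}")   # ~0.0015
print(f"t_end  = {t_end:.1f} s")  # ~10 s
```

This reproduces $T_{room}\approx 0.0015$ as well as the correspondence $400\,\omega_0^{-1}\approx 10\,\text{s}$ used for the Langevin simulation time below.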
Apart from a finite temperature the simulations were done in the same way as explained in Section~\ref{sec:model}, but with a predefined simulation time. For high temperatures one has to pay attention to the time after which the configuration is determined since then transitions between states can easily occur. This can be seen in Fig.~\ref{fig:MC_Analyt} where for $T>0.01$ metastable states have a nonvanishing probability. In our Langevin simulations we used a simulation time of $t_{end}=400\,\omega_0^{-1}$, which corresponds to $t_{end}\approx 10\,\text{s}$ for a dust particle mass of $m=3.3\times 10^{-14}\,\text{kg}$. We find no systematic deviation from the results at zero temperature. The slight deviations for the configurations (23,4) and (24,3) are probably due to the insufficiently long simulation time with the same explanation as given at the end of Section~\ref{ssec:screening}. We thus conclude that for the present analysis an MD simulation without fluctuations and cooling towards zero temperature is adequate. \begin{figure} {\includegraphics*[width=0.45\textwidth]{T_finite.eps}} \caption{Langevin dynamics simulation for $N=27$, $\kappa=0.6$ and $\nu=3.2$. Horizontal lines indicate results of Section~\ref{ssec:screening}, Fig.~\ref{fig:kappa}.}{\label{fig:Tfinite}} \end{figure} Our data for comparison with the experimental results are shown in Figs.~\ref{fig:friction} and \ref{fig:kappa}. The friction parameter in the experiments is expected to be in the range $\nu=3\dots 6$ \cite{block2008}. This means the system is overdamped and any value above $\nu=2$ will not change the results significantly, cf. Fig.~\ref{fig:friction}. So in Fig.~\ref{fig:kappa} we used a value of $3.2$. The MD simulations agree well with the experiment in the case of screening parameters in the range $0.6<\kappa<0.8$ (for $N=31$) and $0.4<\kappa<0.6$ ($N=27$), for details cf. Table~\ref{tab:comp}. 
The lower screening parameter in the latter case is a consequence of the lower plasma density in the experiment, compared to the conditions under which the cluster with 31 particles was produced. This was also found in the MD simulations performed in~\cite{block2008}. The present simulations, being much more extensive, confirm these results. We may conclude that this comparison allows one to determine the screening parameter in the experiment. \subsection{{\label{ssec:model_exp}}Analytical and MC results vs. experiment} A comparison of the analytical model and the MC simulations with the experiment is disappointing. From Fig.~\ref{fig:MC_Analyt} it is evident that at room temperature the ground states always have a probability of almost $100\,\%$, which is in striking contrast to the experiment and the MD results. This is not surprising, since the dust constitutes a dissipative system and the clusters are created under nonequilibrium conditions. In contrast, both Monte Carlo and the model are based on the canonical partition function and assume thermodynamic equilibrium. Thus, at first sight, there seems to be no way to explain the experiment with our analytical model or with Monte Carlo methods. However, this is not true. As we will show below, there is a way to apply equilibrium methods to the problem of metastable states. \subsection{{\label{ssec:dynamics}}Time scales of the cluster dynamics} Let us have a closer look at the nonequilibrium dynamics of the cluster during the cooling process. It is particularly interesting to analyze on what time scales the different relaxation processes occur. In a weakly coupled plasma there are three main time scales, e.g. \cite{bonitz-book,semkat99}: first, the buildup of binary correlations, which occurs for times shorter than the correlation time $\tau_{cor}$.
Second, the relaxation of the velocity distribution towards local equilibrium due to collisions, for $\tau_{cor}\le t \le t_{rel}$ (kinetic phase); and third, hydrodynamic relaxation, $t_{rel}\le t\le t_{hyd}$. This behavior has so far not been analyzed for the strongly correlated Yukawa clusters. To get a first insight, the quantities of central interest are the kinetic energy and the velocity distribution function $f(v,t)$ of the cluster particles. These quantities are easily computed in our nonequilibrium MD simulations of the cooling process, as explained in Section~\ref{sec:MDresults}. To obtain the velocity distribution we performed 420 runs with different randomly chosen initial conditions and collected the data for each time step. The results for the kinetic energy evolution and for $f(v_x,t)$ are shown in Fig.~\ref{fig:distribution} (the other velocity components show the same behavior). We observe three main relaxation stages: \begin{enumerate} \item for $t\le 0.5$, a rapid heating is observed, which is due to the acceleration of the particles and the buildup of binary correlations in the initially random (uncorrelated) particle system. This is typical for any rapid change of the interparticle forces, and proceeds on scales of the order of the correlation time, e.g. \cite{bonitz96,haberland01,gericke03}. \item for $0.5\le t\le 1.3$, the kinetic energy increase saturates and cooling starts. This means that correlation buildup is complete and dissipation due to neutral gas friction dominates the behavior. \item for $t>1.3$, the mean kinetic energy decreases approximately exponentially, i.e. $\langle E_{\rm kin}\rangle(t) \propto e^{-2\gamma t}$, where the decay constant is found to be $\gamma\approx 0.65\approx \nu/5$. \end{enumerate} The behavior in the third stage resembles that of a single (Brownian) particle in a dissipative medium, where $\gamma$ is the velocity relaxation rate, corresponding to a relaxation time of $t_{rel}=\gamma^{-1}=1.54$.
In the case of Brownian particles, the velocity distribution rapidly relaxes towards a Maxwellian for $t\le t_{rel}$. The velocity distributions for the present system are shown for four different times in Fig.~\ref{fig:distribution}, parts a)-d). The solid curves indicate the best fit to a Maxwellian; the obtained ``temperatures'' are shown in Fig.~\ref{fig:distribution}~e) by the crosses. The evolution towards a Maxwellian, which is fully established around $t=2.5$, is evident. This allows us to conclude that, after an initial stage (phases 1 and 2), the cluster has reached an equilibrium velocity distribution, and that the subsequent cooling process, ultimately leading to freezing into a spherical Yukawa crystal, is well described by local thermodynamic equilibrium: the time-dependent velocity distribution is given by $f(v,t)\sim \exp\{-\frac{mv^2}{2k_BT(t)}\}$ with $k_B T(t)=2\langle E_{kin}\rangle(t)/3$. Thus, the system evolves through a sequence of equilibrium states that differ only in their temperature. \subsection{{\label{ssec:lte}}Application of Equilibrium Theories to the probability of metastable states of Yukawa balls} Based on the results of Subsection~\ref{ssec:dynamics}, we expect that equilibrium methods such as Monte Carlo or our analytical model are applicable to the third relaxation stage. In doing so, one has to use the equilibrium result for the current temperature $T(t)$. Using temperature-dependent results such as those in Fig.~\ref{fig:MC_Analyt} allows one to reconstruct the time dependence of various quantities from the known dynamics of the kinetic energy: $T(t)=T(t_{rel}) e^{-2\gamma (t-t_{rel})}$. \begin{figure} {\includegraphics*[width=0.48\textwidth]{distribution.eps}} \caption{a) - d) Velocity distribution function $f(v_x,t)$ for different times [as indicated in e)] for $N=27$, $\kappa=0.6$, $\nu=3.2$. Solid lines show the best Maxwellian fit. e) Averaged kinetic energy as a function of time.
Crosses denote the averaged kinetic energy obtained from the best fit using the equipartition theorem.}{\label{fig:distribution}} \end{figure} Now, the key point is that this local (time-dependent) Maxwellian is established long before the particles start to feel the potential energy $U$ of the trap and of the pair interaction. For example, at $t\approx t_{rel}$, the temperature is around $0.15$, which is about a factor of $100$ higher than room temperature and one order of magnitude higher than the freezing point. In the case of very rapid cooling beyond the freezing point, the particles will settle (with a certain probability) in the stationary state ``s'' and will not have time to escape from it, since further cooling removes the necessary kinetic energy (i.e. the escape probability will be low). This means that the decision about what stationary state the system will reach is made at a time when the system temperature is close to the melting temperature. Using this idea we compute the probability of metastable states from Monte Carlo for two temperatures, $T=0.02$ and $T=0.04$, cf. Fig.~\ref{fig:MC_Analyt} (at the higher temperature, due to intershell transitions, shell configurations can be identified only with an error of about $8\%$). We also calculate the probability at $T=0.02$ within the analytical model. Finally, we consider the high-temperature limit, which is obtained by neglecting the Boltzmann factor in the probability ratios. The corresponding results are presented in Table~\ref{tab:comp}. The overall agreement with the experiment is much better than at room temperature, which confirms the correctness of the above arguments. Evidently, the Boltzmann factor is crucial and cannot be neglected, cf. the last lines in Table~\ref{tab:comp}. The best results are observed for temperatures around $T=0.04$, which is about two to three times the melting temperature, where the system is in the moderately coupled liquid state.
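The analytical-model entries of Table~\ref{tab:comp} for $N=27$ can be reproduced directly from the data of Table~\ref{tab:statesN27}: each stationary state enters the partition function with weight $\tilde{n}_s\,\tilde{w}_s\,e^{-\Delta E_s/T}$ (with $\Delta E_s = N\cdot\Delta E_s/N$ in reduced units), and the weights of states with the same shell configuration are summed. A sketch (Python):

```python
import math
from collections import defaultdict

# (configuration, Delta E_s/N, n_s/n_1, w_s) for N = 27, kappa = 0.6,
# taken from Table tab:statesN27 (ground state: Delta E = 0)
STATES = [
    ("(24,3)", 0.0,      1.0,  1.0),
    ("(23,4)", 0.001622, 6.0,  0.24),
    ("(23,4)", 0.001870, 6.0,  0.67),
    ("(25,2)", 0.004993, 0.12, 14.0),
    ("(25,2)", 0.004997, 0.12, 3.3),
]

def probabilities(N, T):
    """Occurrence probability per shell configuration;
    T = math.inf drops the Boltzmann factor."""
    p = defaultdict(float)
    for conf, dE, n, w in STATES:
        boltz = 1.0 if math.isinf(T) else math.exp(-N * dE / T)
        p[conf] += n * w * boltz
    Z = sum(p.values())
    return {c: v / Z for c, v in p.items()}

# reproduces the AM(0.02) and AM(infinity) rows of Table tab:comp
print({c: round(v, 2) for c, v in probabilities(27, 0.02).items()})
print({c: round(v, 2) for c, v in probabilities(27, math.inf).items()})
```

Within rounding, this yields $0.67/0.33/0.00$ at $T=0.02$ and $0.12/0.64/0.24$ in the high-temperature limit, in agreement with the AM rows of Table~\ref{tab:comp}.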
This shows that it is indeed possible to predict, at least qualitatively, the probabilities of metastable states in dissipative nonequilibrium Yukawa crystals within equilibrium models and simulations. This is possible in the overdamped limit as is the case in dusty plasmas. \begin{table} \caption{Comparison of experimental results for $N=27$ and $N=31$ with MD and MC simulations (MC results are for the two temperatures $T=0.02$ and $T=0.04$). Also shown are the results of the analytical model (``AM'') for $T=0.02$ and with the Boltzmann factor being neglected ($T\rightarrow \infty$). For $N=27$ ($N=31$) the simulation results are shown for $\kappa=0.6$ ($\kappa=0.8$).} \begin{ruledtabular}\label{tab:comp} \begin{tabular}{cllll} $N=27$ & $P(24,3)$ & $P(23,4)$ & $P(25,2)$\\ \hline Experiment & $0.46\pm 0.14$ & $0.46\pm 0.14$ & $0.08\pm 0.06$\\ MD & $0.46$ & $0.53$ & $0.01$\\ MC(0.02) & $0.56$ & $0.43$ & $0.01$\\ MC(0.04) & $0.43$ & $0.45$ & $0.04$\\ AM(0.02) & $0.67$ & $0.33$ & $0.00$\\ AM($\infty$) & $0.12$ & $0.64$ & $0.24$\\ \hline \hline $N=31$ & $P(27,4)$ & $P(26,5)$ & $P(25,6)$\\ \hline Experiment & $0.35\pm 0.10$ & $0.62\pm 0.13$ & $0.03\pm 0.03$\\ MD & $0.30$ & $0.59$ & $0.11$\\ MC(0.02) & $0.40$ & $0.55$ & $0.04$\\ MC(0.04) & $0.33$ & $0.50$ & $0.14$\\ AM(0.02) & $0.44$ & $0.53$ & $0.03$\\ AM($\infty$) & $0.02$ & $0.60$ & $0.38$\\ \end{tabular} \end{ruledtabular} \end{table} \section{Discussion} In summary we have presented simulation results for Yukawa balls with three different numbers of particles and a broad range of screening parameters and damping coefficients. It was shown by extensive molecular dynamics and Langevin dynamics simulations that the cooling speed (damping coefficient) strongly affects the occurrence probabilities of metastable states even if the interaction and the confinement remain the same. 
This is similar to the liquid-solid transition in macroscopic systems, where rapid cooling may give rise to a glass-like disordered solid rather than a crystal with lower total energy. The same scenario is also observed in the present finite crystals. While slow cooling leads predominantly to the lowest energy state, strong damping gives rise to an increased probability of metastable states. These states may have a probability up to five times higher than that of the ground state, which is fully consistent with the recent observation of metastable states in dusty plasma experiments~\cite{block2008}. These metastable states are not an artefact of an imperfect experiment or of fluctuations of experimental parameters, but are an intrinsic property of finite Yukawa balls. Furthermore, we showed that screening strongly alters the results compared to the Coulomb interaction. Generally, increased screening leads to a higher probability of states with more particles on inner shells due to the shorter interaction range. An analytical theory for the ground state density profile of a confined one-component Yukawa plasma~\cite{Christian2006, Christian2007} also showed that decreasing the screening length (increasing $\kappa$) leads to a higher particle density in the center of the trap, which would correspond to a higher population of inner shells in our case. We presented an analytical model based on the canonical partition function and the harmonic approximation for the total potential energy. This model allowed for a physically intuitive explanation of the observed high probabilities of metastable configurations. The Boltzmann factor (which always favors the ground state relative to higher-lying states) competes with two factors that favor metastable states: the degeneracy factor [favoring states with more particles on the inner shell(s)] and the local curvature of the potential minimum.
Low curvature (low eigenfrequency) corresponds to a broad minimum and a large phase space volume of attraction for the particles. Among all normal modes the dominant effect is due to the energetically lowest modes. The thermodynamic results from Monte Carlo simulations and the analytical theory are in reasonable agreement with each other at low temperatures, as expected. At higher temperatures anharmonic effects, such as finite barrier heights, become equally important. It was shown that in thermodynamic equilibrium the abundances of metastable states are much lower than those observed in the dusty plasma experiments at the same temperature. The reason is that, in equilibrium, the particles are given infinitely long times to escape a local potential minimum, and they will always visit the ground state more frequently than any metastable state. In contrast, in the limit of strong damping the particles are trapped in the first minimum they reach. Thus the decision about the final stationary state is made early during the cooling process, when the temperature is of the order of two to three times the melting temperature. Therefore, equilibrium theories without dissipation may be successfully applied to strongly correlated and strongly damped nonequilibrium systems. A systematic derivation from a time-dependent theory is still lacking and will be presented in a forthcoming paper. \begin{acknowledgements} We acknowledge stimulating discussions with J.W. Dufty. This work is supported by the Deutsche Forschungsgemeinschaft via SFB-TR 24 and by the U.S. Department of Energy award DE-FG02-07ER54946. \end{acknowledgements}
\section{Motivation} Formal definitions are postponed until Section~\ref{sec:basics}. \smallskip The first comprehensive paper on state complexity was published by A. N. Maslov~\cite{Mas70} in 1970, but this work was unknown in the West for many years. Maslov wrote: \begin{quote} {\it An important measure of the complexity of [sets of words representable in finite automata] is the number of states in the minimal representing automaton. ... if $T(A) \cup T(B)$ are representable in automata $A$ and $B$ with $m$ and $n$ states respectively ..., then: \begin{enumerate} \item $T(A) \cup T(B)$ is representable in an automaton with $m\cdot n$ states; \item $T(A).T(B)$ is representable in an automaton with $(m-1)2^n + 2^{n-1}$ states. \end{enumerate}} \end{quote} The second comprehensive paper on state complexity was published by S. Yu, Q. Zhuang and K. Salomaa~\cite{YZS94} in 1994. Here the authors wrote: \begin{quote}{\it \begin{enumerate} \item ... for any pair of complete $m$-state DFA $A$ and $n$-state DFA $B$ defined on the same alphabet $\Sigma$, there exists a DFA with at most $m2^n-2^{n-1}$ states which accepts $L(A)L(B)$. \item ... $m\cdot n$ states are ... sufficient for a DFA to accept the intersection (union) of an $m$-state DFA language and an $n$-state DFA language. \end{enumerate}} \end{quote} Here DFA stands for \emph{deterministic finite automaton}, and \emph{complete} means that there is a transition from every state under every input letter. I will show that statements 1 and 2 of Maslov are incorrect, but undoubtedly Maslov had in mind languages over the same alphabet, in which case the statements are correct. In~\cite{YZS94} the first statement includes the same-alphabet restriction, but the second omits it (presumably it is implied by the context). 
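Incidentally, the two product bounds quoted above coincide, since $(m-1)2^n + 2^{n-1} = m2^n - 2^n + 2^{n-1} = m2^n - 2^{n-1}$. A one-line sanity check:

```python
# Maslov's product bound and the bound of Yu, Zhuang and Salomaa agree
# for all m, n: (m-1)*2^n + 2^(n-1) == m*2^n - 2^(n-1).
for m in range(1, 100):
    for n in range(1, 100):
        assert (m - 1) * 2**n + 2**(n - 1) == m * 2**n - 2**(n - 1)
print("bounds agree")
```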
The same-alphabet restriction is unnecessary: There is no reason why we should not be able to find, for example, the union of languages $L'_2=\{a,b\}^*b$ and $L_2=\{a,c\}^*c$ accepted by the minimal complete two-state automata ${\mathcal D}'_2$ and ${\mathcal D}_2$ of Figure~\ref{fig:example}, where an incoming arrow denotes the initial state and a double circle represents a final state. \begin{figure}[ht] \unitlength 8.5pt \begin{center}\begin{picture}(30,4)(0,6) \gasset{Nh=2,Nw=2,Nmr=1.25,ELdist=0.4,loopdiam=1.5} \node(0')(1,7){$0'$}\imark(0') \node(1')(8,7){$1'$}\rmark(1') \node(0)(22,7){0}\imark(0) \node(1)(29,7){1}\rmark(1) \drawloop(0'){$a$} \drawloop(1'){$b$} \drawedge[curvedepth= .8,ELdist=.4](0',1'){$b$} \drawedge[curvedepth= .8,ELdist=.4](1',0'){$a$} \drawloop(0){$a$} \drawloop(1){$c$} \drawedge[curvedepth= .8,ELdist=.4](0,1){$c$} \drawedge[curvedepth= .8,ELdist=.4](1,0){$a$} \end{picture}\end{center} \caption{Two minimal complete DFAs ${\mathcal D}'_2$ and ${\mathcal D}_2$.} \label{fig:example} \end{figure} The union of $L'_2$ and $L_2$ is a language over three letters. To find the DFA for $L'_2 \cup L_2$, we view ${\mathcal D}'_2$ and ${\mathcal D}_2$ as incomplete DFA's, the first missing all transitions under $c$, and the second under $b$. After adding the missing transitions we obtain DFAs ${\mathcal D}'_3$ and ${\mathcal D}_3$ shown in Figure~\ref{fig:complete}. Now we can proceed as is usually done in the same-alphabet approach, and take the direct product of ${\mathcal D}'_3$ and ${\mathcal D}_3$ to find $L_2'\cup L_2$. Here it turns out that six states are necessary to represent $L'_2\cup L_2$, but the state complexity of union is actually $(m+1)(n+1)$. 
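The completion-plus-product construction just described is mechanical and easily automated. The following sketch (Python; state and helper names are illustrative, not from any standard library) completes ${\mathcal D}'_2$ and ${\mathcal D}_2$ with a sink state over the union alphabet $\{a,b,c\}$ and builds the reachable part of the direct product:

```python
# Union of L'_2 = {a,b}*b and L_2 = {a,c}*c over different alphabets:
# complete each DFA with a sink over the union alphabet, then take the
# reachable part of the direct product.

def complete(delta, states, sigma, sink):
    """Route every missing transition of an incomplete DFA into a sink."""
    d = dict(delta)
    for q in list(states) + [sink]:
        for a in sigma:
            d.setdefault((q, a), sink)
    return d

# transitions of D'_2 and D_2 (states named as in the figures)
d1 = {("0'", 'a'): "0'", ("0'", 'b'): "1'",
      ("1'", 'a'): "0'", ("1'", 'b'): "1'"}
d2 = {('0', 'a'): '0', ('0', 'c'): '1',
      ('1', 'a'): '0', ('1', 'c'): '1'}

sigma = {'a', 'b', 'c'}                      # union of the two alphabets
d1 = complete(d1, ["0'", "1'"], sigma, "2'")  # yields D'_3
d2 = complete(d2, ['0', '1'], sigma, '2')     # yields D_3

reach, todo = set(), [("0'", '0')]           # search from the initial pair
while todo:
    p = todo.pop()
    if p in reach:
        continue
    reach.add(p)
    for a in sigma:
        todo.append((d1[(p[0], a)], d2[(p[1], a)]))

# union: a pair is final if either component is final
final = {p for p in reach if p[0] == "1'" or p[1] == '1'}
print(len(reach), len(final))   # 6 reachable product states, 2 final
```

The six reachable pairs turn out to be pairwise inequivalent, so the minimal DFA for $L'_2\cup L_2$ indeed has six states, as claimed above.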
\begin{figure}[ht] \unitlength 8.5pt \begin{center}\begin{picture}(30,9)(0,1.6) \gasset{Nh=2,Nw=2,Nmr=1.25,ELdist=0.4,loopdiam=1.5} \node(0')(1,7){$0'$}\imark(0') \node(1')(8,7){$1'$}\rmark(1') \node(2')(4.5,3){$2'$} \node(0)(22,7){0}\imark(0) \node(1)(29,7){1}\rmark(1) \node(2)(25.5,3){2} \drawloop(0'){$a$} \drawloop(1'){$b$} \drawedge[curvedepth= .8,ELdist=.4](0',1'){$b$} \drawedge[curvedepth= .8,ELdist=.4](1',0'){$a$} \drawloop[loopangle=270,ELpos=25](2'){$a,b,c$} \drawloop(0){$a$} \drawloop(1){$c$} \drawedge[ELdist=-1.1](0',2'){$c$} \drawedge[ELdist=.3](1',2'){$c$} \drawedge[curvedepth= .8,ELdist=.4](0,1){$c$} \drawedge[curvedepth= .8,ELdist=.4](1,0){$a$} \drawedge[ELdist=-1.1](0,2){$b$} \drawedge[ELdist=.3](1,2){$b$} \drawloop[loopangle=270,ELpos=25](2){$a,b,c$} \end{picture}\end{center} \caption{DFAs ${\mathcal D}'_3$ and ${\mathcal D}_3$ over three letters.} \label{fig:complete} \end{figure} In general, when calculating the result of a binary operation on regular languages with different alphabets, we deal with special incomplete DFAs that are only missing some letters and all the transitions caused by these letters. The complexity of incomplete DFAs has been studied previously by Gao, K. Salomaa, and Yu~\cite{GSY11} and by Maia, Moreira and Reis~\cite{MMR15}. However, the objects studied there are \emph{arbitrary} incomplete DFAs, whereas we are interested only in \emph{complete DFAs with some missing letters}. Secondly, we study \emph{state} complexity, whereas the above-mentioned papers deal mainly with \emph{transition} complexity. Nevertheless, there is some overlap. It was shown in~\cite[Corollary 3.2]{GSY11} that the incomplete state complexity of union is less than or equal to $mn+m+n$, and that this bound is tight in some special cases. In~\cite[Theorem 2]{MMR15}, witnesses that work in all cases were found. These complexities correspond to my result for union in Theorem~\ref{thm:boolean}. 
Also in~\cite[Theorem 5]{MMR15}, the incomplete state complexity of product is shown to be $m2^n+2^{n-1}-1$, and this corresponds to my result for product in Theorem~\ref{thm:product}. In this paper I remove the restriction of equal alphabets of the two operands. I prove that the complexity of union and symmetric difference is $mn+m+n+1$, that of intersection is $mn$, that of difference is $mn+m$, and that of the product is $m2^n+2^{n-1}$, if each language's own alphabet is used. I exhibit a new most complex regular language that meets the complexity bounds for boolean operations, product, star, and reversal, has a maximal syntactic semigroup and most complex atoms. All the witnesses used here are derived from that one most complex language. \section{Terminology and Notation} \label{sec:basics} We say that the \emph{alphabet of a regular language} $L$ is $\Sigma$ (or that \emph{$L$ is a language over $\Sigma$}) if $L\subseteq \Sigma^*$ and for each letter $a\in \Sigma$ there is a word $uav$ in $L$. A basic complexity measure of $L$ with alphabet $\Sigma$ is the number $n$ of distinct (left) quotients of $L$ by words in $\Sigma^*$, where a \emph{(left) quotient} of $L$ by a word $w\in\Sigma^*$ is $w^{-1}L=\{x\mid wx\in L\}$. The number of quotients of $L$ is its \emph{quotient complexity}~\cite{Brz10a}, $\kappa(L)$. A concept equivalent to quotient complexity is the \emph{state complexity}~\cite{YZS94} of $L$, which is the number of states in a complete minimal deterministic finite automaton (DFA) recognizing $L$. Since we do not use any other measures of complexity in this paper (with the exception of one mention of time and space complexity in the next paragraph), we refer to quotient/state complexity simply as \emph{complexity}. Let $L'_m\subseteq\Sigma'^*$ and $L_n\subseteq \Sigma^*$ be regular languages of complexities $m$ and $n$, respectively. 
The \emph{complexity of a binary operation} $\circ$ on $L'_m$ and $L_n$ is the maximal value of $\kappa(L'_m \circ L_n)$ as a function $f(m,n)$, as $L'_m$ and $L_n$ range over all regular languages of complexity $m$ and $n$, respectively. The complexity of an operation gives a worst-case lower bound on the time and space complexity of the operation. For this reason it has been studied extensively; see~\cite{Brz10a,Brz13,Yu01,YZS94} for additional references. A \emph{deterministic finite automaton (DFA)} is a quintuple ${\mathcal D}=(Q, \Sigma, \delta, q_0,F)$, where $Q$ is a finite non-empty set of \emph{states}, $\Sigma$ is a finite non-empty \emph{alphabet}, $\delta\colon Q\times \Sigma\to Q$ is the \emph{transition function}, $q_0\in Q$ is the \emph{initial} state, and $F\subseteq Q$ is the set of \emph{final} states. We extend $\delta$ to a function $\delta\colon Q\times \Sigma^*\to Q$ as usual. A~DFA ${\mathcal D}$ \emph{accepts} a word $w \in \Sigma^*$ if ${\delta}(q_0,w)\in F$. The language accepted by ${\mathcal D}$ is denoted by $L({\mathcal D})$. If $q$ is a state of ${\mathcal D}$, then the language $L^q$ of $q$ is the language accepted by the DFA $(Q,\Sigma,\delta,q,F)$. A state is \emph{empty} (or \emph{dead} or a \emph{sink state}) if its language is empty. Two states $p$ and $q$ of ${\mathcal D}$ are \emph{equivalent} if $L^p = L^q$. A state $q$ is \emph{reachable} if there exists $w\in\Sigma^*$ such that $\delta(q_0,w)=q$. A DFA is \emph{minimal} if all of its states are reachable and no two states are equivalent. Usually DFAs are used to establish upper bounds on the complexity of operations, and also as witnesses that meet these bounds. If $\delta(q,a)=p$ for a state $q\in Q$ and a letter $a\in \Sigma$, we say there is a \emph{transition} under $a$ from $q$ to $p$ in ${\mathcal D}$. The DFAs defined above are \emph{complete} in the sense that there is \emph{exactly one} transition for each state $q\in Q$ and each letter $a\in \Sigma$. 
If there is \emph{at most one transition} for each state of $Q$ and letter of $\Sigma$, the automaton is an \emph{incomplete} DFA. A \emph{nondeterministic finite automaton (NFA)} is a 5-tuple ${\mathcal D}=(Q, \Sigma, \delta, I,F)$, where $Q$, $\Sigma$ and $F$ are defined as in a DFA, $\delta\colon Q\times \Sigma\to 2^Q$ is the \emph{transition function}, and $I\subseteq Q$ is the \emph{set of initial states}. An \emph{$\varepsilon$-NFA} is an NFA in which transitions under the empty word $\varepsilon$ are also permitted. To simplify the notation, without loss of generality we use $Q_n=\{0,\dots,n-1\}$ as the set of states of every DFA with $n$ states. A \emph{transformation} of $Q_n$ is a mapping $t\colon Q_n\to Q_n$. The \emph{image} of $q\in Q_n$ under $t$ is denoted by $qt$. For $k\ge 2$, a transformation (permutation) $t$ of a set $P=\{q_0,q_1,\ldots,q_{k-1}\} \subseteq Q$ is a \emph{$k$-cycle} if $q_0t=q_1, q_1t=q_2,\ldots,q_{k-2}t=q_{k-1},q_{k-1}t=q_0$. This $k$-cycle is denoted by $(q_0,q_1,\ldots,q_{k-1})$, and acts as the identity on the states in $Q_n\setminus P$. A~2-cycle $(q_0,q_1)$ is called a \emph{transposition}. A transformation that changes only one state $p$ to a state $q\neq p$ and acts as the identity for the other states is denoted by $(p\to q)$. The identity transformation is denoted by ${\mathbbm1}$. In any DFA, each $a\in \Sigma$ induces a transformation $\delta_a$ of the set $Q_n$ defined by $q\delta_a=\delta(q,a)$; we denote this by $a\colon \delta_a$. For example, when defining the transition function of a DFA, we write $a\colon (0,1)$ to mean that $\delta(q,a)=q(0,1)$, where the transformation $(0,1)$ acts on state $q$ as follows: if $q$ is 0 it maps it to 1, if $q$ is 1 it maps it to 0, and it acts as the identity on the remaining states. By a slight abuse of notation we use the letter $a$ to denote the transformation it induces; thus we write $qa$ instead of $q\delta_a$. 
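As an illustration (not part of the original text; the helper names are ours), the transformation notation above can be implemented directly: a transformation of $Q_n$ is a tuple of images, and composition applies $s$ first, then $t$.

```python
# Transformations of Q_n = {0,...,n-1} as tuples of images; the image
# of q under t is t[q], matching the notation qt of the text.
n = 4
Q = range(n)

def cycle(*states):        # the k-cycle (q0,...,q_{k-1}); identity elsewhere
    t = list(Q)
    for i, q in enumerate(states):
        t[q] = states[(i + 1) % len(states)]
    return tuple(t)

def send(p, q):            # the transformation (p -> q)
    t = list(Q)
    t[p] = q
    return tuple(t)

identity = tuple(Q)        # the identity transformation

def compose(s, t):         # s * t, defined by q(s*t) = (qs)t
    return tuple(t[s[q]] for q in Q)

a = cycle(0, 1, 2, 3)      # a: (0,1,2,3)
b = cycle(0, 1)            # b: (0,1), a transposition
print(compose(a, b))       # (0, 2, 3, 1): 0 -> 1 -> 0, 1 -> 1 -> 2, etc.
```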
We extend the notation to sets of states: if $P\subseteq Q_n$, then $Pa=\{pa\mid p\in P\}$. We also find it convenient to write $P\stackrel{a}{\longrightarrow} Pa$ to indicate that the image of $P$ under $a$ is $Pa$. If $s,t$ are transformations of $Q$, their composition is denoted by $s\ast t$ and defined by $q(s \ast t)=(qs)t$; the $\ast$ is usually omitted. Let ${\mathcal T}_{Q_n}$ be the set of all $n^n$ transformations of $Q_n$; then ${\mathcal T}_{Q_n}$ is a monoid under composition. A sequence $(L_n, n\ge k)=(L_k,L_{k+1},\dots)$ of regular languages is called a \emph{stream}; here $k$ is usually some small integer, and the languages in the stream usually have the same form and differ only in the parameter $n$. For example, $(\{a,b\}^*a^n\{a,b\}^* \mid n\ge 2)$ is a stream. To find the complexity of a binary operation $\circ$ we need to find an upper bound on this complexity and two streams $(L'_m, m \ge h)$ and $(L_n, n\ge k)$ of languages meeting this bound. In general, the two streams are different, but there are many examples where $L'_n$ ``differs only slightly'' from $L_n$; such a language $L'_n$ is called a \emph{dialect}~\cite{Brz13} of $L_n$. Let $\Sigma=\{a_1,\dots,a_k\}$ be an alphabet; we assume that its elements are ordered as shown. Let $\pi$ be a \emph{partial permutation} of $\Sigma$, that is, a partial function $\pi \colon \Sigma \rightarrow \Gamma$ where $\Gamma \subseteq \Sigma$, for which there exists $\Delta \subseteq \Sigma$ such that $\pi$ is bijective when restricted to $\Delta$ and undefined on $\Sigma \setminus \Delta$. We denote undefined values of $\pi$ by ``$-$'', that is, we write $\pi(a)=-$ if $\pi$ is undefined at $a$. If $L\subseteq \Sigma^*$, we denote it by $L(a_1,\dots,a_k)$ to stress its dependence on $\Sigma$.
If $\pi$ is a partial permutation, let $s_\pi(L(a_1,\dots,a_k))$ be the language obtained from $L(a_1,\dots,a_k)$ by the substitution $s_\pi$ defined as follows: for $a\in \Sigma$, $a \mapsto \{\pi(a)\}$ if $\pi(a)$ is defined, and $a \mapsto \emptyset$ otherwise. The \emph{permutational dialect}, or simply \emph{dialect}, of $L(a_1,\dots,a_k)$ defined by $\pi$ is the language $L(\pi(a_1),\dotsc,\pi(a_k))= s_\pi(L(a_1,\dots,a_k))$. Similarly, let ${\mathcal D} = (Q,\Sigma,\delta,q_0,F)$ be a DFA; we denote it by ${\mathcal D}(a_1,\dots,a_k)$ to stress its dependence on $\Sigma$. If $\pi$ is a partial permutation, then the \emph{permutational dialect}, or simply \emph{dialect}, ${\mathcal D}(\pi(a_1),\dotsc,\pi(a_k))$ of ${\mathcal D}(a_1,\dots,a_k)$ is obtained by changing the alphabet of ${\mathcal D}$ from $\Sigma$ to $\pi(\Sigma)$, and modifying $\delta$ so that in the modified DFA $\pi(a_i)$ induces the transformation induced by $a_i$ in the original DFA. One verifies that if the language $L(a_1,\dots,a_k)$ is accepted by DFA ${\mathcal D}(a_1,\dots,a_k)$, then $L(\pi(a_1),\dotsc,\pi(a_k))$ is accepted by ${\mathcal D}(\pi(a_1),\dotsc,\pi(a_k))$. If the letters for which $\pi$ is undefined are at the end of the alphabet $\Sigma$, then they are omitted. For example, if $\Sigma=\{a,b,c,d\}$ and $\pi(a)=b$, $\pi(b)=a$, and $\pi(c)=\pi(d)=-$, then we write $L_n(b,a)$ for $L_n(b,a,-,-)$, etc. \medskip \section{Boolean Operations} A binary boolean operation is \emph{proper} if it is not a constant and does not depend on only one variable. We study the complexities of four proper boolean operations only: union ($\cup$), symmetric difference ($\oplus$), difference ($\setminus$), and intersection ($\cap$); the complexity of any other proper operation can be deduced from these four. 
For example, $\kappa(\overline{L'} \cup L)=\kappa\left( \overline{ \overline{L'}\cup L}\right)=\kappa (L'\cap \overline{L})=\kappa(L'\setminus L)$, where we have used the well-known fact that $\kappa(\overline{L})=\kappa(L)$, for any $L$. The DFA of Definition~\ref{def:regular} is required for the next theorem; this DFA is the 4-input ``universal witness'' called ${\mathcal U}_n(a,b,c,d)$ in~\cite{Brz13}. \begin{definition} \label{def:regular} For $n\ge 3$, let ${\mathcal D}_n={\mathcal D}_n(a,b,c,d)=(Q_n,\Sigma,\delta_n, 0, \{n-1\})$, where $\Sigma=\{a,b,c,d\}$, and $\delta_n$ is defined by the transformations $a\colon (0,\dots,n-1)$, $b\colon(0,1)$, $c\colon(n-1 \rightarrow 0)$, and $d\colon {\mathbbm1}$. Let $L_n=L_n(a,b,c,d)$ be the language accepted by~${\mathcal D}_n$. The structure of ${\mathcal D}_n(a,b,c,d)$ is shown in Figure~\ref{fig:RegWit}. \end{definition} \begin{figure}[ht] \unitlength 8.5pt \begin{center}\begin{picture}(37,10)(0,2) \gasset{Nh=1.8,Nw=3.5,Nmr=1.25,ELdist=0.4,loopdiam=1.5} {\scriptsize \node(0)(1,7){0}\imark(0) \node(1)(8,7){1} \node(2)(15,7){2} } \node[Nframe=n](3dots)(22,7){$\dots$} {\scriptsize \node(n-2)(29,7){$n-2$} } {\scriptsize \node(n-1)(36,7){$n-1$}\rmark(n-1) } \drawloop(0){$c,d$} \drawedge[curvedepth= .8,ELdist=.1](0,1){$a,b$} \drawedge[curvedepth= .8,ELdist=-1.2](1,0){$b$} \drawedge(1,2){$a$} \drawloop(2){$b,c,d$} \drawedge(2,3dots){$a$} \drawedge(3dots,n-2){$a$} \drawloop(n-2){$b,c,d$} \drawedge(n-2,n-1){$a$} \drawedge[curvedepth= 4.0,ELdist=-1.0](n-1,0){$a,c$} \drawloop(n-1){$b,d$} \drawloop(1){$c,d$} \end{picture}\end{center} \caption{ DFA of Definition~\ref{def:regular}.} \label{fig:RegWit} \end{figure} \begin{theorem} \label{thm:boolean} For $m,n \ge 3$, let $L'_m$ (respectively, $L_n$) be a regular language with $m$ (respectively, $n$) quotients over an alphabet $\Sigma'$, (respectively, $\Sigma$). 
Then the complexity of union and symmetric difference is $mn+m+n+1$ and this bound is met by $L'_m(a,b,-,c)$ and $L_n(b,a,-,d)$; the complexity of difference is $mn+m$, and this bound is met by $L'_m(a,b,-,c)$ and $L_n(b,a)$; the complexity of intersection is $mn$ and this bound is met by $L'_m(a,b)$ and $L_n(b,a)$. \end{theorem} \begin{proof} Let ${\mathcal D}'_m = ( Q'_m, \Sigma', \delta', 0',F')$ and ${\mathcal D}_n = (Q_n, \Sigma,\delta, 0, F)$ be minimal DFAs for $L'_m$ and $L_n$, respectively. To calculate an upper bound for the boolean operations assume that $\Sigma'\setminus \Sigma $ and $\Sigma \setminus \Sigma' $ are non-empty; this assumption results in the largest upper bound. We add an empty state $\emptyset'$ to ${\mathcal D}'_m$ to send all transitions under the letters from $\Sigma \setminus \Sigma' $ to that state; thus we get an $(m+1)$-state DFA ${\mathcal D}'_{m,\emptyset'}$. Similarly, we add an empty state $\emptyset$ to ${\mathcal D}_n$ to get ${\mathcal D}_{n,\emptyset}$. Now we have two DFAs over the same alphabet, and an ordinary problem of finding an upper bound for the boolean operations on two languages over the same alphabet, \emph{except that these languages both have empty quotients}. It is clear that $(m+1)(n+1)$ is an upper bound for all four operations, but it can be improved for difference and intersection. Consider the direct product ${\mathcal P}_{m,n}$ of ${\mathcal D}'_{m,\emptyset'}$ and ${\mathcal D}_{n,\emptyset}$. For difference, all $n+1$ states of ${\mathcal P}_{m,n}$ that have the form $(\emptyset', q)$, where $q\in Q_n\cup \{\emptyset\}$, are empty. Hence the bound can be reduced by $n$ states to $mn+m+1$. However, the empty states can only be reached by words containing letters of $\Sigma\setminus \Sigma'$ and the alphabet of $L'_m\setminus L_n$ is a subset of $\Sigma'$; hence the bound is reduced further to $mn+m$.
For intersection, all $n$ states $(\emptyset',q)$, $q\in Q_n$, and all $m$ states $(p',\emptyset)$, $p'\in Q'_m$, are equivalent to the empty state $(\emptyset',\emptyset)$, thus reducing the upper bound to $mn+1$. Since the alphabet of $L'_m\cap L_n$ is a subset of $\Sigma'\cap \Sigma$, these empty states cannot be reached and the bound is reduced to $mn$. To prove that the bounds are tight, we start with ${\mathcal D}_n(a,b,c,d)$ of Definition~\ref{def:regular}. For $m,n\ge 3$, let ${\mathcal D}'_m(a,b,-,c)$ be the dialect of ${\mathcal D}'_m(a,b,c,d)$ where $c$ plays the role of $d$ and the alphabet is restricted to $\{a,b,c\}$, and let ${\mathcal D}_n(b,a,-,d)$ be the dialect of ${\mathcal D}_n(a,b,c,d)$ in which $a$ and $b$ are permuted, and the alphabet is restricted to $\{a,b,d\}$; see Figure~\ref{fig:boolean}. \begin{figure}[ht] \unitlength 7.5pt \begin{center}\begin{picture}(37,17)(-3.5,2) \gasset{Nh=2.2,Nw=5.0,Nmr=1.25,ELdist=0.4,loopdiam=1.5} {\scriptsize \node(0')(-2,14){$0'$}\imark(0') \node(1')(7,14){$1'$} \node(2')(16,14){$2'$} \node[Nframe=n](3dots')(25,14){$\dots$} \node(m-1')(34,14){$(m-1)'$}\rmark(m-1') } \drawedge[curvedepth= 1.4,ELdist=-1.3](0',1'){$a,b$} \drawedge[curvedepth= 1,ELdist=.3](1',0'){$b$} \drawedge(1',2'){$a$} \drawedge(2',3dots'){$a$} \drawedge(3dots',m-1'){$a$} \drawedge[curvedepth= -5.2,ELdist=-1](m-1',0'){$a$} \drawloop(0'){$c$} \drawloop(1'){$c$} \drawloop(2'){$b,c$} \drawloop(m-1'){$b,c$} \gasset{Nh=2.2,Nw=5.0,Nmr=1.25,ELdist=0.4,loopdiam=1.5} {\scriptsize \node(0)(-2,7){0}\imark(0) \node(1)(7,7){1} \node(2)(16,7){2} \node[Nframe=n](3dots)(25,7){$\dots$} \node(n-1)(34,7){$n-1$}\rmark(n-1) } \drawloop(0){$d$} \drawloop(1){$d$} \drawloop(2){$a,d$} \drawloop(n-1){$a,d$} \drawedge[curvedepth= 1.2,ELdist=-1.3](0,1){$a,b$} \drawedge[curvedepth= .8,ELdist=.25](1,0){$a$} \drawedge(1,2){$b$} \drawedge(2,3dots){$b$} \drawedge(3dots,n-1){$b$} \drawedge[curvedepth= 5.0,ELdist=-1.5](n-1,0){$b$} \end{picture}\end{center} \caption{Witnesses
${\mathcal D}'_m(a,b,-,c)$ and ${\mathcal D}_n(b,a,-,d)$ for boolean operations. } \label{fig:boolean} \end{figure} To finish the proof, we complete the two DFAs by adding empty states, and construct their direct product as illustrated in Figure~\ref{fig:cross}. If we restrict both DFAs to the alphabet $\{a,b\}$, we have the usual problem of determining the complexity of two DFAs over the same alphabet. By \cite[Theorem 1]{BBMR14}, all $mn$ states of the form $(p',q)$, $p'\in Q'_m$, $q\in Q_n$, are reachable and pairwise distinguishable by words in $\{a,b\}^*$ for all proper boolean operations if $(m,n)\notin \{(3,4),(4,3),(4,4)\}$. For our application, the three exceptional cases were verified by computation. \begin{figure}[ht] \unitlength 8.5pt \begin{center}\begin{picture}(35,20)(0,-2) \gasset{Nh=2.6,Nw=2.6,Nmr=1.2,ELdist=0.3,loopdiam=1.2} {\scriptsize \node(0'0)(2,15){$0',0$}\imark(0'0) \node(1'0)(2,10){$1',0$} \node(2'0)(2,5){$2',0$}\rmark(2'0) \node(3'0)(2,0){$\emptyset',0$} \node(0'1)(10,15){$0',1$} \node(1'1)(10,10){$1',1$} \node(2'1)(10,5){$2',1$}\rmark(2'1) \node(3'1)(10,0){$\emptyset',1$} \node(0'2)(18,15){$0',2$} \node(1'2)(18,10){$1',2$} \node(2'2)(18,5){$2',2$}\rmark(2'2) \node(3'2)(18,0){$\emptyset',2$} \node(0'3)(26,15){$0',3$}\rmark(0'3) \node(1'3)(26,10){$1',3$}\rmark(1'3) \node(2'3)(26,5){$2',3$}\rmark(2'3) \node(3'3)(26,0){$\emptyset',3$}\rmark(3'3) \node(0'4)(34,15){$0',\emptyset$} \node(1'4)(34,10){$1',\emptyset$} \node(2'4)(34,5){$2',\emptyset$}\rmark(2'4) \node(3'4)(34,0){$\emptyset',\emptyset$} } \drawedge(3'0,3'1){$b$} \drawedge(3'1,3'2){$b$} \drawedge(3'2,3'3){$b$} \drawedge[curvedepth=2,ELdist=.4](3'3,3'0){$b$} \drawedge(0'4,1'4){$a$} \drawedge(1'4,2'4){$a$} \drawedge[curvedepth=-2,ELdist=-.9](2'4,0'4){$a$} \drawedge(3'3,3'4){$c$} \drawedge(2'4,3'4){$d$} \drawedge[curvedepth=-3,ELdist=.4](0'0,3'0){$d$} \drawedge[curvedepth=3,ELdist=.4](0'0,0'4){$c$} \end{picture}\end{center} \caption{Direct product for union shown partially.} \label{fig:cross}
\end{figure} To prove that the remaining states are reachable, observe that $(0',0) \stackrel{d} {\longrightarrow} (\emptyset',0)$ and $(\emptyset',0) \stackrel{b^q} {\longrightarrow} (\emptyset',q)$, for $q\in Q_n$. Symmetrically, $(0',0) \stackrel{c} {\longrightarrow} (0',\emptyset)$ and $(0',\emptyset) \stackrel{a^p} {\longrightarrow} (p',\emptyset)$, for $p'\in Q'_m$. Finally, $(\emptyset',n-1) \stackrel{c} {\longrightarrow} (\emptyset',\emptyset)$, and all $(m+1)(n+1)$ states of the direct product are reachable. It remains to verify that the appropriate states are pairwise distinguishable. From \cite[Theorem 1]{BBMR14}, we know that all states in $Q'_m\times Q_n$ are distinguishable. Let $H= \{(\emptyset',q) \mid q\in Q_n \}$, and $V= \{ (p',\emptyset) \mid p'\in Q'_m \}$. For the operations consider four cases: \begin{description} \item[Union] The final states of ${\mathcal P}_{m,n}$ are $\{((m-1)',q) \mid q\in Q_n\cup \{\emptyset\} \}$, and $\{ (p',n-1) \mid p'\in Q'_m \cup \{\emptyset'\} \}$. Every state in $V$ accepts a word with a $c$, whereas no state in $H$ accepts such words. Similarly, every state in $H$ accepts a word with a $d$, whereas no state in $V$ accepts such words. Every state in $Q'_m \times Q_n$ accepts a word with a $c$ and a word with a $d$. State $(\emptyset',\emptyset)$ accepts no words at all. Hence any two states chosen from different sets (the sets being $Q'_m\times Q_n$, $H$, $V$, and $\{(\emptyset',\emptyset)\}$) are distinguishable. States in $H$ are distinguishable by words in $b^*$ and those in $V$, by words in $a^*$. Therefore all $mn+m+n+1$ states are pairwise distinguishable. \item[Symmetric Difference] The final states here are all the final states for union except $( (m-1)',n-1 )$. The rest of the argument is the same as for union. \item[Difference] Here the final states are $\{((m-1)', q) \mid q\neq n-1\}$. 
The $n$ states of the form $(\emptyset',q)$, $q \in Q_n$, are now equivalent to the empty state $(\emptyset',\emptyset)$. The remaining states are pairwise distinguishable by the arguments used for union. Hence we have $mn+m+1$ distinguishable states. However, the alphabet of $L'_m \setminus L_n$ is $\{a,b,c\}$, and the empty state can only be reached by $d$. Since this empty state is not needed, neither is $d$; the final bound is $mn+m$, and it is reached by $L'_m(a,b,-,c)$ and $L_n(b,a)$. \item[Intersection] Here only $((m-1)', n-1 )$ is final and all states $(p', \emptyset)$, $p' \in Q'_m$, and $(\emptyset',q)$, $q\in Q_n$, are equivalent to $(\emptyset',\emptyset)$, leaving $mn+1$ distinguishable states. However, the alphabet of $L'_m \cap L_n$ is $\{a,b\}$, and so the empty state cannot be reached. This gives the final bound of $mn$ states, and this bound is met by $L'_m(a,b)$ and $L_n(b,a)$ as was already known in~\cite{Brz13}. \qed \end{description} \end{proof} \section{Product} \begin{theorem} \label{thm:product} For $m,n \ge 3$, let $L'_m$ (respectively, $L_n$) be a regular language with $m$ (respectively, $n$) quotients over an alphabet $\Sigma'$, (respectively, $\Sigma$). Then $\kappa(L'_m L_n) \le m2^n+2^{n-1}$, and this bound is met by $L'_m(a,b,-,c)$ and $L_n(b,a,-,d)$. \end{theorem} \begin{proof} First we derive the upper bound. Let ${\mathcal D}'_m=( Q'_m, \Sigma', \delta', 0',F')$ and ${\mathcal D}_n=(Q_n,\Sigma,\delta,0,F) $ be minimal DFAs of $L'_m$ and $L_n$, respectively. We use the normal construction of an $\varepsilon$-NFA ${\mathcal N}$ to recognize $L'_mL_n$, by introducing an $\varepsilon$-transition from each final state of ${\mathcal D}'_m$ to the initial state of ${\mathcal D}_n$, and changing all final states of ${\mathcal D}'_m$ to non-final. This is illustrated in Figure~\ref{fig:product}, where $(m-1)'$ is the only final state of ${\mathcal D}'_m$.
We then determinize ${\mathcal N}$ using the subset construction to get the DFA ${\mathcal D}$ for $L'_m L_n$. Suppose ${\mathcal D}'_m$ has $k$ final states, where $1 \le k \le m-1$. I will show that ${\mathcal D}$ can have only the following types of states: (a) at most $(m-k)2^n$ states $\{p'\} \cup S$, where $p'\in Q'_m\setminus F'$, and $S\subseteq Q_n$, (b) at most $k2^{n-1}$ states $\{ p' , 0\} \cup S$, where $p'\in F'$ and $S\subseteq Q_n\setminus \{0\}$, and (c) at most $2^n$ states $S\subseteq Q_n$. Because ${\mathcal D}'_m$ is deterministic, there can be at most one state $p'$ of $Q'_m$ in any reachable subset. If $p' \notin F'$, it may be possible to reach any subset of states of $Q_n$ along with $p'$, and this accounts for (a). If $p' \in F'$, then the set must contain $0$ and possibly any subset of $Q_n\setminus \{0\}$, giving (b). It may also be possible to have any subset $S$ of $Q_n$ by applying an input that is not in $\Sigma'$ to $\{0'\} \cup S$ to get $S$, and so we have~(c). Altogether, there are at most $(m-k)2^n +k2^{n-1}+2^n = (2m-k)2^{n-1} + 2^n$ reachable subsets. This expression reaches its maximum when $k=1$, and hence we have at most $m2^n+2^{n-1}$ states in ${\mathcal D}$. 
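The bound can be checked computationally in a small case. The Python sketch below (not part of the original paper; the state encoding, with the unprimed states shifted by $m$, and all helper names are ours) performs the subset construction on the $\varepsilon$-NFA for the product of the witnesses ${\mathcal D}'_3(a,b,-,c)$ and ${\mathcal D}_3(b,a,-,d)$ and finds exactly $3\cdot 2^3+2^2=28$ reachable subsets; the argument that follows shows such subsets are also pairwise distinguishable.

```python
# Subset construction for the product with m = n = 3.  Primed states
# of D'_3(a,b,-,c) are 0,1,2; unprimed states of D_3(b,a,-,d) are
# shifted to 3,4,5.  Letters missing from one DFA kill that DFA's part.

M = N = 3
PRIMED = list(range(M))                 # 0', 1', 2'
UNPRIMED = list(range(M, M + N))        # 0, 1, 2 shifted to 3, 4, 5

def cyc(states):                        # cyclic permutation of the listed states
    return {s: states[(i + 1) % len(states)] for i, s in enumerate(states)}

def swap(x, y, states):                 # transposition (x, y)
    t = {s: s for s in states}
    t[x], t[y] = y, x
    return t

# a: (0',1',2') and (0,1);  b: (0',1') and (0,1,2);
# c: identity, defined on primed states only; d: on unprimed only.
delta = {
    'a': {**cyc(PRIMED), **swap(M, M + 1, UNPRIMED)},
    'b': {**swap(0, 1, PRIMED), **cyc(UNPRIMED)},
    'c': {s: s for s in PRIMED},
    'd': {s: s for s in UNPRIMED},
}
FINAL_PRIMED, INIT_UNPRIMED = M - 1, M

def closure(S):                         # epsilon-transition (m-1)' -> 0
    return S | {INIT_UNPRIMED} if FINAL_PRIMED in S else S

start = closure(frozenset({0}))
seen, todo = {start}, [start]
while todo:
    S = todo.pop()
    for x in 'abcd':
        T = closure(frozenset(delta[x][q] for q in S if q in delta[x]))
        if T not in seen:
            seen.add(T)
            todo.append(T)

print(len(seen))                        # 28 = 3*2**3 + 2**2
```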
\begin{figure}[ht] \unitlength 7.5pt \begin{center}\begin{picture}(37,17)(-4,2) \gasset{Nh=2.0,Nw=4.6,Nmr=1.25,ELdist=0.4,loopdiam=1.5} {\scriptsize \node(0')(-4,14){$0'$}\imark(0') \node(1')(3,14){$1'$} \node(2')(10,14){$2'$} \node[Nframe=n](3dots')(17,14){$\dots$} \node(m-1')(24,14){$(m-1)'$} } \drawedge[curvedepth= 1.4,ELdist=-1.3](0',1'){$a,b$} \drawedge[curvedepth= 1,ELdist=.3](1',0'){$b$} \drawedge(1',2'){$a$} \drawedge(2',3dots'){$a$} \drawedge(3dots',m-1'){$a$} \drawedge[curvedepth= -5.2,ELdist=-1](m-1',0'){$a$} \drawloop(0'){$c$} \drawloop(1'){$c$} \drawloop(2'){$b,c$} \drawloop(m-1'){$b,c$} \gasset{Nh=2.0,Nw=3.7,Nmr=1.25,ELdist=0.4,loopdiam=1.5} {\scriptsize \node(0)(7,7){0}\imark(0) \node(1)(14,7){1} \node(2)(21,7){2} \node[Nframe=n](3dots)(28,7){$\dots$} \node(n-1)(35,7){$n-1$}\rmark(n-1) } \drawloop(0){$d$} \drawloop(1){$d$} \drawloop(2){$a,d$} \drawloop(n-1){$a,d$} \drawedge[curvedepth= 1.2,ELdist=-1.3](0,1){$a,b$} \drawedge[curvedepth= .8,ELdist=.25](1,0){$a$} \drawedge(1,2){$b$} \drawedge(2,3dots){$b$} \drawedge(3dots,n-1){$b$} \drawedge[curvedepth= 5.0,ELdist=-1.5](n-1,0){$b$} \drawedge[curvedepth= -1.5,ELdist=-1](m-1',0){$\varepsilon$} \end{picture}\end{center} \caption{An NFA for the product of $L'_m(a,b,-,c)$ and $L_n(b,a,-,d)$. } \label{fig:product} \end{figure} To prove that the bound is tight, we use the same witnesses as for boolean operations; see Figure~\ref{fig:product}. If $S=\{q_1,\dots, q_k\}\subseteq Q_n$ then $S+i= \{q_1+i,\dots, q_k+i\}$ and $S-i = \{q_1-i,\dots, q_k-i\} $, where addition and subtraction are modulo $n$. Note that $b^2$ and $a^m$ ($a^2$ and $b^n$) act as the identity on $Q'_m$ ($Q_n$). If $p < m-1$, then $\{p'\} \cup S \stackrel{b^2}{\longrightarrow} \{p'\} \cup (S+2)$, for all $S\subseteq Q_n$. If $n$ is odd, then $(b^2)^{(n-1)/2} =b^{n-1}$ and $\{p'\} \cup S \stackrel{ b^{n-1} }{\longrightarrow} \{ p' \} \cup (S-1)$, for all $S \subseteq Q_n$.
If $0, 1\notin S$ or $\{0,1\}\subseteq S$, then $a$ acts as the identity on $S$. \begin{remark} \label{rem:add1} If $1\notin S$ and $\{(m-2)'\} \cup S$ is reachable, then $\{0',1\} \cup S$ is reachable for all $S\subseteq Q_n\setminus \{1\}$. \end{remark} \begin{proof} If $0\in S$, then $\{(m-2)',0\} \cup S\setminus\{0\} \stackrel{a}{\longrightarrow} \{(m-1)',0,1\} \cup S\setminus\{0\} \stackrel{a}{\longrightarrow} \{ 0',0,1 \} \cup S\setminus\{0\} = \{0',1\} \cup S. $ If $0\notin S$, then $\{(m-2)'\} \cup S \stackrel{a}{\longrightarrow} \{(m-1)',0\} \cup S \stackrel{a}{\longrightarrow} \{ 0',1 \} \cup S. $ \qed \end{proof} We now prove that the languages of Figure~\ref{fig:product} meet the upper bound. \smallskip \noindent {\bf Claim 1:} \emph{All sets of the form $\{p'\} \cup S$, where $p'\in Q'_{m-1}$ and $S\subseteq Q_n$, are reachable.} We show this by induction on the size of $S$. \smallskip \noindent {\bf Basis: $|S| =0$.} The initial set is $\{0'\}$, and from $\{0'\}$ we reach $\{p'\}$, $p'\in Q'_{m-1}$, by $a^{p}$, without reaching any states of $Q_n$. Thus the claim holds if $|S|=0$. \smallskip \noindent {\bf Induction Assumption:} $\{p'\} \cup S$, where $p'\in Q'_{m-1}$ and $S\subseteq Q_n$, is reachable if $|S|\le k$. \smallskip \noindent {\bf Induction Step:} We prove that if $|S|=k+1$, then $\{p'\} \cup S$ is reachable. Let $S=\{q_0,q_1,\dots, q_k\}$, where $0 \le q_0< q_1<\dots <q_k \le n-1$. Suppose $q\in S$. By assumption, sets $\{p'\}\cup (S\setminus \{q\} -(q-1))$ are reachable for all $p'\in Q'_{m-1}$. \smallskip \noindent $\bullet$ \emph{All sets of the form $\{0'\} \cup S$ are reachable.}\\ Note that $1\notin (S\setminus \{q\} -(q-1))$. By assumption, $\{(m-2)'\} \cup (S \setminus \{q\} -(q-1))$ is reachable. By Remark~\ref{rem:add1}, $\{0',1\}\cup (S\setminus \{q\} -(q-1))$ is reachable. 
\begin{enumerate} \item If there is an odd state $q$ in $S$, then $\{0',1\}\cup (S\setminus \{q\} -(q-1)) \stackrel{b^{q-1}}{\longrightarrow} \{0',q\}\cup (S\setminus \{q\}) =\{0'\} \cup S.$ \item If there is no odd state in $S$ and $n$ is odd, then $S \subseteq \{0,2,\dots, n-1\}$. Pick $q \in S$. Then $\{0', 1\} \cup (S \setminus \{q\} - (q-1)) \xrightarrow{b^q} \{0', q+1\} \cup (S \setminus\{q\}+1) \xrightarrow{b^{n-1}} \{0', q\} \cup S \setminus\{q\} = \{0'\} \cup S$. \item If there is no odd state and $n$ is even, then $S\subseteq \{0,2,\dots,n-2\}$ (so $n-1\notin S$). \begin{enumerate} \item If $0 \notin S$, then $0,1 \notin S+1$. By 1, $\{0'\} \cup (S+1)$ is reachable, since $S+1$ contains an odd state. Then $\{0'\} \cup (S+1) \stackrel{a}{\longrightarrow} \{1'\} \cup (S+1) \stackrel{b^{n-1}}{\longrightarrow} \{0'\} \cup S$. \item If $2\notin S$, then $0,1 \notin S-1$. By 1, $\{0'\} \cup (S-1)$ is reachable, since $S-1$ contains an odd state. Then $\{0'\} \cup (S-1) \stackrel{a}{\longrightarrow} \{1'\} \cup (S-1) \stackrel{b^{n+1}}{\longrightarrow} \{0'\} \cup S$. \item If $\{0,2\} \subseteq S$, then $0\notin S-1$, and $1, n-1\in S-1$. By 1, $\{0'\} \cup (S-1)$ is reachable, since $1\in S-1$. Note that $aba$ sends 1 to 0, $n-1$ to 1, and adds 1 to each state $q\ge 3$ of $S-1$; thus $2\notin (S-1)aba$, and $\{0'\} \cup (S-1) \stackrel{aba}{\longrightarrow} \{1', 0, 1\} \cup S\setminus \{0,2\}$. Next, $b^{n-1}$ sends 0 to $n-1$ and subtracts 1 from every other element of $S\setminus \{0,2\}$. Hence $\{1', 0, 1\} \cup S\setminus \{0,2\} \stackrel{b^{n-1}}{\longrightarrow} \{0',n-1,0\} \cup (S \setminus \{0,2\} -1) \stackrel{ab }{\longrightarrow} \{0',0,2\} \cup (S \setminus \{0,2\}) =\{0'\} \cup S$. \end{enumerate} \end{enumerate} \noindent $\bullet$ \emph{All sets of the form $\{1'\} \cup S$ are reachable.}\\ If 0 and 1 are not in $S$ or are both in $S$, then $\{0'\} \cup S \stackrel{a}{\longrightarrow} \{1'\} \cup S$. 
If $0\in S$ but $1\notin S$, then $\{0',1\} \cup S\setminus\{0\} \stackrel{a}{\longrightarrow} \{1',0\} \cup S\setminus\{0\} =\{1'\} \cup S$. If $1\in S$ but $0\notin S$, then $\{0',0\} \cup S\setminus\{1\} \stackrel{a}{\longrightarrow} \{1',1\} \cup S\setminus\{1\} =\{1'\} \cup S$. \medskip \noindent \label{p'} $\bullet$ \emph{All sets of the form $\{p'\} \cup S$, where $2 \le p\le m-2$, are reachable.}\\ If $p$ is even, then $\{0'\} \cup S \stackrel{a^p}{\longrightarrow} \{p'\} \cup S$.\\ If $p$ is odd, then $\{1'\} \cup S \stackrel{a^{p-1}}{\longrightarrow} \{p'\} \cup S$. \medskip \noindent {\bf Claim 2:} \label{(m-1)'} \emph{All sets of the form $\{(m-1)', 0\} \cup S$ are reachable.} \begin{enumerate} \item By Claim~1, $\{(m-3)'\} \cup S$ is reachable. If $q_0=1$, then\\ $\{(m-3)',1\} \cup S\setminus \{1\} \stackrel{a^2}{\longrightarrow} \{(m-1)', 0,1\} \cup S\setminus\{1\}= \{(m-1)', 0\} \cup S.$ \item By Claim~1, $\{(m-2)'\} \cup S$ is reachable. If $q_0\ge 2$, then \\ $\{(m-2)'\} \cup S \stackrel{a}{\longrightarrow} \{(m-1)', 0\} \cup S.$ \end{enumerate} \smallskip \noindent {\bf Claim 3:} \emph{All sets of the form $S$ are reachable.}\\ By~Claim 1, $\{0'\} \cup S$ is reachable for every $S$, and $\{0'\} \cup S \stackrel{d}{\longrightarrow} S.$ \smallskip For distinguishability, note that only state $q$ accepts $w_q = b^{n-1-q}$ in ${\mathcal D}_n$. Hence, if two states of the product have different sets $S$ and $S'$ and $q\in S\oplus S'$, then they can be distinguished by $w_q$. State $\{p'\} \cup S$ is distinguished from $S$ by $ca^{m-1-p}b^{n-1}$. If $p <q$, states $\{p'\} \cup S$ and $\{q'\} \cup S$ are distinguished as follows. Use $ca^{m-1-q}$ to reach $\{(p+m-1-q)'\}$ from $p'$ and $\{(m-1)'\} \cup \{0\}$ from $q'$. The reached states are distinguishable since they differ in their subsets of $Q_n$. 
\qed \end{proof} \section{Most Complex Regular Languages} A \emph{most complex} regular language stream is one that, together with some dialects, meets the complexity bounds for all boolean operations, product, star, and reversal, and has the largest syntactic semigroup and most complex atoms\footnote{The \emph{atom congruence} is a left congruence defined as follows: two words $x$ and $y$ are equivalent if $ux\in L$ if and only if $uy\in L$ for all $u\in \Sigma^*$. Thus $x$ and $y$ are equivalent if $x\in u^{-1}L$ if and only if $y\in u^{-1}L$. An equivalence class of this relation is called an \emph{atom} of $L$~\cite{BrTa14,Iva16}. It follows that an atom is a non-empty intersection of complemented and uncomplemented quotients of $L$. The number of atoms and their quotient complexities are possible measures of complexity of regular languages~\cite{Brz13}. For more information about atoms and their complexity, see~\cite{BrTa13,BrTa14,Iva16}. }~\cite{Brz13}. A most complex stream should have the smallest possible alphabet sufficient to meet all the bounds. Most complex streams are useful in systems dealing with regular languages and finite automata. One would like to know the maximal sizes of automata that can be handled by the system. In view of the existence of most complex streams, one stream can be used to test all the operations. Here we use the stream of~\cite{Brz13} shown in Figure~\ref{fig:RegWit}. \begin{theorem}[Most Complex Regular Languages] \label{thm:main} For each $n\ge 3$, the DFA of Definition~\ref{def:regular} is minimal and its language $L_n(a,b,c,d)$ has complexity $n$. The stream $(L_m(a,b,c,d) \mid m \ge 3)$ with dialect streams $(L_n(a,b,-,c) \mid n \ge 3)$ and $(L_n(b,a,-,d) \mid n \ge 3)$ is most complex in the class of regular languages. In particular, it meets all the complexity bounds below, which are maximal for regular languages. In several cases the bounds can be met with a restricted alphabet. 
\begin{enumerate} \item The syntactic semigroup of $L_n(a,b,c)$ has cardinality $n^n$. \item Each quotient of $L_n(a)$ has complexity $n$. \item The reverse of $L_n(a,b,c)$ has complexity $2^n$, and $L_n(a,b,c)$ has $2^n$ atoms. \item For each atom $A_S$ of $L_n(a,b,c)$, the complexity $\kappa(A_S)$ satisfies: \begin{equation*} \kappa(A_S) = \begin{cases} 2^n-1, & \text{if $S\in \{\emptyset,Q_n\}$;}\\ 1+ \sum_{x=1}^{|S|}\sum_{y=1}^{n-|S|} \binom{n}{x}\binom{n-x}{y}, & \text{if $\emptyset \subsetneq S \subsetneq Q_n$.} \end{cases} \end{equation*} \item The star of $L_n(a,b)$ has complexity $2^{n-1}+2^{n-2}$. \item The product $L'_m(a,b,-,c) L_n(b,a,-,d)$ has complexity $m2^n+2^{n-1}$. \item The complexity of $L'_m(a,b,-,c) \circ L_n(b,a,-,d)$ is $mn+m+n+1$ if $\circ\in \{\cup,\oplus\}$, that of $L'_m(a,b,-,c) \setminus L_n(b,a)$ is $mn+m$, and that of $L'_m(a,b) \cap L_n(b,a)$ is $mn$. \end{enumerate} \end{theorem} \begin{proof} The proofs of 1--5 can be found in~\cite{Brz13}, and Claims 6 and 7 are proved in the present paper, Theorems~\ref{thm:boolean} and~\ref{thm:product}. \qed \end{proof} \begin{proposition}[Marek Szyku{\l}a, personal communication] At least four inputs are required for a most complex regular language. In particular, four inputs are needed for union: two inputs are needed to reach all pairs of states in $Q'_m \times Q_n$, one input in $\Sigma' \setminus \Sigma$ for pairs $(p',\emptyset)$ with $p'\in Q'_m$, and one in $\Sigma \setminus \Sigma'$ for pairs $(\emptyset',q)$ with $q \in Q_n$. \end{proposition} \section{Conclusions} Two complete DFAs over different alphabets $\Sigma'$ and $\Sigma$ are incomplete DFAs over $\Sigma' \cup \Sigma$. Each DFA can be completed by adding an empty state and sending all transitions induced by letters not in the DFA's alphabet to that state. This results in an $(m+1)$-state DFA and an $(n+1)$-state DFA. 
From the theory of DFAs over the same alphabet we know that $(m+1)(n+1)$ is an upper bound for all boolean operations on the original DFAs, and that $m2^{n+1}+2^n$ is an upper bound for product. We have shown that the tight bounds for boolean operations are $(m+1)(n+1)$ for union and symmetric difference, $mn+m$ for difference, and $mn$ for intersection, while the tight bound for product is $m2^n+2^{n-1}$. In the same-alphabet case the tight bound is $mn$ for all boolean operations and it is $(m-1)2^n+2^{n-1}$ for product. In summary, the restriction to identical alphabets is unnecessary and can lead to incorrect results. It should be noted that if the two languages in question already have empty quotients, then making the alphabets the same does not require the addition of any states, and the traditional same-alphabet methods are correct. This is the case, for example, for prefix-free, suffix-free and finite languages. \medskip \noindent {\bf Acknowledgment} I am very grateful to Sylvie Davies, Bo Yang Victor Liu and Corwin Sinnamon for careful proofreading and constructive comments. I thank Marek Szyku{\l}a for contributing Proposition~1 and several other useful comments. \providecommand{\noopsort}[1]{}
\section{Introduction} Autonomous robots need to interact with their physical environment to fulfill a plethora of tasks. This requires the manipulation of individual objects, for example, to fetch an item or to stow it away. A popular approach to enable such manipulations uses object pose estimation and grasp pose annotation~\cite{Tremblay2018_DeepOP, Srinivasa2010, Chitta2012, Wang2019}. Previous work on object detection and pose estimation achieves high accuracy on popular datasets such as \texttt{LINEMOD}~\cite{Hinterstoisser2012} or \texttt{YCB-VIDEO}~\cite{Xiang2017}. However, the performance of these algorithms deteriorates when the objects' 3D models are inaccurate or lighting and viewing conditions change~\cite{Loghmani2018,Ammirato2017}. To deal with this problem, hypotheses verification and object pose refinement are commonly used in object pose estimation pipelines. \begin{figure}[!t] \centering \setlength{\tabcolsep}{2pt} \begin{tabular}{cc} \raisebox{-0.69\height}[0pt][0pt]{\includegraphics[trim=0 0 0 50,clip,width=0.48\linewidth]{figures/teaser_robot.jpg}} & \includegraphics[trim=100 220 120 140, clip,width=0.48\linewidth]{figures/teaser_initial.png} \\ & \includegraphics[trim=80 190 110 120, clip,width=0.48\linewidth]{figures/teaser_refined.png} \\ & \includegraphics[trim=160 155 110 95, clip, width=0.48\linewidth]{figures/teaser_render.png} \\ \end{tabular} \setlength{\tabcolsep}{6pt} \caption{Grasping \texttt{YCB-VIDEO} objects with a Toyota HSR. Initial pose estimates in the simulation environment (top) are improved using VeREFINE (mid and bottom).} \label{fig:teaser} \end{figure} The idea of hypotheses verification is to evaluate the fit of different estimates with the observed scene: The best fitting estimates are selected and estimates below a threshold are pruned. While this improves accuracy and reliability, it also introduces the problem of increased complexity arising from the number of possible combinations in multi-object scenes. 
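To make this combinatorial growth concrete: a scene with $N$ objects and $n$ pose hypotheses per object admits $n^N$ joint hypothesis combinations that a scene-level verifier would have to score. A minimal sketch (illustrative only, not part of the VeREFINE implementation):

```python
def num_scene_combinations(num_objects: int, hypotheses_per_object: int) -> int:
    """Joint hypothesis combinations in a multi-object scene: each of the
    N objects contributes n independent pose hypotheses, giving n**N."""
    return hypotheses_per_object ** num_objects

# e.g. 5 objects with 5 hypotheses each already yield 3125 candidate scenes
print(num_scene_combinations(5, 5))  # -> 3125
```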
The usability of such approaches in robotics is limited by their runtime. For example, Mitash et al.~\cite{Mitash2018} combine object pose verification with physics simulation, resulting in frame times of up to 30s. Krull et al.~\cite{Krull2017} integrate object pose refinement and verification with reinforcement learning to efficiently allocate a refinement budget. However, the authors report a frame time of up to 34s. In contrast to hypothesis verification, which only accepts or rejects object pose estimates, object pose refinement improves the estimates themselves. This is achieved by minimizing the discrepancy between the observed scene and the object in an estimated pose. The most popular pose refinement method is the Iterative Closest Point (ICP) algorithm~\cite{Besl1992}. However, if the initial estimate or the visual observation is inaccurate, ICP converges to a local minimum (wrong pose) or even diverges. Alternatively, physics simulation has been used to ensure plausibility and improve the accuracy of object pose estimates \cite{Mitash2018,Furrer2017}. But applying physics simulation to objects is an unstable process. It may cause objects to topple over and create worse estimates given the inaccuracy of the simulated environment. The goal of both verification and refinement is to maximize the fit of the estimate to the observed scene. We hypothesize that, by integrating these approaches into one step, we are able to improve the overall accuracy of the pose estimates, while achieving more graceful degradation by limiting divergence of individual strategies. To this end, we present VeREFINE, an integrated approach that combines hypotheses \textit{Ver}ification, object pose \textit{Refine}ment and physics simulation into a unified framework. 
Our contributions are \begin{itemize} \item improving accuracy by integrating refinement with physics simulation in an iterative loop, \item improving robustness by efficient rendering-based verification of object pose estimates, \item improving accuracy and runtime using regret minimization to exploit promising hypotheses, and \item the combination of these into reliable scene-level refinement and verification for multi-object scenes. \end{itemize} We evaluate our framework on three publicly available datasets, \texttt{Extended APC} \cite{Mitash2018}, \texttt{LINEMOD} \cite{Hinterstoisser2012} and \texttt{YCB-VIDEO}~\cite{Xiang2017}, and outperform the state of the art in pose estimation and refinement in terms of robustness and accuracy. We compare to the related approach by Mitash et al.~\cite{Mitash2018}, achieving a significant reduction in runtime while increasing the accuracy of the pose estimates. We demonstrate the robustness of our method with respect to initial pose errors and missing depth values due to occlusion and material properties. Finally, we evaluate the proposed framework in a robotic grasping experiment, resulting in significantly increased success rates compared to other methods. After reviewing related work in Sec.~\ref{sec:related_work}, we discuss the refinement methods in Sec.~\ref{sec:single} and the complete VeREFINE approach in Sec.~\ref{sec:vf}. Sec.~\ref{sec:experiments} presents experiments and results. Sec.~\ref{sec:conclusion} concludes the paper. \section{Related Work}\label{sec:related_work} The proposed approach builds on previous work in hypotheses verification, object pose refinement and their combination with physics simulation. Hypotheses verification approaches for object pose estimation show that considering multiple pose hypotheses per object improves overall estimation performance. Drost et al.~\cite{Drost2010} use a clustering-based verification stage to refine pose estimates. 
In~\cite{Vidal2018}, a pool of 200 object pose hypotheses is generated using a Point Pair Features (PPF) pipeline. Each hypothesis is refined using Projective ICP and a two-step verification to determine the best estimate. In~\cite{Xiang2017}, an initial estimate is perturbed to generate a set of hypotheses for better coverage of the solution space. All hypotheses are refined before scoring and selection. In contrast, Wang et al.~\cite{Wang2019} estimate a pose confidence score jointly with per-pixel object pose estimates. The highest-scoring estimate is selected and refined. Krull et al.~\cite{Krull2017} train a CNN to predict two different hypothesis scores for use during refinement and for the selection of the final estimate. On the scene level, a scoring function that considers geometrical cues, clutter and conflicting hypotheses for multiple objects is proposed in~\cite{Aldoma2016}. For efficient evaluation of the search space, \cite{Bauer2019icvs} consider equivalent combinations of hypotheses to reduce the search tree to a directed acyclic graph and explore using Monte Carlo Tree Search (MCTS). Physics simulation is incorporated in MCTS to additionally consider the supporting relations between objects in~\cite{Mitash2018,Bauer2019oagm}. We propose to apply rendering-based verification to guide refinement, allowing refinement steps to be allocated to promising hypotheses. This naturally extends to multi-object scenes, which reduces the solution space as compared to search-based methods. Previous work on object pose refinement exploits depth, RGB and object segmentation as input modalities. A seminal approach is ICP \cite{Besl1992}. More recently, deep learning approaches for object pose refinement have been proposed. RGB-based methods render intermediary object pose estimates and use CNNs to compute a pose update~\cite{Li2018,Manhardt2018,Zakharov2019}. 
The refinement method by Wang et al.~\cite{Wang2019} requires RGB-D images and instance segmentation as input. The depth cues are processed using PointNet~\cite{Qi2017} and combined with the RGB-based features from a CNN. We show that our proposed approach is applicable to both learning-free and learning-based methods. It boosts their performance by improving initial estimates using physics simulation and guides refinement through rendering-based verification. Application of physics reasoning and simulation in related vision tasks indicates that physics provides strong cues for object pose and admissible scene configurations. The segmentation method by Jia et al.~\cite{Jia2014} uses rule-based physical stability reasoning to combine or split candidate patches, represented by bounding boxes, to generate physically plausible scenes. A similar reasoning is applied to voxelized scene representations to segment and estimate the shape of objects in~\cite{Zheng2015}. In a robotics context, Furrer et al.~\cite{Furrer2017} show the benefit of using physics simulation for object stacking. They propose a method for determining the target pose of irregular stones such that a structurally stable stack can be built by a robot. Mitash et al.~\cite{Mitash2018} use physics-based verification and MCTS for object pose estimation given multiple hypotheses in multi-object scenes. For each hypothesis combination, this approach runs one iteration of Trimmed ICP and a physics simulation, making it sensitive to the estimation of the supporting plane and the physical properties of the simulated objects. Our proposed solution of interleaving physics simulation and refinement is more robust to these challenges and prevents diverging simulation. We allow more promising estimates to be refined multiple times while saving these additional iterations on less promising estimates. Moreover, in~\cite{Mitash2018}, a solution is processed one object after another. 
Feedback on the impact on the overall solution quality is given by a scene-level reward but only allows selection among the refined hypotheses. In contrast, by incorporating the scene-level feedback in the refinement process, our approach adapts the estimates to the overall solution. Furthermore, the approach of~\cite{Mitash2018} needs to grow a search tree of combinations of hypotheses, spending expensive refinements on exploring the search space. More efficiently, our approach uses an object-based representation of the search space, which is initialized using a rendering-based verification score. Thereby, no additional computation needs to be spent on initial exploration of the search space. \section{Integrating Hypotheses Verification with Physics-guided Iterative Refinement}\label{sec:single} The goal of this work is to accurately and robustly explain scenes of varying complexity in terms of object poses for applications such as robotic grasping. An RGB-D observation, instance detection, instance segmentation and a set of initial object pose estimates are assumed to be given. \begin{figure}[!t] \centering \vspace{1ex} \includegraphics[width=0.8\linewidth]{figures/schematic_single.png} \caption{Proposed integration approaches given a simulation environment and an initial object pose estimate ($T_{cur}$). (a) Integration of physics simulation (SIM) and iterative refinement (REF) into Physics-guided Iterative Refinement (PIR). (b) Supervision using verification score $\Bar{f}$ (SIR). (c) Regret minimization using UCB score (RIR).} \label{fig:schematic_single-object} \end{figure} In the following, we present the building blocks of our VeREFINE approach by considering increasingly complex scenarios. For individual objects, we propose an iterative physics-guided refinement loop (Sec.~\ref{sec:pir}). To improve the robustness of this approach, supervision of the refinement loop through rendering-based verification is presented (Sec.~\ref{sec:sir}). 
Given multiple estimates, a regret minimization approach is introduced to efficiently allocate refinement towards promising estimates (Sec.~\ref{sec:rir}). We extend the discussed methods to consider multi-object scenes with multiple initial estimates each, where occlusion and support relationships between objects need to be considered (Sec.~\ref{sec:vf}). \subsection{Physics-guided Iterative Refinement (PIR)}\label{sec:pir} Object pose refinement methods depend heavily on the quality of the initial estimates. In contrast to previous approaches that apply physics simulation as a post-hoc step after iterative refinement \cite{Mitash2018}, we propose to interleave object pose refinement and physics simulation in a Physics-guided Iterative Refinement (PIR) loop, illustrated in Figure \ref{fig:schematic_single-object}a. The physical plausibility of the initial estimate used for refinement is improved using simulation, helping the refinement to relate the correct parts of the model to the observation. The iterative feedback loop allows the refinement to, in turn, initialize physics simulation with more accurate estimates, thus limiting divergence. In each iteration, the current object pose estimate $T_{cur}=[R_{cur},t_{cur}]$ initializes the object in the simulation environment, shown in Figure \ref{fig:schematic_single-object}a (mid). In the simplest case, the environment consists of a supporting plane. In more complex scenes, it also includes other estimated objects. The simulation is progressed and the resulting object pose $T_{sim}$ is returned. As indicated in Figure \ref{fig:schematic_single-object}a, only the orientation part $R_{sim}$ is used to update the estimate. This is motivated by the observation that, when physics simulation leads to large displacements, it causes the iterative refinement to lose track of corresponding object parts. We found that using only the orientation contains this divergent behavior while still improving the refinement process. 
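A single PIR iteration can be sketched as follows; `simulate_physics` and `refine` are hypothetical stand-ins for the physics simulator and the baseline refiner (e.g. ICP or DF-R), with a pose represented as an `(R, t)` pair:

```python
def simulate_physics(pose, environment):
    """Hypothetical simulator stub: a real one would settle the object
    under gravity in the environment and return the resulting pose."""
    R, t = pose
    return R, t

def refine(pose, observation):
    """Hypothetical refiner stub: a real one would run one iteration of
    a pose refiner (e.g. ICP) against the observation."""
    R, t = pose
    return R, t

def pir_iteration(pose_cur, environment, observation):
    """One Physics-guided Iterative Refinement (PIR) step."""
    R_cur, t_cur = pose_cur
    # 1) initialize the simulation with the current estimate and progress it
    R_sim, _t_sim = simulate_physics((R_cur, t_cur), environment)
    # 2) keep only the simulated orientation; large simulated displacements
    #    would make the refiner lose track of corresponding object parts
    # 3) run one refinement iteration from the combined estimate
    return refine((R_sim, t_cur), observation)
```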
The estimate $[R_{sim}, t_{cur}]$ is used to initialize an iteration of the object pose refinement algorithm that returns the final estimate $T_{ref}$ after one iteration of PIR. \subsection{Supervised Iterative Refinement (SIR)}\label{sec:sir} Due to divergent behavior in physics simulation or iterative refinement, the final estimate after applying these methods might generate a worse explanation of the observation than the initial or intermediary estimates. We solve this by continuously evaluating the observation fit of the intermediary estimates. This integration of verification into the refinement process allows us to supervise divergent behavior and select the best fitting estimate as the final one. The verification score $\Bar{f}$ measures the observation fit and is computed from the average discrepancy between the estimate and the observation in terms of depth and surface normals, given by \begin{equation}\label{eq:score} \begin{split} \Bar{f}(T) &= \frac{1}{2} \left( \frac{1}{N} \mathlarger{\sum\limits^{N}} f_d(T) + \frac{1}{N} \mathlarger{\sum\limits^{N}} f_\mathbf{n}(T) \right) \\ f_d(T) &= \begin{cases} 1 - \frac{|d - \hat{d}_T|}{\tau},& \text{where $|d - \hat{d}_T| < \tau$} \\ 0,& \text{otherwise} \end{cases} \\ f_\mathbf{n}(T) &= \begin{cases} 1 - \frac{1-{\mathbf{n} \cdot \mathbf{\hat{n}}_T}}{\alpha},& \text{where $1-{\mathbf{n} \cdot \mathbf{\hat{n}}_T} < \alpha$} \\ 0,& \text{otherwise} \end{cases} \end{split} \end{equation} where $d$ is a valid depth value and $\mathbf{n}$ is a corresponding surface normal in the segmented scene. The $N$ corresponding values of the rendered estimate are $\hat{d}_T$ and $\hat{\mathbf{n}}_T$. Parameters $\tau$ and $\alpha$ are soft thresholds for the maximal admissible discrepancy. Figure \ref{fig:schematic_single-object}b (bottom) shows an example of $\Bar{f}$ applied to an estimate. 
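A plain-Python sketch of the score $\Bar{f}$ of Equation~\eqref{eq:score}, with pixel lists standing in for the rendered and observed buffers; here $\alpha$ is interpreted as the value of $1-\mathbf{n}\cdot\hat{\mathbf{n}}$ corresponding to a 45-degree normal deviation, and the defaults are illustrative rather than the paper's exact configuration:

```python
import math

def verification_score(depth_obs, normals_obs, depth_ren, normals_ren,
                       tau=0.020, alpha=1.0 - math.cos(math.radians(45))):
    """Average depth and surface-normal agreement over the N pixels
    covered by the rendered estimate. Depths in meters, normals as unit
    3-vectors; a rendered depth of None marks uncovered pixels."""
    f_d_sum = f_n_sum = 0.0
    N = 0
    for d, n, d_hat, n_hat in zip(depth_obs, normals_obs, depth_ren, normals_ren):
        if d_hat is None:  # pixel not covered by the rendered estimate
            continue
        N += 1
        depth_err = abs(d - d_hat)
        if depth_err < tau:                       # soft-thresholded depth term
            f_d_sum += 1.0 - depth_err / tau
        normal_err = 1.0 - sum(a * b for a, b in zip(n, n_hat))
        if normal_err < alpha:                    # soft-thresholded normal term
            f_n_sum += 1.0 - normal_err / alpha
    return 0.0 if N == 0 else 0.5 * (f_d_sum / N + f_n_sum / N)
```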
In each PIR iteration $i$, we evaluate the estimates returned by physics simulation $T_{i;sim}$ and refinement $T_{i;ref}$ and proceed with the estimate that achieves the better score. After the last iteration, the final estimate $T$ that gives the best score $\Bar{f}$ overall is selected from all processed estimates. As such, Supervised Iterative Refinement (SIR) covers cases where the individual approaches could diverge. The supervision requires evaluations of $\Bar{f}$ for $T_{i;sim}$ and $T_{i;ref}$ each iteration. To enable fast evaluation, computations are carried out on the GPU in two rendering passes using OpenGL. The first pass writes $\hat{d}_T$ and $\hat{\mathbf{n}}_T$ to a texture. The second pass uses this texture and the observation to compute $f_d$ and $f_n$. The summed values of $N$, $f_d$ and $f_n$ are read back from a higher-level mipmap, drastically reducing the read-back time. The final averaging is done on the CPU and yields $\Bar{f}$. In our experiments, one evaluation of $\Bar{f}$ using an NVIDIA GTX 1080Ti takes 1-2ms. This is a significant speed-up compared to 7-9ms when reading back the full depth and normal information from the GPU to evaluate $\Bar{f}$ on the CPU. \subsection{Regret-minimizing Iterative Refinement (RIR)}\label{sec:rir} Considering multiple pose hypotheses per object raises two questions: on which hypotheses to spend refinement steps, and which hypothesis to select in the end. Promising hypotheses should be exploited by applying more refinement steps while other hypotheses should still be explored to find better candidates. We propose a Multi-armed Bandit (MAB) to model this exploitation-exploration problem, where the pull of arm $j$ represents running one SIR iteration for hypothesis $j$. The Upper Confidence Bound policy (UCB) \cite{Auer2002} minimizes the regret of choosing a sub-optimal arm of a MAB with respect to a given reward. 
In each iteration, the arm with maximal $ucb_j$ is selected according to \begin{equation} ucb_j = \mu_j + c \cdot \sqrt{\frac{\ln{p}}{n_j}} \end{equation} where $\mu_j$ is the mean reward of playing arm $j$, $p$ is the total number of plays and $n_j$ is the number of times the arm has been played. $c$ is a parameter of the algorithm that controls the balance of exploitation and exploration. In our approach, illustrated in Figure \ref{fig:schematic_single-object}c, the verification score $\Bar{f}$ is chosen as reward function. Applying the UCB policy to the resulting reward statistics efficiently allocates a fixed refinement budget, spending more refinements on promising hypotheses while saving refinements on those that have a low $\Bar{f}$. This results in the same total amount of refinements but in a regret-minimizing way. The resulting Regret-minimizing Iterative Refinement (RIR) procedure starts by ranking the initial estimates based on $\Bar{f}$. For each subsequent RIR iteration, SIR is applied and the verification score is used as reward signal. As with SIR, the final selection is based on the observation fit across all encountered estimates. The formulation based on a MAB and a rendering-based score allows our approach to be quickly applied to new datasets and can be used to extend existing and future refinement methods. In contrast, the related approach in~\cite{Krull2017} uses reinforcement learning and a CNN-based verification score regression, which need to be expensively re-trained. \section{Physics Simulation and Regret Minimization in Cluttered Multi-Object Scenes}\label{sec:vf} In cluttered multi-object scenes, the proposed verification score and physics simulation need to deal with occlusions and support relationships. Thus, the order in which objects are considered is important. Moreover, with each of the $N$ objects having $n$ hypotheses, the number of combinations of hypotheses grows exponentially. 
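The UCB arm selection used by RIR above can be sketched as follows; this is a minimal version in which the reward bookkeeping and the D-UCB discounting used later for multi-object scenes are omitted, and the value of $c$ is illustrative:

```python
import math

def ucb_select(mean_rewards, plays, total_plays, c=0.1):
    """Pick the hypothesis (arm) with maximal UCB score
    ucb_j = mu_j + c * sqrt(ln(p) / n_j); unplayed arms are tried first.
    `c` balances exploitation against exploration."""
    best_j, best_ucb = None, -float("inf")
    for j, (mu, n) in enumerate(zip(mean_rewards, plays)):
        if n == 0:
            return j  # explore every arm at least once
        ucb = mu + c * math.sqrt(math.log(total_plays) / n)
        if ucb > best_ucb:
            best_j, best_ucb = j, ucb
    return best_j
```

In a RIR loop, the selected arm would receive one SIR iteration and its verification score $\Bar{f}$ would be fed back as the reward before the next selection.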
To tame this problem, we discuss clustering strategies to reduce the number of combinations that need to be considered and present two approaches to efficiently evaluate the remaining search space. \begin{figure}[!t] \vspace{1.0ex} \centering \includegraphics[width=\linewidth]{figures/schematic_multi.png} \caption{Proposed approaches for cluttered multi-object scenes. The best estimate per object is added to the simulation environment used for the subsequent objects, allowing consideration of occlusions and support relationships. VF$_b$ (blue) fully refines each object using the object fit as reward. VF$_d$ (green) repeats this process iteratively, refining each object only once per iteration and using the scene fit as reward.} \label{fig:schematic_multi-object} \end{figure} \subsection{Object Clustering and Dependency Graph} Mitash et al. \cite{Mitash2018} isolate objects that might interact based on the segmented point clouds. This reduces the number of objects that need to be jointly considered and thus the number of combinations. Furthermore, they argue that not all combinations of objects have to be considered. Instead, occlusion and support relationships between objects are used to compute a dependency list. A search tree is built from this list, where at layer $i$, object $i$ is represented by all of its $n$ hypotheses. This yields a tree of $(n^{N+1}-1)/(n-1)$ nodes, including the root. For a scene of 5 objects with 5 hypotheses each, this produces a search tree of 3905 nodes (excluding the root node). In contrast, we address more general scenarios by explicitly considering ambiguous dependencies, for example, the case where an object is occluded by another object but also supporting the same object. To resolve such ambiguities, we first decompose the independent clusters into support dependency lists. The first object in each support dependency list, the base object, is in contact with the ground plane and supports the remaining objects in the list. 
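As a quick arithmetic check of the search-tree size above (a hypothetical helper, not the authors' code):

```python
def mcts_tree_size(num_objects: int, hypotheses_per_object: int) -> int:
    """Nodes in the layered hypothesis tree of depth N with branching
    factor n, i.e. (n**(N+1) - 1) / (n - 1), minus the root node."""
    n, N = hypotheses_per_object, num_objects
    return (n ** (N + 1) - 1) // (n - 1) - 1

print(mcts_tree_size(5, 5))  # -> 3905
```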
The support dependency lists are then ordered front-to-back based on their respective base objects. Instead of using the resulting dependency list to grow a search tree using MCTS as in~\cite{Mitash2018}, we exploit our single-object approaches to reduce the solution space and allow for iterative refinement on a scene level. The proposed representation requires only $N \cdot n$ nodes to represent the same search space as before -- or only 25 instead of 3905 nodes in the example. \subsection{VeREFINE breadth (VF$_b$)}\label{sec:vfb} Our first approach is to explore all object hypotheses by representing each object in the dependency list and its hypotheses using an RIR bandit. The scene is incrementally built by computing the best estimate for the considered object in the current environment. The object is added to the environment with the computed pose, allowing more accurate estimation of the next objects' poses. We call this approach of first exploring all hypotheses per object \textit{VeREFINE breadth} (VF$_b$), shown in Figure \ref{fig:schematic_multi-object}~(blue). This results in $N$ RIR bandits with $n$ nodes each. \subsection{VeREFINE depth (VF$_d$)}\label{sec:vfd} An alternative approach, which introduces a feedback loop that is missing in VF$_b$, is to iterate through the dependency list. For each iteration, the objects' RIR bandits are progressed only once. The best known hypotheses after each iteration are evaluated as a complete scene. The resulting scene fit is computed by $\Bar{f}$ and is used as reward for selected hypotheses instead of the per-object reward. Thereby, hypotheses that contribute to a better overall scene fit are selected more often. This scene-first approach, called \textit{VeREFINE depth} (VF$_d$), is illustrated in Figure~\ref{fig:schematic_multi-object}~(green). As the procedure results in a changing reward distribution, the UCB policy is replaced with Discounted-UCB (D-UCB)~\cite{Kocsis2006}. 
The reward and plays statistics are discounted by a small factor each iteration, which reduces the impact of previous iterations and adapts to a changing reward distribution over time. This is shown to reduce the cumulative regret of the D-UCB policy as compared to UCB for abruptly and continuously changing reward distributions~\cite{Garivier2011}. The RIR bandits are initialized using the rendering-based verification score as in the single-object scenario, acting as a heuristic in the first iteration through the dependency list to select better initial estimates. Therefore, instead of spending refinement steps to grow the search tree as in the MCTS-based approach~\cite{Mitash2018}, both our proposed approaches immediately and efficiently allocate refinement steps to more promising estimates. \section{Experiments}\label{sec:experiments} This section presents the evaluation of VeREFINE on the \texttt{Extended APC} (xAPC), \texttt{LINEMOD} (LM) and \texttt{YCB-VIDEO} (YCBV) datasets. Improvement over state-of-the-art refinement methods is shown by comparison with Iterative Closest Point (ICP) and DenseFusion Refinement (DF-R). For pose estimation, we use Point Pair Features (PPF) and DenseFusion (DF). In addition, we compare against the approach presented by Mitash et al.~\cite{Mitash2018} (PHYSIM-MCTS). It uses Super4PCS (PCS) and hypotheses clustering for pose estimation and Trimmed ICP (TrICP) for refinement. The impact of the individual parts of our method is evaluated in an ablation study on LM. \textbf{Datasets:} The LM dataset \cite{Hinterstoisser2012} is used to evaluate the single-object setting. It consists of 15 scenes with individual toys and household objects. A test set is defined based on the BOP19 challenge \cite{BOP}, albeit adapted to learning-based methods. These methods use the training split defined in \cite{Brachmann2016,Rad2017,Tekin2018}, which excludes scenes 3 and 7 but includes 15\% of the test frames used in \cite{BOP}. 
We therefore exclude both scenes and the frames used in training from the test set for a total of 2219 test frames. xAPC \cite{Mitash2018} and YCBV \cite{Xiang2017} are used for the multi-object setting. Both datasets exhibit clutter as well as isolated, 2- and 3-object support relationships. xAPC uses Amazon Picking Challenge objects and features three objects per scene. The whole dataset is used for testing. YCBV contains 92 scenes. The 12 test scenes consist of 3 to 6 objects from the YCB object set~\cite{Calli2015}. The test set defined in \cite{BOP} is used for our evaluation. \textbf{Metrics:} The procedure defined for the BOP 2019 challenge \cite{BOP} is used for evaluation. This considers three different error functions, namely, the Maximum Symmetry-Aware Projection Distance (MSPD), the Maximum Symmetry-Aware Surface Distance (MSSD) and the Visible Surface Discrepancy (VSD). The reported values per error function are the average recall rates over 10 thresholds in percent. The overall performance score (AR) is the average recall rate over all sub-scores. On xAPC, we additionally report the average rotation and translation errors for comparison with \cite{Mitash2018}. \textbf{Baselines:} Mitash et al.~\cite{Mitash2018} (PHYSIM-MCTS) evaluate on the xAPC dataset. For comparability, we use the code provided by the authors to generate bounding boxes, a pool of 25 hypotheses per object and the results reported for their method. A maximum of 150 TrICP iterations is used for evaluation of all approaches. Note that, for PHYSIM-MCTS, we only count the refinement iterations in the expansion step to ensure a fair comparison. The best-performing methods on LM are the PPF-based methods by Vidal et al. \cite{Vidal2018} and Drost et al. \cite{Drost2010}. As neither provides code, we use the code of a comparable PPF-based method by Alexandrov et al.~\cite{Alexandrov2019} to produce a pool of hypotheses. 
We train Mask R-CNN \cite{He2017} to provide detections and segmentation masks. In addition, we evaluate the RGB-D-based method DenseFusion~\cite{Wang2019}. It features a fast inference time and a learning-based refinement method. Precomputed detections and segmentation masks by~\cite{Xiang2017} are used. A pool of object pose estimates is generated using the provided code and weights. The hypotheses pool consists of the highest-confidence per-pixel estimate and additional uniformly sampled random estimates. We set the parameters for the verification score in Equation~\eqref{eq:score} to $\tau=20$mm and $\alpha=45$deg on all datasets. PyBullet~\cite{pybullet} is used as physics simulator with a time-step of 1/60sec, 10 solver iterations, 4 sub-steps and assuming an equal mass of 1kg for all objects. 3D plane segmentation is employed to determine a supporting plane and the normal is used to compute the gravity direction. The generality of our approach is shown by applying it to three baseline iterative refinement approaches, namely, TrICP, point-to-point ICP and DF-R. TrICP uses the implementation in PCL \cite{pcl} with the same settings as \cite{Mitash2018}. The simulation uses 60 steps in this case. We use the basic point-to-point ICP implementation from PCL with 50 iterations. For our approaches, we distribute the ICP iterations evenly over 5 PIR iterations. DF-R uses the weights provided by~\cite{Wang2019}, trained to use 2 iterations. They are distributed over 2 PIR iterations. As ICP and DF-R are more sensitive to interference with the iterative refinement procedure, only 3 simulation steps are used. \subsection{Ablation Study} The following ablations aim to motivate several design choices. The experiments start with the ground-truth annotations of the LM dataset as initial estimates and introduce errors of increasing magnitude. For the ablation, the ground-truth ground plane is used for physics simulation. Two types of errors are applied. 
(1) Rotation error is created by sampling a rotation axis uniformly at random from the unit sphere and rotating the ground-truth estimate by a varying angle about this axis. (2) Translation error is introduced by offsetting the ground truth by a translation vector that is sampled from the unit sphere, scaled by a varying distance. \begin{figure}[!t] \centering \vspace{1ex} \includegraphics[width=\linewidth]{figures/ablation.png} \caption{Ablations on LM using single hypotheses (top) and 5 hypotheses (bottom). EVEN and EXPL use our verification score to determine the best estimate and PIR for refinement. PhysBefore and PhysAfter apply simulation before and after refinement. AR values at 5mm/deg steps are reported and interpolated in between.} \label{fig:ablation} \end{figure} \subsubsection{Physics Simulation and Iterative Refinement} As shown in Figure \ref{fig:ablation} (top), our interleaved approach to combine physics simulation with refinement (PIR) is consistently the best-performing simulation approach under rotation error. For translation error, it is limited as it only considers the rotation part from simulation to contain divergence. The benefit of using only rotation is illustrated by comparison with applying full simulation after refinement (PhysAfter) as used in~\cite{Mitash2018}. Rotation error in the initial estimate causes this approach to diverge and perform even worse than the baseline method (DF-R) without physics simulation. Figure \ref{fig:ablation} (top) also shows the benefit of supervising the refinement process. Our approach (SIR) consistently improves the accuracy of pose estimates, most notably under translation error. \subsubsection{Regret Minimization} There are two major approaches to deal with multiple hypotheses. The first is to score all initial hypotheses, exploiting only the best-scoring hypothesis for refinement (EXPL). The second approach is to refine all hypotheses evenly and select the best-scoring refined hypothesis (EVEN). 
As shown in Figure \ref{fig:ablation} (bottom), EVEN performs well for low error magnitudes while EXPL is robust to high error magnitudes. Our regret-minimizing approach (RIR) balances between these two extremes and is thus able to outperform both alternatives. Moreover, a comparison with SIR shows the benefit of considering multiple hypotheses. \subsection{Robustness Analysis} To highlight the robustness of our approach, we perform experiments with missing depth values, considering two types of errors. (1) Occlusion is simulated by removing rectangular patches centered at positions on the observed object sampled uniformly at random. (2) Missing object parts in the depth channel, e.g., due to reflective material, are simulated by removing depth values that correspond to the object above a certain height. Pose error is introduced similarly to the ablation study but kept fixed at 5mm for translation and 5deg for rotation. The depth error increases from 0 to 90\%. \begin{figure}[!t] \centering \vspace{1ex} \includegraphics[width=\linewidth]{figures/robustness.png} \caption{Robustness study on LM using single hypotheses with a fixed error magnitude of 5mm and 5deg. AR values are measured every 10\% and interpolated in between.} \label{fig:robustness} \end{figure} As shown in Figure \ref{fig:robustness}, our approach increases robustness to both types of error in comparison to the baseline. This indicates that the remaining depth information, together with physics simulation, limits the degradation of performance.
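For concreteness, the error-injection procedures used in the ablation and robustness studies can be sketched as follows. This is a minimal NumPy sketch under our reading of the text; the function names are ours, not from the paper's code.

```python
import numpy as np

def random_unit_vector(rng):
    """Direction sampled uniformly at random from the unit sphere."""
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def axis_angle_matrix(axis, angle_rad):
    """Rotation matrix about `axis` by `angle_rad` (Rodrigues' formula)."""
    x, y, z = axis
    K = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1.0 - np.cos(angle_rad)) * (K @ K)

def perturb_pose(R_gt, t_gt, rot_deg, trans_dist, rng):
    """Ablation-style errors: rotate about a random axis by rot_deg,
    offset along a random direction scaled by trans_dist."""
    R_err = axis_angle_matrix(random_unit_vector(rng), np.deg2rad(rot_deg))
    return R_err @ R_gt, t_gt + trans_dist * random_unit_vector(rng)

def occlude_depth(depth, center, patch_hw):
    """Robustness-style occlusion: zero out a rectangular depth patch."""
    cy, cx = center
    h, w = patch_hw
    out = depth.copy()
    out[max(cy - h // 2, 0):cy + h // 2, max(cx - w // 2, 0):cx + w // 2] = 0.0
    return out
```

Sampling the axis and offset direction from a normalized Gaussian gives the uniform distribution on the sphere that the error model calls for.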
\subsection{Comparison to State of the Art} \subsubsection{Single-Object Scenario} \begin{table}[!t] \scriptsize \centering \caption{Comparison using DF-R\cite{Wang2019} and ICP\cite{Besl1992} on \texttt{LINEMOD}.} \begin{tabular}{l||c|c|c|c||c|c} DF & VSD & MSPD & MSSD & AR & T[ms] & \#ref/obj \\ \hline\hline \cite{Wang2019} & 70.6 & 76.8 & 77.7 & 75.0 & 2 & 2 \\ \hline PIR & 73.3 & 79.3 & 80.1 & 77.6 & 4 & 2 \\ \hline SIR & 74.0 & 85.9 & 86.4 & 82.1 & 14 & 2 \\ \hline\hline \cite{Wang2019} & 76.9 & 82.6 & 82.9 & 80.8 & 11 & 10 \\ \hline RIR & \textbf{78.3} & \textbf{89.7} & \textbf{89.6} & \textbf{85.9} & 48 & 10 \\ \\ % PPF & VSD & MSPD & MSSD & AR & T[ms] & \#ref/obj \\ \hline\hline \cite{Besl1992} & 79.8 & 93.2 & 93.0 & 88.7 & 248 & 50 \\ \hline PIR & 78.1 & 92.1 & 92.1 & 87.4 & 274 & 50 \\ \hline SIR & 79.9 & 93.7 & 93.2 & 88.9 & 302 & 50 \\ \hline\hline \cite{Besl1992} & 80.0 & 93.4 & 93.2 & 88.9 & 617 & 150 \\ \hline RIR & \textbf{81.0} & \textbf{95.1} & \textbf{94.5} & \textbf{90.2} & 892 & 150 \\ \end{tabular} \label{tab:results_lm} \end{table} The single-object scenario is evaluated on the LM dataset using DF and PPF as object pose estimators and DF-R and ICP as refinement methods. The refiners are run for the same number of iterations for comparison with RIR. Results are shown in Table~\ref{tab:results_lm}. The performance of PIR indicates that physics simulation is beneficial given less accurate initial estimates using DF as compared to PPF. This agrees with our hypothesis that simulation improves implausible initial estimates while being vulnerable to divergence in inaccurate simulation environments. In both conditions, the biggest relative improvement is achieved by SIR, improving over DF-R by 7.1\% AR. As indicated by the results using PPF, SIR is able to limit the divergence of the physics simulation observed for PIR. 
The top-performing approach in both conditions is RIR, improving over the baselines using the same number of refinement iterations by 5.1\% and 1.3\% AR, respectively. Regarding runtime, we observe that applying physics simulation adds only about 1ms per simulation to the per-frame runtime. Note that, using DF-R as the refinement method, SIR and RIR still achieve 71fps and 21fps. \subsubsection{Multi-Object Scenario} An evaluation on the YCBV and xAPC datasets highlights the performance in multi-object scenarios. For comparison with RIR, VF$_b$ and VF$_d$, the baseline DF-R is also run for the same number of iterations. The results on YCBV are shown in Table~\ref{tab:results_ycbv}. The supervision through SIR is again the biggest source of relative improvement, with an increase of 2.6\% AR over DF-R. The increased number of refinement iterations decreases the performance of DF-R. This could be due to the confidence score of DF suggesting a sub-optimal initial estimate for exploitation, or due to divergence of the refinement method itself. In either case, RIR does not exhibit divergent behavior and outperforms the baseline method by 6.5\% given the same number of refinement iterations. Table \ref{tab:results_exapc} shows that on xAPC, RIR improves over the approach by Mitash et al.~\cite{Mitash2018} on the VSD and MSPD metrics by 3.3\% and 0.4\% and significantly speeds up the runtime. The results on both datasets show that our scene-level approaches successfully deal with the occlusion and support relationships in multi-object scenarios. Both VF$_b$ and VF$_d$ outperform~\cite{Mitash2018} by a significant margin of 2.3\% and 5.0\% AR, respectively, with VF$_d$ performing best overall. All our approaches are approximately five times faster, with TrICP accounting for 5s per frame.
This highlights the benefit of the initialization of the solutions, the efficient search space formulation and our GPU-based computation of the verification score. As YCBV contains highly cluttered scenes that introduce occlusion and features few support relationships, the relative increases over RIR are less pronounced with 0.1\% and 0.3\%. Overall, our scene-level approaches perform best on YCBV with VF$_b$ achieving an increase of 6.8\% over DF-R given the same number of iterations. \begin{table}[!t] \scriptsize \centering \vspace{1ex} \caption{Comparison with Mitash et al. \cite{Mitash2018} using Trimmed ICP \cite{pcl} on \texttt{Extended APC} with 150 iterations each.} \begin{tabular}{l||c|c|c|c||c|c||c} PCS & VSD & MSPD & MSSD & AR & $\Bar{r}$ [deg] & $\Bar{t}$ [cm] & T[s] \\ \hline \hline \cite{Mitash2018} & 48.5 & 51.6 & 68.3 & 56.2 & \textbf{5.7} & 1.3 & 29.9 \\ \hline RIR & 51.8 & 52.0 & 63.0 & 55.6 & 10.5 & 1.4 & 5.5 \\ \hline VF$_b$ & 54.4 & 54.3 & 66.7 & 58.5 & 8.0 & \textbf{1.2} & 5.5 \\ \hline VF$_d$ & \textbf{56.7} & \textbf{57.3} & \textbf{69.6} & \textbf{61.2} & 7.5 & \textbf{1.2} & 6.2 \\ \end{tabular} \label{tab:results_exapc} \end{table} \begin{table}[!t] \scriptsize \centering \caption{Comparison using DF-R\cite{Wang2019} on \texttt{YCB-VIDEO}.} \begin{tabular}{l||c|c|c|c||c|c} DF & VSD & MSPD & MSSD & AR & T[ms] & \#ref/obj \\ \hline \hline \cite{Wang2019} & 74.2 & 69.9 & 77.6 & 73.9 & 17 & 2 \\ \hline PIR & 74.9 & 70.8 & 78.2 & 74.7 & 20 & 2 \\ \hline SIR & 76.5 & 72.9 & 80.2 & 76.5 & 49 & 2 \\ \hline\hline \cite{Wang2019} & 71.2 & 66.3 & 75.6 & 71.0 & 71 & 10 \\ \hline RIR & 77.9 & 73.9 & 80.6 & 77.5 & 228 & 10 \\ \hline VF$_b$ & 78.3 & 73.8 & 80.6 & 77.6 & 495 & 10 \\ \hline VF$_d$ & \textbf{78.5} & \textbf{74.1} & \textbf{80.9} & \textbf{77.8} & 521 & 10 \\ \end{tabular} \label{tab:results_ycbv} \end{table} \subsection{Robotic Grasping Experiment} Our work is motivated by the performance deterioration of object detection and pose 
estimation methods when deployed on robots \cite{Loghmani2018,Ammirato2017}. To evaluate whether the proposed approach reduces this problem, we test it in a grasping experiment using a Toyota HSR and YCBV objects. Reproducible experimental conditions are ensured by using the GRASPA scene layouts~\cite{Bottarel2020} to place 5 objects as shown in Figure \ref{fig:teaser}. 10 grasps are attempted per object -- 5 for a given pose and an additional 5 after rotation to a symmetric pose. Multiple grasp poses are annotated by hand for each object as shown in Figure~\ref{fig:grasping} (mid). In each experiment, Mask R-CNN~\cite{He2017} is executed to detect objects and to provide instance segmentation masks. The evaluated methods are queried to compute an object pose estimate from this information and the RGB-D image. Using this pose estimate, the annotated grasp poses are transformed to the scene and then checked for collision with the octomap. Trajectories for all collision-free grasps are planned using MoveIt \cite{Sucan2013}. If at least one plan is found, this is counted as a \textit{found} grasp. A grasp is considered a \textit{success}ful grasp if the plan can be executed, i.e., the object is grasped and remains stable in the robot's gripper. As shown in Table \ref{tab:results_grasping}, our proposed approach generates object pose estimates that result in more successful and reliable grasps. The most striking improvements are achieved on the ``061\_\textit{foam}\_brick'' and ``011\_\textit{banana}'' objects. Due to their proximity to other objects, object poses must be accurate to allow collision-free grasps. The \textit{banana} is the most difficult object, owing to inaccurate instance segmentation and its low height.
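A minimal sketch of the grasp-retrieval step described above (pose composition only; collision checking against the octomap and MoveIt planning are omitted, and the 4x4 homogeneous-matrix convention is our assumption, not stated in the text):

```python
import numpy as np

def pose_matrix(R, t):
    """4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def grasps_in_scene(T_obj, grasps_in_obj_frame):
    """Map hand-annotated grasp poses (object frame) into the scene
    using the estimated object pose T_obj."""
    return [T_obj @ G for G in grasps_in_obj_frame]
```

Each returned pose would then be collision-checked and passed to the motion planner.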
\begin{table}[!t] \scriptsize \centering \vspace{1ex} \caption{Results of grasping experiments in percentage of \textit{found} collision-free grasp poses and \textit{success}ful grasps.} \setlength{\tabcolsep}{4pt} \begin{tabular}{l||c|c|c|c|c||c|c|c} DF & mustard & spam & foam & jello & banana & success & found & \#ref/obj \\ \hline \hline \cite{Wang2019} & 10 & 3 & 1 & 7 & 0 & 42\% & 46\% & 2 \\ \hline SIR & 9 & 7 & 2 & 7 & 0 & 50\% & 70\% & 2 \\ \hline\hline \cite{Wang2019} & 10 & 6 & 5 & 9 & 1 & 62\% & 70\% & 10 \\ \hline \cite{Mitash2018} & 9 & 10 & 2 & 6 & 0 & 54\% & 78\% & 10 \\ \hline RIR & \textbf{10} & \textbf{10} & \textbf{9} & \textbf{10} & \textbf{4} & \textbf{86\%} & \textbf{90\%} & 10 \\ \hline \end{tabular} \label{tab:results_grasping} \end{table} \begin{figure}[!t] \vspace{1.0ex} \centering \includegraphics[trim=285 170 115 100, clip,width=0.3\linewidth]{figures/teaser_render.png} \includegraphics[trim=40 340 20 212, clip,width=0.3\linewidth]{figures/annotate_grasps.png} \includegraphics[trim=0 7 0 0, clip,width=0.3\linewidth]{figures/mustard_grasp.png} \caption{Refined estimates using RIR (left), retrieved annotated grasps (mid) and successful grasp attempt (right).} \label{fig:grasping} \end{figure} \section{Conclusion}\label{sec:conclusion} This work presented an approach for the tight integration of hypotheses verification, refinement and physics simulation for object pose estimation. The rendering-based hypotheses verification and the proposed physics-guided extension to iterative refinement methods benefit from this integration by allowing them to share useful information. The comparison with state-of-the-art methods and a robotic grasping experiment show that our integrated approach creates more accurate and more reliable object pose estimates. Furthermore, we are able to increase performance over related work while significantly reducing the runtime. An open issue for robot systems is the presence of a-priori unknown objects. 
With interactions between known and unknown objects, the results of simulation will diverge from the true object pose. In these cases, incorporating shape estimation would enable unknown objects to be considered in simulation. Moreover, the use of physics simulation requires fixed structures on which objects can rest and an estimate for the gravity vector. For robotic applications, static objects with non-planar surfaces in the robot's environment map could be considered as supporting structures. An IMU could be used to determine the gravity direction to become robust to tilted or non-planar support.
\section{Introduction} \label{sec:introduction} Point cloud registration is an important task in computer vision that aims to find a rigid body transformation aligning one 3D point cloud (the source) to another (the target). It has a variety of applications in computer vision, augmented reality and virtual reality, such as pose estimation and 3D reconstruction. The most widely used traditional registration method is Iterative Closest Point (ICP) \cite{icp}, which is only suitable for estimating small rigid transformations. However, in many real-world applications, this assumption does not hold. The task of registering two point clouds with large rotation and translation is called global registration. Some global registration methods \cite{fgr,goicp} have been proposed to overcome this limitation of ICP, but they are usually very slow compared to ICP. In recent years, deep learning models have dominated the field of computer vision \cite{deeplearning,alexnet,inception,resnet,resnet2}, and many computer vision tasks have been shown to be solved better by data-driven methods based on neural networks. Recently, several learning-based neural network methods for point cloud registration have been proposed \cite{pointnetlk,dcp,prnet}. They are capable of dealing with large rotation angles and are typically much faster than traditional global registration methods. However, they still have major drawbacks; for example, DCP \cite{dcp} assumes that all the points in the source point cloud have correspondences in the target point cloud. Although promising, learning-based point cloud registration methods are far from perfect. In this paper, we propose the \textbf{Iterative Distance-Aware Similarity Matrix Convolution Network (IDAM)}, a novel learnable pipeline for accurate and efficient point cloud registration. The intuition behind IDAM is that while many registration methods use local geometric features for point matching, ICP uses the distance as the only criterion for matching.
We argue that incorporating both geometric and distance features into the iterative matching process can resolve ambiguity and achieve better performance than using either alone. Moreover, point matching involves computing a similarity score, which is usually obtained from the inner product or $L2$ distance between feature vectors. This simple matching method does not take into consideration the interaction of features of different point pairs. We propose to use a learned module to compute the similarity score based on the entire concatenated features of the two points of interest. These two intuitions can be realized using a single learnable \textbf{similarity matrix convolution} module that accepts pairwise inputs in both the feature and Euclidean space. Another major problem for global registration methods is efficiency. To reduce computational complexity, we propose a novel \textbf{two-stage point elimination} technique to keep a balance between performance and efficiency. The first point elimination step, \textbf{hard point elimination}, independently filters out the majority of individual points that are unlikely to be matched with confidence. The second step, \textbf{hybrid point elimination}, eliminates correspondence pairs instead of individual points. It assigns low weights to pairs that are likely to be false positives while solving the absolute orientation problem. We design a novel \textbf{mutual-supervision loss} to train these learned point elimination modules. This loss allows the model to be trained end-to-end without extra annotations of keypoints. The two-stage elimination process makes our method significantly faster than current state-of-the-art global registration methods. Our learned registration pipeline is compatible with both learning-based and traditional point cloud feature extraction methods.
We show by experiments that our method performs well with both FPFH \cite{fpfh} and Graph Neural Network (GNN) \cite{dgcnn,rscnn,gnn} features. We compare our model to other point cloud registration methods, showing that the power of learning is not restricted to feature extraction but is also critical for the registration process. \section{Related Work} \label{sec:related-work} \subsubsection{Local Registration} \label{sec:local-related} The most widely used traditional local registration method is Iterative Closest Point (ICP) \cite{icp}. It finds for each point in the source the closest neighbor in the target as the correspondence. Trimmed ICP \cite{trimmedicp} extends ICP to handle partially overlapping point clouds. Other methods \cite{efficienticp,generalizedicp,sparseicp} are mostly variants of the vanilla ICP. \subsubsection{Global Registration} \label{sec:global-related} The most important non-learning global registration method is RANSAC \cite{ransac}, usually combined with FPFH \cite{fpfh} or SHOT \cite{shot} feature extraction. However, RANSAC is very slow compared to ICP. Fast Global Registration (FGR) \cite{fgr} uses FPFH features and an alternating optimization technique to speed up global registration. Go-ICP \cite{goicp} adopts a brute-force branch-and-bound strategy to find the rigid transformation. There are also other methods \cite{convexrelaxation,integerprogramming,extremeoutlier,sdrsac} that utilize a variety of optimization techniques. \subsubsection{Data-driven Registration} \label{sec:data-related} PointNetLK \cite{pointnetlk} pioneers the recent learning-based registration methods. It adapts PointNet \cite{pointnet} and the Lucas \& Kanade \cite{lk} algorithm into a single trainable recurrent deep neural network. Deep Closest Point (DCP) \cite{dcp} proposes to use a transformer network based on DGCNN \cite{dgcnn} to extract features, and trains the network end-to-end by back-propagating through the SVD layer.
PRNet \cite{prnet} extends DCP to an iterative pipeline and handles partially overlapping point cloud registration. \subsubsection{Learning on Point Cloud} \label{sec:pointcloud-related} Recently, a large body of research has applied deep learning techniques to learning on point clouds. Volumetric methods \cite{voxnet,voxelnet} apply discrete 3D convolution on the voxel representation. OctNet \cite{octnet} and O-CNN \cite{ocnn} design efficient high-resolution 3D convolutions that exploit the sparsity of point clouds. Other methods \cite{pointconv,pointcnn,kpconv,interpconv} directly define convolution in the continuous Euclidean space, or convert the point clouds to a new space that admits convolution-like operations \cite{spectralcnn,splatnet}. Contrary to the effort of adapting convolution to point clouds, PointNet \cite{pointnet} and PointNet++ \cite{pointnet++}, which use simple permutation-invariant pooling operations to aggregate information from individual points, are widely used due to their simplicity. \cite{dgcnn,rscnn} view point clouds as graphs with neighboring points connected to each other, and apply graph neural networks (GNN) \cite{gnn} to extract features. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{images/model.pdf} \caption{The overall architecture of the IDAM registration pipeline. Details of hard point elimination and hybrid point elimination are demonstrated in \reffig{elimination}.} \label{fig:model} \end{figure} \section{Model} \label{sec:model} This section describes the proposed IDAM point cloud registration model. The diagram of the whole pipeline is shown in \reffig{model}. The details of each component are explained in the following sections.
\subsection{Notation} \label{sec:notation} Here we introduce some notation that will be used throughout the paper. The problem of interest is that, for a given source point cloud $\mathcal{S}$ of $N_\mathcal{S}$ points and a target point cloud $\mathcal{T}$ of $N_\mathcal{T}$ points, we need to find the ground truth rigid body transformation $(\mathbf{R}^*, \mathbf{t}^*)$ that aligns $\mathcal{S}$ to $\mathcal{T}$. Let $\mathbf{p}_i\in\mathcal{S}$ denote the $i$th point in the source, and $\mathbf{q}_j\in\mathcal{T}$ the $j$th point in the target. \subsection{Similarity Matrix Convolution} \label{sec:smc} To find the rigid body transformation $\mathbf{R}^*$ and $\mathbf{t}^*$, we need to find a set of point correspondences between the source and target point clouds. Most existing methods achieve this by using the inner product (or $L2$ distance) of the point features as a measure of similarity and directly picking the pairs with the highest (or lowest, for $L2$) response. However, this has two shortcomings. First, one point in $\mathcal{S}$ may have multiple possible correspondences in $\mathcal{T}$, and one-shot matching is not ideal since the points chosen as correspondences may not be the correct ones due to randomness. Inspired by ICP, we argue that incorporating distance information between points into an iterative matching process can alleviate this problem, since after an initial registration, correct point correspondences are more likely to be close to each other.
The second drawback of direct feature similarity computation is that it has limited power in identifying the similarity between two points, because the way of matching is the same for all pairs. Instead, we use a learned network that accepts the whole feature vectors and outputs the similarity scores. This way, the network takes into consideration the combination of features from the two points in a pair during matching. Based on the above intuition, we propose \textbf{distance-aware similarity matrix convolution} for finding point correspondences. Suppose we have the geometric features $\mathbf{u}^\mathcal{S}(i)$ for $\mathbf{p}_i\in\mathcal{S}$ and $\mathbf{u}^\mathcal{T}(j)$ for $\mathbf{q}_j\in\mathcal{T}$, both with dimension $K$. We form the \textbf{distance-augmented feature tensor} at iteration $n$ as \begin{align} \mathbf{T}^{(n)}(i,j) = [\mathbf{u}^\mathcal{S}(i); \mathbf{u}^\mathcal{T}(j); \norm{\mathbf{p}_i-\mathbf{q}_j}; \frac{\mathbf{p}_i-\mathbf{q}_j}{\norm{\mathbf{p}_i-\mathbf{q}_j}}] \end{align} where $[\cdot;\cdot]$ denotes concatenation. The $(2K+4)$-dimensional vector at the $(i,j)$ location of $\mathbf{T}^{(n)}$ is a combination of the geometric and Euclidean features for the point pair $(\mathbf{p}_i,\mathbf{q}_j)$. The $4$-dimensional Euclidean features comprise the distance between $\mathbf{p}_i$ and $\mathbf{q}_j$, and the unit vector pointing from $\mathbf{q}_j$ to $\mathbf{p}_i$. Each augmented feature vector in $\mathbf{T}^{(n)}$ encodes the joint information of the local shapes of the two points and their current relative position, which are useful for computing similarity scores at each iteration.
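Assuming per-point features are already given, building the distance-augmented feature tensor can be sketched in NumPy as follows (a real implementation would run batched on GPU; the function name is ours):

```python
import numpy as np

def distance_augmented_tensor(src_pts, tgt_pts, src_feat, tgt_feat, eps=1e-9):
    """Return the (N_s, N_t, 2K+4) tensor
    [u_S(i); u_T(j); ||p_i - q_j||; (p_i - q_j) / ||p_i - q_j||]."""
    diff = src_pts[:, None, :] - tgt_pts[None, :, :]      # (N_s, N_t, 3)
    dist = np.linalg.norm(diff, axis=-1, keepdims=True)   # (N_s, N_t, 1)
    unit = diff / np.maximum(dist, eps)                   # unit vector q_j -> p_i
    Ns, Nt = len(src_pts), len(tgt_pts)
    fs = np.broadcast_to(src_feat[:, None, :], (Ns, Nt, src_feat.shape[1]))
    ft = np.broadcast_to(tgt_feat[None, :, :], (Ns, Nt, tgt_feat.shape[1]))
    return np.concatenate([fs, ft, dist, unit], axis=-1)
```

The tensor is recomputed at every iteration because the distance channels change after each intermediate transformation of the source.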
The distance-augmented feature tensor $\mathbf{T}^{(n)}$ can be seen as a $(2K+4)$-channel 2D image. To extract a similarity score for each point pair, we apply a series of $1\times 1$ 2D convolution on $\mathbf{T}^{(n)}$ that outputs a single channel image of the same spatial size at the last layer. This is equivalent to applying a multi-layer perceptron on the augmented feature vector at each position. Then we apply a Softmax function on each row of the single channel image to get the \textbf{similarity matrix}, denoted as $\mathbf{S}^{(n)}$. $\mathbf{S}^{(n)}(i,j)$ represents the ``similarity score'' (the higher the more similar) for $\mathbf{p}_i$ and $\mathbf{q}_j$. Each row of $\mathbf{S}^{(n)}$ defines a normalized probability distribution over all the points in $\mathcal{T}$ for some $\mathbf{p}\in\mathcal{S}$. As a result, $\mathbf{S}^{(n)}(i,j)$ can also be interpreted as the probability of $\mathbf{q}_j$ being the correspondence of $\mathbf{p}_i$. The $1\times 1$ convolutions learn their weights using the \textbf{point matching loss} described in \refsec{matching-loss}. They learn to take into account the interaction between the shape and distance information to output a more accurate similarity score compared to simple inner product. To find the correspondence pairs, we take the argmax of each row of $\mathbf{S}^{(n)}$. 
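The per-pair scoring, row-wise normalization and argmax selection can be sketched as follows; here `pair_mlp` is a stand-in for the learned stack of $1\times 1$ convolutions (applying a shared MLP at every $(i,j)$ location is equivalent to $1\times 1$ convolution):

```python
import numpy as np

def row_softmax(x):
    """Softmax over each row, shifted for numerical stability."""
    z = x - x.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def match(aug_tensor, pair_mlp):
    """Score every (i, j) pair with a shared function, normalize each row
    with softmax to get S, and take the row argmax as the correspondence."""
    logits = pair_mlp(aug_tensor)   # (N_s, N_t), one score per pair
    S = row_softmax(logits)         # row i: distribution over target points
    return S, S.argmax(axis=1)      # similarity matrix, correspondence indices
```

In the test below, a toy scorer that negates the distance channel reduces the procedure to nearest-neighbor matching, as the text's ICP analogy suggests.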
This yields a set of correspondence pairs $\set{(\mathbf{p}_i, \mathbf{p}^\prime_i)}{\forall \mathbf{p}_i\in\mathcal{S}}$, with which we solve the following optimization problem to find the estimated rigid transformation $(\mathbf{R}^{(n)}, \mathbf{t}^{(n)})$ \begin{align} \label{eq:objective-no-weight} \mathbf{R}^{(n)}, \mathbf{t}^{(n)} = \argmin_{\mathbf{R}, \mathbf{t}} \sum_i \norm{\mathbf{R}\mathbf{p}_i+\mathbf{t}-\mathbf{p}^\prime_i}^2 \end{align} This is a classical absolute orientation problem \cite{horn87}, which can be efficiently solved with the orthogonal Procrustes algorithm \cite{golubmatrix,svd} using Singular Value Decomposition (SVD). $\mathbf{R}^{(n)}$ and $\mathbf{t}^{(n)}$ are then used to transform the source point cloud to a new position before entering the next iteration. The final estimate for $(\mathbf{R}^*, \mathbf{t}^*)$ is the composition of the intermediate $(\mathbf{R}^{(n)}, \mathbf{t}^{(n)})$ over all iterations. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{images/elimination.pdf} \caption{Comparison of hard point elimination and hybrid point elimination. Hard point elimination filters points based on the features extracted independently for each point, while hybrid point elimination utilizes the joint information of the point pairs to compute weights for the orthogonal Procrustes algorithm.} \label{fig:elimination} \end{figure} \subsection{Two-stage Point Elimination} \label{sec:elimination} Although similarity matrix convolution is powerful in terms of matching, it is computationally expensive to apply convolution on the large $N_\mathcal{S}\times N_\mathcal{T}\times(2K+4)$ tensor, because $N_\mathcal{S}$ and $N_\mathcal{T}$ are typically more than a thousand. However, randomly down-sampling the point clouds would degrade the performance drastically, since many points would no longer have correspondences. To tackle this dilemma, we propose a \textbf{two-stage point elimination} process.
It consists of \textbf{hard point elimination} and \textbf{hybrid point elimination} (\reffig{elimination}), which target efficiency and accuracy, respectively. Since manually labelling keypoints for point clouds is not practical, we propose a \textbf{mutual-supervision loss} that uses the information in the similarity matrices $\mathbf{S}^{(n)}$ to supervise the point elimination modules. The details of the mutual-supervision loss are described in \refsec{loss}. In this section, we present the point elimination process for inference. \subsubsection{Hard Point Elimination} \label{sec:hard-elimination} To reduce the computational burden of similarity matrix convolution, we first propose \textbf{hard point elimination} (\reffig{elimination} Left). Given the extracted local shape features for each point, we apply a multi-layer perceptron on the feature vector and output a \textbf{significance score}. A high score indicates a more prominent point, such as a corner point, that can be matched with high confidence later (see the Appendix for visualization). This step filters out points in ``flat'' regions that are ambiguous during matching. It operates on individual points and does not take into account the point pair information as in similarity matrix convolution; as a result, the significance score is efficient to compute. We preserve the $M$ points of each point cloud with the highest significance scores and eliminate the remaining points. In our network, we choose $M=\ceil{\frac{N}{6}}$, where $N$ can be $N_\mathcal{S}$ or $N_\mathcal{T}$. Denote the set of points in $\mathcal{S}$ preserved by hard point elimination as $\mathcal{B}_\mathcal{S}$, and that for the target as $\mathcal{B}_\mathcal{T}$.
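Given per-point significance scores, hard point elimination reduces to a top-$M$ selection. A minimal sketch, with the learned scoring MLP passed in as a function:

```python
import numpy as np

def hard_point_elimination(points, features, significance_fn):
    """Keep the M = ceil(N / 6) points with the highest significance scores."""
    scores = significance_fn(features)   # (N,) one significance score per point
    M = int(np.ceil(len(points) / 6))
    keep = np.argsort(-scores)[:M]       # indices of the top-M points
    return points[keep], features[keep], keep
```

The returned indices let later stages map the surviving points back to the original clouds.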
\subsubsection{Hybrid Point Elimination} \label{sec:hybrid-elimination} While hard point elimination improves the efficiency significantly, it has a negative effect on the performance of the model. The correct corresponding point in the target point cloud for a point in the source point cloud may be mistakenly eliminated in hard elimination, in which case similarity matrix convolution will never be able to find the correct correspondence. Since we always pick the correspondence with the maximal similarity score for every point in the source, these ``negative'' point pairs can make the rigid body transformation obtained by solving \refeq{objective-no-weight} inaccurate. This problem is especially severe when the model is dealing with two point clouds that only partially overlap; in this case, even without any elimination, some points have no correspondence whatsoever. To alleviate this problem, we propose a \textbf{hybrid point elimination} (\reffig{elimination} Right) process, applied after similarity matrix convolution. Hybrid point elimination is a mixture of hard and soft elimination, and operates on point pairs instead of individual points. It uses a permutation-invariant pooling operation to aggregate information across all possible correspondences for a given point in the source, and outputs a \textbf{validity score}, for which a higher value means a higher probability of having a true correspondence. Formally, let $\mathbf{F}$ be the intermediate output (see \reffig{model}) of the similarity matrix convolution of shape $M\times M\times K^\prime$.
Hybrid point elimination first computes the validity score \begin{align} v(i) = \sigma(f(\bigoplus_{j}(\mathbf{F}(i,j)))) \end{align} where $\sigma(\cdot)$ is the sigmoid function, $\bigoplus$ is an element-wise permutation invariant pooling method, such as ``mean'' or ``max'', and $f$ is a multi-layer perceptron that takes the pooled features as input and outputs the scores. This permutation invariant pooling technique is used in a variety of point cloud processing \cite{pointnet,pointnet++} and graph neural network \cite{dgcnn,rscnn} models. Following \cite{pointnet,pointnet++}, we use element-wise max for $\bigoplus$. This way, we have a validity score for each point in the source, and thus for each point pair. It can be seen as the probability that a correspondence pair is correct. With this validity score, we then compute the \textbf{hybrid elimination weights}. The weight for the $i$th point pair is defined as \begin{align} w_i = \frac{v(i) \cdot \indicator{v(i) \geq \text{median}_{k}(v(k))}}{\sum_i v(i) \cdot \indicator{v(i) \geq \text{median}_{k}(v(k))}} \end{align} where $\indicator{\cdot}$ is the indicator function. This weighting process gives zero weight to the points with the lowest validity scores (hard elimination) and weighs the rest proportionally to their validity scores (soft elimination). With this elimination weight vector, we obtain $(\mathbf{R}^{(n)}, \mathbf{t}^{(n)})$ from a slightly different objective function than \refeq{objective-no-weight} \begin{align} \label{eq:objective} \mathbf{R}^{(n)}, \mathbf{t}^{(n)} = \argmin_{\mathbf{R}, \mathbf{t}} \sum_i w_i \norm{\mathbf{R}\mathbf{p}_i+\mathbf{t}-\mathbf{p}^\prime_i}^2 \end{align} This can still be solved using SVD with little overhead.
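The weighting and the weighted SVD solve can be sketched as follows; `weighted_procrustes` is a standard weighted Kabsch/orthogonal-Procrustes solution, written by us for illustration rather than taken from the authors' code:

```python
import numpy as np

def hybrid_weights(validity):
    """Zero weight below the median validity (hard part),
    proportional-to-validity weight above it (soft part)."""
    w = validity * (validity >= np.median(validity))
    return w / w.sum()

def weighted_procrustes(src, tgt, w):
    """Solve argmin_{R,t} sum_i w_i ||R p_i + t - q_i||^2 via SVD."""
    mu_s = w @ src                          # weighted centroids (sum(w) == 1)
    mu_t = w @ tgt
    H = (w[:, None] * (src - mu_s)).T @ (tgt - mu_t)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_t - R @ mu_s
```

Since the zeroed pairs simply drop out of the weighted cross-covariance, the extra cost over the unweighted solve is one element-wise product.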
Ideally, hybrid point elimination can eliminate point pairs that are incorrect due to noise and incompleteness, giving better estimates of $\mathbf{R}^{(n)}$ and $\mathbf{t}^{(n)}$ (see the Appendix for visualization). \subsection{Mutual-Supervision Loss} \label{sec:loss} In this section, we describe in detail the \textbf{mutual-supervision loss} that is used to train the network. With this loss, we can train the similarity matrix convolution, along with the two-stage elimination modules, without extra annotations of keypoints. The loss is the sum of three parts, which are explained in the following. Note that training on all the points during each forward-backward loop is inefficient and unnecessary. However, since hard point elimination does not yet function properly during training, we do not have direct access to $\mathcal{B}_\mathcal{S}$ and $\mathcal{B}_\mathcal{T}$ (see the definitions in \refsec{hard-elimination}). Therefore, we need some way to sample points from the source and the target for training. This sampling technique is described in \refsec{sampling}. In this section we take it as given, and abuse the notation $\mathcal{B}_\mathcal{S}$ for the \textbf{source sampled set} and $\mathcal{B}_\mathcal{T}$ for the \textbf{target sampled set}, which both contain the $M$ sampled points for training. Let $\mathbf{p}_i$ denote the $i$th point in $\mathcal{B}_\mathcal{S}$ and $\mathbf{q}_j$ the $j$th point in $\mathcal{B}_\mathcal{T}$. \subsubsection{Point Matching Loss} \label{sec:matching-loss} The point matching loss is used to supervise the similarity matrix convolution. It is a standard cross-entropy loss.
The point matching loss for the $n$th iteration is defined as \begin{align} \mathcal{L}^{(n)}_{\text{match}}(\mathcal{S}, \mathcal{T}, \mathbf{R}^*, \mathbf{t}^*)=\frac{1}{M}\sum_{i=1}^{M}-\log(\mathbf{S}^{(n)}(i, j^*))\cdot\indicator{\norm{\mathbf{R}^* \mathbf{p}_i+\mathbf{t}^*-\mathbf{q}_{j^*}}^2\leq r^2} \end{align} where \begin{align} j^*=\argmin_{1\leq j\leq M} \norm{\mathbf{R}^* \mathbf{p}_i+\mathbf{t}^*-\mathbf{q}_j}^2 \end{align} is the index of the point in the target sampled set $\mathcal{B}_\mathcal{T}$ that is closest to the $i$th point in the source sampled set $\mathcal{B}_\mathcal{S}$ under the ground truth transformation. Here $r$ is a hyper-parameter that controls the radius within which two points are considered close enough to be correspondences. If the distance between $\mathbf{p}_i$ and $\mathbf{q}_{j^*}$ is larger than $r$, they cannot be regarded as correspondences, and no supervision signal is applied to them. This happens frequently when the model is dealing with partially overlapping point clouds. The total point matching loss is the average over all the iterations. \subsubsection{Negative Entropy Loss} \label{sec:hard-elimination-loss} This loss is used for training hard point elimination. The problem for training hard point elimination is that we do not have direct access to annotations of keypoints. Therefore, we propose to use a \textbf{mutual supervision} technique, which uses the result of the point matching loss to supervise hard point elimination. This mutual supervision is based on the intuition that if a point $\mathbf{p}_i\in\mathcal{B}_\mathcal{S}$ is a prominent point (high significance score), the probability distribution defined by the $i$th row of $\mathbf{S}^{(n)}$ should have low entropy because it is confident in matching. On the other hand, the supervision on the similarity matrices has no direct relationship to hard point elimination.
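The point matching loss defined above can be sketched as follows (a NumPy sketch with our own function names, assuming each row of $\mathbf{S}^{(n)}$ has already been normalized to a probability distribution):

```python
import numpy as np

def point_matching_loss(S, P_src, Q_tgt, R_star, t_star, r):
    """Cross-entropy on the similarity matrix rows, applied only to source
    points whose ground-truth nearest target point lies within radius r."""
    P_aligned = P_src @ R_star.T + t_star                              # M x 3
    d2 = ((P_aligned[:, None, :] - Q_tgt[None, :, :]) ** 2).sum(-1)    # M x M
    j_star = d2.argmin(axis=1)                                         # closest targets
    rows = np.arange(len(P_src))
    valid = d2[rows, j_star] <= r ** 2                                 # indicator term
    nll = -np.log(S[rows, j_star] + 1e-12)
    return (nll * valid).mean()
```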
Therefore, the \textbf{negative entropy} of the probability distribution can be seen as a supervision signal for the significance scores. Mathematically, the \textbf{negative entropy loss} for the $n$th iteration can be defined as \begin{align} \mathcal{L}^{(n)}_{\text{hard}}(\mathcal{S}, \mathcal{T}, \mathbf{R}^*, \mathbf{t}^*)=\frac{1}{M}\sum_{i=1}^{M}|s(i)-\sum_{j=1}^{M}\mathbf{S}^{(n)}(i, j)\log(\mathbf{S}^{(n)}(i, j))|^2 \end{align} where $s(i)$ is the significance score for the $i$th point in $\mathcal{B}_\mathcal{S}$. Although this loss can be defined for any iteration, we only use the one for the first iteration, because in the early stages of registration the shape features are more important than the Euclidean features. We want the hard point elimination module to learn to filter points based only on shape information. We cut the gradient flow from the negative entropy loss to $\mathbf{S}^{(n)}$ to prevent interference with the training of similarity matrix convolution. \subsubsection{Hybrid Elimination Loss} \label{sec:hybrid-elimination-loss} A similar mutual supervision idea can also be used for training the hybrid point elimination. The difference is that hybrid elimination takes into account the point pair information, while hard point elimination only looks at individual points. As a result, the mutual supervision signal is much more obvious for hybrid point elimination. We simply use the probability that there exists a point in $\mathcal{B}_\mathcal{T}$ which is the correspondence of point $\mathbf{p}_i\in\mathcal{B}_\mathcal{S}$ as the supervision signal for $v_i$ (validity score).
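The negative entropy loss of the previous subsection admits an equally short sketch (our own naming; in an actual training loop the gradient to $\mathbf{S}^{(n)}$ would be cut, which is a no-op here since NumPy carries no autodiff):

```python
import numpy as np

def negative_entropy_loss(S, s):
    """Squared error between significance scores s and the negative entropy
    of each row of S; rows of S are assumed to be probability distributions."""
    neg_ent = (S * np.log(S + 1e-12)).sum(axis=1)   # sum_j S log S, in (-log M, 0]
    return ((s - neg_ent) ** 2).mean()
```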
Instead of computing the probability explicitly, the \textbf{hybrid elimination loss} for the $n$th iteration is defined as \begin{align} \mathcal{L}^{(n)}_{\text{hybrid}}(\mathcal{S}, \mathcal{T}, \mathbf{R}^*, \mathbf{t}^*)=\frac{1}{M}\sum_{i=1}^{M}-\mathbbm{I}_i\cdot \log( v_i)-(1-\mathbbm{I}_i)\cdot \log (1-v_i) \end{align} where \begin{align} \mathbbm{I}_i=\indicator{\norm{\mathbf{R}^*\mathbf{p}_i+\mathbf{t}^*-\mathbf{q}_{\argmax_j \mathbf{S}^{(n)}(i,j)}}^2\leq r^2} \end{align} In effect, this loss assigns a positive label $1$ to those points in $\mathcal{B}_\mathcal{S}$ that correctly find their correspondences, and a negative label $0$ to those that do not. In the long run, point pairs with a high probability of correct matching will have higher validity scores. \subsection{Balanced Sampling for Training} \label{sec:sampling} In this section, we describe a balanced sampling technique to sample points for training our network. We first sample $\ceil{\frac{M}{2}}$ points from $\mathcal{S}$ with the following unnormalized probability distribution \begin{align} p_{\text{pos}}(i)=\indicator{(\min_{\mathbf{q}\in\mathcal{T}} \norm{\mathbf{R}^*\mathbf{p}_i+\mathbf{t}^*-\mathbf{q}}^2)\leq r^2} + \epsilon \end{align} where $\epsilon=10^{-6}$ is a small constant. This sampling process aims to randomly sample ``positive'' points from $\mathcal{S}$, in the sense that they indeed have correspondences in the target. The $\epsilon$ is introduced to avoid errors in the degenerate case where no points in the source have correspondences in the target. Similarly, we sample $(M-\ceil{\frac{M}{2}})$ ``negative'' points from $\mathcal{S}$ using the unnormalized distribution \begin{align} p_{\text{neg}}(i)=\indicator{(\min_{\mathbf{q}\in\mathcal{T}} \norm{\mathbf{R}^*\mathbf{p}_i+\mathbf{t}^*-\mathbf{q}}^2)> r^2} + \epsilon \end{align} This way, we have a set $\mathcal{B}_\mathcal{S}$ of points of size $M$, with both positive and negative instances.
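The source-side half of this balanced sampling can be sketched as follows (a NumPy sketch with hypothetical names; `rng` is a `numpy.random.Generator`):

```python
import numpy as np

def balanced_source_sample(P_src, Q_tgt, R_star, t_star, r, M, rng):
    """Draw ceil(M/2) 'positive' source points (those with a correspondence
    within radius r under the ground-truth transformation) and the remaining
    'negative' ones, following the unnormalized distributions p_pos, p_neg."""
    eps = 1e-6
    P_aligned = P_src @ R_star.T + t_star
    d2_min = ((P_aligned[:, None, :] - Q_tgt[None, :, :]) ** 2).sum(-1).min(axis=1)
    has_corr = d2_min <= r ** 2
    p_pos = has_corr.astype(float) + eps
    p_neg = (~has_corr).astype(float) + eps
    n_pos = -(-M // 2)  # ceil(M / 2)
    idx_pos = rng.choice(len(P_src), n_pos, replace=False, p=p_pos / p_pos.sum())
    idx_neg = rng.choice(len(P_src), M - n_pos, replace=False, p=p_neg / p_neg.sum())
    return np.concatenate([idx_pos, idx_neg])
```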
To sample points from the target, we simply find the closest point in the target of each point from $\mathcal{B}_\mathcal{S}$ \begin{align} \mathcal{B}_\mathcal{T} = \set{\argmin_{\mathbf{q}}\norm{\mathbf{R}^*\mathbf{p}_i+\mathbf{t}^*-\mathbf{q}}}{i\in\mathcal{B}_\mathcal{S}} \end{align} This balanced sampling technique randomly samples points from $\mathcal{S}$ and $\mathcal{T}$, while keeping a balance between points that have correspondences and points that do not. \section{Experiments} \label{sec:experiments} This section shows the experimental results to demonstrate the performance and efficiency of our method. We also conduct an ablation study to show the effectiveness of each component of our model. \subsection{Experimental Setup} \label{sec:experimental-setup} We train our model with the Adam \cite{adam} optimizer for 40 epochs. The initial learning rate is $1\times 10^{-4}$, and is multiplied by 0.1 after 30 epochs. We use a weight decay of $1\times 10^{-3}$ and no Dropout \cite{dropout}. We use the FPFH implementation from the Open3D \cite{open3d} library and a very simple graph neural network (GNN) for feature extraction. The details of the GNN architecture are described in the supplementary material. For both FPFH and GNN features, the number of iterations is set to 3. Following \cite{prnet}, all the experiments are done on the ModelNet40 \cite{modelnet} dataset. ModelNet40 includes 9843 training shapes and 2468 testing shapes from 40 object categories. For a given shape, we randomly sample 1024 points to form a point cloud. For each point cloud, we randomly generate rotations within $[0^\circ, 45^\circ]$ and translations in $[-0.5, 0.5]$. The original point cloud is used as the source, and the transformed point cloud as the target. To generate partially overlapping point clouds, we follow the same method as \cite{prnet}, which fixes a random point far away from the two point clouds and preserves, for each point cloud, the 768 points closest to that point.
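A hedged sketch of this cropping protocol (our own function name; in practice the far point would be drawn at random):

```python
import numpy as np

def make_partial(points, far_point, keep=768):
    """Keep the `keep` points closest to a fixed far-away point, following
    the PRNet-style protocol for generating partially overlapping clouds."""
    d2 = ((points - far_point) ** 2).sum(axis=1)
    return points[np.argsort(d2)[:keep]]
```

Applying this to the source and to the transformed target with the same far point yields two partially overlapping 768-point clouds.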
We compare our method to ICP, Go-ICP, FGR, FPFH+RANSAC, PointNetLK, DCP and PRNet. All the data-driven methods are trained on the same training set. We use the same metrics as \cite{prnet,dcp} to evaluate all these methods. For the rotation matrix, the root mean square error (RMSE($\mathbf{R}$)) and mean absolute error (MAE($\mathbf{R}$)) in degrees are used. For the translation vector, the root mean square error (RMSE($\mathbf{t}$)) and mean absolute error (MAE($\mathbf{t}$)) are used. \subsection{Results} \label{sec:results} In this section, we show the results for three different experiments to demonstrate the effectiveness and robustness of our method. These experimental settings are the same as those in \cite{prnet}. We also include in the supplementary material some visualization results for these experiments. \subsubsection{Unseen Shapes} \label{sec:unseen-shapes} First, we train our model on the training set of ModelNet40 and evaluate on the test set. Both the training set and test set of ModelNet40 contain point clouds from all the 40 categories. This experiment evaluates the ability to generalize to unseen point clouds. \reftab{unseen-shapes} shows the results. 
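As a sketch, the rotation metrics can be computed from Euler angles as below; note that the exact angle convention of \cite{dcp,prnet} may differ, so this is an illustrative variant (with our own function names) rather than the evaluation code.

```python
import numpy as np

def euler_zyx_deg(R):
    """Intrinsic z-y-x Euler angles in degrees (inputs away from gimbal lock)."""
    return np.degrees(np.array([
        np.arctan2(R[1, 0], R[0, 0]),
        np.arcsin(np.clip(-R[2, 0], -1.0, 1.0)),
        np.arctan2(R[2, 1], R[2, 2]),
    ]))

def rotation_errors(R_pred, R_true):
    """RMSE and MAE (in degrees) over the three Euler angles."""
    e = np.abs(euler_zyx_deg(R_pred) - euler_zyx_deg(R_true))
    return np.sqrt((e ** 2).mean()), e.mean()
```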
\begin{table} \begin{center} \caption{Results for testing on point clouds of unseen shapes in ModelNet40.} \label{tab:unseen-shapes} \begin{tabular}{lcccc} \hline\noalign{\smallskip} Model & RMSE($\mathbf{R}$) & MAE($\mathbf{R}$) & RMSE($\mathbf{t}$) & MAE($\mathbf{t}$) \\ \noalign{\smallskip} \hline \noalign{\smallskip} ICP & 33.68 & 25.05 & 0.29 & 0.25\\ FPFH+RANSAC & 2.33 & 1.96 & \textbf{0.015} & 0.013\\ FGR & 11.24 & 2.83 & 0.030 & 0.008\\ Go-ICP & 14.0 & 3.17 & 0.033 & 0.012\\ PointNetLK & 16.74 & 7.55 & 0.045 & 0.025\\ DCP & 6.71 & 4.45 & 0.027 & 0.020\\ PRNet & 3.20 & 1.45 & 0.016 & 0.010\\ \hline FPFH+IDAM & \textbf{2.46} & \textbf{0.56} & 0.016 & \textbf{0.003}\\ GNN+IDAM & 2.95 & 0.76 & 0.021 & 0.005\\ \hline \end{tabular} \end{center} \end{table} We can see that the local registration method ICP performs poorly because the initial rotation angles are large. FPFH+RANSAC is the best performing traditional method, and is comparable to many learning-based methods. Note that both RANSAC and FGR use FPFH features, and our method with FPFH features outperforms both of them. Neural network models have a good balance between performance and efficiency. Our method outperforms all the other methods with both hand-crafted (FPFH) and learned (GNN) features. Surprisingly, FPFH+IDAM has better performance than GNN+IDAM. One possibility is that the GNN overfits to the point clouds in the training set, and does not generalize well to unseen shapes. However, as will be shown in later sections, GNN+IDAM is more robust to noise and also more efficient than FPFH+IDAM. \subsubsection{Unseen Categories} \label{sec:unseen-categories} In the second experiment, we use the first 20 categories in the training set of ModelNet40 for training, and evaluate on the other 20 categories on the test set. This experiment tests the capability to generalize to point clouds of unseen categories. The results are summarized in \reftab{unseen-categories}.
We can see that without training on the testing categories, all the learning-based methods consistently perform worse. As expected, traditional methods are not affected as much. Depending on the evaluation metric, FPFH+RANSAC and FPFH+IDAM are the best performing methods. \begin{table} \begin{center} \caption{Results for testing on point clouds of unseen categories in ModelNet40. } \label{tab:unseen-categories} \begin{tabular}{lcccc} \hline\noalign{\smallskip} Model & RMSE($\mathbf{R}$) & MAE($\mathbf{R}$) & RMSE($\mathbf{t}$) & MAE($\mathbf{t}$) \\ \noalign{\smallskip} \hline \noalign{\smallskip} ICP & 34.89 & 25.46 & 0.29 & 0.25\\ FPFH+RANSAC & \textbf{2.11} & 1.82 & \textbf{0.015} & 0.013\\ FGR & 9.93 & 1.95 & 0.038 & 0.007\\ Go-ICP & 12.53 & 2.94 & 0.031 & 0.010\\ PointNetLK & 22.94 & 9.66 & 0.061 & 0.033\\ DCP & 9.77 & 6.95 & 0.034 & 0.025\\ PRNet & 4.99 & 2.33 & 0.021 & 0.015\\ \hline FPFH+IDAM & 3.04 & \textbf{0.61} & 0.019 & \textbf{0.004}\\ GNN+IDAM & 3.42 & 0.93 & 0.022 & 0.005\\ \hline \end{tabular} \end{center} \end{table} \subsubsection{Gaussian Noise} \label{sec:gaussian-noise} In the last experiment, we add random Gaussian noise with standard deviation 0.01 to all the shapes, and repeat the first experiment (unseen shapes). The random noise is clipped to $[-0.05, 0.05]$. As shown in \reftab{gaussian-noise}, both traditional methods and IDAM based on FPFH features perform much worse than in the noise-free case. This demonstrates that FPFH is not very robust to noise. The performance of data-driven methods is comparable to the noise-free case, thanks to the powerful feature extraction networks. Our method based on GNN features achieves the best performance. \begin{table} \begin{center} \caption{Results for testing on point clouds of unseen shapes in ModelNet40 with Gaussian noise.
} \label{tab:gaussian-noise} \begin{tabular}{lcccc} \hline\noalign{\smallskip} Model & RMSE($\mathbf{R}$) & MAE($\mathbf{R}$) & RMSE($\mathbf{t}$) & MAE($\mathbf{t}$) \\ \noalign{\smallskip} \hline \noalign{\smallskip} ICP & 35.07 & 25.56 & 0.29 & 0.25\\ FPFH+RANSAC & 5.06 & 4.19 & 0.021 & 0.018\\ FGR & 27.67 & 13.79 & 0.070 & 0.039\\ Go-ICP & 12.26 & 2.85 & 0.028 & 0.029\\ PointNetLK & 19.94 & 9.08 & 0.057 & 0.032\\ DCP & 6.88 & 4.53 & 0.028 & 0.021\\ PRNet & 4.32 & 2.05 & \textbf{0.017} & 0.012\\ \hline FPFH+IDAM & 14.21 & 7.52 & 0.067 & 0.042\\ GNN+IDAM & \textbf{3.72} & \textbf{1.85} & 0.023 & \textbf{0.011}\\ \hline \end{tabular} \end{center} \end{table} \subsection{Efficiency} \label{sec:efficiency} We test the speed of our method, and compare it to ICP, FGR, FPFH+RANSAC, PointNetLK, DCP and PRNet. We use the Open3D implementation of ICP, FGR and FPFH+RANSAC, and the official implementations of PointNetLK, DCP and PRNet released by the authors. The experiments are done on a machine with 2 Intel Xeon Gold 6130 CPUs and a single Nvidia GeForce RTX 2080 Ti GPU. We use a batch size of 1 for all the neural network based models. The speed is measured in seconds per frame. We test the speed on point clouds with 1024, 2048 and 4096 points, and the results are summarized in \reftab{efficiency}. It can be seen that neural network based methods are generally faster than traditional methods. When the number of points is small, GNN+IDAM is only slower than DCP. But as the number of points increases, GNN+IDAM becomes much faster than all the other methods. Although FPFH+RANSAC has the best performance among non-learning methods, it is also the slowest. Note that our method with FPFH features is $2\times$ to $5\times$ faster than the other two methods (FGR and RANSAC) that also use FPFH. \begin{table} \begin{center} \caption{Comparison of speed of different models. IDAM(G) and IDAM(F) represent GNN+IDAM and FPFH+IDAM respectively. RANSAC also uses FPFH.
Speed is measured in seconds per frame.} \label{tab:efficiency} \begin{tabular}{lcccccccc} \hline\noalign{\smallskip} & IDAM(G) & IDAM(F) & ICP & FGR & RANSAC & PointNetLK & DCP & PRNet \\ \noalign{\smallskip} \hline \noalign{\smallskip} 1024 points & 0.026 & 0.050 & 0.095 & 0.123 & 0.159 & 0.082 & 0.015 & 0.022\\ 2048 points & 0.038 & 0.078 & 0.185 & 0.214 & 0.325 & 0.085 & 0.030 & 0.048\\ 4096 points & 0.041 & 0.175 & 0.368 & 0.444 & 0.685 & 0.098 & 0.084 & 0.312\\ \hline \end{tabular} \end{center} \end{table} \subsection{Ablation Study} \label{sec:ablation-study} In this section, we present the results of an ablation study of IDAM to show the effectiveness of each component. We examine three key components of our model: distance-aware similarity matrix convolution (denoted as SM), hard point elimination (HA) and hybrid point elimination (HB). We use BS to denote the model that does not contain any of the three components mentioned above. Since hard point elimination is necessary for similarity matrix convolution due to memory constraints, we replace it with random point elimination in BS. We use the inner product of features in BS when similarity matrix convolution is disabled. As a result, BS is a simple model that uses the inner product of features as similarity scores to find correspondences, and directly solves the absolute orientation problem (\refeq{objective-no-weight}). We add the components one by one and compare their performance for GNN features. We conduct the experiments under the settings of ``unseen categories'' as described in \refsec{unseen-categories}. The results are summarized in \reftab{ablation}. It can be seen that even with random sampling, similarity matrix convolution already outperforms the baseline (BS) by a large margin. The two-stage point elimination (HA and HB) further boosts the performance significantly. \begin{table} \begin{center} \caption{Comparison of the performance of different model choices for IDAM.
These experiments examine the effectiveness of similarity matrix convolution (SM), hard point elimination (HA) and hybrid point elimination (HB).} \label{tab:ablation} \begin{tabular}{lcccc} \hline\noalign{\smallskip} Model & RMSE($\mathbf{R}$) & MAE($\mathbf{R}$) & RMSE($\mathbf{t}$) & MAE($\mathbf{t}$) \\ \noalign{\smallskip} \hline \noalign{\smallskip} BS & 7.77 & 5.33 & 0.055 & 0.047\\ BS+SM & 5.08 & 3.58 & 0.056 & 0.042\\ BS+HA+SM & 4.31 & 2.89 & 0.029 & 0.019\\ BS+HA+SM+HB & \textbf{3.42} & \textbf{0.93} & \textbf{0.022} & \textbf{0.005}\\ \hline \end{tabular} \end{center} \end{table} \section{Conclusions} \label{sec:conclusion} In this paper, we propose a novel data-driven pipeline named IDAM for partially overlapping 3D point cloud registration. We present a novel distance-aware similarity matrix convolution to augment the network's ability to find correct correspondences in each iteration. Moreover, a novel two-stage point elimination method is proposed to improve performance while reducing computational complexity. We design a mutual-supervision loss for training IDAM end-to-end without extra annotations of keypoints. Experiments show that our method performs better than the current state-of-the-art point cloud registration methods and is robust to noise. \subsubsection{Acknowledgements} \label{sec:acknowledgements} This work was supported in part by the National Key Research and Development Program of China under Grant 2017YFA0700800. \bibliographystyle{splncs04}
\section{Introduction} Presburger Arithmetic $\PrA$ is the first-order theory of natural numbers with addition. It was introduced by M.~Presburger in 1929 \cite{presburger}. Presburger Arithmetic is complete, recursively axiomatizable, and decidable. The method of interpretations is a standard tool in model theory and in the study of decidability of first-order theories \cite{tarskimostowski,hodges}. An interpretation of a theory $\mathbf{T}$ in a theory $\mathbf{U}$ is essentially a uniform first-order definition of models of $\mathbf{T}$ in models of $\mathbf{U}$ (we present a detailed definition in Section~3). In this paper we study certain questions about interpretability for Presburger Arithmetic that were well-studied in the case of stronger theories like Peano Arithmetic $\PA$. Although, from a technical point of view, the study of interpretability for Presburger Arithmetic uses completely different methods from the study of interpretability for $\PA$ (see for example \cite{visser}), we show that, from an interpretation-theoretic point of view, $\PrA$ has certain similarities to strong theories that prove all the instances of mathematical induction in their own language, i.e. $\PA$, Zermelo-Fraenkel set theory $\ZF,$ etc. A {\em reflexive} arithmetical theory (\cite[p.\,13]{visser}) is a theory that can prove the consistency of all its finitely axiomatizable subtheories. Peano Arithmetic $\PA$ and Zermelo-Fraenkel set theory $\ZF$ are among the well-known reflexive theories. In fact, all sequential theories (a very general class of theories similar to $\PA,$ see \cite[III.1(b)]{hajekpudlak}) that prove all instances of the induction scheme in their language are reflexive. For sequential theories, reflexivity implies that the theory cannot be interpreted in any of its finite subtheories. A.~Visser has conjectured that this purely interpretation-theoretic property holds for $\PrA$ as well.
Note that $\PrA$ satisfies the full induction scheme in its own language but cannot formalize statements about the consistency of formal theories. The conjecture was studied by J.~Zoethout \cite{jetze}. Note that Presburger Arithmetic, unlike sequential theories, cannot encode tuples of natural numbers by single natural numbers. Hence, for interpretations in Presburger Arithmetic, it is important whether individual objects are interpreted by individual objects (one-dimensional interpretations) or by tuples of objects of some fixed length $m$ ($m$-dimensional interpretations). Zoethout considered only the case of one-dimensional interpretations and proved that if any one-dimensional interpretation of $\PrA$ in $(\NN,+)$ gives a model that is definably isomorphic to $(\mathbb{N},+)$, then Visser's conjecture holds for one-dimensional interpretations, i.e. there are no one-dimensional interpretations of $\PrA$ in its finite subtheories. In the present paper we show that the following theorem holds and thus prove Visser's conjecture for one-dimensional interpretations: \begin{theorem}\label{1a} For any model $\mathfrak{A}$ of $\PrA$ that is one-dimensionally interpreted in the model $(\NN,+)$, (a) $\mathfrak{A}$ is isomorphic to $(\NN,+)$; (b) the isomorphism is definable in $(\NN,+)$. \end{theorem} Note that \sref{Theorem}{1a}(a) was established by J.~Zoethout in \cite{jetze}. We also study whether the generalization of \sref{Theorem}{1a} to multi-dimensional interpretations holds. We prove: \begin{theorem}\label{lb} For any $m$ and model $\mathfrak{A}$ of $\PrA$ that is $m$-dimensionally interpreted in $(\NN,+)$, the model $\mathfrak{A}$ is isomorphic to $(\NN,+)$. \end{theorem} We do not know whether the isomorphism is always definable in $(\NN,+)$. In order to prove \sref{Theorem}{lb}, we show that for every $m$ each linear order that is $m$-dimensionally interpretable in $(\mathbb{N},+)$ is \emph{scattered}, i.e. it does not contain a dense suborder.
Moreover, our construction gives an estimation for Cantor-Bendixson ranks of the orders (a notion of Cantor-Bendixson rank for scattered linear orders goes back to Hausdorff \cite{hausdorff}; in order to give a more precise estimation, we use the slightly different notion of $VD_*$-rank from \cite{krs}): \begin{theorem}\label{ordering} All linear orders $m$-dimensionally interpretable in $(\NN,+)$ have $VD_*$-rank at most $m.$ \end{theorem} Note that since every structure interpretable in $(\NN,+)$ is automatic, the fact that both the $VD_*$ and Hausdorff ranks of any scattered linear order interpretable in $(\mathbb{N},+)$ are finite follows from the results on automatic linear orders by B.~Khoussainov, S.~Rubin, and F.~Stephan \cite{krs}. The work is organized as follows. Section 2 introduces the basic notions. In Section 3 we give the definitions of non-parametric interpretations and definable isomorphism of interpretations. In Section 4 we define the dimension of Presburger sets and prove \sref{Theorem}{ordering}. In Section 5 we prove \sref{Theorem}{1a} and explain how it implies the impossibility of interpreting $\PrA$ in its finite subtheories. In Section 6 we discuss the approach for the multi-dimensional case. \section{Presburger Arithmetic and Definable Sets} In this section we give some results about Presburger Arithmetic and definable sets in $(\mathbb{N},+)$ from the literature that will be relevant for our paper. \begin{definition} {\em Presburger Arithmetic} $(\PrA)$ is the elementary theory of the model $(\NN,+)$ of natural numbers with addition. \end{definition} It is easy to see that every number $n\in \mathbb{N}$, the relations $<$ and $\le$, the modulo comparison relations $\equiv_n$, for natural $n\ge 1$, and the functions $x\longmapsto nx$ of multiplication by a natural number $n$ are definable in the model $(\mathbb{N},+)$. We fix some definitions for these constants, relations, and functions.
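For concreteness, one possible choice is $x<y \iff \exists z\,(z\neq 0 \wedge x+z=y)$ and $x\equiv_n y \iff \exists z\,(x+nz=y \vee y+nz=x)$, where $nz$ abbreviates the $n$-fold sum $z+\dots+z$. The following brute-force check of these two candidate definitions over an initial segment of $\NN$ is our own sketch (with a bounded witness search in place of the unbounded existential quantifier):

```python
def less(x, y, bound=100):
    # x < y  iff  there is z != 0 with x + z = y
    return any(z != 0 and x + z == y for z in range(bound))

def cong(n, x, y, bound=100):
    # x = y (mod n)  iff  there is z with x + n*z = y or y + n*z = x
    return any(x + n * z == y or y + n * z == x for z in range(bound))
```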
This gives us a translation from the first-order language $\Lc$ of the signature $\langle =,\{n\mid n\in\mathbb{N}\},+,<~,\{\equiv_n\mid n\ge 1\}, \{x\longmapsto n x\mid n\in \mathbb{N}\}\rangle$ to the first-order language $\Lc^{-}$ of the signature $\langle =,+\rangle$. Since $\PrA$ is the elementary theory of $(\mathbb{N},+)$, regardless of the choice of the definitions, the translation is uniquely determined up to $\PrA$-provable equivalence. Thus we can freely switch between $\Lc$-formulas and equivalent $\Lc^{-}$-formulas. Note that $\PrA$ admits quantifier elimination in the extended language $\mathcal{L}$ \cite{presburger}. The well-known fact about order types of nonstandard models of $\mathrm{PA}$ also holds for models of Presburger Arithmetic: \begin{theorem}\label{models-class} Any nonstandard model $\mathfrak{A} \models\PrA$ has the order type $\NN+\ZZ\cdot A$, where $\langle A,<_A\rangle$ is some dense linear order without endpoints. Thus, in particular, any countable model of $\PrA$ either has the order type $\NN$ or $\NN+\ZZ\cdot\QQ.$ \end{theorem} For vectors $\overline{c},\overline{p_1},\ldots,\overline{p_n}\in\ZZ^m$ we call the set $\{\overline{c}+\sum k_i \overline{p_i}\mid k_i\in\NN\}$ a {\em lattice} with the {\em generating} vectors $\overline{p_1},\ldots,\overline{p_n}$ and the \emph{initial} vector $\overline{c}$. If $\overline{p_1},\ldots,\overline{p_n}$ are linearly independent ($n\le m$), we call the set an {\em $n$-dimensional fundamental lattice}. R.~Ito \cite{ir} has proved that any union of finitely many (possibly, intersecting) lattices in $\NN^m$ is a disjoint union of finitely many fundamental lattices. S.~Ginsburg and E.~Spanier \cite[Theorem 1.3]{ginsburg} have shown that the subsets of $\NN^k$ definable in $(\mathbb{N},+)$ are exactly the subsets of $\NN^k$ that are unions of finitely many (possibly, intersecting) lattices; note that the sets from the latter class are known as \emph{semilinear} sets.
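As a toy one-dimensional illustration of Ito's result: the lattices $2\NN$ and $3\NN$ intersect, but their union can be rewritten as the disjoint union of the fundamental lattices with initial vectors $0,2,3,4$ and common generating vector $6$. A brute-force check on an initial segment (our own sketch):

```python
def lattice_1d(c, p, bound):
    # the one-dimensional lattice {c + k*p : k in N}, truncated below bound
    return set(range(c, bound, p))

B = 200
union = lattice_1d(0, 2, B) | lattice_1d(0, 3, B)       # 2N ∪ 3N, intersecting
disjoint = [lattice_1d(r, 6, B) for r in (0, 2, 3, 4)]  # residues mod 6
```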
Combining these two results, we obtain \begin{theorem} \label{fund} All subsets of $\NN^k$ definable in $(\mathbb{N},+)$ are exactly the subsets of $\NN^k$ that are disjoint unions of finitely many fundamental lattices. \end{theorem} Let us now consider the extension of the first-order predicate language with an additional quantifier $\exists^{=y}x,$ called a {\em counting quantifier} (a notion introduced in \cite{barrington}), used as follows: if $G(\overline{x},z)$ is an $\Lc$-formula with the free variables $\overline{x},z,$ then $F=\exists^{=y}z\:G(\overline{x},z)$ is also a formula with the free variables $\overline{x},y.$ We extend the standard assignment of truth values to first-order formulas in the model $(\mathbb{N},+)$ to formulas with counting quantifiers. For a formula $F(\overline{x},y)$ of the form $\exists^{=y}z\:G(\overline{x},z)$, a vector of natural numbers $\overline{a}$, and a natural number $n$ we say that $F(\overline{a},n)$ is true iff there are exactly $n$ distinct natural numbers $b$ such that $G(\overline{a},b)$ is true. H.~Apelt \cite{apelt} and N.~Schweikardt \cite{schweikardt} have discovered that such an extension does not increase the expressive power of $\PrA:$ \begin{theorem}(\cite[Corollary 5.10]{schweikardt})\label{unti} Every $\Lc$-formula $F(\overline{x})$ that uses counting quantifiers is equivalent in $(\NN,+)$ to a quantifier-free $\Lc$-formula. \end{theorem} \section{Interpretations} \begin{definition} \label{interpretation} Suppose we have two first-order signatures $\Omega_1$ and $\Omega_2$.
An $m$-dimensional \emph{translation} $\iota$ of the first-order language of the signature $\Omega_1$ to the first-order language of the signature $\Omega_2$ consists of \begin{enumerate} \item a first-order formula $\mathit{Dom}_{\iota}(\overline{y})$ of the signature $\Omega_2$, where $\overline{y}$ is a vector of variables of length $m$, with the intended meaning of defining the domain of the translation; \item first-order formulas $\mathit{Pred}_{\iota,P}(\overline{y}_1,\ldots,\overline{y}_n)$ of the signature $\Omega_2$, where each $\overline{y}_i$ is a vector of variables of length $m$, for each predicate $P(x_1,\ldots,x_n)$ from $\Omega_1$ (including $x_1=x_2$); \item first-order formulas $\mathit{Fun}_{\iota,f}(\overline{y}_0,\overline{y}_1,\ldots,\overline{y}_n)$ of the signature $\Omega_2$, where each $\overline{y}_i$ is a vector of variables of length $m$, for each function $f(x_1,\ldots,x_n)$ from $\Omega_1$. \end{enumerate} Translation $\iota$ is an {\em interpretation} of a model $\mathfrak{A}$ of the signature $\Omega_1$ with the domain $A$ in a model $\mathfrak{B}$ of the signature $\Omega_2$ with the domain $B$ if \begin{enumerate} \item $\mathit{Dom}_{\iota}(\overline{y})$ defines a non-empty subset $D\subseteq B^m$; \item $\mathit{Pred}_{\iota,=}(\overline{y}_1,\overline{y}_2)$ defines an equivalence relation $\sim$ on the set $D$; \item there is a bijection $h\colon D/{\sim}\to A$ such that for each predicate $P(x_1,\ldots,x_n)$ from $\Omega_1$ and $\overline{b}_1,\ldots,\overline{b}_n\in D$ we have $$\mathfrak{A}\models P(h([\overline{b}_1]_{\sim}),\ldots,h([\overline{b}_n]_{\sim}))\iff \mathfrak{B}\models \mathit{Pred}_{\iota,P}(\overline{b}_1,\ldots,\overline{b}_n)$$ and for each function $f(x_1,\ldots,x_n)$ from $\Omega_1$ and $\overline{b}_0,\overline{b}_1,\ldots,\overline{b}_n\in D$ we have $$\mathfrak{A}\models h([\overline{b}_0]_{\sim})=f(h([\overline{b}_1]_{\sim}),\ldots,h([\overline{b}_n]_{\sim}))\iff \mathfrak{B}\models
\mathit{Fun}_{\iota,f}(\overline{b}_0,\overline{b}_1,\ldots,\overline{b}_n).$$ \end{enumerate} Translation $\iota$ is an interpretation of a theory $\mathbf{T}$ of the signature $\Omega_1$ in a model $\mathfrak{B}$ of the signature $\Omega_2$ if it is an interpretation of some model of $\mathbf{T}$ in $\mathfrak{B}$. $\iota$ is an interpretation of a theory $\mathbf{T}$ of the signature $\Omega_1$ in a theory $\mathbf{U}$ of the signature $\Omega_2$ if it is an interpretation of $\mathbf{T}$ in every model $\mathfrak{B}$ of $\mathbf{U}$. Translation $\iota$ is called \emph{non-relative} if the formula $\mathit{Dom}_{\iota}(\overline{y})$ is $\top$, where $\overline{y}$ is $(y_1,\ldots,y_m)$. We say that translation $\iota$ has \emph{absolute equality} if the formula $\mathit{Pred}_{\iota,=}(\overline{y},\overline{z})$ is $y_1=z_1\land\ldots\land y_m=z_m$, where $\overline{y}$ is $(y_1,\ldots,y_m)$ and $\overline{z}$ is $(z_1,\ldots,z_m)$. \end{definition} Note that, naturally, for each translation $\iota$ of a signature $\Omega_1$ to a signature $\Omega_2$, we can define a map $F(x_1,\ldots,x_n)\longmapsto F^{\iota}(\overline{y}_1,\ldots,\overline{y}_n)$ from formulas of the signature $\Omega_1$ to formulas of the signature $\Omega_2$ such that if $\iota$ is an interpretation of a model $\mathfrak{A}$ in a model $\mathfrak{B}$, then for each $\overline{b}_1,\ldots,\overline{b}_n\in D$ we have $$\mathfrak{A}\models F(h([\overline{b}_1]_{\sim}),\ldots,h([\overline{b}_n]_{\sim}))\iff \mathfrak{B}\models \mathit{F}^{\iota}(\overline{b}_1,\ldots,\overline{b}_n),$$ where $m$, $D$, and $h$ are as in the definition above. We also note that if $\iota$ is an interpretation of a theory $\mathbf{T}$ in a model $\mathfrak{B}$, then there is a model $\mathfrak{A}$ of $\mathbf{T}$, unique up to isomorphism, such that $\iota$ is an interpretation of $\mathfrak{A}$ in $\mathfrak{B}$.
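As a standard example of these notions, $(\ZZ,+)$ admits a $2$-dimensional interpretation in $(\NN,+)$: a pair $(a,b)$ represents the integer $a-b$, the formula $\mathit{Pred}_{\iota,=}((a,b),(c,d))$ is $a+d=b+c$, and addition is defined componentwise. The following finite-range check of the conditions of Definition~\ref{interpretation} is our own sketch:

```python
def interp_eq(p, q):
    # Pred_{iota,=}: (a, b) ~ (c, d)  iff  a + d = b + c
    return p[0] + q[1] == p[1] + q[0]

def interp_add(p, q):
    # Fun_{iota,+}: componentwise addition represents the sum
    return (p[0] + q[0], p[1] + q[1])

def h(p):
    # the bijection h from equivalence classes onto Z: [(a, b)] -> a - b
    return p[0] - p[1]
```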
\begin{definition} Suppose $\iota_1$ and $\iota_2$ are an $m_1$-dimensional and an $m_2$-dimensional translation, respectively, from a signature $\Omega_1$ to a signature $\Omega_2$. Suppose further that $I(\overline{y},\overline{z})$ is a first-order formula of the signature $\Omega_2$, where $\overline{y}$ consists of $m_1$ variables and $\overline{z}$ consists of $m_2$ variables. Now assume $\iota_1$ and $\iota_2$ are interpretations of the same model $\mathfrak{A}$ of the signature $\Omega_1$ with the domain $A$ in a model $\mathfrak{B}$ of the signature $\Omega_2$ with the domain $B$. As in Definition~\ref{interpretation}, translations $\iota_1$ and $\iota_2$ give us respectively sets $D_1\subseteq B^{m_1}$, $D_2\subseteq B^{m_2}$ and equivalence relations $\sim_1$ on $D_1$ and $\sim_2$ on $D_2$. Under this assumption we say that $I(\overline{y},\overline{z})$ is a \emph{definition of an isomorphism} of $\iota_1$ and $\iota_2$ if we can choose bijections $h_1\colon D_1/{\sim_1}\to A$ and $h_2\colon D_2/{\sim_2}\to A$ (satisfying the properties of $h$ from Definition~\ref{interpretation}, for the respective $\iota_i$) such that for each $\overline{b}\in D_1$ and $\overline{c}\in D_2$ we have $$h_1([\overline{b}]_{\sim_1})=h_2([\overline{c}]_{\sim_2})\iff \mathfrak{B}\models I(\overline{b},\overline{c}).$$ If $\iota_1$ and $\iota_2$ are interpretations of a theory $\mathbf{T}$ in a theory $\mathbf{U}$ and for each model $\mathfrak{B}$ of $\mathbf{U}$ the formula $I(\overline{y},\overline{z})$ is a definition of an isomorphism between $\iota_1$ and $\iota_2$ as interpretations in $\mathfrak{B}$, then we say that $I(\overline{y},\overline{z})$ is a \emph{definition of an isomorphism} between $\iota_1$ and $\iota_2$ as interpretations of $\mathbf{T}$ in $\mathbf{U}$.
If $\iota_1$ and $\iota_2$ are interpretations of a theory $\mathbf{T}$ in a theory $\mathbf{U}$ (a model $\mathfrak{A}$) and there is a definition of an isomorphism then we say that $\iota_1$ and $\iota_2$ as interpretations of the theory $\mathbf{T}$ in the theory $\mathbf{U}$ (the model $\mathfrak{A}$) are \emph{definably isomorphic}. \end{definition} Since the theory $\PrA$ that we study is the elementary theory of a model ($\PrA=\Th(\mathbb{N},+)$), there is actually not much difference between interpretations in the standard model and in the theory. A translation $\iota$ is an interpretation of some theory $\mathbf{T}$ in $\PrA$ iff $\iota$ is an interpretation of $\mathbf{T}$ in $(\NN,+)$. A formula $I$ is a definition of an isomorphism between interpretations $\iota_1$ and $\iota_2$ of some theory $\mathbf{T}$ in $\PrA$ iff $I$ is a definition of an isomorphism between $\iota_1$ and $\iota_2$ as interpretations of $\mathbf{T}$ in $(\NN,+)$. \section{Linear Orders Interpretable in $(\NN,+)$} \subsection{Functions Definable in Presburger Arithmetic} \begin{definition} Suppose $A\subseteq \NN^n$ is a definable set. We call a function $f\colon A\ra\NN$ \emph{piecewise polynomial} of a degree $\le m$ if there is a decomposition of $A$ into finitely many fundamental lattices $C_1,\ldots,C_k$ such that the restriction of $f$ to each $C_i$ is a polynomial with rational coefficients of a degree $\le m$\footnote{In our work, we use the word `piecewise' only in the sense defined here.}. \end{definition} In particular, a {\em piecewise linear} function is a piecewise polynomial function of a degree $\le 1$. \begin{theorem}\label{jetz} The functions $f\colon\NN^n\ra\NN$ definable in $(\NN,+)$ are exactly the piecewise linear ones. \end{theorem} \begin{proof} The definability of all piecewise linear functions in Presburger Arithmetic is obvious. 
A function $f\colon \NN^n\to \NN$ is definable iff its graph \begin{center} $G=\{(f(a_1,\ldots,a_n),a_1,\ldots,a_n)\mid (a_1,\ldots,a_n)\in \NN^n\}$ \end{center} \noindent is definable. According to \sref{Theorem}{fund}, $G$ is a finite union of fundamental lattices $J_1\sqcup\ldots \sqcup J_k$. For $1\le i\le k$ we denote by $J_i'$ the projection of $J_i$ along the first coordinate, $J_i'= \{(a_1,\ldots,a_n)\mid\exists a_0 ( (a_0,a_1,\ldots,a_n)\in J_i)\}$. Clearly, all $J_i'$ are fundamental lattices, and the restriction of the function $f$ to each $J_i'$ is linear.\qed \end{proof} \begin{corollary}\label{irration} Every function $f\colon \NN\to\NN$ definable in $(\NN,+)$ can be bounded from above by a linear function with a rational slope. Conversely, if $h_1(x)<f(x)<h_2(x)$ for all $x,$ where $h_{1}(x)$ and $h_{2}(x)$ are linear functions of the same irrational slope, then $f(x)$ is not definable. \end{corollary} \subsection{Dimension} Here we give the definition of the notion of dimension of Presburger-definable sets. \begin{definition} The {\em dimension} $\dim(A)$ of a Presburger-definable set $A\subseteq\NN^m$ is defined as follows. \begin{itemize} \item $\dim(A)=0$ iff $A$ is empty or finite; \item $\dim(A)=k\ge 1$ iff there is a definable bijection between $A$ and $\NN^k.$ \end{itemize} \end{definition} The following theorem shows that the definition indeed gives a unique dimension for each $\PrA$-definable set. \begin{theorem}\label{Existe} Suppose $M$ is an infinite Presburger definable subset of $\NN^k,\:k\ge 1$. Then there is a unique natural number $l\in\NN$ such that there is a Presburger definable bijection between $M$ and $\NN^l,$ $1\le l\le k.$ \end{theorem} \begin{proof} First let us show that there is some $l$ with this property. According to \sref{Theorem}{fund}, all definable in $(\NN,+)$ sets are disjoint unions of fundamental lattices $L_1,\ldots,L_n$ of the dimensions $s_1,\ldots,s_n$, respectively. 
It is easy to see that for each $L_i$ there is a linear bijection with $\NN^{s_i}$, which is obviously definable. Let us put $l$ to be the maximum of the $s_i$'s. Now we just need to notice that for each sequence of natural numbers $r_1,\ldots,r_m$ with $u=\max(r_1,\ldots,r_m)$, if $u\ge 1$ then we could split the set $\NN^u$ into sets $A_1,\ldots,A_m$ for which we have definable bijections with $\NN^{r_1},\ldots,\NN^{r_m}$, respectively. We prove the latter by induction on $m$. Now let us show that there is no other $l$ with this property. Assume the contrary. Then clearly, for some $l_1>l_2$ there is a definable bijection $f\colon\NN^{l_1}\ra\NN^{l_2}$. Let us consider a sequence of expanding cubes, $I_n^{l_1}\eqdef\{(x_1\sco x_{l_1})\mid 0\le x_1\sco x_{l_1}\le n\}$. We define the function $g\colon \NN\to\NN$ which maps a natural number $n$ to the least $m$ such that $f(I_n^{l_1})\subseteq I_m^{l_2}$. Clearly, $g$ is a Presburger-definable function. Then, by \sref{Corollary}{irration}, there should be some linear function $h\colon \NN\to\NN$ such that $g(n)\le h(n)$, for all $n$. But since for each $n\in\NN$ and $m<n^{l_1/l_2}$ the cube $I_n^{l_1}$ contains more points than the cube $I_m^{l_2},$ from the definition of $g$ and the injectivity of $f$ we see that $g(n)\ge n^{l_1/l_2}$. This contradicts the linearity of the function $h$.\qed \end{proof} From the proof above we see that the following corollary holds: \begin{corollary} \label{sublattice} The dimension of a set $M\subseteq\NN^k$ is equal to the maximal $l$ such that there exists an exactly $l$-dimensional fundamental lattice which is a subset of $M.$ \end{corollary} \subsection{Presburger-Definable Linear Orders} \begin{lemma}\label{MS} Let $\overline{x}=(x_1,\ldots,x_n)$ and $\overline{y}=(y_1,\ldots,y_k)$ be vectors of free variables, where $\overline{y}$ will be treated as a vector of parameters. 
Let $F(\xv,\yv)$ be an $\Lc^{-}$-formula such that for an infinite set of parameter vectors $B=\{\bv_1,\bv_2,\ldots\}$ the sets defined by $F(\xv,\bv_i)$ are disjoint in $\NN^n.$ Then only a finite number of those definable sets can be exactly $n$-dimensional. \end{lemma} \begin{proof} Let us consider the set $A\subseteq \NN^{n+k}$ defined by the formula $F(\xv,\yv)$. For each vector $\bv=(b_1,\ldots,b_k)\in \mathbb{N}^k$ and set $S\subseteq \NN^{n+k}$ we consider the section $S\upharpoonright\bv =\{(a_1,\ldots,a_n,b_1,\ldots,b_k) \mid (a_1,\ldots,a_n,b_1,\ldots, b_k)\in S\}$. Clearly, in these terms, in order to prove the lemma we need to show that there are only finitely many distinct $\bv\in B$ such that the section $A\upharpoonright \bv$ is an $n$-dimensional set. By \sref{Theorem}{fund}, the set $A$ is a disjoint union of finitely many fundamental lattices $J_i\subseteq \NN^{n+k}.$ It is easy to see that if some section $A\upharpoonright\bv$ is an $n$-dimensional set then for at least one $J_i$ the section $J_i\upharpoonright\bv$ is an $n$-dimensional set. Thus it is enough to show that for each $J_i$ there are only finitely many vectors $\bv\in B$ for which the section $J_i\upharpoonright\bv$ is an $n$-dimensional set. Let us now assume for a contradiction that for some $J_i$ there are infinitely many $J_i\upharpoonright \bv_0$, for $\bv_0\in B$, that are $n$-dimensional sets. Let us consider some parameter vector $\bv_0\in B$ such that the section $J_i\upharpoonright\bv_0$ is an $n$-dimensional set. Then by \sref{Corollary}{sublattice} there exists an $n$-dimensional fundamental lattice $K\subseteq J_i\upharpoonright\bv_0$. Suppose the generating vectors of $K$ are $\vv_1,\ldots,\vv_n$ and the initial vector of $K$ is $\uv$. It is easy to see that each vector $\vv_j$ is a non-negative linear combination of the generating vectors of $J_i$, since otherwise for large enough $h\in \mathbb{N}$ we would have $\uv+h\vv_j\not\in J_i$. 
Now notice that for any $\bv\in B$ and $\av \in J_i\upharpoonright \bv$ the $n$-dimensional lattice with generating vectors $\vv_1,\ldots,\vv_n$ and initial vector $\av$ is a subset of $J_i\upharpoonright \bv$. Thus infinitely many of the sets defined by $F(\xv,\bv)$, for $\bv\in B$, contain shifts of the same $n$-dimensional fundamental lattice. It is easy to see that the latter contradicts the assumption that all the sets are disjoint.\qed \end{proof} \begin{definition} We call a linear ordering $(L,<)$ \emph{scattered} if it does not have an infinite dense suborder. \end{definition} \begin{definition} Let $(L,\prec)$ be a linear ordering. We define a family of equivalence relations $\simeq_{\alpha}$, for ordinals $\alpha\in\mathbf{Ord}$, by transfinite recursion: \begin{itemize} \item $\simeq_0$ is just equality; \item $\simeq_{\lambda}=\bigcup\limits_{\beta<\lambda}\simeq_{\beta}$, for limit ordinals $\lambda$; \item $a\simeq_{\alpha+1}b \stackrel{\mbox{\footnotesize $\mathrm{def}$}}{\iff} |\{c\in L\mid (a\prec c\prec b)\mbox{ or }(b\prec c \prec a)\}/{\simeq_{\alpha}}|<\aleph_0$. \end{itemize} Let us define the $VD_*$-\emph{rank}\footnote{$VD$ stands for {\em very discrete}; see \cite[p.\,84-89]{rosenstein}.} $\mathrm{rk}(L,\prec)\in \mathbf{Ord}\cup \{\infty\}$ of the order $(L,\prec)$. The $VD_*$-rank $\mathrm{rk}(L,\prec)$ is the least $\alpha$ such that $L/{\simeq_{\alpha}}$ is finite. And if for all $\alpha\in \mathbf{Ord}$ the factor-set $L/{\simeq_{\alpha}}$ is infinite then we put $\mathrm{rk}(L,\prec)=\infty$. By definition we put $\alpha<\infty$, for all $\alpha\in \mathbf{Ord}$. \end{definition} \begin{remark} Linear orders $(L,\prec)$ such that $\mathrm{rk}(L,\prec)<\infty$ are exactly the scattered linear orders. 
\end{remark} \begin{example} \label{rank0rank1} The orders with $VD_*$-rank equal to $0$ are exactly the finite orders, and the orders with $VD_*$-rank $\le 1$ are exactly the order sums of finitely many copies of $\NN$, $-\NN$ and $1$ (the one-element linear order). \end{example} \begin{theorem}[Restatement of \sref{Theorem}{ordering}]\label{rank_from_dimension} For every natural $m\ge 1$, linear orders which are $m$-dimensionally interpretable in $(\mathbb{N},+)$ have $VD_*$-rank $m$ or below. \end{theorem} \begin{proof} We prove the theorem by induction on $m$. Suppose we have an $m$-dimensional interpretation of a linear order $(L,\prec)$ in $(\NN,+)$, i.e. there is an $\Lc^{-}$-formula $D(\xv)$ giving the domain of the interpretation and an $\Lc^{-}$-formula $\prec_*(\xv,\yv)$ giving the interpretation of the order relation, where both $\xv$ and $\yv$ consist of $m$ variables. Without loss of generality we may assume that $L= \{\av \in \mathbb{N}^m \mid (\mathbb{N},+)\models D(\av)\}$ and $\prec$ is defined by the formula $\prec_*$. Now assume for a contradiction that $\mathrm{rk}(L,\prec)>m$. By the definition of $VD_*$-rank, there are infinitely many distinct $\simeq_m$-equivalence classes in $L$. Hence there is an infinite chain $\av_0\prec \av_1\prec\ldots $ of elements of $L$ such that $\av_i\not\simeq_m \av_{i+1}$, for each $i$. Let us consider the intervals $L_i=\{\bv\in L\mid \av_i\prec\bv\prec\av_{i+1}\}$. Since $\av_i\not\simeq_m \av_{i+1}$, the set $L_i/{\simeq_{m-1}}$ is infinite and $\mathrm{rk}(L_i,\prec)>m-1$. Clearly, all $L_i$ are Presburger definable sets. Let us show that $\dim(L_i)\ge m$, for each $i$. If $m=1$ then it follows from the fact that $L_i$ is infinite. If $m>1$ then we assume for a contradiction that $\dim(L_i)<m$, and notice that in this case $(L_i,\prec)$ would be $(m-1)$-dimensionally interpretable in $(\NN,+)$, which contradicts the induction hypothesis and the fact that $\mathrm{rk}(L_i,\prec)>m-1$. 
Since $L_i\subseteq \NN^m$, we conclude that $\dim(L_i)=m$, for all $i$. Now consider the parametric family of subsets of $\NN^m$ given by the formula $\yv_1\prec_* \xv\prec_* \yv_2$, where we treat the variables $\yv_1$ and $\yv_2$ as parameters. We consider the sets given by the pairs of parameters $\yv_1=\av_i$ and $\yv_2=\av_{i+1}$, for $i\in\mathbb{N}$. Clearly these sets are exactly the $L_i$'s. Thus we have infinitely many disjoint sets of dimension $m$ in the family, a contradiction with Lemma \ref{MS}.\qed \end{proof} \begin{remark} Each scattered linear order of $VD_*$-rank 1 is $1$-dimensionally interpretable in $(\NN,+)$. There are scattered linear orders of $VD_*$-rank 2 that are not interpretable in $(\NN,+)$. \end{remark} \begin{proof} The interpretability of linear orders with rank $0$ and rank $1$ follows from \sref{Example}{rank0rank1}. Since there are uncountably many non-isomorphic scattered linear orders of $VD_*$-rank 2 and only countably many linear orders interpretable in $(\NN,+)$, there is some scattered linear order of $VD_*$-rank 2 that is not interpretable in $(\NN,+)$.\qed \end{proof} \section{One-Dimensional Self-Interpretations and Visser's Conjecture} The following theorem is a generalization of \cite[pp.\,27-28,\;Lemmas 3.2.2-3.2.3]{jetze}. \begin{theorem}\label{4r} Let $\mathbf{U}$ be a theory and $\iota$ be an $m$-dimensional interpretation of $\mathbf{U}$ in $(\NN,+)$. Then for some $m'\le m$ there is an $m'$-dimensional non-relative interpretation with absolute equality $\kappa$ of $\mathbf{U}$ in $(\NN,+)$ which is definably isomorphic to $\iota$. \end{theorem} \begin{proof} First let us find $\kappa$ with absolute equality. 
Indeed, there is a definable in $(\NN,+)$ well-ordering $\prec$ of $\NN^m$: $$(a_0,\ldots,a_{m-1})\prec (b_0,\ldots,b_{m-1})\stackrel{\mbox{\footnotesize \textrm{def}}}{\iff} \exists i< m (\forall j< i\;(a_j=b_j)\land a_i<b_i).$$ Now we could define $\kappa$ by taking the definition of $+$ from $\iota$, taking the trivial interpretation of equality, and taking the domain of interpretation to be the part of the domain of $\iota$ that consists of the $\prec$-least elements of the equivalence classes with respect to the $\iota$-interpretation of equality. It is easy to see that this $\kappa$ is definably isomorphic to $\iota$. Now assume that we already have $\iota$ with absolute equality. We find the desired non-relative interpretation $\kappa$ by using \sref{Theorem}{Existe} and bijectively mapping the domain of $\iota$ to $\NN^{m'}$, where $m'$ is the dimension of the domain of the interpretation $\iota$.\qed \end{proof} Combining \sref{Theorem}{models-class} and \sref{Theorem}{rank_from_dimension}, we obtain \begin{theorem}[Restatement of \sref{Theorem}{1a}]\label{order} For any model $\mathfrak{A}$ of $\PrA$ that is one-dimensionally interpreted in the model $(\NN,+)$, (a) $\mathfrak{A}$ is isomorphic to $(\NN,+)$; (b) the isomorphism is definable in $(\NN,+)$. \end{theorem} \begin{proof} Let us denote by $<_*$ the order relation given by the $\PrA$ definition of $<$ within $\mathfrak{A}$. Clearly $<_*$ is definable in $(\NN,+)$. Thus we have an interpretation of the order type of $\mathfrak{A}$ in $\PrA$. Hence by \sref{Theorem}{rank_from_dimension} the order type of $\mathfrak{A}$ is scattered. But from \sref{Theorem}{models-class} we know that the only case when the order type of a model of $\PrA$ is scattered is when it is exactly $\NN$. Thus $\mathfrak{A}$ is isomorphic to $(\NN,+)$. 
From \sref{Theorem}{4r} it follows that it is enough to show the definability of the isomorphism only in the case when the interpretation that gives us $\mathfrak{A}$ is a non-relative interpretation with absolute equality. It is easy to see that the isomorphism $f$ from $\mathfrak{A}$ to $(\NN,+)$ is the function $f\colon x\longmapsto |\{y\in\mathbb{N}\mid y<_*x\}|$. Now we use a counting quantifier to express the function: \begin{gather} f(a)=b \iff (\mathbb{N},+)\models \exists^{=b}z \;(z<_*a). \end{gather} Now we apply \sref{Theorem}{unti} and see that $f$ is definable in $(\mathbb{N},+)$.\qed \end{proof} \begin{theorem} The theory $\PrA$ is not one-dimensionally interpretable in any of its finitely axiomatizable subtheories. \end{theorem} \begin{proof} Assume $\iota$ is a one-dimensional interpretation of $\PrA$ in some finitely axiomatizable subtheory $\mathrm{T}$ of $\PrA$. In the standard model $(\NN,+)$ the interpretation $\iota$ gives us a model $\mathfrak{A}$ for which there is a definable isomorphism $f$ with $(\NN,+)$. Now let us consider the theory $\mathrm{T}'$ that consists of $\mathrm{T}$ and the statement that the definition of $f$ gives an isomorphism between the (internal) natural numbers and the structure given by $\iota$. Clearly $\mathrm{T}'$ is finitely axiomatizable and true in $(\NN,+)$, and hence is a subtheory of $\PrA$. But now note that $\mathrm{T}'$ proves that whatever is true in the internal structure given by $\iota$ is true. And since $\mathrm{T}'$ proves every axiom of $\PrA$ in the internal structure given by $\iota$, the theory $\mathrm{T}'$ proves every axiom of $\PrA$. Thus $\mathrm{T}'$ coincides with $\PrA$. But it is known that $\PrA$ is not finitely axiomatizable, a contradiction.\qed \end{proof} \section{Multi-Dimensional Self-Interpretations} We already know that the only linear orders that it is possible to interpret in $(\NN,+)$ (even by multi-dimensional interpretations) are scattered linear orders. 
And we could use this to prove the analogue of \sref{Theorem}{1a}(a) for multi-dimensional interpretations by the same reasoning as we have used for \sref{Theorem}{1a}(a). However, the only way any interpretation can be isomorphic to the trivial one in the multi-dimensional case is by having a one-dimensional set as its domain, and from \sref{Theorem}{1a} it follows that all interpretations of $\PrA$ in $(\mathbb{N},+)$ that have a one-dimensional domain are definably isomorphic to the trivial interpretation of $(\mathbb{N},+)$. Thus, in order to prove the analogue of \sref{Theorem}{1a}(b) for multi-dimensional interpretations, one should in fact show that the domain of any interpretation of $\PrA$ in $(\NN,+)$ is a one-dimensional set. In this section we will give some partial results about multi-dimensional self-interpretations of $\PrA$. {\em Cantor polynomials} are the quadratic polynomials that define bijections between $\NN^2$ and $\NN:$ \begin{gather} C_1(x,y)=C_2(y,x)=\frac{1}{2}(x+y)^2+\frac{1}{2}(x+3y).\label{cantorps} \end{gather} The bijections $C_1$ and $C_2$ are the isomorphisms of $(\NN^2,\prec_1)$ with $(\NN,<)$ and of $(\NN^2,\prec_2)$ with $(\NN,<)$, respectively, where \begin{gather*} (a_1,a_2)\prec_1(b_1,b_2)\stackrel{\mbox{\footnotesize \textrm{def}}}{\iff}(a_2<b_2\wedge{a_1+a_2=b_1+b_2})\vee(a_1+a_2<b_1+b_2),\\ (a_1,a_2)\prec_2(b_1,b_2)\stackrel{\mbox{\footnotesize \textrm{def}}}{\iff}(a_2>b_2\wedge{a_1+a_2=b_1+b_2})\vee(a_1+a_2<b_1+b_2). \end{gather*} Note that both $\prec_1$ and $\prec_2$ are definable in $(\NN,+)$. The following theorem shows that these interpretations of $(\NN,<)$ cannot be extended to interpretations of $(\NN,x\mapsto sx)$, for non-square $s$, and thus cannot be extended to interpretations of $(\NN,+)$. \begin{theorem}\label{en} Let $s$ be a natural number that is not a square and let $i$ be either $1$ or $2$. Let us denote by $f\colon \NN^2\to \NN^2$ the function $f(\overline{a})=C_i^{-1}(s\cdot C_i(\overline{a}))$, i.e. 
the preimage of the function $x\mapsto s\cdot x$ under the bijection $C_i\colon \NN^2\to \NN$. Then the function $f$ is not definable in $(\NN,+)$. \end{theorem} \begin{proof} Since the cases of $i=1$ and $i=2$ are essentially the same, let us consider just the case of $i=1$. Suppose the contrary: there is an $\Lc^{-}$-formula $F(x_1,x_2,y_1,y_2)$ which defines the graph of $f$: $$(\NN,+)\models F(a_1,a_2,b_1,b_2)\iff f(a_1,a_2)=(b_1,b_2), \mbox{for all $a_1,a_2,b_1,b_2\in \NN$.}$$ Then the following function $h\colon\NN\ra\NN$ is also definable: \begin{gather} h(a)=b\stackrel{\mbox{\footnotesize \textrm{def}}}{\iff}\exists c,d ( f(a,0)=(c,d)\land b=c+d). \end{gather} Now it is easy to see that the following inequalities hold for all $a\in\NN$: \begin{gather*} C_1(h(a),0)\le s\cdot C_1(a,0)<C_1(h(a)+1,0)\Ra\\ \frac{h(a)(h(a)+1)}{2}\le \frac{sa(a+1)}{2}<\frac{(h(a)+1)(h(a)+2)}{2}\Ra\\ h(a)^2<s(a+1)^2\mbox{ and }sa^2<(h(a)+2)^2 \Ra\\ \sqrt{s}a-2<h(a)<\sqrt{s}a+\sqrt{s}. \end{gather*} We conclude that the Presburger-definable function $h$ is bounded both from above and from below by linear functions of the same irrational slope $\sqrt{s}$, a contradiction with \sref{Corollary}{irration}.\qed \end{proof} We conjecture that the following general fact holds: \begin{hyp}\label{ba} For any (multi-dimensional) interpretation $\iota$ of $\PrA$ in the model $(\NN,+)$ there is a definable isomorphism with the trivial interpretation of $(\NN,+)$ in $(\NN,+)$. \end{hyp} The following theorem is a slight modification of the theorem by G.R.~Blakley~\cite{blakley}. \begin{theorem}\label{bash} Let $A$ be a $d\times n$ matrix of integers and let the function $\varphi_A\colon\mathbb{Z}^d\ra \NN\cup \{\aleph_0\}$ be defined as follows: \begin{center} $\varphi_A(u)\eqdef|\{\overline{\lambda}=(\lambda_1,\ldots,\lambda_n)\in\NN^n\mid A\overline{\lambda}=u\}|.$ \end{center} If the values of $\varphi_A$ are always finite, then the function $\varphi_A$ is a piecewise polynomial function of a degree $\le n-\mathrm{rk}(A)$. 
\end{theorem} \begin{proof} The existence of the fundamental lattices $C_1,\ldots,C_l$ on which $\varphi_A$ is polynomial follows from \cite[p.\,302]{sturmfels}. Now we prove that the $n-\mathrm{rk}(A)$ bound on the degree holds. Let us consider any fundamental lattice $L$ with the initial vector $\overline{v}$ and generating vectors $\overline{s}_1,\ldots,\overline{s}_m$ such that the restriction of $\varphi_A$ to $L$ is a polynomial. Now it is easy to see that we could find a polynomial $P(x_1,\ldots,x_m)$ such that $\varphi_A(\overline{v}+\eta_1\overline{s}_1+\ldots+\eta_m\overline{s}_m)=P(\eta_1,\ldots,\eta_m)$, for all $\eta_1,\ldots,\eta_m\in \NN$. Since the choice of $L$ was arbitrary, we could finish the proof of the theorem by showing that $P$ is of a degree $\le n-\mathrm{rk}(A)$. Let us assume for a contradiction that the degree of $P$ is $>n-\mathrm{rk}(A)$. Clearly, then there are $\theta_1,\ldots,\theta_m\in \NN$ such that the polynomial $Q(y)=P(\theta_1y,\ldots,\theta_m y)$ is of a degree $k>n-\mathrm{rk}(A)$. Now we consider the vector $\overline{d}=\theta_1\overline{s}_1+\ldots+\theta_m\overline{s}_m$ and the vectors $\overline{e}_l=\overline{v}+l\overline{d}$, for $l\in \mathbb{N}$. We have $\varphi_A(\overline{e}_l)=Q(l)$. Let us now estimate the values of $\varphi_A(\overline{e}_l)$. The value $\varphi_A(\overline{e}_l)$ is the number of integer points in the polyhedron $H_l=\{(\lambda_1,\ldots,\lambda_n)=\overline{\lambda}\in \mathbb{R}^n\mid A\overline{\lambda}=\overline{e}_l\mbox{ and }\lambda_1,\ldots,\lambda_n\ge 0\}$. And now it is easy to see that $\varphi_A(\overline{e}_l)\le h_l/o$, where $o$ is the volume of the $(n-\mathrm{rk}(A))$-dimensional ball of radius $1/2$ and $h_l$ is the $(n-\mathrm{rk}(A))$-dimensional volume of the (at most) $(n-\mathrm{rk}(A))$-dimensional polyhedron $H_l'=\{(\lambda_1,\ldots,\lambda_n)=\overline{\lambda}\in \mathbb{R}^n\mid A\overline{\lambda}=\overline{e}_l\mbox{ and }\lambda_1,\ldots,\lambda_n\ge -1\}$. 
Now we just need to notice that the linear dimensions of the polyhedra $H_l'$ are bounded by a linear function of $l$ and hence the volumes $h_l$ are bounded by some polynomial of the degree $n-\mathrm{rk}(A)$, a contradiction with the fact that the polynomial $Q(y)$ is of the degree $k>n-\mathrm{rk}(A)$.\qed \end{proof} Recall that a semilinear set is a finite union of lattices and that by a result of \cite{ir} any semilinear set is a disjoint union of fundamental lattices. It is easy to see that the following lemma holds: \begin{lemma} \label{pwpol_prop} \begin{enumerate} \item \label{pwpol_add} If $f,g\colon A\to \mathbb{Z}$ are piecewise polynomial functions of a degree $\le m$ then the function $h\colon A\to \mathbb{Z},\;h(\vv)=f(\vv)+g(\vv)$, is a piecewise polynomial function of a degree $\le m$; \item \label{pwpol_rest} if $A\subseteq \mathbb{Z}^n$ is a semilinear set, $f\colon A\to \mathbb{Z}$ is a piecewise polynomial function of a degree $\le m$, and $B\subseteq A$ is a $\PrA$-definable set then the restriction of $f$ to $B$ is a piecewise polynomial function of a degree $\le m$; \item \label{pwpol_lin} if $A\subseteq \mathbb{Z}^n$ is a semilinear set, $f\colon A\to \mathbb{Z}$ is a piecewise polynomial function of a degree $\le m$, and $F\colon \mathbb{Z}^k\to \mathbb{Z}^n$ is an affine map, then the function $h\colon F^{-1}(A)\to\mathbb{Z}$, $h(\vv)=f(F(\vv))$, is a piecewise polynomial function of a degree $\le m$. \end{enumerate} \end{lemma} We now prove a lemma that generalizes the one-dimensional construction of the cardinality of sections. \begin{lemma}\label{newlem} Let $S\subseteq\NN^{n+m}$ be a definable set in $(\NN,+)$. For each vector $\overline{b}=(b_1,\ldots,b_m)\in \NN^m$ we define the section $S\upharpoonright {\overline{b}}$ to be the set of all elements of $S$ of the form $(a_1,\ldots,a_n,b_1,\ldots,b_m)$. Suppose all the sets $S\upharpoonright \overline{b}$ are finite. 
Consider the section cardinality function $f_S\colon\NN^m\ra\NN,\:f_S\colon\overline{b}\mapsto|S\upharpoonright \overline{b}|.$ Then $f_S$ is a piecewise polynomial function of a degree $\le n$. \end{lemma} \begin{proof} Let us first prove the lemma for the case when $S$ is a fundamental lattice with the initial vector $\cv$ and the generating vectors $\vv_1,\ldots,\vv_s\in \NN^{n+m}$. We consider the vectors $\cv',\vv_1',\ldots,\vv_s'\in \NN^m$ that consist of the last $m$ components of the vectors $\cv,\vv_1,\ldots,\vv_s$, respectively. Clearly, for each $\bv\in \NN^m$, the value $f_S(\bv)=|S\upharpoonright \bv|$ is equal to the number of different $\overline{\lambda}=(\lambda_1,\ldots,\lambda_s)\in\NN^s$ such that $\lambda_1\vv_1'+\ldots+\lambda_s\vv_s'=\bv-\cv'$. Now we compose a matrix $A$ from the vectors $\vv_1',\ldots,\vv_s'$ and see that $f_S(\bv)=|\{\overline{\lambda}\in \NN^s\mid A \overline{\lambda}=\bv-\cv'\}|=\varphi_{A}(\bv-\cv')$. Note that since $S$ was a fundamental lattice, $s-\mathrm{rk}(A)\le n$. Now we apply \sref{Theorem}{bash} and see that $\varphi_{A}$ is a piecewise polynomial of a degree $\le n$. Now from \sref{Lemma}{pwpol_prop}(\ref{pwpol_rest}) and \sref{Lemma}{pwpol_prop}(\ref{pwpol_lin}) it follows that $f_S$ is piecewise polynomial of a degree $\le n$ too. In the case of an arbitrary definable $S$, we apply \sref{Theorem}{fund} and find fundamental lattices $J_1,\ldots,J_r$ such that $S=J_1\sqcup J_2\sqcup\ldots\sqcup J_r$. Now we see that for each $\overline{b}\in\NN^m$ we have $f_S(\overline{b})=f_{J_1}(\overline{b})+\ldots+f_{J_r}(\overline{b})$, and since we already know that all $f_{J_i}$ are piecewise polynomial of a degree $\le n$, by \sref{Lemma}{pwpol_prop}(\ref{pwpol_add}) the function $f_S$ is piecewise polynomial of a degree $\le n$.\qed \end{proof} \begin{theorem} Suppose a definable in $(\NN,+)$ binary relation $\prec$ on $\NN^n$ has the order type $\NN$. 
Then the order isomorphism between $(\NN^n,\prec)$ and $(\NN,<)$ is a piecewise polynomial function of a degree $\le n$. \end{theorem} \begin{proof} We see that the order isomorphism is the function $f\colon \NN^n\to \NN$ given by \begin{center} $f(a_1,\ldots,a_n)=|\{(b_1,\ldots,b_n)\in\NN^n\mid (b_1,\ldots,b_n)\prec (a_1,\ldots,a_n)\}|.$ \end{center} By \sref{Lemma}{newlem} we see that $f$ is a piecewise polynomial function of a degree $\le n$. \qed \end{proof} The Fueter--P\'olya theorem \cite{fueterpolya,nathanson} states that every quadratic polynomial that maps $\NN^2$ bijectively onto $\NN$ is one of the two Cantor polynomials (\ref{cantorps}). One possible approach to proving \sref{Conjecture}{ba} would be to give a classification of all piecewise polynomial bijections and then use this classification together with a generalization of \sref{Theorem}{en} in order to show that no two-dimensional non-relative interpretation of $(\NN,<)$ in $(\NN,+)$ can be extended to an interpretation of $(\NN,+)$. \section*{Acknowledgments} The authors wish to thank Lev~Beklemishev for suggesting the study of Visser's conjecture, for a number of discussions of the subject, and for his useful comments on the paper.
\section{Introduction} Sequences with low correlation have important applications in code-division multiple access (CDMA), spread spectrum systems, and broadband satellite communications \cite{AP07, CHPW15, K10, LHL,Lin,16ZXE}. For these applications, sequences with good parameters, such as a low correlation (including autocorrelation and cross-correlation) and a large family size, are highly desired. In the literature, many families of binary sequences with good properties have been constructed by various methods. Gold sequences are the best-known binary sequences with optimal correlations; they can be obtained by EXOR-ing two $m$-sequences \cite{m} of the same length with each other \cite{Span}. \iffalse there are various constructions of binary sequences with good parameters. The best-known class of such sequences are the so-called m-sequences, which can be obtained from linear feedback shift registers \cite{m}, with length of the form $2^n-1$ for a positive integer $n$. Since m-sequences have ideal autocorrelation properties, many families of sequences with low correlations have been constructed by using m-sequences and their decimations. For example, the well-known Gold sequences are a family of binary sequences having three-level out-of-phase auto- and cross-correlation (nontrivial correlation) values which can be obtained by EXOR-ing two m-sequences of the same length with each other \cite{Span}. \fi Kasami sequences can be constructed using m-sequences and their decimations \cite{Kas}. Besides, there are some known families of sequences of length $2^n-1$ with good correlation properties, such as bent function sequences \cite{Bent}, No sequences \cite{No}, and Trace Norm sequences \cite{Klap}. In 2011, Zhou and Tang generalized the modified Gold sequences and obtained binary sequences of length $2^n-1$ for $n=2m+1$, size $2^{\ell n}+\cdots+2^n-1$, and correlation $2^{m+\ell}+1$ for each $1\le \ell \le m$ \cite{ZT}. 
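The $m$-sequence building block behind Gold and Kasami sequences can be demonstrated numerically. The sketch below is ours (the specific primitive polynomial and seed are illustrative choices, not from the paper): it generates a maximal-length sequence of period $2^5-1=31$ from a linear feedback shift register and verifies its ideal two-level periodic autocorrelation.

```python
# Hedged illustration: an m-sequence of period 2^5 - 1 = 31 from the
# primitive polynomial x^5 + x^2 + 1, i.e. the recurrence
# s_{t+5} = s_{t+2} XOR s_t.  Polynomial and seed are our own choices.

def m_sequence(length=31, seed=(1, 0, 0, 0, 0)):
    state = list(seed)             # any nonzero 5-bit seed works
    out = []
    for _ in range(length):
        out.append(state[0])
        state = state[1:] + [state[2] ^ state[0]]   # shift and feed back
    return out

def autocorrelation(seq, shift):
    # periodic autocorrelation of the +-1 version of a 0/1 sequence
    n = len(seq)
    signed = [(-1) ** b for b in seq]
    return sum(signed[i] * signed[(i + shift) % n] for i in range(n))

s = m_sequence()
assert autocorrelation(s, 0) == 31                         # in-phase peak
assert all(autocorrelation(s, t) == -1 for t in range(1, 31))  # ideal -1
```

EXOR-ing a preferred pair of such sequences (one being a suitable decimation of the other) yields the three-level correlation values of the Gold family.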
In addition to the above mentioned sequences of length $2^n-1$, there is also some work devoted to the construction of good binary sequences with lengths of forms such as $2(2^n-1)$ and $(2^n-1)^2$. In \cite{Uda}, Udaya and Siddiqi obtained a family of $2^{n-1}$ binary sequences of length $2(2^n-1)$ satisfying the Welch bound for odd $n$. This family was extended to a larger one with $2^n$ sequences and the same correlation in \cite{Tang3}. In \cite{Tang2}, the authors presented two optimal families of sequences of lengths $2^n-1$ and $2(2^n-1)$ for odd integers $n$. In 2002, Gong presented a construction of binary sequences with size $2^n-1$, length $(2^n-1)^2$, and correlation $3+2(2^n-1)$ \cite{gong}. However, most of the known constructions via finite fields made use of the multiplicative cyclic group of $\mathbb{F}_{2^n}$. It was often overlooked that all $2^n+1$ rational places, including the place at infinity, of the rational function field over $\mathbb{F}_{2^n}$ form a cyclic structure under an automorphism of order $2^n+1$. Recently, by using this cyclic structure, an explicit construction of binary sequences of length $2^n+1$, size $2^n-1$, and correlation upper bounded by $\lfloor 2^{(n+2)/2}\rfloor$ via cyclotomic function fields over the finite field $\mathbb{F}_{2^n}$ was given in \cite{JMX22}. Note that all the above constructions are based on finite fields of even characteristic. There are also some constructions of binary sequences which are based on finite fields of odd characteristic \cite{Lempel,Legendre,Pat,Rushanan,Su, Jin21}. Paterson proposed a family of pseudorandom binary sequences based on Hadamard difference sets and maximum distance separable (MDS) codes for length $p^2$, where $p\equiv 3\ (\text{mod } 4)$ is a prime \cite{Pat}. In 2006, a family of binary sequences of length $p$, where $p$ is an odd prime, with family size $(p-1)/2$ and correlation bounded by $5+2\sqrt{p}$ was given in \cite{Rushanan}; these are now known as Weil sequences. 
The idea of constructing Weil sequences is to derive sequences from a single quadratic-residue-based Legendre sequence using a shift-and-add construction. In 2010, by combining the $p$-periodic Legendre sequence and the $(q-1)$-periodic Sidelnikov sequence, Su {\it et al.} introduced a new sequence of length $p(q-1)$, called the Legendre--Sidelnikov sequence \cite{Su}. Recently, Jin {\it et al.} provided several new families of sequences with low correlations of length $q-1$ for any prime power $q$ by using the cyclic multiplicative group $\mathbb{F}_q^*$, and sequences of length $p$ for any odd prime $p$ from the cyclic additive group $\mathbb{F}_p$ in \cite{Jin21}. The idea of these constructions is to use the multiplicative quadratic character of finite fields of odd characteristic. Though various constructions of binary sequences with good parameters have been proposed, there are still limited choices for sequence lengths. For applications in different scenarios, adding or deleting sequence values usually destroys the good correlation properties. Thus, there is still a need to construct binary sequences having low correlations with more flexible choices of parameters. However, it appears to be an open and challenging problem to find good binary sequences with new parameters. \subsection{Our main result and techniques} In this paper, we provide an explicit construction of binary sequences with a low correlation of length $p^n+1$ via cyclotomic function fields over the finite field $\mathbb{F}_{p^n}$ for any odd prime $p$. The correlation of this family of binary sequences is upper bounded by $4+\lfloor 2\cdot p^{n/2}\rfloor$. To the best of our knowledge, this is the first construction of binary sequences with a low correlation of length type $p^n+1$. In Table I, we list many families of binary sequences with low correlations for comparison. It turns out that our binary sequences have competitive parameters. 
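The quadratic-character constructions discussed above can also be checked numerically. The following sketch is our own illustration (the prime $p=31$ and the convention of setting the zeroth term to $+1$ are illustrative choices): for $p\equiv 3\ (\mathrm{mod}\ 4)$ the Legendre sequence then has ideal out-of-phase autocorrelation $-1$.

```python
# Hedged illustration: the Legendre sequence of length p built from the
# quadratic character of F_p, with the 0-th term set to +1 by convention.

def legendre_symbol(a, p):
    # Euler's criterion: a^((p-1)/2) mod p is 1 for nonzero squares,
    # p-1 for non-squares, and 0 for a = 0 (mod p)
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def legendre_sequence(p):
    return [1] + [legendre_symbol(i, p) for i in range(1, p)]

def autocorrelation(seq, shift):
    n = len(seq)
    return sum(seq[i] * seq[(i + shift) % n] for i in range(n))

p = 31                                     # 31 = 3 (mod 4)
s = legendre_sequence(p)
assert autocorrelation(s, 0) == p
assert all(autocorrelation(s, t) == -1 for t in range(1, p))
```

Weil sequences then arise by multiplying shifted copies of such a sequence together, which is what brings the Weil bound into the correlation estimate.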
In order to better explain the main idea of this article, we give a high-level description of our techniques. In the language of function fields, we denote by $P_\alpha$ the zero of $x-\alpha$ for any $\alpha\in\mathbb{F}_q$ in the rational function field $\mathbb{F}_q(x)$. We employ all rational places, including the infinity place, of the rational function field to obtain binary sequences of length $q+1$ as in \cite{JMX20,JMX21,JMX22}. The automorphism group of the rational function field $\mathbb{F}_q(u)$ over $\mathbb{F}_q$ is isomorphic to the projective general linear group ${\rm PGL}_2(\mathbb{F}_q)$, and there is an automorphism $\sigma$ of order $q+1$ such that $\{\sigma^j(P_0)\}_{j=0}^q$ consists of all $q+1$ rational places of $\mathbb{F}_q(u)$ \cite{HKT08,JMX20}. Our binary sequences are constructed via the quadratic multiplicative character of $\mathbb{F}_q$ and evaluations of carefully chosen rational functions $z_1,z_2,\cdots,z_S\in E=\mathbb{F}_q(u)$ at the cyclically ordered rational places $\{\sigma^j(P_0)\}_{j=0}^q$. Bounding the correlation of this family of binary sequences reduces to upper bounding the number of rational places of the Kummer extensions $E_{i}=E(y)$ with $y^2=z_i\cdot \sigma^{-t}(z_i)$ for $1\le i \le S$ and $1\le t\le q$, and of the Kummer extensions $E_{i,j}=E(y)$ with $y^2=z_i\cdot \sigma^{-t}(z_j)$ for $1\le i\neq j\le S$ and $0\le t\le q$. In order to obtain binary sequences with a low correlation, the number of rational places of $E_i$ or $E_{i,j}$ cannot be large. Hence, $z_i\cdot \sigma^{-t}(z_j)$ cannot lie in $\mathbb{F}_q\cdot E^2$ except in the case $t=0$ and $i=j$, since there are many more rational places in the constant field extensions. To overcome this problem, we introduce an equivalence relation and choose at most one element from each equivalence class.
Moreover, since the genus of $E_i$ or $E_{i,j}$ cannot be large either, we need to choose the functions $z_i$ such that $z_i$ and $\sigma^{-t}(z_j)$ have the same pole. In particular, $z_i$ can be chosen from a Riemann-Roch space associated to a place which is invariant under the automorphism $\sigma$. Concretely, we employ the cyclotomic function fields over $\mathbb{F}_q(x)$ with modulus $p(x)$, a primitive quadratic irreducible polynomial, to construct such an explicit family of binary sequences with a low correlation. The Galois group of this cyclotomic function field over $\mathbb{F}_q(x)$ is a cyclic group of order $q^2-1$. It has a unique subgroup of order $q-1$; its fixed subfield is the function field $E$, and the Galois group of $E$ over $\mathbb{F}_q(x)$ is a cyclic group of order $q+1$ generated by $\sigma$. Let $Q$ be the unique totally ramified place of $E$ lying over $p(x)$ and let $\mathcal{L}(Q)$ be the Riemann-Roch space of $Q$. Then $\sigma(Q)=Q$, and the rational functions $z_1,z_2,\cdots,z_S$ can be chosen as representative elements of certain equivalence classes of an equivalence relation on $\mathcal{L}(Q)\setminus \{0\}$. This family of binary sequences with a low correlation via cyclotomic function fields over finite fields of odd characteristic is distinct from the even-characteristic case given in \cite{JMX22}, since there are non-negligible differences in the determination of the cyclic automorphism $\sigma$, the equivalence relation on $\mathcal{L}(Q)\setminus \{0\}$, the representative elements of the equivalence classes, the construction of the binary sequences, and the estimation of their correlations.
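To make the description above concrete in the simplest case, the sketch below (our own toy, not part of the construction in Section \ref{sec:3}) works over $\mathbb{F}_p(u)$ for $p=11$: it brute-forces an irreducible quadratic $u^2+au+b$ for which the M\"obius map $u\mapsto -b/(u+a)$ cycles through all $p+1$ rational places, and evaluates the quadratic character of a single function $z=(u^2+cu+d)/(u^2+au+b)$ along the orbit. The choices of $a,b,c,d$ are ours, and the careful selection of equivalence-class representatives $z_1,\dots,z_S$ is omitted.

```python
# Toy sketch (ours): one candidate sequence of length p + 1 obtained by
# evaluating the quadratic character at rational places cycled by a Mobius map.
p = 11
INF = None  # stands for the infinity place of F_p(u)

def chi(v):  # quadratic character of F_p^*; all values fed in are nonzero
    return 1 if pow(v % p, (p - 1) // 2, p) == 1 else -1

def irreducible(c, d):  # u^2 + c*u + d has no root in F_p
    return all((u * u + c * u + d) % p != 0 for u in range(p))

def sigma(a, b, u):  # the Mobius map u -> -b/(u + a) on F_p together with INF
    if u is INF:
        return 0
    if (u + a) % p == 0:
        return INF
    return (-b * pow((u + a) % p, p - 2, p)) % p

def orbit_of_zero(a, b):
    orb, u = [], 0
    while True:
        orb.append(u)
        u = sigma(a, b, u)
        if u == 0:
            return orb

# brute-force an (a, b) whose orbit covers all p + 1 rational places
a, b = next((a, b) for a in range(p) for b in range(1, p)
            if irreducible(a, b) and len(orbit_of_zero(a, b)) == p + 1)
orbit = orbit_of_zero(a, b)

# one z with irreducible numerator: both quadratics have no zero in F_p, so z
# is finite and nonzero at every rational place, and z(INF) = 1
c, d = next((c, d) for c in range(p) for d in range(1, p)
            if (c, d) != (a, b) and irreducible(c, d))

def z_at(alpha):
    if alpha is INF:
        return 1  # ratio of the leading coefficients
    num = (alpha * alpha + c * alpha + d) % p
    den = (alpha * alpha + a * alpha + b) % p
    return (num * pow(den, p - 2, p)) % p

seq = [chi(z_at(alpha)) for alpha in orbit]
print(seq)
```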
\begin{table*} \setlength{\abovecaptionskip}{0pt}% \setlength{\belowcaptionskip}{10pt}% \caption{PARAMETERS OF SEQUENCE FAMILIES} \centering { \begin{tabular}{@{}cccc@{}} \toprule Sequence & Length N & Family Size & Bound of Correlation \\ \midrule \multicolumn{1}{|c|}{Gold(odd) \cite{Span}} & \multicolumn{1}{c|}{$2^n-1$, $n$ odd} & \multicolumn{1}{c|}{$N+2$} & \multicolumn{1}{c|}{$1+\sqrt{2}\sqrt{N+1}$} \\ \midrule \multicolumn{1}{|c|}{Gold(even) \cite{Span}} & \multicolumn{1}{c|}{$2^n-1,n=4k+2$} & \multicolumn{1}{c|}{$N+2$} & \multicolumn{1}{c|}{$1+2\sqrt{N+1}$} \\ \midrule \multicolumn{1}{|c|}{Kasami(small) \cite{Kas}} & \multicolumn{1}{c|}{$2^n-1, n \text{ even}$} & \multicolumn{1}{c|}{$\sqrt{N+1}$} & \multicolumn{1}{c|}{$1+\sqrt{N+1}$} \\ \midrule \multicolumn{1}{|c|}{Kasami(large) \cite{Kas}} & \multicolumn{1}{c|}{$2^n-1,n=4k+2$} & \multicolumn{1}{c|}{$(N+2)\sqrt{N+1}$} & \multicolumn{1}{c|}{$1+2\sqrt{N+1}$} \\ \midrule \multicolumn{1}{|c|}{Bent \cite{Bent}} & \multicolumn{1}{c|}{$2^n-1, n=4k$} & \multicolumn{1}{c|}{$\sqrt{N+1}$} & \multicolumn{1}{c|}{$1+\sqrt{N+1}$} \\ \midrule \multicolumn{1}{|c|}{No \cite{No}} & \multicolumn{1}{c|}{$2^n-1, n \text{ even}$} & \multicolumn{1}{c|}{$\sqrt{N+1}$} & \multicolumn{1}{c|}{$1+\sqrt{N+1}$} \\ \midrule \multicolumn{1}{|c|}{Trace Norm \cite{Klap}} & \multicolumn{1}{c|}{$2^n-1, n \text{ even}$} & \multicolumn{1}{c|}{$\sqrt{N+1}$} & \multicolumn{1}{c|}{$1+\sqrt{N+1}$} \\ \midrule \multicolumn{1}{|c|}{Tang et al. \cite{Tang3}} & \multicolumn{1}{c|}{$2(2^n-1),n$ odd } & \multicolumn{1}{c|}{$\frac{N}{2}+1$} & \multicolumn{1}{c|}{$2+\sqrt{N+2}$} \\ \midrule \multicolumn{1}{|c|}{Gong \cite{gong}} & \multicolumn{1}{c|}{$(2^n-1)^2$, $2^n-1$ prime} & \multicolumn{1}{c|}{$\sqrt{N}$ }& \multicolumn{1}{c|}{$3+2\sqrt{N}$} \\ \midrule \multicolumn{1}{|c|}{Jin et al. 
\cite{JMX22}} &\multicolumn{1}{c|}{{\bf $2^n+1$}} &\multicolumn{1}{c|}{{\bf $N-2$}} & \multicolumn{1}{c|}{\bf {$2\sqrt{N-1}$}} \\\midrule \multicolumn{1}{|c|}{Paterson \cite{Pat}} & \multicolumn{1}{c|}{$p^2$, $p$ prime, $p\equiv 3(\text{mod } 4)$} & \multicolumn{1}{c|}{$N$} & \multicolumn{1}{c|}{$5+4\sqrt{N}$} \\ \midrule \multicolumn{1}{|c|}{Paterson \cite{Pat}} & \multicolumn{1}{c|}{$p^2$, $p$ prime, $p\equiv 3(\text{mod } 4)$} & \multicolumn{1}{c|}{$\sqrt{N}+1$} & \multicolumn{1}{c|}{$3+2\sqrt{N}$} \\ \midrule \multicolumn{1}{|c|}{Weil \cite{Rushanan}} & \multicolumn{1}{c|}{$p$, odd prime} & \multicolumn{1}{c|}{$(N-1)/2$} & \multicolumn{1}{c|}{$5+2\sqrt{N}$} \\ \midrule \multicolumn{1}{|c|}{Jin et al. \cite{Jin21}} & \multicolumn{1}{c|}{$p-1$, $p\ge17$ odd prime power} & \multicolumn{1}{c|}{$N+3$} & \multicolumn{1}{c|}{$6+2\sqrt{N+1}$} \\ \midrule \multicolumn{1}{|c|}{Jin et al. \cite{Jin21}} & \multicolumn{1}{c|}{$p-1$, $p\ge11$ odd prime power} & \multicolumn{1}{c|}{$N/2$} & \multicolumn{1}{c|}{$2+2\sqrt{N+1}$} \\ \midrule \multicolumn{1}{|c|}{Jin et al. \cite{Jin21}} & \multicolumn{1}{c|}{$p$, $p\ge 17$ odd prime} & \multicolumn{1}{c|}{$N$} & \multicolumn{1}{c|}{$5+2\sqrt{N}$} \\ \midrule \multicolumn{1}{|c|}{Jin et al. \cite{Jin21}} & \multicolumn{1}{c|}{$p$, $p \ge 11$ odd prime} & \multicolumn{1}{c|}{$(N-1)/2$} & \multicolumn{1}{c|}{$1+2\sqrt{N}$} \\ \midrule \multicolumn{1}{|c|}{{\bf Our construction}} & \multicolumn{1}{c|}{ {$p^n+1$, $p$ odd prime}} & \multicolumn{1}{c|}{{\bf $N-3$}} & \multicolumn{1}{c|}{\bf {$4+2\sqrt{N-1}$}} \\ \bottomrule \end{tabular} } \end{table*} \subsection{Organization of this paper} In Section \ref{sec:2}, we provide preliminaries on binary sequences, rational function fields, extension theory of function fields, cyclotomic function fields and Kummer extensions. 
In Section \ref{sec:3}, we present a theoretical construction of binary sequences with a low correlation via cyclotomic function fields over finite fields of odd characteristic. In Section \ref{sec:4}, we give an algorithm to generate such a family of binary sequences and provide numerical results obtained with the computer algebra system Sage. \section{Preliminaries}\label{sec:2} In this section, we present preliminaries on binary sequences and their correlation, rational function fields, the extension theory of function fields, cyclotomic function fields, and Kummer extensions. \subsection{Binary sequences and their correlation} Let $N$ be a positive integer and let ${\mathcal S}$ be a family of binary sequences of length $N$. For every sequence ${\bf s}=(s_0,s_1,\dots,s_{N-1})\in{\mathcal S}$ with $s_i\in\{1,-1\}$, we define the autocorrelation of ${\bf s}$ at delay $t$ for $1\le t\le N-1$ by \begin{equation}\label{eq:2.1} A_t({\bf s}):=\sum_{i=0}^{N-1}s_is_{i+t}, \end{equation} where the subscript $i+t$ is taken modulo $N$. For two distinct binary sequences ${\bf u}=(u_0,u_1,\dots,u_{N-1})$ and $ {\bf v}=(v_0,v_1,\dots,v_{N-1})$ in ${\mathcal S}$, we define the cross-correlation of ${\bf u}$ and ${\bf v}$ at delay $0\le t\le N-1$ by \begin{equation}\label{eq:2.2} C_t({\bf u},{\bf v}):=\sum_{i=0}^{N-1}u_iv_{i+t}. \end{equation} The correlation of the family of sequences ${\mathcal S}$ is defined by \begin{equation}\label{eq:2.3} Cor({\mathcal S}):=\max\left\{\max_{{\bf s}\in{\mathcal S}, 1\le t\le N-1}\{|A_t({\bf s})|\},\max_{{\bf u}\neq{\bf v}\in{\mathcal S}, 0\le t\le N-1}\{|C_t({\bf u},{\bf v})|\}\right\}. \end{equation} \subsection{Rational function fields} In this subsection, we recall basic facts about the rational function field. The reader may refer to \cite{HKT08,JMX20,JMX21,St09} for more details. Let $q$ be a prime power and $\mathbb{F}_q$ be the finite field with $q$ elements.
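As a sanity check of the definitions in \eqref{eq:2.1}--\eqref{eq:2.3}, the following sketch (our own example, not from the paper) implements $A_t$, $C_t$ and $Cor({\mathcal S})$ directly and verifies the well-known ideal autocorrelation of the Legendre sequence for $p=7\equiv 3\ (\text{mod } 4)$, under the convention that the entry at index $0$ is $+1$.

```python
# Direct implementations of the correlation measures (2.1)-(2.3).
def A(s, t):                       # autocorrelation of s at delay t
    N = len(s)
    return sum(s[i] * s[(i + t) % N] for i in range(N))

def C(u, v, t):                    # cross-correlation of u and v at delay t
    N = len(u)
    return sum(u[i] * v[(i + t) % N] for i in range(N))

def Cor(S):                        # correlation of the family S, as in (2.3)
    N = len(S[0])
    auto = max(abs(A(s, t)) for s in S for t in range(1, N))
    cross = max((abs(C(u, v, t)) for a, u in enumerate(S)
                 for b, v in enumerate(S) if a != b for t in range(N)),
                default=0)
    return max(auto, cross)

# Legendre sequence for p = 7 (p = 3 mod 4), with s_0 := +1 by convention:
p = 7
s = [1 if i == 0 or pow(i, (p - 1) // 2, p) == 1 else -1 for i in range(p)]
print(s, [A(s, t) for t in range(1, p)])
```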
Let $K$ be the rational function field $\mathbb{F}_q(x)$, where $x$ is a transcendental element over $\mathbb{F}_q$. Every finite place $P$ of $K$ corresponds to a monic irreducible polynomial $p(x)$ in $\mathbb{F}_q[x]$, and its degree $\deg(P)$ is equal to the degree of the polynomial $p(x)$. The pole of $x$ is called the infinity place of $K$ and is denoted by $P_{\infty}$. In fact, there are exactly $q+1$ rational places of $K$, namely the place $P_{\alpha}$ corresponding to $x-\alpha$ for each $\alpha\in \mathbb{F}_q$ and the infinity place $P_{\infty}$. Let $P$ be a rational place of $K$ and let $\mathcal{O}_P$ be its valuation ring. For any $f\in \mathcal{O}_P$, $f(P)$ is defined to be the residue class of $f$ modulo $P$ in $\mathcal{O}_P/P\cong \mathbb{F}_q$; otherwise, $f(P)=\infty$ for any $f\in K\setminus \mathcal{O}_P$. If $f(x)=g(x)/h(x)\in K$ is written as a quotient of relatively prime polynomials, then the residue class map can be determined explicitly as follows: $$f(P_\alpha)=\begin{cases} g(\alpha)/h(\alpha) & \text{ if } h(\alpha)\neq 0, \\ \infty & \text{ if } h(\alpha)=0,\end{cases} $$ for any $\alpha\in\mathbb{F}_q$. Moreover, if $\deg(g(x))<\deg(h(x))$, then $f(P_{\infty})=0.$ The automorphism group ${\rm Aut }(K/\mathbb{F}_q)$ of the rational function field $K$ over $\mathbb{F}_q$ is isomorphic to the projective general linear group ${\rm PGL}_2(\mathbb{F}_q)$. Any automorphism $\sigma\in {\rm Aut }(K/\mathbb{F}_q)$ is uniquely determined by $\sigma(x)$, which has the form $$\sigma(x)=\frac{ax+b}{cx+d}$$ for some constants $a,b,c,d\in\mathbb{F}_q$ with $ad-bc\neq0$. In particular, by \cite{JMX20}, there exists an automorphism $\sigma\in {\rm Aut }(K/\mathbb{F}_q)$ of order $q+1$ which acts cyclically on all rational places of $K$.
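The residue-class evaluation above is easy to mirror computationally; the sketch below (our own toy, with $q=5$ and $f=(x+1)/(x^2+2)$ chosen by us) evaluates $f=g/h$ at the rational places $P_\alpha$ and $P_\infty$, including the standard equal-degree case at infinity (ratio of leading coefficients), which goes beyond the single case stated in the text.

```python
# Evaluating f = g/h at the rational places of F_q(x), q = 5 (our toy example).
q = 5
INF = "infinity"

def ev(coeffs, alpha):  # polynomial value; coefficients listed low-to-high
    return sum(c * pow(alpha, i, q) for i, c in enumerate(coeffs)) % q

def value(g, h, place):  # residue f(P) of f = g/h with g, h relatively prime
    if place == INF:
        dg, dh = len(g) - 1, len(h) - 1
        if dg < dh:
            return 0                 # deg(g) < deg(h) gives f(P_infty) = 0
        if dg > dh:
            return INF               # pole at infinity
        return (g[-1] * pow(h[-1], q - 2, q)) % q  # ratio of leading coeffs
    gv, hv = ev(g, place), ev(h, place)
    return INF if hv == 0 else (gv * pow(hv, q - 2, q)) % q

g, h = [1, 1], [2, 0, 1]             # f = (x + 1)/(x^2 + 2) over F_5
print(value(g, h, 3), value(g, h, INF))
```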
\subsection{Extension theory of function fields} Let $F/\mathbb{F}_q$ be an algebraic function field of genus $g$ with full constant field $\mathbb{F}_q$. Let $\mathbb{P}_F$ denote the set of places of $F$. Any place of degree one is called rational. From the Serre bound \cite[Theorem 5.3.1]{St09}, the number $N(F)$ of rational places of $F$ satisfies $$|N(F)-q-1|\le g \lfloor 2\sqrt{q}\rfloor. $$ Here $\lfloor x \rfloor$ stands for the integer part of $x\in \mathbb{R}$. Let $\nu_P$ be the normalized discrete valuation of $F$ with respect to the place $P\in \mathbb{P}_F$. The principal divisor of a nonzero element $z\in F$ is given by $(z):=\sum_{P\in \mathbb{P}_F}\nu_P(z)P$. For a divisor $G$ of $F/\mathbb{F}_q$, the Riemann-Roch space associated to $G$ is defined by \[\mathcal{L}(G):=\{z\in F^*:\; (z)+G\ge 0\}\cup\{0\}.\] If $\deg(G)\ge 2g-1$, then $\mathcal{L}(G)$ is a vector space over $\mathbb{F}_q$ of dimension $\deg(G)-g+1$ by the Riemann-Roch theorem \cite[Theorem 1.5.17]{St09}. Let $E/\mathbb{F}_q$ be a finite extension of the function field $F/\mathbb{F}_q$. The Hurwitz genus formula \cite[Theorem 3.4.13]{St09} yields $$2{g(E)}-2=[E:F]\cdot (2g(F)-2)+\deg \text{ Diff}(E/F),$$ where $\text{Diff}(E/F)$ stands for the different of $E/F$. For $P\in \mathbb{P}_F$ and $Q\in \mathbb{P}_E$ with $Q|P$, let $d(Q|P)$ and $e(Q|P)$ be the different exponent and the ramification index of $Q|P$, respectively. Then the different of $E/F$ is given by $\text{Diff}(E/F)=\sum_{Q\in\mathbb{P}_E} d(Q|P) Q.$ If $p\nmid e(Q|P)$, then $d(Q|P)=e(Q|P)-1$ by Dedekind's different theorem \cite[Theorem 3.5.1]{St09}.
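For a quick numerical illustration of the Serre bound, one can count the rational places of a small Kummer cover; the curve $y^2=x^3+x$ over $\mathbb{F}_{11}$ (genus $1$) is our own toy choice, counted via the quadratic character with $\chi(0)=0$.

```python
# Toy check (ours) of the Serre bound |N(F) - q - 1| <= g * floor(2*sqrt(q))
# on the genus-1 Kummer cover y^2 = x^3 + x over F_11.
from math import isqrt

q, g = 11, 1

def chi(v):  # quadratic character of F_q with chi(0) = 0
    v %= q
    if v == 0:
        return 0
    return 1 if pow(v, (q - 1) // 2, q) == 1 else -1

# Each x contributes 1 + chi(x^3 + x) affine points; there is one rational
# place at infinity since nu_infty(x^3 + x) = -3 is coprime to 2.
N = 1 + sum(1 + chi(x**3 + x) for x in range(q))
print(N, q + 1, g * isqrt(4 * q))   # isqrt(4q) = floor(2*sqrt(q))
```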
Let ${\rm Aut }(F/\mathbb{F}_q)$ denote the automorphism group of $F$ over $\mathbb{F}_q$, i.e., ${\rm Aut }(F/\mathbb{F}_q)=\{\sigma: F\rightarrow F \,|\; \sigma \mbox{ is an } \mathbb{F}_q\mbox{-automorphism of } F\}.$ We can consider the action of the automorphism group ${\rm Aut }(F/\mathbb{F}_q)$ on the set of places of $F$. From \cite[Lemma 1]{NX14}, we have the following results. \begin{lemma}\label{lem:2.1} For any automorphism $\sigma\in {\rm Aut }(F/\mathbb{F}_q)$, $P\in \mathbb{P}_F$ and $f\in F$, we have \begin{itemize} \item[(1)] $\deg(\sigma(P))=\deg(P)$; \item[(2)] $\nu_{\sigma(P)}(\sigma(f))=\nu_P(f)$; \item[(3)] $\sigma(f)(\sigma(P))=f(P)$ provided that $\nu_P(f)\ge 0$. \end{itemize} \end{lemma} From Lemma \ref{lem:2.1}, we have $\sigma(\mathcal{L}(D)) = \mathcal{L}(\sigma(D))$ for any divisor $D$ of $F$. In particular, if $\sigma(P)=P$, then $\sigma(\mathcal{L}(rP))=\mathcal{L}(\sigma(rP))=\mathcal{L}(rP)$ for any $r\in \mathbb{N}$. \subsection{Cyclotomic function fields} The basic theory of cyclotomic function fields was developed in the language of function fields by Hayes \cite{Ha74}. Let $x$ be an indeterminate over $\mathbb{F}_q$, $R$ be the polynomial ring $\mathbb{F}_q[x]$, $K$ be the rational function field $\mathbb{F}_q(x)$, and $K^{ac}$ be the algebraic closure of $K$. Let $\varphi$ be the endomorphism of $K^{ac}$ given by $$\varphi(z)=z^q+xz $$ for all $z\in K^{ac}$. Define a ring homomorphism $$R\rightarrow \text{End}_{\mathbb{F}_q}(K^{ac}),\quad f(x)\mapsto f(\varphi).$$ Then the $\mathbb{F}_q$-vector space $K^{ac}$ is made into an $R$-module by introducing the following action of $R$ on $K^{ac}$: $$ z^{f(x)}=f(\varphi)(z)$$ for all $f(x)\in R$ and $z\in K^{ac}$.
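The Carlitz-module action just defined can be checked symbolically in small cases; the sketch below (our own, for $q=p=3$, with a minimal dictionary representation of $\mathbb{F}_p[x,z]$ chosen by us) verifies the expansion $z^{x^2}=\varphi(\varphi(z))=z^{q^2}+(x^q+x)z^q+x^2z$, which follows from the freshman's dream in characteristic $p$.

```python
# Symbolic check of the Carlitz action z^{f(x)} = f(phi)(z), phi(z) = z^q + xz,
# over F_q with q = p = 3. Elements of F_p[x, z] are dicts {(i, j): coeff}
# encoding sum of coeff * x^i * z^j (our own minimal representation).
p = q = 3

def add(f, g):               # sum of two polynomials, coefficients mod p
    h = dict(f)
    for k, c in g.items():
        h[k] = (h.get(k, 0) + c) % p
        if h[k] == 0:
            del h[k]
    return h

def mul_x(f):                # multiplication by x
    return {(i + 1, j): c for (i, j), c in f.items()}

def frob(f):                 # q-th power: freshman's dream in characteristic p
    return {(i * q, j * q): c for (i, j), c in f.items()}  # c^q = c in F_p

def phi(f):                  # the Carlitz endomorphism phi(w) = w^q + x*w
    return add(frob(f), mul_x(f))

z = {(0, 1): 1}              # the element z itself
lhs = phi(phi(z))            # z^{x^2}
rhs = {(0, q * q): 1, (q, q): 1, (1, q): 1, (2, 1): 1}  # z^{q^2}+(x^q+x)z^q+x^2*z
print(lhs == rhs)
```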
For a nonzero polynomial $M\in R$, we consider the set of $M$-torsion points of $K^{ac}$ defined by $$\Lambda_M=\{z\in K^{ac}\,|\, z^M=0\}.$$ The cyclotomic function field over $K$ with modulus $M$ is the subfield of $K^{ac}$ generated over $K$ by all elements of $\Lambda_M$, and it is denoted by $K(\Lambda_M)$. Let $p(x)=x^2+ax+b$ be an irreducible polynomial in $\mathbb{F}_q[x]$. In particular, we have the following facts from \cite{MXY16,JMX22}. \begin{prop}\label{prop:2.2} Let $p(x)=x^2+ax+b$ be an irreducible polynomial in $\mathbb{F}_q[x]$. Let $F$ be the cyclotomic function field $K(\Lambda_{p(x)})$ with modulus $p(x)$ over $K$. Then the following results hold: \begin{itemize} \item[\rm (i)] $[F:K]=q^2-1$ and $F= K(\lambda), \text{ where } \lambda^{q^2-1} +(x^q+x+a)\lambda^{q-1} +x^2+ax+b = 0.$ \item[\rm (ii)] There is a unique place of $F$ lying over $p(x)$, and it is totally ramified in $F/K$. \item[\rm (iii)] The infinity place $\infty$ of $K$ splits into $q+1$ rational places, each with ramification index $q-1$ in the extension $F/K$. \item[\rm (iv)] All places of $K$ other than $p(x)$ and $\infty$ are unramified in $F/K$. \item[\rm (v)] The Galois group of $F$ over $K$ is ${\rm Gal}(F/K)\cong (\mathbb{F}_q[x]/(p(x)))^*$. Moreover, the automorphism $\sigma_f\in {\rm Gal}(F/K)$ associated to $\overline{f}\in (\mathbb{F}_q[x]/(p(x)))^*$ is determined by $\sigma_f(\lambda)=\lambda^f$. \end{itemize} \end{prop} \subsection{Kummer extensions} The theory of Kummer extensions of function fields can be summarized as follows from \cite[Proposition 3.7.3 and Corollary 3.7.4]{St09}. \begin{prop}\label{prop:2.3} Let $K$ be the rational function field $\mathbb{F}_q(x)$ and let $n$ be a positive divisor of $q-1$.
Suppose that $u\in K$ is an element satisfying $$u\neq \omega^d \text{ for all } \omega\in K \text{ and all } d\mid n \text{ with } d>1.$$ Let $F$ be the Kummer extension of $K$ defined by \[F=K(y) \text{ with } y^n=u.\] Then we have: \begin{itemize} \item[(a)] The polynomial $\phi(T)=T^n-u$ is the minimal polynomial of $y$ over $K$. The extension $F/K$ is a cyclic extension of degree $n$, and the automorphisms of $F/K$ are given by $\sigma(y)=\zeta y$, where $\zeta\in \mathbb{F}_q$ is an $n$-th root of unity. \item[(b)] Let $Q\in \mathbb{P}_F$ be an extension of $P\in \mathbb{P}_K$ and let $r_P$ be the greatest common divisor of $n$ and $\nu_P(u)$, i.e., $r_P=\gcd(n,\nu_P(u))$. Then one has \[e(Q|P)=\frac{n}{r_P}\quad \text{and} \quad d(Q|P)=\frac{n}{r_P}-1.\] \item[(c)] Assume that there is a place $P\in \mathbb{P}_K$ such that $\gcd(\nu_P(u),n)=1$. Then $\mathbb{F}_q$ is the full constant field of $F$. \end{itemize} \end{prop} \section{Binary sequences with a low correlation of length $q+1$}\label{sec:3} Let $p$ be an odd prime, $m$ be a positive integer, $q=p^m$ be a prime power and $\mathbb{F}_q$ be the finite field with $q$ elements. In this section, we construct a family of binary sequences with a low correlation of length $q+1$ via cyclotomic function fields over the finite field $\mathbb{F}_q$ of odd characteristic $p$. \subsection{Fixed subfields of cyclotomic function fields}\label{subsec:3.1} Let $K$ be the rational function field $\mathbb{F}_q(x)$ defined over $\mathbb{F}_q$. Let $p(x)=x^2+ax+b$ be a primitive irreducible polynomial in $\mathbb{F}_q[x]$. Let $F$ be the cyclotomic function field $K(\Lambda_{p(x)})$ with modulus $p(x)$ over $K$. From Proposition \ref{prop:2.2}, we have $F=K(\lambda)$ and the Galois group ${\rm Gal}(F/K)$ is an abelian group of order $q^2-1$. Moreover, we have the following result, which is similar to \cite[Proposition 3.2]{JMX22} for the case of odd characteristic.
\begin{lemma}\label{lem:3.1} Let $K$ be the rational function field $\mathbb{F}_q(x)$. Let $p(x)=x^2+ax+b$ be a primitive irreducible polynomial in $\mathbb{F}_q[x]$. Let $F$ be the cyclotomic function field $K(\Lambda_{p(x)})$ with modulus $p(x)$ over $K$. Then the Galois group ${\rm Gal}(F/K)$ is a cyclic group of order $q^2-1$. \end{lemma} \begin{proof} Let $\eta$ be the $K$-automorphism of $F$ defined by $\eta(\lambda)=\lambda^x$ from Proposition \ref{prop:2.2}. Then $\eta^i$ is determined by $\eta^i(\lambda)=\lambda^{x^i}$ for any positive integer $i$. Hence, $\eta^i=id$ if and only if $x^i\equiv 1\ (\text{mod } p(x))$. Since $p(x)$ is primitive, the order of $\eta$ is $q^2-1$. \end{proof} From Lemma \ref{lem:3.1}, there exists a unique subgroup $G$ of ${\rm Gal}(F/K)$ of order $q-1$. This subgroup $G$ can be determined explicitly as follows. \begin{prop}\label{prop:3.2} Let $\eta$ be the generator of ${\rm Gal}(F/K)$ determined by $\eta(\lambda)=\lambda^x$ and let $\tau=\eta^{q+1}$ be an automorphism of $F$. Then the unique subgroup $G$ of ${\rm Gal}(F/K)$ of order $q-1$ is the cyclic group generated by $\tau$, i.e., $$G=\langle \tau \rangle=\{\tau_c\in {\rm Gal}(F/K): \tau_c(\lambda)=c\lambda \text{ for } c\in \mathbb{F}_q^*\}.$$ \end{prop} \begin{proof} From Eisenstein's irreducibility criterion \cite[Proposition 3.1.15]{St09}, $x^2+ax+b$ divides $x^q+x+a$, i.e., $$x^q\equiv -x-a \ (\text{mod } x^2+ax+b).$$ Let $\tau=\eta^{q+1}$. It is easy to verify that $\tau(\lambda)=\eta^{q+1}(\lambda)=\lambda^{x^{q+1}}=\lambda^{-x(x+a)}=\lambda^b=b\lambda$, since $x^{q+1}=x\cdot x^q\equiv -x(x+a)\equiv b\ (\text{mod } x^2+ax+b).$ From \cite[Lemma 3.17]{LN83}, the constant $b$ is a generator of $\mathbb{F}_q^*$.
Hence, the order of $\tau$ is $q-1$ and $G=\langle \tau \rangle=\{\tau_c\in {\rm Gal}(F/K): \tau_c(\lambda)=c\lambda \text{ for } c\in \mathbb{F}_q^*\}$ from Proposition \ref{prop:2.2}. \end{proof} Let $E$ be the fixed subfield of $F$ with respect to $G$, that is, $E=F^G=\{z\in F: \sigma(z)=z \text{ for any } \sigma\in G\}.$ From Galois theory, $E/K$ is an abelian extension of degree $q+1$. In particular, the fixed subfield $E$ can be characterized explicitly as follows. \begin{prop}\label{prop:3.3} Let $u=\lambda^x/\lambda=\lambda^{q-1}+x$. Then we have $F=K(\lambda)=\mathbb{F}_q(u,\lambda)$ and $F/\mathbb{F}_q(u)$ is a Kummer extension determined by $$\lambda^{q-1}=-\frac{u^2+au+b}{u^q-u}.$$ Moreover, $u$ and $x$ satisfy the following equation $$\frac{u^{q+1}+au+b}{u^q-u}=x.$$ \end{prop} \begin{proof} From Proposition \ref{prop:2.2} or \cite[Section 4]{MXY16}, the cyclotomic function field $F=K(\Lambda_{p(x)})$ is given by $F=K(\lambda)$ with $\lambda^{q^2-1} +(x^q+x+a)\lambda^{q-1} +x^2+ax+b = 0.$ Substituting $x$ with $u-\lambda^{q-1}$, one has $ \lambda^{q^2-1}+[(u-\lambda^{q-1})^q+(u-\lambda^{q-1})+a]\lambda^{q-1}+(u-\lambda^{q-1})^2+a(u-\lambda^{q-1})+b=0.$ It follows that $$\lambda^{q-1}=-\frac{u^2+au+b}{u^q-u}.$$ Since $x=u-\lambda^{q-1}$, we have $F=K(\lambda)=\mathbb{F}_q(x,\lambda)=\mathbb{F}_q(u,\lambda)$ and $$x=u-\lambda^{q-1}=u+\frac{u^2+au+b}{u^q-u}=\frac{u^{q+1}+au+b}{u^q-u}. $$ \end{proof} \begin{prop}\label{prop:3.4} Let $\eta$ be the generator of ${\rm Gal}(F/K)$ determined by $\eta(\lambda)=\lambda^x$, let $\tau=\eta^{q+1}$ and let $G=\langle \tau \rangle$. Then the fixed subfield of $F$ with respect to $G$ is $E=F^G=\mathbb{F}_q(u)$. \end{prop} \begin{proof} It is easy to verify that $\tau(\lambda^{q-1})=(\tau(\lambda))^{q-1}=(b\lambda)^{q-1}=\lambda^{q-1}$. From Proposition \ref{prop:3.3}, we have $\mathbb{F}_q(u)=\mathbb{F}_q(u,x)=\mathbb{F}_q(x,x+\lambda^{q-1})=\mathbb{F}_q(x,\lambda^{q-1})\subseteq F^{\langle \tau\rangle}=F^G.$ From Galois theory, the degree of the extension $F/F^G$ is $[F:F^G]=|G|=q-1$. From the proof of Proposition \ref{prop:3.3}, $F/\mathbb{F}_q(u)$ is a Kummer extension given by $$\lambda^{q-1}=-\frac{u^2+au+b}{u^q-u}.$$ From Proposition \ref{prop:2.3}, we have $[F:\mathbb{F}_q(u)]=q-1$. Hence, we have $E=F^G=\mathbb{F}_q(u)$. \end{proof} Let $Q$ be a place of $E$ lying over $p(x)$. From Proposition \ref{prop:2.2}, $Q|p(x)$ is totally ramified in $E/K$ with ramification index $e(Q|p(x))=q+1$ and different exponent $d(Q|p(x))=q$. From the Hurwitz genus formula \cite[Theorem 3.4.13]{St09}, we obtain $$-2=2g(E)-2\ge (q+1)[2g(K)-2]+2q.$$ It follows that all other places of $E$ except $Q$ are unramified in $E/K$. Moreover, the place $Q$ can be characterized similarly to \cite[Proposition 4.2]{JMX22}. \begin{lemma}\label{lem:3.5} Let $Q$ be the unique place of $E$ lying over $p(x)$. Then the place $Q$ corresponds to the monic quadratic irreducible polynomial $p(u)=u^2+au+b$.
\end{lemma} \begin{proof} From Proposition \ref{prop:2.2}, $p(x)$ is totally ramified in $F/\mathbb{F}_q(x)$. Hence, $Q$ is totally ramified in $F/\mathbb{F}_q(u)$ with $\deg(Q)=2$. From Proposition \ref{prop:2.3} and Proposition \ref{prop:3.3}, the zeros of $u^2+au+b$, $1/u$ and $u-\alpha$ with $\alpha\in \mathbb{F}_q$ are all totally ramified in the extension $F/\mathbb{F}_q(u)$. However, the zero of $u^2+au+b$ is the unique totally ramified place of $\mathbb{F}_q(u)$ of degree $2$ in the extension $F/\mathbb{F}_q(u)$. Hence, the place $Q$ corresponds to the monic quadratic polynomial $p(u)=u^2+au+b$. \end{proof} \begin{theorem}\label{thm:3.6} Let $\eta$ be the generator of ${\rm Gal}(F/K)$ determined by $\eta(\lambda)=\lambda^x$ and let $\sigma=\eta^{q}|_E$. Then $\sigma$ is a $K$-automorphism of $E$ of order $q+1$, and it can be determined explicitly by $$\sigma(u)=\frac{-b}{u+a}.$$ In particular, the Galois group of $E/K$ is the cyclic group ${\rm Gal}(E/K)=\langle \sigma \rangle.$ \end{theorem} \begin{proof} From Galois theory, we have ${\rm Gal}(E/K)\cong {\rm Gal}(F/K)/{\rm Gal}(F/E)= \langle \eta \rangle/ \langle \eta^{q+1} \rangle$. Thus, the order of $\eta|_E$ is $q+1$ in the group ${\rm Gal}(E/K)$. Since $\gcd(q,q+1)=1$, the order of $\sigma=\eta^{q}|_E$ is also $q+1$. It is easy to verify that $\sigma(u)=\sigma(\lambda^x/\lambda)=\lambda^{x^{q+1}}/\lambda^{x^{q}}$. From the proof of Proposition \ref{prop:3.2}, we have $x^q\equiv -x-a\ (\text{mod } x^2+ax+b)$ and $x^{q+1}=x\cdot x^q\equiv x(-x-a)\equiv b\ (\text{mod } x^2+ax+b)$.
It follows that $\lambda^{x^q}=\lambda^{-x-a}$, $\lambda^{x^{q+1}}=\lambda^b$ and \begin{align*} \sigma(u)&=\frac{\lambda^{x^{q+1}}}{\lambda^{x^{q}}}=\frac{\lambda^{b}}{\lambda^{-x-a}}=\frac{b\lambda}{-\lambda^x-a\lambda}=\frac{-b}{u+a}\in E. \end{align*} Hence, $\sigma$ is a $K$-automorphism of $E$ and ${\rm Gal}(E/K)=\langle \sigma \rangle.$ \end{proof} \subsection{An equivalence relation on $\mathcal{L}(Q)\setminus \{0\}$}\label{subsec:3.2} Let $Q$ be the place of $E$ of degree two corresponding to the irreducible polynomial $p(u)=u^2+au+b$ from Lemma \ref{lem:3.5}. Since $\deg(Q)=2\ge 2g(E)-1=-1$, the dimension of the Riemann-Roch space $\mathcal{L}(Q)$ is $\ell(Q)=\deg(Q)-g(E)+1=3$ by the Riemann-Roch theorem. It is clear that $\mathbb{F}_q=\mathcal{L}(0)\subseteq \mathcal{L}(Q)$. Furthermore, $\mathcal{L}(Q)$ can be determined explicitly as $$\mathcal{L}(Q)=\left\{\frac{c_0+c_1u+c_2u^2}{u^2+au+b}\in E: c_i\in \mathbb{F}_q \text{ for } 0\le i\le 2\right\}.$$ Now we define a relation $\sim$ on $\mathcal{L}(Q)\setminus \{0\}$ as follows: for any $z_1,z_2\in \mathcal{L}(Q)\setminus \{0\}$, \[z_1\sim z_2 \Leftrightarrow \exists \tau\in {\rm Gal}(E/K) \text{ such that } z_1\cdot \tau(z_2)\in \mathbb{F}_q\cdot E^2.\] Here $E^2$ stands for the set $\{z^2: z\in E\}$. \begin{prop}\label{prop:3.7} The relation $\sim$ defined above is an equivalence relation on the set $\mathcal{L}(Q)\setminus \{0\}$.
\end{prop} \begin{proof} It is clear that the three axioms of an equivalence relation are satisfied: \begin{itemize} \item[(1)] {\bf Reflexivity:} For any $z\in \mathcal{L}(Q)\setminus \{0\}$, $z\cdot id(z)=z\cdot z=z^2\in \mathbb{F}_q\cdot E^2$. Hence, we have $z\sim z$. \item[(2)] {\bf Symmetry:} If $z_1\sim z_2$, then there exists $\tau\in {\rm Gal}(E/K)$ such that $z_1\cdot \tau(z_2)=\alpha\cdot v^2\in \mathbb{F}_q\cdot E^2$ for some $\alpha\in \mathbb{F}_q$ and $v\in E$. It follows that $z_2\cdot \tau^{-1}(z_1)=\tau^{-1}(\alpha\cdot v^2)=\tau^{-1}(\alpha)\cdot \tau^{-1}( v^2)=\alpha\cdot (\tau^{-1}(v))^2\in \mathbb{F}_q\cdot E^2$. Hence, we have $z_2\sim z_1$. \item[(3)] {\bf Transitivity:} If $z_1\sim z_2$ and $z_2\sim z_3$, then there exist $\tau_1$ and $\tau_2$ in ${\rm Gal}(E/K)$ such that $z_1\cdot \tau_1(z_2)=\alpha_1\cdot v_1^2\in \mathbb{F}_q\cdot E^2$ and $z_2\cdot \tau_2(z_3)=\alpha_2\cdot v_2^2\in \mathbb{F}_q\cdot E^2$ for some $\alpha_1,\alpha_2\in \mathbb{F}_q$. It follows that $z_1\cdot \tau_1(z_2)\cdot \tau_1(z_2\cdot \tau_2(z_3))=z_1\cdot (\tau_1(z_2))^2\cdot \tau_1\tau_2(z_3)=\alpha_1\alpha_2\cdot v_1^2(\tau_1(v_2))^2\in \mathbb{F}_q\cdot E^2$, i.e., $z_1\cdot \tau_1\tau_2(z_3)\in \mathbb{F}_q\cdot E^2$. Hence, we have $z_1\sim z_3$. \end{itemize} \end{proof} For any element $z\in \mathcal{L}(Q)\setminus \{0\}$, let $[z]$ denote the equivalence class $\{w\in \mathcal{L}(Q)\setminus \{0\}: w\sim z\}$ containing $z$. In this subsection, we want to determine representative elements of the equivalence classes of $\mathcal{L}(Q)\setminus \{0\}$ under the relation $\sim$. Since $\alpha \cdot z\in [z]$ for any $\alpha\in \mathbb{F}_q^*$, the relation $\sim$ induces an equivalence relation on the set $V:=(\mathcal{L}(Q)\setminus \{0\})/\mathbb{F}_q^*$. Hence, it suffices to determine representative elements of the equivalence classes of $V$.
Let $S_1$ be the set $$S_1=\left\{\frac{u^2+cu+d}{u^2+au+b}\notin \mathbb{F}_q: u^2+cu+d \text{ is an irreducible polynomial in } \mathbb{F}_q[u]\right\},$$ let $S_2$ be the set $$S_2=\left\{\frac{u-\alpha}{u^2+au+b}: \alpha\in \mathbb{F}_q\right\}\cup \left\{\frac{(u-\alpha)(u-\beta)}{u^2+au+b}: \alpha\neq \beta\in \mathbb{F}_q\right\},$$ and let $S_3$ be the set $$S_3=\left\{\frac{1}{u^2+au+b}\right\}\cup \left\{\frac{(u-\alpha)^2}{u^2+au+b}: \alpha\in \mathbb{F}_q\right\}.$$ From \cite[Corollary 3.21]{LN83}, the number of monic irreducible polynomials of degree $2$ in $\mathbb{F}_q[u]$ is $(q^2-q)/2$. Hence, the cardinality of $S_1$ is $|S_1|=(q^2-q)/2-1=(q-2)(q+1)/2.$ By choosing representatives whose numerators are monic, we can identify $V$ with $S_1\cup S_2 \cup S_3\cup \{1\}$. It is easy to see that all elements in $S_3$ are equivalent to $1/(u^2+au+b)$. In order to determine the equivalence classes of the set $S_1$ under the relation $\sim$, we need to study the ramification behavior of places of $E$ of degree $2$ in the extension $E/K$. The following result can be found in \cite[Proposition 1.4.12]{NX01}. \begin{lemma}\label{lem:3.8} Let $F/K$ be an abelian extension of function fields and let $E$ be an intermediate field of $F/K$. Assume that the place $P$ of $K$ is unramified in $F/K$. Then $P$ splits completely in $E/K$ if and only if the Artin symbol $[\frac{F/K}{P}]$ belongs to ${\rm Gal}(F/E)$. \end{lemma} Now let us study the ramification behavior of places of $K$ of degree $2$ in the abelian extension $E/K$. \begin{prop}\label{prop:3.9} There are exactly $(q-3)/2$ distinct places of $K$ of degree $2$ which split completely in $E$, and there exists a unique rational place of $K$ which splits into $(q+1)/2$ places of $E$ of degree $2$. All such places of $E$ of degree $2$ correspond to the numerators of elements in $S_1$.
\end{prop} \begin{proof} In total, there are $(q^2-q)/2$ places of degree $2$ of the rational function field $E$ over $\mathbb{F}_q$ from \cite[Corollary 3.21]{LN83} and \cite[Proposition 1.2.1]{St09}. Let $Q_i$ be the pairwise distinct places of $E$ of degree $2$ other than $Q$ and let $P_i$ be their restrictions to $K$ for $1\le i\le (q-2)(q+1)/2$. From Proposition \ref{prop:2.2}, $Q_i|P_i$ is unramified in $E/K$ and the Artin symbol of $P_i\in \mathbb{P}_K$ in the abelian extension $F/K$ is given by $$\left[\frac{F/K}{P_i}\right](\lambda)=\lambda^{P_i}.$$ Since $E$ is the fixed subfield of $F$ with respect to $G$ from Proposition \ref{prop:3.2} and Proposition \ref{prop:3.4}, the Galois group of $F/E$ is ${\rm Gal}(F/E)=G=\{\tau_c\in {\rm Gal}(F/K): \tau_c(\lambda)=c\lambda \text{ for } c\in \mathbb{F}_q^*\}.$ From Lemma \ref{lem:3.8}, $P_i$ splits completely in $E/K$ if and only if there exists an element $\delta_i\in \mathbb{F}_q^*$ such that $$\left[\frac{F/K}{P_i}\right](\lambda)=\lambda^{P_i}=\lambda^{\delta_i}.$$ Hence, $P_i=x^2+ax+b+\delta_i$ must be a quadratic irreducible polynomial in $\mathbb{F}_q[x]$. In fact, the quadratic polynomial $P_i$ is irreducible if and only if there does not exist $\alpha\in \mathbb{F}_q$ such that $\alpha^2+a\alpha+b+\delta_i=0$, i.e., for any $\alpha\in \mathbb{F}_q$, we have $$\delta_i\neq -(\alpha^2+a\alpha+b)=-\left(\alpha+\frac{a}{2}\right)^2+\frac{a^2}{4}-b\in -\mathbb{F}_q^2+\frac{a^2}{4}-b.$$ Hence, $P_i=x^2+ax+b+\delta_i$ with $\delta_i\in \mathbb{F}_q^*\setminus (-\mathbb{F}_q^2+a^2/4-b)$ are irreducible polynomials in $\mathbb{F}_q[x]$.
Since the cardinality of $-\mathbb{F}_q^2+a^2/4-b$ is $(q+1)/2$, there are exactly $q-1-(q+1)/2=(q-3)/2$ different choices of $\delta_i\in \mathbb{F}_q^*$ such that $P_i$ is irreducible, i.e., there are exactly $(q-3)/2$ places of $K$ with degree $2$ which split completely into $(q-3)(q+1)/2$ places of $E$ with degree $2$. Let $P$ be a place of $K$ which is unramified in $F/K$ and let $R$ be any place of $F$ lying over $P$. The order of the Artin symbol $\left[\frac{F/K}{P}\right]$ is $f(R|P)=\deg(R)/\deg(P)$. Since there are $(q-2)(q+1)/2$ places of $E$ with degree $2$ except $Q$, the remaining $(q+1)/2$ places of $E$ with degree $2$ must lie over a rational place of $K$ from \cite[Theorem 3.7.2]{St09}. \end{proof} From Proposition \ref{prop:3.9}, $P_i=x^2+ax+b+\delta_i$ with $\delta_i\in \mathbb{F}_q^*\setminus (-\mathbb{F}_q^2+a^2/4-b)$ are distinct places of $K$ of degree $2$ which split completely in $E/K$. Let $Q_i=u^2+c_iu+d_i\in \mathbb{F}_q[u]$ be any place of $E$ lying over $P_i$ for $1\le i\le (q-3)/2$. We now characterize the equivalence classes of $S_1$. Let $R_1$ be the set defined by $$R_1:=\left\{z_i=\frac{u^2+c_iu+d_i}{u^2+au+b}: 1\le i\le (q-3)/2\right\}.$$ \begin{prop}\label{prop:3.10} For any $z_i\neq z_j\in R_1$, one has $z_i\cdot \tau(z_j)\notin \mathbb{F}_q\cdot E^2$ for any $\tau\in {\rm Gal}(E/K)$, i.e., $[z_i]$ are distinct equivalence classes of $\mathcal{L}(Q)\setminus \{0\}$ for $1\le i\le (q-3)/2$. \end{prop} \begin{proof} The principal divisor of $z_i=(u^2+c_iu+d_i)/(u^2+au+b)$ is given by $$(z_i)=\left(\frac{u^2+c_iu+d_i}{u^2+au+b}\right)=Q_i-Q.$$ From Lemma \ref{lem:2.1}, the principal divisor of $\tau(z_i)$ is $(\tau(z_i))=\tau((z_i))=\tau(Q_i-Q)=\tau(Q_i)-\tau(Q).$ Since $Q$ is totally ramified in $E/K$, one has $\tau(Q)=Q$ for $\tau\in {\rm Gal}(E/K)$.
For $z_i\neq z_j\in R_1$, we have $Q_i\neq Q_j$ and the principal divisor of $z_i\cdot \tau(z_j)$ is given by $$ (z_i\cdot \tau(z_j))=Q_i-Q+\tau(Q_j)-\tau(Q)=Q_i+\tau(Q_j)-2Q.$$ Since $Q_i$ and $Q_j$ lie over distinct places of $K$ with degree $2$ from Proposition \ref{prop:3.9}, we have $\tau(Q_j)\neq Q_i$ for any $\tau \in {\rm Gal}(E/K)$. Hence, we have $z_i\cdot \tau(z_j)\notin \mathbb{F}_q\cdot E^2$. \end{proof} From Proposition \ref{prop:3.10}, $[z_i]$ are pairwise distinct equivalence classes of $\mathcal{L}(Q)\setminus \{0\}$ for $1\le i\le (q-3)/2$. In particular, their representative elements satisfy the following property. \begin{prop}\label{prop:3.11} For any $z_i\in R_1$ with $1\le i\le (q-3)/2$ and any automorphism $\tau\in {\rm Gal}(E/K)\setminus \{id\}$, one has $z_i\cdot \tau(z_i)\notin \mathbb{F}_q\cdot E^2$. \end{prop} \begin{proof} The principal divisor of $z_i=(u^2+c_iu+d_i)/(u^2+au+b)$ is given by $$(z_i)=\left(\frac{u^2+c_iu+d_i}{u^2+au+b}\right)=Q_i-Q.$$ From Lemma \ref{lem:2.1} and Proposition \ref{prop:3.10}, the principal divisor of $z_i\cdot \tau(z_i)$ is given by $$\left(z_i\cdot \tau(z_i)\right)=Q_i-Q+\tau(Q_i)-\tau(Q)=Q_i+\tau(Q_i)-2Q.$$ Since $P_i$ splits completely in $E/K$ from Proposition \ref{prop:3.9}, we have $\tau(Q_i)\neq Q_i$. Hence, one has $z_i\cdot \tau(z_i)\notin \mathbb{F}_q\cdot E^2$. \end{proof} In the following, we want to determine the equivalence classes of $S_2$. Since $E$ is a rational function field over $\mathbb{F}_q$, there exist exactly $q+1$ rational places of $E$, and they are the places lying over the infinity place $\infty$ of $K$ from Proposition \ref{prop:2.2}. From Theorem \ref{thm:3.6}, the Galois group ${\rm Gal}(E/K)$ is a cyclic group generated by an automorphism $\sigma$ with order $q+1$. Let $P_0$ be the zero of $u$ in $E$ and $P_{\sigma^j}=\sigma^j(P_0)$ for $0\le j\le q$.
From \cite[Theorem 3.7.1]{St09}, $P_{\sigma^j}$ are distinct rational places of $E$ for $0\le j\le q$. In particular, we have $P_{\sigma^1}=\sigma(P_0)=P_\infty$ from Theorem \ref{thm:3.6}. Let $\alpha_j$ be the element $u(P_{\sigma^j})$ in $\mathbb{F}_q$ for each $0\le j\neq 1\le q$. Then we have $P_{\sigma^j}=\sigma^j(P_0)=P_{u-\alpha_j}$. It is easy to see that $\alpha_j$ are pairwise distinct elements of $\mathbb{F}_q$ for $0\le j\neq 1\le q$. Let $w_j=(u-\alpha_j)/(u^2+au+b)$ for $j\neq 1$ and let $R_2$ be the set defined by $$R_2:=\left\{w_j=\frac{u-\alpha_j}{u^2+au+b}: 2\le j\le (q+1)/2\right\}.$$ \begin{prop}\label{prop:3.12} For $w_i\neq w_j\in R_2$, one has $w_i\cdot \sigma^{t}(w_j)\notin \mathbb{F}_q\cdot E^2$ for any $0\le t\le q$, i.e., $[w_j]$ are pairwise distinct equivalence classes of $\mathcal{L}(Q)\setminus \{0\}$ for $2\le j\le (q+1)/2$. \end{prop} \begin{proof} The principal divisor of $w_i$ is given by $$(w_i)=P_{\sigma^i}+P_\infty-Q=\sigma^i(P_0)+\sigma(P_0)-Q.$$ From Lemma \ref{lem:2.1}, the principal divisor of $\sigma^t(w_j)$ for each $0\le t\le q$ is $$(\sigma^t(w_j))=\sigma^t(P_{\sigma^j})+\sigma^t(P_\infty)-\sigma^t(Q)=\sigma^{t+j}(P_0)+\sigma^{t+1}(P_0)-Q.$$ If $w_i \sim w_j$ for $2\le j\neq i\le (q+1)/2$, i.e., $w_i\cdot \sigma^t(w_j)\in \mathbb{F}_q\cdot E^2$ for some $0\le t\le q$, then we have $$\begin{cases} \sigma^{t+j}(P_0)=\sigma^i(P_0)\\ \sigma^{t+1}(P_0)=\sigma(P_0)\end{cases} \text{ or }\quad \begin{cases} \sigma^{t+j}(P_0)=\sigma(P_0)\\ \sigma^{t+1}(P_0)=\sigma^i(P_0)\end{cases}.$$ If $\sigma^{t+1}(P_0)=\sigma(P_0)$, then we have $t=0$. It follows that $j=i$, which is impossible. If $\sigma^{t+1}(P_0)=\sigma^i(P_0)$, then there exists an integer $t$ with $1\le t\le q$ such that $t+1=i$ and $t+j=1+q+1$. Hence, we obtain $i+j=q+3$, which is a contradiction since $i+j\le q+1$. \end{proof} Now we want to classify all elements of $S_2$ which are equivalent to $w_j$ for each $2\le j\le (q+1)/2$. From the definition of the equivalence relation $\sim$, it is clear that $\sigma^{t}(w_j)$ is equivalent to $w_j$ for any $0\le t\le q$. From Lemma \ref{lem:2.1}, the principal divisor of $\sigma^t(w_j)$ for each $1\le t\le q$ is given by $(\sigma^t(w_j))=\sigma^{t+j}(P_0)+\sigma^{t+1}(P_0)-Q.$ If $\sigma^{t+j}(P_0)=P_\infty$, i.e., $t+j=1+(q+1)$, then $t+1=q+3-j$ and $w_j\sim w_{q+3-j}$ from the proof of Proposition \ref{prop:3.12}. If $\sigma^{t+j}(P_0)\neq P_\infty$, i.e., $t+j\neq 1+(q+1)$, then $$w_j\sim \frac{(u-\alpha_{t+j})(u-\alpha_{t+1})}{u^2+au+b}. $$ Hence, the equivalence class $[w_j]$ contains at least $q+1$ elements in $S_2$ for each $2\le j\le (q+1)/2$. For $j=(q+3)/2$, it is easy to verify that $w_{j}\cdot \sigma^{j-1}(w_{j})\in \mathbb{F}_q\cdot E^2$ and $[w_j]$ contains at least $(q+1)/2$ elements in $S_2$. The cardinality of $S_2$ is $q(q+1)/2$. Hence, $\{[w_i]: 2\le i\le (q+3)/2\}$ are all the distinct equivalence classes of $S_2$. Furthermore, these representative elements in $R_2$ have the following property. \begin{prop}\label{prop:3.13} For any $w_j\in R_2$ with $2\le j\le (q+1)/2$, one has $w_j\cdot \sigma^{t}(w_j)\notin \mathbb{F}_q\cdot E^2$ for $1\le t\le q$, i.e., $w_j\cdot \tau(w_j)\notin \mathbb{F}_q\cdot E^2$ for any $\tau\in {\rm Gal}(E/K)\setminus \{id\}$.
\end{prop} \begin{proof} The principal divisor of $w_j\cdot \sigma^t(w_j)$ for each $1\le t\le q$ is given by \begin{align*} (w_j\cdot \sigma^t(w_j))&=P_{\sigma^j}+P_\infty-Q+\sigma^t(P_{\sigma^j})+\sigma^t(P_\infty)-\sigma^t(Q)\\ &=\sigma^j(P_0)+\sigma(P_0)+\sigma^{t+j}(P_0)+\sigma^{t+1}(P_0)-2Q. \end{align*} Assume that $w_j\cdot \sigma^t(w_j)\in \mathbb{F}_q \cdot E^2$. Since $1\le t\le q$, we must have $\sigma^{t+1}(P_0)=\sigma^j(P_0)$ and $\sigma^{t+j}(P_0)=\sigma(P_0)$, i.e., $t+1=j$ and $t+j=1+q+1$. Hence, we obtain $j=(q+3)/2$, which is a contradiction. This completes the proof. \end{proof} \begin{theorem}\label{thm:3.14} For any $z_i\in R_1$ and $w_j\in R_2$, we have $z_i\cdot \sigma^{t}(w_j)\notin \mathbb{F}_q\cdot E^2$ and $w_j\cdot \sigma^{t}(z_i)\notin \mathbb{F}_q\cdot E^2$ for $0\le t\le q$, i.e., $[z_i]$ and $[w_j]$ are pairwise distinct equivalence classes of $\mathcal{L}(Q)\setminus \{0\}$ for $1\le i\le (q-3)/2$ and $2\le j\le (q+1)/2$. \end{theorem} \begin{proof} The principal divisor of $z_i\cdot \sigma^t(w_j)$ for each $0\le t\le q$ is given by \begin{align*} (z_i\cdot \sigma^t(w_j))&=Q_i-Q+\sigma^t(P_{\sigma^j})+\sigma^t(P_\infty)-Q\\ &=Q_i+\sigma^{t+j}(P_0)+\sigma^{t+1}(P_0)-2Q. \end{align*} Since $Q_i$ has degree $2$ while $\sigma^{t+j}(P_0)$ and $\sigma^{t+1}(P_0)$ are rational places, it is easy to see that $z_i\cdot \sigma^{t}(w_j)\notin \mathbb{F}_q\cdot E^2$ for $0\le t\le q$. From Proposition \ref{prop:3.10} and Proposition \ref{prop:3.12}, $[z_i]$ and $[w_j]$ are pairwise distinct equivalence classes of $\mathcal{L}(Q)\setminus \{0\}$ for $1\le i\le (q-3)/2$ and $2\le j\le (q+1)/2$. \end{proof} \subsection{Binary sequences with a low correlation of length $q+1$}\label{subsec:3.3} In this subsection, we provide a construction of binary sequences with a low correlation of length $q+1$ via cyclotomic function fields over finite fields of odd characteristic.
From Proposition \ref{prop:3.10}, let $[z_1],[z_2],\cdots,[z_{\frac{q-3}{2}}]$ be the pairwise distinct equivalence classes represented by the elements $z_i\in R_1$. From Proposition \ref{prop:3.12}, let $[w_2],[w_3],\cdots,[w_{\frac{q+1}{2}}]$ be the pairwise distinct equivalence classes represented by the elements $w_j\in R_2$. For simplicity, let $z_{\frac{q-5}{2}+j}=w_j$ for $2\le j\le (q+1)/2$. From Theorem \ref{thm:3.14}, $[z_i]$ are pairwise distinct equivalence classes of $\mathcal{L}(Q)\setminus \{0\}$ for $1\le i\le q-2$. Let $\eta$ be the quadratic character of $\mathbb{F}_q^*$, i.e., $$\eta(\alpha)=\begin{cases} 1 &\text{ if } \alpha \text{ is a square in } \mathbb{F}_q^*,\\ -1 & \text{ if } \alpha \text{ is a non-square in } \mathbb{F}_q^*,\end{cases}$$ and extend $\eta$ to a map from $\mathbb{F}_q$ to $\mathbb{C}^*$ by setting $\eta(0)=1$. Let $P_0$ be the zero of $u$ in $E$ and $P_{\sigma^j}=\sigma^j(P_0)$ for $0\le j\le q$. For each equivalence class $[z_i]$ with $1\le i\le q-2$, we define a sequence $s_i$ as follows: $$s_i=(s_{i,0},s_{i,1},\cdots,s_{i,q}) \text{ with } s_{i,j}=\eta(z_i(P_{\sigma^j})) \text{ for } 0\le j\le q.$$ In the following, we will show that this family of binary sequences $\{s_i: 1\le i\le q-2\}$ with length $q+1$ has a low correlation.
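When $q=p$ is an odd prime, so that $\mathbb{F}_q=\mathbb{Z}/p\mathbb{Z}$, the character $\eta$ and the periodic correlations bounded in the next two propositions can be sketched in a few lines of Python; this is only an illustration, and the names \texttt{eta} and \texttt{correlation} are ours.

```python
# Minimal sketch, assuming q = p is an odd prime so that F_q = Z/pZ.
# eta is the quadratic character extended by eta(0) = 1, computed via
# Euler's criterion; correlation is the periodic correlation at a delay.

def eta(alpha, p):
    """Quadratic character of Z/pZ with the convention eta(0) = 1."""
    if alpha % p == 0:
        return 1
    # Euler's criterion: alpha^((p-1)/2) = 1 mod p exactly for nonzero squares.
    return 1 if pow(alpha, (p - 1) // 2, p) == 1 else -1

def correlation(s, t, delay):
    """Periodic correlation sum_k s_k * t_{k+delay} of two {+1,-1}-sequences."""
    n = len(s)
    return sum(s[k] * t[(k + delay) % n] for k in range(n))
```

Since $\eta$ restricted to $\mathbb{F}_p^*$ is a nontrivial multiplicative character, it takes the values $1$ and $-1$ equally often on the nonzero residues, so it sums to zero over $\mathbb{F}_p^*$.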
\begin{prop}\label{prop:3.15} If $q=p^m$ is a power of an odd prime, then the autocorrelation of $s_i$ with $1\le i\le q-2$ at delay $t$ for $1\le t\le q$ is upper bounded by $$|A_t(s_i)|\le 4+\lfloor 2\sqrt{q}\rfloor.$$ \end{prop} \begin{proof} The autocorrelation of $s_i$ at delay $t$ for $1\le t\le q$ is given by \begin{align*} A_t(s_i)&= \sum_{j=0}^{q} s_{i,j}s_{i,j+t}= \sum_{j=0}^q \eta(z_i(P_{\sigma^j}))\cdot \eta(z_i(P_{\sigma^{j+t}}))\\ &= \sum_{j=0}^q \eta(z_i(P_{\sigma^j}))\cdot \eta((\sigma^{-t}(z_i))(P_{\sigma^j})). \end{align*} Let $Z_i=\{0\le j\le q: (z_i\sigma^{-t}(z_i))(P_{\sigma^j})=0\}$. It is easy to see that $|Z_i|\le 4$, since $z_i\in \mathcal{L}(Q)$ and $z_i\sigma^{-t}(z_i)\in \mathcal{L}(2Q)$ from Proposition \ref{prop:3.11} and Proposition \ref{prop:3.13}. Thus, we have $$A_t(s_i)= \sum_{j\in Z_i} \eta(z_i(P_{\sigma^j}))\cdot \eta(z_i(P_{\sigma^{j+t}}))+\sum_{j\notin Z_i} \eta((z_i\sigma^{-t}(z_i))(P_{\sigma^j})).$$ For any $z_i\in R_1\cup R_2$, we have $z_i\sigma^{-t}(z_i)\notin \mathbb{F}_q\cdot E^2$ for $1\le t\le q$ from Proposition \ref{prop:3.11} and Proposition \ref{prop:3.13}. Consider the Kummer extension $E_i/E$ given by $E_i=E(y)$ with $$y^2=z_i\cdot \sigma^{-t}(z_i).$$ If $z_i\in R_1$, then $(z_i\cdot \sigma^{-t}(z_i))=Q_i+\sigma^{-t}(Q_i)-2Q$ from Proposition \ref{prop:3.11}. Hence, there are two places of $E$ with degree $2$ which are totally ramified in $E_i/E$ from Proposition \ref{prop:2.3}. If $z_i\in R_2$, then $(z_i\cdot \sigma^{-t}(z_i))=\sigma^i(P_0)+\sigma(P_0)+\sigma^{i-t}(P_0)+\sigma^{1-t}(P_0)-2Q$ from Proposition \ref{prop:3.13}.
Hence, there are at most four rational places of $E$ which are totally ramified in $E_i/E$. In both cases, we have $\deg \text{Diff}(E_i/E)\le 4$. Furthermore, $\mathbb{F}_q$ is the full constant field of $E_i$ from Proposition \ref{prop:2.3}. The Hurwitz genus formula yields $$2g(E_i)-2= 2\cdot (2g(E)-2)+\deg \text{Diff}(E_i/E).$$ Hence, the genus of $E_i$ is at most $1$ for each $1\le i\le q-2$. Let $N_1$ denote the cardinality of the set $\{j\notin Z_i: \eta((z_i\cdot \sigma^{-t}(z_i))(P_{\sigma^j}))=1\}$ and let $N_{-1}$ denote the cardinality of the set $\{j\notin Z_i: \eta((z_i\cdot \sigma^{-t}(z_i))(P_{\sigma^j}))=-1\}$. It is clear that $$N_1+N_{-1}=q+1-|Z_i|.$$ From \cite[Theorem 3.3.7]{St09}, the number of rational places of $E_i$ is $N(E_i)=2N_1+|Z_i|$. From the Serre bound, $N(E_i)$ is bounded by $$q+1-\lfloor 2\sqrt{q}\rfloor\le N(E_i)=2N_1+|Z_i|\le q+1+\lfloor 2\sqrt{q}\rfloor.$$ Hence, we have $-\lfloor 2\sqrt{q}\rfloor\le N_1-N_{-1}\le \lfloor 2\sqrt{q}\rfloor$ and $$|A_t(s_i)|\le |Z_i|+|N_1-N_{-1}|\le 4+\lfloor 2\sqrt{q}\rfloor.$$ \end{proof} \begin{prop}\label{prop:3.16} If $q=p^m$ is a power of an odd prime, then the cross-correlation of $s_i$ and $s_j$ with $1\le i\neq j\le q-2$ at delay $t$ for $0\le t\le q$ is upper bounded by $$|C_t(s_i,s_j)|\le 4+\lfloor 2\sqrt{q}\rfloor.$$ \end{prop} \begin{proof} For two distinct sequences $s_i$ and $s_j$ in ${\mathcal S}$, the cross-correlation at delay $t$ for $0\le t\le q$ is given by \begin{align*}C_t(s_i,s_j)&= \sum_{k=0}^{q} s_{i,k}s_{j,k+t} = \sum_{k=0}^q \eta(z_i(P_{\sigma^k}))\cdot \eta(z_j(P_{\sigma^{k+t}})) \\ &=\sum_{k=0}^q \eta(z_i(P_{\sigma^k}))\cdot \eta((\sigma^{-t}(z_j))(P_{\sigma^k})).\end{align*} Let $Z_{i,j}=\{0\le k\le q: (z_i\sigma^{-t}(z_j))(P_{\sigma^k})=0\}$ be the set of indices of the rational places which are zeros of $z_i\sigma^{-t}(z_j)$. It is easy to see that $|Z_{i,j}|\le 4$, since $z_i\sigma^{-t}(z_j)\in \mathcal{L}(2Q)$ for any $z_i,z_j\in \mathcal{L}(Q)$. Thus, we have $$C_t(s_i,s_j) = \sum_{k\in Z_{i,j}} \eta(z_i(P_{\sigma^k}))\cdot \eta((\sigma^{-t}(z_j))(P_{\sigma^k}))+\sum_{k\notin Z_{i,j}} \eta((z_i\sigma^{-t}(z_j))(P_{\sigma^k})).$$ For $z_i\neq z_j\in R_1\cup R_2$, we have $z_i\sigma^{-t}(z_j)\notin \mathbb{F}_q\cdot E^2$ from Proposition \ref{prop:3.10}, Proposition \ref{prop:3.12} and Theorem \ref{thm:3.14}. Let us consider the Kummer extension $E_{i,j}=E(y)$ with $$y^2=z_i\cdot \sigma^{-t}(z_j).$$ If $z_i\neq z_j\in R_1$, then there are two places of $E$ with degree $2$ which are totally ramified in $E_{i,j}/E$ from Proposition \ref{prop:3.10}. If $z_i\neq z_j\in R_2$, then there are at most four rational places of $E$ which are totally ramified in $E_{i,j}/E$ from Proposition \ref{prop:3.12}. If $z_i\in R_1, z_j\in R_2$ or $z_i\in R_2, z_j\in R_1$, then there are one place of degree $2$ and two rational places of $E$ which are totally ramified in $E_{i,j}/E$ from Theorem \ref{thm:3.14}. In all cases, we have $\deg \text{Diff}(E_{i,j}/E)\le 4$. From Proposition \ref{prop:2.3}, $\mathbb{F}_q$ is the full constant field of $E_{i,j}$ as well. The Hurwitz genus formula yields $$2g(E_{i,j})-2= 2\cdot (2g(E)-2)+\deg \text{Diff}(E_{i,j}/E).$$ Hence, the genus of $E_{i,j}$ is at most $1$ for any $1\le i\neq j\le q-2$. Let $N_1$ be the cardinality of the set $\{k\notin Z_{i,j}: \eta((z_i\cdot \sigma^{-t}(z_j))(P_{\sigma^k}))=1\}$ and let $N_{-1}$ be the cardinality of the set $\{k\notin Z_{i,j}: \eta((z_i\cdot \sigma^{-t}(z_j))(P_{\sigma^k}))=-1\}$.
It is clear that $$N_1+N_{-1}=q+1-|Z_{i,j}|.$$ From \cite[Theorem 3.3.7]{St09} and the Serre bound, the number of rational places of $E_{i,j}$ is bounded by $$q+1-\lfloor 2\sqrt{q}\rfloor\le N(E_{i,j})=2N_1+|Z_{i,j}|\le q+1+\lfloor 2\sqrt{q}\rfloor.$$ Hence, we have $-\lfloor 2\sqrt{q}\rfloor\le N_1-N_{-1}\le \lfloor 2\sqrt{q}\rfloor$ and $$|C_t(s_i,s_j)|\le |Z_{i,j}|+|N_1-N_{-1}|\le 4+\lfloor 2\sqrt{q}\rfloor.$$ This completes the proof. \end{proof} \begin{theorem}\label{thm:3.17} If $q=p^m$ is a power of an odd prime, then there exists a family of binary sequences ${\mathcal S}=\{s_i: 1\le i\le q-2\}$ of length $q+1$ with correlation upper bounded by $$\text{Cor}({\mathcal S})\le 4+\lfloor 2\sqrt{q}\rfloor.$$ \end{theorem} \begin{proof} This theorem follows immediately from Proposition \ref{prop:3.15} and Proposition \ref{prop:3.16}. \end{proof} \section{Algorithm and numerical results}\label{sec:4} The previous section provides a theoretical construction of binary sequences with a low correlation via cyclotomic function fields over finite fields of odd characteristic. In fact, the family of binary sequences constructed in Section \ref{sec:3} can be realized explicitly. In this section, we provide an explicit construction of binary sequences via explicit automorphisms and compute some examples for finite fields of small size with the help of the software Sage. From Theorem \ref{thm:3.6}, the Galois group ${\rm Gal}(E/K)$ is a cyclic group generated by $\sigma$. From \cite[Proposition 4.4]{JMX22}, all automorphisms $\sigma^j$ with $0\le j\le q$ can be calculated explicitly by the following recursive relations. \begin{lemma}\label{lem:4.1} Let $a_0=1, b_0=0,c_0=0,d_0=1$ and $a_1=0, b_1=-b, c_1=1,d_1=a$.
Let $\sigma^j$ be the automorphism of $\mathbb{F}_q(u)$ determined by $$\sigma^j(u)=\frac{a_ju+b_j}{c_ju+d_j}.$$ Then $a_j,b_j,c_j,d_j$ with $0\le j\le q$ can be obtained from the following recursive equations $$\begin{cases} a_{j+1}=a_1a_j+c_1b_j=b_j,\\ b_{j+1}=b_1a_j+d_1b_j=-ba_j+a b_j,\\ c_{j+1}=a_1c_j+c_1d_j=d_j,\\ d_{j+1}=b_1c_j+d_1d_j=-bc_j+a d_j. \end{cases}$$ \end{lemma} \iffalse \begin{proof} It is easy to verify that \begin{align*} \sigma^{k+1}(u)&=\sigma(\sigma^k(u))=\sigma\left(\frac{a_ku+b_k}{c_ku+d_k}\right)=\frac{a_k\sigma(u)+b_k}{c_k\sigma(u)+d_k}\\ &= \frac{(a_1a_k+c_1b_k)u+(b_1a_k+d_1b_k)}{(a_1c_k+c_1d_k)u+(b_1c_k+d_1d_k)} =\frac{a_{k+1}u+b_{k+1}}{c_{k+1}u+d_{k+1}}. \end{align*} This completes the proof. \end{proof} \fi \begin{prop}\label{prop:4.2} Let $P_0$ be the zero of $u$ in $E=\mathbb{F}_q(u)$, let $\sigma$ be the $\mathbb{F}_q$-automorphism of $E$ determined by $\sigma(u)=-b/(u+a)$, and let $P_{\sigma^j}=\sigma^j(P_0)$. Then $P_{\sigma^1}=P_\infty$ and $P_{\sigma^j}$ corresponds to the linear polynomial $u+a_j^{-1}b_j$ for each $0\le j\neq 1 \le q$. \end{prop} \begin{proof} Since $\sigma(u)=-b/(u+a)$, we have $P_{\sigma^1}=\sigma(P_0)=P_\infty$. From Lemma \ref{lem:4.1}, the automorphism $\sigma^j$ is determined by $\sigma^j(u)=(a_ju+b_j)/(c_ju+d_j).$ If $a_j\neq 0$, i.e., $j\neq 1$, then $P_{\sigma^j}$ corresponds to the linear polynomial $u+a_j^{-1}b_j$. \end{proof} Let $[z_1],[z_2],\cdots,[z_{q-2}]$ be the distinct equivalence classes of $\mathcal{L}(Q)\setminus \{0\}$ under the equivalence relation $\sim$ in Subsection \ref{subsec:3.2}.
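The recursion of Lemma \ref{lem:4.1} is immediate to implement. The sketch below assumes $q=p$ is an odd prime, so that all arithmetic takes place in $\mathbb{Z}/p\mathbb{Z}$; the function name \texttt{sigma\_coeffs} is ours.

```python
# Sketch of the recursion in Lemma 4.1, assuming q = p is an odd prime.
# sigma(u) = -b/(u+a) corresponds to the matrix [[0, -b], [1, a]], and
# (a_j, b_j, c_j, d_j) are the coefficients describing sigma^j(u).

def sigma_coeffs(a, b, p, jmax):
    """Return [(a_j, b_j, c_j, d_j) mod p] for j = 0, ..., jmax."""
    aj, bj, cj, dj = 1, 0, 0, 1            # sigma^0 is the identity
    coeffs = [(aj, bj, cj, dj)]
    for _ in range(jmax):
        # the recursion of Lemma 4.1 (simultaneous update)
        aj, bj, cj, dj = (bj % p, (-b * aj + a * bj) % p,
                          dj % p, (-b * cj + a * dj) % p)
        coeffs.append((aj, bj, cj, dj))
    return coeffs
```

Over $\mathbb{F}_3$ the quadratic $x^2+2x+2$ is primitive, so with $a=b=2$ the map $\sigma$ has order $q+1=4$ in ${\rm PGL}_2(\mathbb{F}_3)$: \texttt{sigma\_coeffs(2, 2, 3, 4)[4]} is the scalar tuple \texttt{(2, 0, 0, 2)}, while no lower power of $\sigma$ is scalar.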
From the theory of rational function fields, we have $z_i(P_{\sigma^j})=z_i(u)|_{u=-b_j/a_j}$ if $0\le j\neq 1\le q$; otherwise, $z_i(P_{\sigma^1})=z_i(P_\infty)=1$ for $1\le i\le (q-3)/2$ and $z_i(P_{\sigma^1})=z_i(P_\infty)=0$ for $(q-1)/2\le i\le q-2$. Now, let us provide an explicit construction of binary sequences with a low correlation via cyclotomic function fields over finite fields of odd characteristic obtained from Theorem \ref{thm:3.17}. Such an explicit construction of binary sequences can be presented as follows. \begin{center} Construction of binary sequences with a low correlation \end{center} \begin{itemize} \item Step 1: Input an odd prime $p$ and a prime power $q=p^m$. \item Step 2: Find a primitive quadratic irreducible polynomial $p(x)=x^2+ax+b\in \mathbb{F}_q[x]$. \item Step 3: Let $\sigma$ be the $\mathbb{F}_q$-automorphism of $\mathbb{F}_q(u)$ determined by $\sigma(u)=-b/(u+a).$ Let $a_0=d_0=1,b_0=c_0=0$. For each $0\le j\le q$, calculate the explicit formula $\sigma^j(u)=(a_ju+b_j)/(c_ju+d_j)$ via the following recursive equations: $$\begin{cases} a_{j+1}=b_j,\\ b_{j+1}=-ba_j+a b_j,\\ c_{j+1}=d_j,\\ d_{j+1}=-bc_j+a d_j. \end{cases}$$ \item Step 4: Determine a quadratic irreducible polynomial $u^2+c_i u+d_i$ lying over the place $P_i=x^2+ax+b+\delta_i$ with $\delta_i\in \mathbb{F}_q^*\setminus (-\mathbb{F}_q^2+a^2/4-b)$ in the extension $E/K$ for each $1\le i\le (q-3)/2$. Define $z_i=(u^2+c_i u+d_i)/(u^2+au+b)$ for $1\le i\le (q-3)/2$. \item Step 5: Let $\alpha_j=u(P_{\sigma^j})=-b_j/a_j$ for $2\le j\le (q+1)/2$.
For $2\le j\le (q+1)/2$, define $$z_{\frac{q-5}{2}+j}=\frac{u-\alpha_j}{u^2+au+b}.$$ \item Step 6: Let $\eta$ be the map from $\mathbb{F}_q$ to $\mathbb{C}^*$ defined by $$\eta(\alpha)=\begin{cases} 1 &\text{ if } \alpha \text{ is a square in } \mathbb{F}_q,\\ -1 & \text{ if } \alpha \text{ is a non-square in } \mathbb{F}_q.\end{cases}$$ Output a family of sequences ${\mathcal S}=\{s_i: 1\le i\le q-2\}$ defined by $$s_i=(s_{i,0},s_{i,1},\cdots,s_{i,q}) \text{ with } s_{i,j}=\eta(z_i(P_{\sigma^j})) \text{ for } 0\le j\le q.$$ \item Step 7: Output the correlation $\text{Cor}({\mathcal S})=\max\{\{|\sum_{k=0}^{q} s_{i,k}s_{i,k+t}|:1\le i\le q-2, 1\le t\le q\} \cup \{|\sum_{k=0}^{q} s_{i,k}s_{j,k+t}|: 1\le i\neq j\le q-2, 0\le t\le q\}\}.$ \end{itemize} \begin{table}[] \setlength{\abovecaptionskip}{0pt}% \setlength{\belowcaptionskip}{10pt}% \caption{PARAMETERS OF OUR SEQUENCES}\label{tab:2} \centering \begin{tabular}{@{}|c|c|c|c|@{}} \toprule Field Size & Sequence Length & Family Size & Correlation \\ \midrule $3^3$ & 28 & 25 & 12 \\ \midrule $3^4$ & 82 & 79 & 22 \\ \midrule $3^5$ & 244 & 241 & 32 \\ \midrule $5^2$ & 26 & 23 & 14 \\ \midrule $5^3$ & 126 & 123 & 26 \\ \midrule $23$ & 24 & 21 & 12 \\ \bottomrule \end{tabular} \end{table} In order to better understand the above explicit construction, we provide an example for $q=5^2$. \begin{ex} {\rm For $q=5^2$, let $\zeta$ be a primitive element of $\mathbb{F}_{25}$ satisfying $\zeta^2+\zeta+2=0$. We choose the primitive quadratic irreducible polynomial $p(x)=x^2+(\zeta+2)x+\zeta+2$, i.e., $a=\zeta+2$ and $b=\zeta+2$. From Theorem \ref{thm:3.6}, an automorphism $\sigma$ of $E=\mathbb{F}_q(u)$ with order $q+1$ can be determined explicitly by $\sigma(u)=-b/(u+a)$. Let $P_0$ be the zero of $u$ in $\mathbb{F}_q(u)$.
With the help of the software Sage, all rational places $P_{\sigma^j}=\sigma^j(P_0)$ with $0\le j\le 25$ of $E$ and all representative elements $z_i$ with $1\le i\le 23$ in equivalence classes of $V\setminus \{0\}$ can be determined explicitly from Proposition \ref{prop:3.10} and Proposition \ref{prop:3.12}. Hence, the family of binary sequences with a low correlation constructed in Theorem \ref{thm:3.17} can be obtained explicitly as follows:\\ {\footnotesize ${\bf s}_1=(-1, 1, -1, -1, 1, 1, -1, 1, 1, 1, 1, 1, 1, -1, 1, 1, -1, -1, 1, -1, -1, -1, 1, 1, -1, -1)$, \\ ${\bf s}_2=(-1, 1, -1, 1, -1, -1, 1, 1, 1, 1, 1, -1, -1, -1, 1, -1, -1, -1, 1, 1, 1, 1, 1, -1, -1, 1)$, \\ ${\bf s}_3=(1, 1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, -1, -1, 1, -1, -1, -1, -1, 1, -1, -1, -1, -1, 1, 1)$, \\ ${\bf s}_4=(-1, 1, -1, 1, 1, -1, 1, 1, -1, -1, 1, 1, 1, 1, 1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, 1)$, \\ ${\bf s}_5=(1, 1, 1, -1, 1, 1, -1, 1, -1, -1, 1, -1, -1, 1, 1, 1, -1, -1, 1, -1, -1, 1, -1, 1, 1, -1)$, \\ ${\bf s}_6=(-1, 1, -1, -1, 1, 1, -1, 1, -1, -1, -1, -1, 1, -1, 1, -1, 1, -1, -1, -1, -1, 1, -1, 1, 1, -1)$, \\ ${\bf s}_7=(-1, 1, 1, -1, -1, -1, -1, 1, 1, -1, -1, 1, -1, 1, -1, 1, 1, 1, 1, 1, 1, -1, 1, -1, 1, -1)$, \\ ${\bf s}_8=(1, 1, -1, 1, -1, -1, 1, -1, -1, 1, -1, -1, 1, -1, 1, 1, 1, -1, -1, -1, 1, 1, -1, -1, -1, 1)$, \\ ${\bf s}_9=(-1, 1, -1, 1, 1, -1, -1, -1, 1, 1, 1, 1, 1, 1, -1, -1, -1, 1, 1, -1, 1, -1, 1, 1, 1, 1)$, \\ ${\bf s}_{10}=(1, 1, 1, -1, -1, 1, 1, 1, 1, -1, 1, -1, -1, -1, -1, 1, -1, 1, 1, 1, 1, -1, -1, 1, 1, 1)$, \\ ${\bf s}_{11}=(-1, 1, -1, -1, -1, 1, -1, -1, 1, -1, -1, -1, 1, -1, -1, 1, 1, 1, 1, -1, -1, 1, 1, 1, 1, -1)$, \\ ${\bf s}_{12}=(-1, 1, -1, -1, -1, 1, 1, -1, 1, 1, -1, -1, -1, 1, -1, 1, 1, -1, 1, 1, 1, 1, 1, -1, 1, 1)$, \\ ${\bf s}_{13}=(-1, 1, 1, 1, 1, 1, -1, 1, 1, -1, -1, 1, 1, 1, 1, -1, -1, -1, 1, 1, 1, 1, -1, -1, 1, 1)$, \\ ${\bf s}_{14}=(1, 1, -1, 1, 1, 1, 1, 1, -1, -1, -1, -1, 1, -1, -1, 1, -1, -1, 1, -1, -1, -1, -1, 1, 1, 1)$, \\
${\bf s}_{15}=(1, 1, 1, 1, -1, -1, 1, 1, 1, -1, -1, 1, -1, 1, 1, 1, 1, -1, 1, -1, -1, 1, 1, 1, -1, -1)$, \\ ${\bf s}_{16}=(-1, 1, -1, 1, -1, 1, -1, 1, -1, -1, 1, 1, 1, -1, 1, 1, 1, 1, 1, 1, 1, -1, 1, 1, 1, -1)$, \\ ${\bf s}_{17}=(-1, 1, 1, 1, -1, -1, 1, 1, 1, -1, 1, -1, -1, -1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1, -1, 1)$, \\ ${\bf s}_{18}=(1, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 1, 1, -1, 1, -1, -1, 1, 1, -1, -1, 1, -1, 1)$, \\ ${\bf s}_{19}=(1, 1, -1, 1, -1, 1, 1, -1, 1, -1, 1, 1, -1, 1, -1, -1, -1, 1, -1, -1, 1, -1, -1, -1, 1, -1)$, \\ ${\bf s}_{20}=(1, 1, -1, -1, 1, 1, -1, 1, 1, -1, -1, 1, 1, -1, -1, -1, 1, 1, -1, 1, -1, 1, 1, -1, -1, -1)$, \\ ${\bf s}_{21}=(-1, 1, -1, -1, 1, -1, -1, 1, -1, 1, -1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1, -1, 1, -1, 1, -1)$, \\ ${\bf s}_{22}=(1, 1, 1, -1, -1, 1, 1, 1, 1, 1, 1, 1, -1, -1, 1, -1, -1, -1, -1, 1, -1, -1, 1, 1, 1, 1)$, \\ ${\bf s}_{23}=(1, 1, 1, -1, -1, -1, -1, -1, 1, 1, 1, -1, 1, 1, -1, 1, -1, 1, 1, 1, -1, 1, -1, 1, 1, -1)$.} \\ It is easy to verify that the correlation of this family of sequences is $14=4+2\sqrt{25}$.} \end{ex} We list further numerical results on binary sequences with a low correlation obtained from this explicit construction in Table \ref{tab:2}.
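For a prime field, the $R_2$ part of the construction (Steps 3, 5 and 6) can also be reproduced without a computer-algebra system; Step 4 is omitted below, since lifting the degree-$2$ places $P_i$ to $E$ requires more machinery. The sketch takes $p=5$ with $x^2+x+2$, which is a primitive irreducible quadratic over $\mathbb{F}_5$; all function names are ours.

```python
# Sketch of Steps 3, 5 and 6 for the subfamily of sequences coming from
# R_2 only (Step 4, i.e. the R_1 part, is omitted).  Assumptions: q = p is
# an odd prime and x^2 + A x + B is a primitive irreducible quadratic over
# F_p; here p = 5 with x^2 + x + 2.

P, A, B = 5, 1, 2

def eta(alpha):
    """Quadratic character of F_p with the convention eta(0) = 1."""
    alpha %= P
    return 1 if alpha == 0 or pow(alpha, (P - 1) // 2, P) == 1 else -1

# Step 3: coefficients of sigma^j(u) = (a_j u + b_j)/(c_j u + d_j).
coeffs = [(1, 0, 0, 1)]
for _ in range(P + 1):
    aj, bj, cj, dj = coeffs[-1]
    coeffs.append((bj % P, (-B * aj + A * bj) % P,
                   dj % P, (-B * cj + A * dj) % P))

# Step 5: alpha_j = u(P_{sigma^j}) = -b_j/a_j for j != 1 (a_1 = 0 marks P_infty).
alpha = {j: (-bj * pow(aj, P - 2, P)) % P
         for j, (aj, bj, cj, dj) in enumerate(coeffs[:P + 1]) if j != 1}

def w_value(root, k):
    """Evaluate w = (u - root)/(u^2 + A u + B) at the place P_{sigma^k}."""
    if k == 1:                       # P_infty: deg(numerator) < deg(denominator)
        return 0
    u = alpha[k]
    num = (u - root) % P
    den = (u * u + A * u + B) % P    # nonzero since x^2 + A x + B is irreducible
    return (num * pow(den, P - 2, P)) % P

# Step 6: the sequences attached to w_j for 2 <= j <= (P+1)/2.
seqs = [[eta(w_value(alpha[j], k)) for k in range(P + 1)]
        for j in range(2, (P + 1) // 2 + 1)]
```

This produces the two sequences attached to $w_2$ and $w_3$, each of length $q+1=6$; their correlations stay within the bound $4+\lfloor 2\sqrt{5}\rfloor=8$ of Theorem \ref{thm:3.17}.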
\section{Introduction} Much information has been learned concerning the nature of halos in nuclei from studies of heavy ion break up reactions in which the momentum distributions of the valence nucleons have been found to be very narrow \cite{Ha95}. This observation suggests matter distributions which extend well beyond the radius of the nuclear potential and examples of halos found by this means are $^{11}$Li and $^{11}$Be. Different neutron distributions in exotic nuclei, such as skins ($^8$He, for example), also have been studied using this method. However, doubt remains on the ability of such reactions to probe the initial state wave functions. The breakup of $^6$He has been demonstrated to be a two-step process \cite{Al98}, in which the $^5$He fragment survives for a considerable amount of time as an $\alpha-n$ resonance before it breaks up. This suggests that the effects of final state interactions are significant in this reaction, so that information concerning the initial state wave function is lost. Also, that approach has the disadvantage of missing part of the initial state wave function of the halo nucleons \cite{Ha96} probing only the asymptotic part of the wave function. Success has been achieved in the analysis of those reactions using few-body models for the halo nuclei (see Ref.~\cite{Ga99} and references therein) as they are able to describe the asymptotic parts of nuclei better than most shell models \cite{Ka97}. There remains the need to find ways of studying microscopic properties of the wave functions of halo nuclei. To study the microscopic aspects of the wave functions of exotic nuclei we look to alternatives which probe the entire wave function. Proton scattering in the inverse kinematics and charged pion photoproduction are such reactions. Experiments have been performed for the (elastic) scattering of radioactive ions from hydrogen (see, for example, \cite{Ko97}). 
In the inverse kinematics this corresponds to proton scattering from the heavy ion, which directly measures the matter distribution of that ion. In particular, depending on the momentum transfer, such scattering may measure the density near the surface of the nucleus so that detailed information on the halo may be collected. Charged pion photoproduction from nuclei may serve as a useful complementary probe of halo structures \cite{Ka98}, especially as that reaction also is sensitive to the entire halo wave function formed in the final state. We present analyses of data on both of these reactions to study the neutron distributions of $^6$He, $^8$He, $^9$Li, and $^{11}$Li to determine whether the results permit identification of any of these nuclei as a neutron halo or neutron skin system. \section{Models of structure} As both proton scattering and charged pion photoproduction reactions probe the microscopic structure of the nucleus, a suitable model for the description of halo nuclear states in those reactions would be one in which nucleon degrees of freedom are admitted. This would, by necessity, include the core. In the case of $^{11}$Li scattering from hydrogen, it was found that a full description of the $^9$Li core was required \cite{Cr96} to describe the elastic scattering data. Therefore, we describe the halo states within the shell model, and allow for all nucleons to be active within the space (the so-called ``no-core'' models). Several groups report shell model calculations of $^{6,8}$He and $^{9,11}$Li. Navr\'atil and Barrett \cite{Na96,Na98} have made large-space shell model calculations using interactions obtained directly from the $NN$ $g$ matrices, with the Reid93 $NN$ interaction as their base. 
Their calculations for $^6$He were made in a complete $(0+2+4+6)\hbar\omega$ model space while those for $^8$He, $^9$Li and $^{11}$Li were made in the smaller $(0+2+4)\hbar\omega$ model space; the limitation arising from the dimensionality increasing with mass for a given space. (Henceforth, only the highest excitation will be given in reference to the complete model space.) Good results were found for the ground state properties in each case. For $^6$He, specifically, their calculations indicate that there is little or no need for this system to have a neutron halo to obtain agreement. For the other nuclei, they find spectra and ground state properties that are also quite good, although the calculated proton root-mean-square (r.m.s.) radii are small in comparison to the measured values. The cause of these discrepancies may be a halo-like distribution of the excess neutrons; the $4\hbar\omega$ model space is not large enough to admit such halo characteristics for these nuclei \cite{Na98}. These calculations may be contrasted with the results of our recent study \cite{Ka97a} in which the results of $0\hbar\omega$ and $2\hbar\omega$ shell model calculations of $^9$Li and $^{11}$Li, made using phenomenological interactions, were reported. When using the wave functions obtained in those smaller space calculations, the available elastic scattering data at $60A$ and $68A$~MeV from hydrogen were well described. We have calculated the wave functions for $^{6,8}$He within a complete $4\hbar\omega$ model space using the $G$ matrix interaction of Zheng {\em et al.} \cite{Zh95}. For $^{9,11}$Li, we used the wave functions as calculated in our previous work \cite{Ka97a}: using the P$(5-16)$T interaction in the $0\hbar\omega$ model space for $^9$Li, and the WBP interaction \cite{Wa92} in the $2\hbar\omega$ model space for $^{11}$Li. All calculations were made using the shell model code OXBASH \cite{Ox86}. 
From those wave functions, the one-body density matrix elements (OBDME) were obtained to use in the descriptions of the scattering and of the ($\gamma,\pi^+$) reaction. The spectrum of $^6$He is displayed in Fig.~\ref{he6spec}. Therein, the results of our calculation are compared to those of the $6\hbar\omega$ calculation of Navr\'atil and Barrett \cite{Na96}, as well as to those of Pudliner {\em et al.} \cite{Pu97}, in which the spectra of $A = 6$ nuclei were calculated using the Variational Monte Carlo (VMC) shell model approach. The experimental spectrum was obtained from Ref. \cite{Ja96}. The two calculations made using the ``traditional'' shell model approach ascribe $J^{\pi};T = 2^+;1$ to the first two excited states, in agreement with experiment. While the energy of the $2^+_1;1$ state is similar in the $4\hbar\omega$ and $6\hbar\omega$ models, the energy of the $2^+_2$ state in the $6\hbar\omega$ model is in much better agreement with the data. This may be due to the modification of the auxiliary potential in the Hamiltonian in that calculation \cite{Na96}. Without that modification, overbinding is observed, of the order of 4~MeV. However, it does not affect the spectrum significantly; the increase in energy of each state is less than 1~MeV. It should be noted that this overbinding will also affect our calculations, as we use the same interactions, although we do not expect that the wave functions will be significantly affected. The results of the VMC calculation place the $2^+_1$ state in very close agreement with experiment. However, that calculation also has an extra $1^+$ state in the spectrum not observed, nor seen in the other calculations. It would be interesting to investigate in more detail the character of that particular state. There is very little experimental information on the spectrum of $^8$He. The first excited state is listed at $2.8 \pm 0.4$~MeV and has $J^{\pi};T = (2^+);2$ \cite{Aj88}. 
Other states are reported at 1.3, 2.6 and 4.0~MeV \cite{Aj88}, as obtained from a transfer experiment involving heavy ions, but no other data are available as yet to support those measurements. The results from the present calculation are compared to those obtained from the VMC calculation \cite{Wi98} in Fig.~\ref{he8spec}. The spectrum obtained by Navr\'atil and Barrett in the $4\hbar\omega$ model space using their updated $G$ matrix interaction \cite{Na98} is similar to the present results, and so is not shown. The $2^+_1;2$ state is predicted correctly by all calculations as the first excited state, although only the VMC calculation agrees well with experiment. The disagreement between the shell model calculations and experiment may be due to the shell model failing to reproduce, within the $4\hbar\omega$ model space, the correct neutron density distribution. $^8$He has a well-known neutron skin, the description of which may require a calculation using a very large model space. The $^9$Li spectrum is displayed in Fig.~\ref{li9spec}, wherein the results of the present calculation are compared to those obtained within the $4\hbar\omega$ model space. The experimental energies are obtained from \cite{Aj88}. The spectrum obtained in the $0\hbar\omega$ model space is in general agreement with that obtained in the $4\hbar\omega$ model space, although the first excited state lies much lower in the latter. There are no spin assignments in the experimental spectrum bar those for the ground and first excited states, which the models correctly predict. As we consider only the elastic channel in the calculations of proton scattering, the $0\hbar\omega$ calculation is sufficient. One expects that core polarization corrections will become important for inelastic scattering. The $^{11}$Li spectrum is displayed in Fig.~\ref{li11spec}. Therein, the experimental results of Gornov {\em et al.} \cite{Go98} are compared to the results of the present calculation.
The experiment from which the excitation spectrum was obtained was $^{14}$C($\pi^-,pd$)$^{11}$Li and did not allow for any spin assignments to be made so the comparison between experiment and theory at this stage must be tentative. The $\frac{1}{2}^-_1;\frac{5}{2}$ state is formed from the coupling of the valence neutrons to the $\frac{1}{2}^-$ state in $^9$Li. \section{Elastic proton scattering} We now consider elastic scattering of the heavy ions from hydrogen, data for which are available at $72A$~MeV for $^{6,8}$He and $62A$~MeV for $^{9,11}$Li. The analyses follow those made for the elastic scattering of 65~MeV protons from various targets ranging from $^6$Li to $^{238}$U \cite{Do98}, and we refer the reader to that reference for complete details. We present a brief summary of the formalism herein. There are three essential ingredients one must specify to calculate proton scattering observables. The first are the OBDME as obtained from the shell model calculations. They are explicitly defined as \begin{equation} S_{\alpha_1 \alpha_2 I} = \left\langle J_f \left\| \left[ a^{\dagger}_{\alpha_2} \times \tilde{a}_{\alpha_1} \right]^I \right\| J_i \right\rangle \end{equation} where $J_i$ and $J_f$ are the initial and final nuclear states respectively, $I$ is the angular momentum transfer, and $\alpha_i = \left\{ n_i, l_i, j_i, \rho_i \right\}$ with $\rho$ specifying either a proton or a neutron. The second ingredient is the effective interaction between the projectile nucleon and each and every nucleon in the target. The complex, fully nonlocal, effective interaction we choose \cite{Do98} accurately maps onto a set of nucleon-nucleon ($NN$) $g$ matrices. These density-dependent $g$ matrices are solutions of the Brueckner-Bethe-Goldstone equations in which a realistic $NN$ potential defines the basic pairwise interaction. For that, we have chosen the Paris interaction \cite{La80}. 
Good to excellent predictions of the elastic scattering observables for stable targets from $^6$Li to $^{238}$U were found with this effective (coordinate space) interaction. Finally, the single particle wave functions describing the nucleon bound states must be specified. For the present calculations we distinguish between those calculations which yield an extensive (halo) density distribution and those that do not. The former we designate ``halo'' while the latter are designated ``non-halo''. Those calculations use single-particle wave functions as specified naively from the shell model calculations, which do not make allowance directly for the very loose binding of the valence neutrons, at least not to the level in $\hbar\omega$ assumed in the model spaces. In all cases bar one, Woods-Saxon (WS) wave functions were used. Those which gave good reproduction of the elastic electron scattering form factors of $^6$Li \cite{Ka97} were used for all the $^{6,8}$He calculations while those which reproduced the elastic electron scattering form factors of $^9$Be \cite{Do97} were used in the calculations for $^9$Li, and also for the core in the halo calculation of $^{11}$Li. For the non-halo specification of $^{11}$Li, we used appropriate harmonic oscillator wave functions for mass-11 \cite{Ka97a}. To specify the halo, we adjusted the WS potentials from the values given such that the relevant valence neutron orbits are weakly bound. Those are the $0p$-shell orbits and higher for the helium isotopes, and the $0p_{\frac{1}{2}}$ orbit and higher for the lithium ones. Such an adjustment to single particle wave functions adequately explains the very large $B(E1)$ in $^{11}$Be \cite{Mi83} and guarantees an extensive neutron distribution. In our analyses, $^8$He and $^9$Li act as controls: $^8$He is an example of a neutron skin and $^9$Li is a simple core nucleus. 
The single neutron separation energies are 2.137~MeV and 4.063~MeV for $^8$He \cite{Aj88} and $^9$Li \cite{Aj90}, respectively. We may artificially ascribe a halo to these nuclei, by setting a much lower separation energy, to ascertain if the procedure and data are sensitive enough to detect the flaw. For $^6$He, the $0p$-shell binding energy was set to 2~MeV, which is close to the separation energy (1.87~MeV \cite{Aj88}) of a single neutron from $^6$He, leaving the lowest $0p$-shell resonance in $^5$He. For $^8$He, $^9$Li and $^{11}$Li, the halo was specified by setting the binding energy for the WS functions of the $0p_{\frac{1}{2}}$ and higher orbits to 0.5~MeV \cite{Ka97a}. While the halo and non-halo specifications are a matter of convenience at this point, we test the validity of the halo designation by calculating the r.m.s. radius for each nucleus and comparing with the values obtained from analyses of the reaction cross sections. The r.m.s. radii are presented in Table~\ref{radii}, as calculated using the shell model wave functions and the specified single particle wave functions. The values obtained from the shell model using the correct single particle wave functions are largely consistent with those obtained from few-body calculations \cite{To97,Al96,Al98a}. The values obtained indicate that $^6$He and $^{11}$Li are halo nuclei, while $^8$He and $^9$Li are not. While our prediction for the r.m.s. radius of $^{11}$Li appears low compared to the value extracted from the reaction cross section \cite{Al96}, it is consistent with that analysis within the quoted error bars. The lower value may be due to the wave functions being incapable of describing long range phenomena adequately.
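As a bookkeeping check on radii of this kind, the matter r.m.s. radius follows from the separate proton and neutron r.m.s. radii through $A\,r_{m}^{2} = Z\,r_{p}^{2} + N\,r_{n}^{2}$. A minimal sketch (the input radii below are illustrative placeholders, not the values of Table~\ref{radii}):

```python
from math import sqrt

def matter_rms(Z, N, r_p, r_n):
    """Matter r.m.s. radius from the separate proton and neutron
    r.m.s. radii via A r_m^2 = Z r_p^2 + N r_n^2 (A = Z + N)."""
    A = Z + N
    return sqrt((Z * r_p**2 + N * r_n**2) / A)

# Hypothetical point-nucleon radii (in fm) for an 11Li-like system:
# a compact proton distribution and an extended neutron one.
r_m = matter_rms(Z=3, N=8, r_p=2.2, r_n=3.6)
```

Because of the $N/A$ weighting, an extended neutron distribution dominates the matter radius of a neutron-rich system, which is why the r.m.s. radii are a meaningful, if crude, halo diagnostic.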
If that is the case, more $\hbar\omega$ excitations must be admitted into the model space, although the present set of wave functions should be sufficient to describe the proton scattering observables at high momentum transfer. The calculations for the scattering from $^9$Li and $^{11}$Li are those presented in Ref.~\cite{Ka97a}, while those for $^6$He and $^8$He used the OBDME as obtained from our shell model wave functions. The neutron density profiles for $^6$He, $^8$He, $^9$Li, and $^{11}$Li obtained from the present shell model calculations are shown in Fig.~\ref{fig1}. Therein the dashed and solid lines portray, respectively, the profiles found with and without the halo conditions being implemented. The dot-dashed line in each case represents the proton density. As the folding process defines the optical potentials, the internal ($r < r_{\rm rms}$) region influences the predictions of differential cross sections, notably at large scattering angles. In this region the extensive (halo) distribution exhibits a lower density, as the neutron strength is bled to higher radii. That effect characterized the proton halo in $^{17}$F$^{\ast}$ as manifest in the $^{17}$O($\gamma,\pi^-$) reaction \cite{Ka98}. The extended nature of the halo also influences the optical potentials as evidenced in changes to the cross sections at small momentum transfers (typically $< 0.5$~fm$^{-1}$ or $\theta_{c.m.} < 15^{\circ}$ for beam energies between $60A$ and $70A$~MeV). The predicted differential cross sections for the scattering of $^{6,8}$He and $^{9,11}$Li from hydrogen are presented in Figs.~\ref{fig2} and \ref{fig3}. In Fig.~\ref{fig2} we display the results to $80^{\circ}$ ($q \sim 2.5$~fm$^{-1}$) and compare them with the data taken by Korsheninnikov {\em et al.} \cite{Ko97,Ko93} using $70.5A$~MeV $^6$He and $72A$~MeV $^8$He beams, and by Moon {\em et al.} \cite{Mo92} using $60A$~MeV $^9$Li and $62A$~MeV $^{11}$Li beams.
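The quoted correspondence between scattering angle and momentum transfer can be checked with a short kinematics sketch; it treats the proton as a relativistic particle whose kinetic energy equals the beam energy per nucleon and neglects recoil corrections, and the function name is ours:

```python
from math import sin, radians, sqrt

HBARC = 197.327   # hbar*c in MeV fm
M_P = 938.272     # proton mass in MeV

def momentum_transfer(T, theta_cm_deg):
    """Elastic momentum transfer q = 2 k sin(theta/2) in fm^-1 for a
    proton of kinetic energy T (MeV), with k its relativistic wave number."""
    k = sqrt(T * (T + 2.0 * M_P)) / HBARC
    return 2.0 * k * sin(radians(theta_cm_deg) / 2.0)

q80 = momentum_transfer(70.0, 80.0)   # ~2.4 fm^-1
q15 = momentum_transfer(70.0, 15.0)   # ~0.49 fm^-1
```

These values are consistent with the $q \sim 2.5$~fm$^{-1}$ at $80^{\circ}$ and $q < 0.5$~fm$^{-1}$ below $15^{\circ}$ quoted in the text.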
The forward angle results, for which there are no data, are shown in Fig.~\ref{fig3} to emphasize the influence of the halo extension ($r > r_{\rm rms}$) on the predictions. In both figures the solid curves depict the non-halo results while the dashed curves are those with the halo. As is evident in Fig.~\ref{fig2}, the data for our two controls, $^8$He and $^9$Li, are sufficient to resolve the question of whether these nuclei exhibit halos. In both cases the data above $50^{\circ}$ are reproduced by the non-halo results, suggesting that these nuclei do not have extended (halo) neutron distributions. This gives confidence in our ability to use such data to determine if a nucleus has a halo. That is confirmed in the case of the scattering of $^{11}$Li from hydrogen, as the data clearly support a halo structure. There are differences evident between the halo and the non-halo predictions for these nuclei when one considers small angle scattering, where the influence of the Coulomb interaction is quite important. We present the results of our calculations for small angle scattering in Fig.~\ref{fig3}. For $^9$Li, the difference between the halo and non-halo results is small, which supports the notion that this nucleus is a close-packed system. This is contrasted by the results for both $^8$He and $^{11}$Li: the difference between the halo and non-halo results for $^{11}$Li is greater, suggesting again the halo structure, but the difference is greatest in $^8$He. Together with the large angle scattering data, this suggests that the neutron skin structure of $^8$He serves to dilute the charge distribution stemming from the two protons while pushing the density of the neutrons uniformly to larger radii, as is shown in Fig.~\ref{fig1}. We now turn our attention to $^6$He. As shown in Fig.~\ref{fig2}, the existing $^6$He data range only to $50^{\circ}$ ($q \sim 1.6$~fm$^{-1}$). This is insufficient to discriminate between the halo and non-halo structures.
As confirmed by the data and optical model analysis of Korsheninnikov {\em et al.} \cite{Ko97}, our results are almost identical to those from $p$--$^6$Li scattering, but only in the region where the data were taken for the $p$--$^6$He scattering. Beyond this region there is a sufficient difference between the calculations to determine if $^6$He exhibits a halo. Data are needed beyond $50^{\circ}$ to make such an assessment. The small angle scattering shown in Fig.~\ref{fig3} is consistent with the result for $^9$Li in showing little difference between the halo and non-halo results. We may also study $^6$He via the $^6$Li($\gamma,\pi^+$)$^6$He$_{gs}$ reaction. This reaction may be more sensitive to details of the halo as the transition is more sensitive to the descriptions of the valence neutrons. We have calculated the cross sections for this reaction at $E_\gamma = 200$~MeV using the DWIA model of Tiator and Wright \cite{Ti84}. As the $^6$He ground state is the isobaric analogue of the $0^+;1$ (3.563~MeV) state in $^6$Li, we have used the OBDME for the transition to that state in $^6$Li, as obtained from a complete $(0+2+4)\hbar\omega$ shell model calculation \cite{Ka97}. The non-halo result corresponds to a calculation using harmonic oscillator single-particle wave functions with $\hbar\omega = 12.65$~MeV \cite{Ka97}. Those wave functions are also used for the initial $^6$Li state to obtain the halo result with the final $^6$He state being specified by WS wave functions in the $0p$-shell and higher orbitals only as given in the halo calculation of the scattering presented above. Such a specification introduces a problem in normalization with the $0p_{\frac{3}{2}}$ wave functions. The overlap of the HO and WS $0p_{\frac{3}{2}}$ radial wave functions is 0.96, hence the wave functions preserve the norm to within 4\%. 
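Evaluating the HO-WS overlap itself requires a numerical bound-state solver for the WS potential, but the size of such radial overlaps is easy to illustrate in a self-contained way with two $0p$ harmonic oscillator functions of slightly different oscillator lengths (all parameters here are hypothetical):

```python
from math import sqrt, pi, exp

def R_0p(r, b):
    """Normalized 0p harmonic oscillator radial function,
    with int_0^inf R(r)^2 r^2 dr = 1."""
    return sqrt(8.0 / (3.0 * sqrt(pi) * b**5)) * r * exp(-r**2 / (2.0 * b**2))

def overlap(b1, b2, rmax=25.0, n=20000):
    """Radial overlap int_0^inf R(r;b1) R(r;b2) r^2 dr by the
    trapezoidal rule; the closed form is (2 b1 b2/(b1^2+b2^2))^(5/2)."""
    h = rmax / n
    s = 0.0
    for i in range(n + 1):
        r = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * R_0p(r, b1) * R_0p(r, b2) * r * r
    return s * h

s_num = overlap(1.8, 2.1)  # ~0.97, close to unity
```

An overlap of about 0.97 for these (hypothetical) oscillator lengths is comparable in size to the 0.96 HO-WS overlap quoted above; radial overlaps of nearby bound-state wave functions are generically close to unity.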
Both results are compared to the data of Shaw {\em et al.} \cite{Sh91} (circles) and Shoda {\em et al.} \cite{Sh81} (squares) in Fig.~\ref{fig4}, wherein the halo and non-halo calculations are displayed by the dashed and solid lines respectively. From the available data one may infer that the non-halo result is favored, but this is due to the datum at $137^{\circ}$ only. Note that our non-halo result is similar to that found by Doyle {\em et al.} \cite{Do95}, in which a $0\hbar\omega$ model of structure was used and no specific halo structure was specified. Our halo result is very similar to that obtained from a three-body description of $^6$He in which the wave functions reproduced the halo properties \cite{Er99}. More data in the region of the possible minimum as well as at large angles are needed to confirm the conjecture that $^6$He does not have a halo structure. \section{Conclusions} The available scattering data from hydrogen confirm that $^{11}$Li is a halo nucleus, while the analysis of the scattering data correctly determines that both $^8$He and $^9$Li are not. This confirms our ability to predict correctly any halo structures as probed by the scattering of exotic nuclei from hydrogen. The low-angle scattering results also suggest that $^8$He is a neutron skin nucleus, as found from breakup reactions. While the data on the r.m.s. radii suggest that $^6$He is a halo nucleus, the available scattering data for $^6$He from hydrogen are not extensive enough to discriminate between the halo and non-halo scenarios; in the measured region they suggest a matter distribution for $^6$He very similar to that of $^6$Li. The complementary $^6$Li($\gamma,\pi^+$)$^6$He reaction data suggest the non-halo hypothesis. However, it must be stressed that more data, particularly involving transitions to states in $^6$He, are required to support or refute this conjecture.
The analysis presented here also demonstrates that, to test structure models of these exotic nuclei most stringently, one should study reactions of skin and halo nuclei with complementary probes and in complementary reaction channels. Financial support from the Natural Sciences and Engineering Research Council of Canada, the Australian Research Council, and Department of Energy Grant no. DE-FG02-95ER-40907 is gratefully acknowledged.
\section{Introduction} \quad Masses of many particles of the Standard Model lie below the SM scale, and their prediction remains one of the main challenges of theoretical physics. In the framework of the AdS/CFT approach and the Randall-Sundrum model \cite{Randall}, spectra of physical particles are obtained as eigenvalues of equations for bulk fields, and it is possible in principle to obtain the sought-after masses of intermediate scale by a suitable choice of the bulk masses of the fields. In papers \cite{Alt0} it was shown that the "old" conformal bootstrap (proposed about 50 years ago in the pioneering papers \cite{old1}, \cite{old2} and developed in \cite{Parisi} - \cite{Dobrev2}; see e.g. \cite{Grensing} and references therein), considered in the AdS/CFT context, permits the calculation of conformal dimensions, that is, of the bulk masses of the fields. The simplest "old" conformal bootstrap equation for the Green function $G(X, Y)$, traditionally written in the planar approximation, reads: \begin{equation} \label{1} G (X_{1}, X_{2}) = g^{2} \, \int\int\, G(X_{1}, X)\,G(X, Y) \, G(X, Y) \, G(Y, X_{2})\, dX\,dY \end{equation} (a triple interaction is supposed; $g$ is the coupling constant). Equating the exact Green function to the one-loop quantum contribution built of the same exact Green functions is the main postulate of the "old" conformal bootstrap. Schwinger-Dyson equations of the type (\ref{1}), with a "tadpole" self-energy and a contribution from the massless "bare" Lagrangian in the RHS, are used in theories with dynamical generation of mass (gap), such as superconductivity and certain models of spontaneous symmetry breaking. In some models the restriction to planar diagrams in the RHS of (\ref{1}) may be justified as keeping the "most divergent" contributions, or in the framework of the $1/N$ expansion. However, Kenneth Wilson in his 1982 Nobel Lecture \cite{Wilson} criticized this approximation as ungrounded.
Nevertheless this approximation is widely used, and we shall use it in the present paper as well, having in mind that interesting results are the best justification of any postulate. To see the meaning of Eq. (\ref{1}) in the AdS/CFT context, let us assume that $X_{1,2}$, $X$, $Y$ in (\ref{1}) are bulk coordinates in $AdS_{d + 1}$ and direct $X_{1, 2}$ to the horizon (the AdS boundary at $z_{0} \to 0$ in Poincare coordinates). Then the LHS of (\ref{1}) becomes a conformal correlator of the boundary conformal theory, whereas $G(X_{1}, X)$, $G(Y, X_{2})$ in the RHS become the corresponding bulk-to-boundary propagators. Thus the RHS of (\ref{1}) becomes the quantum one-loop self-energy contribution to the boundary-to-boundary correlator; this Witten diagram is called the "bubble" \cite{Giombi1}. The spectra of conformal dimensions obtained in \cite{Alt0} were calculated under the simplifying assumption that the Green functions forming the bubble in the RHS of (\ref{1}) may be replaced by the corresponding harmonic (Wightman) functions. Here we abandon this assumption. It is well known that one-loop diagrams in the RHS of (\ref{1}) are plagued by UV divergences. In particular, this is seen in the divergence of the double-integral spectral representation of the bubble \cite{Giombi1}. To overcome this difficulty we propose to apply to Witten bubble diagrams the double-trace UV-to-IR flow approach used in \cite{Mitra} - \cite{Diaz2} for UV-finite calculations of tadpoles and quantum vacuum energies of scalar and spinor bulk fields in spaces of arbitrary dimensions. This "flow" is just the difference of two similar Witten diagrams built of the UV or IR Green functions, and it proves to be finite and well defined.
Most generally this approach means that instead of the standard quantum generating functional (symbolically) \begin{equation} \label{2} Z [j;G] = ({\rm{Det}}G)^{-1/2}\, e^{L_{int}\left(i\frac{\delta}{\delta j}\right)} \, e^{\left(\frac{1}{2}jGj\right)} \end{equation} (here $G$, $L_{int}$, $j$ are the free-field Green function, the interaction Lagrangian and the source of the field) the ratio \begin{equation} \label{3} {\widetilde Z} [j;G^{UV}, G^{IR}] = \frac{Z [j;G^{UV}]}{Z [j;G^{IR}]} = \frac{({\rm{Det}}G^{UV})^{-1/2}\, e^{L_{int}\left(i\frac{\delta}{\delta j}\right)} \, e^{\left(\frac{1}{2}jG^{UV}j\right)}}{({\rm{Det}}G^{IR})^{-1/2}\, e^{L_{int}\left(i\frac{\delta}{\delta j}\right)} \, e^{\left(\frac{1}{2}jG^{IR}j\right)}} \end{equation} of two quantum functionals, determined by Green functions ($G^{UV}$ and $G^{IR}$) possessing two different asymptotics at the AdS boundary, must be considered as the quantum generating functional for Witten diagrams. This means in particular that the self-energy in the RHS of (\ref{1}) is defined as the difference of conventional Witten diagrams built from the products of two $G^{UV}$ and two $G^{IR}$ Green functions respectively. Then the diverging double spectral integrals cancel in $(G^{UV})^{2} - (G^{IR})^{2}$ and the remaining terms are UV-finite; see Sec. 3. In Sec. 2 the familiar expressions used in the bulk of the paper are collected. In Sec. 3 the expression for the UV-finite one-loop self-energy correlator is obtained; it proves to be surprisingly simple in $AdS_{5}$, that is, for $d = 4$. This is one of the main results of the paper. In Sec. 4 "old" conformal bootstrap equations in the AdS/CFT context are written down for the $O(N)$ symmetric model of $N$ scalar fields $\psi_{i}$ of one and the same conformal dimension $\Delta_{\psi}$ interacting with the conformally invariant Hubbard-Stratonovich auxiliary scalar field. The "old" conformal bootstrap then gives a non-trivial spectral equation for $\Delta_{\psi}$.
This equation, together with the calculation of its roots obeying the unitarity bound, is another result of the paper. In the Conclusion possible directions of future work are outlined. \section{Preliminaries} \quad We work in $AdS_{d+1}$ in Poincare Euclidean coordinates $Z = \{z_{0}, {\vec z}\,\}$, where the AdS curvature radius $R_{AdS}$ is set equal to one: \begin{equation} \label{4} ds^{2} = \frac{dz_{0}^{2} + d {\vec z}\,^{2}}{z_{0}^{2}}, \end{equation} and consider bulk scalar fields. A bulk field $\phi (X)$ of mass $m$ is dual to a boundary conformal operator $O_{\Delta^{IR}} (\vec x)$ or to its "shadow" operator $O_{\Delta^{UV}}(\vec x)$ with scaling dimensions \begin{equation} \label{5} \Delta_{\phi}^{IR} = \frac{d}{2} + \sqrt{\frac{d^{2}}{4} + m^{2}} > \frac{d}{2}, \, \, \, \, \Delta_{\phi}^{UV} = d - \Delta_{\phi}^{IR} < \frac{d}{2}. \end{equation} We take the normalization of the scalar field bulk-to-boundary propagator $G^{\partial B}_{\Delta} (Z; \vec x)$ and of the corresponding conformal correlator as in \cite{Giombi1}: \begin{eqnarray} \label{6} G^{\partial B}_{\Delta} (Z; \vec x) = \lim_{\stackrel {x_{0} \to 0}{}} \left[\frac{G_{\Delta}^{BB} (Z, X)}{(x_{0})^{\Delta}}\right] = C_{\Delta}\, \left [\frac{z_{0}}{z_{0}^{2} + (\vec z - \vec x)^{2}}\right]^{\Delta}, \nonumber \\ \\ C_{\Delta} = \frac{\Gamma (\Delta)}{2\pi^{d/2}\Gamma \left(1 + \Delta - \frac{d}{2}\right)}, \qquad \qquad \qquad \qquad \nonumber \end{eqnarray} and: \begin{equation} \label{7} <O_{\Delta}({\vec x}) O_{\Delta} ({\vec y})> = \lim_{\stackrel{x_{0} \to 0}{y_{0} \to 0}} \left[\frac{G_{\Delta}^{BB} (X, Y)}{(x_{0}\,y_{0})^{\Delta}}\right]= \frac{C_{\Delta}}{P_{xy}^{\Delta}}, \, \, \, P_{xy} = |{\vec x} - {\vec y}|^{2} \end{equation} The bulk-to-bulk IR ($\Delta = \Delta^{IR}$) scalar field Green function $G_{\Delta}^{IR} (X, Y)$, which vanishes at infinity $x_{0}, y_{0} \to \infty$, possesses a K\"all\'en-Lehmann type spectral representation \cite{Penedones} - \cite{Bekaert}, \cite{Giombi1}:
\begin{equation} \label{8} G^{IR}_{\Delta} (X, Y) = \int_{-\infty}^{+\infty} \frac{\Omega_{\nu,0}(X,Y)\,d\nu}{[\nu^{2} + (\Delta - \frac{d}{2})^{2}]}, \qquad \qquad \qquad \qquad \end{equation} where the numerator of the integrand is the scalar field harmonic function, which admits a split representation and is proportional to the difference (marked here with a tilde) of the IR and UV bulk Green functions: \begin{equation} \label{9} \Omega_{\nu,0}(X,Y) = \frac{\nu^{2}}{\pi} \, \int \, G^{\partial B}_{\frac{d}{2} + i\nu}(X, {\vec x}_{a})\, G^{\partial B}_{\frac{d}{2} - i\nu}(Y, {\vec x}_{a})\, d^{d}{\vec x}_{a} = \frac{i\nu}{2\pi}\,{\widetilde G}_{\frac{d}{2} + i\nu}, \qquad \end{equation} \begin{equation} \label{10} {\widetilde G}_{\Delta}(X, Y) = G_{\Delta}^{IR} - G_{d-\Delta}^{UV} = (d - 2\Delta)\, \int \, G^{\partial B}_{\Delta}(X, {\vec x}_{a})\, G^{\partial B}_{d-\Delta}(Y, {\vec x}_{a})\, d^{d}{\vec x}_{a}. \end{equation} The spectral representation for $G^{UV}_{d - \Delta}$ was analyzed in detail in \cite{Giombi1}, but in the end it reduces to the identity: \begin{equation} \label{11} G^{UV}_{d - \Delta}(X, Y) = G^{IR}_{\Delta}(X, Y) - {\widetilde G}_{\Delta}(X, Y), \end{equation} where $G^{IR}_{\Delta}(X, Y)$ and ${\widetilde G}_{\Delta}(X, Y)$ are given in (\ref{8}) and (\ref{10}).
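The dimension formulas (\ref{5}) and the shadow relation $\Delta^{UV} = d - \Delta^{IR}$ are straightforward to check numerically; a minimal sketch (the helper name is ours):

```python
from math import sqrt

def dimensions(d, m2):
    """IR and shadow (UV) scaling dimensions of a bulk scalar of
    mass squared m2 in AdS_{d+1}:
        Delta_IR = d/2 + sqrt(d^2/4 + m2),  Delta_UV = d - Delta_IR.
    Requires m2 >= -d^2/4 (the Breitenlohner-Freedman bound)."""
    nu = sqrt(d * d / 4.0 + m2)
    delta_ir = d / 2.0 + nu
    return delta_ir, d - delta_ir

# In AdS_5 (d = 4) a massless bulk scalar is dual to a Delta = 4 operator.
d_ir, d_uv = dimensions(4, 0.0)
```

The two roots always sum to $d$, which is the shadow relation used repeatedly above.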
We shall also need the expression for the AdS/CFT tree-level 3-point vertex \cite{Freedman}, \cite{Penedones}, \cite{Paulos}, \cite{Giombi1}: \begin{eqnarray} \label{12} \Gamma_{\Delta_{1}, \Delta_{2}, \Delta_{3}} ({\vec x}_{1}, {\vec x}_{2}, {\vec x}_{3}) = \int \,G^{\partial B}_{\Delta_{1}} (X; {\vec x}_{1})\, G^{\partial B}_{\Delta_{2}} (X; {\vec x}_{2}) \, G^{\partial B}_{\Delta_{3}} (X; {\vec x}_{3}) \, dX = \nonumber \\ \\ = \, \frac{B(\Delta_{1}, \Delta_{2}, \Delta_{3})}{P_{12}^{\delta_{12}}\,P_{13}^{\delta_{13}}\, P_{23}^{\delta_{23}}}, \qquad \, \qquad \, \qquad \, \qquad \, \qquad \nonumber \end{eqnarray} where \begin{equation} \label{13} \delta_{12} = \frac{\Delta_{1} + \Delta_{2} - \Delta_{3}}{2}; \,\,\, \delta_{13} = \frac{\Delta_{1} + \Delta_{3} - \Delta_{2}}{2}; \,\,\, \delta_{23} = \frac{\Delta_{2} + \Delta_{3} - \Delta_{1}}{2}, \end{equation} \begin{equation} \label{14} B(\Delta_{1}, \Delta_{2}, \Delta_{3}) = \frac{\pi^{d/2}}{2}\, \left( \prod\limits_{i=1}^{3}\frac{C_{\Delta_{i}}}{\Gamma (\Delta_{i})}\right) \cdot \Gamma \left(\frac{\Sigma \Delta_{i} - d}{2}\right)\cdot \Gamma(\delta_{12})\, \Gamma(\delta_{13}) \, \Gamma (\delta_{23}).
\end{equation} Some well known \cite{Symanzik2}, \cite{Parisi}, \cite{Fradkin}, \cite{Giombi1} conformal integrals will also be used below: \begin{equation} \label{15} \int \frac{d^{d}{\vec y}}{P_{1y}^{\Delta_{1}} \,P_{2y}^{\Delta_{2}}\, P_{3y}^{\Delta_{3}}} \stackrel {\Sigma \Delta_{i} = d} {=} \frac{A(\Delta_{1}, \Delta_{2}, \Delta_{3})}{P_{12}^{\frac{d}{2} - \Delta_{3}}\,P_{13}^{\frac{d}{2} - \Delta_{2}}\, P_{23}^{\frac{d}{2} - \Delta_{1}}}, \end{equation} and \begin{equation} \label{16} \int \frac{d^{d}{\vec y}}{P_{1y}^{\Delta_{1}} \,P_{2y}^{\Delta_{2}}} = \frac{A(\Delta_{1}, \Delta_{2}, d - \Delta_{1} - \Delta_{2})} {P_{12}^{\Delta_{1} + \Delta_{2} - \frac{d}{2}}}, \end{equation} where \begin{equation} \label{17} A(\Delta_{1}, \Delta_{2}, \Delta_{3}) = \frac{\pi^{d/2}\, \Gamma (\frac{d}{2} - \Delta_{1}) \, \Gamma (\frac{d}{2} - \Delta_{2}) \, \Gamma (\frac{d}{2} - \Delta_{3})}{\Gamma (\Delta_{1}) \, \Gamma(\Delta_{2}) \, \Gamma (\Delta_{3})}. \end{equation} We shall also need the divergent integral (\ref{16}) at $\Delta_{1} = \Delta_{2} = d/2$: \begin{equation} \label{18} \int \frac{d^{d}{\vec y}}{P_{1y}^{\frac{d}{2}} \,P_{2y}^{\frac{d}{2}}} = \frac{A(\frac{d}{2}, \frac{d}{2}, 0)}{P_{12}^{\frac{d}{2}}} = \frac{\pi^{d/2} \, \Gamma (0)}{\Gamma(\frac{d}{2})} \cdot \frac{1}{P_{12}^{\frac{d}{2}}}, \end{equation} the possible regularizations of (\ref{18}) are discussed in \cite{Giombi1}. \section{UV-finite one-loop self-energy correlator} \qquad We consider here three bulk scalar fields $\phi_{i}(Z)$ ($i = 1, 2, 3$) with conformal dimensions $\Delta_{\phi_{i}}$ (\ref{5}) and the triple bulk interaction: \begin{equation} \label{19} L_{int} = g\,\phi_{1}(Z) \,\phi_{2}(Z) \,\phi_{3}(Z).
\end{equation} The RHS of the bootstrap equation (\ref{1}) when $X_{1}$, $X_{2}$ are put at the horizon is the 2-point one-loop self-energy boundary-boundary correlator (bubble) ${\cal M}^{{\rm 2pt \, bub}}_{\Delta_{\phi_{1}}|\Delta_{\phi_{2}}\Delta_{\phi_{3}}}({\vec x}_{1}, {\vec x}_{2})$ built of two bulk-to-boundary propagators (\ref{6}) of the "external" field $\phi_{1}$ and of the product of two intermediate bulk Green functions of the fields $\phi_{2}$ and $\phi_{3}$; this is reflected by the index $\Delta_{\phi_{1}}|\Delta_{\phi_{2}}\Delta_{\phi_{3}}$. From now on the IR option $(\Delta_{\phi_{i}} > d/2)$ will be considered, for which the spectral representation (\ref{8}) of the intermediate Green functions is valid. The product of two bulk Green functions gives rise to the UV divergence of the bubble. Thus we \emph{postulate} that the bubble is built using the "double-trace UV to IR deformation" quantum generating functional ${\widetilde Z} = Z^{UV}/Z^{IR}$ (\ref{3}) (see the discussion in the Introduction), and we also mark it with a tilde: \begin{eqnarray} \label{20} {\cal {\widetilde M}}^{{\rm 2pt \, bub}}_{\Delta_{\phi_{1}}|\Delta_{\phi_{2}}\Delta_{\phi_{3}}}({\vec x}_{1}, {\vec x}_{2}) = {\cal M}^{{\rm 2pt \, bub} \, UV}_{\Delta_{\phi_{1}}|\Delta_{\phi_{2}}\Delta_{\phi_{3}}}({\vec x}_{1}, {\vec x}_{2}) - {\cal M}^{{\rm 2pt \, bub} \, IR}_{\Delta_{\phi_{1}}|\Delta_{\phi_{2}}\Delta_{\phi_{3}}}({\vec x}_{1}, {\vec x}_{2}) = \nonumber \\ \\ = g^{2} \, \int\int G^{\partial B}_{\Delta_{\phi_{1}}}(X ; {\vec x}_{1})\, {\widetilde \Pi}_{\Delta_{\phi_{2}}, \Delta_{\phi_{3}}}(X, Y) \, G^{\partial B}_{\Delta_{\phi_{1}}}(Y ; {\vec x}_{2})\,dXdY, \qquad \qquad \nonumber \end{eqnarray} where \begin{eqnarray} \label{21} {\widetilde \Pi}_{\Delta_{\phi_{2}}, \Delta_{\phi_{3}}}(X, Y) = \qquad \qquad \qquad \qquad \qquad \nonumber \\ \\ = G^{UV}_{\Delta_{\phi_{2}}}(X, Y) \, G^{UV}_{\Delta_{\phi_{3}}} (X, Y) - G^{IR}_{\Delta_{\phi_{2}}}(X, Y) \, G^{IR}_{\Delta_{\phi_{3}}} (X, Y) = \qquad \nonumber \\ \nonumber
\\ = {\widetilde G}_{\Delta_{\phi_{2}}} \, {\widetilde G}_{\Delta_{\phi_{3}}} - G^{IR}_{\Delta_{\phi_{2}}} \, {\widetilde G}_{\Delta_{\phi_{3}}} - {\widetilde G}_{\Delta_{\phi_{2}}} \, G^{IR}_{\Delta_{\phi_{3}}}, \qquad \qquad \qquad \nonumber \end{eqnarray} The last equality in (\ref{21}) is actually an identity following from the definition of ${\widetilde G_{\Delta}}$ (\ref{10}). The UV-divergent terms of $[G^{IR}]^{2}$ and $[G^{UV}]^{2}$ (which are given by the double integral spectral representations \cite{Giombi1}) cancel in (\ref{20}), whereas, as will be shown below, the terms of (\ref{20}) corresponding to the last two terms in the final line of (\ref{21}) are given by the convergent spectral integrals (\ref{8}). Since the numerators of these integrals are proportional to $\widetilde G$ (see (\ref{9}), (\ref{10})), all three terms of (\ref{20})-(\ref{21}) are expressed through one and the same correlator (we call it the "harmonic bubble" ${\cal H}$), in which both intermediate Green functions are replaced by their harmonic counterparts ${\widetilde G}$ (\ref{10}): \begin{eqnarray} \label{22} {\cal H}^{\rm {2pt \, bub}}_{\Delta_{\phi_{1}}|\Delta_{\phi_{2}}\Delta_{\phi_{3}}}\,({\vec x}_{1}, {\vec x}_{2}) = \qquad \qquad \qquad \qquad \nonumber \\ \\ = g^{2} \int\int G^{\partial B}_{\Delta_{\phi_{1}}}(X ; {\vec x}_{1})\,{\widetilde G}_{\Delta_{\phi_{2}}}(X, Y) \, {\widetilde G}_{\Delta_{\phi_{3}}} (X, Y)\,G^{\partial B}_{\Delta_{\phi_{1}}}(Y ; {\vec x}_{2})\,dXdY.
\nonumber \end{eqnarray} Thus, taking into account (\ref{21}), (\ref{8}), (\ref{9}), (\ref{22}), the double-trace deformation of the bubble ${\widetilde M}$ (\ref{20}) takes the form: \begin{eqnarray} \label{23} {\cal {\widetilde M}}^{{\rm 2pt \, bub}}_{\Delta_{\phi_{1}}|\Delta_{\phi_{2}}\Delta_{\phi_{3}}}({\vec x}_{1}, {\vec x}_{2}) = - \int_{-\infty}^{+\infty} \frac{i\nu \, d\nu}{2 \pi} \, \frac{{\cal H}^{\rm {2pt \, bub}}_{\Delta_{\phi_{1}}|\frac{d}{2} + i\nu, \Delta_{\phi_{3}}}\,({\vec x}_{1}, {\vec x}_{2})}{[\nu^{2} + (\Delta_{\phi_{2}} - \frac{d}{2})^{2}]} - \qquad \nonumber \\ \\ - \int_{-\infty}^{+\infty} \frac{i\nu \, d\nu}{2 \pi} \, \frac{{\cal H}^{\rm {2pt \, bub}}_{\Delta_{\phi_{1}}| \Delta_{\phi_{2}}, \frac{d}{2} + i\nu}\,({\vec x}_{1}, {\vec x}_{2})}{[\nu^{2} + (\Delta_{\phi_{3}} - \frac{d}{2})^{2}]} \, + \, {\cal H}^{\rm {2pt \, bub}}_{\Delta_{\phi_{1}}| \Delta_{\phi_{2}}, \Delta_{\phi_{3}}}\,({\vec x}_{1}, {\vec x}_{2}). \qquad \nonumber \end{eqnarray} The harmonic bubble (\ref{22}) was calculated in \cite{Giombi1}. The evident steps are as follows: (1) to use in (\ref{22}) the split representations of ${\widetilde G}_{\Delta}$ (\ref{10}); (2) to perform the two bulk integrals (\ref{12}), which gives in (\ref{22}) the convolution of two vertices (\ref{12}) over the two boundary points ${\vec x}_{a}$, ${\vec x}_{b}$; (3) to perform the familiar conformal integral (\ref{15}) over ${\vec x}_{b}$. 
Then (\ref{22}) reduces to: \begin{eqnarray} \label{24} {\cal {\widetilde H}}^{{\rm 2pt \, bub}}_{\Delta_{\phi_{1}}|\Delta_{\phi_{2}}\Delta_{\phi_{3}}}\,({\vec x}_{1}, {\vec x}_{2}) = g^{2} \, (d - 2\Delta_{\phi_{2}}) (d - 2\Delta_{\phi_{3}})\,\frac{1}{P_{12}^{\Delta_{\phi_{1}} - \frac{d}{2}}}\, \int \frac{d{\vec x}_{a}}{P_{1a}^{\frac{d}{2}}P_{2a}^{\frac{d}{2}}} \, \cdot \nonumber \\ \nonumber \\ \nonumber \cdot \, B(\Delta_{\phi_{1}}, \Delta_{\phi_{2}}, \Delta_{\phi_{3}})\, B(\Delta_{\phi_{1}}, d - \Delta_{\phi_{2}}, d - \Delta_{\phi_{3}}) \, A(\delta_{12}, \delta_{13}, d - \Delta_{\phi_{1}}) = \quad \nonumber \\ \\ = \frac{C_{\Delta_{\phi_{1}}}}{P_{12}^{\Delta_{\phi_{1}}}} \, \cdot \, \frac{g_{R}^{2}}{F(\Delta_{\phi_{1}})} \, \cdot \, {\bf {\cal R}}(\Delta_{\phi_{1}}, \Delta_{\phi_{2}}, \Delta_{\phi_{3}}), \qquad \qquad \qquad \nonumber \end{eqnarray} $P_{12} = |{\vec x}_{1} - {\vec x}_{2}|^{2}$, etc.; the standard divergent conformal integral (\ref{18}) is absorbed here, together with some coefficients, into the "bare" coupling constant $g^{2}$, defining the renormalized coupling constant as: \begin{equation} \label{25} g_{R}^{2} = g^{2}\,\frac{P_{12}^{\frac{d}{2}}}{32\pi^{d}}\,\int \frac{d{\vec x}_{a}}{P_{1a}^{\frac{d}{2}}P_{2a}^{\frac{d}{2}}}. 
\end{equation} The coefficient ${\bf {\cal R}}(\Delta_{\phi_{1}}, \Delta_{\phi_{2}}, \Delta_{\phi_{3}})$ in the last line of (\ref{24}) is equal to: \begin{eqnarray} \label{26} {\bf {\cal R}}(\Delta_{\phi_{1}}, \Delta_{\phi_{2}}, \Delta_{\phi_{3}}) = \Gamma\left(\frac{{\Sigma_{i}\Delta_{\phi_{i}}} - d}{2}\right) \, \Gamma\left(\frac{2d - {\Sigma_{i}\Delta_{\phi_{i}}}}{2}\right) \, \cdot \nonumber \\ \\ \cdot \, \frac{\Gamma(\delta_{12}) \, \Gamma(\delta_{13}) \, \Gamma(\delta_{23}) \,\Gamma\left(\frac{d}{2} - \delta_{12}\right) \, \Gamma\left(\frac{d}{2} - \delta_{13}\right) \, \Gamma\left(\frac{d}{2} - \delta_{23}\right)}{\Pi_{i=1}^{3}\left[\Gamma\left(\frac{d}{2} - \Delta_{\phi_{i}}\right) \, \Gamma\left(1 + \Delta_{\phi_{i}} - \frac{d}{2}\right)\right]}, \nonumber \end{eqnarray} it is symmetric in its three arguments and changes sign when any of its arguments is replaced by the conjugate one: $\Delta_{\phi_{1}} \to d - \Delta_{\phi_{1}}$, etc. For brevity we also introduced in the last line of (\ref{24}): \begin{equation} \label{27} F(\Delta) = \frac{\Gamma(\Delta) \, \Gamma(d - \Delta)}{\Gamma\left(\Delta - \frac{d}{2}\right) \, \Gamma\left(\frac{d}{2} - \Delta\right)}; \, \, \, F_{(d=4)}(\Delta) = (\Delta - 1)(\Delta - 2)^{2}(\Delta - 3). \end{equation} \vspace{0.3cm} The expressions for ${\bf {\cal R}}$ (\ref{26}) and $F$ (\ref{27}) are obtained in (\ref{24}) using the formulas for $\delta_{ij}$, $B$, $A$ and also for $C_{\Delta}$ hidden in $B$, given in (\ref{13}), (\ref{14}), (\ref{17}) and (\ref{6}) respectively. 
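Since the polynomial form of $F_{(d=4)}$ quoted in (\ref{27}) is used repeatedly below, it is easy to cross-check it numerically against the $\Gamma$-function definition. A minimal illustrative sketch (an aside, not part of the derivation; the function names are ours):

```python
import math

def F_gamma(delta, d=4):
    # F(Delta) of Eq. (27), defined through Gamma functions;
    # valid away from the integer poles of the Gamma factors.
    return (math.gamma(delta) * math.gamma(d - delta)
            / (math.gamma(delta - d / 2) * math.gamma(d / 2 - delta)))

def F_poly(delta):
    # The closed polynomial form quoted in (27) for d = 4.
    return (delta - 1) * (delta - 2) ** 2 * (delta - 3)
```

At a generic non-integer point, e.g. $\Delta = 2.3$, the two expressions agree; the same polynomial reproduces $F_{(d=4)}(5/2) = -3/16$ and $F_{(d=4)}(\lambda + 2) = \lambda^{2}(\lambda^{2} - 1)$ used below in deriving the spectral equation.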
Thus substitution of (\ref{24}) into (\ref{23}) finally gives: \begin{eqnarray} \label{28} {\cal {\widetilde M}}^{{\rm 2pt \, bub}}_{\Delta_{\phi_{1}}|\Delta_{\phi_{2}}\Delta_{\phi_{3}}}({\vec x}_{1}, {\vec x}_{2}) = \frac{C_{\Delta_{\phi_{1}}}}{P_{12}^{\Delta_{\phi_{1}}}} \, \cdot \, \frac{g_{R}^{2}}{F(\Delta_{\phi_{1}})} \, \Biggl[{\bf {\cal R}}(\Delta_{\phi_{1}}, \Delta_{\phi_{2}}, \Delta_{\phi_{3}}) - \qquad \nonumber \\ \\ - \int_{-\infty}^{+\infty} \frac{i\nu \, d\nu}{2 \pi} \, \frac{{\bf {\cal R}}(\Delta_{\phi_{1}}, \frac{d}{2} + i\nu, \Delta_{\phi_{3}})}{[\nu^{2} + (\Delta_{\phi_{2}} - \frac{d}{2})^{2}]} -\int_{-\infty}^{+\infty} \frac{i\nu \, d\nu}{2 \pi} \, \frac{{\bf {\cal R}}(\Delta_{\phi_{1}}, \Delta_{\phi_{2}}, \frac{d}{2} + i\nu)}{[\nu^{2} + (\Delta_{\phi_{3}} - \frac{d}{2})^{2}]}\, \Biggr]. \nonumber \end{eqnarray} \vspace{0.3cm} The coefficient ${\bf {\cal R}}$ (\ref{26}) is the main object for us. In particular, for $d = 4$, when one of its arguments is changed to $2 + i\nu$ (as for $\Delta_{\phi_{2}}$ in the second term in the square brackets in the RHS of (\ref{28}) and for $\Delta_{\phi_{3}}$ in the third term), its dependence on $\nu$ is given by elementary functions (see \cite{Ryzhik}, formula 8.332.4): \begin{eqnarray} \label{29} {\bf {\cal R}}_{(d =4)}(\Delta_{\phi_{1}}, 2 + i\nu, \Delta_{\phi_{3}}) = \frac{\sin \pi\Delta_{\phi_{1}}}{\pi} \, \frac{\sin \pi\Delta_{\phi_{3}}}{\pi} \, \frac{\sinh \pi\nu}{i\, \pi} \, \cdot \qquad \qquad \nonumber \\ \\ \cdot \, \frac{\pi^{2} [\nu^{2} + (\Delta_{\phi_{1}} - \Delta_{\phi_{3}})^{2}]}{2\, [\cosh \pi\nu - \cos\pi(\Delta_{\phi_{1}} - \Delta_{\phi_{3}})]} \, \frac{\pi^{2} [\nu^{2} + (\Delta_{\phi_{1}} + \Delta_{\phi_{3}} - 4)^{2}]}{2\, [\cosh \pi\nu - \cos\pi(\Delta_{\phi_{1}} + \Delta_{\phi_{3}} - 4)]}. \quad \nonumber \end{eqnarray} \vspace{0.3cm} Thus for $d = 4$, taking (\ref{29}) into account, one obtains for the UV-finite one-loop self-energy correlator (\ref{28}): \begin{eqnarray} \label{30} {\cal 
{\widetilde M}}^{{\rm 2pt \, bub}\,(d = 4)}_{\Delta_{\phi_{1}}|\Delta_{\phi_{2}}\Delta_{\phi_{3}}}({\vec x}_{1}, {\vec x}_{2}) = \frac{C_{\Delta_{\phi_{1}}}}{P_{12}^{\Delta_{\phi_{1}}}} \, \frac{g_{R}^{2}}{F_{(d = 4)}(\Delta_{\phi_{1}})} \, \cdot \, \Biggl[ \, {\bf {\cal R}}_{(d =4)}(\Delta_{\phi_{1}}, \Delta_{\phi_{2}}, \Delta_{\phi_{3}}) - \nonumber \\ \nonumber \\ \nonumber - \, \, \, \frac{\sin \pi\Delta_{\phi_{1}} \, \sin \pi\Delta_{\phi_{3}}}{8} \, \, {\rm{\bf I}}(\Delta_{\phi_{2}} - 2, \Delta_{\phi_{1}} - \Delta_{\phi_{3}}, \Delta_{\phi_{1}} + \Delta_{\phi_{3}} - 4) - \qquad \nonumber \\ \\ - \, \frac{\sin \pi\Delta_{\phi_{1}} \, \sin \pi\Delta_{\phi_{2}}}{8} \, \, {\rm{\bf I}}(\Delta_{\phi_{3}} - 2, \Delta_{\phi_{1}} - \Delta_{\phi_{2}}, \Delta_{\phi_{1}} + \Delta_{\phi_{2}} - 4) \Biggr], \quad \qquad \nonumber \end{eqnarray} where we have introduced the definite integral: \begin{equation} \label{31} {\rm{\bf I}}(a, b, c) = \int_{-\infty}^{+ \infty} \, \frac{\nu \sinh \pi\nu \, d\nu}{\nu^{2} + a^{2}} \, \frac{\nu^{2} + b^{2}}{[\cosh \pi\nu - \cos \pi b]} \, \frac{\nu^{2} + c^{2}}{[\cosh \pi\nu - \cos \pi c]}. \end{equation} \section{$O(N)$-symmetric model: successful "hunting for numbers"} \qquad As noted in the Introduction, the AdS/CFT version of the "old" conformal bootstrap is obtained if the coordinates $X_{1,2}$, $X$, $Y$ in the general formula (\ref{1}) are considered as bulk coordinates in $AdS_{d + 1}$, and $X_{1,2}$ are placed at the horizon with the appropriate normalization, as in (\ref{6}), (\ref{7}). This procedure transforms the LHS of (\ref{1}) into the elementary boundary-to-boundary conformal correlator (\ref{7}), whereas the RHS of (\ref{1}) transforms into the UV-divergent 2-point one-loop self-energy correlator; we use its UV-finite redefined form (\ref{20}) built with the general approach (\ref{3}) of the double-trace from UV to IR deformation. 
Thus the "old" conformal bootstrap in the AdS/CFT context reads: \begin{equation} \label{32} \frac{C_{\Delta_{\phi_{1}}}}{P_{12}^{\Delta_{\phi_{1}}}} \, = \, {\cal {\widetilde M}}^{{\rm 2pt \, bub}}_{\Delta_{\phi_{1}}|\Delta_{\phi_{2}}\Delta_{\phi_{3}}}({\vec x}_{1}, {\vec x}_{2}). \end{equation} The space dependence of ${\widetilde M}$ is singled out in front of (\ref{28}) and cancels in (\ref{32}) against the same dependence of the LHS of (\ref{32}). Two more bootstrap equations are obtained from (\ref{32}) by permutation of the fields. Thus we have three spectral equations for four unknown variables: $\Delta_{\phi_{1}}$, $\Delta_{\phi_{2}}$, $\Delta_{\phi_{3}}$ and $g_{R}^{2}$. The most consistent way to get the missing fourth equation for the coupling constant would be to require the validity of the vertex "old" bootstrap equation, which in the AdS/CFT context means equating the 3-point vertex (\ref{12}) to the one-loop triangle Witten diagram, redefined according to prescription (\ref{3}). This bootstrap equation is easy to write down, but it is difficult to work out. Another option is to reduce the number of unknown variables by fixing the conformal dimension of one of the fields. This will be done below. Let us look at the $O(N)$ symmetric model of $N$ scalar fields $\psi_{k}$ with the quartic interaction term $\sim (\Sigma_{k}\psi_{k}^{2})^{2}$ on the $AdS_{d + 1}$ background. This model may always be reduced to the triple interaction \begin{equation} \label{33} L_{int} = g \, \sigma(Z)\,\Sigma_{k}\psi_{k}^{2}(Z) \end{equation} with the introduction of the auxiliary Hubbard-Stratonovich field $\sigma(Z)$. Thus we consider a theory of $N + 1$ scalar fields with the interaction (\ref{33}), where the $N$ fields $\psi_{k}$ have one and the same conformal dimension $\Delta_{\psi} > d/2$ and the conformal dimension $\Delta_{\sigma}$ of the Hubbard-Stratonovich field is fixed. 
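The reduction of the quartic vertex to (\ref{33}) is the standard Hubbard-Stratonovich trick. Schematically (a sketch only: the overall normalization, the sign of the quartic coupling $u$ and the integration contour for $\sigma$ depend on conventions not fixed here),

```latex
\begin{equation*}
\exp\Big[\, u \int \big( \Sigma_{k}\psi_{k}^{2}(Z) \big)^{2}\, dZ \Big]
\;\propto\;
\int {\cal D}\sigma \,
\exp\Big[ \int \Big( -\frac{1}{4u}\, \sigma^{2}(Z)
  + \sigma(Z)\, \Sigma_{k}\psi_{k}^{2}(Z) \Big)\, dZ \Big],
\end{equation*}
```

which follows from completing the square in $\sigma$; after rescaling $\sigma \to g\,\sigma$ the term linear in $\sigma$ takes the form of the cubic interaction (\ref{33}), while the $\sigma^{2}$ term contributes to the quadratic part of the $\sigma$ action.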
In what follows the Hubbard-Stratonovich field is considered to be conformally invariant in $AdS_{d + 1}$, that is: \begin{equation} \label{34} \Delta_{\sigma} = \Delta^{\rm conf}_{\sigma} = \frac{d}{2} + \frac{1}{2} \, \, \, \left(= \frac{5}{2} \ \ {\rm for} \ \ d = 4\right) \end{equation} (we take the IR option (\ref{5}) for $\Delta_{\psi}$, $\Delta_{\sigma}$, for which the spectral representation (\ref{8}) of the intermediate Green functions is valid). Then there are $N$ identical bootstrap Eqs. (\ref{32}) written for the fields $\psi_{k}$ and one more Eq. (\ref{32}) for the field $\sigma$, whose RHS must be multiplied by $N$ since in (\ref{33}) the field $\sigma(Z)$ interacts with each of the fields $\psi_{k}$. To write down these spectral equations it is sufficient to put in (\ref{30}), (\ref{26}) $\Delta_{\phi_{1}} = \Delta_{\phi_{2}} = \Delta_{\psi}$ and $\Delta_{\phi_{3}} = \Delta_{\sigma}$ (\ref{34}). In what follows we consider $d = 4$. Thus there are two bootstrap spectral equations obtained from (\ref{32}), (\ref{30}), (\ref{33}). 
One for the field $\psi$: \begin{eqnarray} \label{35} 1 \, = \, \frac{g_{R}^{2}}{F_{(d = 4)}(\Delta_{\psi})} \, \cdot \, \Biggl[ \, {\bf {\cal R}}_{(d =4)}(\Delta_{\psi}, \Delta_{\psi}, \Delta_{\sigma}) - \qquad \qquad \nonumber \\ \nonumber \\ \nonumber - \, \, \, \frac{\sin \pi\Delta_{\psi} \, \sin \pi\Delta_{\sigma}}{8} \, \, {\rm{\bf I}}(\Delta_{\psi} - 2, \Delta_{\psi} - \Delta_{\sigma}, \Delta_{\psi} + \Delta_{\sigma} - 4) - \qquad \nonumber \\ \\ - \, \frac{(\sin \pi\Delta_{\psi})^{2}}{8} \, \, {\rm{\bf I}}(\Delta_{\sigma} - 2, 0, 2 \Delta_{\psi} - 4) \Biggr], \quad \qquad \qquad \qquad \nonumber \end{eqnarray} and the other for the field $\sigma$: \begin{eqnarray} \label{36} 1 \, = \, N \, \frac{g_{R}^{2}}{F_{(d = 4)}(\Delta_{\sigma})} \, \cdot \, \Biggl[ \, {\bf {\cal R}}_{(d =4)}(\Delta_{\psi}, \Delta_{\psi}, \Delta_{\sigma}) - \qquad \qquad \nonumber \\ \\ - \, \, \, 2 \, \frac{\sin \pi\Delta_{\sigma} \, \sin \pi\Delta_{\psi}}{8} \, \, {\rm{\bf I}}(\Delta_{\psi} - 2, \Delta_{\sigma} - \Delta_{\psi}, \Delta_{\psi} + \Delta_{\sigma} - 4)\Biggr], \qquad \nonumber \end{eqnarray} for ${\bf {\cal R}}_{(d =4)}(\Delta_{\psi}, \Delta_{\psi}, \Delta_{\sigma})$, $F_{(d = 4)}(\Delta_{\psi})$ and ${\rm{\bf I}}(a,b,c)$ see (\ref{26}), (\ref{27}), (\ref{31}). After eliminating $g_{R}^{2}$ and substituting $\Delta_{\sigma}$ from (\ref{34}) (for $d = 4$), the sought spectral equation for $\Delta_{\psi}$ is obtained; it is convenient to write it down in terms of the variable $\lambda$: \begin{equation} \label{37} \lambda = \Delta_{\psi} - 2 > 0; \, \, \, \lambda < 1. \end{equation} Here $\lambda > 0$ since $\Delta_{\psi} > d/2 = 2$; the inequality $\lambda < 1$ is the unitarity bound requirement. 
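The integrand of ${\rm{\bf I}}(a, b, c)$ in (\ref{31}) is even in $\nu$ and decays like $\nu^{5} e^{-\pi\nu}$, so the integral converges fast and is straightforward to evaluate numerically. A minimal sketch (an illustrative aside, not the computation actually used below; the truncation, the grid and the function name are our own choices):

```python
import math

def I_spec(a, b, c, cutoff=20.0, n=20000):
    # Trapezoid-rule evaluation of the spectral integral I(a, b, c)
    # of Eq. (31).  The integrand is even in nu, so we integrate over
    # [0, cutoff] and double; the nu^5 * exp(-pi*nu) decay makes the
    # tail beyond cutoff = 20 negligible at double precision.
    def integrand(nu):
        if nu == 0.0:
            return 0.0  # the nu * sinh(pi*nu) factor vanishes at the origin
        return (nu * math.sinh(math.pi * nu) / (nu**2 + a**2)
                * (nu**2 + b**2) / (math.cosh(math.pi * nu) - math.cos(math.pi * b))
                * (nu**2 + c**2) / (math.cosh(math.pi * nu) - math.cos(math.pi * c)))
    h = cutoff / n
    total = 0.5 * (integrand(0.0) + integrand(cutoff))
    for k in range(1, n):
        total += integrand(k * h)
    return 2.0 * h * total
```

By construction ${\rm{\bf I}}(a, b, c)$ is symmetric under $b \leftrightarrow c$ and positive for real arguments; both properties are convenient numerical checks.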
Finally, taking into account that according to (\ref{26}), (\ref{34}), (\ref{37}) \begin{equation} \label{38} {\bf {\cal R}}_{(d = 4)}\left(\lambda + 2, \lambda + 2, \frac{5}{2}\right) = \frac{\pi}{8}\, \left(\lambda^{2} - \frac{1}{16}\right) \, \frac{1 - \cos 2\pi\lambda}{\cos 2\pi\lambda}, \end{equation} and that according to (\ref{27}) $F_{(d = 4)}(\lambda + 2) = \lambda^{2}(\lambda^{2} - 1)$ and $F_{(d = 4)}(5/2) = - 3/16$, the following spectral equation for $\lambda$ (\ref{37}) is obtained from (\ref{34})-(\ref{36}), (\ref{38}): \begin{eqnarray} \label{39} 2\pi \sin\pi\lambda \Bigg[1 + N \frac{16}{3} \lambda^{2}(\lambda^{2} - 1)\Bigg] \left(\lambda^{2} - \frac{1}{16}\right) - \sin\pi\lambda \, \cos2\pi\lambda \, {\rm{\bf I}}\left(\frac{1}{2}, 0, 2 \lambda \right) - \nonumber \\ \\ - \cos2\pi\lambda \, \Bigg[1 + 2 N \frac{16}{3} \lambda^{2}(\lambda^{2} - 1)\Bigg] \, \, {\rm{\bf I}}\left(\lambda, \lambda - \frac{1}{2}, \lambda + \frac{1}{2}\right) = 0, \qquad \qquad \nonumber \end{eqnarray} where the integral ${\rm{\bf I}}$ is defined in (\ref{31}). For every $N = 1, 2, 3, 4$ there are three roots of Eq. 
(\ref{39}) obeying the unitarity bound $0 < \lambda < 1$; the values of $g_{R}^{2}$ (in units of the proper powers of the AdS curvature) calculated from (\ref{35}) or (\ref{36}) for each of these roots are also shown below: \begin{eqnarray} \label{40} N = 1: \, \lambda = \, \,\, 0.500 \, (g_{R}^{2} = 0.75); \, \, \, 0.875 \, (g_{R}^{2} = - 2.74); \,\, \, 0.965 \, (g_{R}^{2} = 36.2); \nonumber \\ \nonumber \\ \nonumber N = 2: \, \lambda = \, \, \, 0.296 \, (g_{R}^{2} = 0.72); \, \, \, 0.928 \, (g_{R}^{2} = - 16.9); \,\, \, 0.978 \, (g_{R}^{2} = 18.7); \nonumber \\ \\ N = 3: \, \lambda = \, \, \, 0.227 \, (g_{R}^{2} = 0.64); \, \, \, 0.936 \, (g_{R}^{2} = - 17.9); \,\, \, 0.985 \, (g_{R}^{2} = 14.0); \nonumber \\ \nonumber \\ \nonumber N = 4: \, \lambda = \, \, \, 0.189 \, (g_{R}^{2} = 0.57); \, \, \, 0.938 \, (g_{R}^{2} = - 19.6); \,\, \, 0.988 \, (g_{R}^{2} = 12.5); \, \nonumber \end{eqnarray} these values of $\lambda$ correspond to conformal dimensions $2 < \Delta^{IR}_{\psi} < 3$ (see (\ref{37})). Certainly, for every positive solution $\lambda > 0$ (\ref{40}) of the "old" conformal bootstrap equations (\ref{32}) there exists a negative solution of the same modulus, corresponding to the conjugate conformal dimension $\Delta^{UV}_{\psi} = 4 - \Delta^{IR}_{\psi} = 2 - |\lambda|$. However, it would be a mistake simply to reverse the sign of $\lambda$ in the spectral equation (\ref{39}). A spectral equation identical to (\ref{39}) is obtained for $|\lambda|$, that is for the "conjugate sector", if in the general bootstrap equations (\ref{32}) the conformal dimension of every "external" tail is taken "UV", that is less than $d/2$. For the model under consideration and for $d = 4$ this means that in Eq. (\ref{32}) written down for the "external" field $\psi$ it is necessary to replace $\Delta_{\phi_{1}} \to 4 - \Delta_{\psi}$ ($\Delta_{\psi} > 2$) and in Eq. 
(\ref{32}) for the "external" field $\sigma$ we must take $\Delta_{\sigma} = 3/2$, while the conformal dimensions of the "intermediate" fields should remain "IR", so that the spectral representations (\ref{8}) of the Green functions forming the bubble remain valid. With such substitutions in (\ref{32}) the RHSs of the bootstrap equations (\ref{35}), (\ref{36}) change sign, which does not change the spectral equation (\ref{39}) written for $|\lambda|$, but replaces $g_{R}^{2} \to - g_{R}^{2}$. \section{Conclusion} \qquad The solutions (\ref{40}) are $O(N)$ symmetric; they were obtained under the assumption that the conformal dimensions of all fields $\psi_{k}$ coincide. An immediate task for the future would be to examine the possibility of spontaneous $O(N)$ symmetry breaking in the model (\ref{33}), when every field $\psi_{k}$ is equipped with its own conformal dimension $\Delta_{\psi_{k}}$ and asymmetric solutions of the self-consistent bootstrap equations (\ref{32}) must be found. In particular, for $d = 4$ this means that the bootstrap Eq. (\ref{35}) holds for every $\Delta_{\psi_{k}}$, whereas in (\ref{36}) the factor $N$ in the RHS is replaced by a sum over $k$ of the corresponding functions of $\Delta_{\psi_{k}}$. The simplest case of interacting bulk scalar fields is considered in this paper, and the physical meaning of the solutions (\ref{40}) is rather vague. However, many questions of modern physics are connected with spin-$1/2$ fermions, and the point is that the "flavor" mass hierarchy, which is still a mystery, may in principle be explained in the frame of the two-brane Randall-Sundrum model \cite{Randall}, when certain natural ("twisted") boundary conditions are imposed on the bulk spinor fields with certain bulk masses of these fields (see e.g. \cite{Neubert}, \cite{Pomarol}). 
Thus the calculation of the fermions' bulk masses (that is, of their conformal dimensions) in the frame of the proposed "old" conformal bootstrap approach may open the way to solving the problem of the fermion mass hierarchy. Generalizing the approach of the present paper to spin-$1/2$ fields in the frame of the Yukawa model or QED on $AdS_{5}$ may be one more immediate task for the future. \section*{Acknowledgments} The author is grateful to Ruslan Metsaev for fruitful discussions and to the participants of the seminar in the Theoretical Physics Department of the P.N. Lebedev Physical Institute for stimulating questions.
\section{Introduction} The study of images of topological spaces under certain sequence-covering maps is an important question in general topology \cite{GMT, LJ, Li, LL1, Ls3, LY, LZGG, LC2, YP}. S. Lin and P. F. Yan in \cite{LY} proved that each sequence-covering and compact map on metric spaces is a 1-sequence-covering map. Recently, F. C. Lin and S. Lin in \cite{LL1} proved that each sequence-covering and boundary-compact map on metric spaces is a 1-sequence-covering map. The authors also posed the following question in \cite{LL1}: \begin{question}\cite[Question 3.6]{LL1} Let $f:X\rightarrow Y$ be a sequence-covering and boundary-compact map. Is $f$ a 1-sequence-covering map if $X$ is a space with a point-countable base or a developable space? \end{question} In this paper, we shall give an affirmative answer to Question 1.1. S. Lin in \cite[Theorem 2.2]{Ls4} proved that if $X$ is a metrizable space and $f$ is a sequentially quotient and compact map, then $f$ is a pseudo-sequence-covering map. Recently, F. C. Lin and S. Lin in \cite{LL1} proved that if $X$ is a metrizable space and $f$ is a sequentially quotient and boundary-compact map, then $f$ is a pseudo-sequence-covering map. Hence we have the following Question 1.2. \begin{question} Let $f:X\rightarrow Y$ be a sequentially quotient and boundary-compact map. Is $f$ a pseudo-sequence-covering map if $X$ is a space with a point-countable base or a developable space? \end{question} On the other hand, the authors in \cite{YP} proved that each closed sequence-covering map on metric spaces is a 1-sequence-covering map. Hence we have the following Question 1.3. \begin{question} Let $f:X\rightarrow Y$ be a closed sequence-covering map. Is $f$ a 1-sequence-covering map if $X$ is a regular space with a point-countable base or a developable space? \end{question} In this paper, we shall give an affirmative answer to Question 1.2, which improves some results in \cite{LL1} and \cite{Ls4}, respectively. 
Moreover, we give an affirmative answer to Question 1.3 when $X$ has a point-countable base or $X$ is $g$-metrizable. In \cite{TV}, V. V. Tkachuk introduced strongly monotonically monolithic spaces. In this paper, we also prove that strongly monotonically monolithic spaces are preserved by open and closed maps, and that spaces with a $\sigma$-point-discrete $k$-network are preserved by closed sequence-covering maps. \vskip 1cm\setlength{\parindent}{1cm} \section{Definitions and terminology} Let $X$ be a space. For $P\subset X$, $P$ is a {\it sequential neighborhood} of $x$ in $X$ if every sequence converging to $x$ is eventually in $P$. \begin{definition} Let $\mathcal{P}=\bigcup_{x\in X}\mathcal{P}_{x}$ be a cover of a space $X$ such that for each $x\in X$, (a)\ if $U,V\in \mathcal{P}_{x}$, then $W\subset U\cap V$ for some $W\in \mathcal{P}_{x}$; (b)\ $\mathcal{P}_{x}$ is a network of $x$ in $X$, i.e., $x\in\bigcap\mathcal{P}_x$, and if $x\in U$ with $U$ open in $X$, then $P\subset U$ for some $P\in\mathcal P_x$. (1) $\mathcal{P}$ is called an {\it $sn$-network} for $X$ if each element of $\mathcal{P}_{x}$ is a sequential neighborhood of $x$ in $X$ for each $x\in X$. $X$ is called {\it $snf$-countable}\cite{Ls3} if $X$ has an $sn$-network $\mathcal P$ such that each $\mathcal P_x$ is countable. (2) $\mathcal{P}$ is called a {\it weak base}\cite{Ar} for $X$ if whenever $G\subset X$ satisfies that for each $x\in G$ there is a $P\in \mathcal{P}_{x}$ with $P\subset G$, then $G$ is open in $X$. $X$ is {\it $g$-metrizable}\cite{Si2} if $X$ is regular and has a $\sigma$-locally finite weak base. \end{definition} \begin{definition} Let $f:X\rightarrow Y$ be a map. \begin{enumerate} \item $f$ is a {\it compact} (resp. {\it separable}) map if each $f^{-1}(y)$ is compact (separable) in $X$; \item $f$ is a {\it boundary-compact} (resp. 
{\it boundary-separable}) map if each $\partial f^{-1}(y)$ is compact (separable) in $X$; \item $f$ is a {\it sequence-covering map}\cite{Si1} if whenever $\{y_{n}\}$ is a convergent sequence in $Y$ there is a convergent sequence $\{x_{n}\}$ in $X$ with each $x_{n}\in f^{-1}(y_{n})$; \item $f$ is a {\it 1-sequence-covering map}\cite{Ls2} if for each $y\in Y$ there is $x\in f^{-1}(y)$ such that whenever $\{y_{n}\}$ is a sequence converging to $y$ in $Y$ there is a sequence $\{x_{n}\}$ converging to $x$ in $X$ with each $x_{n}\in f^{-1}(y_{n})$; \item $f$ is a {\it sequentially quotient map}\cite{BS} if whenever $\{y_{n}\}$ is a convergent sequence in $Y$ there is a convergent sequence $\{x_{k}\}$ in $X$ with each $x_{k}\in f^{-1}(y_{n_{k}})$; \item $f$ is a {\it pseudo-sequence-covering map}\cite{GMT, ILT} if for each convergent sequence $L$ in $Y$ there is a compact subset $K$ in $X$ such that $f(K)=\overline{L}$; \end{enumerate} \end{definition} It is obvious that \setlength{\unitlength}{1cm} \begin{picture}(15,1.5)\thicklines \put(2.1,0){\makebox(0,0){1-sequence-covering maps}} \put(4.3,0){\vector(1,0){1}} \put(7.4,0){\makebox(0,0){sequence-covering maps}} \put(9,0.3){\vector(2,1){1}} \put(9.5,1.1){\makebox(0,0){pseudo-sequence-covering maps}} \put(9,-0.3){\vector(2,-1){1}} \put(10,-1){\makebox(0,0){sequential quotient maps.}} \end{picture} \vskip 1.4cm We remind the reader that the sequence-covering maps defined above are different from the sequence-covering maps defined in \cite{GMT}, which are called pseudo-sequence-covering maps in this paper. \begin{definition}\cite{MN} Let $A$ be a subset of a space $X$. An open family $\mathcal{N}$ of subsets of $X$ is called an {\it external base} of $A$ in $X$ if for any $x\in A$ and any open subset $U$ with $x\in U$ there is a $V\in \mathcal{N}$ such that $x\in V\subset U$. \end{definition} Similarly, we can define an {\it externally weak base} of a subset $A$ of a space $X$. 
Throughout this paper all spaces are assumed to be Hausdorff, and all maps are continuous and onto. The letter $\mathbb{N}$ will denote the set of positive integers. Readers may refer to \cite{En, Gr, Ls3} for unstated definitions and terminology. \vskip 1cm\setlength{\parindent}{1cm} \section{Sequence-covering and boundary-compact maps} Let ${\it \Omega}$ be the class of all topological spaces $X$ such that each compact subset $K\subset X$ is metrizable and has a countable neighborhood base in $X$. In fact, E. A. Michael and K. Nagami in \cite{MN} have proved that $X\in {\it \Omega}$ if and only if $X$ is the image of some metric space under an open and compact-covering\footnote{Let $f:X\rightarrow Y$ be a map. $f$ is called a {\it compact-covering map}\cite{MN} if whenever $L$ is compact in $Y$ there is a compact subset $K$ of $X$ such that $f(K)= L$.} map. It is easy to see that if a space $X$ is developable or has a point-countable base, then $X\in {\it \Omega}$ (see \cite{AB} and \cite{TV}, respectively). In this paper, when we speak of an $snf$-countable space $Y$, it is always assumed that $Y$ has an $sn$-network $\mathcal{P}=\cup\{\mathcal{P}_{y}:y\in Y\}$ such that $\mathcal{P}_{y}$ is countable and closed under finite intersections for each point $y\in Y$. \begin{lemma}\label{l5} Let $f:X\rightarrow Y$ be a sequence-covering and boundary-compact map, where $Y$ is $snf$-countable. For each non-isolated point $y\in Y$, there exists a point $x_{y}\in\partial f^{-1}(y)$ such that whenever $U$ is an open subset with $x_{y}\in U$, there exists a $P\in\mathcal{P}_{y}$ satisfying $P\subset f(U)$. \end{lemma} \begin{proof} Suppose not; then there exists a non-isolated point $y\in Y$ such that for every point $x\in \partial f^{-1}(y)$, there is an open neighborhood $U_{x}$ of $x$ such that $P\not\subseteq f(U_{x})$ for every $P\in \mathcal{P}_{y}$. Then $\partial f^{-1}(y)\subset\cup\{U_{x}: x\in \partial f^{-1}(y)\}$. 
Since $\partial f^{-1}(y)$ is compact, there exists a finite subfamily $\mathcal{U}\subset \{U_{x}: x\in \partial f^{-1}(y)\}$ such that $\partial f^{-1}(y)\subset\cup\mathcal{U}$. We denote $\mathcal{U}$ by $\{U_{i}: 1\leq i\leq n_{0}\}$. Assume that $\mathcal{P}_{y}=\{P_{n}:n\in\mathbb{N}\}$ and $\mathcal{W}_{y}=\{F_{n}=\bigcap_{i=1}^{n}P_{i}:n\in\mathbb{N}\}$. It is obvious that $\mathcal{W}_{y}\subset \mathcal{P}_{y}$ and $F_{n+1}\subset F_{n}$ for every $n\in \mathbb{N}$. For each $1\leq m\leq n_{0}$ and $n\in\mathbb{N}$, there exists $x_{n, m}\in F_{n}\setminus f(U_{m})$. Denote $y_{k}=x_{n, m}$, where $k=(n-1)n_{0}+m$. Since $\mathcal{P}_{y}$ is a network at the point $y$ and $F_{n+1}\subset F_{n}$ for every $n\in \mathbb{N}$, $\{y_{k}\}$ is a sequence converging to $y$ in $Y$. Because $f$ is a sequence-covering map, $\{y_{k}\}$ is the image of some sequence $\{x_{k}\}$ converging to $x\in \partial f^{-1}(y)$ in $X$. From $x\in \partial f^{-1}(y)\subset\cup\mathcal{U}$ it follows that there exists $1\leq m_{0}\leq n_{0}$ such that $x\in U_{m_{0}}$. Therefore, $\{x\}\cup\{x_{k}: k\geq k_{0}\}\subset U_{m_{0}}$ for some $k_{0}\in \mathbb{N}$. Hence $\{y\}\cup\{y_{k}: k\geq k_{0}\}\subset f(U_{m_{0}})$. However, we can choose an $n> k_{0}$ such that $k=(n-1)n_{0}+m_{0}\geq k_{0}$ and $y_{k}=x_{n, m_{0}}$, which implies that $x_{n, m_{0}}\in f(U_{m_{0}})$. This contradicts $x_{n, m_{0}}\in F_{n}\setminus f(U_{m_{0}})$. \end{proof} The next lemma is obvious. \begin{lemma}\label{l11} Let $f:X\rightarrow Y$ be a 1-sequence-covering map, where $X$ is $snf$-countable. Then $Y$ is $snf$-countable. \end{lemma} \begin{theorem}\label{t7} Let $f:X\rightarrow Y$ be a sequence-covering and boundary-compact map, where $X$ is first-countable. Then $Y$ is $snf$-countable if and only if $f$ is a 1-sequence-covering map. \end{theorem} \begin{proof} Necessity. Let $y$ be a non-isolated point in $Y$. 
Since $Y$ is $snf$-countable, it follows from Lemma~\ref{l5} that there exists a point $x_{y}\in\partial f^{-1}(y)$ such that whenever $U$ is an open neighborhood of $x_{y}$, there is a $P\in \mathcal{P}_{y}$ satisfying $P\subset f(U)$. Let $\{B_{n}: n\in \mathbb{N}\}$ be a countable neighborhood base at the point $x_{y}$ such that $B_{n+1}\subset B_{n}$ for each $n\in\mathbb{N}$. Suppose that $\{y_{n}\}$ is a sequence in $Y$ converging to $y$. Next, we take a sequence $\{x_{n}\}$ in $X$ as follows. Since $B_{n}$ is an open neighborhood of $x_{y}$, it follows from Lemma~\ref{l5} that there exists a $P_{n}\in\mathcal{P}_{y}$ such that $P_{n}\subset f(B_{n})$ for each $n\in \mathbb{N}$. Because every $P\in\mathcal{P}_{y}$ is a sequential neighborhood of $y$, it is easy to see that for each $n\in\mathbb{N}$, $f(B_{n})$ is a sequential neighborhood of $y$ in $Y$. Therefore, for each $n\in\mathbb{N}$, there is an $i_{n}\in\mathbb{N}$ such that $y_{i}\in f(B_{n})$ for every $i\geq i_{n}$. We may assume that $1<i_{n}<i_{n+1}$ for every $n\in \mathbb{N}$. Hence, for each $j\in \mathbb{N}$, we take \[x_{j}\in\left\{ \begin{array}{lll} f^{-1}(y_{j}), & \mbox{if } j<i_{1},\\ f^{-1}(y_{j})\cap B_{n}, & \mbox{if } i_{n}\leq j<i_{n+1}.\end{array}\right.\] We denote $S=\{x_{j}:j\in \mathbb{N}\}$. It is easy to see that $S$ converges to $x_{y}$ in $X$ and $f(S)=\{y_{n}\}$. Therefore, $f$ is a 1-sequence-covering map. Sufficiency. It is easy to see that $Y$ is $snf$-countable by Lemma~\ref{l11}. \end{proof} We do not know whether, in Theorem~\ref{t7}, $f$ is a 1-sequence-covering map when $X$ is only first-countable. However, we have the following Theorem~\ref{t0}, which gives an affirmative answer to Question 1.1. First, we give some technical lemmas. \begin{lemma}\cite{MN}\label{l0} If $X\in {\it \Omega}$, then every compact subset of $X$ has a countable external base. \end{lemma} \begin{lemma}\label{l1} Let $f:X\rightarrow Y$ be a sequence-covering and boundary-compact map. 
If $X\in {\it \Omega}$, then $Y$ is $snf$-countable. \end{lemma} \begin{proof} Let $y$ be a non-isolated point in $Y$. Then $\partial f^{-1}(y)$ is non-empty and compact in $X$. Therefore, $\partial f^{-1}(y)$ has a countable external base $\mathcal{U}$ in $X$ by Lemma~\ref{l0}. Let $$\mathcal{V}=\{\cup\mathcal{F}:\mbox{There is a finite subfamily}\ \mathcal{F}\subset \mathcal{U}\ \mbox{with}\ \partial f^{-1}(y)\subset\cup\mathcal{F}\}.$$ Obviously, $\mathcal{V}$ is countable. We now prove that $f(\mathcal{V})$ is a countable $sn$-network at the point $y$. (1) $f(\mathcal{V})$ is a network at $y$. Let $y\in U$. Obviously, $\partial f^{-1}(y)\subset f^{-1}(U)$. For each $x\in\partial f^{-1}(y)$, there exists a $U_{x}\in\mathcal{U}$ such that $x\in U_{x}\subset f^{-1}(U)$. Therefore, $\partial f^{-1}(y)\subset\cup\{U_{x}: x\in\partial f^{-1}(y)\}$. Since $\partial f^{-1}(y)$ is compact, it follows that there exists a finite subfamily $\mathcal{F}\subset\{U_{x}: x\in\partial f^{-1}(y)\}$ such that $\partial f^{-1}(y)\subset\cup\mathcal{F}\subset f^{-1}(U)$. It is easy to see that $\cup\mathcal{F}\in\mathcal{V}$ and $y\in\cup f(\mathcal{F})\subset U$. (2) For any $P_{1}, P_{2}\in f(\mathcal{V})$, there exists a $P_{3}\in f(\mathcal{V})$ such that $P_{3}\subset P_{1}\cap P_{2}$. It is obvious that there exist $V_{1}, V_{2}\in \mathcal{V}$ such that $f(V_{1})=P_{1}, f(V_{2})=P_{2}$, respectively. Since $\partial f^{-1}(y)\subset V_{1}\cap V_{2}$, it follows by an argument similar to that in (1) that there exists a $V_{3}\in\mathcal{V}$ such that $\partial f^{-1}(y)\subset V_{3}\subset V_{1}\cap V_{2}$. Let $P_{3}=f(V_{3})$. Hence $P_{3}\subset f(V_{1}\cap V_{2})\subset f(V_{1})\cap f(V_{2})=P_{1}\cap P_{2}$. (3) For each $P\in f(\mathcal{V})$, $P$ is a sequential neighborhood of $y$. Let $\{y_{n}\}$ be any sequence in $Y$ which converges to $y$ in $Y$. Since $f$ is a sequence-covering map, $\{y_{n}\}$ is the image of some sequence $\{x_{n}\}$ converging to $x\in\partial f^{-1}(y)\subset X$. 
It follows from $P\in f(\mathcal{V})$ that there exists a $V\in \mathcal{V}$ such that $P=f(V)$. Therefore, $\{x_{n}\}$ is eventually in $V$, which implies that $\{y_{n}\}$ is eventually in $P$. Therefore, $f(\mathcal{V})$ is a countable $sn$-network at the point $y$. \end{proof} \begin{theorem}\label{t0} Let $f:X\rightarrow Y$ be a sequence-covering and boundary-compact map. If $X\in {\it \Omega}$, then $f$ is a 1-sequence-covering map. \end{theorem} \begin{proof} From Lemma~\ref{l1} it follows that $Y$ is $snf$-countable. Therefore, $f$ is a 1-sequence-covering map by Theorem~\ref{t7}. \end{proof} Theorem~\ref{t0} easily yields the following Corollary~\ref{c0}, which gives an affirmative answer to Question 1.1. \begin{corollary}\label{c0} Let $f:X\rightarrow Y$ be a sequence-covering and boundary-compact map. Suppose also that at least one of the following conditions holds: \begin{enumerate} \item $X$ has a point-countable base; \item $X$ is a developable space. \end{enumerate} Then $f$ is a 1-sequence-covering map. \end{corollary} \begin{lemma}\label{l6} Let $f:X\rightarrow Y$ be a sequence-covering map, where $Y$ is $snf$-countable and $\partial f^{-1}(y)$ has a countable external base for each point $y\in Y$. Then, for each non-isolated point $y\in Y$, there exists a point $x_{y}\in\partial f^{-1}(y)$ such that whenever $U$ is an open subset with $x_{y}\in U$, there exists a $P\in\mathcal{P}_{y}$ satisfying $P\subset f(U)$. \end{lemma} \begin{proof} Suppose not; then there exists a non-isolated point $y\in Y$ such that for every point $x\in \partial f^{-1}(y)$, there is an open neighborhood $U_{x}$ of $x$ such that $P\not\subseteq f(U_{x})$ for every $P\in \mathcal{P}_{y}$. Let $\mathcal{B}$ be a countable external base for $\partial f^{-1}(y)$. Therefore, for each $x\in \partial f^{-1}(y)$, there exists a $B_{x}\in \mathcal{B}$ such that $x\in B_{x}\subset U_{x}$. 
For each $x\in \partial f^{-1}(y)$, it follows that $P\not\subseteq f(B_{x})$ whenever $P\in \mathcal{P}_{y}$. Assume that $\mathcal{P}_{y}=\{P_{n}:n\in\mathbb{N}\}$ and $\mathcal{W}_{y}=\{F_{n}=\bigcap_{i=1}^{n}P_{i}:n\in\mathbb{N}\}$. We denote $\{B_{x}\in\mathcal{B}:x\in\partial f^{-1}(y)\}$ by $\{B_{m}:m\in\mathbb{N}\}$. For each $n, m\in\mathbb{N}$, it follows that there exists $x_{n, m}\in F_{n}\setminus f(B_{m})$. For $n\geq m$, we denote $y_{k}=x_{n, m}$ with $k=m+n(n-1)/2$. Since $\mathcal{P}_{y}$ is a network at point $y$ and $F_{n+1}\subset F_{n}$ for every $n\in \mathbb{N}$, $\{y_{k}\}$ is a sequence converging to $y$ in $Y$. Because $f$ is a sequence-covering map, $\{y_{k}\}$ is an image of some sequence $\{x_{k}\}$ converging to $x\in \partial f^{-1}(y)$ in $X$. From $x\in \partial f^{-1}(y)\subset\cup\{B_{m}: m\in \mathbb{N}\}$ it follows that there exists an $m_{0}\in \mathbb{N}$ such that $B_{m_{0}}$ is an open neighborhood of $x$. Therefore, $\{x\}\cup\{x_{k}: k\geq k_{0}\}\subset B_{m_{0}}$ for some $k_{0}\in \mathbb{N}$. Hence $\{y\}\cup\{y_{k}: k\geq k_{0}\}\subset f(B_{m_{0}})$. However, we can choose a $k\geq k_{0}$ and an $n\geq m_{0}$ such that $y_{k}=x_{n, m_{0}}$, which implies that $x_{n, m_{0}}\in f(B_{m_{0}})$. This contradicts $x_{n, m_{0}}\in F_{n}\setminus f(B_{m_{0}})$. \end{proof} \begin{theorem}\label{t1} Let $f:X\rightarrow Y$ be a sequence-covering and boundary-separable map. If $X$ has a point-countable base and $Y$ is $snf$-countable, then $f$ is an 1-sequence-covering map. \end{theorem} \begin{proof} Obviously, $\partial f^{-1}(y)$ has a countably external base for each point $y\in Y$. Therefore, the result follows easily from Lemma~\ref{l6} and the proof of Theorem~\ref{t7}. \end{proof} {\bf Remark} We cannot omit the condition ``$Y$ is $snf$-countable'' in Theorem~\ref{t1}.
Indeed, the sequence fan $S_{\omega}$\footnote{$S_{\omega}$ is the space obtained from the topological sum of $\omega$ many copies of the convergent sequence by identifying all the limit points to a point.} is an image of a metric space under a sequence-covering $s$-map by \cite[Corollary 2.4.4]{Ls3}. However, $S_{\omega}$ is not $snf$-countable, and therefore, $S_{\omega}$ is not an image of a metric space under an 1-sequence-covering map. In this section, we finally give an affirmative answer to Question 1.2. \begin{lemma}\cite{BS}\label{l2} Let $f:X\rightarrow Y$ be a map. If $X$ is a Fr\'echet space\footnote{$X$ is said to be a {\it Fr\'echet space}\cite{Fr} if, whenever $x\in\overline{P}$ for some $P\subset X$, there is a sequence in $P$ converging to $x$ in $X$.}, then $f$ is a pseudo-open map\footnote{$f$ is a {\it pseudo-open map}\cite{Ar2} if whenever $f^{-1}(y)\subset U$ with $U$ open in $X$, then $y\in\mbox{Int}(f(U))$.} if and only if $Y$ is a Fr\'echet space and $f$ is a sequentially quotient map. \end{lemma} \begin{theorem} Let $f:X\rightarrow Y$ be a boundary-compact map. If $X\in {\it \Omega}$, then $f$ is a sequentially quotient map if and only if it is a pseudo-sequence-covering map. \end{theorem} \begin{proof} First, suppose that $f$ is sequentially quotient. If $\{y_{n}\}$ is a non-trivial sequence converging to $y_{0}$ in $Y$, put $S_{1}=\{y_{0}\}\cup \{y_{n}:n\in \mathbb{N}\},\ X_{1}=f^{-1}(S_{1})$ and $g=f|_{X_{1}}$. Thus $g$ is a sequentially quotient, boundary-compact map, so $g$ is a pseudo-open map by Lemma~\ref{l2}. Since $X\in {\it \Omega}$, let $\{U_{n}\}_{n\in\mathbb{N}}$ be a decreasing neighborhood base of the compact subset $\partial g^{-1}(y_{0})$ in $X_{1}$. Thus $\{U_{n}\cup\mbox{Int}(g^{-1}(y_{0}))\}_{n\in\mathbb{N}}$ is a decreasing neighborhood base of $g^{-1}(y_{0})$ in $X_{1}$. Let $V_{n}=U_{n}\cup\mbox{Int}(g^{-1}(y_{0}))$ for each $n\in \mathbb{N}$.
Then $y_{0}\in\mbox{Int}(g(V_{n}))$, thus there exists an $i_{n}\in \mathbb{N}$ such that $y_{i}\in g(V_{n})$ for each $i\geq i_{n}$, so $g^{-1}(y_{i})\cap V_{n}\neq \emptyset$. We can suppose that $1<i_{n}<i_{n+1}$. For each $j\in \mathbb{N}$, we take \[x_{j}\in\left\{ \begin{array}{lll} f^{-1}(y_{j}), & \mbox{if } j<i_{1},\\ f^{-1}(y_{j})\cap V_{n}, & \mbox{if } i_{n}\leq j<i_{n+1}.\end{array}\right.\] Let $K=\partial g^{-1}(y_{0})\cup \{x_{j}:j\in\mathbb{N}\}$. Clearly, $K$ is a compact subset in $X_{1}$ and $g(K)=S_{1}$. Thus $f(K)=S_{1}$. Therefore, $f$ is a pseudo-sequence-covering map. Conversely, suppose that $f$ is a pseudo-sequence-covering map. If $\{y_n\}$ is a convergent sequence in $Y$, then there is a compact subset $K$ in $X$ such that $f(K)=\overline{\{y_n\}}$. For each $n\in\mathbb{N}$, take a point $x_n\in f^{-1}(y_n)\cap K$. Since $K$ is compact and metrizable, $\{x_n\}$ has a convergent subsequence $\{x_{n_k}\}$. So $f$ is sequentially quotient. \end{proof} \begin{corollary} Let $f:X\rightarrow Y$ be a boundary-compact map. Suppose also that at least one of the following conditions holds: \begin{enumerate} \item $X$ has a point-countable base; \item $X$ is a developable space. \end{enumerate} Then $f$ is a sequentially quotient map if and only if it is a pseudo-sequence-covering map. \end{corollary} \begin{question}\label{q1} Let $f:X\rightarrow Y$ be a sequence-covering and boundary-compact (or compact) map. Is $f$ an 1-sequence-covering map if one of the following conditions is satisfied? \begin{enumerate} \item Every compact subset of $X$ is metrizable; \item Every compact subset of $X$ has countable character. \end{enumerate} \end{question} {\bf Remark} If $X$ satisfies the conditions (1) and (2) in Question~\ref{q1}, then $f$ is an 1-sequence-covering map by Theorem~\ref{t0}. \vskip 0.5cm \section{Sequence-covering maps on $g$-metrizable spaces} In this section, we mainly discuss sequence-covering maps on spaces with a specially weak base. 
\begin{lemma}\label{l3} Let $f:X\rightarrow Y$ be a sequence-covering and boundary-compact map. For each non-isolated point $y\in Y$, there exist a point $x\in\partial f^{-1}(y)$ and a decreasing weak neighborhood base $\{B_{xi}\}_{i}$ at $x$ such that for each $n\in \mathbb{N}$, there exist a $P\in\mathcal{P}_{y}$ and an $i\in \mathbb{N}$ with $P\subset f(B_{xi})$ if $X$ and $Y$ satisfy the following (1) and (2): \begin{enumerate} \item $Y$ is $snf$-countable; \item Every compact subset of $X$ has a countably externally weak base in $X$. \end{enumerate} \end{lemma} \begin{proof} Suppose not; then there exists a non-isolated point $y\in Y$ such that for every point $x\in \partial f^{-1}(y)$ and every decreasing weak neighborhood base $\{B_{xi}\}_{i}$ of $x$, there is an $n\in \mathbb{N}$ such that $P\not\subseteq f(B_{xn})$ for every $P\in \mathcal{P}_{y}$. Since $\partial f^{-1}(y)$ is compact, it follows that $\partial f^{-1}(y)$ has a countably externally weak base $\mathcal{B}$ in $X$. Without loss of generality, we can assume that $\mathcal{B}$ is closed under finite intersections. Therefore, for each $x\in \partial f^{-1}(y)$, there exists a $B_{x}\in \mathcal{B}$ such that $P\not\subseteq f(B_{x})$ for every $P\in \mathcal{P}_{y}$. An argument as in the proof of Lemma~\ref{l6} now leads to a contradiction. \end{proof} The following Lemma~\ref{l4} is easy to check, and hence we omit its proof. \begin{lemma}\label{l4} Let $X$ have a compact-countable weak base. Then every compact subset of $X$ has a countably externally weak base in $X$. \end{lemma} \begin{theorem}\label{t2} Let $f:X\rightarrow Y$ be a sequence-covering and boundary-compact map, where $X$ has a compact-countable weak base. Then $Y$ is $snf$-countable if and only if $f$ is an 1-sequence-covering map. \end{theorem} \begin{proof} Necessity. Let $y$ be a non-isolated point in $Y$.
Since $X$ has a compact-countable weak base, it follows from Lemmas~\ref{l3} and~\ref{l4} that there exist a point $x_{y}\in\partial f^{-1}(y)$ and a decreasing countable weak base $\{B_{n}: n\in \mathbb{N}\}$ at point $x_{y}$ such that for each $n\in \mathbb{N}$, there is a $P\in \mathcal{P}_{y}$ satisfying $P\subset f(B_{n})$. Suppose that $\{y_{n}\}$ is a sequence in $Y$, which converges to $y$. Then we can take a sequence $\{x_{n}\}$ in $X$ by an argument similar to that in the proof of Theorem~\ref{t7}. Therefore, $f$ is an 1-sequence-covering map. Sufficiency. By Lemma~\ref{l11}, $Y$ is $snf$-countable. \end{proof} We do not know whether the condition ``compact-countable weak base'' on $X$ can be replaced by ``point-countable weak base'' in Theorem~\ref{t2}. \begin{corollary}\label{c1} Let $f:X\rightarrow Y$ be a sequence-covering and boundary-compact map, where $X$ is $g$-metrizable. Then $Y$ is $snf$-countable if and only if $f$ is an 1-sequence-covering map. \end{corollary} Each closed sequence-covering map on metric spaces is 1-sequence-covering \cite{YP}. Now, we improve this result in the following theorem. \begin{theorem} Let $f:X\rightarrow Y$ be a closed sequence-covering map, where $X$ is $g$-metrizable. Then $f$ is an 1-sequence-covering map. \end{theorem} \begin{proof} Since $X$ is $g$-metrizable and $f$ is a closed sequence-covering map, $Y$ is $g$-metrizable~\cite[Theorem 3.3]{LC}. Therefore, $f$ is a boundary-compact map by \cite[Corollary 2.2]{LC}. Hence $f$ is an 1-sequence-covering map by Corollary~\ref{c1}. \end{proof} \begin{question} Let $f:X\rightarrow Y$ be a sequence-covering and boundary-compact map. If $X$ is $g$-metrizable, then is $f$ an 1-sequence-covering map?
\end{question} \vskip 0.5cm \section{Closed sequence-covering maps} Say that a Tychonoff space $X$ is {\it strongly monotonically monolithic} \cite{TV} if, for any $A\subset X$ we can assign an external base $\mathcal {O}(A)$ to the set $\overline{A}$ in such a way that the following conditions are satisfied: (a) $|\mathcal {O}(A)|\leq \mbox{max}\{|A|, \omega\}$; (b) if $A\subset B\subset X$ then $\mathcal {O}(A)\subset\mathcal {O}(B)$; (c) if $\alpha$ is an ordinal and we have a family $\{A_{\beta}:\beta <\alpha\}$ of subsets of $X$ such that $\beta <\beta^{\prime}<\alpha$ implies $A_{\beta}\subset A_{\beta^{\prime}}$ then $\mathcal {O}(\cup_{\beta <\alpha}A_{\beta})=\cup_{\beta <\alpha}\mathcal {O}(A_{\beta})$. From \cite[Proposition 2.5]{TV} it follows that a Tychonoff space with a point-countable base is strongly monotonically monolithic. Moreover, if $X$ is a strongly monotonically monolithic space, then it is easy to see that $X\in {\it \Omega}$ by \cite[Theorem 2.7]{TV}. \begin{lemma}\label{l9} Let $f:X\rightarrow Y$ be a closed sequence-covering map, where $X$ is a strongly monotonically monolithic space. Then $Y$ contains no closed copy of $S_{\omega}$. \end{lemma} \begin{proof} Suppose that $Y$ contains a closed copy of $S_{\omega}$, and that $\{y\}\cup\{y_{i}(n):i, n\in\mathbb{N}\}$ is a closed copy of $S_{\omega}$ in $Y$, where $y_{i}(n)\rightarrow y$ as $i\rightarrow\infty$. For every $k\in\mathbb{N}$, put $L_{k}=\cup\{y_{i}(n):i\in \mathbb{N}, n\leq k\}$. Then $L_{k}$ is a sequence converging to $y$. Let $M_{k}$ be a sequence in $X$ converging to $u_{k}\in f^{-1}(y)$ such that $f(M_{k})=L_{k}$. We rewrite $M_{k}=\cup\{x_{i}(n, k):i\in \mathbb{N}, n\leq k\}$ with each $f(x_{i}(n, k))=y_{i}(n)$. Case 1: $\{u_{k}:k\in\mathbb{N}\}$ is finite. There are a $k_{0}\in\mathbb N$ and an infinite subset $\mathbb{N}_{1}\subset \mathbb{N}$ such that $M_{k}\rightarrow u_{k_{0}}$ for every $k\in\mathbb{N}_{1}$, and then $X$ contains a closed copy of $S_{\omega}$.
Hence $X$ is not first countable. This is a contradiction. Case 2: $\{u_{k}:k\in\mathbb{N}\}$ has a non-trivial convergent sequence in $X$. Without loss of generality, we suppose that $u_{k}\rightarrow u$ as $k\rightarrow \infty$. Since $X$ is first-countable, let $\{U_{m}\}$ be a decreasing open neighborhood base of $X$ at point $u$ with $\overline{U}_{m+1}\subset U_{m}$. Then $\bigcap_{m\in\mathbb{N}}U_{m}=\{u\}$. Fix $n$ and pick $x_{i_{m}}(n, k_{m})\in U_{m}\cap\{x_{i}(n, k_{m})\}_{i}$. We can suppose that $i_{m}< i_{m+1}$. Then $\{f(x_{i_{m}}(n, k_{m}))\}_{m}$ is a subsequence of $\{y_{i}(n)\}$. Since $f$ is closed, $\{x_{i_{m}}(n, k_{m})\}_{m}$ is not discrete in $X$. Then there is a subsequence of $\{x_{i_{m}}(n, k_{m})\}_{m}$ converging to a point $b\in X$ because $X$ is a first-countable space. It is easy to see that $b=u$ by $x_{i_{m}}(n, k_{m})\in U_{m}$ for every $m\in\mathbb{N}$. Hence $x_{i_{m}}(n, k_{m})\rightarrow u$ as $m\rightarrow\infty$. Then $\{u\}\cup\{x_{i_{m}}(n, k_{m}):n, m\in\mathbb{N}\}$ is a closed copy of $S_{\omega}$ in $X$. Thus, $X$ is not first countable. This is a contradiction. Case 3: $\{u_{k}:k\in\mathbb{N}\}$ is discrete in $X$. Let $B=\{u_{k}:k\in\mathbb{N}\}\cup\{M_{k}:k\in\mathbb{N}\}$. Since $X$ is strongly monotonically monolithic, $\overline{B}$ is metrizable. Hence there exists a discrete family $\{V_{k}\}_{k\in\mathbb{N}}$ consisting of open subsets of $\overline{B}$ with $u_{k}\in V_{k}$ for each $k\in\mathbb{N}$. Pick $x_{i_{k}}(1, k)\in V_{k}\cap\{x_{i}(1, k)\}_{i}$ such that $\{f(x_{i_{k}}(1, k))\}_{k}$ is a subsequence of $\{y_{i}(1)\}$. Since $\{x_{i_{k}}(1, k)\}_{k}$ is discrete in $\overline{B}$, $\{f(x_{i_{k}}(1, k))\}_{k}$ is discrete in $Y$. This is a contradiction. In a word, $Y$ contains no closed copy of $S_{\omega}$. \end{proof} \begin{lemma}\label{l10} Let $f:X\rightarrow Y$ be a closed sequence-covering map, where $X$ is a strongly monotonically monolithic space.
Then $\partial f^{-1}(y)$ is compact for each point $y\in Y$. \end{lemma} \begin{proof} From Lemma~\ref{l9} it follows that $Y$ contains no closed copy of $S_{\omega}$. Since $X$ is a strongly monotonically monolithic space, every closed separable subset of $X$ is metrizable, and hence is normal. Therefore, $\partial f^{-1}(y)$ is countably compact for each point $y\in Y$ by \cite[Theorem 2.6]{LC}. From \cite[Theorem 2.7]{TV} it easily follows that every countably compact subset of $X$ is compact. \end{proof} \begin{theorem}\label{t6} Let $f:X\rightarrow Y$ be a closed sequence-covering map, where $X$ is a strongly monotonically monolithic space. Then $f$ is an 1-sequence-covering map. \end{theorem} \begin{proof} This follows from Lemma~\ref{l10} and Theorem~\ref{t0}. \end{proof} \begin{corollary}\label{t3} Let $f:X\rightarrow Y$ be a closed sequence-covering map, where $X$ is a Tychonoff space with a point-countable base. Then $f$ is an 1-sequence-covering map. \end{corollary} In fact, we can replace ``Tychonoff'' by ``regular'' in Corollary~\ref{t3}, and hence we have the following result. \begin{corollary}\label{c3} Let $f:X\rightarrow Y$ be a closed sequence-covering map, where $X$ is a regular space with a point-countable base. Then $f$ is an 1-sequence-covering map. \end{corollary} \begin{proof} Since $X$ has a point-countable base and $f$ is a closed sequence-covering map, $Y$ has a point-countable base by \cite[Theorem 3.1]{LC}. Therefore, $f$ is a boundary-compact map by \cite[Lemma 3.2]{LC1}. Hence $f$ is an 1-sequence-covering map by Corollary~\ref{c0}. \end{proof} We do not know whether, in Corollary~\ref{c3}, the condition ``$X$ has a point-countable base'' can be replaced by ``$X\in {\it \Omega}$''. So we have the following question. \begin{question} Let $f:X\rightarrow Y$ be a closed sequence-covering map. If $X\in {\it \Omega}$ (and $X$ is regular), then is $f$ an 1-sequence-covering map?
\end{question} \begin{corollary}\label{c2} Let $f:X\rightarrow Y$ be a closed sequence-covering map, where $X$ is a strongly monotonically monolithic space. Then $f$ is an almost-open map\footnote{$f$ is an {\it almost-open map}\cite{Ar1} if there exists a point $x_{y}\in f^{-1}(y)$ for each $y\in Y$ such that for each open neighborhood $U$ of $x_{y}$, $f(U)$ is a neighborhood of $y$ in $Y$.}. \end{corollary} \begin{proof} $f$ is an 1-sequence-covering map by Theorem~\ref{t6}. For each point $y\in Y$, there exists a point $x_{y}\in f^{-1}(y)$ satisfying Definition 2.2(4). Let $U$ be an open neighborhood of $x_{y}$. Then $f(U)$ is a sequential neighborhood of $y$. Indeed, for each sequence $\{y_{n}\}\subset Y$ converging to $y$, there exists a sequence $\{x_{n}\}\subset X$ such that $\{x_{n}\}$ converges to $x_{y}$ and $x_{n}\in f^{-1}(y_{n})$ for each $n\in \mathbb{N}$. Obviously, $\{x_{n}\}$ is eventually in $U$, and therefore, $\{y_{n}\}$ is eventually in $f(U)$. Hence $f(U)$ is a sequential neighborhood of $y$. Since $X$ is first-countable, $Y$ is a Fr\'echet space. Then $f(U)$ is a neighborhood of $y$. Otherwise, suppose $y\in Y\setminus\mbox{int}(f(U))$; then $y\in \overline{Y\setminus f(U)}$. Since $Y$ is Fr\'echet, there exists a sequence $\{y_{n}\}\subset Y\setminus f(U)$ converging to $y$. This contradicts the fact that $f(U)$ is a sequential neighborhood of $y$. Therefore, $f$ is an almost-open map. \end{proof} {\bf Remark} In \cite{TV}, V. V. Tkachuk has proved that closed maps do not preserve strongly monotonically monolithic spaces. However, if perfect maps\footnote{A map $f$ is called {\it perfect} if $f$ is a closed and compact map.} preserve strongly monotonically monolithic spaces, then it is easy to see that closed sequence-covering maps preserve strongly monotonically monolithity by Lemma~\ref{l10}. So we have the following two questions.
\begin{question} Do closed sequence-covering maps (or almost-open closed maps) preserve strongly monotonically monolithity? \end{question} \begin{question} Do perfect maps preserve strongly monotonically monolithity? \end{question} In \cite{TV}, V. V. Tkachuk has also proved that open and separable maps preserve strongly monotonically monolithity. However, we have the following result. \begin{theorem} Let $f:X\rightarrow Y$ be an open and closed map, where $X$ is a strongly monotonically monolithic space. Then $Y$ is a strongly monotonically monolithic space. \end{theorem} \begin{proof} From \cite[Theorem 3.4]{LC} it follows that $f$ is a sequence-covering map. Therefore, $\partial f^{-1}(y)$ is compact for each point $y\in Y$ by Lemma~\ref{l10}. Then $\partial f^{-1}(y)$ is metrizable by \cite[Theorem 2.7]{TV}, and hence it is separable, for each point $y\in Y$. For each point $y\in Y$, if $y$ is a non-isolated point, let $A_{y}$ be a countable dense set in the subspace $\partial f^{-1}(y)$; if $y$ is an isolated point, then we choose a point $x_{y}\in f^{-1}(y)$ and let $A_{y}=\{x_{y}\}$. Let $B\subset Y$. Put $A_{B}=\cup\{A_{y}: y\in B\}$ and $\mathcal{N}(B)=\{f(W): W\in\mathcal{O}(A_{B})\}$. It is easy to see that $\mathcal{N}(B)$ satisfies the conditions (a)-(c) of the definition of strongly monotonically monolithity. Therefore, we only need to prove that $\mathcal{N}(B)$ is an external base for $\overline{B}$. For each point $y\in\overline{B}$, let $U$ be an open subset of $Y$ with $y\in U$. Case 1: $y$ is a non-isolated point in $Y$. Since $f$ is an open map, $\emptyset\neq f^{-1}(y)\subset \overline{f^{-1}(B)}$, and hence $\partial f^{-1}(y)\subset \overline{f^{-1}(B)}$. Take any point $x\in \partial f^{-1}(y)$. Then $x\in \overline{A_{B}}$. Therefore, there exists a $V\in \mathcal{O}(A_{B})$ such that $x\in V\subset f^{-1}(U)$. So $W=f(V)\in\mathcal{N}(B)$ and $y\in W\subset U$. Case 2: $y$ is an isolated point in $Y$.
It is easy to see that $\{y\}\in \mathcal{N}(B)$, and therefore, $y\in \{y\}\subset U$. In a word, $\mathcal{N}(B)$ is an external base for $\overline{B}$. \end{proof} Let $\mathcal{B}=\{B_{\alpha}:\alpha\in H\}$ be a family of subsets of a space $X$. $\mathcal{B}$ is {\it point-discrete} (or {\it weakly hereditarily closure-preserving}) if $\{x_{\alpha}:\alpha\in H\}$ is closed discrete in $X$, whenever $x_{\alpha}\in B_{\alpha}$ for each $\alpha\in H$. It is well-known that metrizability, $g$-metrizability, $\aleph$-spaces, and spaces with a point-countable base are preserved by closed sequence-covering maps, see \cite{LC, YP}. Next, we shall consider spaces with a $\sigma$-point-discrete $k$-network, and shall prove that spaces with a $\sigma$-point-discrete $k$-network are preserved by closed sequence-covering maps. First, we give some technical lemmas. \begin{lemma}\label{l7} Let $X$ be an $\aleph_{1}$-compact space\footnote{A space $X$ is called {\it $\aleph_{1}$-compact} if each subset of $X$ with a cardinality of $\aleph_{1}$ has a cluster point.} with a $\sigma$-point-discrete network. Then $X$ has a countable network. \end{lemma} \begin{proof} Let $\mathcal{P}=\bigcup_{n\in\mathbb{N}}\mathcal{P}_{n}$ be a $\sigma$-point-discrete network for $X$, where $\mathcal{P}_{n}$ is a point-discrete family for each $n\in\mathbb{N}$. For each $n\in\mathbb{N}$, let $$B_{n}=\{x\in X: |(\mathcal{P}_{n})_{x}|>\omega\}.$$ Claim 1: $\{P\setminus B_{n}: P\in\mathcal{P}_{n}\}$ is countable. Suppose not; then there exist an uncountable subset $\{P_{\alpha}: \alpha<\omega_{1}\}\subset \mathcal{P}_{n}$ and $\{x_{\alpha}: \alpha<\omega_{1}\}\subset X$ such that $x_{\alpha}\in P_{\alpha}\setminus B_{n}$. Since $\mathcal{P}_{n}$ is a point-discrete family and $X$ is $\aleph_{1}$-compact, $\{x_{\alpha}: \alpha<\omega_{1}\}$ is countable. Without loss of generality, we can assume that there exists $x\in X\setminus B_{n}$ such that each $x_{\alpha}=x$.
Therefore, $x\in B_{n}$, a contradiction. Claim 2: For each $n\in\mathbb{N}$, $B_{n}$ is a countable and closed discrete subspace of $X$. Let $Z\subset B_{n}$ with $|Z|\leq\omega_{1}$, and write $Z=\{x_{\alpha}: \alpha\in\bigwedge\}$. By the definition of $B_{n}$ and the Well-ordering Theorem, it is easy to obtain by transfinite induction a family $\{P_{\alpha}: \alpha\in\bigwedge\}\subset \mathcal{P}_{n}$ such that $x_{\alpha}\in P_{\alpha}$ and $P_{\alpha}\neq P_{\beta}$ for each $\alpha\neq\beta$. Therefore, $Z$ is a countable and closed discrete subspace of $X$. Hence $B_{n}$ is a countable and closed discrete subspace. For each $n\in\mathbb{N}$, let $\mathcal{P}_{n}^{\prime}=\{P\setminus B_{n}: P\in\mathcal{P}_{n}\}\cup\{\{x\}: x\in B_{n}\}$. Then $\mathcal{P}_{n}^{\prime}$ is a countable family. Obviously, $\bigcup_{n\in\mathbb{N}}\mathcal{P}_{n}^{\prime}$ is a countable network for $X$. \end{proof} The proof of the following lemma is an easy exercise. \begin{lemma}\label{l8} Let $\{F_{\alpha}\}_{\alpha\in A}$ be a point-discrete family in $X$ and let $K\subset\bigcup_{\alpha\in A}F_{\alpha}$ be a countably compact subset. Then there exists a finite family $\mathcal{F}\subset\{F_{\alpha}\}_{\alpha\in A}$ such that $K\subset \cup\mathcal{F}$. \end{lemma} \begin{lemma}\label{t4} Let $\mathcal{P}$ be a family of subsets of a space $X$.
Then $\mathcal{P}$ is a $\sigma$-point-discrete $wcs^{\ast}$-network\footnote{A family $\mathcal{P}$ of $X$ is called a {\it $wcs^{\ast}$-network}\cite{LT} of $X$, if whenever a sequence $\{x_{n}\}$ converges to $x\in U$ with $U$ open in $X$, there are a $P\in\mathcal{P}$ and a subsequence $\{x_{n_{i}}\}$ of $\{x_{n}\}$ such that $x_{n_{i}}\in P\subset U$ for each $i\in \mathbb{N}$.} for $X$ if and only if $\mathcal{P}$ is a $\sigma$-point-discrete $k$-network\footnote{A family $\mathcal{P}$ of $X$ is called a {\it $k$-network}\cite{PO} if whenever $K$ is a compact subset of $X$ and $K\subset U$ with $U$ open in $X$, there is a finite subfamily $\mathcal{P}^{\prime}\subset \mathcal{P}$ such that $K\subset \cup\mathcal{P}^{\prime}\subset U$.} for $X$. \end{lemma} \begin{proof} Sufficiency. It is obvious. Hence we only need to prove the necessity. Necessity. Let $\mathcal{P}=\bigcup_{n\in\mathbb{N}}\mathcal{P}_{n}$ be a $\sigma$-point-discrete $wcs^{\ast}$-network, where $\mathcal{P}_{n}$ is a point-discrete family for each $n\in\mathbb{N}$. Suppose that $K$ is compact and $K\subset U$ with $U$ open in $X$. For each $n\in \mathbb{N}$, let $$\mathcal{P}_{n}^{\prime}=\{P\in\mathcal{P}_{n}: P\subset U\}, F_{n}=\cup\mathcal{P}_{n}^{\prime}.$$ Then there exists $m\in \mathbb{N}$ such that $K\subset\bigcup_{k\leq m}F_{k}$. Suppose not; then there is a sequence $\{x_{n}\}\subset K$ with $x_{n}\in K\setminus\bigcup_{i\leq n}F_{i}$. By Lemma~\ref{l7}, it is easy to see that $K$ is metrizable. Therefore, $K$ is sequentially compact. It follows that there exists a convergent subsequence of $\{x_{n}\}$. Without loss of generality, we assume that $x_{n}\rightarrow x$. Since $\mathcal{P}$ is a $wcs^{\ast}$-network, there exist a $P\in \mathcal{P}$ and a subsequence $\{x_{n_{i}}\}$ of $\{x_{n}\}$ such that $\{x_{n_{i}}: i\in \mathbb{N}\}\subset P\subset U$. Therefore, there exists $l\in \mathbb{N}$ such that $P\in \mathcal{P}_{l}^{\prime}$.
Choose $i> l$; since $P\subset F_{l}$, we have $x_{n_{i}}\in F_{l}$, a contradiction. Hence there exists $m\in \mathbb{N}$ such that $K\subset\bigcup_{k\leq m}F_{k}$. By Lemma~\ref{l8}, there is a finite family $\mathcal{P}^{\prime\prime}\subset\bigcup_{i\leq m}\mathcal{P}_{i}^{\prime}$ such that $K\subset \cup\mathcal{P}^{\prime\prime}\subset U$. Therefore, $\mathcal{P}$ is a $k$-network. \end{proof} \begin{theorem} Closed sequence-covering maps preserve spaces with a $\sigma$-point-discrete $k$-network. \end{theorem} \begin{proof} It is easy to see that closed sequence-covering maps preserve spaces with a $\sigma$-point-discrete $wcs^{\ast}$-network. Hence closed sequence-covering maps preserve spaces with a $\sigma$-point-discrete $k$-network by Lemma~\ref{t4}. \end{proof} \begin{question} Do closed maps preserve spaces with a $\sigma$-point-discrete $k$-network? \end{question} \vskip0.9cm
\section*{Introduction} \label{Introduction} Since the launch of Silk Road, the first modern dark web marketplace (DWM), in 2011~\cite{christin2013traveling} millions of buyers and sellers have traded in the dark web. DWMs have become popular because their users can anonymously access them through ad-hoc browsers, such as The Onion Router (Tor)~\cite{dingledine2004tor}, and trade goods using cryptocurrencies, such as Bitcoin~\cite{nakamoto2008Bitcoin}. They offer a variety of illicit goods including drugs, firearms, credit card dumps, and fake IDs~\cite{GwernDarkNets}. DWMs could represent a threat to the regular economy and public health. For instance, during the COVID-19 pandemic, DWMs sold COVID-19 related goods (e.g., masks and COVID-19 tests) that were in shortage in regulated marketplaces as well as unapproved vaccines and fake treatments~\cite{broadhurstavailability, bracci2020covid, bracci2021covid}. Law enforcement agencies have therefore targeted DWMs and users trading on them, performing dozens of arrests and seizing millions of US dollars worth of Bitcoin~\cite{Operation_Onymous, FBIAlphabay, BillionFedsSilkRoad}. Despite police raids and unexpected closures, DWM trading volume has been steadily increasing and exceeded \$1.5 billion for the first time in 2020~\cite{Chainalysis_crypto_crime_report_2021}. DWM users display complex trading patterns within the marketplace environment. For example, users migrate to alternative DWMs when a DWM that they trade on closes~\cite{elbahrawy2020collective, hiramoto2020measuring}. Such migration of users is aided by communication via online forums and chats on the dark web~\cite{buxton2015rise, maddox2016constructive}. However, little is known about how DWM users trade and transact \textit{outside} the DWMs.
On the one hand, some recent works have shown that a significant number of DWM users trade drugs and other illicit goods using social media platforms, such as Facebook, Telegram, and Reddit~\cite{oksanen2020illicit, DarknetLive_telegram, sung2021prevalence, childs2021beyond, kwon2021dark}. Moreover, several qualitative, interview-based studies have shown that DWM users form direct trading relationships with other users, starting user-to-user (U2U) pairs that bypass the intermediary role of DWMs~\cite{barratt2016if, munksgaard2020and}. Past research has also found that sellers on regulated online marketplaces and social media platforms may decide to use intermediaries, such as Facebook groups or Instagram, to find new customers, and may start direct U2U trading with potential buyers~\cite{bakken2019sellers}. In this paper, we look closely at patterns of U2U trading relationships among DWM users. \begin{figure}[H] \centering \begin{subfigure}[H]{0.45\textwidth} \centering \vspace{1cm} \includegraphics[width=0.7\textwidth]{DWMs_ego_network_selected.pdf} \vspace{1cm} \caption*{(a) Marketplace ego network} \end{subfigure} \begin{subfigure}[H]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{DWMs_full_network_selected.pdf} \caption*{(b) Full network} \end{subfigure} \caption{\textbf{Ego and full networks.} (a) Schematic representation of an ego network surrounding a dark web marketplace (``DWM'', in red). The DWM interacts with its users (``U'', in black), which make user-to-user (U2U) pairs, represented with arrows and their respective users. (b) Multiple ego networks may be aggregated to form the full network.} \label{DWM_ego_full_networks} \end{figure} The starting point for this paper is identifying U2U networks around DWMs. We analyse 40 DWMs for a 10-year time period spanning from June 18, 2011 to January 31, 2021.
Our dataset covers all major DWMs that have ever existed, as identified by the European Monitoring Centre, Europol, the World Health Organization, and independent researchers~\cite{european2017drugs, world2019world, gwern_live_markets}. Our analysis focuses on Bitcoin -- the most popular cryptocurrency on DWMs~\cite{lee2019cybercriminal, foley2019sex} as well as in the regulated economy~\cite{baur2015cryptocurrencies, saiedi2020global}. We focus on two kinds of transactions, occurring (i) between the user and a DWM and (ii) between two users of the same DWM. The result is 40 distinct marketplace ego networks containing user-DWM and U2U transactions, whose typical structure is depicted in Figure~\ref{DWM_ego_full_networks}(a). In each network, links are directed and the arrows point at the receiver of Bitcoin. Since users often migrate from one DWM to another~\cite{elbahrawy2020collective} and become users of multiple DWMs, the 40 ego networks are not isolated, and can be combined to form one full network, as shown in Figure~\ref{DWM_ego_full_networks}(b). Previous analyses of U2U trading relationships around DWMs include only two studies~\cite{barratt2016if, munksgaard2020and} based on unstructured~\cite{barratt2016if} or semi-structured~\cite{munksgaard2020and} interviews of 17 users of Silk Road and 13 DWM sellers, respectively. Here, we dramatically extend previous work by exploring the collective emergence and structure of U2U pairs. First, we observe that the U2U network, formed by all transactions between pairs of users, has a larger trading volume than DWMs themselves. We then identify stable U2U trading relationships, which represent a subset of persistent pairs in our dataset~\cite{nadini2020detecting, nadini2020reconstructing} forming the \emph{backbone} of the U2U network. We find that 137,667 (i.e., 1.7\% out of 7.85 million total) pairs are stable, generating a total trading volume of \$1.5 billion (i.e., 5\% out of \$30 billion total volume).
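Stable pairs are extracted with the evolving activity-driven model~\cite{nadini2020detecting}. As a much simpler illustration of the underlying notion of persistence (and not our actual detection method), one could flag pairs that transact in at least a given number of distinct months; the data and threshold below are hypothetical:

```python
from collections import defaultdict

def persistent_pairs(transactions, min_months=3):
    """Flag user pairs active in at least `min_months` distinct months.

    `transactions` is an iterable of (sender, receiver, year, month) tuples.
    This is a simplified persistence criterion, not the statistically
    principled evolving activity-driven model used in the paper.
    """
    months_active = defaultdict(set)
    for sender, receiver, year, month in transactions:
        months_active[(sender, receiver)].add((year, month))
    return {pair for pair, months in months_active.items()
            if len(months) >= min_months}

# Toy example: pair (A, B) trades in three distinct months, (C, D) in one.
txs = [("A", "B", 2020, 1), ("A", "B", 2020, 2), ("A", "B", 2020, 5),
       ("C", "D", 2020, 3), ("C", "D", 2020, 3)]
print(persistent_pairs(txs))  # -> {('A', 'B')}
```

Raising or lowering `min_months` trades off precision against recall of "stable" relationships, which is why a statistically principled model is preferable on real data.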
We then explore the behaviour of users forming stable U2U pairs. We reveal that stable U2U pairs play a crucial role for marketplaces by spending significantly more time and generating far greater transaction volume with DWMs than other users. By analysing the temporal evolution of stable pairs, we unveil that DWMs acted as meeting points for 37,192 users (out of around 16 million), whose trading volume is estimated to be \$417 million. Importantly, these newly formed pairs persist in time and transact for several months even after the closure of the DWM that spurred their formation. Finally, we observe that COVID-19 only had a temporary impact on the evolution of stable U2U pairs, which continued to increase their trading volume throughout 2020. \section*{Results} \label{Results} \subsection*{Large number of U2U transactions} \paragraph{Ego networks.} We start our analysis by measuring the extent of the U2U network around each DWM. The percentages of users forming U2U pairs vary across DWMs, with a median value of 38\% (min 23\%, max 68\%). This variability is illustrated in Figure~\ref{Importance_U2U_transactions}(a), which shows that the number of users with U2U pairs obeys an almost linear relationship with the number of users interacting with a DWM, with an exponent equal to 1.06 and $R^2 = 0.969$. The total trading volume users sent to the marketplace is obviously equivalent to the one they receive from it (two-sided Wilcoxon test~\cite{wilcoxon1992individual}: $W=330$, $p=0.282$). Importantly, the total trading volume users sent to a DWM (and consequently the one that they receive from it) is always less than the one exchanged through U2U transactions, as shown in Figure~\ref{Importance_U2U_transactions}(b).
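A power-law relationship $y = a x^{b}$ of the kind reported above is commonly fitted by linear regression in log-log space. A minimal sketch on synthetic data (the values and variable names are illustrative, not taken from our pipeline):

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = a * x**b by least squares on log-transformed data.

    Returns the exponent b, the prefactor a, and the R^2 of the log-log fit.
    """
    lx, ly = np.log10(x), np.log10(y)
    b, log_a = np.polyfit(lx, ly, 1)  # slope = exponent, intercept = log10(a)
    residuals = ly - (b * lx + log_a)
    r2 = 1 - residuals.var() / ly.var()
    return b, 10 ** log_a, r2

# Synthetic example: an exact power law with exponent 1.06
x = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
y = 0.4 * x ** 1.06
b, a, r2 = fit_power_law(x, y)
print(round(b, 2), round(r2, 3))  # -> 1.06 1.0
```

On real, noisy counts the recovered exponent carries uncertainty, so the quality of the fit should always be reported alongside it, as done with $R^2$ above.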
\begin{figure}[H] \centering \includegraphics[width=16cm]{Importance_U2U_transactions_time.pdf} \caption{\textbf{User-DWM and U2U transactions.} (a) Total number of users interacting with a DWM against the total number of them forming U2U transactions. The dotted line corresponds to a fitted power-law function. (b) Trading volume in dollars sent to a DWM compared with the total trading volume in its surrounding U2U transactions. The dashed line is the bisector, which allows an easy comparison of the two trading volumes. (c) Total monthly trading volume sent to all DWMs and exchanged in all unique U2U pairs. We do not include the trading volume received from DWMs because it is equivalent to the volume sent to DWMs.} \label{Importance_U2U_transactions} \end{figure} \paragraph{Full network.} Similar results hold for the full network, confirming that the formation of U2U pairs is a pervasive phenomenon around DWMs. The total trading volume users sent to DWMs is \$3.8 billion, and the volume received from DWMs is \$3.7 billion, while the volume exchanged through U2U pairs reaches \$30 billion. In Figure~\ref{Distributions_pairs_dataset}, we illustrate the number of transactions, trading volume, and lifespan of U2U pairs. In all cases we observe familiar fat-tailed distributions. We then consider the temporal evolution of transactions. We look at the trading volume over time in Figure~\ref{Importance_U2U_transactions}(c), where we observe that U2U transactions have consistently involved greater monthly volume than the volume sent to all DWMs since 2011. This underlines the economic importance of U2U transactions in the Bitcoin ecosystem relative to DWMs. \subsection*{Behaviour of the U2U network} Henceforth, we analyse users by focusing on the following groups: users who do not form stable U2U pairs, and users who form stable U2U pairs; the latter are further divided into users who met outside DWMs and users who met inside DWMs (see the nomenclature in Table~\ref{Nomenclature}). 
We start by identifying stable U2U pairs, i.e., persistent pairs of the U2U network. To this end, we use the evolving activity-driven model~\cite{nadini2020detecting} to extract them in a statistically-principled way (see Methods). We find 137,667 stable U2U pairs, formed by 106,648 users and generating a trading volume equal to \$1.5 billion. Stable pairs produce five times more transactions per pair than non-stable pairs (two-sided Mann-Whitney-U test~\cite{mann1947test}: MNU$=4.58 \cdot 10^9$, $p<0.0001$), corresponding to a 5.34 times larger trading volume (MNU$= 317 \cdot 10^9$, $p<0.0001$), see Figure~\ref{Number_pairs_stable_not}. Stable pairs, despite representing less than 2\% of the total number of U2U pairs, generate a disproportionate amount of trading volume. \begin{figure}[H] \centering \includegraphics[width=12cm]{Role_users_in_DWMs_adjusted.pdf} \caption{\textbf{Role of users forming stable U2U pairs.} (Main) PDFs of the trading volume that users exchange with any DWM. (Inset) PDFs of the time spent by users on any DWM. These distributions are shown for each of the 40 DWMs under consideration in Figures~\ref{Role_users_in_DWMs_time_spent} and~\ref{Role_users_trading_volume}, respectively. Vertical lines represent median values of the respective distributions.} \label{Role_users} \end{figure} The high activity of users forming stable U2U pairs is not limited to the U2U network, as they are also the most active in trading with DWMs. Users in stable U2U pairs spend a median of 41 days on DWMs versus a median of only one day for users without stable pairs. The two resulting distributions are significantly different (two-sided Kolmogorov-Smirnov test~\cite{massey1951kolmogorov}: KS $= 0.673$, $p<0.0001$), see the inset of Figure~\ref{Role_users}. When we look at the trading volume with DWMs, we find qualitatively similar results. 
Users in stable U2U pairs transact a median of \$400 with DWMs, while other users transact only \$56. The two resulting distributions are significantly different (KS $= 0.438$, $p<0.0001$), see Figure~\ref{Role_users}. These results hold not only for the full network but for every DWM in our data, see Figures~\ref{Role_users_in_DWMs_time_spent} and~\ref{Role_users_trading_volume}. \subsection*{U2U network evolution} \paragraph{Formation of U2U stable pairs.} \begin{table}[H] \centering \begin{tabular}{ccc|ccc|ccc} \multicolumn{3}{c|}{\thead{Users who met outside the DWM}} & \multicolumn{3}{c|}{\thead{Users who met outside the DWM}} & \multicolumn{3}{c}{\thead{Users who met inside the DWM}} \\ \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_alpha1} \caption*{\thead{$t_1$}} \end{subfigure} & \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_beta2} \caption*{\thead{$t_2$}} \end{subfigure} & \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_gamma3} \caption*{\thead{$t_3$}} \end{subfigure} & \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_beta1} \caption*{\thead{$t_1$}} \end{subfigure} & \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_beta2} \caption*{\thead{$t_2$}} \end{subfigure} & \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_gamma3} \caption*{\thead{$t_3$}} \end{subfigure} & \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_gamma1} \caption*{\thead{$t_1$}} \end{subfigure} & \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_gamma2} \caption*{\thead{$t_2$}} \end{subfigure} & \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_gamma3} \caption*{\thead{$t_3$}} \end{subfigure} \end{tabular} \caption{\textbf{Formation mechanism of 
stable U2U pairs.} We compare the time at which the first transaction between a pair of users occurs with the times at which these users first interact with the same DWM. Each row in the figure indicates a possible temporal sequence, which we classify into two groups: users who met outside the DWM (first two columns) and users who met inside the DWM (last column).} \label{Motifs_Triads_table11} \end{table} Having mapped the behaviour of stable pairs, we now consider their temporal evolution. More specifically, we ask: How do stable pairs form? Do DWMs spur their creation? One possible hypothesis is that users meet for the first time while active on a DWM, i.e., after they have both traded with that DWM, see Table~\ref{Motifs_Triads_table11} and the nomenclature in Table~\ref{Nomenclature}. This can be considered a plausible, and conservative, proxy for users who met inside a DWM (see Methods). A total of 37,129 users have met at least one other user inside a DWM. Their trading volume is about \$417 million, and the percentage of users who met inside a DWM is positively correlated with the trading volume sent to DWMs (Spearman~\cite{spearman1961proof}: $C=0.805$, $p<0.0001$), see Figure~\ref{Where_users_meet}, meaning that large DWMs are more likely than smaller ones to favour encounters between users. Importantly, users who met inside a DWM transact more than those who met outside. In particular, users who met inside a DWM trade a median of \$2,212 between themselves, almost twice the \$1,379 traded by users who met outside the DWM (MNU$= 1.863 \cdot 10^9$, $p<0.0001$). Moreover, users who met inside a DWM tend to transact over significantly longer periods, with a median of 61 days, than users who met outside, with a median of 50 days (MNU $= 2.099 \cdot 10^9$, $p<0.0001$). 
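Group comparisons like the ones above rely on the two-sided Mann-Whitney-U test, which uses only ranks and is therefore robust to the heavy tails of the volume distributions. Below is a self-contained sketch on simulated log-normal volumes; the medians mirror the text, but the data, the spread, and the normal approximation to the null distribution are our simplifications:

```python
import numpy as np
from math import erfc, sqrt

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test using the normal approximation,
    adequate for large samples. No tie correction: the synthetic
    volumes below are continuous, so ties have probability zero."""
    n1, n2 = len(x), len(y)
    ranks = np.concatenate([x, y]).argsort().argsort() + 1.0
    u = ranks[:n1].sum() - n1 * (n1 + 1) / 2      # U statistic for x
    mu = n1 * n2 / 2
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    return u, erfc(abs(z) / sqrt(2))              # two-sided p-value

rng = np.random.default_rng(42)
# Log-normal stand-ins for the fat-tailed per-pair volumes; the medians
# (about $2,212 vs $1,379) follow the text, the spread is invented.
inside = rng.lognormal(np.log(2212), 1.5, 5000)
outside = rng.lognormal(np.log(1379), 1.5, 5000)

u_stat, p_value = mann_whitney_u(inside, outside)
print(f"U = {u_stat:.4g}, p = {p_value:.3g}")
```

Because the test compares ranks rather than means, a handful of very large transactions cannot by themselves produce a significant difference.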
\begin{figure}[H] \centering \includegraphics[width=12cm]{U2U_after_DWM_closure_meet_inside_outside_adjusted.pdf} \caption{\textbf{Resilience of stable U2U pairs after DWM closure.} Trading volume of U2U pairs surrounding active DWMs. (Main) U2U pairs who met inside a DWM. (Inset) U2U pairs who met outside DWMs. Curves indicate the median value while bands represent the 95\% confidence interval. Day zero corresponds to the day when the market closed. Negative and positive numbers indicate the days before and after the closure, respectively. Only the 33 closed DWMs are considered in the analysis.} \label{After_DWMs_closure_meet_inside_outside} \end{figure} \paragraph{Resilience of U2U stable pairs.} Thus far, we have shown that users involved in stable trading relationships are also very active on DWMs, where they may meet new trading partners. But are DWMs and the U2U network truly interdependent? In particular, do stable pairs need the DWMs to survive? To answer these questions, we look at market closures, previously investigated to show how active users migrate to other existing DWMs~\cite{elbahrawy2020collective}. Our dataset includes 33 closure events, which we study independently from one another by considering the evolution of the respective 33 marketplace ego networks. We find that non-stable U2U pairs sharply stop interacting immediately after DWM closure; their existence is therefore highly sensitive to the presence of the DWM. On the other hand, the trading volume of stable U2U pairs is only marginally affected by the disappearance of the DWM. As a result, while prior to DWM closure non-stable U2U pairs generate an overall trading volume that is 10 times higher than that of stable U2U pairs (since non-stable pairs are far more prevalent), within a few weeks after DWM closure the pattern is reversed: stable U2U pairs generate more trading volume than non-stable U2U pairs. 
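The closure analysis above is, in effect, an event study: each market's daily U2U trading volume is re-indexed so that day zero is its closure date, and the aligned series are then aggregated across markets. A minimal sketch, with synthetic series standing in for the real data:

```python
import numpy as np

def align_to_closure(series_by_market, closure_days, window=90):
    """Align each market's daily volume series on its closure day
    (day 0) and return the across-market median per relative day."""
    segments = [vol[t0 - window : t0 + window + 1]
                for vol, t0 in zip(series_by_market, closure_days)]
    return np.median(np.vstack(segments), axis=0)

# Synthetic daily volumes for three hypothetical markets: stable pairs
# keep trading after closure, non-stable pairs stop almost immediately.
rng = np.random.default_rng(1)
days, closures = 400, [200, 250, 300]
stable = [rng.uniform(0.8, 1.2, days) for _ in closures]
non_stable = []
for t0 in closures:
    v = rng.uniform(8.0, 12.0, days)
    v[t0:] *= np.exp(-np.arange(days - t0) / 5.0)  # sharp post-closure decay
    non_stable.append(v)

med_stable = align_to_closure(stable, closures)
med_non_stable = align_to_closure(non_stable, closures)
# Before closure non-stable pairs dominate; well after closure they vanish.
print(med_non_stable[0] > med_stable[0], med_non_stable[-1] < med_stable[-1])
# prints: True True
```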
Indeed, trading patterns of stable pairs are not significantly influenced by DWM closures, see Figure~\ref{After_DWMs_closure_meet_inside_outside}. We have shown that the U2U network is resilient to short-lasting external shocks, namely the closure of a marketplace, and that it does not need the centralised structure of DWMs to survive. What about long-lasting systemic stress? To answer this question, we consider the impact that the COVID-19 pandemic has had on the evolution of stable U2U pairs. Previous studies reported that COVID-19 had a strong impact on DWMs, with reported delays and damage to the shipping infrastructure due to border closures~\cite{bergeron2020preliminary, Chainalysis_DWMs_growth_2020}. We start by investigating the number of new stable U2U pairs and their trading volume. The numbers of users in stable pairs who met inside DWMs and of those who met outside have both been growing over the last two years. In 2020, a total of 6,778 stable pairs of users met inside a DWM, corresponding to 192\% of the 2019 figure and 255\% of the 2018 figure, see Figure~\ref{Temporal_analysis}(a). Stable pairs of users who met inside a DWM traded a total of \$145 million in 2020, which corresponds to 252\% of the 2019 figure and 593\% of the 2018 figure, see Figure~\ref{Temporal_analysis}(b). We see similar trends for stable U2U pairs meeting outside any DWM. The impact of the COVID-19 pandemic has, however, had different phases, determined by the number and level of measures introduced around the world. For users in stable pairs who met either inside or outside DWMs, we find that during the first lockdowns in 2020 trading volume fell with respect to January of the same year, suggesting that they were negatively impacted by COVID-19 restrictions. After that, trading volume sharply increased over the rest of 2020, see Figure~\ref{Trading_volume_COVID}. 
The number of stable U2U pairs created each day was, however, steady over time during 2020, even though more U2U pairs were created compared to the same period of 2019, see Figure~\ref{new_U2U_pairs_COVID19}. Overall, stable U2U pairs have shown resilience to the systemic stress caused by COVID-19, suggesting, once again, that these trading relationships are fundamentally independent from the underlying DWMs. \section*{Discussion and Conclusion} \label{Discussions_conclusion} In this paper, we revealed the prevalence and structure of a large network of direct transactions between users who trade on the same DWM. We showed that some of the links of this user-to-user (U2U) network are ephemeral while others persist in time. We highlighted that a significant fraction of stable U2U pairs formed while their members were trading with the same DWM, suggesting that DWMs may play a role in promoting the formation of stable U2U pairs. We showed that the relationships between users forming stable pairs persist even after the DWM shuts down and are not significantly affected by COVID-19, suggesting overall resilience of stable pairs to external shocks. Our study has several limitations. In particular, our dataset does not include any attributes related to either users or their Bitcoin transactions, such as whether a transaction represents an actual purchase or not. Moreover, we do not have information about which users trade with other users on the same DWM. Finally, our coverage of DWMs, albeit extensive, may lack information on other DWMs where users could have met. Our work has several policy implications. Our findings suggest that DWMs are much more than mere marketplaces~\cite{gupta2021dark}. DWMs are also communication platforms, where users can meet and chat with other users either directly -- using Whatsapp, phone, or email -- or through specialised forums. 
These direct interactions may favour the emergence of decentralised trade networks that bypass the intermediary role of the marketplace, similar to what is currently happening on Facebook, Telegram, and Reddit~\cite{oksanen2020illicit, bakken2019sellers, DarknetLive_telegram, sung2021prevalence, childs2021beyond, kwon2021dark}, where users post products, negotiate item prices, and then trade directly without an intermediary. We estimate that the trading volume of U2U pairs meeting on DWMs is increasing, reaching a peak in 2020 (during the COVID-19 pandemic). Indeed, our results support recent recommendations to pay attention to single sellers rather than entire DWMs~\cite{Cryptomarkets2020}. Law enforcement agencies, however, have only recently started targeting single sellers. The first such operation took place in 2018 and successfully led to the arrest of 35 sellers~\cite{FirstLargeVendorArrest2018}, while the largest operation to date occurred in 2020 and led to 179 arrests in six different countries~\cite{Europol2020}. Our study indicates that a much higher number of highly active DWM users, on the order of tens of thousands, is involved in transactions with other DWM users. Overall, our study provides a first step towards the understanding of how users of DWMs collectively behave outside organised marketplaces. We believe that our results might suggest to researchers, practitioners, and law enforcement agencies that a shift in attention from the evolution of DWMs to the behaviour of their users might facilitate the design of more appropriate strategies to counteract the online trading of illicit goods. \section*{Competing interests} The authors declare that they have no competing interests. \section*{Author's contributions} M.N., A.Br., A.E., P.G., A.T., and A.Ba. designed the research; A.E. and P.G. acquired, prepared, and cleansed the data. M.N. and A.Br. performed the measurements. M.N., A.Br., A.E., P.G., A.T., and A.Ba. analysed the data. 
M.N., A.Br., P.G., A.T., and A.Ba. wrote the manuscript. M.N., A.Br., A.E., P.G., A.T., and A.Ba. discussed the results and commented on the manuscript. \section*{Acknowledgements} M.N., A.Br., A.T., and A.Ba. were supported by ESRC as part of UK Research and Innovation’s rapid response to COVID-19, through grant ES/V00400X/1. \section*{Data availability} All data needed to evaluate the conclusions in the paper are present in the paper. Additional data related to this paper may be requested from the authors. \section*{Data and methods} \label{Data_methods} Additional considerations on our data and methods are available in Section~\ref{Data_methods_SI}. \paragraph{Data preprocessing.} We consider only a subset of the transactions in our dataset, namely those made by the 40 entities representing the 40 DWMs under consideration, which directly interact with more than 16 million other entities, who are the users of these DWMs. Users interacting with other users form U2U pairs, and we include them in our dataset. We instead discard single Bitcoin transactions below \$0.01 or above \$100,000, which are unlikely to represent real purchases; this minimises false positives. Such transactions may be attributed to a residual amount of Bitcoin in an address or to transactions between two business partners where no good is actually given in return, respectively. The analysed dataset includes about 31 million transactions among more than 16 million users. Finally, we note that the same user can interact with multiple DWMs~\cite{elbahrawy2020collective, hiramoto2020measuring}. By definition, users that interact among themselves form U2U transactions. If a pair of users interacts with multiple DWMs, their U2U transactions are included in all the corresponding ego networks and counted multiple times. Therefore, the simple sum of all U2U transactions across DWMs is larger than the number of unique U2U transactions. 
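The multiple-counting issue can be made concrete with a toy example (user identifiers and amounts are hypothetical, and this is not our actual pipeline):

```python
# Toy illustration of the multiple-counting issue: a U2U transaction
# between users of two DWMs appears in both ego networks, so summing
# per-network counts overstates the number of unique transactions.
ego_networks = {
    "DWM_A": [("u1", "u2", 100.0), ("u1", "u3", 50.0)],
    "DWM_B": [("u1", "u2", 100.0), ("u4", "u5", 20.0)],  # (u1, u2) repeated
}

naive_total = sum(len(txs) for txs in ego_networks.values())
unique_txs = {tx for txs in ego_networks.values() for tx in txs}

# Preprocessing filter: discard transactions below $0.01 or above $100,000.
valid_txs = [tx for tx in unique_txs if 0.01 <= tx[2] <= 100_000]

print(naive_total, len(unique_txs), len(valid_txs))   # prints: 4 3 3
```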
We count a total of 11 million transactions around all DWMs, which goes down to 9.9 million when multiple counting is avoided. Similarly, the simple sum of the single trading volumes surrounding all DWMs amounts to \$33 billion, while the overall trading volume in all unique U2U pairs is \$30 billion. Among the 40 large DWMs under consideration, 17 participate in at least one transaction in either 2020 or 2021, while the remaining 23 closed before 2020. Notably, our dataset includes Silk Road (the first modern DWM)~\cite{christin2013traveling}, Alphabay (once the leading DWM)~\cite{van2016drugs}, and Hydra (currently the largest DWM in Russia)~\cite{elbahrawy2020collective}. Other general statistics about our dataset can be found in Section~\ref{general_statistics_label}. \begin{figure}[H] \centering \includegraphics[width=0.7\textwidth]{U2U_network_selected.pdf} \caption{\textbf{U2U network.} The U2U network is formed by the entire set of interacting users (black and gray arrows with their respective users). Using the evolving activity-driven model~\cite{nadini2020detecting}, U2U pairs are divided into either stable (black arrows and respective users) or non-stable (gray arrows and respective users).} \label{Backbone_U2U_network} \end{figure} \paragraph{Detection of the U2U network.} The detection of stable U2U pairs in the full network is performed using the evolving activity-driven model~\cite{nadini2020detecting}, which introduced a statistically-principled methodology to detect the network backbone against what is expected from a proper null model. If a U2U pair occurs significantly more often than expected under the null model, it is labeled as stable, otherwise as non-stable. The evolving activity-driven model is an appropriate methodology for large temporal networks~\cite{nadini2020reconstructing} and is implemented in the Python 3 pip library TemporalBackbone~\cite{TemporalBackbone_pip2021}, for which default parameter values have been used. 
As input, we considered the full network, comprising transactions from/to DWMs and U2U transactions between users (see Section~\ref{Detecting_trading_partners}). \paragraph{Users who met inside a DWM.} We determine whether the users in a U2U pair met while active on a DWM by looking at the time of their first U2U transaction. This transaction can occur at three different moments in time. (i) At $t=t_1$, before both users interact with the same DWM (which happens at $t=t_2>t_1$ and $t=t_3>t_1$, respectively), as shown on the left hand side of Table~\ref{Motifs_Triads_table11}. (ii) At $t=t_2$, when only one user has interacted with a specific DWM and the other user will do so at a later time, as in the middle column of Table~\ref{Motifs_Triads_table11}. (iii) At $t=t_3$, when both users have interacted with the same DWM, as in the right column of Table~\ref{Motifs_Triads_table11}. We classify these three chains of events into two groups. One group includes all pairs that met outside any DWM, covering cases (i) and (ii), and the other group includes users that met inside a DWM, described by case (iii). This last case constitutes a conservative proxy for users who met inside a DWM. The proxy admits the possibility of false positives, since it includes users who may have actually met outside a DWM despite both having interacted with it, as well as false negatives, since it does not take into account users who met inside a DWM without both having transacted on it. The latter is arguably more significant, since it is possible that only one of the two users (the seller) has actually engaged in transactions with the DWM, while the other user, after seeing the seller’s profile on a DWM, has established a direct contact through Whatsapp, email, or phone. \paragraph{Nomenclature of all groups considered.} We provide the definition of all considered groups in Table~\ref{Nomenclature}. 
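Under this definition, the met-inside proxy reduces to a single timestamp comparison per pair: the first U2U transaction must come after both users' first interaction with the same DWM. A minimal sketch with hypothetical timestamps:

```python
def met_inside(first_u2u, first_dwm_u, first_dwm_v):
    """Return 'inside' if the first U2U transaction happens after BOTH
    users have interacted with the same DWM (case iii), and 'outside'
    otherwise (cases i and ii)."""
    return "inside" if first_u2u > max(first_dwm_u, first_dwm_v) else "outside"

# Hypothetical timestamps (in days), matching the three cases of the
# formation table: the users first interact with the DWM at days 20 and 30.
print(met_inside(10, 20, 30))   # case (i):   t_1 -> outside
print(met_inside(25, 20, 30))   # case (ii):  t_2 -> outside
print(met_inside(35, 20, 30))   # case (iii): t_3 -> inside
```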
\begin{table}[H] \centering \begin{adjustbox}{max width=\textwidth} \begin{tabular}{l|l|c} Group & Description & Number of users \\ \hline \thead{1. Users who do not form \\ stable U2U pairs} & \thead{Users who form no stable U2U pairs, \\ including users with no U2U pairs at all} & 15,875,877 \\ \thead{2. Users who form \\ stable U2U pairs} & \thead{ Users who form at least one stable U2U pair \\ as detected by our chosen methodology~\cite{nadini2020detecting} } & 106,648 \\ \thead{2a. Users who met \\ outside DWMs} & \thead{Users that form stable pairs and met at least \\ one other user following the chain \\ of events in Table~\ref{Motifs_Triads_table11} (first two columns)} & 88,828 \\ \thead{2b. Users who met \\ inside a DWM} & \thead{Users that form stable pairs and met at least \\ one other user following the chain \\ of events in Table~\ref{Motifs_Triads_table11} (last column)} & 37,129 \\ \end{tabular} \end{adjustbox} \caption{\textbf{Nomenclature.} Definitions of all groups into which users are divided based on their behaviour. The number of users in each group is given in the last column.} \label{Nomenclature} \end{table} \bibliographystyle{unsrt} \section*{Introduction} \label{Introduction} Since the launch of Silk Road, the first modern dark web marketplace (DWM), in 2011~\cite{christin2013traveling}, millions of buyers and sellers have traded on the dark web. DWMs have become popular because their users can anonymously access them through ad-hoc browsers, such as The Onion Router (Tor)~\cite{dingledine2004tor}, and trade goods using cryptocurrencies, such as Bitcoin~\cite{nakamoto2008Bitcoin}. They offer a variety of illicit goods including drugs, firearms, credit card dumps, and fake IDs~\cite{GwernDarkNets}. DWMs could represent a threat to the regular economy and public health. 
For instance, during the COVID-19 pandemic, DWMs sold COVID-19 related goods (e.g., masks and COVID-19 tests) that were in shortage in regulated marketplaces, as well as unapproved vaccines and fake treatments~\cite{broadhurstavailability, bracci2020covid, bracci2021covid}. Law enforcement agencies have therefore targeted DWMs and users trading on them, performing dozens of arrests and seizing millions of US dollars worth of Bitcoin~\cite{Operation_Onymous, FBIAlphabay, BillionFedsSilkRoad}. Despite police raids and unexpected closures, DWM trading volume has been steadily increasing and exceeded \$1.5 billion for the first time in 2020~\cite{Chainalysis_crypto_crime_report_2021}. DWM users display complex trading patterns within the marketplace environment. For example, users migrate to alternative DWMs when a DWM that they trade on closes~\cite{elbahrawy2020collective, hiramoto2020measuring}. Such migration of users is aided by communication via online forums and chats on the dark web~\cite{buxton2015rise, maddox2016constructive}. However, little is known about how DWM users trade and transact \textit{outside} the DWMs. On the one hand, some recent works have shown that a significant number of DWM users trade drugs and other illicit goods using social media platforms, such as Facebook, Telegram, and Reddit~\cite{oksanen2020illicit, DarknetLive_telegram, sung2021prevalence, childs2021beyond, kwon2021dark}. Moreover, several qualitative, interview-based studies have shown that DWM users form direct trading relationships with other users, starting user-to-user (U2U) pairs that bypass the intermediary role of DWMs~\cite{barratt2016if, munksgaard2020and}. Past research has also found that sellers on regulated online marketplaces and social media platforms may decide to use intermediaries, such as Facebook groups or Instagram, to find new customers, and may start direct U2U trading with potential buyers~\cite{bakken2019sellers}. 
In this paper, we look closely at patterns of U2U trading relationships among DWM users. \begin{figure}[H] \centering \begin{subfigure}[H]{0.45\textwidth} \centering \vspace{1cm} \includegraphics[width=0.7\textwidth]{DWMs_ego_network_selected.pdf} \vspace{1cm} \caption*{(a) Marketplace ego network} \end{subfigure} \begin{subfigure}[H]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{DWMs_full_network_selected.pdf} \caption*{(b) Full network} \end{subfigure} \caption{\textbf{Ego and full networks.} (a) Schematic representation of an ego network surrounding a dark web marketplace (``DWM'', in red). The DWM interacts with its users (``U'', in black), which make user-to-user (U2U) pairs, represented with arrows and their respective users. (b) Multiple ego networks may be aggregated to form the full network.} \label{DWM_ego_full_networks} \end{figure} The starting point for this paper is identifying U2U networks around DWMs. We analyse 40 DWMs for a 10-year time period spanning from June 18, 2011 to January 31, 2021. Our dataset covers all major DWMs that have ever existed, as identified by the European Monitoring Centre, Europol, the World Health Organization, and independent researchers~\cite{european2017drugs, world2019world, gwern_live_markets}. Our analysis focuses on Bitcoin -- the most popular cryptocurrency on DWMs~\cite{lee2019cybercriminal, foley2019sex} as well as in the regulated economy~\cite{baur2015cryptocurrencies, saiedi2020global}. We focus on two kinds of transactions, occurring (i) between the user and a DWM and (ii) between two users of the same DWM. The result is 40 distinct marketplace ego networks containing user-DWM and U2U transactions, whose typical structure is depicted in Figure~\ref{DWM_ego_full_networks}(a). In each network, links are directed and the arrows point at the receiver of Bitcoin. 
Since users often migrate from one DWM to another~\cite{elbahrawy2020collective} and become users of multiple DWMs, the 40 ego networks are not isolated, and can be combined to form one full network, as shown in Figure~\ref{DWM_ego_full_networks}(b). Previous analyses of U2U trading relationships around DWMs include only two studies~\cite{barratt2016if, munksgaard2020and} based on unstructured~\cite{barratt2016if} or semi-structured~\cite{munksgaard2020and} interviews of 17 users of Silk Road and 13 DWMs sellers, respectively. Here, we dramatically extend previous work by exploring the collective emergence and structure of U2U pairs. First, we observe that the U2U network, formed by all transactions between pairs of users, has a larger trading volume than DWMs themselves. We then identify stable U2U trading relationships, which represent a subset of persistent pairs in our dataset~\cite{nadini2020detecting, nadini2020reconstructing} forming the \emph{backbone} of the U2U network. We find that 137,667 (i.e., 1.7\% out of 7.85 million total) pairs are stable, generating a total trading volume of \$1.5 billion (i.e., 5\% out of \$30 billion total volume). We then explore the behaviour of users forming stable U2U pairs. We reveal that stable U2U pairs play a crucial role for marketplaces by spending significantly more time and generating far greater transaction volume with DWMs than other users. By analysing the temporal evolution of stable pairs, we unveil that DWMs acted as meeting points for 37,192 (out of around 16 million users), whose trading volume is estimated to be \$417 million. Importantly, these newly formed pairs persist in time and transact for several months even after the closure of the DWM that spurred their formation. Finally, we observe that COVID-19 only had a temporary impact on the evolution of stable U2U pairs, which continued to increase their trading volume throughout 2020. 
\section*{Results} \label{Results} \subsection*{Large number of U2U transactions} \paragraph{Ego networks.} We start our analysis by measuring the extent of the U2U network around each DWM. The percentages of users forming U2U pairs vary across DWMs, with a median value of 38\% (min 23\%, max 68\%). The variance in the percentages of users with U2U pairs is shown by Figure~\ref{Importance_U2U_transactions}(a), which shows that the number of users with U2U pairs obeys an almost linear relationship with the number of users interacting with a DWM, having an exponent equal to 1.06 and $R^2 = 0.969$. The total trading volume users sent to the marketplace is obviously equivalent to the one they receive from it (two-sided Wilcoxon test~\cite{wilcoxon1992individual}: $W=330$, $p=0.282$). Importantly, the total trading volume users sent to a DWM (and consequently the one that they receive from it) is always less than the one exchanged through U2U transactions, as shown in Figure~\ref{Importance_U2U_transactions}(b). \begin{figure}[H] \centering \includegraphics[width=16cm]{Importance_U2U_transactions_time.pdf} \caption{\textbf{User-DWM and U2U transactions.} (a) Total number of users interacting with a DWM against the total number of them forming U2U transactions. The dotted line corresponds to the result of a fitted power law function. (b) Trading volume in dollars sent to a DWM compared with the total trading volume in its surrounding U2U transactions. The dashed line is the bisector and allows to easily compare the two trading volumes. (c) Total monthly trading volume sent to all DWMs and exchanged in all unique U2U pairs. We do not include the trading volume received from DWMs because it is equivalent as the volume sent to DWMs.} \label{Importance_U2U_transactions} \end{figure} \paragraph{Full network.} Similar results hold for the full network, confirming that the formation of U2U pairs is a pervasive phenomenon around DWMs. 
The total trading volume users sent to DWMs is \$3.8 billion, received from DWMs \$3.7 billion, while the volume exchanged through U2U pairs reaches \$30 billion. In Figure~\ref{Distributions_pairs_dataset}, we illustrate the number of transactions, trading volume, and lifespan of U2U pairs. In all cases we observe familiar fat-tailed distributions. We then consider the temporal evolution of transactions. We look at the trading volume over time in Figure~\ref{Importance_U2U_transactions}(c), where we observe that U2U transactions have consistently involved greater monthly volume than the volume sent to all DWMs since 2011. This underlines the economic importance of U2U transactions in the Bitcoin ecosystem relative to DWMs. \subsection*{Behaviour of the U2U network} Henceforth, we are going to analyse users by focusing on the following groups: users who do not form stable U2U pairs; users who form stable U2U pairs, of which there are users who met outside DWMs and users who met inside DWMs (see the nomenclature in Table~\ref{Nomenclature}). We start by focusing our attention on identifying stable U2U pairs, i.e., persistent pairs of the U2U network. To this end, we use the evolving activity-driven model~\cite{nadini2020detecting} to extract them in a statistically-principled way (see Methods). We find 137,667 stable U2U pairs formed by 106,648 users and generating a trading volume equal to \$1.5 billion. Stable pairs produce five times more transactions per pair than non-stable pairs (two-sided Mann-Whitney-U test~\cite{mann1947test}: MNU$=4,58 \cdot 10^9$, $p<0.0001$) corresponding to a 5.34 times larger trading volume (MNU$= 317 \cdot 10^9$, $p<0.0001$), see Figure~\ref{Number_pairs_stable_not}. Stable pairs, despite representing less than 2\% of the total number of U2U pairs, generate a disproportionate amount of trading volume. 
\begin{figure}[H] \centering \includegraphics[width=12cm]{Role_users_in_DWMs_adjusted.pdf} \caption{\textbf{Role of users forming stable U2U pairs.} (Main) PDFs of trading volume that users exchange with any DWMs. (Inset) PDFs of time spent by users on any DWMs. These distributions are explored for each of the 40 DWMs under consideration in Figures~\ref{Role_users_trading_volume} and~\ref{Role_users_in_DWMs_time_spent}, respectively. Vertical lines represent median values of the respective distributions.} \label{Role_users} \end{figure} The high activity of users forming stable U2U pairs is not limited to the U2U network, as they are also the most active in trading with DWMs. Users in stable U2U pairs spend a median of 41 days on DWMs, versus a median of only one day for users without stable pairs. The two resulting distributions are significantly different (two-sided Kolmogorov-Smirnov test~\cite{massey1951kolmogorov}: KS $= 0.673$, $p<0.0001$), see the inset of Figure~\ref{Role_users}. When we look at the trading volume with DWMs, we find qualitatively similar results. Users in stable U2U pairs transact a median of \$400 with DWMs, while other users transact only \$56. The two resulting distributions are significantly different (KS $= 0.438$, $p<0.0001$), see Figure~\ref{Role_users}. These results hold not only for the full network but also for every DWM in our data, see Figures~\ref{Role_users_in_DWMs_time_spent} and~\ref{Role_users_trading_volume}.
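The distributional comparisons above use the two-sided Kolmogorov-Smirnov test. A sketch with synthetic days-on-DWM samples (the lognormal parameters are illustrative assumptions chosen to match the reported medians of 41 versus 1 day):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
# Hypothetical time-on-DWM samples (days); medians 41 vs 1.
days_stable = rng.lognormal(mean=np.log(41), sigma=1.0, size=2000)
days_other = rng.lognormal(mean=np.log(1), sigma=1.0, size=2000)

# KS statistic = maximum distance between the two empirical CDFs.
ks_stat, p_value = ks_2samp(days_stable, days_other)
```

For samples this different the KS statistic is close to 1 and the p-value is far below the $p<0.0001$ threshold quoted in the text.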
\subsection*{U2U network evolution} \paragraph{Formation of U2U stable pairs.} \begin{table}[H] \centering \begin{tabular}{ccc|ccc|ccc} \multicolumn{3}{c|}{\thead{Users who met outside the DWM}} & \multicolumn{3}{c|}{\thead{Users who met outside the DWM}} & \multicolumn{3}{c}{\thead{Users who met inside the DWM}} \\ \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_alpha1} \caption*{\thead{$t_1$}} \end{subfigure} & \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_beta2} \caption*{\thead{$t_2$}} \end{subfigure} & \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_gamma3} \caption*{\thead{$t_3$}} \end{subfigure} & \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_beta1} \caption*{\thead{$t_1$}} \end{subfigure} & \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_beta2} \caption*{\thead{$t_2$}} \end{subfigure} & \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_gamma3} \caption*{\thead{$t_3$}} \end{subfigure} & \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_gamma1} \caption*{\thead{$t_1$}} \end{subfigure} & \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_gamma2} \caption*{\thead{$t_2$}} \end{subfigure} & \begin{subfigure}[H]{0.08\textwidth} \centering \includegraphics[width=\textwidth]{Conf_gamma3} \caption*{\thead{$t_3$}} \end{subfigure} \end{tabular} \caption{\textbf{Formation mechanism of stable U2U pairs.} We compare the time at which the first transaction between a pair of users occurs with the times at which these users interact with the same DWM.
Each row in the figure indicates a possible temporal sequence, which we classify into two groups: users who met outside the DWM (first two columns) and users who met inside the DWM (last column).} \label{Motifs_Triads_table11} \end{table} Having mapped the behaviour of stable pairs, we now consider their temporal evolution. More specifically, we ask: How do stable pairs form? Do DWMs spur their creation? One possible hypothesis is that users meet for the first time while active on a DWM, i.e., after they have both traded with that DWM, see Table~\ref{Motifs_Triads_table11} and the nomenclature in Table~\ref{Nomenclature}. This can be considered as a plausible, and conservative, proxy for users who met inside a DWM (see Methods). A total of 37,129 users have met at least one other user inside a DWM. Their trading volume is about \$417 million, and the percentage of users who met inside a DWM is proportional to the trading volume sent to DWMs (Spearman~\cite{spearman1961proof}: $C=0.805$, $p<0.0001$), see Figure~\ref{Where_users_meet}, meaning that large DWMs are more likely to foster encounters between users than smaller DWMs. Importantly, users who met inside a DWM transact more than those who met outside. In particular, users who met inside a DWM trade a median of \$2,212 between themselves, roughly 1.6 times the \$1,379 for users who met outside the DWM (MNU$= 1.863 \cdot 10^9$, $p<0.0001$). Moreover, users who met inside a DWM tend to keep transacting for significantly longer, with a median of 61 days, than users who met outside, with a median of 50 days (MNU $= 2.099 \cdot 10^9$, $p<0.0001$). \begin{figure}[H] \centering \includegraphics[width=12cm]{U2U_after_DWM_closure_meet_inside_outside_adjusted.pdf} \caption{\textbf{Resilience of stable U2U pairs after DWMs closure.} Trading volume of U2U pairs surrounding active DWMs. (Main) U2U pairs who met inside a DWM. (Inset) U2U pairs who met outside DWMs.
Curves indicate the median value while bands represent the 95\% confidence interval. Day zero corresponds to the day when the market closed. Negative and positive numbers indicate the days prior and after the closure, respectively. Only the 33 closed DWMs are considered in the analysis.} \label{After_DWMs_closure_meet_inside_outside} \end{figure} \paragraph{Resilience of U2U stable pairs.} Thus far, we have shown that users involved in stable trading relationships are also very active on DWMs, where they may meet new trading partners. But are DWMs and the U2U network truly interdependent? In particular, do stable pairs need the DWMs to survive? To answer these questions, we look at market closures, previously investigated to show how active users migrate to other existing DWMs~\cite{elbahrawy2020collective}. Our dataset includes 33 closure events, which we study independently from one another by considering the evolution of the respective 33 marketplace ego networks. We find that non-stable U2U pairs sharply stop interacting immediately after DWM closure; their existence is therefore highly sensitive to the presence of the DWM. On the other hand, the trading volume of stable U2U pairs is only marginally affected by the disappearance of the DWM. As a result, while prior to DWM closure non-stable U2U pairs generate an overall trading volume that is 10 times higher than that of stable U2U pairs (since non-stable pairs are far more prevalent), within a few weeks after DWM closure the pattern is reversed: stable U2U pairs generate more trading volume than non-stable U2U pairs. Indeed, trading patterns of stable pairs are not significantly influenced by DWM closures, see Figure~\ref{After_DWMs_closure_meet_inside_outside}. We have shown that the U2U network is resilient to short-lasting external shocks, namely the closure of a marketplace, and that it does not need the centralised structure of DWMs to survive. What about long-lasting systemic stress?
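Before turning to that question, the day-zero alignment behind the closure analysis above (and Figure~\ref{After_DWMs_closure_meet_inside_outside}) can be sketched: each ego network is re-indexed to days relative to its own closure date, then aggregated across markets. A toy example with two hypothetical markets (column names and values are invented; the real analysis uses the 33 closed DWMs):

```python
import pandas as pd

# Hypothetical daily U2U trading volumes around two closure events.
volumes = pd.DataFrame({
    "dwm": ["A"] * 4 + ["B"] * 4,
    "date": pd.to_datetime(
        ["2017-07-03", "2017-07-04", "2017-07-05", "2017-07-06",
         "2019-04-29", "2019-04-30", "2019-05-01", "2019-05-02"]),
    "volume_usd": [100.0, 90.0, 10.0, 5.0, 200.0, 180.0, 20.0, 10.0],
})
closure = {"A": pd.Timestamp("2017-07-05"), "B": pd.Timestamp("2019-05-01")}

# Re-index each market to days relative to its own closure (day 0),
# then take the median across markets for each relative day.
volumes["rel_day"] = (volumes["date"] - volumes["dwm"].map(closure)).dt.days
median_profile = volumes.groupby("rel_day")["volume_usd"].median()
```

The resulting `median_profile` is the kind of curve plotted around day zero, with confidence bands added by bootstrapping in the real analysis.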
To answer this question, we consider the impact that the COVID-19 pandemic has had on the evolution of stable U2U pairs. Previous studies reported that COVID-19 had a strong impact on DWMs, with reported delays and damage to the shipping infrastructure due to border closures~\cite{bergeron2020preliminary, Chainalysis_DWMs_growth_2020}. We start by investigating the number of new stable U2U pairs and their trading volume. The numbers of users in stable pairs who met inside and outside DWMs have both been growing over the last two years. In 2020, a total of 6,778 stable pairs were formed by users who met inside a DWM, corresponding to 192\% of the 2019 figure and 255\% of the 2018 figure, see Figure~\ref{Temporal_analysis}(a). Stable pairs who met inside a DWM traded a total of \$145 million in 2020, which corresponds to 252\% of the 2019 figure and 593\% of the 2018 figure, see Figure~\ref{Temporal_analysis}(b). We see similar trends for stable U2U pairs who met outside any DWM. The impact of the COVID-19 pandemic has, however, had different phases, determined by the number and level of measures introduced around the world. For users in stable pairs who met either inside or outside DWMs, we find that during the first lockdowns in 2020 trading volume fell with respect to January of the same year, suggesting that they were negatively impacted by COVID-19 restrictions. After that, trading volume sharply increased throughout the rest of 2020, see Figure~\ref{Trading_volume_COVID}. The number of stable U2U pairs created each day was, however, steady over time during 2020, even though more U2U pairs were created compared to the same period of 2019, see Figure~\ref{new_U2U_pairs_COVID19}. Overall, stable U2U pairs have shown resilience to the systemic stress caused by COVID-19, suggesting, once again, that these trading relationships are fundamentally independent from the underlying DWMs.
\section*{Discussion and Conclusion} \label{Discussions_conclusion} In this paper, we revealed the prevalence and structure of a large network of direct transactions between users who trade on the same DWM. We showed that some of the links of this user-to-user (U2U) network are ephemeral while others persist in time. We highlighted that a significant fraction of stable U2U pairs formed while their members were trading with the same DWM, suggesting that DWMs may play a role in promoting the formation of stable U2U pairs. We showed that the relationships between users forming stable pairs persist even after the DWM shuts down and are not significantly affected by COVID-19, suggesting overall resilience of stable pairs to external shocks. Our study has several limitations. In particular, our dataset does not include any attributes related to either users or their Bitcoin transactions, such as whether the transaction represents an actual purchase or not. Moreover, we do not have information about which users trade with other users on the same DWM. Finally, our coverage of DWMs, albeit extensive, may lack information on other DWMs where users could have met. Our work has several policy implications. Our findings suggest that DWMs are much more than mere marketplaces~\cite{gupta2021dark}. DWMs are also communication platforms, where users can meet and chat with other users either directly -- using WhatsApp, phone, or email -- or through specialised forums. These direct interactions may favour the emergence of decentralised trade networks that bypass the intermediary role of the marketplace, similar to what is currently happening on Facebook, Telegram, and Reddit~\cite{oksanen2020illicit, bakken2019sellers,DarknetLive_telegram, sung2021prevalence, childs2021beyond, kwon2021dark}, where users post products, negotiate item prices, and then trade directly without an intermediary.
We estimate that the trading volume of U2U pairs who met on DWMs has been increasing, reaching a peak in 2020 (during the COVID-19 pandemic). Indeed, our results support recent recommendations of paying attention to single sellers rather than entire DWMs~\cite{Cryptomarkets2020}. Law enforcement agencies, however, have only recently started targeting single sellers. The first operation took place in 2018 and successfully led to the arrest of 35 sellers~\cite{FirstLargeVendorArrest2018}, while the largest operation to date occurred in 2020 and led to 179 arrests in six different countries~\cite{Europol2020}. Our study indicates that a much higher number of highly active DWM users, on the order of tens of thousands, are involved in transactions with other DWM users. Overall, our study provides a first step towards the understanding of how users of DWMs collectively behave outside organised marketplaces. We believe that our results might suggest to researchers, practitioners, and law enforcement agencies that a shift of attention from the evolution of DWMs to the behaviour of their users might facilitate the design of more appropriate strategies to counteract online trading of illicit goods. \section*{Competing interests} The authors declare that they have no competing interests. \section*{Authors' contributions} M.N., A.Br., A.E., P.G., A.T., and A.Ba. designed the research; A.E. and P.G. acquired, prepared, and cleansed the data. M.N. and A.Br. performed the measurements. M.N., A.Br., A.E., P.G., A.T., and A.Ba. analysed the data. M.N., A.Br., P.G., A.T., and A.Ba. wrote the manuscript. M.N., A.Br., A.E., P.G., A.T., and A.Ba. discussed the results and commented on the manuscript. \section*{Acknowledgements} M.N., A.Br., A.T., and A.Ba. were supported by ESRC as part of UK Research and Innovation’s rapid response to COVID-19, through grant ES/V00400X/1. \section*{Data availability} All data needed to evaluate the conclusions in the paper are present in the paper.
Additional data related to this paper may be requested from the authors. \section*{Data and methods} \label{Data_methods} Additional considerations on our data and methods are available in Section~\ref{Data_methods_SI}. \paragraph{Data preprocessing.} We consider only a subset of the transactions in our dataset, namely those made by the 40 entities representing the 40 DWMs under consideration, which directly interact with more than 16 million other entities, who are the users of these DWMs. Users interacting with other users form U2U pairs, and we include them in our dataset. We instead discard single Bitcoin transactions below \$0.01 or above \$100,000, which are unlikely to represent real purchases; discarding them minimises false positives. Such transactions may be attributed to a residual amount of Bitcoins in an address or to transactions between two business partners where no good is actually given in return, respectively. The analysed dataset includes about 31 million transactions among more than 16 million users. Finally, we note that the same user can interact with multiple DWMs~\cite{elbahrawy2020collective, hiramoto2020measuring}. By definition, users that interact among themselves form U2U transactions. If a pair of users interacts with multiple DWMs, their U2U transactions are included in the ego networks of all respective DWMs and thus counted multiple times. Therefore, the simple sum of the U2U transactions of all DWMs exceeds the number of unique U2U transactions. We count a total of 11 million transactions around all DWMs, which goes down to 9.9 million when multiple counting is avoided. Similarly, the simple sum of the single trading volumes surrounding all DWMs amounts to \$33 billion, while the overall trading volume in all unique U2U pairs is \$30 billion. Among the 40 large DWMs under consideration, 17 participate in at least one transaction in either 2020 or 2021, while the remaining 23 closed before 2020.
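The filtering and deduplication steps described above can be sketched with a toy transaction table (column names and values are illustrative; the real dataset holds about 31 million rows):

```python
import pandas as pd

# Toy U2U transaction list; the second row duplicates a transaction
# that would appear in two different DWM ego networks.
tx = pd.DataFrame({
    "sender":   ["u1", "u2", "u2", "u3", "u1"],
    "receiver": ["u2", "u3", "u3", "u4", "u5"],
    "usd":      [0.005, 50.0, 50.0, 250000.0, 120.0],
})

# Discard single transactions below $0.01 or above $100,000,
# which are unlikely to represent real purchases.
tx = tx[(tx["usd"] >= 0.01) & (tx["usd"] <= 100_000)]

# A U2U transaction appearing in several ego networks must be
# counted once: deduplicate on the full transaction record.
unique_tx = tx.drop_duplicates()
```

In the toy table the filter drops two rows and the deduplication removes one more, mirroring the reduction from 11 million to 9.9 million transactions when multiple counting is avoided.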
Notably, our dataset includes Silk Road (the first modern DWM)~\cite{christin2013traveling}, Alphabay (once the leading DWM)~\cite{van2016drugs}, and Hydra (currently the largest DWM in Russia)~\cite{elbahrawy2020collective}. Other general statistics about our dataset can be found in Section~\ref{general_statistics_label}. \begin{figure}[H] \centering \includegraphics[width=0.7\textwidth]{U2U_network_selected.pdf} \caption{\textbf{U2U network.} The U2U network is formed by the entire set of interacting users (black and gray arrows with their respective users). Using the evolving activity-driven model~\cite{nadini2020detecting}, U2U pairs are divided into either stable (black arrows and respective users) or non-stable (gray arrows and respective users).} \label{Backbone_U2U_network} \end{figure} \paragraph{Detection of the U2U network.} The detection of stable U2U pairs in the full network is performed using the evolving activity-driven model~\cite{nadini2020detecting}, which introduced a statistically principled methodology to detect the network backbone against what is expected from a proper null model. If a U2U pair occurs significantly more often than expected under the null model, it is labeled as stable, otherwise as non-stable. The evolving activity-driven model is an appropriate methodology for large temporal networks~\cite{nadini2020reconstructing} and is implemented in the Python 3 pip package TemporalBackbone~\cite{TemporalBackbone_pip2021}, for which default parameter values have been used. As input, we considered the full network, comprising transactions from/to DWMs and U2U transactions between users (see Section~\ref{Detecting_trading_partners}). \paragraph{Users who met inside a DWM.} We determine whether U2U pairs meet while active on a DWM by looking at the time occurrence of their first U2U transaction. This transaction can occur at three different moments in time.
(i) At $t=t_1$, before both users interact with the same DWM (occurring at $t=t_2>t_1$ and $t=t_3>t_1$, respectively), as shown on the left-hand side of Table~\ref{Motifs_Triads_table11}. (ii) At $t=t_2$, when only one user has interacted with a specific DWM and the other user will do so at a later time, as in the middle column of Table~\ref{Motifs_Triads_table11}. (iii) At $t=t_3$, when both users have interacted with the same DWM, as in the right column of Table~\ref{Motifs_Triads_table11}. We classify these three chains of events into two groups. One group includes all pairs who met outside any DWM, covering cases (i) and (ii); the other group includes users who met inside a DWM, described by case (iii). This last case constitutes a conservative proxy for users who met inside a DWM. The proxy admits the possibility of false positives, since it may count pairs who had both interacted with the same DWM but actually first met outside it, as well as false negatives, since it does not take into account users who met inside a DWM without both having ever transacted with it. The latter is arguably more significant, since it is possible that only one of the two users (the seller) has actually engaged in transactions with the DWM, while the other user, after seeing the seller’s profile on a DWM, has established a direct contact, through WhatsApp, email, or phone. \paragraph{Nomenclature of all groups considered.} We provide the definition of all considered groups in Table~\ref{Nomenclature}. \begin{table}[H] \centering \begin{adjustbox}{max width=\textwidth} \begin{tabular}{l|l|c} Group & Description & Number of users \\ \hline \thead{1. Users who do not form \\ stable U2U pairs} & \thead{Users who form no stable U2U pairs \\ (with or without non-stable U2U pairs)} & 15,875,877 \\ \thead{2. Users who form \\ stable U2U pairs} & \thead{ Users who form at least one stable U2U pair \\ as detected by our chosen methodology~\cite{nadini2020detecting} } & 106,648 \\ \thead{2a.
Users who met \\ outside DWMs} & \thead{Users that form stable pairs and met at least \\ one other user following the chain \\ of events in Table~\ref{Motifs_Triads_table11} (first two columns)} & 88,828 \\ \thead{2b. Users who met \\ inside a DWM} & \thead{Users that form stable pairs and met at least \\ one other user following the chain \\ of events in Table~\ref{Motifs_Triads_table11} (last column)} & 37,129 \\ \end{tabular} \end{adjustbox} \caption{\textbf{Nomenclature.} Definitions of all groups into which the users are divided based on their behaviour. The number of users in each group is given in the last column.} \label{Nomenclature} \end{table} \bibliographystyle{unsrt}
\section{Introduction} Optical spectra are important for the characterization and prediction of material properties, as optical excitations are at the core of, e.g., light-emitting devices, laser technology, and photovoltaics. For extended systems many-body perturbation theory~\cite{onida:02,botti-review:07}, a Green's function based approach, is the most accurate method to calculate optical properties. Perhaps inevitably, it is also one of the most computationally costly methods available to the community. It involves the solution of an equation of motion for the two-particle Green's function, the Bethe-Salpeter equation (BSE), that describes coupled and correlated electron-hole excitations~\cite{albrecht:98,*benedict:98,*rohlfing:98,*PhysRevLett.33.582}. The standard numerical techniques used to solve the BSE are based on an expansion of the relevant quantities in electron-hole states (needing therefore both filled and empty states), and require a very dense $k$-point sampling of the Brillouin zone (BZ). Typically, the number of electron-hole states used in the expansion can be relatively small if one is only interested in the visible spectra, but the number of $k$-points can easily reach several thousands. Some approaches have been put forward to reduce the computational burden of the BSE. For example, the number of $k$-points can be reduced by interpolating the interaction integrals in $k$-space~\cite{rohlfing:00}, while recent implementations allow for the complete exclusion of empty states~\cite{rocca:12}. It is well known that optical spectra are very sensitive to the $k$-point sampling~\cite{albrecht:99,rocca:12,aguilera:11}. A common approach to alleviate the problem is the use of arbitrarily shifted $k$-point grids, that often yield sufficient sampling of the Brillouin zone while keeping the number of $k$-points manageable.
Such a shifted grid, indeed, does not use the symmetries of the Brillouin zone and guarantees a maximum number of non-equivalent $k$-points, accelerating spectrum convergence~\cite{benedict:98b}. However, it might induce artificial splitting of normally degenerate states, and thus produce artifacts in the spectrum~\cite{wirtz:08}, such as the splitting of some peaks or even the appearance of spurious excitations in some directions. Of course, these artifacts (slowly) disappear with increasing density of $k$-points~\cite{albrecht:99,rocca:12}, with a consequent increase of the computational burden. A very dense $k$-point sampling is thus crucial to obtain an accurate lineshape, including the correct peak positions~\cite{albrecht:99, hahn:05}, but very hard to achieve in practical calculations. In this Article, we propose a new strategy to solve the BSE that alleviates the need for dense $k$-point grids. The independent-particle part of the BSE is first evaluated on a very dense $k$-grid ($40\x40\x40$ in the example below) by making use of Wannier interpolation~\cite{marzari:97}. The BSE is then solved on an unshifted coarse $k$-grid ($10\x10\x10$ in the example below) using a double-grid technique to take into account the fast-changing independent-particle contribution. This approach is simple to implement, and leads to a considerable gain in computational time. In the following, we start by presenting a short review of the theoretical ingredients for the description of optical spectra within the BSE~\cite{rohlfing:00, strinati, *bussi:04}. We then discuss our approach and prove its usefulness with a notoriously difficult example, the calculation of the optical absorption spectrum of the standard semiconductor GaAs. The subsequent discussion of bulk silicon concludes the benchmark.
The optical absorption spectrum is described by the imaginary part of the macroscopic dielectric function $\epsilon_{\textrm M}(\omega)$ in the long wavelength limit, which in turn can be obtained from the two-point contraction of the reducible four-point polarizability $L$, \begin{equation} \label{eq:eps_macro} \epsilon_{\textrm M}(\omega) = 1 - \lim_{\bm{q}\rightarrow0} v(\bm{q}) \bm{\lambda} \int \! d\bm{r} d\bm{r}' e^{-i\bm{q}(\bm{r}-\bm{r}')} L(\bm{r},\bm{r}, \bm{r}',\bm{r}';\omega), \end{equation} with the Coulomb potential $v=4\pi/({\bm G}+\bm{q})^2$, the transferred momentum $\bm{q}$, and $\bm{\lambda}$ the direction of light polarization. The quantity $L$ satisfies the BSE, a Dyson like equation, \begin{multline} \label{eq:bse_realspace} L(1,2,3,4) = L^0(1,2,3,4) + \\ \int d(5678) L^0(1,2,5,6)\, \Xi(5,6,7,8)\, L(7,8,3,4), \end{multline} with the abbreviation of space, spin and time coordinates $(1) = (\bm{r}_1, \sigma_1, t_1)$, and where $L^0(1,2,3,4)=i G(1,3) G(4,2)$ is the independent particle polarizability, expressed as a product of single-particle Green's functions. Equation~\eqref{eq:bse_realspace} describes the effects of the electron-hole interaction mediated by the BSE kernel $\Xi=\overline{v} - W$ that is composed of (i) a bare, repulsive short-range \textit{exchange term}, that includes the microscopic components of the Coulomb interaction, i.e. $\overline{v}_{{\bm G} \neq 0}=v_{{\bm G}}; \overline{v}_{{\bm G}=0}=0$, and (ii) an attractive, static screened Coulomb potential $W$, the \textit{direct term}, arising from the variation of the self-energy. Dynamical effects due to the self-energy influence both the (single) quasiparticle renormalization and the excitonic two-body interaction $W$~\cite{bechstedt:97, marini:03}. 
In response calculations for semiconductors they partially cancel each other, which justifies the commonly employed approximation of a static $W$ and neglected quasiparticle renormalization, but in general this is not true for metals~\cite{marini:03}. As the interaction is instantaneous, only two of the initial four time variables remain and, due to time-translation invariance, $L$ and $L^0$ depend only on the relative time difference. A time-energy Fourier transformation then turns the polarizability into a function of a single frequency $L(1,2,3,4;\omega)$ where from now on $(1)=(\bm{r}_1,\sigma_1)$ only. By taking advantage of the two-particle nature of the BSE, all expressions are conveniently written on the basis of the electron-hole vertical transition space composed of $N_v$ valence bands, $N_c$ conduction bands, and $N_{\kk}^{\text{\tiny BZ}}$ $k$-points in the whole BZ. The dimension of this basis is $4 \! \times \! N_v \! \times \! N_c \! \times \! N_{\kk}^{\text{\tiny BZ}}$. In the case of vanishing spin-orbit coupling, the BSE can be separated into the two subspaces of singlet and triplet excitons~\cite{rohlfing:00}, each of them with dimension $2 \! \times \! N_v \! \times \! N_c \! \times \! N_{\kk}^{\text{\tiny BZ}}$. The basis sets that span these subspaces are constructed from pairs of single-particle states $\phi_{n,\bm{k}}$ with $n$ as band index and $\bm{k}$ as $k$-point and spin variable, such that \begin{equation}\label{eq:transition_basis} \Phi_{\bm{K}}(\bm{r}_1,\bm{r}_2) = \phi_{c,\bm{k}}(\bm{r}_1) \cdot \phi^*_{v,\bm{k}}(\bm{r}_2), \end{equation} with the short-hand notation $\bm{K}=(c,v,\bm{k})$, and $c$ and $v$ running over indices of conduction and valence bands, respectively.
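The size and indexing of this transition space can be made concrete with a toy flat-indexing scheme (the ordering convention below is an assumption; any consistent mapping of $\bm{K}=(c,v,\bm{k})$ to matrix rows works):

```python
# Illustrative counting for the electron-hole transition basis
# K = (c, v, k); Nv, Nc, Nk are toy values, not those of the paper.
Nv, Nc, Nk = 2, 3, 1000      # valence bands, conduction bands, k-points

def flat_index(c, v, k, Nv=Nv, Nc=Nc):
    """Map a transition (c, v, k) to a row/column index of the BSE
    matrices (one ordering convention among several possible)."""
    return (k * Nc + c) * Nv + v

dim = Nv * Nc * Nk           # size of one resonant (Tamm-Dancoff) block
```

All two-particle quantities ($L$, $L^0$, $\Xi$) then become `dim` $\times$ `dim` matrices, which makes the memory cost of storing $\mat{\Xi}$ on a dense $k$-grid evident.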
In this basis the polarizability is written as \begin{multline}\label{eq:transition_repres} L_{\bm{K}_1,\bm{K}_2}(\omega) = \int d\bm{r}_1 d\bm{r}_2 d\bm{r}_3 d\bm{r}_4 \\ \Phi^*_{\bm{K}_1}(\bm{r}_1,\bm{r}_2) L(1,2,3,4;\omega) \Phi_{\bm{K}_2}(\bm{r}_3,\bm{r}_4) , \end{multline} and $L^0$, that is now diagonal, reads \begin{equation}\label{eq:l0_transition_repres} L^0_{\bm{K}_1,\bm{K}_2}(\omega) = \frac{f_{c_1 \bm{k}_1} - f_{v_1 \bm{k}_1}} {\epsilon_{c_1 \bm{k}_1} - \epsilon_{v_1 \bm{k}_1} - \omega -i\eta} \delta_{\bm{K}_1,\bm{K}_2}, \end{equation} where $f$ denotes the occupation number. The infinitesimal $\eta$ shifts the pole $\omega = \epsilon_{c_1 \bm{k}_1} - \epsilon_{v_1 \bm{k}_1}$ away from the real axis, and is thus responsible for a finite life-time of the excitation. In this basis, the BSE becomes a matrix equation \begin{equation} \label{eq:bse} L_{\bm{K}_1,\bm{K}_2} = L^0_{\bm{K}_1,\bm{K}_2} + L^0_{\bm{K}_1,\bm{K}_3} \Xi_{\bm{K}_3,\bm{K}_4} L_{\bm{K}_4,\bm{K}_2}, \end{equation} where we used Einstein's notation for summations over repeated indices in the tensor products, and omitted the explicit energy dependence for clarity. In the following we adopt the notation of $\mat{O}$ for the matrix representation in the electron-hole basis of an arbitrary operator $O$. It can be shown that the kernel $\mat{\Xi}$ couples pairs of excitations $(vc)$ with $(v'c')$, but also $(vc)$ with $(c'v')$, leading to the so-called resonant and coupling terms, respectively~\cite{rohlfing:00}. Here we make use of the so-called Tamm-Dancoff approximation, and neglect the latter. We thus arrive at a Hilbert space of dimension $ N_v \! \times \! N_c \! \times \! N_{\kk}^{\text{\tiny BZ}}$, regardless of the symmetries of the $k$-grid. Note that these are standard approximations for the solution of the BSE. 
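The diagonal form of $\mat{L^0}$ above can be evaluated directly from single-particle energies and occupations. A minimal numerical sketch with toy values (energies, occupations, and broadening $\eta$ are all illustrative):

```python
import numpy as np

def l0_diagonal(e_c, e_v, f_c, f_v, omega, eta=0.1):
    """Diagonal of the independent-particle polarizability for one
    frequency: (f_c - f_v) / (e_c - e_v - omega - i*eta), with the
    arrays running over transitions K = (c, v, k)."""
    return (f_c - f_v) / (e_c - e_v - omega - 1j * eta)

# Two toy transitions with gaps of 1.5 and 2.0 eV, filled valence
# and empty conduction states.
e_c = np.array([1.5, 2.0])
e_v = np.array([0.0, 0.0])
f_c = np.array([0.0, 0.0])
f_v = np.array([1.0, 1.0])
l0 = l0_diagonal(e_c, e_v, f_c, f_v, omega=1.5)
```

At $\omega = 1.5$ the first transition is resonant, so its pole contribution is limited only by the broadening $\eta$, which models the finite lifetime of the excitation.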
Equation~\eqref{eq:bse} can be solved symbolically, yielding for each frequency $\omega$ \begin{equation} \label{eq:bse_invert} L_{\bm{K}_1,\bm{K}_2} = [ \mat{1 - L^0 \Xi} ]^{-1}_{\bm{K}_1,\bm{K}_3} \, L^0_{\bm{K}_3,\bm{K}_2} . \end{equation} To circumvent the inversion of $\mat{1 - L^0 \Xi}$, the usual procedure is to rewrite the matrix into a two-particle Hamiltonian whose diagonalization gives the excitonic eigensystem used to express the polarizability for \textit{all} frequencies at once. As it will become clear in the following, we stick to the inversion scheme by taking advantage of the series expansion of Eq.~\eqref{eq:bse_invert}, \begin{equation} \label{eq:bse_inv_expand} L_{\bm{K}_1,\bm{K}_2}(\omega) = \sum_m [\mat{ L^0(\omega) \Xi}]^m _{\bm{K}_1,\bm{K}_3} \, L^0_{\bm{K}_3,\bm{K}_2}(\omega) , \end{equation} which is truncated at convergence. If, however, convergence is not attained, i.e., the assumption of the expandability of Eq.~\eqref{eq:bse_invert} is falsified \textit{a posteriori}, we perform the full inversion. The solution of Eq.~\eqref{eq:bse_inv_expand} has two distinct bottlenecks in terms of computational cost. First, the calculation of the matrix $\mat{\Xi}$ is very time-consuming. Second, its storage needs large quantities of memory. In view of that, reducing the number of $k$-points is a major issue. To this end, Rohlfing and Louie~\cite{rohlfing:00} employed a double-grid technique where the kernel $\mat{\Xi}$ is calculated on a coarse grid with its subsequent interpolation onto a fine $k$-point mesh where the BSE is solved. This approach helps reduce the time necessary to compute $\mat{\Xi}$, but it is less helpful for saving memory, since it requires the storage of the computed and interpolated matrix elements of the kernel.
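The truncated series expansion with an inversion fallback can be sketched as follows, with small random matrices standing in for $\mat{L^0}$ and $\mat{\Xi}$ (sizes, scales, and tolerances are illustrative assumptions, not the values used in the actual implementation):

```python
import numpy as np

def solve_by_expansion(L0, Xi, tol=1e-10, max_terms=200):
    """Evaluate L = sum_m (L0 Xi)^m L0, truncating the geometric
    series when the added term is negligible; fall back to a direct
    solve if the expansion does not converge."""
    M = L0 @ Xi
    term = L0.copy()
    L = L0.copy()
    for _ in range(max_terms):
        term = M @ term
        L = L + term
        if np.linalg.norm(term) < tol * np.linalg.norm(L):
            return L
    # Expansion invalid (spectral radius of L0 Xi too large): invert.
    return np.linalg.solve(np.eye(len(L0)) - M, L0)

rng = np.random.default_rng(2)
n = 20
L0 = np.diag(rng.uniform(-0.5, 0.5, n)).astype(complex)  # toy diagonal L0
Xi = 0.1 * rng.standard_normal((n, n))                   # toy kernel
L = solve_by_expansion(L0, Xi)
exact = np.linalg.solve(np.eye(n) - L0 @ Xi, L0)
```

For these weakly coupled toy matrices the series converges quickly and reproduces the directly inverted result; the fallback path mirrors the full inversion performed when convergence is not attained.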
The use of this kernel interpolation is justified by the authors' observation that $\mat{\Xi}$ varies little with respect to the $k$-points, as the single-particle wavefunctions $\phi_{n\bm{k}}$ are quite robust with respect to $\bm{k}$ (with the possible exception of sudden band crossings). In a similar spirit it has been shown~\cite{marini:01, adragna:03} for the random phase approximation (RPA) that $\mat{L^0}$ is a rapidly varying quantity that can be correctly evaluated by performing additional Monte Carlo integrations over a large number of random $\bm{k}$ points. Note that $\mat{L^0}$ can in principle be easily calculated for a large number of $k$-points and bands. By taking into account these observations, we define in our approach two grids: (i)~a coarse one with points $\bm{k}$ on which we calculate \textit{and} store $\mat{\Xi}$ and solve Eq.~\eqref{eq:bse_inv_expand}, and (ii)~a fine $k$-grid with vectors $\tilde{{\bm k}}$ on which we compute $\mat{L^0}$. The mapping of $L^0_{\tilde{{\bm K}}_1,\tilde{{\bm K}}_2 }$ to $L^0_{\bm{K}_1,\bm{K}_2 }$ is performed through a double-grid technique~\cite{ono:99} with a suitably chosen interpolation for the kernel. To simplify our approach we use the simplest zeroth-order interpolation, which amounts to averaging the finely resolved $\mat{L^0}$ in a neighborhood around each point of the coarse grid. Consequently, this technique is expected to work if the oscillator strengths and $\mat{\Xi}$ are smoothly varying functions of $\tilde{{\bm k}}$. In practice, we define: \begin{equation} \label{eq:l0_mean} L^0(\omega)_{\bm{K}_1,\bm{K}_2 } = \frac{1}{N_{\tilde{{\bm k}}}} \sum_{\tilde{{\bm k}} \in \mathcal{D}_{\bm{k}}} \frac{f_{c_1 \tilde{{\bm k}}} - f_{v_1 \tilde{{\bm k}}}} {\epsilon_{c_1 \tilde{{\bm k}}} - \epsilon_{v_1 \tilde{{\bm k}}} - \omega -i\eta} \delta_{\bm{K}_1,\bm{K}_2}, \end{equation} where $N_{\tilde{{\bm k}}}$ is the number of $k$-points of the fine grid in the domain $\mathcal{D}_{\bm{k}}$ around $\bm{k}$ of the coarse grid.
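This averaging of the finely resolved $\mat{L^0}$ can be illustrated for a single coarse point and transition, assuming fully occupied valence and empty conduction states and synthetic fine-grid energies (all values are toy numbers):

```python
import numpy as np

def averaged_l0(eps_c_fine, eps_v_fine, omega, eta=0.1):
    """Average of the independent-particle L^0 over the fine k-points
    in the domain around one coarse k-point; f_v = 1 and f_c = 0 are
    assumed, so the numerator f_c - f_v equals -1."""
    terms = -1.0 / (eps_c_fine - eps_v_fine - omega - 1j * eta)
    return terms.mean()

rng = np.random.default_rng(3)
# 64 fine points around one coarse point: gaps spread over ~0.2 eV.
eps_c = 1.5 + rng.uniform(-0.1, 0.1, 64)   # conduction energies (eV)
eps_v = np.zeros(64)                       # valence energies (eV)

l0_avg = averaged_l0(eps_c, eps_v, omega=1.5)
naive = -1.0 / (1.5 - 0.0 - 1.5 - 0.1j)    # single coarse-point value
```

At resonance the averaged value has a noticeably smaller magnitude than the single coarse-point pole: averaging the poles is not equivalent to averaging the transition energies, which is exactly the distinction made in the text.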
We would like to note that we are averaging the polarization $\mat{L^0}$ that has poles at the particle excitation energies, which is not equivalent to averaging the particle excitation energies that appear in the diagonal of the excitonic Hamiltonian. An arbitrary $k$-point resolution of $\mat{L^0}$ is possible once the respective single-particle energies $\epsilon_{n\tilde{{\bm k}}}$ are available. In general, the calculation of quasiparticle states on the $\tilde{{\bm k}}$ grid is not practical. Fortunately, there is a solution to this problem that relies on the interpolation of the (quasiparticle) electronic structure to a dense $k$-grid using maximally localized Wannier functions~\cite{marzari:97}. In this method, $\epsilon_{n\tilde{{\bm k}}}$ at an arbitrary $k$-point $\tilde{{\bm k}}$ is the result of (i) a rotation of the initial quasiparticle Hamiltonian into the Wannier basis, (ii) its Fourier interpolation to the fine grid of $k$-points, and (iii) the diagonalization of the resulting Hamiltonian~\cite{hamann:09}. Note that, even though this procedure leads to an expression that has the form of a Slater-Koster tight-binding interpolation~\cite{slater:54}, the obtained single-particle energies are calculated directly from the underlying \textit{ab initio} eigensystem rather than just fitted to it. \begin{figure} \begin{center} \includegraphics[width=0.99\columnwidth,clip]{./spectra_gaas_rpa_1.eps} \caption{\label{fig:gaas_spectrum_rpa} (Color online) Calculated RPA absorption spectra starting from $GW$ corrected bands for bulk GaAs with $N_v=N_c=2$ for various $k$-grids, without (a) and with (b) the double-grid method.
The shaded area in (b) shows the spectrum corresponding to the $40\x40\x40$ $k$-grid of panel (a) for better comparison between the double-grid method and the standard calculations.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.99\columnwidth,clip]{./spectra_gaas_bse_2.eps} \caption{\label{fig:gaas_spectrum_bse} (Color online) Calculated BSE absorption spectra for bulk GaAs with $N_v=2$ and $N_c=3$ using several $k$-grids without (a) and with (b) the double-grid method. In (a) one spectrum is calculated on a symmetric $10\x10\x10$ $k$-grid, while the others are on shifted grids to accelerate convergence. The shaded area in both panels indicates the experimental spectrum at $22$~K~\cite{lautenschlager:87}. } \end{center} \end{figure} To illustrate the efficiency of our scheme we calculated the optical spectra of semiconducting GaAs, known for its slow convergence with respect to the $k$-point sampling~\cite{rohlfing:00, hahn:05, benedict:98}. The Kohn-Sham band structure and wavefunctions were obtained with density functional theory (DFT) within the local density approximation using norm-conserving pseudopotentials with an energy cutoff of $14$~Hartree and the experimental lattice constant of $10.68$~Bohr~\cite{cohen:88}. For the DFT part we utilized the code \texttt{ABINIT}~\cite{abinit:09}. To obtain the energy bands [used in Eq.~\eqref{eq:l0_mean}] we employed the code \texttt{wannier90}~\cite{wannier90} to perform a Wannier interpolation to a $40 \! \times \! 40 \! \times \! 40$ regular $k$-point grid in the whole BZ. Finally, optical spectra were calculated using the code \texttt{Yambo}~\cite{yambo}, which uses the DFT Kohn-Sham wavefunctions and the interpolated single-particle energies as input. It is well known~\cite{godby:87} that self-energy corrections in GaAs can be simulated by a rigid shift of the conduction bands.
We therefore applied a scissor operator of $0.9$~eV, which reproduces the $GW$ corrected band dispersions to within 0.1\,eV and also agrees closely with experimental data~\cite{bimberg:72,aspnes:76,chiang:80,wolford:85,aspnes:86}. For all spectra in Figs.~\ref{fig:gaas_spectrum_rpa} and \ref{fig:gaas_spectrum_bse} we included the two highest valence bands and the two (three for BSE) lowest conduction bands, considering only the resonant part of the BSE kernel $\mat{\Xi}$, and used a Lorentzian broadening of $0.1$~eV. Furthermore, we neglected spin-orbit coupling. Omitting local field effects (LFE), the non-interacting RPA spectra in Fig.~\ref{fig:gaas_spectrum_rpa}(a) are obtained on symmetric Monkhorst-Pack (MP) grids~\cite{monkhorst:76}. With increasing $k$-point resolution the spectrum converges to two main peaks at $3.3$~eV and $5.3$~eV. By using our double-grid method, shown in panel (b), a $12\x12\x12$ symmetric grid yields an equally well converged spectrum. We observe an excellent agreement between the latter and the RPA spectrum calculated on a $40\x40\x40$ grid (indicated by the shaded area). In general, if one calculates independent-electron transitions starting from $GW$ corrected bands, the oscillator strength of the absorption spectrum is shifted too high in energy compared to experiment. The attractive net electron-hole interaction decreases the energy of the excited states and transfers oscillator strength to lower energies. This can be seen by comparing the non-interacting and interacting results of Figs.~\ref{fig:gaas_spectrum_rpa} and \ref{fig:gaas_spectrum_bse}, respectively. Figure~\ref{fig:gaas_spectrum_bse}(a) illustrates that a symmetric $10\x10\x10$ grid alone does not provide enough independent sampling points for a converged BSE spectrum. Shifting this grid away from the high-symmetry directions provides 1~000 instead of only 47 nonequivalent sampling points (see Table~\ref{tab:kpoints}).
This leads to a spectrum that is in reasonable agreement with the experimental results of Ref.~\onlinecite{lautenschlager:87}. Nevertheless, the low-energy region (peak at $1.9$~eV) and the region between the two main transitions at $3.2$~eV and $5.1$~eV are still expected to change on a denser $k$-grid. With our double-grid technique the BSE spectrum is converged even on a symmetric grid of $10\x10\x10$ [see Fig.~\ref{fig:gaas_spectrum_bse}(b)]. In contrast to the spectrum on an equally dense but shifted grid, our spectrum is smooth in between the main transitions at $3.2$~eV and $5.1$~eV. It is worth noting that, for our scheme, a shifted, coarse $k$-grid does not improve the convergence of the spectrum of GaAs. \begin{figure} \begin{center} \includegraphics[width=0.99\columnwidth,clip]{./spectra_si_rpa_bse_3.eps} \caption{\label{fig:si_spectra} (Color online) Calculated RPA (a) and BSE (b) absorption spectra with LFE starting from $GW$ corrected bands for bulk Si with and without the double-grid method, employing a broadening of $0.05$~eV. For (a) $N_v=N_c=2$ were used, while for (b) $N_v=2$ and $N_c=3$. With the double-grid method both spectra (solid lines) converge faster. The shaded area in (b) shows the experimental spectrum~\cite{lautenschlager:87si}. } \end{center} \end{figure} In the same fashion we calculated the RPA and BSE absorption spectra of Si, shown in Fig.~\ref{fig:si_spectra}. To converge the ground-state Kohn-Sham energies and wavefunctions with DFT we used an energy cutoff of $15$~Ha and a lattice constant of $10.2$~Bohr obtained by crystal relaxation~\cite{dalcorso:94}. As for GaAs, we took advantage of the fact that self-energy corrections are well approximated by a rigid shift of the conduction bands, here of $0.8$~eV~\cite{rocca:12}. For the spectra we employed the two highest valence bands and the two (three for BSE) lowest conduction bands, and we included LFE by a dielectric matrix of size $51\x51$.
Furthermore, we reduced the Lorentzian broadening to $0.05$~eV in order to better resolve the first peak of the BSE spectrum at $3.5$~eV. Consequently, a high $k$-point resolution of $60\x60\x60$ was necessary to converge the RPA spectrum on a symmetric grid without the double-grid method. Figure~\ref{fig:si_spectra}(a) illustrates the advantage of the double-grid technique, as it yields a converged RPA spectrum already on a $12\x12\x12$ grid. With this method, the BSE spectrum in Fig.~\ref{fig:si_spectra}(b) also converges quickly on a $10\x10\x10$ grid, while the standard method of diagonalizing the BSE Hamiltonian is still far from converged on the same $k$-grid. Additionally, we observe a good agreement of the converged BSE spectrum with experiment~\cite{lautenschlager:87si} (indicated by the shaded area). A final remark is in order on the sampling of the dense $k$-points $\tilde{{\bm k}}$ in Eq.~\eqref{eq:l0_mean} for Si. Instead of using a $40\x40\x40$ regular $k$-grid of the full BZ (as for GaAs), we found it favorable to resort to a set of four $40\x40\x40$ shifted MP grids that respects the face-centered cubic symmetry of the Si crystal. Note that the calculations of the spectra using the double-grid technique were then again performed on unshifted, symmetric MP grids. Although an MP grid that respects the symmetries of the BZ does not reduce the dimension of the BSE kernel, its use is still advantageous for three reasons. Firstly, there is no artificial splitting of degenerate states~\cite{wirtz:08}. Secondly, no artificial crystal anisotropy is introduced that would have to be compensated by averaging the computed spectra over the three spatial directions of light polarization~\cite{sottile:03, rocca:12}. Finally, in the calculation of the exchange term of $\mat{\Xi}$, symmetries of the BZ can be exploited, which translates into a strong reduction of computational time.
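For reference, the regular and shifted Monkhorst-Pack samplings discussed here can be generated in a few lines. This is a minimal sketch in fractional coordinates, without the symmetry reduction to the irreducible BZ; the function name and the shift convention (in units of the grid spacing) are our own.

```python
import numpy as np

def mp_grid(q, shift=(0.0, 0.0, 0.0)):
    """q x q x q Monkhorst-Pack grid: u_r = (2r - q - 1)/(2q), r = 1..q,
    per direction, plus an optional shift, folded back into [0, 1)."""
    s = np.asarray(shift, dtype=float)
    pts = [((2.0 * np.array(ijk) + 1.0 - q) / (2.0 * q) + s / q) % 1.0
           for ijk in np.ndindex(q, q, q)]
    return np.array(pts)
```

A set of differently shifted grids of this kind corresponds to the four shifted MP grids used above for the fine sampling of Si.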
In Table~\ref{tab:kpoints} we have summarized the number of $k$-points with and without considering symmetries of the BZ. \begin{table} \caption{\label{tab:kpoints} Number of $k$-points in the (ir)reducible BZ $N_{\bm{k}}^{\text{\tiny (I)BZ}}$ that are used for the calculation of the spectra in Figs.~\ref{fig:gaas_spectrum_rpa}--\ref{fig:si_spectra}. } \begin{tabular}{llcrr} \hline \hline calc. & Figs. & $k$-point grid & $N_{\kk}^{\text{\tiny IBZ}}$ & $N_{\kk}^{\text{\tiny BZ}}$ \\ \hline RPA & \ref{fig:si_spectra}(a) & $60\x60\x60$ & 5~216 & 216~000 \\ RPA & \ref{fig:gaas_spectrum_rpa}(a) & $40\x40\x40$ & 1~661 & 64~000 \\ RPA, dgrid & \ref{fig:gaas_spectrum_rpa}(b), \ref{fig:si_spectra}(a) & $12\x12\x12$ & 72 & 1~728 \\ BSE & \ref{fig:gaas_spectrum_bse}(a) & $10\x10\x10$ shifted & 1~000 & 1~000 \\ BSE, dgrid & \ref{fig:gaas_spectrum_bse}(b), \ref{fig:si_spectra}(b) & $10\x10\x10$ & 47 & 1~000 \\ \hline \end{tabular} \end{table} In conclusion, we presented a double-grid method to solve the BSE on a coarse $k$-point grid, where the average of the strongly varying, but easily obtainable, independent-particle polarization is used. Converged spectra are reached for relatively small symmetric $k$-point grids. This allows for a considerably faster calculation of the BSE kernel. The single-particle energy bands on a dense $k$-point grid, the basic ingredient of our method, are not calculated directly, but are obtained through Wannier interpolation of the electronic band structure. As examples, we discussed the convergence of the absorption spectra of GaAs and Si with respect to the number of $k$-points. The speed-up is considerable, and opens the way for the solution of the BSE in large, complex systems. D.~K. and C.~A. are financially supported by the Joseph Fourier university funding program for research (p\^ole Smingue). D.~K. and M.~A.~L.~M. acknowledge financial support from the French ANR (ANR-08-CEXC8-008-01).
Computational resources were provided by GENCI (project x2011096017). \addcontentsline{toc}{chapter}{Bibliography} \bibliographystyle{apsrev4-1}
\chapter{Auxiliaries} \label{auxiliaries} Although there has been some debate about the lexical category of auxiliaries, the English XTAG grammar follows \cite{mccawley88}, \cite{haegeman91}, and others in classifying auxiliaries as verbs. The category of verbs can therefore be divided into two sets, main or lexical verbs, and auxiliary verbs, which can co-occur in a verbal sequence. Only the highest verb in a verbal sequence is marked for tense and agreement regardless of whether it is a main or auxiliary verb. Some auxiliaries ({\it be}, {\it do}, and {\it have}) share with main verbs the property of having overt morphological marking for tense and agreement, while the modal auxiliaries do not. However, all auxiliary verbs differ from main verbs in several crucial ways. \begin{itemize} \item Multiple auxiliaries can occur in a single sentence, while a matrix sentence may have at most one main verb. \item Auxiliary verbs cannot occur as the sole verb in the sentence, but must be followed by a main verb. \item All auxiliaries precede the main verb in verbal sequences. \item Auxiliaries do not subcategorize for any arguments. \item Auxiliaries impose requirements on the morphological form of the verbs that immediately follow them. \item Only auxiliary verbs invert in questions (with the sole exception in American English of main verb {\it be}\footnote{Some dialects, particularly British English, can also invert main verb {\it have} in yes/no questions (e.g. {\it have you any Grey Poupon ?}). This is usually attributed to the influence of auxiliary {\it have}, coupled with the historic fact that English once allowed this movement for all verbs.\label{have-footnote}}). \item An auxiliary verb must precede sentential negation (e.g. $\ast${\it John not goes}). \item Auxiliaries can form contractions with subjects and negation (e.g. {\it he'll}, {\it won't}). 
\end{itemize} \noindent The restrictions that an auxiliary verb imposes on the succeeding verb limit the sequences of verbs that can occur. In English, sequences of up to five verbs are allowed, as in sentence (\ex{1}). \enumsentence{The music should have been being played [for the president] .} \noindent The required ordering of verb forms when all five verbs are present is: \begin{quote} \begin{tabular}{ccl} & & {\bf modal base perfective progressive passive} \end{tabular} \end{quote} \noindent The rightmost verb is the main verb of the sentence. While a main verb subcategorizes for the arguments that appear in the sentence, the auxiliary verbs select the particular morphological form of the verb that follows each of them. The auxiliaries included in the English XTAG grammar are listed in Table \ref{aux-table} by type. The third column of Table \ref{aux-table} lists the verb forms that are required to follow each type of auxiliary verb. \vspace*{0.2in} \begin{table}[ht] \centering \begin{tabular}{|l|c|c|} \hline TYPE&LEX ITEMS&SELECTS FOR\\ \hline modals & {\it can}, {\it could}, {\it may}, {\it might}, {\it will}, & base form\footnotemark \\ & {\it would}, {\it ought}, {\it shall}, {\it should} & (e.g. {\it will go}, {\it might come})\\ & {\it need} &\\ \hline perfective & {\it have} & past participle\\ & & (e.g. {\it has gone})\\ \hline progressive & {\it be} & gerund\\ & & (e.g. {\it is going}, {\it was coming})\\ \hline passive & {\it be} & past participle\\ & & (e.g. {\it was helped by Jane})\\ \hline do support & {\it do} &base form\\ & & (e.g. {\it did go}, {\it does come})\\ \hline infinitive to & {\it to} & base form\\ & & (e.g. {\it to go}, {\it to come})\\ \hline \end{tabular} \caption{Auxiliary Verb Properties} \label{aux-table} \end{table} \vspace*{0.2in} \footnotetext{There are American dialects, particularly in the South, which allow double modals such as {\it might could} and {\it might should}.
These constructions are not allowed in the XTAG English grammar.} \section{Non-inverted sentences} \label{aux-non-inverted} This section and the sections that follow describe how the English XTAG grammar accounts for properties of the auxiliary system described above. In our grammar, auxiliary trees are added to the main verb tree by adjunction. Figure~\ref{Vvx} shows the adjunction tree for non-inverted sentences.\footnote{We saw this tree briefly in section~\ref{case-for-verbs}, but with most of its features missing. The full tree is presented here.} \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/auxs-files/betaVvx-with-features.ps,height=5.2in} \end{tabular} \caption{Auxiliary verb tree for non-inverted sentences: $\beta$Vvx } \label{Vvx} \end{figure} \begin{figure}[htbp] \centering \begin{tabular}{ccc} {\psfig{figure=ps/auxs-files/betaVvx_should-with-features.ps,height=3.9in}} & \hspace*{1in}& {\psfig{figure=ps/auxs-files/betaVvx_have-with-features.ps,height=3.9in}} \\ \\ {\psfig{figure=ps/auxs-files/betaVvx_been-with-features.ps,height=3.9in}} & \hspace*{1in}& {\psfig{figure=ps/auxs-files/betaVvx_being-with-features.ps,height=3.9in}} \\ \end{tabular} \caption{Auxiliary trees for {\it The music should have been being played .}} \label{anchored-aux-trees} \end{figure} The restrictions outlined in column 3 of Table \ref{aux-table} are implemented through the features {\bf $<$mode$>$}, {\bf $<$perfect$>$}, {\bf $<$progressive$>$} and {\bf $<$passive$>$}. The syntactic lexicon entries for the auxiliaries give values for these features on the foot node~(VP$^{*}$) in Figure~\ref{Vvx}. Since the top features of the foot node must eventually unify with the bottom features of the node it adjoins onto for the sentence to be valid, this enforces the restrictions made by the auxiliary node.
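The selectional content of Table~\ref{aux-table} can be pictured with a small executable check. This is a toy stand-in for the actual feature unification, with illustrative form labels of our own rather than XTAG's feature structures:

```python
# Each auxiliary category selects the morphological form of the verb
# immediately to its right (cf. the Auxiliary Verb Properties table).
SELECTS = {
    "modal": "base",
    "perfective": "ppart",    # have + past participle
    "progressive": "ger",     # be + gerund
    "passive": "ppart",       # be + past participle
    "do": "base",
    "to": "base",
}

def valid_sequence(verbs):
    """verbs: list of (category, morphological form), left to right.
    Every auxiliary must select the form of its right neighbour."""
    return all(SELECTS[cat] == nxt
               for (cat, _), (_, nxt) in zip(verbs, verbs[1:])
               if cat in SELECTS)

# "The music should have been being played":
valid_sequence([("modal", "ind"), ("perfective", "base"),
                ("progressive", "ppart"), ("passive", "ger"),
                ("main", "ppart")])
```

A sequence violating the ordering, e.g. a modal directly followed by a past participle, fails the same check.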
In addition to these feature values, each auxiliary also gives values to the anchoring node~(V$\diamond$), to be passed up the tree to the root VP~(VP$_{r}$) node; there they will become the new features for the top VP node of the sentential tree. Another auxiliary may now adjoin on top of it, and so forth. These feature values thereby ensure the proper auxiliary sequencing. Figure~\ref{anchored-aux-trees} shows the auxiliary trees anchored by the four auxiliary verbs in sentence (\ex{0}). Figure~\ref{non-inverted-sentence} shows the final tree created for this sentence. \begin{figure}[htb] \centering \begin{tabular}{c} {\psfig{figure=ps/auxs-files/non-inverted-sentence.ps,height=5.1in}} \end{tabular} \caption{{\it The music should have been being played .}} \label{non-inverted-sentence} \end{figure} The general English restriction that matrix clauses must have tense (or be imperatives) is enforced by requiring the top S-node of a sentence to have {\bf $<$mode$>$=ind/imp} (indicative or imperative). Since only the indicative and imperative sentences have tense, non-tensed clauses are restricted to occurring in embedded environments. Noun-verb contractions are labeled NVC in their part-of-speech field in the morphological database and then undergo special processing to split them apart into the noun and the reduced verb before parsing. The noun then selects its trees in the normal fashion. The contraction, say {\it 'll} or {\it 'd}, likewise selects the normal auxiliary verb tree, $\beta$Vvx. However, since the contracted form, rather than the verb stem, is given in the morphology, the contracted form must also be listed as a separate syntactic entry. These entries have the same features as the full forms of the auxiliary verbs, with tense constraints coming from the morphological entry (e.g. {\it it's} is listed as {\sc it 's NVC 3sg PRES}).
The ambiguous contractions {\it 'd} ({\it had/would}) and {\it 's} ({\it has/is}) behave like other ambiguous lexical items; there are simply multiple entries for those lexical items in the lexicon, each with different features. In the resulting parse, the contracted form is shown with features appropriate to the full auxiliary it represents. \section{Inverted Sentences} In inverted sentences, the two trees shown in Figure~\ref{inverted-trees} adjoin to an S tree anchored by a main verb. The tree in Figure~\ref{inverted-trees}(a) is anchored by the auxiliary verb and adjoins to the S node, while the tree in Figure~\ref{inverted-trees}(b) is anchored by an empty element and adjoins at the VP node. Figure~\ref{yes/no-question} shows these trees (anchored by {\it will}) adjoined to the declarative transitive tree\footnote{The declarative transitive tree was seen in section~\ref{nx0Vnx1-family}.} (anchored by main verb {\it buy}). \begin{figure}[htbp] \centering \begin{tabular}{ccc} {\psfig{figure=ps/auxs-files/betaVs-with-features.ps,height=4.5in}} & \hspace*{0.5in} & {\psfig{figure=ps/auxs-files/betaVvx_epsilon-with-features.ps,height=5in}} \\ (a) &&(b) \\ \end{tabular} \caption{Trees for auxiliary verb inversion: $\beta$Vs (a) and $\beta$Vvx (b)} \label{inverted-trees} \end{figure} \begin{figure}[htb] \centering \begin{tabular}{c} {\psfig{figure=ps/auxs-files/yes-no-question.ps,height=4.0in}} \\ \end{tabular} \caption{{\it will John buy a backpack ?}} \label{yes/no-question} \end{figure} The feature {\bf $<$displ-const$>$} ensures that both of the trees in Figure~\ref{inverted-trees} must adjoin to an elementary tree whenever one of them does. For more discussion on this mechanism, which simulates tree-local multi-component adjunction, see \cite{hockeysrini93}. The tree in Figure~\ref{inverted-trees}(b), anchored by $\epsilon$, represents the originating position of the inverted auxiliary.
Its adjunction blocks the {\bf $<$assign-case$>$} values of the VP it dominates from being co-indexed with the {\bf $<$case$>$} value of the subject. Since {\bf $<$assign-case$>$} values from the VP are blocked, the {\bf $<$case$>$} value of the subject can only be co-indexed with the {\bf $<$assign-case$>$} value of the inverted auxiliary (Figure~\ref{inverted-trees}(a)). Consequently, the inverted auxiliary functions as the case-assigner for the subject in these inverted structures. This is in contrast to the situation in uninverted structures, where the anchor of the highest (leftmost) VP assigns case to the subject (see section~\ref{case-for-verbs} for more on case assignment). The XTAG analysis is similar to GB accounts where the inverted auxiliary plus the $\epsilon$-anchored tree are taken as representing I to C movement. \section{Do-Support} It is well known that English requires a mechanism called `do-support' for negated sentences and for inverted yes/no questions without auxiliaries. \enumsentence {John does not want a car .} \enumsentence {$\ast$John not wants a car .} \enumsentence {John will not want a car .} \enumsentence {Do you want to leave home ?} \enumsentence {$\ast$want you to leave home ?} \enumsentence {will you want to leave home ?} \subsection{In negated sentences} \label{do-support-negatives} The GB analysis of do-support in negated sentences hinges on the separation of the INFL and VP nodes (see \cite{chomsky65}, \cite{jackendoff72} and \cite{chomsky86}). The claim is that the presence of the negative morpheme blocks the main verb from getting tense from the INFL node, thereby forcing the addition of a verbal lexeme to carry the inflectional elements. If an auxiliary verb is present, then it carries tense, but if not, periphrastic or `dummy' {\it do} is required. This seems to indicate that {\it do} and other auxiliary verbs would not co-occur, and indeed this is the case (see sentences (\ex{1})-(\ex{2})).
Auxiliary {\it do} is allowed in English when no negative morpheme is present, but this usage is marked as emphatic. Emphatic {\it do} is also not allowed to co-occur with auxiliary verbs (sentences (\ex{3})-(\ex{6})). \enumsentence {$\ast$We will have do bought a sleeping bag .} \enumsentence {$\ast$We do will have bought a sleeping bag .} \enumsentence {You {\bf do} have a backpack, don't you ?} \enumsentence {I {\bf do} want to go !} \enumsentence {$\ast$You {\bf do} can have a backpack, don't you ?} \enumsentence {$\ast$I {\bf did} have had a backpack !} At present, the XTAG grammar does not have an analysis for emphatic {\it do}. In the XTAG grammar, {\it do} is prevented from co-occurring with other auxiliary verbs by a requirement that it adjoin only onto main verbs ({\bf $<$mainv$>$ = $+$}). It has indicative mode, so no other auxiliaries can adjoin above it.\footnote{Earlier, we said that indicative mode carries tense with it. Since only the topmost auxiliary carries the tense, any subsequent verbs must {\bf not} have indicative mode.} The lexical item {\it not} is only allowed to adjoin onto a non-indicative (and therefore non-tensed) verb. Since all matrix clauses must be indicative (or imperative), a negated sentence will fail unless an auxiliary verb, either {\it do} or another auxiliary, adjoins somewhere above the negative morpheme, {\it not}. In addition to forcing adjunction of an auxiliary, this analysis of {\it not} allows it freedom to move around among the auxiliaries, as seen in sentences (\ex{1})-(\ex{4}). \enumsentence {John will have had a backpack .} \enumsentence {$\ast$John not will have had a backpack .} \enumsentence {John will not have had a backpack .} \enumsentence {John will have not had a backpack .} \subsection{In inverted yes/no questions} In inverted yes/no questions, {\it do} is required if there is no auxiliary verb to invert, as seen in sentences (\ex{-12})-(\ex{-10}), replicated here as (\ex{1})-(\ex{3}).
\enumsentence {do you want to leave home ?} \enumsentence {$\ast$want you to leave home ?} \enumsentence {will you want to leave home ?} \enumsentence {$\ast$do you will want to leave home ?} In English, unlike other Germanic languages, the main verb cannot move to the beginning of a clause, with the exception of main verb {\it be}.\footnote{The inversion of main verb {\it have} in British English was previously noted.} In a GB account of inverted yes/no questions, the tense feature is said to be in C$^{0}$ at the front of the sentence. Since main verbs cannot move, they cannot pick up the tense feature, and do-support is again required if there is no auxiliary verb to perform the role. Sentence (\ex{0}) shows that {\it do} does not interact with other auxiliary verbs, even when in the inverted position. In XTAG, trees anchored by a main verb that lacks tense are required to have an auxiliary verb adjoin onto them, whether at the VP node to form a declarative sentence, or at the S node to form an inverted question. {\it Do} selects the inverted auxiliary trees given in Figure~\ref{inverted-trees}, just as other auxiliaries do, so it is available to adjoin onto a tree at the S node to form a yes/no question. The mechanism described in section~\ref{do-support-negatives} prohibits {\it do} from co-occurring with other auxiliary verbs, even in the inverted position. \section{Infinitives} The infinitive {\it to} is considered an auxiliary verb in the XTAG system, and selects the auxiliary tree in Figure~\ref{Vvx}. {\it To}, like {\it do}, does not interact with the other auxiliary verbs, adjoining only to main verb base forms, and carrying infinitive mode. It is used in embedded clauses, both with and without a complementizer, as in sentences (\ex{1})-(\ex{3}). Since it cannot be inverted, it simply does not select the trees in Figure~\ref{inverted-trees}. 
\enumsentence {John wants to have a backpack .} \enumsentence {John wants Mary to have a backpack .} \enumsentence {John wants for Mary to have a backpack .} The usage of infinitival {\em to} interacts closely with the distribution of null subjects (PRO), and is described in more detail in section~\ref{for-complementizer}. \section{Semi-Auxiliaries} Under the category of semi-auxiliaries, we have placed several verbs that do not seem to closely follow the behavior of auxiliaries. One of these auxiliaries, {\it dare}, mainly behaves as a modal and selects for the base form of the verb. The other semi-auxiliaries all select for the infinitival form of the verb. Examples of this second type of semi-auxiliary are {\it used to}, {\it ought to}, {\it get to}, {\it have to}, and {\it BE to}. \subsection{Marginal Modal {\it dare}} The auxiliary {\it dare} is unique among modals in that it both allows DO-support and exhibits a past tense form. It clearly falls in modal position since no other auxiliary (except {\it do}) may precede it in linear order\footnote{Some speakers accept {\it dare} preceded by a modal, as in {\it I might dare finish this report today}. In the XTAG analysis, this particular double modal usage is accounted for. Other cases of double modal occurrence exist in some dialects of American English, although these are not accounted for in the system, as was mentioned earlier.\label{dare-footnote}}. Examples appear below. \enumsentence{she {\bf dare} not have been seen .} \enumsentence{she does not {\bf dare} succeed .} \enumsentence{Jerry {\bf dared} not look left or right .} \enumsentence{only models {\bf dare} wear such extraordinary outfits .} \enumsentence{{\bf dare} Dale tell her the secret ?} \enumsentence{$\ast$Louise had dared not tell a soul .} As mentioned above, auxiliaries are prevented from having DO-support within the XTAG system. 
To allow for DO-support in this case, we had to create a lexical entry for {\it dare} that allowed it to have the feature {\bf mainv+} and to have {\bf base} mode (this measure is what also allows {\it dare} to occur in double-modal sequences). A second lexical entry was added to handle the regular modal occurrence of {\it dare}. Additionally, all other modals are classified as being present tense, while {\it dare} has both present and past forms. To handle this behavior, {\it dare} was given features in the morphology similar to those of the other modals, minus the specification for tense. \subsection{Other semi-auxiliaries} The other semi-auxiliaries all select for the infinitival form of the verb. Many of these auxiliaries allow for DO-support and can appear in both base and past participle forms, in addition to being able to stand alone (indicative mode). Examples of this type appear below. \enumsentence{Alex {\bf used} to attend karate workshops .} \enumsentence{Angelina might have {\bf used} to believe in fate .} \enumsentence{Rich did not {\bf used} to want to be a physical therapist .} \enumsentence{Mick might not {\bf have} to play the game tonight .} \enumsentence{Singer {\bf had} to have been there .} \enumsentence{Heather has {\bf got} to finish that project before she goes insane .} The auxiliaries {\it ought to} and {\it BE to} may not be preceded by any other auxiliary. \enumsentence{Biff {\bf ought} to have been working harder .} \enumsentence{$\ast$Carson does {\bf ought} to have been working harder .} \enumsentence{the party {\bf is} to take place this evening .} \enumsentence{$\ast$the party had {\bf been} to take place this evening .} The trickiest element in this group of auxiliaries is {\it used to}. While the other verbs behave according to standard inflection for auxiliaries, {\it used to} has the same form whether it is in base, past participle, or indicative mode.
The only connection {\it used to} maintains with the infinitival form {\it use} is that occasionally, the bare form {\it use} will appear with DO-support. Since the three modes mentioned above are mutually exclusive in terms of both the morphology and the lexicon, {\it used} has three entries in each. \subsection{Other Issues} There is a lingering problem with the auxiliaries that stems from the fact that there currently is no way to distinguish between the main verb and auxiliary verb behaviors for a given letter string within the morphology. This situation results in many unacceptable sentences being successfully parsed by the system. Examples of the unacceptable sentences are given below. \enumsentence{the miller {\bf cans} tell a good story . (vs the farmer {\bf cans} peaches in July .)} \enumsentence{David {\bf wills} have finished by noon . (vs the old man {\bf wills} his fortune to me .)} \enumsentence{Sarah {\bf needs} not leave . (vs Sarah {\bf needs} to leave .)} \enumsentence{Jennifer {\bf dares} not be seen . (vs the young woman {\bf dares} him to do the stunt .)} \enumsentence{Lila {\bf does use} to like beans . (vs Lila {\bf does use} her new cookware .)} \section{Case Assignment} \label{case-assignment} \subsection{Approaches to Case} \subsubsection{Case in GB theory} GB (Government and Binding) theory proposes the following `case filter' as a requirement on S-structure.\footnote{There are certain problems with applying the case filter as a requirement at the level of S-structure. These issues are not crucial to the discussion of the English XTAG implementation of case and so will not be discussed here. Interested readers are referred to \cite{lasnik-uriagereka88}.} \begin{verse} \xtagdef{Case Filter} Every overt NP must be assigned abstract case. \cite{haegeman91} \end{verse} Abstract case is taken to be universal. 
Languages with rich morphological case marking, such as Latin, and languages with very limited morphological case marking, like English, are all presumed to have full systems of abstract case that differ only in the extent of morphological realization. In GB, abstract case is argued to be assigned to NP's by various case assigners, namely verbs, prepositions, and INFL. Verbs and prepositions are said to assign accusative case to NP's that they govern, and INFL assigns nominative case to NP's that it governs. These governing categories are constrained as to where they can assign case by means of `barriers' based on `minimality conditions', although these are relaxed in `exceptional case marking' situations. The details of the GB analysis are beyond the scope of this technical report, but see \cite{chomsky86} for the original analysis or \cite{haegeman91} for an overview. Let it suffice for us to say that the notion of abstract case and the case filter are useful in accounting for a number of phenomena including the distribution of nominative and accusative case, and the distribution of overt NP's and empty categories (such as PRO). \subsubsection{Minimalism and Case} A major conceptual difference between GB theories and Minimalism is that in Minimalism, lexical items carry their features with them rather than being assigned their features based on the nodes that they end up at. For nouns, this means that they carry case with them, and that their case is `checked' when they are in SPEC position of AGR$_s$ or AGR$_o$, which subsequently disappears \cite{chomsky92}. \subsection{Case in XTAG} The English XTAG grammar adopts the notion of case and the case filter for many of the same reasons argued in the GB literature. However, in some respects the English XTAG grammar's implementation of case more closely resembles the treatment in Chomsky's Minimalism framework \cite{chomsky92} than the system outlined in the GB literature \cite{chomsky86}. 
As in Minimalism, nouns in the XTAG grammar carry case with them, which is eventually `checked'. However in the XTAG grammar, noun cases are checked against the case values assigned by the verb during the unification of the feature structures. Unlike Chomsky's Minimalism, there are no separate AGR nodes; the case checking comes from the verbs directly. Case assignment from the verb is more like the GB approach than the requirement of a SPEC-head relationship in Minimalism. Most nouns in English do not have separate forms for nominative and accusative case, and so they are ambiguous between the two. Pronouns, of course, are morphologically marked for case, and each carries the appropriate case in its feature. Figures~\ref{nouns-with-case}(a) and \ref{nouns-with-case}(b) show the NP tree anchored by a noun and a pronoun, respectively, along with the feature values associated with each word. Note that {\it books} simply gets the default case {\bf nom/acc}, while {\it she} restricts the case to be {\bf nom}. \begin{figure}[htb] \centering \begin{tabular}{ccc} {\psfig{figure=ps/case-files/alphaNXN_books.ps,height=3.0in}} & \hspace*{0.5in} & {\psfig{figure=ps/case-files/alphaNXN_she.ps,height=3.2in}} \\ (a)& \hspace*{0.5in}&(b)\\ \end{tabular}\\ \caption{Lexicalized NP trees with case markings} \label {nouns-with-case} \end{figure} \subsection{Case Assigners} \subsubsection{Prepositions} \label{prep-case} Case is assigned in the XTAG English grammar by two lexical categories - verbs and prepositions.\footnote{{\it For} also assigns case as a complementizer. See section \ref{for-complementizer} for more details.} Prepositions assign accusative case ({\bf acc}) through their {\bf $<$assign-case$>$} feature, which is linked directly to the {\bf $<$case$>$} feature of their objects. 
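This linkage is resolved by feature unification. The following toy sketch (our own illustrative encoding, not part of the XTAG implementation) models a case value as a set of possibilities and unification as set intersection:

```python
# Illustrative sketch only -- not XTAG code.  Case values are modeled as
# sets of possibilities, and unification of two values is set intersection;
# an empty intersection is a unification failure.

def unify_case(a, b):
    """Unify two case values; return None on failure."""
    result = a & b
    return result if result else None

books = {"nom", "acc"}   # most English nouns are ambiguous between the two
she   = {"nom"}          # pronouns are morphologically marked for case
of    = {"acc"}          # a preposition's <assign-case> value

# Substituting "books" under the preposition "of" resolves the
# ambiguity to accusative:
assert unify_case(of, books) == {"acc"}

# A purely nominative pronoun cannot unify with the preposition's
# case assignment:
assert unify_case(of, she) is None
```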
Figure~\ref{PXPnx-with-case}(a) shows a lexicalized preposition tree, while Figure~\ref{PXPnx-with-case}(b) shows the same tree with the NP tree from Figure~\ref{nouns-with-case}(a) substituted into the NP position. Figure~\ref{PXPnx-with-case}(c) is the tree in Figure~\ref{PXPnx-with-case}(b) after unification has taken place. Note that the case ambiguity of {\it books} has been resolved to accusative case. \begin{figure}[htb] \centering \begin{tabular}{ccccc} {\psfig{figure=ps/case-files/alphaPXPnx_of.ps,height=1.7in}} & & {\psfig{figure=ps/case-files/NXN-substituted-into-PXPnx.ps,height=3.5in}} & & {\psfig{figure=ps/case-files/NXN-substituted-into-PXPnx-unified.ps,height=2.8in}} \\ (a)& \hspace*{0.05in}&(b)& \hspace*{0.05in}&(c)\\ \end{tabular}\\ \caption {Assigning case in prepositional phrases} \label{PXPnx-with-case} \end{figure} \subsubsection{Verbs} \label{case-for-verbs} Verbs are the other part of speech in the XTAG grammar that can assign case. Because XTAG does not distinguish INFL and VP nodes, verbs must provide case assignment on the subject position in addition to the case assigned to their NP complements. Assigning case to NP complements is handled by building the case values of the complements directly into the tree that the case assigner (the verb) anchors. Figures~\ref{S-tree-with-case}(a) and \ref{S-tree-with-case}(b) show an S tree\footnote{Features not pertaining to this discussion have been taken out to improve readability and to make the trees easier to fit onto the page.} that would be anchored\footnote{The diamond marker ($\diamond$) indicates the anchor(s) of a structure if the tree has not yet been lexicalized.} by a transitive and ditransitive verb, respectively. Note that the case assignments for the NP complements are already in the tree, even though there is not yet a lexical item anchoring the tree. 
Since every verb that selects these trees (and other trees in each respective subcategorization frame) assigns the same case to the complements, building case features into the tree has exactly the same result as putting the case feature value in each verb's lexical entry. \begin{figure}[htb] \centering \begin{tabular}{ccc} {\psfig{figure=ps/case-files/alphanx0Vnx1-case-features.ps,height=2.0in}} & \hspace*{0.5in} & {\psfig{figure=ps/case-files/alphanx0Vnx1nx2-case-features.ps,height=2.0in}} \\ (a)& \hspace*{0.5in}&(b)\\ \end{tabular}\\ \caption {Case assignment to NP arguments} \label{S-tree-with-case} \label{2;1,1} \label{2;1,3} \end{figure} The case assigned to the subject position varies with verb form. Since the XTAG grammar treats the inflected verb as a single unit rather than dividing it into INFL and V nodes, case, along with tense and agreement, is expressed in the features of verbs, and must be passed in the appropriate manner. The trees in Figure~\ref{lexicalized-S-tree-with-case} show the path of linkages that joins the {\bf$<$assign-case$>$} feature of the V to the {\bf $<$case$>$} feature of the subject NP. The morphological form of the verb determines the value of the {\bf $<$assign-case$>$} feature. Figures~\ref{lexicalized-S-tree-with-case}(a) and \ref{lexicalized-S-tree-with-case}(b) show the same tree\footnote{Again, the feature structures shown have been restricted to those that pertain to the V/NP interaction.} anchored by different morphological forms of the verb {\it sing}, which give different values for the {\bf $<$assign-case$>$} feature. 
\begin{figure}[htbp] \centering \begin{tabular}{ccc} {\psfig{figure=ps/case-files/alphanx0Vnx1_sings-case-features.ps,height=3.3in}} & \hspace*{0.5in}& {\psfig{figure=ps/case-files/alphanx0Vnx1_singing-case-features.ps,height=3.0in}} \\ (a)& \hspace*{0.5in}&(b)\\ \end{tabular}\\ \caption {Assigning case according to verb form} \label {lexicalized-S-tree-with-case} \end{figure} \begin{figure}[htbp] \centering \begin{tabular}{ccc} {\psfig{figure=ps/case-files/betaVvx_is-with-case.ps,height=2.6in}} & \hspace*{0.5in} & {\psfig{figure=ps/case-files/betaVvx_is-adjoined-into-nx0Vnx1_singing.ps,height=3.7in}} \\ (a)&\hspace*{0.5in} &(b)\\ \end{tabular}\\ \caption {Proper case assignment with auxiliary verbs} \label{Vvx-with-case} \end{figure} The adjunction of an auxiliary verb onto the VP node breaks the {\bf $<$assign-case$>$} link from the main V, replacing it with a link from the auxiliary verb instead.\footnote{See section \ref{aux-non-inverted} for a more complete explanation of how this relinking occurs.} The progressive form of the verb in Figure~\ref{lexicalized-S-tree-with-case}(b) has the feature-value {\bf $<$assign-case$>$=none}, but this is overridden by the adjunction of the appropriate form of the auxiliary word {\it be}. Figure~\ref{Vvx-with-case}(a) shows the lexicalized auxiliary tree, while Figure~\ref{Vvx-with-case}(b) shows it adjoined into the transitive tree shown in Figure~\ref{lexicalized-S-tree-with-case}(b). The case value passed to the subject NP is now {\bf nom} (nominative). \subsection{PRO in a unification based framework} Tensed forms of a verb assign nominative case, and untensed forms assign case {\bf none}, as the progressive form of the verb {\it sing} does in Figure~\ref{lexicalized-S-tree-with-case}(b). This is different from assigning no case at all, as one form of the infinitive marker {\it to} does (see Section~\ref{for-complementizer} for more discussion of this special case).
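The need for a genuine case value {\bf none}, as opposed to the mere absence of a case assignment, can be illustrated with a small unification sketch (our own toy encoding, not XTAG code):

```python
# Toy model of feature unification (not XTAG code).  Python's None stands
# for an unspecified value, which unifies with anything; concrete values
# are sets and unify by intersection.

def unify(a, b):
    """Unify two feature values; raise on incompatible concrete values."""
    if a is None:
        return b
    if b is None:
        return a
    result = a & b
    if not result:
        raise ValueError("unification failure")
    return result

pro           = {"none"}   # PRO is the only NP with case none
she           = {"nom"}    # an overt, case-marked NP
untensed_verb = {"none"}   # untensed verb forms assign case none
no_assigner   = None       # a head that assigns no case at all

# An untensed verb's subject slot accepts PRO and nothing else:
assert unify(untensed_verb, pro) == {"none"}
try:
    unify(untensed_verb, she)   # fails, as desired
    blocked = False
except ValueError:
    blocked = True
assert blocked

# If the slot simply carried no assignment, any NP could fill it:
assert unify(no_assigner, she) == {"nom"}
```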
The distinction of a case {\bf none} from no case is indicative of a divergence from the standard GB theory. In GB theory, the absence of case on an NP means that only PRO can fill that NP. With feature unification as is used in the FB-LTAG grammar, the absence of case on an NP means that {\em any\/} NP can fill it, regardless of its case. This is due to the mechanism of unification, in which if something is unspecified, it can unify with anything. Thus we have a specific case {\bf none} to handle verb forms that in GB theory do not assign case. PRO is the only NP with case {\bf none}. Note that although we are drawn to this treatment by our use of unification for feature manipulation, our treatment is very similar to the assignment of null case to PRO in \cite{ChomskyLasnik93}. \cite{watanabe93} also proposes a very similar approach within Chomsky's Minimalist framework.\footnote{See Sections~\ref{PRO} and \ref{PRO-control} for additional discussion of PRO.} \chapter{Comparatives} \label{compars-chapter} \section{Introduction} Comparatives in English can manifest themselves in many ways, acting on many different grammatical categories and often involving ellipsis. A distinction must be made at the outset between two very different sorts of comparatives---those which make a comparison between two propositions and those which compare the extent to which an entity has one property with the extent to which it has another. The former, which we will refer to as {\it propositional} comparatives, is exemplified in (\ex{1}), while the latter, which we will call {\it metalinguistic} comparatives (following Hellan 1981), is seen in (\ex{2}): \enumsentence{Ronaldo is more angry than Romario.} \enumsentence{Ronaldo is more angry than upset.} \noindent In (\ex{-1}), the extent to which Ronaldo is angry is greater than the extent to which Romario is angry.
Sentence (\ex{0}) indicates that the extent to which Ronaldo is angry is greater than the extent to which he is upset. Apart from certain of the elliptical cases, both kinds of comparatives can be handled straightforwardly in the XTAG system. Elliptical cases which are not presently covered include those exemplified by the following sentences, which would presumably be handled in the same way as other sorts of VP ellipsis would. \enumsentence{Ronaldo is more angry than Romario is.} \enumsentence{Bill eats more broccoli than George eats.} \enumsentence{Bill eats more broccoli than George does.} We turn to the analysis of metalinguistic comparatives first. \section{Metalinguistic Comparatives} A metalinguistic comparison can be performed on basically all of the predicational categories---adjectives, verb phrases, prepositional phrases, and nouns---as in the following examples: \enumsentence{The table is more long than wide. (AP)} \enumsentence{Clark more makes the rules than follows them. (VP)} \enumsentence{Calvin is more in the living room than in the kitchen. (PP)} \enumsentence{That unidentified amphibian in the bush is more frog than toad, I would say. (NP)} \noindent At present, we only deal with the adjectival metalinguistic comparatives as in (\ex{-3}). The analysis given here for these can be easily extended to prepositional phrases and nominal comparatives of the metalinguistic sort, but, as with coordination in XTAG, verb phrases will prove more difficult.
Adjectival comparatives appear to distribute with simple adjectives, as in the following examples: \enumsentence{Herbert is more livid than angry.} \enumsentence{Herbert is more livid and furious than angry.} \enumsentence{The more innovative than conventional medication cured everyone in the sick ward.} \enumsentence{The elephant, more wobbly than steady, fell from the circus ball.} \begin{figure}[htb] \centering \begin{tabular}{cc} {\psfig{figure=ps/comparatives-files/betaARBaPa.ps,height=2.0in}} \end{tabular}\\ \caption {Tree for Metalinguistic Adjective Comparative: $\beta$ARBaPa} \label {ARBaPa-tree} \end{figure} This patterning indicates that we can give these comparatives a tree that adjoins quite freely onto adjectives, as in Figure~\ref{ARBaPa-tree}. This tree is anchored by {\it more/less - than}. To avoid grammatically incorrect comparisons such as {\it more brighter than dark}, the feature {\bf compar} is used to block this tree from adjoining onto morphologically comparative adjectives. The foot node is {\bf compar-}, while {\it brighter} and its comparative siblings are {\bf compar+}\footnote {The analysis given later for adjectival propositional comparatives produces aggregated {\bf compar+} adjectives such as {\it more bright}, which will also be incompatible (as desired) with $\beta$ARBaPa.}. We also wish to block strings like {\it more brightest than dark}, which is accomplished with the feature {\bf super}, indicating superlatives. This feature is negative at the foot node so that $\beta$ARBaPa cannot adjoin to superlatives like {\it nicest}, which are specified as {\bf super+} from the morphology. 
Furthermore, the root node is {\bf super+} so that $\beta$ARBaPa cannot adjoin onto itself and produce monstrosities such as (\ex{1}): \enumsentence{*Herbert is more less livid than angry than furious.} \noindent Thus, the use of the {\bf super} feature is less to indicate superlativeness specifically than to indicate that the subtree below a {\bf super+} node contains a full-fledged comparison. In the case of lexical superlatives, the comparison is against everything, implicitly. A benefit of the multiple-anchor approach here is that we will never allow sentences such as (\ex{1}), which would be permissible if we split the comparative component and the {\it than} component of metalinguistic comparatives into two separate trees. \enumsentence{*Ronaldo is angrier than upset.} We also see another variety of adjectival comparatives of the form {\it more/less than X}, which indicates some property which is more or less extreme than the property {\it X}. In a sentence such as (\ex{1}), some property is being said to hold of Francis such that it is of a kind with {\it stupid} and that it exceeds {\it stupid} on some scale (intelligence, for example). Quirk et al. also note that these constructions remark on the inadequacy of the lexical item. Thus, in (\ex{0}), it could be that {\it stupid} is a starting point from which the speaker makes an approximation for some property which the speaker feels is beyond the range of the English lexicon, but which expresses the supreme lack of intellect of the individual it is predicated of. \enumsentence{Francis is more than stupid.} \enumsentence{Romario is more than just upset.} Taking our inspiration from $\beta$ARBaPa, we can handle these comparatives, which have the same distribution but contain an empty adjective, by using the tree shown in Figure~\ref{ARBPa-tree}.
\begin{figure}[htb] \centering \begin{tabular}{cc} {\psfig{figure=ps/comparatives-files/betaARBPa.ps,height=2.0in}} \end{tabular}\\ \caption {Tree for Adjective-Extreme Comparative: $\beta$ARBPa} \label {ARBPa-tree} \end{figure} This sort of metalinguistic comparative also occurs with the verb phrase, prepositional phrase, and noun varieties. \enumsentence{Clark more than makes the rules. (VP)} \enumsentence{Calvin's hands are more than near the cookie jar. (PP)} \enumsentence{That stuff on her face is more than mud. (NP)} \noindent Presumably, the analysis for these would parallel that for adjectives, though it has not yet been implemented. \section{Propositional Comparatives} \subsection{Nominal Comparatives}\label{nom-comparatives-section} Nominal comparatives are considered here to be those which compare the cardinality of two sets of entities denoted by nominal phrases. The following data lay out a basic distribution of these comparatives. \enumsentence{More vikings than mongols eat spam.} \enumsentence{*More the vikings than mongols eat spam.} \enumsentence{Vikings eat less spaghetti than spam.} \enumsentence{More men that walk to the store than women who despise spam enjoyed the football game.} \enumsentence{More men than James like scotch on the rocks.} \enumsentence{Elmer knows fewer martians than rabbits.} Looking at these examples, we are tempted to produce a tree for this construction that is similar to $\beta$ARBaPa. However, it is quite common for the {\it than} portion of these comparatives to be left out, as in the following sentences: \enumsentence{More vikings eat spam.} \enumsentence{Mongols eat less spam.} \noindent Furthermore, {\it than NP} cannot occur without {\it more}. These facts indicate that we can and should build up nominal comparatives with two separate trees. The first, which allows a comparative adverb to adjoin to a noun, is given in Figure~\ref{nom-compar}(a). The second is the noun-phrase modifying prepositional tree. 
The tree $\beta$CARBn is anchored by {\it more/less/fewer} and $\beta$CnxPnx is anchored by {\it than}. The feature {\bf compar} is used to ensure that only one $\beta$CARBn tree can adjoin to any given noun---its foot node is {\bf compar-} and the root node is {\bf compar+}. All nouns are {\bf compar-}, and the {\bf compar} value is passed up through all trees which adjoin to N or NP. In order to ensure that we do not allow sentences like *{\it Vikings than mongols eat spam}, the {\bf compar} feature is used. The NP foot node of $\beta$CnxPnx is {\bf compar+}; thus, $\beta$CnxPnx will adjoin only to NP's which have been already modified by $\beta$CARBn (and thereby comparativized). In this way, we capture sentences like (\ex{-1}) en route to deriving sentences like (\ex{-7}), in a principled and simple manner. \begin{figure}[htbp] \centering \begin{tabular}{ccc} {\psfig{figure=ps/comparatives-files/betaCARBn.ps,height=2.1in}} & \hspace{0.6in} {\psfig{figure=ps/comparatives-files/betanxPnx.ps,height=1.7in}} \\ (a) $\beta$CARBn tree& \qquad(b) $\beta$CnxPnx tree \\ \end{tabular}\\ \caption {Nominal comparative trees} \label {nom-compar} \end{figure} Further evidence for this approach comes from comparative clauses which are missing the noun phrase which is being compared against something, as in the following: \enumsentence{The vikings ate more.\footnote{We ignore here the interpretation in which the comparison covers the eating event, focussing only on the one which the comparison involves the stuff being eaten.}} \enumsentence{The vikings ate more than a boar.\footnote{This sentence differs from the metalinguistic comparison {\it That stuff on her face is more than mud} in that it involves a comment on the quantity and/or type of the compared NP, whereas the other expresses that the property denoted by the compared noun is an inadequate characterization of the thing being described.}} \noindent Sometimes the missing noun refers to an entity or set available in the 
prior discourse, while at other times it is a reference to some anonymous, unspecified set. The former is exemplified in a mini-discourse such as the following: \\ \noindent Calvin: ``The mongols ate spam.''\\ \noindent Hobbes: ``The vikings ate more.'' \\ \noindent The latter can be seen in the following example: \\ \noindent Calvin: ``The vikings ate a boar.''\\ \noindent Hobbes: ``Indeed. But in fact, the vikings ate more than a boar.'' \\ Since the lone comparatives {\it more/less/fewer} have the same basic distribution as noun phrases, the tree in Figure~\ref{lone-compar} is employed to capture this fact. The root node of $\alpha$CARB is {\bf compar+}. Not only does this accord with our intuitions about what the {\bf compar} feature is supposed to indicate, it also permits $\beta$nxPnx to adjoin, giving us strings such as {\it more than NP} for free. \begin{figure}[htb] \centering \begin{tabular}{cc} {\psfig{figure=ps/comparatives-files/alphaCARB.ps,height=2.0in}} \end{tabular}\\ \caption {Tree for Lone Comparatives: $\alpha$CARB} \label {lone-compar} \end{figure} Thus, by splitting nominal comparatives into multiple trees, we make correct predictions about their distribution with a minimal number of simple trees. Furthermore, we now also get certain comparative coordinations for free, once we place the requirement that nouns and noun phrases must match for {\bf compar} if they are to be coordinated. This yields strings such as the following: \enumsentence{Julius eats more grapes and fewer boars than avocados.} \enumsentence{Were there more or less than fifty people (at the party)?} \noindent The structures are given in Figure~\ref{comparconjs}. Also, it will block strings like {\it more men and women than children} under the (impossible) interpretation that there are more men than children but the comparison of the quantity of women to children is not performed.
Unfortunately, it will permit comparative clauses such as {\it more grapes and fewer than avocados} under the interpretation in which there are more grapes than avocados and fewer of some unspecified thing than avocados (see Figure~\ref{badcomparconj}). \begin{figure}[htb] \centering \begin{tabular}{cc} {\psfig{figure=ps/comparatives-files/moregrapes.ps,height=3.0in}}\\ {\psfig{figure=ps/comparatives-files/fiftypeople.ps,height=3.0in}} \end{tabular}\\ \caption {Comparative conjunctions.} \label{comparconjs} \end{figure} \begin{figure}[htb] \centering \begin{tabular}{cc} {\psfig{figure=ps/comparatives-files/fewerthanavocados.ps,height=3.0in}} \end{tabular}\\ \caption {Comparative conjunctions.} \label{badcomparconj} \end{figure} One aspect of this analysis is that it handles the elliptical comparatives such as the following: \enumsentence{Arnold kills more bad guys than Steven.} \noindent In a sense, this is actually only simulating the ellipsis of these constructions indirectly. However, consider the following sentences: \enumsentence{Arnold kills more bad guys than I do.} \enumsentence{Arnold kills more bad guys than I.} \enumsentence{Arnold kills more bad guys than me.} \noindent The first of these has a {\it pro}-verb phrase which has a nominative subject. If we totally drop the second verb phrase, we find that the second NP can be in either the nominative or the accusative case. Prescriptive grammars disallow accusative case, but it actually is more common to find accusative case---use of the nominative in conversation tends to sound rather stiff and unnatural. This accords with the present analysis in which the second noun phrase in these comparatives is the complement of {\it than} in $\beta$nxPnx, and receives its case-marking from {\it than}. This does mean that the grammar will not currently accept (\ex{-1}), and indeed such sentences will only be covered by an analysis which really deals with the ellipsis. 
Yet the fact that most speakers produce (\ex{0}) indicates that some sort of restructuring has occurred that results in the kind of structure the present analysis offers. There is yet another distributional fact which falls out of this analysis. When comparative or comparativized adjectives modify a noun phrase, they can stand alone or occur with a {\it than} phrase; furthermore, they are obligatory when a {\it than}-phrase is present. \enumsentence{Hobbes is a better teacher.} \enumsentence{Hobbes is a better teacher than Bill.} \enumsentence{A more exquisite horse launched onto the racetrack.} \enumsentence{A more exquisite horse than Black Beauty launched onto the racetrack.} \enumsentence{*Hobbes is a teacher than Bill.} \noindent Comparative adjectives such as {\it better} come from the lexicon as {\bf compar+}. By having trees such as $\beta$An transmit the {\bf compar} value of the A node to the root N node, we can signal to $\beta$CnxPnx that it may adjoin when a comparative adjective has adjoined. An example of such an adjunction is given in Figure~\ref{better-teacher-than-Bill}. Of course, if no comparative element is present in the lower part of the noun phrase, $\beta$nxPnx will not be able to adjoin since nouns themselves are {\bf compar-}. In order to capture the fact that a comparative element blocks further modification to N, $\beta$An must only adjoin to N nodes which are {\bf compar-} in their lower feature matrix. \begin{figure}[htb] \centering \begin{tabular}{cc} {\psfig{figure=ps/comparatives-files/better_teacher_than_Bill_nf.ps,height=3.0in}} \end{tabular}\\ \caption {Adjunction of $\beta$nxPnx to NP modified by comparative adjective.} \label {better-teacher-than-Bill} \end{figure} In order to obtain this result for phrases like {\it more exquisite horse}, we need to provide a way for {\it more} and {\em less} to modify adjectives without a {\it than}-clause as we have with $\beta$ARBaPa.
Actually, we need this ability independently for comparative adjectival phrases, as discussed in the next section. \subsection{Adjectival Comparatives} With nominal comparatives, we saw that a single analysis was amenable to both ``pure'' comparatives and elliptical comparatives. This is not possible for adjectival comparatives, as the following examples demonstrate: \enumsentence{The dog is less patient.} \enumsentence{The dog is less patient than the cat.} \enumsentence{The dog is as patient.} \enumsentence{The dog is as patient as the cat.} \enumsentence{The less patient dog waited eagerly for its master.} \enumsentence{*The less patient than the cat dog waited eagerly for its master.} \noindent The last example shows that comparative adjectival phrases cannot distribute quite as freely as comparative nominals. The analysis of elliptical comparative adjectives closely follows that of comparative nominals. We build them up by first adjoining the comparative element to the A node, which then signals to the AP node, via the {\bf compar} feature, that it may allow a {\it than}-clause to adjoin. The relevant trees are given in Figure~\ref{ellip-adj-compar}. $\beta$CARBa is anchored by {\it more, less} and {\it as}, and $\beta$axPnx is anchored by both {\it than} and {\it as}. \begin{figure}[htbp] \centering \begin{tabular}{ccc} {\psfig{figure=ps/comparatives-files/betaCARBa.ps,height=1.2in}} & \hspace{0.6in} {\psfig{figure=ps/comparatives-files/betaaxPnx.ps,height=2.0in}} \\ (a) $\beta$CARBa tree& \qquad(b) $\beta$axPnx tree \\ \end{tabular}\\ \caption {Elliptical adjectival comparative trees} \label {ellip-adj-compar} \end{figure} The advantages of this analysis are many. We capture the distribution exhibited in the examples given in (\ex{-5}) - (\ex{0}). With $\beta$CARBa, comparative elements may modify adjectives wherever they occur.
However, {\it than} clauses for adjectives have a more restricted distribution which coincides nicely with the distribution of AP's in the XTAG grammar. Thus, by making them adjoin to AP rather than A, ill-formed sentences like (\ex{0}) are not allowed. \begin{figure}[htb] \centering \begin{tabular}{cc} {\psfig{figure=ps/comparatives-files/black_beauty.ps,height=4.0in}} \end{tabular}\\ \caption {Comparativized adjective triggering $\beta$CnxPnx.} \label {black_beauty} \end{figure} There are two further advantages to this analysis. One is that $\beta$CARBa interacts with $\beta$nxPnx to produce sequences like {\it more exquisite horse than Black Beauty}, a result alluded to at the end of Section~\ref{nom-comparatives-section}. We achieve this by ensuring that the comparativeness of an adjective is controlled by a comparative adverb which adjoins to it. A sample derivation is given in Figure~\ref{black_beauty}. The second advantage is that we get sentences such as (\ex{1}) for free. \enumsentence{Hobbes is better than Bill.} \noindent Since {\it better} comes from the lexicon as {\bf compar+} and this value is passed up to the AP node, $\beta$axPnx can adjoin as desired, giving us the derivation given in Figure~\ref{better-than-Bill}. \begin{figure}[htb] \centering \begin{tabular}{cc} {\psfig{figure=ps/comparatives-files/better_than_Bill_f.ps,height=4.0in}} \end{tabular}\\ \caption {Adjunction of $\beta$axPnx to comparative adjective.} \label {better-than-Bill} \end{figure} Notice that the root AP node of Figure~\ref{better-than-Bill} is {\bf compar-}, so we are basically saying that strings such as {\it better than Bill} are not ``comparative.'' This accords with our use of the {\bf compar} feature---a positive value for {\bf compar} signals that the clause beneath it is to {\bf be} compared against something else. In the case of {\it better than Bill}, the comparison has been fulfilled, so we do not want it to signal for further comparisons. 
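The role of the {\bf compar} feature in first licensing and then shutting off adjunction can be pictured with a toy gate (our own illustration, not the grammar's actual feature machinery): a tree may adjoin at a node only if its foot-node requirement matches the node's current value, and the node then takes on the adjoining tree's root-node value.

```python
# Toy illustration (not XTAG code) of how the compar feature gates
# adjunction: the adjoining tree's foot-node requirement must match the
# node's current compar value, and afterwards the node carries the
# adjoining tree's root-node value.

def try_adjoin(node_compar, foot_requires, root_gives):
    """Return the node's new compar value, or None if adjunction is blocked."""
    return root_gives if node_compar == foot_requires else None

# "better" comes from the lexicon as compar+; the than-clause tree
# requires compar+ at its foot and leaves compar- at its root:
ap = "+"                                   # AP headed by "better"
ap = try_adjoin(ap, foot_requires="+", root_gives="-")
assert ap == "-"       # "better than Bill": the comparison is fulfilled
# A second than-clause is blocked, since the AP is now compar-:
assert try_adjoin(ap, foot_requires="+", root_gives="-") is None
```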
A nice result which follows is that $\beta$axPnx cannot adjoin more than once to any given AP spine, and we have no need for the NA constraint on the tree's root node. Also, this treatment of the comparativeness of various strings proves important in getting the coordination of comparative constructions to work properly. A note needs to be made about the analysis regarding the interaction of the equivalence comparative construction {\it as ... as} and the inequivalence comparative construction {\it more/less ... than}. In the grammar, {\it more, less}, and {\it as} all anchor $\beta$CARBa, and both {\it than} and {\it as} anchor $\beta$axPnx. Without further modifications, this of course will give us sentences such as the following: \enumsentence{*?Hobbes is as patient than Bill.} \enumsentence{*?Hobbes is more patient as Bill.} \noindent Such cases are blocked with the feature {\bf equiv}: {\it more, less, fewer} and {\it than} are {\bf equiv-} while {\it as} (in both adverbial and prepositional uses) is {\bf equiv+}. The prepositional trees then require that their P node and the node to which they are adjoining match for {\bf equiv}. An interesting phenomenon in which comparisons seem to be paired with an inappropriate {\it as/than}-clause is exhibited in (\ex{1}) and (\ex{2}). \enumsentence{Hobbes is as patient or more patient than Bill.} \enumsentence{Hobbes is more patient or as patient as Bill.} \noindent Though prescriptive grammars disfavor these sentences, they are perfectly acceptable. We can capture the fact that the {\it as/than}-clause shares the {\bf equiv} value with the latter of the comparison phrases by passing the {\bf equiv} value for the second element to the root of the coordination tree. \subsection{Adverbial Comparatives} The analysis of adverbial comparatives encouragingly parallels the analysis for nominal and elliptical adjectival comparatives---with, however, some interesting differences.
Some examples of adverbial comparatives and their distribution are given in the following: \enumsentence{Albert works more quickly.} \enumsentence{Albert works more quickly than Richard.} \enumsentence{Albert works more.} \enumsentence{*Albert more works.} \enumsentence{Albert works more than Richard.} \enumsentence{Hobbes eats his supper more quickly than Calvin.} \enumsentence{Hobbes more quickly eats his supper than Calvin.} \enumsentence{*Hobbes more quickly than Calvin eats his supper.} \noindent When {\it more} is used alone as an adverb, it must also occur after the verb phrase. Also, it appears that adverbs modified by {\it more} and {\it less} have the same distribution as when they are not modified. However, the {\it than} portion of an adverbial comparative is restricted to post verb phrase positions. The first observation can be captured by having {\it more} and {\it less} select only $\beta$vxARB from the set of adverb trees. Comparativization of adverbs looks very similar to that of other categories, and we follow this trend by giving the tree in Figure~\ref{more-adv-mod}(a), which parallels the adjectival and nominal trees, for these instances. This handles the quite free distribution of adverbs which have been comparativized, while the tree in Figure~\ref{more-adv-mod}(b), $\beta$vxPnx, allows the {\it than} portion of an adverbial comparative to occur only after the verb phrase, blocking examples such as (\ex{0}). \begin{figure}[htbp] \centering \begin{tabular}{ccc} {\psfig{figure=ps/comparatives-files/betaCARBarb.ps,height=1.2in}} & \hspace{0.6in} {\psfig{figure=ps/comparatives-files/betavxPnx.ps,height=2.0in}} \\ (a) $\beta$CARBarb tree& \qquad(b) $\beta$vxPnx tree \\ \end{tabular}\\ \caption {Adverbial comparative trees} \label {more-adv-mod} \end{figure} The usage of the {\bf compar} feature parallels that of the adjectives and nominals; however, trees which adjoin to VP are {\bf compar-} on their root VP node. 
In this way, $\beta$vxPnx anchored by {\it than} or {\it as} (which must adjoin to a {\bf compar+} VP) can only adjoin immediately above a comparative or comparativized adverb. This avoids extra parses in which the comparative adverb adjoins at a VP node lower than the {\it than}-clause. A final note is that {\it as} may anchor $\beta$vxPnx non-comparatively, as in sentence (\ex{1}). This means that there will be two parses for sentences such as (\ex{2}). \enumsentence{John works as a carpenter.} \enumsentence{John works as quickly as a carpenter.} \noindent This appears to be a legitimate ambiguity. One reading is that John works as quickly as a carpenter (works quickly), and the other is that John works quickly when he is acting as a carpenter (but perhaps he is slow when he is acting as a plumber). \section{Future Work} \begin{itemize} \item Interaction with determiner sequencing (e.g., {\it several more men than women} but not {\it *every more men than women}). \item Handle sentential complement comparisons (e.g., {\it Bill eats more pasta than Angus drinks beer}). \item Add partitives. \item Deal with constructions like {\it as many} and {\it as much}. \item Look at {\it so...as} construction. \end{itemize} \chapter{Underview} \label{underview} The morphological, syntactic, and tree databases together comprise the English grammar. A lexical item that is not in the databases receives a default tree selection and features for its part of speech and morphology. In designing the grammar, a decision was made early on to err on the side of acceptance whenever there are conflicting opinions as to whether or not a construction is grammatical. In this sense, the XTAG English grammar is intended to function primarily as an acceptor rather than a generator of English sentences. 
The range of syntactic phenomena that can be handled is large and includes auxiliaries (including inversion), copula, raising and small clause constructions, topicalization, relative clauses, infinitives, gerunds, passives, adjuncts, it-clefts, wh-clefts, PRO constructions, noun-noun modifications, extraposition, determiner sequences, genitives, negation, noun-verb contractions, clausal adjuncts and imperatives. \section{Subcategorization Frames} \label{subcat-frames} Elementary trees for non-auxiliary verbs are used to represent the linguistic notion of subcategorization frames. The anchor of the elementary tree subcategorizes for the other elements that appear in the tree, forming a clausal or sentential structure. Tree families group together trees belonging to the same subcategorization frame. Consider the following uses of the verb {\it buy}: \enumsentence{Srini bought a book.} \enumsentence{Srini bought Beth a book.} In sentence (\ex{-1}), the verb {\it buy} subcategorizes for a direct object NP. The elementary tree anchored by {\it buy} is shown in Figure~\ref{subcat-trees}(a) and includes nodes for the NP complement of {\it buy} and for the NP subject. In addition to this declarative tree structure, the tree family also contains the trees that would be related to each other transformationally in a movement-based approach, i.e., passivization, imperatives, wh-questions, relative clauses, and so forth. Sentence (\ex{0}) shows that {\it buy} also subcategorizes for a double NP object. This means that {\it buy} also selects the double NP object subcategorization frame, or tree family, with its own set of transformationally related sentence structures. Figure~\ref{subcat-trees}(b) shows the declarative structure for this set of sentence structures. 
\begin{figure}[ht] \centering \begin{tabular}{ccc} {\psfig{figure=ps/compl-adj-files/alphanx0Vnx1_bought.ps,height=1.8in}} & \hspace*{0.5in} & {\psfig{figure=ps/compl-adj-files/alphanx0Vnx1nx2_bought.ps,height=1.8in}}\\ (a) & \hspace*{0.5in} & (b) \\ \end{tabular} \caption{Different subcategorization frames for the verb {\it buy}} \label{subcat-trees} \end{figure} \section{Complements and Adjuncts} \label{compl-adj} Complements and adjuncts have very different structures in the XTAG grammar. Complements are included in the elementary tree anchored by the verb that selects them, while adjuncts do not originate in the same elementary tree as the verb anchoring the sentence, but are instead added to a structure by adjunction. The contrasts between complements and adjuncts have been extensively discussed in the linguistics literature and the classification of a given element as one or the other remains a matter of debate (see \cite{rizzi90}, \cite{larson88}, \cite{jackendoff90}, \cite{larson90}, \cite{cinque90}, \cite{obernauer84}, \cite{lasnik-saito84}, and \cite{chomsky86}). The guiding rule used in developing the XTAG grammar is whether or not the sentence is ungrammatical without the questioned structure.\footnote{Iteration of a structure can also be used as a diagnostic: {\it Srini bought a book at the bookstore on Walnut Street for a friend}.} Consider the following sentences: \enumsentence{Srini bought a book.} \enumsentence{Srini bought a book at the bookstore.} \enumsentence{Srini arranged for a ride.} \enumsentence{$\ast$Srini arranged.} Prepositional phrases frequently occur as adjuncts, and when they are used as adjuncts they have a tree structure such as that shown in Figure~\ref{compl-adjunct}(a). This adjunction tree would adjoin into the tree shown in Figure~\ref{subcat-trees}(a) to generate sentence (\ex{-2}). There are verbs, however, such as {\it arrange}, {\it hunger} and {\it differentiate}, that take prepositional phrases as complements. 
Sentences (\ex{-1}) and (\ex{0}) clearly show that the prepositional phrase is not optional for {\it arrange}. For these sentences, the prepositional phrase will be an initial tree (as shown in Figure~\ref{compl-adjunct}(b)) that substitutes into an elementary tree, such as the one anchored by the verb {\it arrange} in Figure~\ref{compl-adjunct}(c). \begin{figure}[ht] \centering \begin{tabular}{ccccc} {\psfig{figure=ps/compl-adj-files/betavxPnx_at.ps,height=1.8in}} & \hspace*{0.5in} & {\psfig{figure=ps/compl-adj-files/alphaPXPnx_for.ps,height=1.3in}} & \hspace*{0.5in} & {\psfig{figure=ps/compl-adj-files/alphanx0Vpnx1_arranged.ps,height=1.8in}}\\ (a) & \hspace*{0.5in} & (b) & \hspace*{0.5in} & (c) \\ \end{tabular} \caption{Trees illustrating the difference between Complements and Adjuncts} \label{compl-adjunct} \end{figure} Virtually all parts of speech, except for main verbs, function as both complements and adjuncts in the grammar. More information is available in this report on various parts of speech as complements: adjectives (e.g. section \ref{nx0Vax1-family}), nouns (e.g. section~\ref{nx0Vnx1-family}), and prepositions (e.g. section~\ref{nx0Vpnx1-family}); and as adjuncts: adjectives (section~\ref{adj-modifier}), adverbs (section~\ref{adv-modifier}), nouns (section~\ref{noun-modifier}), and prepositions (section~\ref{prep-modifier}). \section{Non-S constituents} Although sentential trees are generally considered to be special cases in any grammar, insofar as they make up a `starting category', it is the case that any initial tree constitutes a phrasal constituent. These initial trees may have substitution nodes that need to be filled (by other initial trees), and may be modified by adjunct trees, exactly as the trees rooted in S. 
Although grouping is possible according to the heads or anchors of these trees, we have not found any classification similar to the subcategorization frames for verbs that can be used by a lexical entry to `group select' a set of trees. These trees are selected one by one by each lexical item, according to each lexical item's idiosyncrasies. The grammar described by this technical report places them into several files for ease of use, but these files do not constitute tree families in the way that the subcategorization frames do. \input{case} \chapter{Conjunction} \label{conjunction} \section{Introduction} The XTAG system can handle sentences with conjunction of two constituents of the same syntactic category. The coordinating conjunctions which select the conjunction trees are {\it and}, {\it or} and {\it but}.\footnote{We believe that the restriction of {\it but} to conjoining only two items is a pragmatic one, and our grammar accepts sequences of any number of elements conjoined by {\it but}.} There are also multi-word conjunction trees, anchored by {\it either-or}, {\it neither-nor}, {\it both-and}, and {\it as well as}. There are eight syntactic categories that can be coordinated, and in each case an auxiliary tree is used to implement the conjunction. These eight categories can be considered as four different cases, as described in the following sections. In all cases the two constituents are required to be of the same syntactic category, but there may also be some additional constraints, as described below. \section{Adjective, Adverb, Preposition and PP Conjunction} Each of these four categories has an auxiliary tree that is used for conjunction of two constituents of that category. The auxiliary tree adjoins into the left-hand-side component, and the right-hand-side component substitutes into the auxiliary tree. 
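As a purely illustrative sketch of this combination scheme (the Python function and the dict-based encoding of constituents are our own invention, not part of the XTAG implementation), the like-category requirement on the two conjuncts can be modeled as follows:

```python
# Toy model of like-category coordination: the conjunction combines a
# left conjunct and a right conjunct of the same syntactic category.
# The dict-based "constituent" encoding is illustrative only.

def conjoin_like(left, conj, right):
    """Build a conjoined constituent, enforcing matching categories."""
    if left["cat"] != right["cat"]:
        raise ValueError("cannot conjoin %s with %s" % (left["cat"], right["cat"]))
    return {"cat": left["cat"],
            "children": [left, {"cat": "Conj", "lex": conj}, right]}

# 'dark and dreary': two adjectives conjoined into an adjective constituent
phrase = conjoin_like({"cat": "A", "lex": "dark"}, "and",
                      {"cat": "A", "lex": "dreary"})
assert phrase["cat"] == "A"
```

The same schematic check carries over unchanged to adverb, preposition and PP conjunction, since only the node labels of the auxiliary tree differ.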
\begin{figure}[htb] \centering \begin{tabular}{ccc} {\psfig{figure=ps/conj-files/betaA1conjA2.ps,height=1in}}& \hspace*{0.5in}& {\psfig{figure=ps/conj-files/derived-tree-140291.ps,height=2.8in}}\\ (a) & \hspace*{0.5in}& (b)\\ \end{tabular} \caption{Tree for adjective conjunction: $\beta$a1CONJa2 and a resulting parse tree} \label{A1conjA2} \end{figure} Figure~\ref{A1conjA2}(a) shows the auxiliary tree for adjective conjunction, and is used, for example, in the derivation of the parse tree for the noun phrase {\it the dark and dreary day}, as shown in Figure~\ref{A1conjA2}(b). The auxiliary tree adjoins onto the node for the left adjective, and the right adjective substitutes into the right hand side node of the auxiliary tree. The analysis for adverb, preposition and PP conjunction is exactly the same and there is a corresponding auxiliary tree for each of these that is identical to that of Figure~\ref{A1conjA2}(a) except, of course, for the node labels. \section{Noun Phrase and Noun Conjunction} The tree for NP conjunction, shown in Figure~\ref{NP1conjNP2}(a), has the same basic analysis as in the previous section except that the {\bf $<$wh$>$} and {\bf $<$case$>$} features are used to force the two noun phrases to have the same {\bf $<$wh$>$} and {\bf $<$case$>$} values. This allows, for example, {\it he and she wrote the book together} while disallowing {\it $\ast$he and her wrote the book together.} Agreement is lexicalized, since the various conjunctions behave differently. With {\it and}, the root {\bf $<$agr num$>$} value is {\bf $<$plural$>$}, no matter what the number of the two conjuncts. With {\it or}, however, the root {\bf $<$agr num$>$} is co-indexed with the {\bf $<$agr num$>$} feature of the right conjunct. This ensures that the entire conjunct will bear the number of both conjuncts if they agree (Figure~\ref{NP1conjNP2}(b)), or of the most ``recent'' one if they differ ({\it Either the boys or John is going to help you.}). 
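The lexicalized agreement behavior just described can be sketched schematically (this toy function is ours, not the grammar's actual feature machinery): {\it and\/} always yields a plural conjoined NP, while {\it or\/} co-indexes the result with the right conjunct.

```python
def conjoined_np_agr(conj, left_num, right_num):
    """Toy model of the <agr num> value on a conjoined NP.

    'and' always yields plural, whatever the conjuncts' numbers;
    'or' takes the number of the right (most recent) conjunct.
    """
    if conj == "and":
        return "plur"
    if conj == "or":
        return right_num
    raise ValueError("unknown conjunction: " + conj)

assert conjoined_np_agr("and", "sing", "sing") == "plur"  # John and Mary ARE
assert conjoined_np_agr("or", "plur", "sing") == "sing"   # the boys or John IS
assert conjoined_np_agr("or", "sing", "plur") == "plur"   # John or the boys ARE
```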
There is no rule per se on what the agreement should be here, but people tend to make the verb agree with the last conjunct (cf. \cite{quirk85}, section 10.41 for discussion). The tree for N conjunction is identical to the NP conjunction tree except for the node labels. (The multi-word conjunctions do not select the N conjunction tree: {\it $^*$the both dogs and cats}.) \begin{figure}[htb] \centering \begin{tabular}{cc} {\psfig{figure=ps/conj-files/betaCONJnx1CONJnx2.ps,height=4in}} \hspace{0.5cm} & {\psfig{figure=ps/conj-files/aardvarks-and-emus.ps,height=4in}}\\ (a) & (b)\\ \end{tabular} \caption{Tree for NP conjunction: $\beta$CONJnx1CONJnx2 and a resulting parse tree} \label{NP1conjNP2} \end{figure} \section{Determiner Conjunction} In determiner coordination, all of the determiner feature values are taken from the left determiner, and the only requirement is that the {\bf $<$wh$>$} feature is the same, while the other features, such as {\bf $<$card$>$}, are unconstrained. For example, {\it which and what} and {\it all but one} are both acceptable determiner conjunctions, but {\it $\ast$which and all} is not. \enumsentence{how many and which people camp frequently ?} \enumsentence{$^*$some or which people enjoy nature .} \begin{figure}[htbp] \centering \begin{tabular}{c} \psfig{figure=ps/conj-files/betad1CONJd2.ps,height=5.3in} \end{tabular} \vspace{-0.25in} \caption{Tree for determiner conjunction: $\beta$d1CONJd2} \label{DX1conjDX2} \end{figure} \section{Sentential Conjunction} The tree for sentential conjunction, shown in Figure~\ref{S1conjS2}, is based on the same analysis as the conjunctions in the previous two sections, with a slight difference in features. 
The {\bf $<$mode$>$} feature\footnote{See section~\ref{s-features} for an explanation of the {\bf $<$mode$>$} feature.} is used to constrain the two sentences being conjoined to have the same mode so that {\it the day is dark and the phone never rang} is acceptable, but {\it $\ast$the day dark and the phone never rang} is not. Similarly, the two sentences must agree in their {\bf $<$wh$>$}, {\bf $<$comp$>$} and {\bf $<$extracted$>$} features. Co-indexation of the {\bf $<$comp$>$} feature ensures that either both conjuncts have the same complementizer, or there is a single complementizer adjoined to the complete conjoined S. The {\bf $<$assign-comp$>$} feature\footnote{See section~\ref{for-complementizer} for an explanation of the {\bf $<$assign-comp$>$} feature.} is used to allow conjunction of infinitival sentences, such as {\it to read and to sleep is a good life}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/conj-files/betaS1conjS2.ps,height=3.5in} \end{tabular} \caption{Tree for sentential conjunction: $\beta$s1CONJs2} \label{S1conjS2} \end{figure} \section{Comma as a conjunction} We treat comma as a conjunction in conjoined lists. It anchors the same trees as the lexical conjunctions, but is considerably more restricted in how it combines with them. The trees anchored by commas are prohibited from adjoining to anything but another comma-conjoined element or a non-coordinate element. (All scope possibilities are allowed for elements coordinated with lexical conjunctions.) Thus, structures such as Tree \ref{Comma-conj}(a) are permitted, with each element stacking sequentially on top of the first element of the conjunct, while structures such as Tree \ref{Comma-conj}(b) are blocked. 
\begin{figure}[htb] \centering \begin{tabular}{ccc} {\psfig{figure=ps/conj-files/good-adj-conj.ps,height=2.75in}}& \hspace*{0.5in}& {\psfig{figure=ps/conj-files/bad-adj-conj.ps,height=2.75in}}\\ (a) Valid tree with comma conjunction & \hspace*{0.5in}& (b) Invalid tree\\ \end{tabular} \caption{} \label{Comma-conj} \end{figure} This is accomplished by using the {\bf $<$conj$>$} feature, which has the values {\bf and/or/but} and {\bf comma} to differentiate the lexical conjunctions from commas. The {\bf $<$conj$>$} values for a comma-anchored tree and {\it and}-anchored tree are shown in Figure \ref{conj-contrast}. The feature {\bf $<$conj$>$ = comma/none} on A$_1$ in (a) only allows comma conjoined or non-conjoined elements as the left-adjunct, and {\bf $<$conj$>$ = none} on A in (a) allows only a non-conjoined element as the right conjunct. We also need the feature {\bf $<$conj$>$ = and/or/but/none} on the right conjunct of the trees anchored by lexical conjunctions like (b), to block comma-conjoined elements from substituting there. Without this restriction, we would get multiple parses of the NP in Tree \ref{Comma-conj}; with the restrictions we only get the derivation with the correct scoping, shown as (a). Since comma-conjoined lists can appear without a lexical conjunction between the final two elements, as shown in example (\ex{1}), we cannot force all comma-conjoined sequences to end with a lexical conjunction. \enumsentence{So it is too with many other spirits which we all know: the spirit of Nazism or Communism, school spirit , the spirit of a street corner gang or a football team, the spirit of Rotary or the Ku Klux Klan. 
\hfill [Brown cd01]} \begin{figure}[htb] \centering \begin{tabular}{cc} {\psfig{figure=ps/conj-files/adj-comma-conj.ps,height=2.5in}}& {\psfig{figure=ps/conj-files/adj-and-conj.ps,height=2.5in}}\\ \end{tabular} \caption{$\beta$a1CONJa2 (a) anchored by comma and (b) anchored by {\it and}} \label{conj-contrast} \end{figure} \section{{\it But-not}, {\it not-but}, {\it and-not} and {\it $\epsilon$-not}} We are analyzing conjoined structures such as {\it The women but not the men} with a multi-anchor conjunction tree anchored by the conjunction plus the adverb {\it not}. The alternative is to allow {\it not} to adjoin to any constituent. However, this is the only construction where {\it not} can freely attach to a constituent other than a VP or an adjective (cf. $\beta$NEGvx and $\beta$NEGa trees). It can also adjoin to some determiners, as discussed in Section \ref{det-comparitives}. We want to allow sentences like (\ex{1}) and rule out those like (\ex{2}). The tree for the good example is shown in Figure \ref{but-not}. There are similar trees for {\it and-not} and {\it $\epsilon$-not}, where $\epsilon$ is interpretable as either {\it and} or {\it but}, and a tree with {\it not} on the first conjunct for {\it not-but}. \enumsentence{Beth grows basil in the house (but) not in the garden .} \enumsentence{$^*$Beth grows basil (but) not in the garden .} \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/conj-files/but-not.ps,height=2.5in} \end{tabular} \caption{Tree for conjunction with but-not: $\beta$px1CONJARBpx2} \label{but-not} \end{figure} Although these constructions sound a bit odd when the two conjuncts do not have the same number, they are sometimes possible. 
The agreement information for such NPs is always that of the non-negated conjunct: {\it his sons, and not Bill, are in charge of doing the laundry} or {\it not Bill, but his sons, are in charge of doing the laundry}. (Some people insist on having the commas here, but they are frequently absent in corpus data.) The agreement feature from the non-negated conjunct is passed to the root NP, as shown in Figure \ref{not-but}. Aside from agreement, these constructions behave just like their non-negated counterparts. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/conj-files/not-but.ps,height=4in} \end{tabular} \caption{Tree for conjunction with not-but: $\beta$ARBnx1CONJnx2} \label{not-but} \end{figure} \section{{\it To} as a Conjunction} {\it To} can be used as a conjunction for adjectives (Fig. \ref{to-conj}) and determiners, when they denote points on a scale: \enumsentence{two to three degrees} \enumsentence{high to very high temperatures} As far as we can tell, when the conjuncts are determiners they must be cardinal. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/conj-files/to.ps,height=3.5in} \end{tabular} \caption{Example of conjunction with {\it to}} \label{to-conj} \end{figure} \section{Predicative Coordination} This section describes the method for predicative coordination (including VP coordination of various kinds) used in XTAG. The description is derived from work described in (\cite{anoopjoshi96}). It is important to note that this implementation of predicative coordination is not part of the XTAG release at the moment due to massive parsing ambiguities. This is partly because of the current implementation and partly because of inherent ambiguities in VP coordination that cause a combinatorial explosion for the parser. We are trying to remedy both of these limitations using a probability model for coordination attachments which will be included as part of a later XTAG release. 
This extended domain of locality in a lexicalized Tree Adjoining Grammar causes problems when we consider the coordination of such predicates. Consider~(\ex{1}), for instance: the NP {\em the beans that I bought from Alice} in the Right-Node Raising (RNR) construction has to be shared by the two elementary trees (which are anchored by {\em cooked} and {\em ate} respectively). \enumsentence{(((Harry cooked) and (Mary ate)) the beans that I bought from Alice)} We use the standard notion of coordination which is shown in Figure~\ref{fig:conj} which maps two constituents of {\em like type}, but with different interpretations, into a constituent of the same type. \begin{figure}[htbp] \begin{center} \leavevmode \psfig{figure=ps/conj-files/conj.ps,scale=110} \end{center} \caption{Coordination schema} \label{fig:conj} \end{figure} We add a new operation to the LTAG formalism (in addition to substitution and adjunction) called {\em conjoin} (later we discuss an alternative which replaces this operation by the traditional operations of substitution and adjunction). While substitution and adjunction take two trees to give a derived tree, {\em conjoin\/} takes three trees and composes them to give a derived tree. One of the trees is always the tree obtained by specializing the schema in Figure~\ref{fig:conj} for a particular category. The tree obtained will be a lexicalized tree, with the lexical anchor as the conjunction: {\em and}, {\em but}, etc. The conjoin operation then creates a {\em contraction\/} between nodes in the contraction sets of the trees being coordinated. The term {\em contraction\/} is taken from the graph-theoretic notion of edge contraction. In a graph, when an edge joining two vertices is contracted, the nodes are merged and the new vertex retains edges to the union of the neighbors of the merged vertices. The conjoin operation supplies a new edge between each corresponding node in the contraction set and then contracts that edge. 
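The graph-theoretic notion of edge contraction that {\em contraction\/} borrows can be illustrated in a few lines of Python (a generic sketch of the graph operation, unrelated to the actual XTAG code; the vertex names are ours):

```python
def contract_edge(edges, u, v):
    """Contract the edge (u, v): merge vertex v into u.

    The merged vertex keeps edges to the union of the neighbors of
    u and v; the contracted edge becomes a self-loop and is dropped.
    """
    merged = set()
    for a, b in edges:
        a = u if a == v else a
        b = u if b == v else b
        if a != b:                       # drop the contracted self-loop
            merged.add(tuple(sorted((a, b))))
    return merged

# Schematically sharing one object NP between 'cooked' and 'ate':
edges = {("cooked", "np1"), ("ate", "np2"), ("np1", "np2")}
assert contract_edge(edges, "np1", "np2") == {("cooked", "np1"), ("ate", "np1")}
```

After the contraction, both predicates hold an edge to the single merged NP vertex, mirroring the shared dependency created by the conjoin operation.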
For example, applying {\em conjoin\/} to the trees {\em Conj(and)}, $\alpha(eats)$ and $\alpha(drinks)$ gives us the derivation tree and derived structure for the constituent in \ex{1} shown in Figure~\ref{fig:vpc}. \enumsentence{$\ldots$ eats cookies and drinks beer} \begin{figure}[htbp] \begin{center} \leavevmode \psfig{figure=ps/conj-files/vpc.ps,scale=110} \end{center} \caption{An example of the {\em conjoin\/} operation. $\{1\}$ denotes a shared dependency.} \label{fig:vpc} \end{figure} Another way of viewing the conjoin operation is as the construction of an auxiliary structure from an elementary tree. For example, from the elementary tree $\alpha(drinks)$, the conjoin operation would create the auxiliary structure $\beta(drinks)$ shown in Figure~\ref{fig:aux-conj}. The adjunction operation would now be responsible for creating contractions between nodes in the contraction sets of the two trees supplied to it. Such an approach is attractive for two reasons. First, it uses only the traditional operations of substitution and adjunction. Second, it treats {\em conj X} as a kind of ``modifier'' on the left conjunct {\em X}. This approach reduces some of the parsing ambiguities introduced by the predicative coordination trees and forms the basis of the XTAG implementation. \begin{figure}[htbp] \begin{center} \leavevmode \psfig{figure=ps/conj-files/aux-conj.ps,scale=110} \end{center} \caption{Coordination as adjunction.} \label{fig:aux-conj} \end{figure} More information about predicative coordination can be found in (\cite{anoopjoshi96}), including an extension to handle gapping constructions. \section{Pseudo-coordination} The XTAG grammar does handle one sort of verb pseudo-coordination. Semi-idiomatic phrases such as {\it try and} and {\it up and} (as in {\it they might try and come today}) are handled as multi-anchor modifiers rather than as true coordination. These items adjoin to a V node, using the $\beta$VCONJv tree. 
This tree adjoins only to verbs in their base morphological (non-inflected) form. The verb anchor of the $\beta$VCONJv tree must also be in its base form, as shown in examples (\ex{1})-(\ex{3}). This blocks third-person singular forms, the only person morphologically marked in the present tense, except when an auxiliary verb is present or the verb is in the infinitive. \enumsentence{$\ast$He tried and came yesterday.} \enumsentence{They try and exercise three times a week.} \enumsentence{He wants to try and sell the puppies.} \chapter{Determiners and Noun Phrases} \label{det-comparitives} In our English XTAG grammar,\footnote{A more detailed discussion of this analysis can be found in \cite{ircs:det98}.} all nouns select the noun phrase (NP) tree structure shown in Figure~\ref{np-tree}. Common nouns do not require determiners in order to form grammatical NPs. Rather than being ungrammatical, singular countable nouns without determiners are restricted in interpretation and can only be interpreted as mass nouns. Allowing all nouns to head determinerless NPs correctly treats the individuation in countable NPs as a property of determiners. Common nouns have negative (``$-$'') values for determiner features in the lexicon in our analysis and can only acquire a positive (``$+$'') value for those features if determiners adjoin to them. Other types of NPs such as pronouns and proper nouns have been argued by Abney \cite{Abney87} to either be determiners or to move to the determiner position because they exhibit determiner-like behavior. We can capture this insight in our system by giving pronouns and proper nouns positive values for determiner features. For example, pronouns and proper nouns would be marked as definite, a value that NPs containing common nouns can only obtain by having a definite determiner adjoin. In addition to the determiner features, nouns also have values for features such as reflexive ({\bf refl}), case, pronoun ({\bf pron}) and conjunction ({\bf conj}). 
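The way a bare common noun acquires positive determiner features only through adjunction can be sketched as a toy feature overlay (the Python encoding is ours; the grammar itself uses feature unification on tree nodes, not dictionary updates):

```python
def adjoin_determiner(noun_feats, det_feats):
    """Toy sketch: adjoining a determiner overlays its feature values
    onto the (all-negative) determiner features of a bare common noun."""
    result = dict(noun_feats)
    result.update(det_feats)
    return result

# Bare 'dog' can only get a mass reading; its determiner features are negative.
dog = {"definite": "-", "quan": "-", "card": "-"}
the_dog = adjoin_determiner(dog, {"definite": "+"})
assert the_dog["definite"] == "+"
assert dog["definite"] == "-"   # pronouns/proper nouns carry '+' lexically instead
```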
\begin{figure}[ht] \centering \begin{tabular}{c} {\psfig{figure=ps/det-files/alphaNXN.ps,height=16.0cm}}\\ \end{tabular} \caption{NP Tree} \label{np-tree} \end{figure} Simple determiners all select a single tree structure, an auxiliary tree that adjoins to NP. An example of this determiner tree anchored by the determiner {\it these\/} is shown in Figure~\ref{det-trees}. In addition to the determiner features the tree in Figure~\ref{det-trees} has noun features such as {\bf case} (see section 4.4.2), the {\bf conj} feature to control conjunction (see Chapter \ref{conjunction}), {\bf rel-clause$-$} (see Chapter \ref{rel_clauses}) and {\bf gerund$-$} (see Chapter \ref{gerunds-chapter}) which prevent determiners from adjoining on top of relative clauses and gerund NPs respectively, and the {\bf displ-const} feature which is used to simulate multi-component adjunction. Complex determiners such as genitives and partitives also anchor tree structures that adjoin to NP. They differ from the simple determiners in their internal complexity. Details of our treatment of these more complex constructions appear in Sections \ref{genitives} and \ref{partitives}. Sequences of determiners, as in the NPs {\it all her dogs\/} or {\it those five dogs\/}, are derived by multiple adjunctions of the determiner tree, with each tree anchored by one of the determiners in the sequence. The order in which the determiner trees can adjoin is controlled by features. \begin{figure}[ht] \centering \begin{tabular}{c} {\psfig{figure=ps/det-files/betaDnx-these.ps,height=14cm}} \end{tabular} \caption{Determiner Trees with Features} \label{det-trees} \end{figure} This treatment of determiners as adjoining onto NPs is similar to that of \cite{Abeille90:TAG}, and allows us to capture one of the insights of the DP hypothesis, namely that determiners select NPs as complements. 
In Figure~\ref{det-trees} the determiner and its NP complement appear in the configuration that is typically used in LTAG to represent selectional relationships. That is, the head serves as the anchor of the tree and its complement is a sister node in the same elementary tree. The XTAG treatment of determiners uses nine features for representing their properties: definiteness ({\bf definite}), quantity ({\bf quan}), cardinality ({\bf card}), genitive ({\bf gen}), decreasing ({\bf decreas}), constancy ({\bf const}), {\bf wh}, agreement ({\bf agr}), and complement ({\bf compl}). Seven of these features were developed by semanticists for their accounts of semantic phenomena (\cite{KeenanStavi86:LP}, \cite{BarwiseCooper81:LP}, \cite{Partee90:BK}), another was developed for a semantic account of determiner negation by one of the authors of this determiner analysis (\cite{Mateyak97}), and the last is the familiar agreement feature. When used together, these features also account for a substantial portion of the complex patterns of English determiner sequencing. Although we do not claim to have exhaustively covered the sequencing of determiners in English, we do cover a large subset, both in terms of the phenomena handled and in terms of corpus coverage. The XTAG grammar has also been extended to include complex determiner constructions such as genitives and partitives using these determiner features. Each determiner carries with it a set of values for these features that represents its own properties, and a set of values for the properties of NPs to which it can adjoin. The features are crucial to ordering determiners correctly. The semantic definitions underlying the features are given below. \begin{description} \item[Definiteness:] Possible Values [+/--]. \\ A function f is definite iff f is non-trivial and whenever f(s)~$\neq~\emptyset$ then it is always the intersection of one or more individuals. \cite{KeenanStavi86:LP} \item[Quantity:] Possible Values [+/--]. 
\\ If A and B are sets denoting an NP and associated predicate, respectively; E is a domain in a model M, and F is a bijection from M$_{1}$ to M$_{2}$, then we say that a determiner satisfies the constraint of quantity if Det$_{E_{1}}$AB~$\leftrightarrow$~Det$_{E_{2}}$F(A)F(B). \cite{Partee90:BK} \item[Cardinality:] Possible Values [+/--]. \\ A determiner D is cardinal iff D $\in$ cardinal numbers~$\geq$~1. \item[Genitive:] Possible Values [+/--]. \\ Possessive pronouns and the possessive morpheme ({\it 's}) are marked {\bf gen$+$}; all other nouns are {\bf gen$-$}. \item[Decreasing:] Possible Values [+/--]. \\ A set of Q properties is decreasing iff whenever s$\leq$t and t$\in$Q then s$\in$Q. A function f is decreasing iff for all properties f(s) is a decreasing set. A non-trivial NP (one with a Det) is decreasing iff its denotation in any model is decreasing. \cite{KeenanStavi86:LP} \item[Constancy:] Possible Values [+/--]. \\ If A and B are sets denoting an NP and associated predicate, respectively, and E is a domain, then we say that a determiner displays constancy if (A$\cup$B)~$\subseteq$~E~$\subseteq$~E$^{\prime}$ then Det$_{E}$AB~$\leftrightarrow$~Det$_{E^{\prime}}$AB. Modified from \cite{Partee90:BK} \item[Complement:] Possible Values [+/--]. \\ A determiner Q is positive complement if and only if for every set X, there exists a continuous set of possible values for the size of the negated determined set, NOT(QX), and the cardinality of QX is the only aspect of QX that can be negated. (adapted from \cite{Mateyak97}) \end{description} The {\bf wh}-feature has been discussed in the linguistics literature mainly in relation to wh-movement and with respect to NPs and nouns as well as determiners. We give a shallow but useful working definition of the {\bf wh}-feature below: \begin{description} \item[Wh:] Possible Values [+/--]. \\ Interrogative determiners are {\bf wh$+$}; all other determiners are {\bf wh$-$}. 
\end{description} The {\bf agr} feature is inherently a noun feature. While determiners are not morphologically marked for agreement in English, many of them are sensitive to number. Many determiners are semantically either singular or plural and must adjoin to nouns of the same number. For example, {\it a\/} can only adjoin to singular nouns ({\it a dog\/} vs {\it $\ast$a dogs\/}), while {\it many\/} must have plurals ({\it many dogs\/} vs {\it $\ast$many dog\/}). Other determiners such as {\it some} are unspecified for agreement in our analysis because they are compatible with either singulars or plurals ({\it some dog}, {\it some dogs}). The possible values of agreement for determiners are: [3sg, 3pl, 3]. The determiner tree in Figure~\ref{det-trees} shows the appropriate feature values for the determiner {\it these}, while Table \ref{det-values} shows the corresponding feature values of several other common determiners. \begin{table} \centering \begin{tabular}{|l||c|c|c|c|c|c|c|c|c|} \hline Det&definite&quan&card&gen&wh&decreas&const&agr&compl\\ \hline \hline all&$-$&$+$&$-$&$-$&$-$&$-$&$+$&3pl&$+$\\ both&$+$&$-$&$-$&$-$&$-$&$-$&$+$&3pl&$+$\\ this&$+$&$-$&$-$&$-$&$-$&$-$&$+$&3sg&$-$\\ these&$+$&$-$&$-$&$-$&$-$&$-$&$+$&3pl&$-$\\ that&$+$&$-$&$-$&$-$&$-$&$-$&$+$&3sg&$-$\\ those&$+$&$-$&$-$&$-$&$-$&$-$&$+$&3pl&$-$\\ what&$-$&$-$&$-$&$-$&$+$&$-$&$+$&3&$-$\\ whatever&$-$&$-$&$-$&$-$&$-$&$-$&$+$&3&$-$\\ which&$-$&$-$&$-$&$-$&$+$&$-$&$+$&3&$-$\\ whichever&$-$&$-$&$-$&$-$&$-$&$-$&$+$&3&$-$\\ the&$+$&$-$&$-$&$-$&$-$&$-$&$+$&3&$-$\\ each&$-$&$+$&$-$&$-$&$-$&$-$&$+$&3sg&$-$\\ every&$-$&$+$&$-$&$-$&$-$&$-$&$+$&3sg&$+$\\ a/an&$-$&$+$&$-$&$-$&$-$&$-$&$+$&3sg&$+$\\ some$_{1}$&$-$&$+$&$-$&$-$&$-$&$-$&$+$&3&$-$\\ some$_{2}$&$-$&$+$&$-$&$-$&$-$&$-$&$-$&3pl&$-$\\ any&$-$&$+$&$-$&$-$&$-$&$-$&$+$&3sg&$+$\\ another&$-$&$+$&$-$&$-$&$-$&$-$&$+$&3sg&$+$\\ few&$-$&$+$&$-$&$-$&$-$&$+$&$-$&3pl&$-$\\ a few&$-$&$+$&$-$&$-$&$-$&$-$&$+$&3pl&$-$\\ many&$-$&$+$&$-$&$-$&$-$&$-$&$-$&3pl&$+$\\ many 
a/an&$-$&$+$&$-$&$-$&$-$&$-$&$-$&3sg&$+$\\ several&$-$&$+$&$-$&$-$&$-$&$-$&$+$&3pl&$-$\\ various&$-$&$-$&$-$&$-$&$-$&$-$&$+$&3pl&$-$\\ sundry&$-$&$-$&$-$&$-$&$-$&$-$&$+$&3pl&$-$\\ no&$-$&$+$&$-$&$-$&$-$&$+$&$+$&3&$-$\\ neither&$-$&$-$&$-$&$-$&$-$&$+$&$+$&3&$-$\\ either&$-$&$-$&$-$&$-$&$-$&$-$&$+$&3&$-$\\ \hline \hline GENITIVE&$+$&$-$&$-$&$+$&$-$&$-$&$+$&UN\footnotemark&$-$\\ CARDINAL&$-$&$+$&$+$&$-$&$-$&$-$&$+$&3pl\footnotemark\ &$-$\footnotemark\ \\ PARTITIVE&$-$&+/-\footnotemark\ &$-$&$-$&$-$&$-$&$+$&UN&+/-\\ \hline \end{tabular} \caption{Determiner Features associated with D anchors} \label{det-values} \end{table} \addtocounter{footnote}{-3} \footnotetext{We use the symbol UN to represent the fact that the selectional restrictions for a given feature are unspecified, meaning the noun phrase that the determiner selects can be either positive or negative for this feature.} \stepcounter{footnote} \footnotetext{Except {\it one} which is 3sg.} \stepcounter{footnote} \footnotetext{Except {\it one} which is {\bf compl+}.} \stepcounter{footnote} \footnotetext{A partitive can be either {\bf quan+} or {\bf quan-}, depending upon the nature of the noun that anchors the partitive. If the anchor noun is modified, then the quantity feature is determined by the modifier's quantity value.} In addition to the features that represent their own properties, determiners also have features to represent the selectional restrictions they impose on the NPs they take as complements. The selectional restriction features of a determiner appear on the NP foot node of the auxiliary tree that the determiner anchors. The NP$_{f}$ node in Figure~\ref{det-trees} shows the selectional feature restriction imposed by {\it these}\footnote{In addition to this tree, {\it these} would also anchor another auxiliary tree that adjoins onto {\bf card+} determiners.}, while Table~\ref{det-ordering} shows the corresponding selectional feature restrictions of several other determiners.
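Both the inherent features above and the selectional restrictions described below are enforced by unification. The following sketch is our own illustration, not part of the XTAG system; it shows only how the {\bf agr} values behave, with the underspecified value 3 compatible with both 3sg and 3pl:

```python
# Illustrative sketch (not the XTAG implementation) of how the <agr>
# feature behaves under unification: "3" is underspecified and is
# compatible with both "3sg" and "3pl", which clash with each other.

def unify_agr(det_agr, noun_agr):
    """Return the unified agreement value, or None on failure."""
    if det_agr == noun_agr:
        return det_agr
    if det_agr == "3":            # determiner unspecified for number
        return noun_agr
    if noun_agr == "3":           # noun unspecified for number
        return det_agr
    return None                   # e.g. 3sg vs 3pl: clash

# agr values as in the table of determiner features above
AGR = {"a": "3sg", "many": "3pl", "some": "3", "these": "3pl"}

assert unify_agr(AGR["a"], "3sg") == "3sg"      # "a dog"
assert unify_agr(AGR["many"], "3sg") is None    # "*many dog"
assert unify_agr(AGR["some"], "3pl") == "3pl"   # "some dogs"
assert unify_agr(AGR["some"], "3sg") == "3sg"   # "some dog"
```

The same mechanism generalizes to the other features: adjunction succeeds only if every feature pair unifies.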
\small \begin{table} \centering \begin{tabular}{|l||c|c|c|c|c|c|c|c|c||l|} \hline Det&defin&quan&card&gen&wh&decreas&const&agr&compl&{\it e.g.}\\ \hline \hline &$-$&$-$&$-$&$-$&$-$&$-$&$-$&3pl&$-$&{\it dogs}\\ all&$+$&$-$&$-$&UN&$-$&UN&UN&3pl&$-$&{\it these dogs}\\ &UN&UN&$+$&UN&UN&UN&UN&3pl&UN&{\it five dogs}\\ \hline &$-$&$-$&$-$&$-$&$-$&$-$&$-$&3pl&$-$&{\it dogs}\\ {both}&$+$&$-$&$-$&UN&$-$&UN&UN&3pl&$-$&{\it these dogs}\\ \hline &$-$&$-$&$-$&$-$&$-$&$-$&$-$&3sg&$-$&{\it dog}\\ &$-$&$+$&UN&UN&$-$&$+$&$-$&3&UN&{\it few dogs}\\ {this/that}&$-$&$+$&UN&UN&$-$&$-$&$-$&3pl&$+$&{\it many dogs}\\ &UN&UN&$+$&UN&UN&UN&UN&3sg&UN&{\it five dogs}\\ \hline &$-$&$-$&$-$&$-$&$-$&$-$&$-$&3pl&$-$&{\it dogs}\\ these/those&$-$&$+$&UN&UN&$-$&$+$&$-$&3pl&UN&{\it few dogs}\\ &UN&UN&$+$&UN&UN&UN&UN&3pl&UN&{\it five dogs}\\ \hline what/which&$-$&$-$&$-$&$-$&$-$&$-$&$-$&3&$-$&{\it dog(s)}\\ whatever&$-$&$+$&UN&UN&$-$&$+$&$-$&3&UN&{\it few dogs}\\ whichever&UN&UN&$+$&UN&UN&UN&UN&3&UN&{\it many dogs}\\ \hline &$-$&$-$&$-$&$-$&$-$&$-$&$-$&3&$-$&{\it dog(s)}\\ the&$-$&$+$&UN&UN&$-$&$+$&$-$&3&UN&{\it few dogs}\\ &$+$&$-$&$-$&$-$&$-$&$-$&$-$&UN&$-$&{\it the me}\\ &$-$&$+$&UN&UN&$-$&$-$&$-$&3pl&$+$&{\it many dogs}\\ &UN&UN&$+$&UN&UN&UN&UN&3&UN&{\it five dogs}\\ \hline &$-$&$-$&$-$&$-$&$-$&$-$&$-$&3sg&$-$&{\it dog}\\ every/each&$-$&$+$&UN&UN&$-$&$+$&$-$&3&UN&{\it few dogs}\\ &UN&UN&$+$&UN&UN&UN&UN&3&UN&{\it five dogs}\\ \hline a/an&$-$&$-$&$-$&$-$&$-$&$-$&$-$&3sg&$-$&{\it dog}\\ \hline some$_{1,2}$&$-$&$-$&$-$&$-$&$-$&$-$&$-$&3&$-$&{\it dog(s)}\\ some$_{1}$&UN&UN&$+$&UN&UN&UN&UN&3pl&UN&{\it dogs}\\ \hline &$-$&$-$&$-$&$-$&$-$&$-$&$-$&3sg&$-$&{\it dog}\\ any&$-$&$+$&UN&UN&$-$&$+$&$-$&3&UN&{\it few dogs}\\ &UN&UN&$+$&UN&UN&UN&UN&3&UN&{\it five dogs}\\ \hline &$-$&$-$&$-$&$-$&$-$&$-$&$-$&3sg&$-$&{\it dog}\\ another&$-$&$+$&UN&UN&$-$&$+$&$-$&3&UN&{\it few dogs}\\ &UN&UN&$+$&UN&UN&UN&UN&3&UN&{\it five dogs}\\ \hline few&$-$&$-$&$-$&$-$&$-$&$-$&$-$&3pl&$-$&{\it dogs}\\ \hline a 
few&$-$&$-$&$-$&$-$&$-$&$-$&$-$&3pl&$-$&{\it dogs}\\ \hline many&$-$&$-$&$-$&$-$&$-$&$-$&$-$&3pl&$-$&{\it dogs}\\ \hline many a/an&$-$&$-$&$-$&$-$&$-$&$-$&$-$&3sg&$-$&{\it dog}\\ \hline several&$-$&$-$&$-$&$-$&$-$&$-$&$-$&3pl&$-$&{\it dogs}\\ \hline various&$-$&$-$&$-$&$-$&$-$&$-$&$-$&3pl&$-$&{\it dogs}\\ \hline sundry&$-$&$-$&$-$&$-$&$-$&$-$&$-$&3pl&$-$&{\it dogs}\\ \hline no&$-$&$-$&$-$&$-$&$-$&$-$&$-$&3&$-$&{\it dog(s)}\\ \hline neither&$-$&$-$&$-$&$-$&$-$&$-$&$-$&3sg&$-$&{\it dog}\\ \hline either&$-$&$-$&$-$&$-$&$-$&$-$&$-$&3sg&$-$&{\it dog}\\ \hline \end{tabular} \caption{Selectional Restrictions Imposed by Determiners on the NP foot node} \label{det-ordering} \end{table} \begin{table}[htb] \centering \begin{tabular}{|l||c|c|c|c|c|c|c|c|c|} \hline\hline Det&definite&quan&card&gen&wh&decreas&const&agr&compl\\ \hline \hline &$-$&$-$&$-$&$-$&$-$&$-$&$-$&3&$-$\\ &$-$&$+$&UN&UN&$-$&$+$&$-$&3&UN\\ GENITIVE&$-$&$+$&UN&UN&$-$&$-$&$-$&3pl&$+$\\ &UN&UN&$+$&UN&UN&UN&UN&3&UN\\ &$-$&$+$&$-$&$-$&$-$&$-$&$+$&3pl&$-$\\ &$-$&$-$&$-$&$-$&$-$&$-$&$+$&3pl&$-$\\ \hline CARDINAL&$-$&$-$&$-$&$-$&$-$&$-$&$-$&3pl\footnotemark&$-$\\ \hline PARTITIVE&UN&UN&UN&UN&$-$&UN&UN&UN&UN\\ \hline \end{tabular} \caption{Selectional Restrictions Imposed by Groups of Determiners/Determiner Constructions} \label{det-ordering2} \end{table} \footnotetext{{\it one} differs from the rest of CARD in selecting singular nouns} \normalsize \section{The Wh-Feature} \label{agr-section} A determiner with a {\bf wh+} feature is always the left-most determiner in linear order since no determiners have selectional restrictions that allow them to adjoin onto an NP with a +wh feature value. The presence of a wh+ determiner makes the entire NP wh+, and this is correctly represented by the coindexation of the determiner and root NP nodes' values for the wh-feature. 
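A hypothetical sketch of this mechanism (ours, not the grammar's actual feature machinery) makes the two ingredients explicit: every determiner's foot node rejects a {\bf wh+} NP, and the root NP is coindexed with the adjoined determiner's wh value:

```python
# Toy sketch of the wh mechanism described above: no determiner may
# adjoin onto a wh+ NP, and the root NP's wh value is coindexed with
# the adjoined determiner's, so a wh+ determiner makes the NP wh+.

def adjoin_det(det_wh, np):
    """Adjoin a determiner (with wh value det_wh) onto an NP feature
    structure; return the resulting NP, or None if adjunction fails."""
    if np.get("wh") == "+":        # foot node rejects a wh+ NP
        return None
    result = dict(np)
    result["wh"] = det_wh          # coindexation with the root NP
    return result

dog = {"head": "dog"}              # bare noun, unspecified for wh

what_dog = adjoin_det("+", dog)    # "what dog"
assert what_dog["wh"] == "+"       # the whole NP is now wh+
assert adjoin_det("+", what_dog) is None   # "*which what dog" blocked
assert adjoin_det("-", what_dog) is None   # "*the what dog" blocked
```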
The selectional restrictions that {\bf wh+} determiners impose on the NP foot node of their trees allow them to adjoin only onto NPs that are {\bf wh-} or unspecified for the wh-feature. Therefore ungrammatical sequences such as {\it $\ast$which what dog} are impossible. The adjunction of {\bf wh+} determiners onto {\bf wh+} pronouns is also prevented by the same mechanism. \section{Multi-word Determiners} The system recognizes the multi-word determiners {\it a few} and {\it many a}. The features for a multi-word determiner are located on the parent node of its two components (see Figure~\ref{multi-det-tree}). We chose to represent these determiners as multi-word constituents because neither determiner retains the same set of features as either of its parts. For example, the determiner {\it a} is 3sg and {\it few} is decreasing, while {\it a few} is 3pl and increasing. Additionally, {\it many} is 3pl and {\it a} displays constancy, but {\it many a} is 3sg and does not display constancy. Example sentences appear in (\ex{1})-(\ex{2}). \begin{itemize} \item{Multi-word Determiners} \enumsentence{{\bf a few} teaspoons of sugar should be adequate .} \enumsentence{{\bf many a} man has attempted that stunt, but none have succeeded .} \end{itemize} \begin{figure}[htb] \centering \begin{tabular}{cc} {\psfig{figure=ps/det-files/betaDDnx.ps,height=5.0in}} \end{tabular}\\ \caption{Multi-word Determiner tree: $\beta$DDnx} \label{multi-det-tree} \end{figure} \section{Genitive Constructions} \label{genitives} There are two kinds of genitive constructions: genitive pronouns, and genitive NP's (which have an explicit genitive marker, {\it 's}, associated with them). It is clear from examples such as {\it her dog returned home\/} and {\it her five dogs returned home} vs {\it $\ast$dog returned home\/} that genitive pronouns function as determiners and, as such, they sequence with the rest of the determiners. The features for the genitives are the same as for other determiners.
Genitives are not required to agree with either the determiners or the nouns in the NPs that they modify. The value of the {\bf agr} feature for an NP with a genitive determiner depends on the NP to which the genitive determiner adjoins. While it might seem to make sense to take {\it their} as 3pl, {\it my} as 1sg, and {\it Alfonso's} as 3sg, this number and person information only affects the genitive NP itself and bears no relationship to the number and person of the NPs with these items as determiners. Consequently, we have represented {\bf agr} as unspecified for genitives in Table \ref{det-values}. Genitive NP's are particularly interesting because they are potentially recursive structures. Complex NP's can easily be embedded within a determiner. \enumsentence{[[[John]'s friend from high school]'s uncle]'s mother came to town.} There are two things to note in the above example. One is that in embedded NPs, the genitive morpheme comes at the end of the NP phrase, even if the head of the NP is at the beginning of the phrase. The other is that the determiner of an embedded NP can also be a genitive NP, hence the possibility of recursive structures. In the XTAG grammar, the genitive marker, {\it 's}, is separated from the lexical item that it is attached to and given its own category (G). In this way, we can allow the full complexity of NP's to come from the existing NP system, including any recursive structures. As with the simple determiners, there is one auxiliary tree structure for genitives which adjoins to NPs. As can be seen in Figure~\ref{gen-trees}, this tree is anchored by the genitive marker {\it 's} and has a branching D node which accommodates the additional internal structure of genitive determiners.
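The recursion this G category licenses can be pictured with a toy string-building sketch (ours; the grammar itself builds trees, not strings): a complete NP plus the genitive marker forms a determiner, which can in turn introduce a larger NP:

```python
# Toy illustration (not the XTAG implementation) of genitive recursion:
# NP + 's -> determiner, and that determiner can begin another NP.

def genitive_det(np_string):
    """NP + 's  ->  determiner (the G marker attaches NP-finally)."""
    return np_string + "'s"

def np(det, head, modifiers=""):
    """Assemble a flat NP string from its parts."""
    return " ".join(part for part in (det, head, modifiers) if part)

inner = np("", "John")                                     # John
mid = np(genitive_det(inner), "friend", "from high school")
outer = np(genitive_det(mid), "uncle")
top = np(genitive_det(outer), "mother")
assert top == "John's friend from high school's uncle's mother"
```

Note that each genitive marker attaches at the end of the embedded NP, exactly as in the bracketed example above.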
Also, like simple determiners, there is one initial tree structure (Figure~\ref{subst-genNP-tree}) available for substitution where needed, as in, for example, the Determiner Gerund NP tree (see Chapter~\ref{gerunds-chapter} for discussion on determiners for gerund NP's). \begin{figure}[ht] \centering \begin{tabular}{c} {\psfig{figure=ps/det-files/betaGnx-features.ps,height=13.0cm}}\\ \end{tabular} \caption{Genitive Determiner Tree} \label{gen-trees} \end{figure} \begin{figure}[htb] \centering \begin{tabular}{c} {\psfig{figure=ps/det-files/alphaDnxG.ps,height=1.8in}}\\ \end{tabular} \caption{Genitive NP tree for substitution: $\alpha$DnxG} \label{subst-genNP-tree} \end{figure} Since the NP node which is sister to the G node could also have a genitive determiner in it, the type of genitive recursion shown in (\ex{0}) is quite naturally accounted for by the genitive tree structure used in our analysis. \section{Partitive Constructions} \label{partitives} The deciding factor for including an analysis of partitive constructions (e.g.\ {\it some kind of}, {\it all of\/}) as complex determiner constructions was the behavior of the agreement features. If partitive constructions are analyzed as an NP with an adjoined PP, then we would expect to get agreement with the head of the NP (as in ({\ex{1}})). If, on the other hand, we analyze them as a determiner construction, then we would expect to get agreement with the noun that the determiner sequence modifies (as we do in ({\ex{2}})). \enumsentence{a {\it kind} [of these machines] {\it is} prone to failure.} \enumsentence{[a kind of] these {\it machines are} prone to failure.} In other words, for partitive constructions, the semantic head of the NP is the second rather than the first noun in linear order. That the agreement shown in ({\ex{0}}) is possible suggests that the second noun in linear order in these constructions should also be treated as the syntactic head.
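The contrast between the two analyses can be summarized in a toy sketch (ours, purely illustrative): under the partitive analysis the NP agrees with the modified noun, while under the PP-adjunct analysis it agrees with the first noun:

```python
# Toy contrast (not XTAG code) between the two analyses of
# "a kind of these machines" discussed above.

def partitive_np_agr(partitive_noun_agr, modified_noun_agr):
    """Partitive analysis: '[a kind of] these machines' agrees with
    'machines', the noun the determiner sequence modifies."""
    return modified_noun_agr

def pp_np_agr(head_noun_agr, pp_noun_agr):
    """PP-adjunct analysis: 'a kind [of these machines]' agrees with
    the head noun 'kind'."""
    return head_noun_agr

assert partitive_np_agr("3sg", "3pl") == "3pl"   # "... machines are ..."
assert pp_np_agr("3sg", "3pl") == "3sg"          # "... kind is ..."
```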
Note that both the partitive and PP readings are usually possible for a particular NP. In the cases where either the partitive or the PP reading is preferred, we take it to be just that, a preference, most appropriately modeled not in the grammar but in a component such as the heuristics used with the XTAG parser for reducing the analyses produced to the most likely. In our analysis the partitive tree in Figure~\ref{part-tree} is anchored by one of a limited group of nouns that can appear in the determiner portion of a partitive construction. A rough semantic characterization of these nouns is that they either represent quantity (e.g. {\it part, half, most, pot, cup, pound} etc.) or classification (e.g. {\it type, variety, kind, version} etc.). In the absence of a more implementable characterization, we use a list of such nouns compiled from a descriptive grammar \cite{quirk85}, a thesaurus, and from online corpora. In our grammar the nouns on the list are the only ones that select the partitive determiner tree. \begin{figure}[ht] \centering \begin{tabular}{c} {\psfig{figure=ps/det-files/betaNofnx.ps,height=17.0cm}}\\ \end{tabular} \caption{Partitive Determiner Tree} \label{part-tree} \end{figure} Like other determiners, partitives can adjoin to an NP consisting of just a noun ({\it `[a certain kind of] machine'}), or adjoin to NPs that already have determiners ({\it `[some parts of] these machines'}). Notice that just as for the genitives, the complexity and the recursion are contained below the D node and the rest of the structure is the same as for simple determiners. \section{Adverbs, Noun Phrases, and Determiners} \label{adverbial-section} Many adverbs interact with the noun phrase and determiner system in English. For example, consider sentences (\ref{approx})-(\ref{double}) below.
\enumsentence{\label{approx}{\bf Approximately} thirty people came to the lecture.} \enumsentence{\label{practically}{\bf Practically} every person in the theater was laughing hysterically during that scene.} \enumsentence{\label{only}{\bf Only} John's crazy mother can make stuffing that tastes so good.} \enumsentence{\label{relatively}{\bf Relatively} few programmers remember how to program in COBOL.} \enumsentence{\label{not}{\bf Not} every martian would postulate that all humans speak a universal language.} \enumsentence{\label{enough}{\bf Enough} money was gathered to pay off the group gift.} \enumsentence{\label{quite}{\bf Quite} a few burglaries occurred in that neighborhood last year.} \enumsentence{\label{double}I wanted to be paid {\bf double} the amount they offered.} Although there is some debate in the literature as to whether these items should be classified as determiners or adverbs, we believe that they are in fact adverbs. They exhibit a broader distribution than either determiners or adjectives in that they can modify many other phrasal categories, including adjectives, verb phrases, prepositional phrases, and other adverbs. Using the determiner feature system, we can obtain a close approximation to an accurate characterization of the behavior of the adverbs that interact with noun phrases and determiners. Adverbs can adjoin to either a determiner or a noun phrase (see Figure~\ref{det-adv-trees}), with the adverbs restricting what types of NPs or determiners they can modify by imposing feature requirements on the foot D or NP node. For example, the adverb {\it approximately}, seen in (\ref{approx}) above, selects for determiners that are {\bf card+}. The adverb {\it enough} in (\ref{enough}) is an example of an adverb that selects for a noun phrase, specifically a noun phrase that is not modified by a determiner.
\begin{figure}[ht] \centering \begin{tabular}{ccc} {\psfig{figure=ps/det-files/advdet.ps,height=5.0cm}}&& {\psfig{figure=ps/det-files/advnoun.ps,height=5.0cm}}\\ (a)&&(b) \end{tabular} \caption{(a) Adverb modifying a determiner; (b) Adverb modifying a noun phrase} \label{det-adv-trees} \end{figure} Most of the adverbs that modify determiners and NPs divide into six classes, with some minor variation within classes, based on the pattern of these restrictions. Three of the classes are adverbs that modify determiners, while the other three modify NPs. The largest of the six classes is the class of adverbs that modify cardinal determiners. This class includes, among others, the adverbs {\it about}, {\it at most}, {\it exactly}, {\it nearly}, and {\it only}. These adverbs have the single restriction that they must adjoin to determiners that are {\bf card+}. Another class of adverbs consists of those that can modify the determiners {\it every}, {\it all}, {\it any}, and {\it no}. The adverbs in this class are {\it almost}, {\it nearly}, and {\it practically}. Closely related to this class are the adverbs {\it mostly} and {\it roughly}, which are restricted to modifying {\it every} and {\it all}, and {\it hardly}, which can only modify {\it any}. To select for {\it every}, {\it all}, and {\it any}, these adverbs select for determiners that are [{\bf quan+}, {\bf card-}, {\bf const+}, {\bf compl+}], and to select for {\it no}, the adverbs choose a determiner that is [{\bf quan+}, {\bf decreas+}, {\bf const+}]. The third class of adverbs that modify determiners are those that modify the determiners {\it few} and {\it many}, representable by the feature sequences [{\bf quan+}, {\bf decreas+}, {\bf const-}] and [{\bf quan+}, {\bf decreas-}, {\bf const-}, {\bf 3pl}, {\bf compl+}], respectively. Examples of these adverbs are {\it awfully}, {\it fairly}, {\it relatively}, and {\it very}.
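The restriction pattern for these classes can be sketched as a simple feature-matching check. The feature vectors below are our own abbreviations of the determiner feature values given earlier, and the function is illustrative rather than the actual XTAG machinery:

```python
# Toy sketch of the adverb restriction mechanism: an adverb adjoins to
# a D node only if the determiner's features satisfy every feature the
# adverb specifies on its foot node. Feature vectors are abbreviated.

DET_FEATURES = {
    "thirty": {"card": "+"},
    "every":  {"quan": "+", "card": "-", "const": "+", "compl": "+"},
    "few":    {"quan": "+", "card": "-", "decreas": "+", "const": "-"},
    "the":    {"definite": "+", "quan": "-"},
}

def can_modify(adverb_restrictions, det):
    """True iff the determiner satisfies every feature requirement the
    adverb imposes on its foot node."""
    feats = DET_FEATURES[det]
    return all(feats.get(f) == v for f, v in adverb_restrictions.items())

approximately = {"card": "+"}                   # class 1: cardinals
practically = {"quan": "+", "card": "-", "const": "+", "compl": "+"}

assert can_modify(approximately, "thirty")      # "approximately thirty"
assert not can_modify(approximately, "the")     # "*approximately the"
assert can_modify(practically, "every")         # "practically every"
assert not can_modify(practically, "few")       # "*practically few"
```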
Of the three classes of adverbs that modify noun phrases, one actually consists of the single adverb {\it not}, which modifies only NPs whose determiner is {\bf compl+}. Another class consists of the focus adverbs, {\it at least}, {\it even}, {\it only}, and {\it just}. These adverbs select NPs that are {\bf wh-} and {\bf card-}. For the NPs that are {\bf card+}, the focus adverbs actually modify the cardinal determiner, and so these adverbs are also included in the first class of adverbs mentioned in the previous paragraph. The last major class that modifies NPs consists of the adverbs {\it double} and {\it twice}, which select NPs that are [{\bf definite+}] (i.e., {\it the}, {\it this/that/those/these}, and the genitives). Although these restrictions succeed in recognizing the correct determiner/adverb sequences, a few unacceptable sequences slip through. For example, in handling the second class of adverbs mentioned above, {\it every}, {\it all}, and {\it any} share the features [{\bf quan+}, {\bf card-}, {\bf const+}, {\bf compl+}] with {\it a} and {\it another}, and so {\it $\ast$nearly a man} is incorrectly accepted by this system. In addition to this over-generation within a major class, the adverb {\it quite} selects for determiners and NPs in what seems to be a purely idiosyncratic fashion. Consider the following examples. \eenumsentence{\label{quite2}\item[a.] {\bf Quite} a few members of the audience had to leave. \item[b.] There were {\bf quite} many new participants at this year's conference. \item[c.] {\bf Quite} few triple jumpers have jumped that far. \item[d.] Taking the day off was {\bf quite} the right thing to do. \item[e.] The recent negotiation fiasco is {\bf quite} another issue. \item[f.] Pandora is {\bf quite} a cat!} In examples (\ref{quite2}a)-(\ref{quite2}c), {\it quite} modifies the determiner, while in (\ref{quite2}d)-(\ref{quite2}f), {\it quite} modifies the entire noun phrase.
Clearly, it functions in a different manner in the two sets of sentences; in (\ref{quite2}a)-(\ref{quite2}c), {\it quite} intensifies the amount implied by the determiner, whereas in (\ref{quite2}d)-(\ref{quite2}f), it singles out an individual from the larger set to which it belongs. To capture the selectional restrictions needed for (\ref{quite2}a)-(\ref{quite2}c), we utilize the two sets of features mentioned previously for selecting {\it few} and {\it many}. However, {\it a few} cannot be singled out so easily; using the sequence [{\bf quan+}, {\bf card-}, {\bf decreas-}, {\bf const+}, {\bf 3pl}, {\bf compl-}], we also accept the ungrammatical NPs {\it $\ast$quite several members} and {\it $\ast$quite some members} (where {\it quite} modifies {\it some}). In selecting {\it the} as in (d) with the features [{\bf definite+}, {\bf gen-}, {\bf 3sg}], {\it quite} also selects {\it this} and {\it that}, which are ungrammatical in this position. Examples (\ref{quite2}e) and (\ref{quite2}f) present yet another obstacle in that in selecting {\it another} and {\it a}, {\it quite} erroneously selects {\it every} and {\it any}. It may be that there is an undiscovered semantic feature that would alleviate these difficulties. However, on the whole, the determiner feature system we have proposed can be used as a surprisingly efficient method of characterizing the interaction of adverbs with determiners and noun phrases. \chapter{Ditransitive constructions and dative shift} \label{double-objs} Verbs such as {\it give\/} and {\it put\/} that require two objects, as shown in examples (\ex{1})-(\ex{4}), are termed ditransitive. \enumsentence{Christy gave a cannoli to Beth Ann .} \enumsentence{$\ast$Christy gave Beth Ann .} \enumsentence{Christy put a cannoli in the refrigerator .} \enumsentence{$\ast$Christy put a cannoli .} The indirect objects {\it Beth Ann\/} and {\it refrigerator\/} appear in these examples in the form of PP's. 
Within the set of ditransitive verbs there is a subset that also allows two NP's, as in (\ex{1}). As can be seen from (\ex{1}) and (\ex{2}), this two NP, or double-object, construction is grammatical for {\it give\/} but not for {\it put}. \enumsentence{Christy gave Beth Ann a cannoli .} \enumsentence{$\ast$Christy put the refrigerator the cannoli .} The alternation between (\ex{-5}) and (\ex{-1}) is known as dative shift.\footnote{In languages similar to English that have overt case marking, indirect objects would be marked with dative case. It has also been suggested that for English the preposition {\it to} serves as a dative case marker.} In order to account for verbs with dative shift, the English XTAG grammar includes structures for both variants in the tree family Tnx0Vnx1Pnx2. The declarative trees for the shifted and non-shifted alternations are shown in Figure~\ref{dative-alt}. \begin{figure}[htb] \centering \begin{tabular}{ccc} {\psfig{figure=ps/double-obj-files/alphanx0Vnx1Pnx2.ps,height=2.0in}}& \hspace*{0.5in} & {\psfig{figure=ps/double-obj-files/alphanx0Vnx2nx1.ps,height=1.1in}} \\ (a)&\hspace*{0.5in}&(b)\\ \end{tabular} \caption{Dative shift trees: $\alpha$nx0Vnx1Pnx2 (a) and $\alpha$nx0Vnx2nx1 (b)} \label{dative-alt} \label{2;1,2} \end{figure} The indexing of nodes in these two trees represents the fact that the semantic role of the indirect object (NP$_2$) in Figure~\ref{dative-alt}(a) is the same as that of the direct object (NP$_2$) in Figure~\ref{dative-alt}(b) (and vice versa). This use of indexing is consistent with our treatment of other constructions such as passive and ergative. Verbs that do not show this alternation and have only the NP PP structure (e.g. {\it put\/}) select the tree family Tnx0Vnx1pnx2. Unlike the Tnx0Vnx1Pnx2 family, the Tnx0Vnx1pnx2 tree family does not contain trees for the NP NP structure. Other verbs such as {\it ask} allow only the NP NP structure as shown in (\ex{1}) and (\ex{2}).
\enumsentence{Beth Ann asked Srini a question .} \enumsentence{$\ast$Beth Ann asked a question to Srini .} Verbs that only allow the NP NP structure select the tree family Tnx0Vnx1nx2. This tree family does not have the trees for the NP PP structure. Notice that in Figure~\ref{dative-alt}(a) the preposition {\it to\/} is built into the tree. There are other apparent cases of dative shift with {\it for}, such as in (\ex{1}) and (\ex{2}), that we have taken to be structurally distinct from the cases with {\it to}. \enumsentence{Beth Ann baked Dusty a biscuit .} \enumsentence{Beth Ann baked a biscuit for Dusty .} \cite{mccawley88} notes: \begin{quote} A ``{\it for-dative}'' expression in underlying structure is external to the V with which it is combined, in view of the fact that the latter behaves as a unit with regard to all relevant syntactic phenomena. \end{quote} In other words, the {\it for} PP's that appear to undergo dative shift are actually adjuncts, not complements. Examples (\ex{1}) and (\ex{2}) demonstrate that PP's with {\it for} are optional while ditransitive {\it to} PP's are not. \enumsentence{Beth Ann made dinner .} \enumsentence{$\ast$Beth Ann gave dinner .} Consequently, in the XTAG grammar the apparent dative shift with {\it for} PP's is treated as Tnx0Vnx1nx2 for the NP NP structure, and as a transitive plus an adjoined adjunct PP for the NP PP structure. To account for the ditransitive {\it to} PP's, the preposition {\it to} is built into the tree family Tnx0Vnx1tonx2. This accounts for the fact that {\it to} is the only preposition allowed in dative shift constructions. \cite{mccawley88} also notes that the {\it to} and {\it for} cases differ with respect to passivization; the indirect objects with {\it to} may be the subjects of corresponding passives while the alleged indirect objects with {\it for} cannot, as in sentences~(\ex{1})-(\ex{4}). 
Note that the passivization examples are for NP NP structures of verbs that take {\it to} or {\it for} PP's. \enumsentence{Beth Ann gave Clove dinner .} \enumsentence{Clove was given dinner (by Beth Ann) .} \enumsentence{Beth Ann made Clove dinner .} \enumsentence{?Clove was made dinner (by Beth Ann) .} However, we believe this to be incorrect, and that the indirect objects in the {\it for} case are allowed to be the subjects of passives, as in sentences~(\ex{1})-(\ex{2}). The apparent strangeness of sentence~(\ex{0}) is caused by interference from other interpretations of {\it Clove was made dinner .} \enumsentence{Dania baked Doug a cake .} \enumsentence{Doug was baked a cake by Dania .} \chapter{Ergatives} \label{ergatives} Verbs in English that are termed ergative display the kind of alternation shown in the sentences in (\ex{1}) below. \enumsentence{The sun melted the ice .\\ The ice melted .} The pattern of ergative pairs as seen in (\ex{0}) is for the object of the transitive sentence to be the subject of the intransitive sentence. The literature discussing such pairs is based largely on syntactic models that involve movement, particularly GB. Within that framework two basic approaches are discussed: \begin{itemize} \item {\bf Derived Intransitive}\\ The intransitive member of the ergative pair is derived through processes of movement and deletion from: \begin{itemize} \item a transitive D-structure \cite{Burzio86}; or \item transitive lexical structure \cite{HaleKeyser86,HaleKeyser87} \end{itemize} \item {\bf Pure Intransitive}\\ The intransitive member is intransitive at all levels of the syntax and the lexicon and is not related to the transitive member syntactically or lexically \cite{Napoli88}. \end{itemize} The Derived Intransitive approach's notions of movement in the lexicon or in the grammar are not represented as such in the XTAG grammar. However, distinctions drawn in these arguments can be translated to the FB-LTAG framework.
In the XTAG grammar the difference between these two approaches is not a matter of movement but rather a question of tree family membership. The relation between sentences represented in terms of movement in other frameworks is represented in XTAG by membership in the same tree family. Wh-questions and their indicative counterparts are one example of this. Adopting the Pure Intransitive approach suggested by \cite{Napoli88} would mean placing the intransitive ergatives in a tree family with other intransitive verbs and separate from the transitive variants of the same verbs. This would result in a grammar that represented intransitive ergatives as more closely related to other intransitives than to their transitive counterparts. The only hint of the relation between the intransitive ergatives and the transitive ergatives would be that ergative verbs would select both tree families. While this is a workable solution, it is an unattractive one for the English XTAG grammar because semantic coherence is implicitly associated with tree families in our analysis of other constructions. In particular, constancy in thematic role is represented by constancy in node names across sentence types within a tree family. For example, if the object of a declarative tree is NP$_{1}$ the subject of the passive tree(s) in that family will also be NP$_{1}$. The analysis that has been implemented in the English XTAG grammar is an adaptation of the Derived Intransitive approach. The ergative verbs select one family, Tnx0Vnx1, that contains both transitive and intransitive trees. The {\bf$<$trans$>$} feature appears on the intransitive ergative trees with the value {\bf --} and on the transitive trees with the value {\bf +}. This creates the two possibilities needed to account for the data. \begin{itemize} \item {\bf intransitive ergative/transitive alternation.} These verbs have transitive and intransitive variants as shown in sentences~(\ex{1}) and (\ex{2}). 
\enumsentence{The sun melted the ice cream .} \enumsentence{The ice cream melted .} In the English XTAG grammar, verbs with this behavior are left unspecified as to value for the {\bf$<$trans$>$} feature. This lack of specification allows these verbs to anchor either type of tree in the Tnx0Vnx1 tree family because the unspecified {\bf$<$trans$>$} value of the verb can unify with either {\bf +} or {\bf --} values in the trees. \item {\bf transitive only.} Verbs of this type select only the transitive trees and do not allow intransitive ergative variants, as in the pattern shown in sentences~(\ex{1}) and (\ex{2}). \enumsentence{Elmo borrowed a book .} \enumsentence{$\ast$A book borrowed .} The restriction to selecting only transitive trees is accomplished by setting the {\bf$<$trans$>$} feature value to {\bf +} for these verbs. \end{itemize} \begin{figure}[htb] \centering \mbox{} \psfig{figure=ps/erg-files/alphaEnx1V.ps,height=4.0cm} \caption{Ergative Tree: $\alpha$Enx1V} \label{decl-erg-tree} \label{2;14,1} \end{figure} The declarative ergative tree is shown in Figure~\ref{decl-erg-tree} with the {\bf $<$trans$>$} feature displayed. Note that the index of the subject NP indicates that it originated as the object of the verb. \chapter{Evaluation and Results} \label{evaluation} In this appendix we describe various evaluations done of the XTAG grammar. Some of these evaluations were done on an earlier version of the XTAG grammar (the 1995 release), while others were done more recently. We will try to indicate in each section which version was used. \section{Parsing Corpora} In the XTAG project, we have used corpus analysis in two main ways: (1) to measure the performance of the English grammar on a given genre and (2) to identify gaps in the grammar. The second type of evaluation involves performing detailed error analysis on the sentences rejected by the parser, and we have done this several times on WSJ and Brown data.
Based on the results of such analysis, we prioritize upcoming grammar development efforts. The results of a recent error analysis are shown in Table \ref{errors}. The table does not show errors in parsing due to mistakes made by the POS tagger, which contributed the largest number of errors: 32. At this point, we have added a treatment of punctuation to handle \#1, an analysis of time NPs (\#2), a large number of multi-word prepositions (part of \#3), gapless relative clauses (\#7), and bare infinitives (\#14), and have added the missing subcategorization (\#3) and missing lexical entry (\#12). We are in the process of extending the parser to handle VP coordination (\#9) (see Section~\ref{conjunction} on recent work to handle VP and other predicative coordination). We find that this method of error analysis is very useful in focusing grammar development in a productive direction. \begin{table}[htb] \centering \begin{tabular}{|l|l|l|} \hline Rank & No.\ of errors & Category of error \\ \hline \#1 & 11 & Parentheticals and appositives \\ \hline \#2 & 8 & Time NP \\ \hline \#3 & 8 & Missing subcat \\ \hline \#4 & 7 & Multi-word construction \\ \hline \#5 & 6 & Ellipsis \\ \hline \#6 & 6 & Not sentences \\ \hline \#7 & 3 & Relative clause with no gap \\ \hline \#8 & 2 & Funny coordination \\ \hline \#9 & 2 & VP coordination \\ \hline \#10 & 2 & Inverted predication \\ \hline \#11 & 2 & Who knows \\ \hline \#12 & 1 & Missing entry \\ \hline \#13 & 1 & Comparative? \\ \hline \#14 & 1 & Bare infinitive \\ \hline \end{tabular} \caption{Results of Corpus Based Error Analysis} \label{errors} \end{table} To ensure that we are not losing coverage of certain phenomena as we extend the grammar, we have a benchmark set of grammatical and ungrammatical sentences from this technical report. We parse these sentences periodically to ensure that in adding new features and constructions to the grammar, we are not blocking previous analyses.
There are approximately $590$ example sentences in this set. \section{TSNLP} In addition to corpus-based evaluation, we have also run the English Grammar on the Test Suites for Natural Language Processing (TSNLP) English corpus \cite{Lehmann96}. The corpus is intended to be a systematic collection of English grammatical phenomena, including complementation, agreement, modification, diathesis, modality, tense and aspect, sentence and clause types, coordination, and negation. It contains 1409 grammatical sentences and phrases and 3036 ungrammatical ones. \begin{table*}[htb] \centering \begin{tabular}{|l|c|c|} \hline Error Class & \% & Example \\ \hline POS Tag & 19.7\% & She adds to/V it , He noises/N him abroad \\ \hline Missing lex item & 43.3\% & {\it used} as an auxiliary V, {\it calm NP down} \\ \hline Missing tree & 21.2\% & {\it should've}, {\it bet NP NP S}, {\it regard NP as Adj} \\ \hline Feature clashes & 3\% & {\it My every firm}, {\it All money} \\ \hline Rest&12.8\% & {\it approx}, {\it e.g.} \\ \hline \end{tabular} \caption{Breakdown of TSNLP Errors} \label{tsnlp-table} \end{table*} There were 42 examples which we judged ungrammatical, and removed from the test corpus. These were sentences with conjoined subject pronouns, where one or both were accusative, e.g. {\it Her and him succeed.} Overall, we parsed 61.4\% of the 1367 remaining sentences and phrases. The errors were of various types, broken down in Table~\ref{tsnlp-table}. As with the error analysis described above, we used this information to help direct our grammar development efforts. It also highlighted the fact that our grammar is heavily slanted toward American English---our grammar did not handle {\it dare} or {\it need} as auxiliary verbs, and there were a number of very British particle constructions, e.g. {\it She misses him out}. 
One general problem with the test-suite is that it uses a very restricted lexicon, and if there is one problematic lexical item, it is likely to appear a large number of times and cause a disproportionate amount of grief. {\it Used to} appears 33 times and we got all 33 wrong. However, it must be noted that the XTAG grammar has analyses for syntactic phenomena that were not represented in the TSNLP test suite, such as sentential subjects and subordinating clauses, among others. This effort was, therefore, useful in highlighting some deficiencies in our grammar, but did not provide the same sort of general evaluation as parsing corpus data. \section{Chunking and Dependencies in XTAG Derivations} We evaluated the XTAG parser for the text chunking task~\cite{abney91}. In particular, we compared NP chunks and verb group (VG) chunks\footnote{We treat a sequence of verbs and verbal modifiers, including auxiliaries, adverbs, and modals, as constituting a verb group.} produced by the XTAG parser with the NP and VG chunks from the Penn Treebank~\cite{marcus93}. The test involved $940$ sentences of length $15$ words or less from sections $17$ to $23$ of the Penn Treebank, parsed using the XTAG English grammar. The results are given in Table~\ref{chunking-results}.
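The recall and precision figures in these chunking experiments are standard span-matching scores: a proposed chunk counts as correct only when it matches a Treebank chunk exactly. The following is a minimal sketch of how such scores can be computed, not the scoring code actually used in these experiments; the {\tt (start, end)} token-span encoding and the function name are our own.

```python
# Sketch of chunk-level recall/precision over (start, end) token spans.
# The span encoding and function name are illustrative, not XTAG's tools.

def chunk_scores(gold_chunks, parser_chunks):
    """Return (recall, precision) for one sentence's chunk spans."""
    gold = set(gold_chunks)
    pred = set(parser_chunks)
    correct = len(gold & pred)          # spans that match exactly
    recall = correct / len(gold) if gold else 1.0
    precision = correct / len(pred) if pred else 1.0
    return recall, precision

# "Borrowed shares on the Amex rose to another record" (tokens 0-8):
treebank = [(0, 4), (5, 5), (6, 8)]        # Treebank chunks
xtag = [(0, 1), (2, 4), (5, 5), (6, 8)]    # parser chunks
r, p = chunk_scores(treebank, xtag)
print(f"recall={r:.2%}, precision={p:.2%}")  # 2 of 3 and 2 of 4 spans match
```

Corpus-level figures would typically aggregate the correct, gold, and proposed counts over all sentences rather than averaging per-sentence ratios.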
\begin{table*}[htb] \centering \begin{tabular}{|l|c|c|} \hline & NP Chunking & VG Chunking \\ \hline Recall & 82.15\% & 74.51\% \\ \hline Precision & 83.94\% & 76.43\% \\ \hline \end{tabular} \caption{Text Chunking performance of the XTAG parser} \label{chunking-results} \end{table*} \begin{table*}[htb] \centering \begin{tabular}{|c|c|c|c|} \hline System & Training Size & Recall & Precision \\ \hline \hline Ramshaw \& Marcus & Baseline & 81.9\% & 78.2\% \\ \hline Ramshaw \& Marcus & 200,000 & 90.7\% & 90.5\% \\ (without lexical information) & & & \\ \hline Ramshaw \& Marcus & 200,000 & 92.3\% & 91.8\% \\ (with lexical information) & & & \\ \hline \hline Supertags & Baseline & 74.0\% & 58.4\% \\ \hline Supertags & 200,000 & 93.0\% & 91.8\% \\ \hline Supertags & 1,000,000 & 93.8\% & 92.5\% \\ \hline \end{tabular} \caption{Performance comparison of the transformation based noun chunker and the supertag based noun chunker} \label{nc-compare} \end{table*} As described earlier, the results cannot be directly compared with other results in chunking such as in~\cite{lance&mitch95} since we do not train from the Treebank before testing. However, in earlier work, text chunking was done using a technique called supertagging~\cite{srini97iwpt} (which uses the XTAG English grammar) which can be used to train from the Treebank. The comparative results of text chunking for supertagging and other methods of chunking are shown in Table~\ref{nc-compare}.\footnote{It is important to note in this comparison that the supertagger uses lexical information on a per word basis only to pick an initial set of supertags for a given word.} We also performed experiments to determine the accuracy of the derivation structures produced by XTAG on WSJ text, where the derivation tree produced by XTAG after parsing is interpreted as a dependency parse. We took sentences that were $15$ words or less from the Penn Treebank~\cite{marcus93}.
The sentences were collected from sections $17$--$23$ of the Treebank. $9891$ of these sentences were given at least one parse by the XTAG system. Since XTAG typically produces several derivations for each sentence, we simply picked a single derivation from the list for this evaluation. Better results might be achieved by ranking the output of the parser using the sort of approach described in~\cite{srinietal95}. There were some striking differences in the dependencies implicit in the Treebank and those given by XTAG derivations. For instance, often a subject NP in the Treebank is linked with the first auxiliary verb in the tree, either a modal or a copular verb, whereas in the XTAG derivation, the same NP will be linked to the main verb. Also, XTAG produces some dependencies within an NP, while a large number of words in NPs in the Treebank are directly dependent on the verb. To normalize for these facts, we took the output of the NP and VG chunker described above and accepted as correct any dependencies that were completely contained within a single chunk. For example, for the sentence {\em Borrowed shares on the Amex rose to another record}, the XTAG and Treebank chunks are shown below.
\begin{verbatim}
XTAG chunks:     [Borrowed shares] [on the Amex] [rose] [to another record]
Treebank chunks: [Borrowed shares on the Amex] [rose] [to another record]
\end{verbatim}
Using these chunks, we can normalize for the fact that in the dependencies produced by XTAG {\em borrowed} is dependent on {\em shares} (i.e. in the same chunk) while in the Treebank {\em borrowed} is directly dependent on the verb {\em rose}. That is to say, we are looking at links between \underline{chunks}, not between \underline{words}. The dependencies for the sentence are given below.
\begin{verbatim}
XTAG dependency          Treebank dependency
Borrowed::shares         Borrowed::rose
shares::rose             shares::rose
on::shares               on::shares
the::Amex                the::Amex
Amex::on                 Amex::on
rose::NIL                rose::NIL
to::rose                 to::rose
another::record          another::record
record::to               record::to
\end{verbatim}
After this normalization, testing simply consisted of counting how many of the dependency links produced by XTAG matched the Treebank dependency links. Due to some tokenization and subsequent alignment problems we could only test on $835$ of the original $9891$ parsed sentences. There were a total of $6135$ dependency links extracted from the Treebank. The XTAG parses also produced $6135$ dependency links for the same sentences. Of the dependencies produced by the XTAG parser, $5165$ were correct giving us an accuracy of $84.2\%$. \section{Comparison with IBM} The evaluation in this section was done with the earlier 1995 release of the grammar. This section describes an experiment to measure the crossing bracket accuracy of the XTAG-parsed IBM-manual sentences. In this experiment, XTAG parses of 1100 IBM-manual sentences have been ranked using certain heuristics. The ranked parses have been compared\footnote{We used the parseval program written by Phil Harrison (phil@atc.boeing.com).} against the bracketing given in the Lancaster Treebank of IBM-manual sentences\footnote{The Treebank was obtained through Salim Roukos (roukos@watson.ibm.com) at IBM.}. Table~\ref{ibm-results} shows the results obtained by XTAG in this experiment. It also shows the results of the latest IBM statistical grammar (\cite{jelineketal94}) on the same genre of sentences. Only the highest-ranked parse of both systems was used for this evaluation. Crossing Brackets is the percentage of sentences with no pairs of brackets crossing the Treebank bracketing (i.e. (~(~a~b~)~c~) has a crossing bracket measure of one if compared to (~a~(~b~c~)~)~).
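The crossing-brackets check just described can be sketched as follows. Constituents are represented as inclusive {\tt (start, end)} token spans; two spans cross when they overlap without either containing the other. This is an illustration of the measure, not the parseval program itself.

```python
# Sketch of the crossing-brackets measure; constituents are inclusive
# (start, end) token spans.  Illustrative only, not the parseval program.

def crosses(cand, gold):
    (i, j), (k, l) = cand, gold
    # overlap without containment, in either direction
    return (i < k <= j < l) or (k < i <= l < j)

def crossing_brackets(candidate, gold):
    """Number of candidate constituents crossing some gold constituent."""
    return sum(any(crosses(c, g) for g in gold) for c in candidate)

# ( ( a b ) c ) scored against ( a ( b c ) ), with tokens a=0, b=1, c=2:
print(crossing_brackets([(0, 1), (0, 2)], [(1, 2), (0, 2)]))  # prints 1
```

A sentence then counts as correct under the crossing-brackets accuracy reported in Table~\ref{ibm-results} exactly when this count is zero.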
Recall is the ratio of the number of correct constituents in the XTAG parse to the number of constituents in the corresponding Treebank sentence. Precision is the ratio of the number of correct constituents to the total number of constituents in the XTAG parse. \begin{table}[ht] \centering \begin{tabular}{|l|c|c|c|c|} \hline System & \# of & Crossing Bracket & Recall & Precision \\ & sentences & Accuracy & & \\ \hline XTAG & 1100 & 81.29\% & 82.34\% & 55.37\% \\ \hline IBM Statistical & 1100 & 86.20\% & 86.00\% & 85.00\% \\ grammar & & & &\\ \hline \end{tabular} \vspace{0.1in} \caption{Performance of XTAG on IBM-manual sentences} \label{ibm-results} \end{table} As can be seen from Table~\ref{ibm-results}, the precision figure for the XTAG system is considerably lower than that for IBM. For the purposes of comparative evaluation against other systems, we had to use the same crossing-brackets metric, though we believe that the crossing-brackets measure is inadequate for evaluating a grammar like XTAG. There are two reasons for the inadequacy. First, the parse generated by XTAG is much richer in its representation of the internal structure of certain phrases than those present in manually created treebanks (e.g. IBM: [$_N$ your personal computer], XTAG: [$_{NP}$ [$_G$ your] [$_N$ [$_N$ personal] [$_N$ computer]]]). This is reflected in the number of constituents per sentence, shown in the last column of Table~\ref{const-no}.\footnote{We are aware of the fact that increasing the number of constituents also increases the recall percentage. However, we believe that this is a legitimate gain.} \begin{table}[ht] \centering \begin{tabular}{|l|c|c|c|c|} \hline System & Sent. & \# of & Av. \# of & Av. \# of \\ & Length & sent & words/sent & Constituents/sent \\ \hline XTAG & 1-10 & 654 & 7.45 & 22.03 \\ \cline{2-5} & 1-15 & 978 & 9.13 & 30.56 \\ \hline IBM Stat.
& 1-10 & 447 & 7.50 & 4.60 \\ \cline{2-5} Grammar & 1-15 & 883 & 10.30 & 6.40 \\ \hline \end{tabular} \caption{Constituents in XTAG parse and IBM parse} \label{const-no} \end{table} A second reason for considering the crossing bracket measure inadequate for evaluating XTAG is that the primary structure in XTAG is the derivation tree from which the bracketed tree is derived. Two identical bracketings for a sentence can have completely different derivation trees (e.g. {\it kick the bucket} as an idiom vs. a compositional use). A more direct measure of the performance of XTAG would evaluate the derivation structure, which captures the dependencies between words. \section{Comparison with Alvey} The evaluation in this section was done with the earlier 1995 release of the grammar. This section compares XTAG to the Alvey Natural Language Tools (ANLT) Grammar. We parsed the set of LDOCE Noun Phrases presented in Appendix B of the technical report (\cite{Carroll93}) using XTAG. Table~\ref{Alvey-xtag} summarizes the results of this experiment. A total of 143 noun phrases were parsed. The NPs which did not have a correct parse in the top three derivations were considered failures for either system. The maximum and average number of derivations columns show the highest and the average number of derivations produced for the NPs that have a correct derivation in the top three. We show the performance of XTAG both with and without the tagger since the performance of the POS tagger is significantly degraded on the NPs because the NPs are usually shorter than the sentences on which it was trained. It would be interesting to see if the two systems performed similarly on a wider range of data. 
\begin{table}[ht] \centering \begin{tabular}{|l|c|c|c|c|c|} \hline System & \# of & \# parsed & \% parsed & Maximum & Average \\ & NPs &&& derivations & derivations \\ \hline ANLT Parser & 143 & 127 & 88.81\% & 32 & 4.57 \\ \hline XTAG Parser with & 143 & 93 & 65.03\% & 28 & 3.45 \\ POS tagger & & & & & \\ \hline XTAG Parser without & 143 & 120 & 83.91\% & 28 & 4.14\\ POS tagger & & & & & \\ \hline \end{tabular} \\ \vspace{0.1in} \caption{Comparison of XTAG and ANLT Parser} \label{Alvey-xtag} \end{table} \section{Comparison with CLARE} The evaluation in this section was done with the earlier 1995 release of the grammar. This section compares the performance of XTAG against that of the CLARE-2 system (\cite{clare-report92}) on the ATIS corpus. Table~\ref{clare-results} shows the performance results. The percentage parsed column for both systems represents the percentage of sentences that produced any parse. It must be noted that the performance result shown for CLARE-2 is without any tuning of the grammar for the ATIS domain. The performance of CLARE-3, a later version of the CLARE system, is estimated to be 10\% higher than that of the CLARE-2 system.\footnote{When CLARE-3 is tuned to the ATIS domain, performance increases to 90\%. However XTAG has not been tuned to the ATIS domain.} \begin{table}[ht] \centering \begin{tabular}{|l|c|c|} \hline System & Mean length & \% parsed \\ \hline CLARE-2 & 6.53 & 68.50\% \\ \hline XTAG & 7.62 & 88.35\% \\ \hline \end{tabular} \caption{Performance of CLARE-2 and XTAG on the ATIS corpus} \label{clare-results} \end{table} In an attempt to compare the performance of the two systems on a wider range of sentences (from similar genres), we provide in Table~\ref{clare-results1} the performance of CLARE-2 on LOB corpus and the performance of XTAG on the WSJ corpus. The performance was measured on sentences of up to 10 words for both systems. 
\begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|} \hline System & Corpus & Mean length & \% parsed \\ \hline CLARE-2 & LOB & 5.95 & 53.40\% \\ \hline XTAG & WSJ & 6.00 & 55.58\% \\ \hline \end{tabular} \caption{Performance of CLARE-2 and XTAG on LOB and WSJ corpus respectively} \label{clare-results1} \end{table} \chapter{Extraction} \label{extraction} The discussion in this chapter covers constructions that are analyzed as having wh-movement in GB, in particular, wh-questions and topicalization. Relative clauses, which could also be considered extractions, are discussed in Chapter~\ref{rel_clauses}. Extraction involves a constituent appearing in a linear position to the left of the clause with which it is interpreted. One clause argument position is empty. For example, the position filled by {\it frisbee} in the declarative in sentence~(\ex{1}) is empty in sentence~(\ex{2}). The wh-item {\it what} in sentence~(\ex{2}) is of the same syntactic category as {\it frisbee} in sentence~(\ex{1}) and fills the same role with respect to the subcategorization. \enumsentence{Clove caught a frisbee.} \enumsentence{What$_{i}$ did Clove catch $\epsilon_{i}$?} The English XTAG grammar represents the connection between the extracted element and the empty position with co-indexing (as does GB). The {\bf $<$trace$>$} feature is used to implement the co-indexing. In extraction trees in XTAG, the `empty' position is filled with an {\it $\epsilon$}. The extracted item always appears in these trees as a sister to the S$_{r}$ tree, with both dominated by an S$_{q}$ root node. The S$_{r}$ subtrees in extraction trees have the same structure as the declarative tree in the same tree family. The additional structure in extraction trees of the S$_{q}$ and NP nodes roughly corresponds to the CP and Spec of CP positions in GB.
All sentential trees with extracted components (this does not include relative clause trees) are marked {\bf $<$extracted$>$=+} at the top S node, while sentential trees with no extracted components are marked {\bf $<$extracted$>$=--}. Items that take embedded sentences, such as nouns, verbs and some prepositions, can place restrictions on whether the embedded sentence is allowed to be extracted or not. For instance, sentential subjects and sentential complements of nouns and prepositions are not allowed to be extracted, while certain verbs may allow extracted sentential complements and others may not (e.g. sentences (\ex{1})-(\ex{4})). \enumsentence{The jury wondered [who killed Nicole].} \enumsentence{The jury wondered [who Simpson killed].} \enumsentence{The jury thought [Simpson killed Nicole].} \enumsentence{$\ast$The jury thought [who did Simpson kill]?} The {\bf $<$extracted$>$} feature is also used to block embedded topicalization in infinitival complement clauses, as exemplified in (\ex{1}). \enumsentence{* John wants [ Bill$_{i}$ [PRO to see t$_{i}$]]} Verbs such as {\em want} that take non-{\em wh} infinitival complements specify that the {\bf $<$extracted$>$} feature of their complement clause (i.e. of the foot S node) is {\bf --}. Clauses that involve topicalization have {\bf +} as the value of their {\bf $<$extracted$>$} feature (i.e. of the root S node). Sentences like (\ex{0}) are thus ruled out. \begin{figure}[htb] \centering \mbox{} \psfig{figure=ps/extraction-files/alphaW1nx0Vnx1.ps,height=10.0cm} \caption{Transitive tree with object extraction: $\alpha$W1nx0Vnx1} \label{alphaW1nx0Vnx1} \label{2;5,1} \end{figure} The tree that is used to derive the embedded sentence in (\ex{-2}) in the English XTAG grammar is shown in Figure~\ref{alphaW1nx0Vnx1}\footnote{Features not pertaining to this discussion have been taken out to improve readability.}.
The important features of extracted trees are: \begin{itemize} \item The subtree that has S$_{r}$ as its root is identical to the declarative tree or a non-extracted passive tree, except for having one NP position in the VP filled by $\epsilon$. \item The root S node is S$_{q}$, which dominates NP and S$_{r}$. \item The {\bf $<$trace$>$} feature of the $\epsilon$ filled NP is co-indexed with the {\bf $<$trace$>$} feature of the NP daughter of S$_{q}$. \item The {\bf $<$case$>$} and {\bf $<$agr$>$} features are passed from the empty NP to the extracted NP. This is particularly important for extractions from subject NP's, since {\bf $<$case$>$} can continue to be assigned from the verb to the subject NP position, and from there be passed to the extracted NP. \item The {\bf $<$inv$>$} feature of S$_{r}$ is co-indexed to the {\bf $<$wh$>$} feature of NP through the use of the {\bf $<$invlink$>$} feature in order to force subject-auxiliary inversion where needed (see section~\ref{topicalization} for more discussion of the {\bf $<$inv$>$}/{\bf$<$wh$>$} co-indexing and the use of these trees for topicalization). \end{itemize} \section{Topicalization and the value of the {\bf $<$inv$>$} feature} \label{topicalization} Our analysis of topicalization uses the same trees as wh-extraction. For any NP complement position a single tree is used for both wh-questions and for topicalization from that position. Wh-questions have subject-auxiliary inversion and topicalizations do not. This difference between the constructions is captured by equating the values of the S$_{r}$'s {\bf $<$inv$>$} feature and the extracted NP's {\bf $<$wh$>$} feature. This means that if the extracted item is a wh-expression, as in wh-questions, the value of {\bf $<$inv$>$} will be {\bf +} and an inverted auxiliary will be forced to adjoin. If the extracted item is a non-wh, {\bf $<$inv$>$} will be {\bf --} and no auxiliary adjunction will occur. 
An additional complication is that inversion only occurs in matrix clauses, so the values of {\bf $<$inv$>$} and {\bf $<$wh$>$} should only be equated in matrix clauses and not in embedded clauses. In the English XTAG grammar, appropriate equating of the {\bf $<$inv$>$} and {\bf $<$wh$>$} features is accomplished using the {\bf $<$invlink$>$} feature and a restriction imposed on the root S of a derivation. In particular, in extraction trees that are used for both wh-questions and topicalizations, the value of the {\bf $<$inv$>$} feature for the top of the S$_{r}$ node is co-indexed to the value of the {\bf $<$inv$>$} feature on the bottom of the S$_{q}$ node. On the bottom of the S$_{q}$ node the {\bf $<$inv$>$} feature is co-indexed to the {\bf $<$invlink$>$} feature. The {\bf $<$wh$>$} feature of the extracted NP node is co-indexed to the value of the {\bf $<$wh$>$} feature on the bottom of S$_{q}$. The linking between the value of the S$_{q}$ {\bf $<$wh$>$} and the {\bf $<$invlink$>$} features is imposed by a condition on the final root node of a derivation (i.e. the top S node of a matrix clause), which requires that {\bf $<$invlink$>$=$<$wh$>$}. For example, the tree in Figure~\ref{alphaW1nx0Vnx1} is used to derive both (\ex{1}) and (\ex{2}). \enumsentence{John, I like.} \enumsentence{Who do you like?} For the question in (\ex{0}), the extracted item {\it who} has the feature value {\bf $<$wh$>$=+}, so the value of the {\bf $<$inv$>$} feature on VP is also $+$ and an auxiliary, in this case {\it do}, is forced to adjoin. For the topicalization in (\ex{-1}), the values for {\it John}'s {\bf $<$wh$>$} feature and for S$_{q}$'s {\bf $<$inv$>$} feature are both {\bf --} and no auxiliary adjoins. \section{Extracted subjects} \label{subject-extraction} The extracted subject trees provide for sentences like (\ex{1})-(\ex{3}), depending on the tree family with which they are associated.
\enumsentence{Who left?} \enumsentence{Who wrote the paper?} \enumsentence{Who was happy?} Wh-questions on subjects differ from other argument extractions in not having subject-auxiliary inversion. This means that in subject wh-questions the linear order of the constituents is the same as in declaratives so it is difficult to tell whether the subject has moved out of position or not (see \cite{heycock/kroch93gagl} for arguments for and against moved subject). The English XTAG treatment of subject extractions assumes the following: \begin{itemize} \item Syntactic subject topicalizations don't exist; and \item Subjects in wh-questions are extracted rather than in situ. \end{itemize} The assumption that there is no syntactic subject topicalization is reasonable in English since there is no convincing syntactic evidence and since the interpretability of subjects as topics seems to be mainly affected by discourse and intonational factors rather than syntactic structure. As for the assumption that wh-question subjects are extracted, these questions seem to have more similarities to other extractions than to the two cases in English that have been considered in situ wh: multiple wh questions and echo questions. In multiple wh questions such as sentence~(\ex{1}), one of the wh-items is blocked from moving sentence initially because the first wh-item already occupies the location to which it would move. \enumsentence{Who ate what?} This type of `blocking' account is not applicable to subject wh-questions because there is no obvious candidate to do the blocking. Similarity between subject wh-questions and echo questions is also lacking. At least one account of echo questions (\cite{hockey94}) argues that echo questions are not ordinary wh-questions at all, but rather focus constructions in which the wh-item is the focus. Clearly, this is not applicable to subject wh-questions. 
So it seems that treating subject wh-questions similarly to other wh-extractions is more justified than an in situ treatment. Given these assumptions, there must be separate trees in each tree family for subject extractions. The declarative tree cannot be used even though the linear order is the same because the structure is different. Since topicalizations are not allowed, the {\bf $<$wh$>$} feature for the extracted NP node is set in these trees to {\bf +}. The lack of subject-auxiliary inversion is handled by the absence of the {\bf $<$invlink$>$} feature. Without this feature, the {\bf $<$wh$>$} and {\bf $<$inv$>$} features are never linked, so inversion cannot occur. Like other wh-extractions, the S$_{q}$ node is marked {\bf $<$extracted$>$=+} to constrain the occurrence of these trees in embedded sentences. The tree in Figure~\ref{alphaW0nx0V} is an example of a subject wh-question tree. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/extraction-files/alphaW0nx0V.ps,height=10.3cm} \end{tabular} \caption{Intransitive tree with subject extraction: $\alpha$W0nx0V} \label{alphaW0nx0V} \label{1;4,13} \end{figure} \section{Wh-moved NP complement} \label{NP-extr} Wh-questions can be formed on every NP object or indirect object that appears in the declarative tree or in the passive trees, as seen in sentences (\ex{1})-(\ex{6}). A tree family will contain one tree for each of these possible NP complement positions. Figure~\ref{ditrans-extractions} shows the two extraction trees from the ditransitive tree family for the extraction of the direct (Figure~\ref{ditrans-extractions}(a)) and indirect object (Figure~\ref{ditrans-extractions}(b)).
\enumsentence{Dania asked Beth a question.} \enumsentence{Who$_{i}$ did Dania ask $\epsilon_{i}$ a question?} \enumsentence{What$_{i}$ did Dania ask Beth $\epsilon_{i}$?} \enumsentence{Beth was asked a question by Dania.} \enumsentence{Who$_{i}$ was Beth asked a question by $\epsilon_{i}$?} \enumsentence{What$_{i}$ was Beth asked $\epsilon_{i}$ by Dania?} \begin{figure}[htb] \centering \begin{tabular}{ccc} \psfig{figure=ps/extraction-files/alphaW1nx0Vnx1nx2.ps,height=6.0cm}& \hspace{1.0in}& \psfig{figure=ps/extraction-files/alphaW2nx0Vnx1nx2.ps,height=6.0cm}\\ (a)&&(b) \end{tabular} \caption{Ditransitive trees with direct object extraction: $\alpha$W1nx0Vnx1nx2 (a) and indirect object extraction: $\alpha$W2nx0Vnx1nx2 (b)} \label{ditrans-extractions} \label{2;5,3} \end{figure} \section{Wh-moved object of a P} Wh-questions can be formed on the NP object of a complement PP as in sentence~(\ex{1}). \enumsentence{$[$Which dog$]_{i}$ did Beth Ann give a bone to $\epsilon_{i}$?} The {\it by} phrases of passives behave like complements and can undergo the same type of extraction, as in (\ex{1}). \enumsentence{$[$Which dog$]_{i}$ was the frisbee caught by $\epsilon_{i}$?} Tree structures for this type of sentence are very similar to those for the wh-extraction of NP complements discussed in section~\ref{NP-extr}, and have the same important properties with respect to tree structure and the trace and inversion features. The tree in Figure~\ref{alphaW2nx0Vnx1pnx2} is an example of this type of tree. Topicalization of NP objects of prepositions is handled the same way as topicalization of complement NP's. \begin{figure}[htb] \centering \mbox{} \psfig{figure=ps/extraction-files/alphaW2nx0Vnx1pnx2.ps,height=6.0cm} \caption{Ditransitive with PP tree with the object of the PP extracted: $\alpha$W2nx0Vnx1pnx2} \label{alphaW2nx0Vnx1pnx2} \label{2;8,4} \end{figure} \section{Wh-moved PP} Like NP complements, PP complements can be extracted to form wh-questions, as in sentence (\ex{1}).
\enumsentence{[To which dog]$_{i}$ did Beth Ann throw the frisbee $\epsilon_{i}$?} As can be seen in the tree in Figure~\ref{alphapW2nx0Vnx1pnx2}, extraction of PP complements is very similar to extraction of NP complements from the same positions. \begin{figure}[htb] \centering \mbox{} \psfig{figure=ps/extraction-files/alphapW2nx0Vnx1pnx2.ps,height=6.0cm} \caption{Ditransitive with PP with PP extraction tree: $\alpha$pW2nx0Vnx1pnx2} \label{alphapW2nx0Vnx1pnx2} \label{2;9,4} \end{figure} The PP extraction trees differ from NP extraction trees in having a PP rather than an NP left daughter node under S$_{q}$ and in having the $\epsilon$ fill a PP rather than an NP position in the VP. In other respects these PP extraction structures behave like the NP extractions, including being used for topicalization. \section{Wh-moved S complement} Except for the node label on the extracted position, the trees for wh-questions on S complements look exactly like the trees for wh-questions on NP's in the same positions. This is because there is no separate wh-lexical item for clauses in English, so the item {\it what} is ambiguous between representing a clause or an NP. To illustrate this ambiguity, notice that the question in (\ex{1}) could be answered by either a clause as in (\ex{2}) or an NP as in (\ex{3}). The extracted NP in these trees is constrained to be {\bf $<$wh$>$=+}, since sentential complements cannot be topicalized. \enumsentence{What does Clove want?} \enumsentence{for Beth Ann to play frisbee with her} \enumsentence{a biscuit} \section{Wh-moved Adjective complement} In subcategorizations that select an adjective complement, that complement can be questioned in a wh-question, as in sentence~(\ex{1}).
\enumsentence{How$_{i}$ did he feel $\epsilon_{i}$?} \begin{figure}[htb] \centering \mbox{} \psfig{figure=ps/extraction-files/alphaWA1nx0Vax1.ps,height=6.0cm} \caption{Predicative Adjective tree with extracted adjective: $\alpha$WA1nx0Vax1} \label{wh-adj-extr} \label{1;7,14} \end{figure} The tree families with adjective complements include trees for such adjective extractions that are very similar to the wh-extraction trees for other categories of complements. The adjective position in the VP is filled by an {\it $\epsilon$} and the trace feature of the adjective complement and of the adjective daughter of S$_{q}$ are co-indexed. The extracted adjective is required to be {\bf $<$wh$>$=+}\footnote{{\it How} is the only {\bf $<$wh$>$=+} adjective currently in the XTAG English grammar.}, so no topicalizations are allowed. An example of this type of tree is shown in Figure~\ref{wh-adj-extr}. \chapter{Features} \label{features} Table~\ref{feature-table} contains a comprehensive list of the features in the XTAG grammar and their possible values. This section consists of short `biographical' sketches of the various features currently in use in the XTAG English grammar. 
\footnotesize \begin{table}[htbp] \centering \begin{tabular}{|l|l|} \hline Feature&Value\\ \hline \hline $<$agr 3rdsing$>$&$+,-$\\ $<$agr num$>$&plur,sing\\ $<$agr pers$>$&1,2,3\\ $<$agr gen$>$&fem,masc,neuter\\ $<$assign-case$>$&nom,acc,none\\ $<$assign-comp$>$&that,whether,if,for,ecm,rel,inf\_nil,ind\_nil,ppart\_nil,none\\ $<$card$>$&$+,-$\\ $<$case$>$&nom,acc,gen,none\\ $<$comp$>$&that,whether,if,for,rel,inf\_nil,ind\_nil,nil\\ $<$compar$>$&$+,-$\\ $<$compl$>$&$+,-$\\ $<$conditional$>$&$+,-$\\ $<$conj$>$&and,or,but,comma,scolon,to,disc,nil\\ $<$const$>$&$+,-$\\ $<$contr$>$&$+,-$\\ $<$control$>$&no value, indexing only\\ $<$decreas$>$&$+,-$\\ $<$definite$>$&$+,-$\\ $<$displ-const$>$&$+,-$\\ $<$equiv$>$&$+,-$\\ $<$extracted$>$&$+,-$\\ $<$gen$>$&$+,-$\\ $<$gerund$>$&$+,-$\\ $<$inv$>$&$+,-$\\ $<$invlink$>$&no value, indexing only\\ $<$irrealis$>$&$+,-$\\ $<$mainv$>$&$+,-$\\ $<$mode$>$&base,ger,ind,inf,imp,nom,ppart,prep,sbjunt\\ $<$neg$>$&$+,-$\\ $<$passive$>$&$+,-$\\ $<$perfect$>$&$+,-$\\ $<$pred$>$&$+,-$\\ $<$progressive$>$&$+,-$\\ $<$pron$>$&$+,-$\\ $<$punct bal$>$&dquote,squote,paren,nil\\ $<$punct contains colon$>$&$+,-$\\ $<$punct contains dash$>$&$+,-$\\ $<$punct contains dquote$>$&$+,-$\\ $<$punct contains scolon$>$&$+,-$\\ $<$punct contains squote$>$&$+,-$\\ $<$punct struct$>$&comma,dash,colon,scolon,nil\\ $<$punct term$>$&per,qmark,excl,nil\\ $<$quan$>$&$+,-$\\ $<$refl$>$&$+,-$\\ $<$rel-clause$>$&$+,-$\\ $<$rel-pron$>$&ppart,ger,adj-clause\\ $<$select-mode$>$&ind,inf,ppart,ger\\ $<$super$>$&$+,-$\\ $<$tense$>$&pres,past\\ $<$trace$>$&no value, indexing only\\ $<$trans$>$&$+,-$\\ $<$weak$>$&$+,-$\\ $<$wh$>$&$+,-$\\ \hline \end{tabular} \caption{List of features and their possible values} \label{feature-table} \end{table} \normalsize \section{Agreement} {\bf $\langle$agr$\rangle$} is a complex feature. 
It can have as its subfeatures:\\ {\bf $\langle$agr 3rdsing$\rangle$}, possible values: {\bf $+/-$ }\\ {\bf $\langle$agr num$\rangle$}, possible values: {\bf $plur,sing$ }\\ {\bf $\langle$agr pers$\rangle$}, possible values: {\bf $1,2,3$ }\\ {\bf $\langle$agr gen$\rangle$}, possible values: {\bf $masc,fem,neut$ } These features are used to ensure agreement between a verb and its subject. Where does it occur?\\ Nouns come specified from the lexicon with their {\bf $\langle$agr$\rangle$} features. e.g. {\em books} is {\bf $\langle$agr 3rdsing$\rangle$:~--}, {\bf $\langle$agr num$\rangle$:~plur}, and {\bf $\langle$agr pers$\rangle$:~3}. Only pronouns use the {\bf $\langle$agr gen$\rangle$} (gender) feature. The {\bf $\langle$agr$\rangle$} features of a noun are transmitted up the NP tree by the following equation:\\ {\bf NP.b:$\langle$agr$\rangle =$ N.t:$\langle$agr$\rangle$} Agreement between a verb and its subject is mediated by the following feature equations: \enumsentence{ {\bf NP$_{subj}$:$\langle$agr$\rangle =$ VP.t:$\langle$agr$\rangle$}} \enumsentence{ {\bf VP.b:$\langle$agr$\rangle =$ V.t:$\langle$agr$\rangle$}} Agreement has to be done as a two-step process because whether the verb agrees with the subject or not depends upon whether some auxiliary verb adjoins in and upon what the {\bf $\langle$agr$\rangle$} specification of the verb is. Verbs also come specified from the lexicon with their {\bf $\langle$agr$\rangle$} features, e.g. the {\bf $\langle$agr$\rangle$} features of the verb {\em sings} are {\bf $\langle$agr 3rdsing$\rangle$:~+}, {\bf $\langle$agr num$\rangle$:~sing}, and {\bf $\langle$agr pers$\rangle$:~3}. Non-finite forms of the verb {\em sing}, e.g. {\em singing}, do not come with an {\bf $\langle$agr$\rangle$} feature specification. \subsection{Agreement and Movement} The {\bf $\langle$agr$\rangle$} features of a moved NP and its trace are co-indexed.
This captures the fact that movement does not disrupt a pre-existing agreement relationship between an NP and a verb. \enumsentence{ \ [Which boys]$_{i}$ does John think [t$_{i}$ are/*is intelligent]?} \section{Case} There are two features responsible for case-assignment:\\ {\bf $\langle$case$\rangle$}, possible values: {\bf nom, acc, gen, none}\\ {\bf $\langle$assign-case$\rangle$}, possible values: {\bf nom, acc, none} Case assigners (prepositions and verbs) as well as the VP, S and PP nodes that dominate them have an {\bf $\langle$assign-case$\rangle$} feature. Phrases and lexical items that have case, i.e. Ns and NPs, have a {\bf $\langle$case$\rangle$} feature. Case assignment by prepositions involves the following equations: \enumsentence{ {\bf PP.b:$\langle$assign-case$\rangle =$ P.t:$\langle$assign-case$\rangle$}} \enumsentence{ {\bf NP.t:$\langle$case$\rangle =$ P.t:$\langle$assign-case$\rangle$}} Prepositions come specified from the lexicon with their {\bf $\langle$assign-case$\rangle$} feature. \enumsentence{ {\bf P.b:$\langle$assign-case$\rangle =$ acc}} Case assignment by verbs has two parts: assignment of case to the object(s) and assignment of case to the subject. Assignment of case to the object is simpler. English verbs always assign accusative case to their NP objects (direct or indirect). Hence this is built into the tree and not put into the lexical entry of each individual verb. \enumsentence{ {\bf NP$_{object}$.t:$\langle$case$\rangle =$ acc}} Assignment of case to the subject involves the following two equations. \enumsentence{ {\bf NP$_{subj}$:$\langle$case$\rangle =$ VP.t:$\langle$assign-case$\rangle$}} \enumsentence{ {\bf VP.b:$\langle$assign-case$\rangle =$ V.t:$\langle$assign-case$\rangle$}} This is a two-step process -- the final case assigned to the subject depends upon the {\bf $\langle$assign-case$\rangle$} feature of the verb as well as whether an auxiliary verb adjoins in.
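Both the agreement and the case checks described above come down to the same operation: unification of feature bundles, which fails on a value clash. The following Python sketch is a hypothetical illustration of that operation (the function, dictionaries, and values are invented for exposition and are not part of the XTAG implementation):

```python
def unify(f, g):
    """Unify two flat feature bundles; return None on a value clash."""
    out = dict(f)
    for key, val in g.items():
        if key in out and out[key] != val:
            return None  # clash: unification fails
        out[key] = val
    return out

# <agr> bundles from the lexicon, as described in the text.
books = {"3rdsing": "-", "num": "plur", "pers": "3"}  # noun 'books'
book  = {"3rdsing": "+", "num": "sing", "pers": "3"}  # noun 'book'
sings = {"3rdsing": "+", "num": "sing", "pers": "3"}  # finite verb 'sings'

# With no auxiliary adjoined, NP<agr> = VP.t<agr> and VP.b<agr> = V.t<agr>
# collapse into a direct subject/verb unification:
assert unify(book, sings) is not None   # 'the book sings'   -> agreement OK
assert unify(books, sings) is None      # '*the books sings' -> clash

# Subject case: NP_subj<case> must unify with the <assign-case> value
# percolated to VP.t (nom from a finite verb, none from a non-finite one).
assert unify({"case": "nom"}, {"case": "nom"}) is not None   # 'she sings'
assert unify({"case": "nom"}, {"case": "none"}) is None      # '*she sing'
```

The two-step character of both checks falls out of the same mechanism: an adjoined auxiliary splits the VP node, so its own top and bottom bundles take part in the unifications instead.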
Finite verbs like {\em sings} have {\bf nom} as the value of their {\bf $\langle$assign-case$\rangle$} feature. Non-finite verbs have {\bf none} as the value of their {\bf $\langle$assign-case$\rangle$} feature. So if no auxiliary adjoins in, the only subject they can have is {\bf PRO}, which is the only NP with {\bf none} as the value of its {\bf $\langle$case$\rangle$} feature. \subsection{ECM} Certain verbs, e.g. {\em want}, {\em believe}, and {\em consider}, and one complementizer, {\em for}, are able to assign case to the subject of their complement clause. The complementizer {\em for}, like the preposition {\em for}, has the {\bf $\langle$assign-case$\rangle$} feature of its complement set to {\bf acc}. Since the {\bf $\langle$assign-case$\rangle$} feature of the root S$_{r}$ of the complement tree and the {\bf $\langle$case$\rangle$} feature of its NP subject are co-indexed, this leads to the subject being assigned accusative case. ECM verbs have the {\bf $\langle$assign-case$\rangle$} feature of their foot S node set to {\bf acc}. The co-indexation between the {\bf $\langle$assign-case$\rangle$} feature of the root S$_{r}$ and the {\bf $\langle$case$\rangle$} feature of the NP subject leads to the subject being assigned accusative case. \subsection{Case and Movement} The {\bf $\langle$case$\rangle$} features of a moved NP and its trace are co-indexed. This captures the fact that movement does not disrupt a pre-existing relationship of case-assignment between a verb and an NP. \enumsentence{ Her$_{i}$/*She$_{i}$, I think that Odo likes t$_{i}$.} \section{Extraction and Inversion} {\bf $\langle$extracted$\rangle$}, possible values are {\bf $+/-$} All sentential trees with extracted components, with the exception of relative clauses, are marked {\bf S.b$\langle$extracted$\rangle = +$} at their top S node. The extracted element may be a {\em wh}-NP or a topicalized NP.
The {\bf $\langle$extracted$\rangle$} feature is currently used to block embedded topicalizations, as in the following example. \enumsentence{ * John wants [Bill$_{i}$ [PRO to leave t$_{i}$]] } \noindent {\bf $\langle$trace$\rangle$}: this feature is not assigned any value and is used to co-index moved NPs and their traces, which are marked by $\epsilon$. \noindent {\bf $\langle$wh$\rangle$}: possible values are {\bf $+/-$}\\ NPs like {\em who}, {\em what} etc. come marked from the lexicon with a value of {\bf $+$} for the feature {\bf $\langle$wh$\rangle$}. Non {\em wh}-NPs have {\bf $-$} as the value of their {\bf $\langle$wh$\rangle$} feature. Note that {\bf $\langle$wh$\rangle$ = + } NPs are not restricted to occurring in extracted positions, to allow for the correct treatment of echo questions. The {\bf $\langle$wh$\rangle$} feature is propagated up by possessives -- e.g. the {\bf +} value of the {\bf $\langle$wh$\rangle$} feature of the determiner {\em which} in {\em which boy} is propagated up to the level of the NP, so that the value of the {\bf $\langle$wh$\rangle$} feature of the entire NP is {\bf +}. This process is recursive, e.g. {\em which boy's mother}, {\em which boy's mother's sister}. The {\bf $\langle$wh$\rangle$} feature is also propagated up PPs. Thus the PP {\em to whom} has $+$ as the value of its {\bf $\langle$wh$\rangle$} feature. In trees with extracted NPs, the {\bf $\langle$wh$\rangle$} feature of the root S node is equated with the {\bf $\langle$wh$\rangle$} feature of the extracted NP. The {\bf $\langle$wh$\rangle$} feature is used to impose subcategorizational constraints. Certain verbs like {\em wonder} can only take interrogative complements, other verbs such as {\em know} can take both interrogative and non-interrogative complements, and yet other verbs like {\em think} can only take non-interrogative complements (cf.
the {\bf $\langle$extracted$\rangle$} and {\bf $\langle$mode$\rangle$} features also play a role in imposing subcategorizational constraints). The {\bf $\langle$wh$\rangle$} feature is also used to get the correct inversion patterns. \subsection{Inversion, Part 1} The following three features are used to ensure the correct pattern of inversion:\\ {\bf $\langle$wh$\rangle$}: possible values are {\bf $+/-$}\\ {\bf $\langle$inv$\rangle$}: possible values are {\bf $+/-$}\\ {\bf $\langle$invlink$\rangle$}: possible values are {\bf $+/-$} Facts to be captured:\\ 1. No inversion with topicalization\\ 2. No inversion with matrix extracted subject {\em wh}-questions\\ 3. Inversion with matrix extracted object {\em wh}-questions\\ 4. Inversion with all matrix {\em wh}-questions involving extraction from an embedded clause\\ 5. No inversion in embedded questions \\ 6. No matrix subject topicalizations. Consider a tree with object extraction, where NP is extracted. The following feature equations are used:\\ \enumsentence{ {\bf S$_{q}$.b:$\langle$wh$\rangle =$ NP.t:$\langle$wh$\rangle$}\label{inv1}} \enumsentence{ {\bf S$_{q}$.b:$\langle$invlink$\rangle =$ S$_{q}$.b:$\langle$inv$\rangle$}\label{inv2}} \enumsentence{ {\bf S$_{q}$.b:$\langle$inv$\rangle =$ S$_{r}$.t:$\langle$inv$\rangle$}\label{inv3}} \enumsentence{ {\bf S$_{r}$.b:$\langle$inv$\rangle = -$}\label{inv4}} \noindent {\bf Root restriction}: A restriction is imposed on the final root node of any XTAG derivation of a tensed sentence which equates the {\bf $\langle$wh$\rangle$} feature and the {\bf $\langle$invlink$\rangle$} feature of the final root node. If the extracted NP is not a {\em wh}-word i.e. its {\bf $\langle$wh$\rangle$} feature has the value $-$, at the end of the derivation, {\bf S$_{q}$.b:$\langle$wh$\rangle$} will also have the value $-$. 
Because of the root constraint {\bf S$_{q}$.b:$\langle$wh$\rangle$} will be equated to {\bf S$_{q}$.b:$\langle$invlink$\rangle$}, which will also come to have the value $-$. Then, by (\ref{inv3}), {\bf S$_{r}$.t:$\langle$inv$\rangle$} will acquire the value $-$. This will unify with {\bf S$_{r}$.b:$\langle$inv$\rangle$}, which has the value $-$ (cf. \ref{inv4}). Consequently, no auxiliary verb adjunction will be forced. Hence, there will never be inversion in topicalization. If the extracted NP is a {\em wh}-word i.e. its {\bf $\langle$wh$\rangle$} feature has the value $+$, at the end of the derivation, {\bf S$_{q}$.b:$\langle$wh$\rangle$} will also have the value $+$. Because of the root constraint {\bf S$_{q}$.b:$\langle$wh$\rangle$} will be equated to {\bf S$_{q}$.b:$\langle$invlink$\rangle$}, which will also come to have the value $+$. Then, by (\ref{inv3}), {\bf S$_{r}$.t:$\langle$inv$\rangle$} will acquire the value $+$. This will not unify with {\bf S$_{r}$.b:$\langle$inv$\rangle$}, which has the value $-$ (cf. \ref{inv4}). Consequently, the adjunction of an inverted auxiliary verb is required for the derivation to succeed. Inversion will still take place even if the extraction is from an embedded clause. \enumsentence{ Who$_{i}$ does Loida think [Miguel likes t$_{i}$]?} This is because the adjoined tree's root node will also have its {\bf S$_{r}$.b:$\langle$inv$\rangle$} set to $-$. Note that inversion is only forced upon us because S$_{q}$ is the final root node and the {\bf Root restriction} applies. In embedded environments, the root restriction would not apply and the feature clash that forces adjunction would not take place. The {\bf $\langle$invlink$\rangle$} feature is not present in subject extractions. Consequently there is no inversion in subject questions. Subject topicalizations are blocked by setting the {\bf $\langle$wh$\rangle$} feature of the extracted NP to $+$, i.e. only {\em wh}-phrases can go in this location.
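The interaction just described can be condensed into a small sketch. The root restriction copies the extracted NP's {\bf $\langle$wh$\rangle$} value onto {\bf $\langle$invlink$\rangle$} and, via (\ref{inv2})--(\ref{inv3}), onto S$_{r}$.t:$\langle$inv$\rangle$, while S$_{r}$.b:$\langle$inv$\rangle$ is fixed at $-$; a top/bottom mismatch can only be repaired by adjunction. The Python below is a hypothetical illustration of that logic, not XTAG machinery:

```python
def needs_inverted_aux(extracted_wh):
    """Return True if the derivation forces inverted-auxiliary adjunction.

    Root restriction: S_q.b<wh> = S_q.b<invlink>; the inv equations then
    push that value onto S_r.t<inv>, while S_r.b<inv> is fixed at '-'.
    """
    sr_top_inv = "+" if extracted_wh else "-"
    sr_bot_inv = "-"
    # A top/bottom feature clash at S_r can only be resolved by adjoining
    # an inverted auxiliary tree at that node.
    return sr_top_inv != sr_bot_inv

assert needs_inverted_aux(True)        # 'Who does Loida like?'  (inversion)
assert not needs_inverted_aux(False)   # 'Her, Loida likes.'     (no inversion)
```

In embedded contexts the root restriction never fires, so `sr_top_inv` is never forced to `+` and no clash arises, matching the absence of inversion in embedded questions.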
\subsection{Inversion, Part 2} {\bf $\langle$displ-const$\rangle$}:\\ Possible values: {\bf [set1: +], [set1: --]}\\ In the previous section, we saw how inversion is triggered using the {\bf $\langle$invlink$\rangle$}, {\bf $\langle$inv$\rangle$}, {\bf $\langle$wh$\rangle$} features. Inversion involves movement of the verb from V to C. This movement process is represented using the {\bf $\langle$displ-const$\rangle$} feature which is used to simulate Multi-Component TAGs.\footnote{The {\bf $\langle$displ-const$\rangle$} feature is also used in the ECM analysis.} The sub-value {\bf set1} indicates the inversion multi-component set; while there are not currently any other uses of this mechanism, it could be expanded with other sets receiving different {\bf set} values. The {\bf $\langle$displ-const$\rangle$} feature is used to ensure adjunction of two trees, which in this case are the auxiliary tree corresponding to the moved verb (S adjunct) and the auxiliary tree corresponding to the trace of the moved verb (VP adjunct). 
The following equations are used: \enumsentence{ {\bf S$_{r}$.b:$\langle$displ-const set1$\rangle = -$}\label{dis1}} \enumsentence{ {\bf S.t:$\langle$displ-const set1$\rangle = +$}\label{dis2}} \enumsentence{ {\bf VP.b:$\langle$displ-const set1$\rangle =$ V.t:$\langle$displ-const set1$\rangle$}\label{dis3}} \enumsentence{ {\bf V.b:$\langle$displ-const set1$\rangle = +$}\label{dis4}} \enumsentence{ {\bf S$_{r}$.b:$\langle$displ-const set1$\rangle =$ VP.t:$\langle$displ-const set1$\rangle$}\label{dis5}} \section{Clause Type} There are several features that mark clause type.\footnote{We have already seen one instance of a feature that marks clause-type: {\bf $\langle$extracted$\rangle$}, which marks whether a certain S involves extraction or not.} They are:\\ {\bf $\langle$mode$\rangle$}\\ {\bf $\langle$passive$\rangle$}: possible values are {\bf +/--} \noindent {\bf $\langle$mode$\rangle$}: possible values are {\bf base, ger, ind, inf, imp, nom, ppart, prep, sbjnct}\\ The {\bf $\langle$mode$\rangle$} feature of a verb in its root form is {\bf base}. The {\bf $\langle$mode$\rangle$} feature of a verb in its past participial form is {\bf ppart}, the {\bf $\langle$mode$\rangle$} feature of a verb in its progressive/gerundive form is {\bf ger}, the {\bf $\langle$mode$\rangle$} feature of a tensed verb is {\bf ind}, and the {\bf $\langle$mode$\rangle$} feature of a verb in the imperative is {\bf imp}. \noindent {\bf nom} is the {\bf $\langle$mode$\rangle$} value of AP/NP predicative trees headed by a null copula. {\bf prep} is the {\bf $\langle$mode$\rangle$} value of PP predicative trees headed by a null copula. Only the copula auxiliary tree, some sentential complement verbs (such as {\it consider}), and raising verb auxiliary trees have {\bf nom/prep} as the {\bf $\langle$mode$\rangle$} feature specification of their foot node. This allows them, and only them, to adjoin onto AP/NP/PP predicative trees with null copulas.
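This selectivity amounts to a membership check on the foot node's {\bf $\langle$mode$\rangle$} specification. A minimal Python sketch (the function and the particular foot-node sets are invented for illustration; only the mode values come from the text):

```python
def can_adjoin(foot_modes, clause_mode):
    """An auxiliary's foot node must accept the clause's <mode> value."""
    return clause_mode in foot_modes

# Illustrative foot-node <mode> specifications (not exhaustive):
copula_foot  = {"nom", "prep"}   # copula / raising-verb auxiliary trees
perfect_have = {"ppart"}         # perfect 'have'
modal_foot   = {"base"}          # modals, 'do', 'to'

# A null-copula AP predicative clause has mode nom,
# e.g. '[John smart]' in 'I consider [John smart]'.
null_copula_clause = "nom"

assert can_adjoin(copula_foot, null_copula_clause)       # 'John is smart'
assert not can_adjoin(perfect_have, null_copula_clause)  # '*John has smart'
assert not can_adjoin(modal_foot, null_copula_clause)    # '*John must smart'
```

Only the trees whose foot node lists {\bf nom/prep} pass the check, which is exactly the restriction stated above.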
\subsection{Auxiliary Selection} The {\bf $\langle$mode$\rangle$} feature is also used to state the subcategorizational constraints between an auxiliary verb and its complement. We model the following constraints:\\ {\em have} takes past participial complements\\ passive {\em be} takes past participial complements\\ active {\em be} takes progressive complements\\ modal verbs, {\em do}, and {\em to} take VPs headed by verbs in their base form as their complements. An auxiliary verb transmits its own mode to its root and imposes its subcategorizational restrictions on its complement i.e. on its foot node. e.g. the auxiliary {\em have} in its infinitival form involves the following equations: \enumsentence{ {\bf VP$_{r}$.b:$\langle$mode$\rangle =$ V.t:$\langle$mode$\rangle$}\label{mode1}} \enumsentence{ {\bf V.t:$\langle$mode$\rangle =$ base}\label{mode2}} \enumsentence{ {\bf VP.b:$\langle$mode$\rangle =$ ppart}\label{mode3}} \noindent {\bf $\langle$passive$\rangle$}: This feature is used to ensure that passives only have {\em be} as their auxiliary. Passive trees start out with their {\bf $\langle$passive$\rangle$} feature as {\bf +}. This feature starts out at the level of the verb and is percolated up to the level of the VP. This ensures that only auxiliary verbs whose foot node has {\bf +} as their {\bf $\langle$passive$\rangle$} feature can adjoin on a passive. Passive trees have {\bf ppart} as the value of their {\bf $\langle$mode$\rangle$} feature. So the only auxiliary trees that we really have to worry about blocking are trees whose foot nodes have {\bf ppart} as the value of their {\bf $\langle$mode$\rangle$} feature. There are two such trees -- the {\em be} tree and the {\em have} tree. 
The {\em be} tree is fine because its foot node has {\bf +} as its {\bf $\langle$passive$\rangle$} feature, so both the {\bf $\langle$passive$\rangle$} and {\bf $\langle$mode$\rangle$} values unify; the {\em have} tree is blocked because its foot node has {\bf --} as its {\bf $\langle$passive$\rangle$} feature. \section{Relative Clauses} Features that are peculiar to the relative clause system are:\\ {\bf $\langle$select-mode$\rangle$}, possible values are {\bf ind, inf, ppart, ger}\\ {\bf $\langle$rel-pron$\rangle$}, possible values are {\bf ppart, ger, adj-clause}\\ {\bf $\langle$rel-clause$\rangle$}, possible values are {\bf +/--} \noindent {\bf $\langle$select-mode$\rangle$}:\\ Comps are lexically specified for {\bf $\langle$select-mode$\rangle$}. In addition, the {\bf $\langle$select-mode$\rangle$} feature of a Comp is equated to the {\bf $\langle$mode$\rangle$} feature of its sister S node by the following equation: \enumsentence{ {\bf Comp.t:$\langle$select-mode$\rangle =$ S$_{t}$.t:$\langle$mode$\rangle$}} The lexical specifications of the Comps are shown below: \begin{itemize} \item $\epsilon$$_{C}$, {\bf Comp.t:$\langle$select-mode$\rangle =$ind/inf/ger/ppart} \item {\em that}, {\bf Comp.t:$\langle$select-mode$\rangle =$ind} \item {\em for}, {\bf Comp.t:$\langle$select-mode$\rangle =$inf} \end{itemize} \noindent {\bf $\langle$rel-pron$\rangle$}:\\ There are additional constraints on where the null Comp $\epsilon$$_{C}$ can occur. The null Comp is not permitted in cases of subject extraction unless there is an intervening clause or the relative clause is a reduced relative ({\bf mode = ppart/ger}). To model this paradigm, the feature {\bf $\langle$rel-pron$\rangle$} is used in conjunction with the following equations.
\enumsentence{ {\bf S$_{r}$.t:$\langle$rel-pron$\rangle =$ Comp.t:$\langle$rel-pron$\rangle$}} \enumsentence{ {\bf S$_{r}$.b:$\langle$rel-pron$\rangle =$ S$_{r}$.b:$\langle$mode$\rangle$}} \enumsentence{ {\bf Comp.b:$\langle$rel-pron$\rangle =$ppart/ger/adj-clause} (for $\epsilon$$_{C}$)} The full set of the equations above is only present in Comp substitution trees involving subject extraction. So the following will not be ruled out. \enumsentence{ the toy [$\epsilon$$_{i}$ [$\epsilon$$_{C}$ [ Dafna likes t$_{i}$ ]]] } The feature mismatch induced by the above equations is not remedied by adjunction of just any S-adjunct because all other S-adjuncts are transparent to the {\bf $\langle$rel-pron$\rangle$} feature because of the following equation: \enumsentence{ {\bf S$_{m}$.b:$\langle$rel-pron$\rangle =$ S$_{f}$.t:$\langle$rel-pron$\rangle$}} \noindent {\bf $\langle$rel-clause$\rangle$}:\\ The XTAG analysis forces the adjunction of the determiner below the relative clause. This is done by using the {\bf $\langle$rel-clause$\rangle$} feature. The relevant equations are: \enumsentence{ On the root of the RC: {\bf NP$_{r}$.b:$\langle$rel-clause$\rangle = +$}} \enumsentence{ On the foot node of the Determiner tree: {\bf NP$_{f}$.t:$\langle$rel-clause$\rangle = -$}} \section{Complementizer Selection} The following features are used to ensure the appropriate distribution of complementizers: \\ {\bf $\langle$comp$\rangle$}, possible values: {\bf that, if, whether, for, rel, inf\_nil, ind\_nil, nil}\\ {\bf $\langle$assign-comp$\rangle$}, possible values: {\bf that, if, whether, for, ecm, rel, ind\_nil, inf\_nil, none}\\ {\bf $\langle$mode$\rangle$}, possible values: {\bf ind, inf, sbjnct, ger, base, ppart, nom, prep}\\ {\bf $\langle$wh$\rangle$}, possible values: {\bf +, --} The value of the {\bf $\langle$comp$\rangle$} feature tells us what complementizer we are dealing with. 
The trees which introduce complementizers come specified from the lexicon with their {\bf $\langle$comp$\rangle$} feature and {\bf $\langle$assign-comp$\rangle$} feature. The {\bf $\langle$comp$\rangle$} of the Comp tree regulates what kind of tree goes above the Comp tree, while the {\bf $\langle$assign-comp$\rangle$} feature regulates what kind of tree goes below. e.g. the following equations are used for {\em that}: \enumsentence{ {\bf S$_{c}$.b:$\langle$comp$\rangle =$ Comp.t:$\langle$comp$\rangle$} } \enumsentence{ {\bf S$_{c}$.b:$\langle$wh$\rangle =$ Comp.t:$\langle$wh$\rangle$}} \enumsentence{ {\bf S$_{c}$.b:$\langle$mode$\rangle =$ ind/sbjnct}} \enumsentence{ {\bf S$_{r}$.t:$\langle$assign-comp$\rangle =$ Comp.t:$\langle$comp$\rangle$}} \enumsentence{ {\bf S$_{r}$.b:$\langle$comp$\rangle =$ nil}} By specifying {\bf S$_{r}$.b:$\langle$comp$\rangle =$ nil}, we ensure that complementizers do not adjoin onto other complementizers. The root node of a complementizer tree always has its {\bf $\langle$comp$\rangle$} feature set to a value other than {\bf nil}. Trees that take clausal complements specify with the {\bf $\langle$comp$\rangle$} feature on their foot node what kind of complementizer(s) they can take. The {\bf $\langle$assign-comp$\rangle$} feature of an S node is determined by the highest VP below the S node and the syntactic configuration the S node is in. \subsection{Verbs with object sentential complements} Finite sentential complements: \enumsentence{ {\bf S$_{1}$.t:$\langle$comp$\rangle =$ that/whether/if/nil}} \enumsentence{{\bf S$_{1}$.t:$\langle$mode$\rangle =$ ind/sbjnct} or {\bf S$_{1}$.t:$\langle$mode$\rangle =$ ind}} \enumsentence{ {\bf S$_{1}$.t:$\langle$assign-comp$\rangle =$ ind\_nil/inf\_nil}} The presence of an overt complementizer is optional. 
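These foot-node restrictions amount to a membership test: each feature on the embedding verb's foot node lists the values the complement clause may carry. A hypothetical Python sketch (the verb entries and feature sets are invented for illustration, not XTAG lexical entries):

```python
def accepts(foot, clause):
    """Each foot-node feature lists the values the clause may carry."""
    return all(clause[feat] in allowed for feat, allowed in foot.items())

# Illustrative foot-node specifications:
think_foot = {"comp": {"that", "nil"}, "mode": {"ind"}}   # finite complements
want_foot  = {"comp": {"for", "nil"}, "mode": {"inf"}}    # infinitival, 'for' OK

that_clause = {"comp": "that", "mode": "ind"}   # 'that Bill left'
for_clause  = {"comp": "for",  "mode": "inf"}   # 'for Bill to leave'

assert accepts(think_foot, that_clause)       # 'think that Bill left'
assert not accepts(think_foot, for_clause)    # '*think for Bill to leave'
assert accepts(want_foot, for_clause)         # 'want for Bill to leave'
assert not accepts(want_foot, that_clause)    # '*want that Bill left'
```

The sub-cases that follow (finite, non-finite with and without {\em for}, ECM) differ only in which value sets appear on the foot node.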
Non-finite sentential complements that do not permit {\em for}: \enumsentence{ {\bf S$_{1}$.t:$\langle$comp$\rangle =$ nil}} \enumsentence{ {\bf S$_{1}$.t:$\langle$mode$\rangle =$ inf}} \enumsentence{ {\bf S$_{1}$.t:$\langle$assign-comp$\rangle =$ ind\_nil/inf\_nil} } Non-finite sentential complements that permit {\em for}: \enumsentence{ {\bf S$_{1}$.t:$\langle$comp$\rangle =$ for/nil}} \enumsentence{ {\bf S$_{1}$.t:$\langle$mode$\rangle =$ inf}} \enumsentence{ {\bf S$_{1}$.t:$\langle$assign-comp$\rangle =$ ind\_nil/inf\_nil}} Cases like `*I want for to win' are independently ruled out due to a case feature clash between the {\bf acc} assigned by {\em for} and the intrinsic case feature {\bf none} on the PRO. Non-finite sentential complements (ECM): \enumsentence{ {\bf S$_{1}$.t:$\langle$comp$\rangle =$ nil}} \enumsentence{ {\bf S$_{1}$.t:$\langle$mode$\rangle =$ inf}} \enumsentence{ {\bf S$_{1}$.t:$\langle$assign-comp$\rangle =$ ecm}} \subsection{Verbs with sentential subjects} The following contrast involving complementizers surfaces with sentential subjects: \enumsentence{ *(That) John is crazy is likely.} Indicative sentential subjects obligatorily have complementizers, while infinitival sentential subjects may or may not have a complementizer. Also, {\em if} is possible as the complementizer of an object clause but not as the complementizer of a sentential subject. \enumsentence{ {\bf S$_{0}$.t:$\langle$comp$\rangle =$ that/whether/for/nil}} \enumsentence{ {\bf S$_{0}$.t:$\langle$mode$\rangle =$ inf/ind}} \enumsentence{ {\bf S$_{0}$.t:$\langle$assign-comp$\rangle =$ inf\_nil}} If the sentential subject is finite and a complementizer does not adjoin in, the {\bf $\langle$assign-comp$\rangle$} feature of the S$_{0}$ node of the embedding clause and the root node of the embedded clause will fail to unify.
If a complementizer adjoins in, there will be no feature-mismatch because the root of the complementizer tree is not specified for the {\bf $\langle$assign-comp$\rangle$} feature. The {\bf $\langle$comp$\rangle$} feature {\bf nil} is split into two {\bf $\langle$assign-comp$\rangle$} features, {\bf ind\_nil} and {\bf inf\_nil}, to capture the fact that there are certain configurations in which it is acceptable for an infinitival clause to lack a complementizer but not acceptable for an indicative clause to lack a complementizer. \subsection{{\em That}-trace and {\em for}-trace effects} \enumsentence{ Who$_{i}$ do you think (*that) t$_{i}$ ate the apple?} {\em That}-trace violations are blocked by the presence of the following equation: \enumsentence{ {\bf S$_{r}$.b:$\langle$assign-comp$\rangle =$ inf\_nil/ind\_nil/ecm}} on the bottom of the S$_{r}$ nodes of trees with extracted subjects (W0). The {\bf ind\_nil} feature specification permits the above example while the {\bf inf\_nil/ecm} feature specification allows the following examples to be derived: \enumsentence{ Who$_{i}$ do you want [ t$_{i}$ to win the World Cup]?} \enumsentence{ Who$_{i}$ do you consider [ t$_{i}$ intelligent]?} The feature equation that rules out {\em that}-trace violations also serves to rule out {\em for}-trace violations. \section{Determiner ordering} {\bf $\langle$card$\rangle$}, possible values are {\bf +, --}\\ {\bf $\langle$compl$\rangle$}, possible values are {\bf +, --}\\ {\bf $\langle$const$\rangle$}, possible values are {\bf +, --}\\ {\bf $\langle$decreas$\rangle$}, possible values are {\bf +, --}\\ {\bf $\langle$definite$\rangle$}, possible values are {\bf +, --}\\ {\bf $\langle$gen$\rangle$}, possible values are {\bf +, --}\\ {\bf $\langle$quan$\rangle$}, possible values are {\bf +, --} For detailed discussion see Chapter \ref{det-comparitives}. \section{Punctuation} {\bf $\langle$punct$\rangle$} is a complex feature.
It has the following as its subfeatures:\\ {\bf $\langle$punct bal$\rangle$}, possible values are {\bf dquote, squote, paren, nil}\\ {\bf $\langle$punct contains colon$\rangle$}, possible values are {\bf +, --}\\ {\bf $\langle$punct contains dash$\rangle$}, possible values are {\bf +, --}\\ {\bf $\langle$punct contains dquote$\rangle$}, possible values are {\bf +, --}\\ {\bf $\langle$punct contains scolon$\rangle$}, possible values are {\bf +, --}\\ {\bf $\langle$punct contains squote$\rangle$}, possible values are {\bf +, --}\\ {\bf $\langle$punct struct$\rangle$}, possible values are {\bf comma, dash, colon, scolon, none, nil}\\ {\bf $\langle$punct term$\rangle$}, possible values are {\bf per, qmark, excl, none, nil} For detailed discussion see Chapter~\ref{punct-chapt}. \section{Conjunction} {\bf $\langle$conj$\rangle$}, possible values are {\bf but, and, or, comma, scolon, to, disc, nil}\\ The {\bf $\langle$conj$\rangle$} feature is specified in the lexicon for each conjunction and is passed up to the root node of the conjunction tree. If the conjunction is {\em and}, the root {\bf $\langle$agr num$\rangle$} is {\bf plur}, no matter what the number of the two conjuncts. With {\em or}, the root {\bf $\langle$agr num$\rangle$} is equated to the {\bf $\langle$agr num$\rangle$} feature of the right conjunct. The {\bf $\langle$conj$\rangle$=disc} feature is only used at the root of the $\beta$CONJs tree. It blocks the adjunction of one $\beta$CONJs tree on another.
The following equations are used, where S$_{r}$ is the substitution node and S$_{c}$ is the root node: \enumsentence{ S$_{r}$.t:$\langle$conj$\rangle$ = disc} \enumsentence{ S$_{c}$.b:$\langle$conj$\rangle$ = and/or/but/nil} \section{Comparatives} {\bf $\langle$compar$\rangle$}, possible values are {\bf +, --}\\ {\bf $\langle$equiv$\rangle$}, possible values are {\bf +, --}\\ {\bf $\langle$super$\rangle$}, possible values are {\bf +, --} For detailed discussion see Chapter~\ref{compars-chapter}. \section{Control} {\bf $\langle$control$\rangle$} has no value and is used only for indexing purposes. The root node of every clausal tree has its {\bf $\langle$control$\rangle$} feature coindexed with the control feature of its subject. This allows adjunct control to take place. In addition, clauses that take infinitival clausal complements have the control feature of their subject/object coindexed with the control feature of their complement clause S, depending upon whether they are subject control verbs or object control verbs respectively. \section{Other Features} {\bf $\langle$neg$\rangle$}, possible values are {\bf +, --}\\ Used for controlling the interaction of negation and auxiliary verbs. \noindent {\bf $\langle$pred$\rangle$}, possible values are {\bf +, --}\\ The {\bf $\langle$pred$\rangle$} feature is used in the following tree families: Tnx0N1.trees and Tnx0nx1ARB.trees. In the Tnx0N1.trees family, the following equations are used:\\ for $\alpha$W1nx0N1: \enumsentence{ NP$_{1}$.t:$\langle$pred$\rangle$ = +} \enumsentence{ NP$_{1}$.b:$\langle$pred$\rangle$ = +} \enumsentence{ NP.t:$\langle$pred$\rangle$ = +} \enumsentence{ N.t:$\langle$pred$\rangle$ = NP.b:$\langle$pred$\rangle$} This is the only tree in this tree family to use the {\bf $\langle$pred$\rangle$} feature. The other tree family where the {\bf $\langle$pred$\rangle$} feature is used is Tnx0nx1ARB.trees. 
Within this family, this feature (and the following equations) are used only in the $\alpha$W1nx0nx1ARB tree. \enumsentence{ AdvP$_{1}$.t:$\langle$pred$\rangle$ = +} \enumsentence{ AdvP$_{1}$.b:$\langle$pred$\rangle$ = +} \enumsentence{ NP.t:$\langle$pred$\rangle$ = +} \enumsentence{ AdvP.b:$\langle$pred$\rangle$ = NP.t:$\langle$pred$\rangle$} \noindent {\bf $\langle$pron$\rangle$}, possible values are {\bf +, --}\\ This feature indicates whether a particular NP is a pronoun or not. Certain constructions which do not permit pronouns use this feature to block pronouns. \noindent {\bf $\langle$tense$\rangle$}, possible values are {\bf pres, past}\\ It does not seem to be the case that the {\bf $\langle$tense$\rangle$} feature interacts with other features/syntactic processes. It comes from the lexicon with the verb and is transmitted up the tree in such a way that the root S node ends up with the tense feature of the highest verb in the tree. The equations used for this purpose are: \enumsentence{ {\bf S$_{r}$.b:$\langle$tense$\rangle$ = VP.t:$\langle$tense$\rangle$}} \enumsentence{ {\bf VP.b:$\langle$tense$\rangle$ = V.t:$\langle$tense$\rangle$}} {\bf $\langle$trans$\rangle$}, possible values are {\bf +, --}\\ Many but not all English verbs can anchor both transitive and intransitive trees. \enumsentence{ The sun melted the ice cream.} \enumsentence{ The ice cream melted.} \enumsentence{ Elmo borrowed a book.} \enumsentence{ * A book borrowed.} Transitive trees have the {\bf $\langle$trans$\rangle$} feature of their anchor set to {\bf +} and intransitive trees have the {\bf $\langle$trans$\rangle$} feature of their anchor set to {\bf --}. Verbs such as {\em melt} which can occur in both transitive and intransitive trees come unspecified for the {\bf $\langle$trans$\rangle$} feature from the lexicon. Verbs which can only occur in transitive trees e.g.
{\em borrow} have their {\bf $\langle$trans$\rangle$} feature specified in the lexicon as {\bf +} thus blocking their anchoring of an intransitive tree. \chapter{Future Work} \label{future-work} \section{Adjective ordering} At this point, the treatment of adjectives in the XTAG English grammar does not include selectional or ordering restrictions.\footnote{This section is a repeat of information found in section~\ref{adj-modifier}.} Consequently, any adjective can adjoin onto any noun and on top of any other adjective already modifying a noun. All of the modified noun phrases shown in (\ex{1})-(\ex{4}) currently parse. \enumsentence{big green bugs} \enumsentence{big green ideas} \enumsentence{colorless green ideas} \enumsentence{$\ast$green big ideas} While (\ex{-2})-(\ex{0}) are all semantically anomalous, (\ex{0}) also suffers from an ordering problem that makes it seem ungrammatical as well. Since the XTAG grammar focuses on syntactic constructions, it should accept (\ex{-3})-(\ex{-1}) but not (\ex{0}). Both the auxiliary and determiner ordering systems are structured on the idea that certain types of lexical items (specified by features) can adjoin onto some types of lexical items, but not others. We believe that an analysis of adjectival ordering would follow the same type of mechanism. \section{More work on Determiners} In addition to the analysis described in Chapter~\ref{det-comparitives}, there remains work to be done to complete the analysis of determiner constructions in English.\footnote{This section is from \cite{ircs:det98}.} Although constructions such as determiner coordination are easily handled if overgeneration is allowed, blocking sequences such as {\it one and some} while allowing sequences such as {\it five or ten} still remains to be worked out. There are still a handful of determiners that are not currently handled by our system. 
We do not have an analysis to handle {\it most}, {\it such}, {\it certain}, {\it other} and {\it own}\footnote{The behavior of {\it own} is sufficiently unlike other determiners that it most likely needs a tree of its own, adjoining onto the right-hand side of genitive determiners.}. In addition, there is a set of lexical items that we consider adjectives ({\it enough}, {\it less}, {\it more} and {\it much}) that have the property that they cannot co-occur with determiners. We feel that a complete analysis of determiners should be able to account for this phenomenon as well. \section{{\it -ing} adjectives} An analysis has already been provided for past participial ({\it -ed}) adjectives (as in sentence~(\ex{1})), which are restricted to the Transitive Verb family.\footnote{This analysis may need to be extended to the Transitive Verb particle family as well.} A similar analysis is needed for the present participle~({\it -ing}) used as a pre-nominal modifier. This type of adjective, however, does not seem to be as restricted as the~{\it -ed} adjectives, since verbs in other tree families seem to exhibit this alternation as well (e.g. sentences~(\ex{2}) and (\ex{3})). \enumsentence{The murdered man was a doctoral student at UPenn .} \enumsentence{The man died .} \enumsentence{The dying man pleaded for his life .} \section{Verb selectional restrictions} Although we explicitly do not want to model semantics in the XTAG grammar, there is some work along the syntax/semantics interface that would help reduce syntactic ambiguity and thus decrease the number of semantically anomalous parses. In particular, verb selectional restrictions, especially for PP arguments and adjuncts, would be quite useful. With the exception of the required {\it to} in the Ditransitive with PP Shift tree family (Tnx0Vnx1Pnx2), any preposition is allowed in the tree families that have prepositions as their arguments.
In addition, there are no restrictions as to which prepositions are allowed to adjoin onto a given verb. The sentences in (\ex{1})-(\ex{3}) are all currently accepted by the XTAG grammar. These violations are stronger than would be expected from purely semantic anomalies, however, and the presence of verb selectional restrictions on PP's would keep these sentences from being accepted. \enumsentence{\#survivors walked of the street .} \enumsentence{\#The man about the earthquake survived .} \enumsentence{\#The president arranged on a meeting .} \section{Thematic Roles} Elementary trees in TAGs capture several notions of locality, the most basic of which is locality of $\theta$-role assignment. Each elementary tree has associated with it the $\theta$-roles assigned by the anchor of that elementary tree. In the current XTAG system, while the notion of locality of $\theta$-role assignment within an elementary tree has been implicit, the $\theta$-roles assigned by a head have not been explicitly represented in the elementary tree. Incorporating $\theta$-role information will make the elementary trees more informative and will enable efficient pruning of spurious derivations when embedded into a specific context. In the case of a Synchronous TAG, $\theta$-roles can also be used to automatically establish links between two elementary trees, one in the object language and one in the target language. \chapter{Gerund NP's} \label{gerunds-chapter} There are two types of gerunds identified in the linguistics literature. One is the class of {\it derived nominalizations} (also called {\it nominal gerundives} or {\it action nominalizations}) exemplified in (\ex{1}), which instantiates the direct object within an {\it of} PP. The other is the class of so-called {\it sentential} or {\it VP gerundives} exemplified in (\ex{2}).
In the English XTAG grammar, the derived nominalizations are termed {\bf determiner gerunds}, and the sentential or VP gerunds are termed {\bf NP gerunds}. \enumsentence{Some think that {\bf the selling of bonds} is beneficial.} \enumsentence{Are private markets approving of {\bf Washington bashing Wall Street}?} Both types of gerunds exhibit a similar distribution, appearing in most places where NP's are allowed.\footnote{An exception is the NP positions in ``equative BE'' sentences, such as {\it John is my father}.} The bold face portions of sentences (\ex{1})--(\ex{3}) show examples of gerunds as a subject and as the object of a preposition. \enumsentence{{\bf Avoiding such losses} will take a monumental effort.} \enumsentence{{\bf Mr. Nolen's wandering} doesn't make him a weirdo.} \enumsentence{Are private markets approving of {\bf Washington bashing Wall Street}?} The motivation for splitting the gerunds into two classes is semantic as well as structural in nature. Semantically, the two gerunds are in sharp contrast with each other. NP gerunds refer to an action, i.e., a way of doing something, whereas determiner gerunds refer to a fact. Structurally, there are a number of properties (extensively discussed in \cite{Lees60}) that show that NP gerunds have the syntax of verbs, whereas determiner gerunds have the syntax of basic nouns. Firstly, the fact that the direct object of the determiner gerund can only appear within an {\it of} PP suggests that the determiner gerund, like nouns, is not a case assigner and needs insertion of the preposition {\it of} for assignment of case to the direct object. NP gerunds, like verbs, need no such insertion and can assign case to their direct object. Secondly, like nouns, only determiner gerunds can appear with articles (cf. examples (\ex{1}) and (\ex{2})). Thirdly, determiner gerunds, like nouns, can be modified by adjectives (cf. example (\ex{3})), whereas NP gerunds, like verbs, resist such modification (cf.
example (\ex{4})). Fourthly, nouns, unlike verbs, cannot co-occur with aspect (cf. examples (\ex{5}) and (\ex{6})). Finally, only NP gerunds, like verbs, can take adverbial modification (cf. examples (\ex{7}) and (\ex{8})). \enumsentence{\ldots the proving of the theorem\ldots. \hspace{1.0in} (det ger with article)} \enumsentence{* \ldots the proving the theorem\ldots. \hspace{1.0in} (NP ger with article)} \enumsentence{John's rapid writing of the book\ldots. \hspace{1.0in} (det ger with Adj)} \enumsentence{* John's rapid writing the book\ldots. \hspace{1.0in} (NP ger with Adj)} \enumsentence{* John's having written of the book\ldots. \hspace{1.0in} (det ger with aspect)} \enumsentence{John having written the book\ldots. \hspace{1.0in} (NP ger with aspect)} \enumsentence{* His writing of the book rapidly\ldots. \hspace{1.0in} (det ger with Adverb)} \enumsentence{His writing the book rapidly\ldots. \hspace{1.0in} (NP ger with Adverb)} In English XTAG, the two types of gerunds are assigned separate trees within each tree family, but in order to capture their similar distributional behavior, both are assigned NP as the category label of their top node. The feature {\bf gerund = +/--} distinguishes gerund NP's from regular NP's where needed.\footnote{This feature is also needed to restrict the selection of gerunds in NP positions. For example, the subject and object NP's in the ``equative BE'' tree (Tnx0BEnx1) cannot be filled by gerunds, and are therefore assigned the feature {\bf gerund = --}, which prevents gerunds (which have the feature {\bf gerund = +}) from substituting into these NP positions.} The determiner gerund and NP gerund trees are discussed in sections~\ref{detger-sec} and~\ref{NPger-sec}, respectively. \section{Determiner Gerunds} \label{detger-sec} The determiner gerund tree in Figure~\ref{detgerund-tree} is anchored by a V, capturing the fact that the gerund is derived from a verb.
The verb projects an N and instantiates the direct object as an {\it of} PP. The nominal category projected by the verb can now display all the syntactic properties of basic nouns, as discussed above. For example, it can be straightforwardly modified by adjectives; it cannot co-occur with aspect; and it can appear with articles. The only difference between the determiner gerund nominal and basic nominals is that the former cannot occur without a determiner, whereas the latter can. The determiner gerund tree therefore has an initial D modifying the N.\footnote{Note that the determiner can adjoin to the gerund only from {\it within} the gerund tree. Adjunction of determiners to the gerund root node is prevented by constraining determiners to only select NP's with the feature {\bf gerund = --}. This rules out sentences like {\it Private markets approved of (*the) [the selling of bonds]}.} It is used for gerunds such as the ones in bold face in sentences (\ex{1}), (\ex{2}) and (\ex{3}). \begin{figure}[htb] \centering \begin{tabular}{c} {\psfig{figure=ps/gerund-files/alphaDnx0Vnx1.ps,height=5.9in}}\\ $\alpha$Dnx0Vnx1\\ \end{tabular} \caption{Determiner Gerund tree from the transitive tree family: $\alpha$Dnx0Vnx1} \label{detgerund-tree} \label{2;12,1} \end{figure} The D node can take a simple determiner (cf. example (\ex{1})), a genitive pronoun (cf. example (\ex{2})), or a genitive NP (cf. example (\ex{3})).\footnote{The trees for genitive pronouns and genitive NP's have the root node labelled as D because they seem to function as determiners and as such, sequence with the rest of the determiners.
See Chapter~\ref{det-comparitives} for discussion on determiner trees.} \enumsentence{Some think that {\bf the selling of bonds} is beneficial.} \enumsentence{{\bf His painting of Mona Lisa} is highly acclaimed.} \enumsentence{Are private markets approving of {\bf Washington's bashing of Wall Street}?} \section{NP Gerunds} \label{NPger-sec} NP gerunds show a number of structural peculiarities, the crucial one being that they have the internal properties of sentences. In the English XTAG grammar, we adopt a position similar to that of \cite{Rosenbaum67} and \cite{Emonds70} -- that these gerunds are NP's exhaustively dominating a clause. Consequently, the tree assigned to the transitive NP gerund (cf. Figure~\ref{NPgerund-tree}) looks exactly like the declarative transitive tree for clauses except for the root node label and the feature values. The anchoring verb projects a VP. Auxiliary adjunction is allowed, subject to one constraint -- that the highest verb in the verbal sequence be in gerundive form, regardless of whether it is a main or auxiliary verb. This constraint is implemented by requiring the topmost VP node to have {\bf $<$mode$>$ = ger}. In the absence of any adjunction, the anchoring verb itself is forced to be gerundive. But if the verbal sequence has more than one verb, then the sequence and form of the verbs are limited by the restrictions that each verb in the sequence imposes on the succeeding verb. The nature of these restrictions for sentential clauses, and the manner in which they are implemented in XTAG, are both discussed in Chapter~\ref{auxiliaries}. The analysis and implementation discussed there differ from that required for gerunds in only one respect -- that the highest verb in the verbal sequence is required to have {\bf $<$mode$>$ = ger}.
\begin{figure}[htb] \centering \begin{tabular}{c} {\psfig{figure=ps/gerund-files/alphaGnx0Vnx1.ps,height=4.5in}}\\ $\alpha$Gnx0Vnx1\\ \end{tabular} \caption{NP Gerund tree from the transitive tree family: $\alpha$Gnx0Vnx1} \label{NPgerund-tree} \label{2;13,1} \end{figure} Additionally, the subject in the NP gerund tree is required to have {\bf $<$case$>$=acc/none/gen}, i.e., it can be either a PRO (cf. example \ex{1}), a genitive NP (cf. example \ex{2}), or an accusative NP (cf. example \ex{3}). The whole NP formed by the gerund can occur in either nominative or accusative position. \enumsentence{\ldots John does not like {\bf wearing a hat}.} \enumsentence{Are private markets approving of {\bf Washington's bashing Wall Street}?} \enumsentence{Mother disapproved of {\bf me wearing such casual clothes}.} One question that arises with respect to gerunds is whether there is anything special about their distribution as compared to other types of NP's. In fact, it appears that gerund NP's can occur in any NP position. Some verbs may seem not to accept gerund NP arguments readily, as in (\ex{1}) below, but we believe this to be a semantic incompatibility rather than a syntactic problem, since the same structures are possible with other lexical items. \enumsentence{? [$_{NP}$John's tinkering$_{NP}$] ran.} \enumsentence{[$_{NP}$John's tinkering$_{NP}$] worked.} By having the root node of gerund trees be NP, the gerunds have the same distribution as any other NP in the English XTAG grammar without doing anything exceptional. The clause structure is captured by the form of the trees and by inclusion in the tree families. \section{Gerund Passives} It was mentioned above that the NP gerunds display certain clausal properties. It is therefore not surprising that they too have their own set of transformationally related structures. For example, NP gerunds allow passivization just like their sentential counterparts (cf. examples (\ex{1}) and (\ex{2})).
\enumsentence{The lawyers objected to {\bf the slanderous book being written by John}.} \enumsentence{Susan could not forget {\bf having been embarrassed by the vicar}.} In the English XTAG grammar, gerund passives are treated almost exactly like sentential passives, and are assigned separate trees within the appropriate tree families. The passives occur in pairs, one with the {\it by} phrase, and another without it. There are two feature restrictions imposed on the passive trees: (a) only verbs with {\bf $<$mode$>$ = ppart} (i.e., verbs with passive morphology) can be the anchors, and (b) the highest verb in the verb sequence is required to have {\bf $<$mode$>$ = ger}. Together, the two restrictions ensure the selection of only those sequences of auxiliary verb(s) that select {\bf $<$mode$>$ = ppart} and {\bf $<$passive$>$ = +} (such as {\it being} or {\it having been} but NOT {\it having}). The passive trees are assumed to be related only to the NP gerund trees (and not the determiner gerund trees), since passive structures involve movement of some object to the subject position (in a movement analysis). Also, like the sentential passives, gerund passives are found in most tree families that have a direct object in the declarative tree. Figure~\ref{pass-trees} shows the gerund passive trees for the transitive tree family. \begin{figure}[htb] \centering \begin{tabular}{cc} {\psfig{figure=ps/gerund-files/alphaGnx1Vbynx0.ps,height=4.0in}}& {\psfig{figure=ps/gerund-files/alphaGnx1V.ps,height=4.0in}} \\ (a) $\alpha$Gnx1Vbynx0&(b) $\alpha$Gnx1V\\ \end{tabular} \caption{Passive Gerund trees from the transitive tree family: $\alpha$Gnx1Vbynx0 (a) and $\alpha$Gnx1V (b)} \label{pass-trees} \end{figure} \chapter{Getting Around} This technical report presents the English XTAG grammar as implemented by the XTAG Research Group at the University of Pennsylvania. The technical report is organized into four parts, plus a set of appendices.
Part 1 contains general information about the XTAG system and some of the underlying mechanisms that help shape the grammar. Chapter~\ref{intro-FBLTAG} contains an introduction to the formalism behind the grammar and parser, while Chapter~\ref{overview} contains information about the entire XTAG system. Linguists interested solely in the grammar of the XTAG system may safely skip Chapters~\ref{intro-FBLTAG} and \ref{overview}. Chapter~\ref{underview} contains information on some of the linguistic principles that underlie the XTAG grammar, including the distinction between complements and adjuncts, and how case is handled. The actual description of the grammar begins with Part 2 and is contained in the remaining three parts. Parts 2 and 3 contain information on the verb classes and the types of trees allowed within the verb classes, respectively, while Part 4 contains information on trees not included in the verb classes (e.g. NP's, PP's, various modifiers, etc.). Chapter~\ref{table-intro} of Part 2 contains a table that provides an overview of the verb classes and tree types, giving a graphical indication of which tree types are allowed in which verb classes. The table is cross-indexed to the tree figures shown in this technical report. Chapter~\ref{verb-classes} contains an overview of all of the verb classes in the XTAG grammar. The rest of Part 2 contains more details on several of the more interesting verb classes, including ergatives, sentential subjects, sentential complements, small classes, ditransitives, and it-clefts. Part 3 contains information on some of the tree types that are available within the verb classes. These tree types correspond to what would be transformations in a movement based approach. Not all of these types of trees are contained in all of the verb classes. The table (previously mentioned) in Part 2 contains a list of the tree types and indicates which verb classes each occurs in.
Part 4 focuses on the non-verb class trees in the grammar. NP's and determiners are presented in Chapter~\ref{det-comparitives}, while the various modifier trees are presented in Chapter~\ref{modifiers}. Auxiliary verbs, which are classed separately from the verb classes, are presented in Chapter~\ref{auxiliaries}, while certain types of conjunction are shown in Chapter~\ref{conjunction}. The XTAG treatment of comparatives is presented in Chapter~\ref{compars-chapter}, and our treatment of punctuation is discussed in Chapter~\ref{punct-chapt}. Throughout the technical report, mention is occasionally made of changes or analyses that we hope to incorporate in the future. Appendix~\ref{future-work} contains a list of these and other areas of future work. The appendices also contain information on some of the nitty-gritty details of the XTAG grammar: a system of metarules which can be used for grammar development and maintenance is described in Appendix~\ref{metarules}, a system for the organization of the grammar in terms of an inheritance hierarchy is presented in Appendix~\ref{lexorg}, the tree naming conventions used in XTAG are explained in detail in Appendix~\ref{tree-naming}, and a comprehensive list of the features used in the grammar is given in Appendix~\ref{features}. Appendix~\ref{evaluation} contains an evaluation of the XTAG grammar, including comparisons with other wide coverage grammars. \chapter{Imperatives} \label{imperatives} Imperatives in English do not require overt subjects. The subject in imperatives is second person, i.e.\ {\it you}, whether it is overt or not, as is clear from the verbal agreement and the interpretation. Imperatives with overt subjects can be parsed using the trees already needed for declaratives. The imperative cases in which the subject is not overt are handled by the imperative trees discussed in this section.
The imperative trees in the English XTAG grammar are identical to the corresponding declarative trees except that the NP$_{0}$ subject position is filled by an $\epsilon$, the NP$_{0}$ {\bf $<$agr~pers$>$} feature is set in the tree to the value {\bf 2nd} and the {\bf $<$mode$>$} feature on the root node has the value {\bf imp}. The value for {\bf $<$agr~pers$>$} is hardwired into the epsilon node and ensures the proper verbal agreement for an imperative. The {\bf $<$mode$>$} value of {\bf imp} on the root node is recognized as a valid mode for a matrix clause. The {\bf imp} value for {\bf $<$mode$>$} also allows imperatives to be blocked from appearing as embedded clauses. Figure \ref{alphaInx0Vnx1} is the imperative tree for the transitive tree family. \begin{figure}[htbp] \centering{ \begin{tabular}{c} \psfig{figure=ps/imperatives-files/alphaInx0Vnx1.ps,height=6in} \end{tabular} } \caption{Transitive imperative tree: $\alpha$Inx0Vnx1} \label{alphaInx0Vnx1} \label{2;11,1} \end{figure} \chapter{Feature-Based, Lexicalized Tree Adjoining Grammars} \label{intro-FBLTAG} The English grammar described in this report is based on the TAG formalism (\cite{joshi75}), which has been extended to include lexicalization (\cite{schabes88}) and unification-based feature structures (\cite{vijay91}). Tree Adjoining Languages (TALs) fall into the class of mildly context-sensitive languages, and as such are more powerful than context-free languages. The TAG formalism in general, and lexicalized TAGs in particular, are well-suited for linguistic applications. As first shown by \cite{joshi85} and \cite{kj87}, the properties of TAGs permit us to encapsulate diverse syntactic phenomena in a very natural way. For example, TAG's extended domain of locality and its factoring of recursion from local dependencies lead, among other things, to a localization of so-called unbounded dependencies. \section{TAG formalism} The primitive elements of the standard TAG formalism are known as elementary trees.
\xtagdef{Elementary trees} are of two types: initial trees and auxiliary trees (see Figure \ref{elem-fig}). In describing natural language, \xtagdef{initial trees} are minimal linguistic structures that contain no recursion, i.e. trees containing the phrasal structure of simple sentences, NP's, PP's, and so forth. Initial trees are characterized by the following: 1) all internal nodes are labeled by non-terminals, 2) all leaf nodes are labeled by terminals, or by non-terminal nodes marked for substitution. An initial tree is called an X-type initial tree if its root is labeled with type X. \begin{figure}[htb] \centering \psfig{figure=ps/intro-files/schematic-elem-trees.ps,height=1.9in} \caption{Elementary trees in TAG} \label{elem-fig} \end{figure} Recursive structures are represented by \xtagdef{auxiliary trees}, which represent constituents that are adjuncts to basic structures (e.g. adverbials). Auxiliary trees are characterized as follows: 1) all internal nodes are labeled by non-terminals, 2) all leaf nodes are labeled by terminals, or by non-terminal nodes marked for substitution, except for exactly one non-terminal node, called the foot node, which can only be used to adjoin the tree to another node\footnote{A null adjunction constraint (NA) is systematically put on the foot node of an auxiliary tree. This disallows adjunction of a tree onto the foot node itself.}, 3) the foot node has the same label as the root node of the tree. There are two operations defined in the TAG formalism, substitution\footnote{Technically, substitution is a specialized version of adjunction, but it is useful to make a distinction between the two.} and adjunction. In the \xtagdef{substitution} operation, the root node of an initial tree is merged into a non-terminal leaf node marked for substitution in another initial tree, producing a new tree. The root node and the substitution node must have the same label.
Figure \ref{proto-subst} shows two initial trees and the tree resulting from the substitution of one tree into the other. \begin{figure}[htb] \centering \psfig{figure=ps/intro-files/schematic-subst2.ps,height=1.9in} \caption{Substitution in TAG} \label{proto-subst} \end{figure} In an \xtagdef{adjunction} operation, an auxiliary tree is grafted onto a non-terminal node anywhere in an initial tree. The root and foot nodes of the auxiliary tree must match the node at which the auxiliary tree adjoins. Figure \ref{proto-adjunction} shows an auxiliary tree and an initial tree, and the tree resulting from an adjunction operation. \begin{figure}[htb] \centering \psfig{figure=ps/intro-files/schematic-adjunction2.ps,height=1.9in} \caption{Adjunction in TAG} \label{proto-adjunction} \end{figure} A TAG $G$ is a finite collection of initial trees, $I$, and auxiliary trees, $A$. The \xtagdef{tree set} of a TAG $G$, ${\cal T}(G)$ is defined to be the set of all derived trees starting from S-type initial trees in $I$ whose frontier consists of terminal nodes (all substitution nodes having been filled). The \xtagdef{string language} generated by a TAG, ${\cal L}(G)$, is defined to be the set of all terminal strings on the frontier of the trees in ${\cal T}(G)$. \section{Lexicalization} `Lexicalized' grammars systematically associate each elementary structure with a lexical anchor. This means that in each structure there is a lexical item that is realized. It does not mean simply adding feature structures (such as head) and unification equations to the rules of the formalism. These resultant elementary structures specify extended domains of locality (as compared to CFGs) over which constraints can be stated. Following \cite{schabes88} we say that a grammar is \xtagdef{lexicalized} if it consists of 1) a finite set of structures each associated with a lexical item, and 2) an operation or operations for composing the structures.
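The substitution and adjunction operations just described can be sketched as operations on simple labeled trees. The following Python fragment is purely illustrative: the Node class and helper functions are our own invention for exposition, not part of the XTAG implementation, and feature structures and adjoining constraints are ignored.

```python
# Illustrative sketch of TAG substitution and adjunction on labeled trees.
# The Node class and helpers are hypothetical, not the XTAG implementation;
# feature structures and adjoining constraints are ignored.

class Node:
    def __init__(self, label, children=None, subst=False, foot=False):
        self.label = label              # terminal or non-terminal symbol
        self.children = children or []
        self.subst = subst              # leaf marked for substitution
        self.foot = foot                # foot node of an auxiliary tree

def substitute(tree, site, initial):
    """Replace a substitution site with an initial tree of the same label."""
    assert site.subst and site.label == initial.label
    def walk(n):
        if n is site:
            return initial
        return Node(n.label, [walk(c) for c in n.children], n.subst, n.foot)
    return walk(tree)

def adjoin(tree, node, aux):
    """Graft an auxiliary tree at an internal node with a matching label;
    the subtree below that node moves under the auxiliary tree's foot."""
    def find_foot(n):
        if n.foot:
            return n
        for c in n.children:
            found = find_foot(c)
            if found:
                return found
    foot = find_foot(aux)
    assert aux.label == node.label == foot.label
    def walk(n):
        if n is node:
            foot.children = [node]  # simplistic: shares structure with inputs
            return aux
        return Node(n.label, [walk(c) for c in n.children], n.subst, n.foot)
    return walk(tree)

def frontier(n):
    """Terminal string on the frontier; unfilled substitution sites skipped."""
    if not n.children:
        return [] if (n.subst or n.foot) else [n.label]
    return [w for c in n.children for w in frontier(c)]
```

As the comments indicate, this sketch mutates the auxiliary tree and shares structure with its inputs; a real implementation would copy trees and enforce adjoining constraints.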
Each lexical item will be called the \xtagdef{anchor} of the corresponding structure, which defines the domain of locality over which constraints are specified. Note then, that constraints are local with respect to their anchor. Not every grammar is in a lexicalized form.\footnote{Notice the similarity of the definition of a lexicalized grammar with the off line parsability constraint (\cite{kaplan83}). As consequences of our definition, each structure has at least one lexical item (its anchor) attached to it and all sentences are finitely ambiguous.} In the process of lexicalizing a grammar, the lexicalized grammar is required to be strongly equivalent to the original grammar, i.e. it must produce not only the same language, but the same structures or tree set as well. \begin{figure*}[htb] \centering \begin{tabular}{ccccccc} {{\psfig{figure=ps/intro-files/john.ps,height=1.0in}}\label{fig1a}} & \hspace{0.1in} & {{\psfig{figure=ps/intro-files/walked.ps,height=1.4in}}\label{fig1b}} & \hspace{0.1in} & {{\psfig{figure=ps/intro-files/to.ps,height=1.7in}} \label{fig1c} } & \hspace{0.1in} & {{\psfig{figure=ps/intro-files/philly.ps,height=1.0in}} \label{fig1d}} \\ (a)&&(b)&&(c)&&(d)\\ \end{tabular}\\ \caption {Lexicalized Elementary trees} \label {lex-elem-trees} \end{figure*} In Figure \ref{lex-elem-trees}, which shows sample initial and auxiliary trees, substitution sites are marked by a $\downarrow$, and foot nodes are marked by an $\ast$. This notation is standard and is followed in the rest of this report. \section{Unification-based features} In a unification framework, a feature structure is associated with each node in an elementary tree. This feature structure contains information about how the node interacts with other nodes in the tree. It consists of a top part, which generally contains information relating to the supernode, and a bottom part, which generally contains information relating to the subnode. 
Substitution nodes, however, have only the top features, since the tree substituting in logically carries the bottom features. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/intro-files/schematic-feat-subst.ps,height=2.0in} \end{tabular} \caption{Substitution in FB-LTAG} \label{subst-fig} \end{figure} The notions of substitution and adjunction must be augmented to fit within this new framework. The feature structure of a new node created by substitution inherits the union of the features of the original nodes. The top feature of the new node is the union of the top features of the two original nodes, while the bottom feature of the new node is simply the bottom feature of the top node of the substituting tree (since the substitution node has no bottom feature). Figure \ref{subst-fig}\footnote{abbreviations in the figure: t$=$top feature structure, tr$=$top feature structure of the root, br$=$bottom feature structure of the root, U$=$unification} shows this more clearly. \begin{figure}[htb] \centering \begin{tabular}{c} \hspace{0.65in} \psfig{figure=ps/intro-files/schematic-feat-adjunction.ps,height=2.0in} \end{tabular} \caption{Adjunction in FB-LTAG} \label{adjunct-fig} \end{figure} Adjunction is only slightly more complicated. The node being adjoined into splits, and its top feature unifies with the top feature of the root adjoining node, while its bottom feature unifies with the bottom feature of the foot adjoining node. Again, this is easier shown graphically, as in Figure \ref{adjunct-fig}\footnote{abbreviations in the figure: t$=$top feature structure, b$=$bottom feature structure, tr$=$top feature structure of the root, br$=$bottom feature structure of the root, tf$=$top feature structure of the foot, bf$=$bottom feature structure of the foot, U$=$unification}. 
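The feature manipulations involved in substitution and adjunction can be made concrete with a small sketch. In the following illustrative Python fragment (our own simplification, not the XTAG implementation), feature structures are flattened to dictionaries, ignoring reentrancy and recursive structure:

```python
# Illustrative sketch of top/bottom feature handling in FB-LTAG.
# Flat dicts stand in for feature structures (no reentrancy);
# this is a simplification, not the XTAG implementation.

def unify(f, g):
    """Unify two flat feature structures; conflicting values fail."""
    out = dict(f)
    for feat, val in g.items():
        if feat in out and out[feat] != val:
            raise ValueError(f"unification failure on {feat}")
        out[feat] = val
    return out

def substitute_features(site_top, root_top, root_bottom):
    """Substitution: the new node's top is the unification of the site's
    top and the root's top; its bottom is simply the root's bottom, since
    a substitution node has no bottom feature."""
    return unify(site_top, root_top), dict(root_bottom)

def adjoin_features(node_top, node_bottom,
                    aux_root_top, aux_root_bottom,
                    aux_foot_top, aux_foot_bottom):
    """Adjunction: the node splits; its top unifies with the auxiliary
    root's top, and its bottom unifies with the auxiliary foot's bottom."""
    new_root = (unify(node_top, aux_root_top), dict(aux_root_bottom))
    new_foot = (dict(aux_foot_top), unify(node_bottom, aux_foot_bottom))
    return new_root, new_foot
```

A failed unification here models an adjunction that the grammar disallows; the feature values themselves would come from the trees and lexical entries.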
\begin{figure}[htbp] \centering \begin{tabular}{ccc} {\psfig{figure=ps/intro-files/think-feat.ps,height=6.5in}} & \hspace{0.6in} {\psfig{figure=ps/intro-files/want-feat.ps,height=6.5in}} \\ {\it think} tree&{\it want} tree\\ \end{tabular}\\ \caption {Lexicalized Elementary Trees with Features} \label {lex-with-features} \label{2;Tnx0Vs1} \end{figure} The embedding of the TAG formalism in a unification framework allows us to dynamically specify local constraints that would otherwise have had to be made statically within the trees. Constraints that verbs impose on their complements, for instance, can be implemented through the feature structures. The notions of Obligatory and Selective Adjunction, crucial to the formation of lexicalized grammars, can also be handled through the use of features.\footnote{The remaining constraint, Null Adjunction (NA), must still be specified directly on a node.} Perhaps more important to developing a grammar, though, is that the trees can serve as schemata to be instantiated with lexical-specific features when an anchor is associated with the tree. To illustrate this, Figure \ref{lex-with-features} shows the same tree lexicalized with two different verbs, each of which instantiates the features of the tree according to its lexical selectional restrictions. In Figure \ref{lex-with-features}, the lexical item {\it thinks} takes an indicative sentential complement, as in the sentence {\it John thinks that Mary loves Sally}. {\it Want} takes a sentential complement as well, but an infinitive one, as in {\it John wants to love Mary}. This distinction is easily captured in the features and passed to other nodes to constrain which trees this tree can adjoin into, both cutting down the number of separate trees needed and enforcing conceptual Selective Adjunctions (SA). \chapter{It-clefts} \label{it-clefts} There are several varieties of it-clefts in English.
All the it-clefts have four major components: \begin{itemize} \item {\bf the dummy subject:} {\it it}, \item {\bf the main verb:} {\it be}, \item {\bf the clefted element:} A constituent (XP) compatible with any gap in the clause, \item {\bf the clause:} A clause (e.g. S) with or without a gap. \end{itemize} \noindent Examples of it-clefts are shown in (\ex{1})-(\ex{4}). \enumsentence{it was [$_{XP}$ here $_{XP}$] [$_{S}$ that the ENIAC was created . $_{S}$]} \enumsentence{it was [$_{XP}$ at MIT $_{XP}$] [$_{S}$ that colorless green ideas slept furiously . $_{S}$]} \enumsentence{it is [$_{XP}$ happily $_{XP}$] [$_{S}$ that Seth quit Reality . $_{S}$]} \enumsentence{it was [$_{XP}$ there $_{XP}$] [$_{S}$ that she would have to enact her renunciation . $_{S}$]} The clefted element can be of a number of categories, for example NP, PP or adverb. The clause can also be of several types. The English XTAG grammar currently has a separate analysis for only a subset of the `specificational' it-clefts\footnote{See e.g. \cite{Ball91}, \cite{Delin89} and \cite{Delahunty84} for more detailed discussion of types of it-clefts.}, in particular the ones without gaps in the clause (e.g. (\ex{-1}) and (\ex{0})). It-clefts that have gaps in the clause, such as (\ex{-3}) and (\ex{-2}), are currently handled as relative clauses. Although arguments have been made against treating the clefted element and the clause as a constituent (\cite{Delahunty84}), the relative clause approach does capture the restriction that the clefted element must fill the gap in the clause, and does not require any additional trees. In the `specificational' it-cleft without gaps in the clause, the clefted element has the role of an adjunct with respect to the clause. For these cases the English XTAG grammar requires additional trees. These it-cleft trees are in separate tree families because, although some researchers (e.g.
\cite{Akmajian70}) derived it-clefts through movement from other sentence types, most current researchers (e.g. \cite{Delahunty84}, \cite{Knowles86}, \cite{gazdar85}, \cite{Delin89} and \cite{Sornicola88}) favor base-generation of the various cleft sentences. Placing the it-cleft trees in their own tree families is consistent with the current preference for base generation, since in the XTAG English grammar, structures that would be related by transformation in a movement-based account will appear in the same tree family. Like the base-generated approaches, the placement of it-clefts in separate tree families makes the claim that there is no derivational relation between it-clefts and other sentence types. The three it-cleft tree families are virtually identical except for the category label of the clefted element. Figure~\ref{pp-it-clefts} shows the declarative tree and an inverted tree for the PP It-cleft tree family. \begin{figure}[htb] \centering \begin{tabular}{ccc} {\psfig{figure=ps/it-cleft-files/alphaItVpnx1s2.ps,height=2.0in}} & \hspace*{0.5in} & {\psfig{figure=ps/it-cleft-files/alphaInvItVpnx1s2.ps,height=2.5in}} \\ (a)&\hspace*{0.5in}&(b)\\ \end{tabular} \caption{It-cleft with PP clefted element: $\alpha$ItVpnx1s2 (a) and $\alpha$InvItVpnx1s2 (b)} \label{pp-it-clefts} \label{1;1,3} \label{1;3,3} \end{figure} The extra layer of tree structure in the VP represents that, while {\it be} is a main verb rather than an auxiliary in these cases, it retains some auxiliary properties. The VP structure for the equative/it-cleft-{\it be} is identical to that obtained after adjunction of predicative-{\it be} into small-clauses.\footnote{For additional discussion of equative or predicative-{\it be} see Chapter~\ref{small-clauses}.} The inverted tree in Figure~\ref{pp-it-clefts}(b) is necessary because of {\it be}'s auxiliary-like behavior. Although {\it be} is the main verb in it-clefts, it inverts like an auxiliary. 
Main verb inversion cannot be accomplished by adjunction as is done with auxiliaries and therefore must be built into the tree family. The tree in Figure~\ref{pp-it-clefts}(b) is used for yes/no questions such as (\ex{1}). \enumsentence{was it in the forest that the wolf talked to the little girl ?} \chapter{Lexical Organization} \label{lexorg} \section{Introduction} An important characteristic of an FB-LTAG is that it is lexicalized, i.e., each lexical item is anchored to a tree structure that encodes subcategorization information. Trees with the same canonical subcategorizations are grouped into tree families. The reuse of tree substructures, such as wh-movement, in many different trees creates redundancy, which poses a problem for grammar development and maintenance \cite{vijay-schabes92}. To consistently implement a change in some general aspect of the design of the grammar, all the relevant trees currently must be inspected and edited. Vijay-Shanker and Schabes suggested the use of hierarchical organization and of tree descriptions to specify substructures that would be present in several elementary trees of a grammar. Since then, in addition to ourselves, Becker \cite{becker94}, Evans et al. \cite{Evans95}, and Candito \cite{Candito96} have developed systems for organizing the trees of a TAG which could be used for developing and maintaining grammars. Our system is based on the ideas expressed by Vijay-Shanker and Schabes \cite{vijay-schabes92}: to use partial-tree descriptions in specifying a grammar by separately defining pieces of tree structure that encode independent syntactic principles. Various individual specifications are then combined to form the elementary trees of the grammar. The chapter begins with a description of our grammar development system and its implementation. We will then show the main results of using this tool to generate the Penn English grammar as well as a Chinese TAG.
We describe the significant properties of both grammars, pointing out the major differences between them, and the methods by which our system is informed about these language-specific properties. The chapter ends with the conclusion and future work. \section{System Overview} In our approach, three types of components -- subcategorization frames, blocks and lexical redistribution rules -- are used to describe lexical and syntactic information. Actual trees are generated automatically from these abstract descriptions, as shown in Figure \ref{system-overview}. In maintaining the grammar, only the abstract descriptions ever need to be manipulated; the tree descriptions and the actual trees which they subsume are computed deterministically from these high-level descriptions. \begin{figure}[htb] \centerline{\psfig{figure=ps/lexorg/overview.eps,height=2.2in,width=4in,angle=0}} \caption{Lexical Organization: System Overview} \label{system-overview} \end{figure} \subsection{Subcategorization frames} Subcategorization frames specify the category of the main anchor, the number of arguments, each argument's category and position with respect to the anchor, and other information such as feature equations or node expansions. Each tree family has one canonical subcategorization frame. \subsection{Blocks} Blocks are used to represent the tree substructures that are reused in different trees, i.e. blocks subsume classes of trees. Each block includes a set of nodes; dominance, parent, and precedence relations between nodes; and feature equations. This follows the definition of the tree descriptions specified in a logical language patterned after Rogers and Vijay-Shanker \cite{rogers-vijay94}. Blocks are divided into two types according to their functions: subcategorization blocks and transformation blocks. The former describe structural configurations incorporating the information in a subcategorization frame.
For example, some of the subcategorization blocks used in the development of the English grammar are shown in Figure \ref{subcat-blocks}.\footnote{ In order to focus on the use of tree descriptions and to make the figures less cumbersome, we show only the structural aspects and do not show the feature value specification. The parent (immediate dominance) relationship is illustrated by a plain line and the dominance relationship by a dotted line. The arc between nodes shows that the precedence order of the nodes is unspecified. The nodes' categories are enclosed in parentheses.} When the subcategorization frame for a verb is given by the grammar developer, the system will automatically create a new block (of code) by essentially selecting the appropriate primitive subcategorization blocks corresponding to the argument information specified in that verb frame. The transformation blocks are used for various transformations such as wh-movement. These transformation blocks do not encode rules for modifying trees, but rather describe the properties of a particular syntactic construction. Figure \ref{trans-blocks} depicts our representation of phrasal extraction. This can be specialized to give the blocks for wh-movement, topicalization, relative clause formation, etc. For example, the wh-movement block is defined by further specifying that the ExtractionRoot is labeled S, the NewSite has a +wh feature, and so on. \begin{figure}[htb] \centerline{\psfig{figure=ps/lexorg/demosub.eps,height=2in,width=4in}} \caption{Some subcategorization blocks} \label{subcat-blocks} \end{figure} \begin{figure}[htb] \centerline{\psfig{figure=ps/lexorg/extract.eps,height=4.5in}} \caption{Transformation blocks for extraction} \label{trans-blocks} \end{figure} \subsection{Lexical Redistribution Rules (LRRs)} The third type of machinery available to a grammar developer is the Lexical Redistribution Rule (LRR).
An LRR is a pair ($r_{l}$, $r_{r}$) of subcategorization frames, which produces a new frame when applied to a subcategorization frame s, by first {\it matching}\footnote{Matching occurs successfully when frame s is compatible with $r_{l}$ in the type of anchors, the number of arguments, their positions, categories and features. In other words, incompatible features etc. will block certain LRRs from being applied.} the left frame $r_{l}$ of r against s, then combining the information in $r_{r}$ and s. LRRs are introduced to capture the connections between subcategorization frames. For example, most transitive verbs have a frame for the active (a subject and an object) and another frame for the passive, where the object in the former frame becomes the subject in the latter. An LRR, denoted the passive LRR, is built to produce the passive subcategorization frame from the active one. Similarly, applying the dative-shift LRR to the frame with one NP subject and two NP objects will produce a frame with an NP subject and a PP object. Besides their distinct content, LRRs and blocks also differ in several respects: \begin{itemize} \item They have different functionalities: Blocks represent the substructures that are reused in different trees. They are used to reduce the redundancy among trees; LRRs are introduced to incorporate the connections between closely related subcategorization frames. \item Blocks are strictly additive and can be added in any order. LRRs, on the other hand, produce different results depending on the order they are applied in, and are allowed to be non-additive, i.e., to remove information from the subcategorization frame they are being applied to, as in the derivation of passive from active.
\end{itemize} \begin{figure}[htb] \centerline{\psfig{figure=ps/lexorg/newelem.eps,height=2in,width=4in,angle=0}} \caption{Elementary trees generated from combining blocks} \label{elem} \end{figure} \subsection{Tree generation} To generate elementary trees, we begin with a canonical subcategorization frame. The system first generates related subcategorization frames by applying LRRs, then selects the subcategorization blocks corresponding to the information in each frame. The combinations of these blocks are further combined with the blocks corresponding to the various transformations, and finally a set of trees is generated from the combined blocks; these trees constitute the tree family for this subcategorization frame. Figure \ref{elem} shows some of the trees produced in this way. For instance, the last tree is obtained by incorporating information from the ditransitive verb subcategorization frame, applying the dative-shift and passive LRRs, and then combining them with the wh-non-subject extraction block. In addition, in our system the hierarchy of subcategorization frames is implicit, as shown in Figure \ref{lattice}. \begin{figure}[htb] \centerline{\psfig{figure=ps/lexorg/frame.eps,height=2.2in,angle=0}} \caption{Partial inheritance lattice in English} \label{lattice} \end{figure} \section{Implementation} The input to our system is the description of the language, which includes the subcategorization frame list, LRR list, subcategorization block list and transformation lists. The output is a list of trees generated automatically by the system, as shown in Figure \ref{impl}. The tree generation module is written in Prolog, and the rest of the system is written in C. We also have a graphical interface for entering the language description. Figures \ref{interface-firstlevel} and \ref{interface-block} are two snapshots of the interface.
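The generation pipeline just described (derive related frames with LRRs, then cross each frame with the applicable transformation blocks) can be sketched in miniature as follows. The dictionary-based frame encoding, the rule format, and all names here are illustrative assumptions made for the sketch, not the system's actual Prolog/C data structures.

```python
def apply_lrr(lrr, frame):
    """Apply one lexical redistribution rule (lhs, rhs) to a frame.
    Matching succeeds when every lhs entry is present in the frame;
    on success the rhs rewrites (and may drop) the matched parts,
    so LRRs can be non-additive, unlike blocks."""
    lhs, rhs = lrr
    if any(frame.get(k) != v for k, v in lhs.items()):
        return None                       # incompatible frame: rule blocked
    derived = {k: v for k, v in frame.items() if k not in lhs}
    derived.update(rhs)
    return derived

def generate_family(frame, lrrs, transformations):
    """Derive related frames via LRRs, then pair every frame with every
    transformation block; the result names the trees of the family."""
    frames = {"base": frame}
    for name, rule in lrrs.items():
        derived = apply_lrr(rule, frame)
        if derived is not None:
            frames[name] = derived
    return ["%s+%s" % (f, t) for f in frames for t in transformations]

# A transitive frame, and a passive LRR that promotes the object.
transitive = {"anchor": "V", "subj": "NP0", "obj": "NP1"}
passive = ({"subj": "NP0", "obj": "NP1"},   # lhs: what must match
           {"subj": "NP1"})                 # rhs: NP1 becomes subject, NP0 dropped

trees = generate_family(transitive, {"passive": passive},
                        ["declarative", "wh-subject"])
```

Applying the passive LRR to the transitive frame promotes NP1 to subject and drops NP0, mirroring the passive example in the text; crossing the two frames with two transformation blocks then yields four trees.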
\begin{figure}[htb] \centerline{\psfig{figure=ps/lexorg/impl.eps,height=4in}} \caption{Implementation of the system} \label{impl} \end{figure} \begin{figure}[htb] \centerline{\psfig{figure=ps/lexorg/java.interface.ps,height=3in,angle=90}} \caption{Interface for creating a grammar} \label{interface-firstlevel} \end{figure} \begin{figure}[htb] \centerline{\psfig{figure=ps/lexorg/java.block.ps,height=3in,angle=90}} \vspace{0.4in} \caption{Part of the Interface for creating blocks} \label{interface-block} \end{figure} \section{Generating grammars} We have used our tool to specify a grammar for English in order to produce the trees used in the current English XTAG grammar. We have also used our tool to generate a large grammar for Chinese. In designing these grammars, we have tried to specify the grammars to reflect the similarities and the differences between the languages. The major features of our specification of these two grammars\footnote{Both grammars are still under development, so the contents of these two tables might change considerably in the future according to the analyses we choose for certain phenomena. For example, most work on Chinese grammar treats the ba-construction as a kind of object-fronting in which the character {\it ba} is either an object marker or a preposition. According to this analysis, an LRR for the ba-construction is used in our grammar to generate the preverbal-object frame from the postverbal frame. However, there has been some argument for treating {\it ba} as a verb. If we later choose that analysis, the main verbs in the patterns ``NP0 VP'' and ``NP0 ba NP1 VP'' will be different, and therefore no LRR will be needed. As a result, the numbers of LRRs, subcat frames and trees generated will change accordingly.} are summarized in Tables \ref{table} and \ref{table2}.
\begin{table}[ht] \centering \begin{tabular}{|l|l|l|} \hline & English & Chinese \\ \hline examples & passive & bei-construction \\ of LRRs & dative-shift & object fronting \\ & ergative & ba-construction \\ \hline examples & wh-question & topicalization \\ of transformation & relativization & relativization \\ blocks & declarative & argument-drop \\ \hline \# LRRs & 6 & 12 \\ \hline \# subcat blocks & 34 & 24 \\ \hline \# trans blocks & 8 & 15 \\ \hline \# subcat frames & 43 & 23 \\ \hline \# trees generated & 638 & 280 \\ \hline \end{tabular} \\ \caption{Major features of English and Chinese grammars} \label{table} \end{table} \begin{table}[ht] \centering \begin{tabular}{|l|l|l|l|} \hline & both grammars & English & Chinese \\ \hline & causative & long passive & VO-inversion \\ LRRs & short passive & ergative & ba-const \\ & & dative-shift & \\ \hline & topicalization & & \\ trans blocks & relativization & gerund & argument-drop \\ & declarative & & \\ \hline & NP/S subject & & zero-subject \\ subcat blocks & S/NP/PP object & PL object & preverbal object \\ & V predicate & prep predicate & \\ \hline \end{tabular} \\ \caption{Comparison of the two grammars} \label{table2} \end{table} By focusing on the specification of individual grammatical information, we have been able to generate nearly all of the trees from the tree families used in the current English grammar developed at Penn\footnote{We have not yet attempted to extend our coverage to include punctuation, it-clefts, and a few idiosyncratic analyses.}. Our approach has also exposed certain gaps in the Penn grammar. We are encouraged by the utility of our tool and the ease with which this large-scale grammar was developed. We are currently working on expanding the contents of the subcategorization frames to include trees for other categories of words. For example, a frame which has no specifier and one NP complement and whose predicate is a preposition will correspond to the PP $\rightarrow$ P NP tree.
We will also introduce a modifier field and semantic features, so that head features will propagate from the modifiee to the modified node, while non-head features from the predicate as the head of the modifier will be passed to the modified node. \section{Summary} We have described a tool for grammar development in which tree descriptions are used to provide an abstract specification of the linguistic phenomena relevant to a particular language. In grammar development and maintenance, only the abstract specifications need to be edited, and any changes or corrections will automatically be propagated throughout the grammar. In addition to lightening the more tedious aspects of grammar maintenance, this approach also allows a unique perspective on the general characteristics of a language. Defining hierarchical blocks for the grammar both necessitates and facilitates an examination of the linguistic assumptions that have been made with regard to feature specification and tree-family definition. This can be very useful for gaining an overview of the theory that is being implemented and exposing gaps that remain unmotivated and need to be investigated. The types of gaps that can be exposed include a missing subcategorization frame that might arise from the automatic combination of blocks and which would correspond to an entire tree family, a missing tree which would represent a particular type of transformation for a subcategorization frame, or inconsistent feature equations. By focusing on syntactic properties at a higher level, our approach allows new opportunities for the investigation of how languages relate to themselves and to each other. \chapter{Metarules} \label{metarules} \section{Introduction} XTAG now has a collection of functions, accessible from the user interface, that help the user in the construction and maintenance of a TAG tree grammar. This subsystem is based on the idea of metarules (\cite{becker93}).
Here our primary purpose is to describe the facilities implemented under this metarule-based subsystem. For a discussion of metarules as a method for compact representation of the lexicon see \cite{becker93} and \cite{srini94}. The basic idea of using metarules is to take advantage of the similarities of the relations involving related pairs of XTAG elementary trees. For example, in the English grammar described in this technical report, comparing the XTAG trees for the basic form and the wh-moved-subject form, the relation between these two trees for transitive verbs ($\alpha nx_0Vnx_1$, $\alpha W_0nx_0Vnx_1$) is similar to the relation for the intransitive verbs ($\alpha nx_0V$, $\alpha W_0nx_0V$) and also to the relation for the ditransitives ($\alpha nx_0Vnx_1nx_2$, $\alpha W_0nx_0Vnx_1nx_2$). Hence, instead of generating the six trees mentioned above by hand, a more natural and robust approach would be to generate by hand only the basic trees for the intransitive, transitive and ditransitive cases, and to let the wh-moved-subject trees be automatically generated by the application of a single transformation rule that accounts exactly for the identical relation involved in each of the three pairs above. Notice that the degree of generalization can be much higher than the above paragraph might suggest. For example, once a rule for passivization is applied to the three different basic trees above, the wh-moved-subject rule could be applied again to generate the wh-moved-subject versions of the passive forms. Depending on the degree of regularity that one can find in the grammar being built, the reduction in the number of original trees can be exponential. We also stress that the reduction of effort in grammar construction is not the only advantage of the approach. Robustness, reliability and maintainability of the grammar achieved by the use of metarules are equally or even more important.
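As a toy illustration of this saving, the sketch below applies one hand-written rule to the three hand-written base trees to derive all three wh-moved-subject variants. The (label, children) tuple encoding of trees and the schematic rule body are assumptions made for the example; they are not XTAG's actual tree representation or its actual wh-movement transformation.

```python
def wh_subject_metarule(tree):
    """Schematically rewrite S(NP_0, ...) as S_q(NP_0[+wh], S(e, ...)).
    Returns None when the pattern does not match (the application fails)."""
    label, children = tree
    if label != "S" or not children or children[0] != "NP_0":
        return None
    return ("S_q", ["NP_0[+wh]", ("S", ["e"] + children[1:])])

# Three hand-written base trees: intransitive, transitive, ditransitive.
base_trees = {
    "alpha_nx0V":       ("S", ["NP_0", ("VP", ["V"])]),
    "alpha_nx0Vnx1":    ("S", ["NP_0", ("VP", ["V", "NP_1"])]),
    "alpha_nx0Vnx1nx2": ("S", ["NP_0", ("VP", ["V", "NP_1", "NP_2"])]),
}
# One rule captures the identical relation in all three pairs:
# three trees in, three derived trees out.
derived = {name: wh_subject_metarule(t) for name, t in base_trees.items()}
```

Composing such rules (e.g. passivization followed by the same wh rule) is what makes the reduction in hand-written trees potentially exponential.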
In the next section we define a metarule in XTAG. Section 3 gives some linguistically motivated examples of metarules for the English grammar described in this technical report and their application. Section 4 describes access through the user interface. \section{The definition of a metarule in XTAG} A metarule specifies a rule for transforming grammar rules into grammar rules. In XTAG the grammar rules are lexicalized trees. Hence an XTAG metarule {\bf mr} is a pair {\bf (lhs, rhs)} of XTAG trees, where: \begin{itemize} \item {\bf lhs}, the {\it left-hand side} of the metarule, is a pattern tree, i.e., it specifies the pattern to look for in the trees submitted to the application of the metarule. \item When a metarule {\bf mr} is applied to an input tree {\bf inp}, the first step is to verify whether the input tree matches the pattern specified by the {\bf lhs}. If there is no match, the application {\it fails}. \item {\bf rhs}, the {\it right-hand side} of the metarule, specifies (together with {\bf lhs}) the transformation that will be performed on {\bf inp} in case of a successful match, thus generating the output tree of the metarule application\footnote{Actually, more than one output tree can be generated from the successful application of a rule to an input tree, as will be seen below}. \end{itemize} \subsection{Node names, variable instantiation, and matches} We will use the terms ({\bf lhs}, {\bf rhs} and {\bf inp}) as introduced above to refer to the parts of a generic metarule being applied to an input tree. The nodes at {\bf lhs} can take three different forms: a constant node, a typed variable node, and a non-typed variable node. The naming conventions for these different classes of nodes are given below. \begin{itemize} \item {\bf Constant Node:} Its name must not begin with a question mark (`?' character).
They are the kind of names we expect in normal XTAG trees; for instance, {\bf inp} is expected to have only constant nodes. Some examples of constant nodes are $NP$, $V$, $NP_0$, $NP_1$, $S_r$. We will call the two parts that compose such names the {\it stem} and the {\it subscript}. In the examples above $NP$, $V$ and $S$ are stems and $0$, $1$, $r$ are subscripts. Notice that the subscript part can also be empty, as in two of the above examples. \item {\bf Non-Typed Variable Node:} Its name begins with a question mark (`?'), followed by a sequence of digits (i.e. a number) which uniquely identifies the variable. Examples: ?1, ?3, ?3452\footnote{Notice, however, that since the number's sole purpose is to distinguish between variables, one like that in the last example is not very likely to occur, and a metarule with more than three thousand variables can give you a place in the Guinness TagBook of Records.}. We assume that there is no stem and no subscript in these names, i.e., `?' is just a meta-character to introduce a variable, and the number is the variable identifier. \item {\bf Typed Variable Node:} Its name begins with a question mark (`?') followed by a sequence of digits, and is additionally followed by a {\it type specifiers definition}. A {\it type specifiers definition} is a sequence of one or more {\it type specifiers} separated by slashes (`/'). A {\it type specifier} has the same form as a regular XTAG node name (like the constant nodes), except that the subscript can also be a question mark. Examples of typed variables are: $?1VP$ (a single type specifier with stem $VP$ and no subscript), $?3NP_1/PP$ (two type specifiers, $NP_1$ and $PP$), $?1NP_?$ (one type specifier, $NP_?$, with undetermined subscript). We will see below that each type specifier represents an alternative for matching, and the presence of `?'
in subscript position of a type specifier means that matching will only check for the stem\footnote{This is different from not having a subscript, which is interpreted as checking that the input tree node has no subscript for matching}. \end{itemize} During the process of matching, variables are associated (we use the term {\it instantiated}) with `tree material'. According to its class a variable can be instantiated with different kinds of tree material: \begin{itemize} \item A typed variable will be instantiated with exactly one node of the input tree, which is in accordance with one of its type specifiers (the full rule is given in the following subsection). \item A non-typed variable will be instantiated with a range of subtrees. These subtrees will be taken from one of the nodes of the input tree {\bf inp}. Hence, there will be a node $n$ in {\bf inp}, with subtrees $n.t_1$, $n.t_2$, ..., $n.t_k$, in this order, where the variable will be instantiated with some subsequence of these subtrees (e.g., $n.t_2$, $n.t_3$, $n.t_4$). Note, however, that some of these subtrees may be incomplete, i.e., they may not go all the way to the bottom leaves. Entire subtrees may be removed. Actually, for each child of the non-typed variable node, one subtree that matches this child subtree will be removed from some $n.t_i$ (maybe an entire $n.t_i$), leaving in place a mark for inserting material during the substitution of occurrences at {\bf rhs}.\\ Notice also that the variable can be instantiated with a single tree or even with no tree. \end{itemize} We define a {\it match} to be a complete instantiation of all variables appearing in the metarule. In the process of matching, there may be several possible ways of instantiating the set of variables of the metarule, i.e., several possible matches. This is due to the presence of non-typed variables. Now, we are ready to define what we mean by successful matching.
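The node-naming conventions above can be summarized as a small classifier. This is only a sketch: the function name and the returned tuples are assumptions made for the example, not part of the XTAG implementation.

```python
import re

def classify_node(name):
    """Classify a metarule lhs node name. Returns ('constant', name),
    ('var', id) for a non-typed variable, or ('typed_var', id, specifiers)
    for a typed variable, where each specifier is a stem with an optional
    subscript ('?' as subscript means match the stem only)."""
    if not name.startswith("?"):
        return ("constant", name)           # e.g. NP, V, NP_0, S_r
    m = re.match(r"\?(\d+)(.*)$", name)
    if m is None:
        raise ValueError("variable name must be '?' plus digits: %r" % name)
    ident, rest = m.group(1), m.group(2)
    if not rest:
        return ("var", ident)               # non-typed: ?1, ?3, ?3452
    # Typed variable: alternative type specifiers separated by '/'.
    return ("typed_var", ident, rest.split("/"))
```

Each alternative returned for a typed variable corresponds to one way the variable may match an input node, which is checked by the type-compatibility conditions defined next.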
The process of matching is {\it successful} if the number of possible matches is greater than 0. When there is no possible match the process is said to {\it fail}. In addition to returning success or failure, the process also returns the set of all possible {\it matches}, which will be used for generating the output. \subsection{Structural Matching} The process of matching {\bf lhs} and {\bf inp} can be seen as a recursive procedure for matching trees, starting at their roots and proceeding top-down through their subtrees. In the explanation of this process that follows we use the term {\bf lhs} to refer not only to the whole tree that contains the pattern but also to any of its subtrees that is being considered in a given recursive step. The same applies to {\bf inp}. For now we ignore feature equations, which will be accounted for in the next subsection. The process described below returns at the end the set of matches (where an empty set means the same as failure). We first give one auxiliary definition, of a valid mapping, and one recursive function, Match, which matches lists of trees instead of trees; we then define the process of matching two trees as a special case of a call to Match. Given a list $list_{lhs}=[lhs_1, lhs_2, ..., lhs_l]$ of nodes of {\bf lhs} and a list $list_{inp}=[inp_1, inp_2, ..., inp_i]$ of nodes of {\bf inp}, we define a {\it mapping} from $list_{lhs}$ to $list_{inp}$ to be a function $Mapping$, that for each element of $list_{lhs}$ assigns a list of elements of $list_{inp}$, defined by the following condition: $$concatenation\ (Mapping(lhs_1),\ Mapping(lhs_2),\ ...,\ Mapping(lhs_l))\ =\ list_{inp},$$ i.e., the elements of $list_{inp}$ are split into sublists and assigned in order of appearance in the list to the elements of $list_{lhs}$.
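Concretely, the mappings from $list_{lhs}$ to $list_{inp}$ are just the ways of cutting $list_{inp}$ into $l$ consecutive, possibly empty sublists. A sketch of enumerating them (the function name and the dictionary representation are assumptions made for the example):

```python
from itertools import combinations_with_replacement

def mappings(list_lhs, list_inp):
    """Yield every mapping as a dict from lhs node to its (possibly empty)
    consecutive sublist of list_inp; the sublists concatenate, in order,
    to list_inp. Assumes the lhs nodes are distinct and hashable."""
    l, n = len(list_lhs), len(list_inp)
    # Choosing l-1 (possibly coinciding) cut positions in 0..n fixes a split.
    for cuts in combinations_with_replacement(range(n + 1), l - 1):
        bounds = (0,) + cuts + (n,)
        yield {lhs: list_inp[bounds[i]:bounds[i + 1]]
               for i, lhs in enumerate(list_lhs)}
```

For a two-element $list_{lhs}$ and a two-element $list_{inp}$ this yields three mappings, including the two in which one lhs node receives the empty list; in general there are $\binom{n+l-1}{l-1}$ mappings, which is why non-typed variables can give rise to several possible matches.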
We say that a mapping is a {\it valid mapping} if for all $j$, $1\leq j \leq l$ (where $l$ is the length of $list_{lhs}$), the following restrictions apply: \begin{enumerate} \item if $lhs_j$ is a constant node, then $Mapping(lhs_j)$ must have a single element, say, $rhs_{g(j)}$, and the two nodes must have the same name and agree on the markers (foot, substitution, head and NA), i.e., if $lhs_j$ is NA, then $rhs_{g(j)}$ must be NA, if $lhs_j$ has no markers, then $rhs_{g(j)}$ must have no markers, etc. \item if $lhs_j$ is a typed variable node, then $Mapping(lhs_j)$ must have a single element, say, $rhs_{g(j)}$, and $rhs_{g(j)}$ must be {\it marker-compatible} and {\it type-compatible} with $lhs_j$. \\ $rhs_{g(j)}$ is {\it marker-compatible} with $lhs_j$ if any marker (foot, substitution, head and NA) present in $lhs_j$ is also present in $rhs_{g(j)}$\footnote{Notice that, unlike the case for the constant node, the inverse is not required, i.e., if $lhs_j$ has no marker, $rhs_{g(j)}$ is still allowed to have some.}.\\ $rhs_{g(j)}$ is {\it type-compatible} with $lhs_j$ if there is at least one of the alternative type specifiers for the typed variable that satisfies the conditions below. \begin{itemize} \item $rhs_{g(j)}$ has the stem defined in the type specifier. \item if the type specifier does not have a subscript, then $rhs_{g(j)}$ must have no subscript. \item if the type specifier has a subscript different from `?', then $rhs_{g(j)}$ must have the same subscript as in the type specifier \footnote{If the type specifier has a `?' subscript, there is no restriction, and that is exactly its function: to allow the matching to be independent of the subscript}. \end{itemize} \item if $lhs_j$ is a non-typed variable node, then there is actually no requirement: $Mapping(lhs_j)$ may have any length and may even be empty.
\end{enumerate} The following algorithm, Match, takes as input a list of nodes of {\bf lhs} and a list of nodes of {\bf inp}, and returns the set of possible matches generated in the attempt to match these two lists. If the result is an empty set, this means that the matching failed. \begin{tabbing} 012\=0123\=0123\=0123\=0123\=0123\=0123\=0123\=0123\=0123\=0123\=0123\=012\kill \> Function Match ($list_{lhs}$, $list_{rhs}$) \\ \>\> Let $MAPPINGS$ be the list of all valid mappings from $list_{lhs}$ to $list_{rhs}$ \\ \>\> Make $MATCHES=\emptyset$ \\ \>\> For each mapping $Mapping\in MAPPINGS$ do: \\ \>\>\> Make $Matches=\{\emptyset \}$ \\ \>\>\> For each $j$, $1 \leq j \leq l$, where $l=length(list_{lhs})$, do: \\ \>\>\>\> if $lhs_j$ is a constant node, then \\ \>\>\>\>\> let \>$children_{lhs}$ be the list of children of $lhs_j$ \\ \>\>\>\>\> \>$lhr_{g(j)}$ be the single element in $Mapping(lhs_j)$ \\ \>\>\>\>\> \>$children_{rhs}$ be the list of children of $lhr_{g(j)}$ \\ \>\>\>\>\> Make $Matches=\{m\cup m_j\ \mid\ m\in Matches$ \\ \>\>\>\>\>\>\>\>\> $and\ m_j\in$ Match$(children_{lhs},\ children_{rhs})\}$ \\ \>\>\>\> if $lhs_j$ is a typed variable node, then \\ \>\>\>\>\> let \>$children_{lhs}$ be the list of children of $lhs_j$ \\ \>\>\>\>\> \>$lhr_{g(j)}$ be the single element in $Mapping(lhs_j)$ \\ \>\>\>\>\> \>$children_{rhs}$ be the list of children of $lhr_{g(j)}$ \\ \>\>\>\>\> Make $Matches=\{\{(lhs_j,lhr_{g(j)})\}\cup m\cup m_j\ \mid\ m\in Matches$ \\ \>\>\>\>\>\>\>\>\> $and\ m_j\in$ Match$(children_{lhs},\ children_{rhs})\}$ \\ \>\>\>\> if $lhs_j$ is a non-typed variable node, then \\ \>\>\>\>\> let \>$children_{lhs}$ be the list of children of $lhs_j$ \\ \>\>\>\>\> \> $sl$ be the number of nodes in $children_{lhs}$ \\ \>\>\>\>\>\> $DESC_s$ be the set of s-size lists given by: \\ \>\>\>\>\>\>\>\> $DESC_s=\{[dr_1,dr_2,...,dr_s]\ \mid\ $ \\ \>\>\>\>\>\>\>\>\>\> for every $1 \leq k \leq s$, $dr_k$ is a descendant \\ \>\>\>\>\>\>\>\>\>\>\>\> of some node in
$Mapping(lhs_j)\}$\footnote{It is not necessary for $dr_k$ to be a proper descendant, i.e., $dr_k$ may be a node in $Mapping(lhs_j)$}\\ \>\>\>\>\>\>\>\>\>\> for every $1 < k \leq s$, $dr_k$ is {\it to the right of} $dr_{k-1}$\footnote{Recall that a node $n$ is to the right of a node $m$ if $n$ and $m$ are not descendants of each other, and all the leaves dominated by $n$ are to the right of the leaves dominated by $m$.}.\\ \>\>\>\>\>\> For every list $Desc=[dr_1,dr_2,...,dr_s] \in DESC_s$ do: \\ \>\>\>\>\>\>\> Let Tree-Material be the list of subtrees dominated \\ \>\>\>\>\>\>\>\>\> by the nodes in $Mapping(lhs_j)$, but, with the \\ \>\>\>\>\>\>\>\>\> subtrees dominated by the nodes in $DESC_s$ \\ \>\>\>\>\>\>\>\>\> cut out from these trees \\ \>\>\>\>\>\>\> Make $Matches=\{\{(lhs_j,\ Tree-Material)\}\ \cup m\cup m_j\ \mid$ \\ \>\>\>\>\>\>\>\>\> $m\in Matches\ and\ m_j\in$ Match$(children_{lhs},\ Desc)\}$ \\ \>\>\> Make $MATCHES\ =\ MATCHES\ \cup\ Matches$ \\ \>\> Return $MATCHES$ \end{tabbing} Finally we can define the process of structurally matching {\bf lhs} to {\bf inp} as the evaluation of Match([root({\bf lhs})], [root({\bf inp})]). If the result is an empty set, the matching failed; otherwise the resulting set is the set of possible matches that will be used for generating the new trees (after being pruned by the feature equation matching). \subsection{Output Generation} \label{output-gen} Although nothing has yet been said about the feature equations, which are the subject of the next subsection, we assume that only matches that meet the additional constraints imposed by feature equations are considered for output. If no structural match survives feature equation checking, the matching has failed. If the process of matching {\bf lhs} to {\bf inp} fails, there are two alternative behaviors according to the value of a parameter\footnote{The parameter is accessible at the Lisp interface by the name {\it XTAG::*metarules-copy-unmatched-trees*}}.
If the parameter is set to false, which is the {\it default} value, no output is generated. On the other hand, if it is set to true, then the {\bf inp} tree itself is copied to the output. If the process of matching succeeds, as many trees will be generated in the output as there are possible matches obtained in the process. For a given match, the output tree is generated by substituting, in the {\bf rhs} tree of the metarule, the occurrences of variables by the material to which they have been instantiated in the match. The case of the typed variable is simple. The name of the variable is just substituted by the name of the node to which it has been instantiated from {\bf inp}. A very important detail is that the marker (foot, substitution, head, NA, or none) at the output tree node comes from what is specified at the {\bf rhs} node, which can be different from the marker at the variable node in {\bf lhs} and from that of the associated node from {\bf inp}. The case of the non-typed variable, not surprisingly, is not so simple. In the output tree, this node will be substituted by the subtree list that was associated to this node, in the same order, attaching to the parent of this non-typed variable node. But remember that some subtrees may have been removed from some of the trees in this list, perhaps entire elements of this list, due to the effect of the children of the metavariable in {\bf lhs}. It is a requirement that any occurrence of a non-typed variable node in the {\bf rhs} tree has exactly the same number of children as the unique occurrence of this non-typed variable node in {\bf lhs}. Hence, when generating the output tree, the subtrees at {\bf rhs} will be inserted exactly at the points where subtrees were removed during matching, in a positional, one-to-one correspondence. For feature equations in the output trees see the next subsection.
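The structural matching and output generation steps just described can be illustrated with a small sketch. The following is a hypothetical Python illustration, not the actual Lisp implementation: it covers only constant nodes and typed variables under a single positional child mapping, and omits non-typed variables, node markers, the enumeration of multiple mappings, and feature equation checking.

```python
# Hypothetical sketch of structural matching and output generation.
# Trees are (label, children) pairs; a label beginning with '?' is a
# typed variable.  This is NOT the actual XTAG Lisp implementation.

def match(lhs, inp, bindings=None):
    """Return a dict of variable bindings, or None on matching failure."""
    if bindings is None:
        bindings = {}
    p_label, p_children = lhs
    i_label, i_children = inp
    if p_label.startswith('?'):
        bindings[p_label] = i_label      # typed variable: record the node
    elif p_label != i_label:
        return None                      # constant node must match exactly
    if len(p_children) != len(i_children):
        return None                      # positional, one-to-one mapping
    for pc, ic in zip(p_children, i_children):
        if match(pc, ic, bindings) is None:
            return None
    return bindings

def generate(rhs, bindings):
    """Substitute bound variables into rhs to build the output tree."""
    label, children = rhs
    return (bindings.get(label, label),
            [generate(c, bindings) for c in children])

# Example: a toy metarule (the node labels here are invented).
lhs = ('S', [('?1', []), ('VP', [])])    # '?1' matches any subject node
inp = ('S', [('NP', []), ('VP', [])])
rhs = ('Sq', [('?1', []), ('VP', [])])   # rebuild under a new root label

bindings = match(lhs, inp)               # {'?1': 'NP'}
out = generate(rhs, bindings)            # ('Sq', [('NP', []), ('VP', [])])
```

On failure, `match` returns `None`, corresponding to the empty match set; the real system additionally enumerates all valid node mappings and prunes the surviving matches with feature equation checking.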
The comments at the output are the comments at the {\bf lhs} tree of the metarule followed by the comments at {\bf inp}, both parts introduced by appropriate headers, allowing the user to have a complete history of each tree. \subsection{Feature Matching} In the previous subsections we have considered only the aspects of a metarule involving the structural part of the XTAG trees. In a feature-based grammar such as XTAG, accounting for features is essential. A metarule is of little worth if it does not account for the proper change of feature equations\footnote{Notice that what is really important is not the features themselves, but the feature equations that relate the feature values of nodes of the same tree.} from the input to the output tree. The aspects that have to be considered here are: \begin{itemize} \item Which feature equations should be required to be present in {\bf inp} in order for the match to succeed. \item Which feature equations should be generated in the output tree as a function of the feature equations in the input tree. \end{itemize} Based on the possible combinations of these requirements we partition the feature equations into the following five classes\footnote{This classification is really a partition, i.e., no equation may be conceptually in more than one class at the same time.}: \begin{itemize} \item {\it Require \& Retain:} Feature equations in this class are required to be in {\bf inp} in order for matching to succeed. Upon matching, these equations will be copied to the output tree. To achieve this behavior, the equation must be placed in the {\bf lhs} tree of the metarule preceded by a plus character (e.g. $+V.t:<trans>=+$)\footnote{Commutativity of equations is accounted for in the system. Hence an equation $x=y$ can also be specified as $y=x$.
Associativity is not accounted for, and the need for it by a user is viewed as indicating misspecification in the input trees.} \item {\it Require \& Don't Copy:} The equation is required to be in {\bf inp} for matching, but should not be copied to the output tree. Those equations must be in {\bf lhs} preceded by a minus character (e.g. $-NP_1:<case>=acc$). \item {\it Optional \& Don't Copy:} The equation is not required for matching, but we have to make sure not to copy it to the output tree set of equations, regardless of whether or not it is present in {\bf inp}. Those equations must be in {\bf lhs} in raw form, i.e. preceded by neither a plus nor a minus character (e.g. $S_r.b:<perfect>=VP.t:<perfect>$). \item {\it Optional \& Retain:} The equation is not required for matching but, in case it is found in {\bf inp}, it must be copied to the output tree. This is the {\it default} case, and hence these equations should not be present in the metarule specification. \item {\it Add:} The equation is not required for matching but we want it to be put in the output tree anyway. These equations are placed in raw form in the {\bf rhs} (notice that in this case it is the right hand side). \end{itemize} Typed variables can be used in feature equations in both {\bf lhs} and {\bf rhs}. They are intended to represent the nodes of the input tree to which they have been instantiated. For each resulting match from the structural matching process the following is done: \begin{itemize} \item The (typed) variables in the equations at {\bf lhs} and {\bf rhs} are substituted by the names of the nodes they have been instantiated to. \item The requirements concerning feature equations are checked, according to the above rules. \item If the match survives feature equation checking, the proper output tree is generated, according to Section~\ref{output-gen} and to the rules described above for the feature equations.
\end{itemize} Finally, a new kind of metavariable, which is not used at the nodes, can be introduced in the feature equations part. These metavariables have the same form as the non-typed variables, i.e. a question mark followed by a number, and are used in place of feature values and feature names. Hence, if the equation $?NP_?.b:<?2> = ?3$ appears in {\bf lhs}, this means that all feature equations of {\bf inp} that match a bottom attribute of some $NP$ to any feature value (but not to a feature path) will not be copied to the output. \setcounter{topnumber}{4} \setcounter{bottomnumber}{4} \setcounter{totalnumber}{4} \section{Examples} Figure~\ref{wh-subj} shows a metarule for wh-movement of the subject. Among the trees to which it has been applied are the basic trees of the intransitive, transitive and ditransitive families (including prepositional complements), passive trees of the same families, and ergative trees. \begin{figure}[htb] \begin{center} \begin{tabular}{c@{\hspace{2em}}c} \framebox{\psfig{figure=fig/lhs-wh-subj.ps,height=4cm}} & \framebox{\psfig{figure=fig/rhs-wh-subj.ps,height=4cm}} \\ {lhs} & {rhs} \\ \end{tabular} \end{center} \caption{Metarule for wh-movement of subject} \label{wh-subj} \end{figure} Figure~\ref{wh-obj} shows a metarule for wh-movement of an NP in object position. Among the trees to which it has been applied are the basic and passive trees of the transitive and ditransitive families. \begin{figure}[!htb] \begin{center} \begin{tabular}{c@{\hspace{2em}}c} \framebox{\psfig{figure=fig/lhs-wh-obj.ps,height=4cm}} & \framebox{\psfig{figure=fig/rhs-wh-obj.ps,height=4cm}} \\ {lhs} & {rhs} \\ \end{tabular} \end{center} \caption{Metarule for wh-movement of object} \label{wh-obj} \end{figure} Figure~\ref{wh} shows a metarule for general wh-movement of an NP. It can be applied to generate trees with either the subject or an object NP moved.
We show in Figure~\ref{prep} the basic tree for the family Tnx0Vnx1Pnx2 and the three wh-trees generated by the application of the rule. \begin{figure}[!htb] \begin{center} \begin{tabular}{c@{\hspace{2em}}c} \framebox{\psfig{figure=fig/lhs-wh.ps,height=4cm}} & \framebox{\psfig{figure=fig/rhs-wh.ps,height=4cm}} \\ {lhs} & {rhs} \\ \end{tabular} \end{center} \caption{Metarule for general wh movement of an NP} \label{wh} \end{figure} \begin{figure}[!htb] \begin{center} \begin{tabular}{c@{\hspace{2em}}c} \framebox{\psfig{figure=fig/prep.ps,height=4cm}} & \framebox{\psfig{figure=fig/prep1.ps,height=4cm}} \\ {Tnx0Vnx1Pnx2} & {subject moved} \\ \\ \framebox{\psfig{figure=fig/prep2.ps,height=4cm}} & \framebox{\psfig{figure=fig/prep3.ps,height=4cm}} \\ {NP object moved} & {NP object moved from PP} \\ \end{tabular} \end{center} \caption{Application of wh-movement rule to Tnx0Vnx1Pnx2} \label{prep} \end{figure} \section{The Access to the Metarules through the XTAG Interface} We first describe the access to the metarules subsystem using buffers with single metarule applications. Then we proceed by describing the application of multiple metarules in what we call the parallel, sequential, and cumulative modes to input tree files. We have conceptually defined a metarule as an ordered pair of trees. In the implementation of the metarule subsystem it works the same way: a metarule is a buffer with two trees. The name of the metarule is the name of the buffer. The first tree that appears in the main window under the metarule buffer is the {\it left hand side}; the next, appearing below, is the {\it right hand side}\footnote{Although a buffer is intended to implement the concept of a set (not a sequence) of trees, we take advantage of the actual organization of the system to realize the concept of an (ordered) tree pair in the implementation.}.
The positional approach allows us to have naming freedom: the tree names are irrelevant\footnote{so that even if we want to have mnemonic names resembling their distinct character (left or right hand side), we have some naming flexibility to call them e.g. {\it lhs23} or {\it lhs-passive}, ...}. Since we can save buffers into text files, we can also talk about metarule files. The available options for applying a metarule which is in a buffer are: \begin{itemize} \item For applying it to a single input tree, click on the name of the tree in the main window, and choose the option {\it apply metarule to tree}. You will be prompted for the name of the metarule to apply to the tree, which should be, as we mentioned before, the name of the buffer that contains the metarule trees. The output trees will be generated at the end of the buffer that contains the input tree. The names of the trees depend on a Lisp parameter {\it *metarules-change-name*}. If the value of the parameter is {\bf false} --- the {\it default} value --- then the new trees will have the same name as the input; otherwise, they will be named after the input tree, followed by a dash (`-') and the name of the right hand side tree\footnote{the reason why we do not use the name of the metarule, i.e. the name of the buffer, is that in some forms of application the metarules do not carry individual names, as we will soon see is the case when a set of metarules from a file is applied.}. The value of the parameter can be changed by choosing {\it Tools} at the menu bar and then either {\it name mr output trees = input} or {\it append rhs name to mr output trees}. \item For applying it to all the trees of a buffer, click on the name of the buffer that contains the trees and proceed as above. The output will be a new buffer with all the output trees. The name of the new buffer will be the same as the input buffer prefixed by {\bf MR-}. The names of the trees follow the conventions above.
\end{itemize} The other options concern application to files (instead of buffers). Let us first define the concepts of parallel, sequential and cumulative application of metarules. One metarule file can contain more than one metarule. The first two trees, i.e., the first tree pair, form one metarule, which we call $mr_0$. Subsequent pairs in the sequence of trees define additional metarules: $mr_1$, $mr_2$, ..., $mr_n$. \begin{itemize} \item We say that a metarule file is applied in parallel to a tree (see Figure~\ref{parallel}) if each of the metarules is applied independently to the input, generating its particular output trees\footnote{remember that a metarule application generates as many output trees as the number of matches}. We generalize the concept to the application in parallel of a metarule file to a tree file (with possibly more than one tree), generating all the trees as if each metarule in the metarule file were applied to each tree in the input file. \begin{figure}[htb] \centerline{\psfig{figure=fig/parallel.ps,width=5in}} \caption{Parallel application of metarules} \label{parallel} \end{figure} \item We say that a metarule file $mr_0, mr_1, mr_2, ...,mr_n$ is applied in sequence to an input tree file (see Figure~\ref{sequential}) if we apply $mr_0$ to the trees of the input file, and for each $0<i\leq n$ apply metarule $mr_i$ to the trees generated as a result of the application of $mr_{i-1}$. \begin{figure}[htb] \centerline{\psfig{figure=fig/sequential.ps,width=5in}} \caption{Sequential application of metarules} \label{sequential} \end{figure} \item Finally, the cumulative application is similar to the sequential, except that the input trees at each stage are passed through to the output together with the newly generated ones (see Figure~\ref{cumulative}).
\begin{figure}[htb] \centerline{\psfig{figure=fig/cumulative.ps,width=5in}} \caption{Cumulative application of metarules} \label{cumulative} \end{figure} \end{itemize} Remember that in case of matching failure the output result is decided, as explained in Subsection~\ref{output-gen}, either to be empty or to be the input tree. The effect here of having the parameter set to copy the input is that, for the parallel application, the output will have as many copies of the input as there are matching failures. For the sequential case the decision applies at each level, and setting the parameter to copy, in a certain sense, guarantees that the `pipe' does not break. Due to its nature, and unlike the two other modes, the cumulative application is not affected by this parameter. The options for application of metarules to files are available by clicking on the menu item {\it Tools} and then choosing the appropriate function among: \begin{itemize} \item {\it Apply metarule to files:} You will be prompted for the metarule file name, which should contain one metarule\footnote{if it contains more than two trees, the additional trees are ignored}, and for input file names. Each input file {\bf inpfile} will be independently submitted to the application of the metarule, generating an output file with the name {\bf MR-inpfile}. \item {\it Apply metarules in parallel to files:} You will be prompted for the metarules file name, with one or more metarules, and for input file names. Each input file {\bf inpfile} will be independently submitted to the application of the metarules in parallel. For each parallel application to a file {\bf inpfile} an output file with the name {\bf MRP-inpfile} will be generated. \item {\it Apply metarules in sequence to files:} The interaction is as described for the application in parallel, except that the application of the metarules is in sequence and that the output files are prefixed by {\bf MRS-} instead of {\bf MRP-}.
\item {\it Apply metarules cumulatively to files:} The interaction is as described for the applications in parallel and in sequence, except that the mode of application is cumulative and that the output files are prefixed by {\bf MRC-}. \end{itemize} Finally, still under the {\it Tools} menu, we can change the setting of the parameter that controls the output result on matching failure (see Subsection~\ref{output-gen}) by choosing either {\it copy input on mr matching failure} or {\it no output on mr matching failure}. \chapter{Modifiers} \label{modifiers} This chapter covers various types of modifiers: adverbs, prepositions, adjectives, and noun modifiers in noun-noun compounds.\footnote{Relative clauses are discussed in Chapter~\ref{rel_clauses}.} These categories optionally modify other lexical items and phrases by adjoining onto them. In their modifier function these items are adjuncts; they are not part of the subcategorization frame of the items they modify. Examples of some of these modifiers are shown in (\ex{1})-(\ex{3}). \enumsentence{[$_{ADV}$ certainly $_{ADV}$], the October 13 sell-off didn't settle any stomachs . (WSJ)} \enumsentence{Mr. Bakes [$_{ADV}$ previously $_{ADV}$] had a turn at running Continental . (WSJ)} \enumsentence{most [$_{ADJ}$ foreign $_{ADJ}$] [$_{N}$ government $_{N}$] [$_{N}$ bond $_{N}$] [prices] rose [$_{PP}$ during the week $_{PP}$]. } The trees used for the various modifiers are quite similar in form. The modifier anchors the tree, and the root and foot nodes of the tree are of the category that the particular anchor modifies. Some modifiers, e.g. prepositions, select for their own arguments, and those are also included in the tree. The foot node may be to the right or the left of the anchoring modifier (and its arguments) depending on whether that modifier occurs before or after the category it modifies.
For example, almost all adjectives appear to the left of the nouns they modify, while prepositions appear to the right when modifying nouns. \section{Adjectives} \label{adj-modifier} In addition to being modifiers, adjectives in the XTAG English grammar can also anchor clauses (see Adjective Small Clauses in Chapter~\ref{small-clauses}). There is also one tree family, Intransitive with Adjective (Tnx0Vax1), that has an adjective as an argument and is used for sentences such as {\it Seth felt happy}. In that tree family the adjective substitutes into the tree rather than adjoining, as is the case for modifiers. \begin{figure}[htb] \centering \begin{tabular}{cc} {\psfig{figure=ps/modifiers-files/betaAn-features.ps,height=6.5in}} \end{tabular}\\ \caption {Standard Tree for Adjective modifying a Noun: $\beta$An} \label {An-tree} \end{figure} As modifiers, adjectives anchor the tree shown in Figure~\ref{An-tree}. The features of the N node onto which the $\beta$An tree adjoins are passed through to the top node of the resulting N. The null adjunction marker (NA) on the N foot node imposes right binary branching, such that each subsequent adjective must adjoin on top of the leftmost adjective that has already adjoined. Due to the NA constraint, a sequence of adjectives will have only one derivation in the XTAG grammar. The adjective's morphological features, such as superlative or comparative, are instantiated by the morphological analyzer. See Chapter~\ref{compars-chapter} for a description of how we handle comparatives. At this point, the treatment of adjectives in the XTAG English grammar does not include selectional or ordering restrictions. Consequently, any adjective can adjoin onto any noun and on top of any other adjective already modifying a noun. All of the modified noun phrases shown in (\ex{1})-(\ex{4}) currently parse with the same structure, shown for {\it colorless green ideas\/} in Figure \ref{colorless-green-adj}.
\enumsentence{big green bugs} \enumsentence{big green ideas} \enumsentence{colorless green ideas} \enumsentence{?green big ideas} \begin{figure}[htb] \centering \begin{tabular}{cc} {\psfig{figure=ps/modifiers-files/colorless-green-ideas.ps,height=3in}} \end{tabular}\\ \caption {Multiple adjectives modifying a noun} \label {colorless-green-adj} \end{figure} While (\ex{-2})-(\ex{0}) are all semantically anomalous, (\ex{0}) also suffers from an ordering problem that makes it seem ungrammatical in the absence of any licensing context. One could argue that the grammar should accept (\ex{-3})-(\ex{-1}) but not (\ex{0}). One of the future goals for the grammar is to develop a treatment of adjective ordering similar to that developed by \cite{ircs:det98} for determiners\footnote{See Chapter~\ref{det-comparitives} or \cite{ircs:det98} for details of the determiner analysis.}. An adequate implementation of ordering restrictions for adjectives would rule out (\ex{0}). \section{Noun-Noun Modifiers} \label{noun-modifier} Noun-noun compounding in the English XTAG grammar is very similar to adjective-noun modification. The noun modifier tree, shown in Figure~\ref{noun-compound-tree}, has essentially the same structure as the adjective modifier tree in Figure~\ref{An-tree}, except for the syntactic category label of the anchor. \begin{figure}[htb] \centering \begin{tabular}{c} {\psfig{figure=ps/modifiers-files/betaNn.ps,height=4.5in}} \end{tabular} \caption {Noun-noun compounding tree: $\beta$Nn (not all features displayed)} \label {noun-compound-tree} \end{figure} Noun compounds have a variety of scope possibilities not available to adjectives, as illustrated by the single bracketing possibility in (\ex{1}) and the two possibilities for (\ex{2}). This ambiguity is manifested in the XTAG grammar by the two possible adjunction sites in the noun-noun compound tree itself. 
Subsequent modifying nouns can adjoin either onto the N$_r$ node or onto the N anchor node of that tree, which results in exactly the two bracketing possibilities shown in (\ex{2}). This inherent structural ambiguity results in noun-noun compounds regularly having multiple derivations. However, the multiple derivations are not a defect in the grammar, because they are necessary to correctly represent the genuine ambiguity of these phrases. \enumsentence{[$_{N}$ big [$_{N}$ green design $_{N}$]$_{N}$]} \enumsentence{[$_{N}$ computer [$_{N}$ furniture design $_{N}$]$_{N}$]\\ \/~~[$_{N}$ [$_{N}$ computer furniture $_{N}$] design $_{N}$]} Noun-noun compounds have no restriction on number. XTAG allows the nouns to be either singular or plural, as in (\ex{1})-(\ex{3}). \enumsentence{Hyun is taking an algorithms course .} \enumsentence{waffles are in the frozen foods section .} \enumsentence{I enjoy the dog shows .} \section{Time Noun Phrases} \label{timenps} Although in general NPs cannot modify clauses or other NPs, there is a class of NPs, with meanings that relate to time, that can do so.\footnote{ There may be other classes of NPs, such as directional phrases ({\em north, south}, etc.), which behave similarly. We have not yet analyzed these phrases.} We call this class of NPs ``Time~NPs''. Time~NPs behave essentially like PPs. Like PPs, time~NPs can adjoin at four places: to the right of an NP, to the right and left of a VP, and to the left of an~S. Time~NPs may include determiners, as in {\em this week} in example (\ex{1}), or may be single lexical items, as in {\em today} in example (\ex{2}). Like other NPs, time~NPs can also include adjectives, as in example (\ex{6}).
\enumsentence{Elvis left the building \underline{this week}} \enumsentence{Elvis left the building \underline{today}} \enumsentence{It has no bearing on our work force \underline{today} (WSJ)} \enumsentence{The fire \underline{yesterday} claimed two lives} \enumsentence{\underline{Today} it has no bearing on our work force} \enumsentence{Michael \underline{late yesterday} announced a buy-back program} The XTAG analysis for time~NPs is fairly simple, requiring only the creation of the proper NP auxiliary trees. Only nouns that can be part of time~NPs will select the relevant auxiliary trees, and so only these types of NPs will behave like PPs under the XTAG analysis. Currently, about 60 words select Time~NP trees, but since these words can form NPs that include determiners and adjectives, a large number of phrases are covered by this class of modifiers. Corresponding to the four positions listed above, time~NPs can select one of the four trees shown in Figure~\ref{timenp-trees}. \begin{figure}[htb] \centering \begin{tabular}{ccccccc} {\psfig{figure=ps/timenp-files/betaNs.ps,height=1.5in}} & \hspace{.5in} & {\psfig{figure=ps/timenp-files/betaNvx.ps,height=1.5in}} & \hspace{.5in} & {\psfig{figure=ps/timenp-files/betavxN.ps,height=1.5in}} & \hspace{.5in} & {\psfig{figure=ps/timenp-files/betanxN.ps,height=1.5in}}\\ $\beta$Ns&&$\beta$Nvx&&$\beta$vxN&&$\beta$nxN\\ \end{tabular} \caption{Time Phrase Modifier trees: $\beta$Ns, $\beta$Nvx, $\beta$vxN, $\beta$nxN} \label{timenp-trees} \end{figure} Determiners can be added to time~NPs by adjunction in the same way that they are added to NPs in other positions. The trees in Figure~\ref{everymonth} show that the structures of examples (\ex{-5}) and (\ex{-4}) differ only in the adjunction of {\em this} to the time~NP in example (\ex{-5}).
\begin{figure}[htb] \centering \begin{tabular}{ccc} \psfig{figure=ps/timenp-files/elvis-thisweek.ps,height=3.5in} & \hspace{.5in} & \psfig{figure=ps/timenp-files/elvis-today.ps,height=3.5in} \\ \end{tabular}\\ \caption{Time~NPs with and without a determiner} \label{everymonth} \end{figure} The sentence \enumsentence{Esso said the Whiting field started production Tuesday (WSJ)} has (at least) two different interpretations, depending on whether {\em Tuesday} attaches to {\em said} or to {\em started}. Valid time~NP analyses are available for both these interpretations and are shown in Figure~\ref{esso}. \begin{figure}[htb] \centering \begin{tabular}{ccc} {\psfig{figure=ps/timenp-files/EssoSaidTuesday.ps,height=3.5in}} & \hspace{.5in} & {\psfig{figure=ps/timenp-files/EssoStartedTuesday.ps,height=3.5in}} \\ \end{tabular}\\ \caption {Time~NP trees: Two different attachments} \label{esso} \end{figure} Derived tree structures for examples (\ex{-4}) -- (\ex{-1}), which show the four possible time~NP positions, are shown in Figures~\ref{bearingtrees} and \ref{lateyesterday}. The derivation tree for example (\ex{-1}) is also shown in Figure~\ref{lateyesterday}.
\begin{figure}[htb] \centering \begin{tabular}{ccc} {\psfig{figure=ps/timenp-files/bearingENDtoday.ps,height=3.5in}} & {\psfig{figure=ps/timenp-files/thefireyesterday.ps,height=2.7in}} & \hspace*{-.55in} {\psfig{figure=ps/timenp-files/todaybearing.ps,height=3.5in}} \\ \end{tabular}\\ \caption {Time~NPs in different positions ($\beta$vxN, $\beta$nxN and $\beta$Ns)} \label {bearingtrees} \end{figure} \begin{figure}[htb] \centering \begin{tabular}{cc} \psfig{figure=ps/timenp-files/lateyesterday.ps,height=4in} & \hspace{-1in} \raisebox{2.5in}{\psfig{figure=ps/timenp-files/DERIVlateyesterday.ps,height=1.25in}} \\ \end{tabular} \caption {Time~NPs: Derived tree and Derivation ($\beta$Nvx position)} \label{lateyesterday} \end{figure} \section{Prepositions} \label{prep-modifier} There are three basic types of prepositional phrases, and three places at which they can adjoin. The three types of prepositional phrases are: Preposition with NP Complement, Preposition with Sentential Complement, and Exhaustive Preposition. The three places are to the right of an NP, to the right of a VP, and to the left of an S. Each of the three types of PP can adjoin at each of these three places, for a total of nine PP modifier trees. Table \ref{prep-summary} gives the tree family names for the various combinations of type and location. (See Section \ref{post-PP} for discussion of the $\beta$spuPnx, which handles post-sentential comma-separated PPs.) 
\begin{table}[htb] \centering \begin{tabular}{|l||c|c|c|} \hline \multicolumn{1}{|c||}{}&\multicolumn{3}{c|}{position and category modified}\\ \cline{2-4} \multicolumn{1}{|c||}{}&pre-sentential&post-NP&post-VP\\ \multicolumn{1}{|c||}{Complement type}&S modifier&NP modifier&VP modifier\\ \hline \hline S-complement&$\beta$Pss&$\beta$nxPs&$\beta$vxPs\\ \hline NP-complement&$\beta$Pnxs&$\beta$nxPnx&$\beta$vxPnx\\ \hline no complement&$\beta$Ps&$\beta$nxP&$\beta$vxP\\ (exhaustive)&&&\\ \hline \end{tabular} \caption{Preposition Anchored Modifiers} \label{prep-summary} \end{table} The subset of preposition anchored modifier trees in Figure~\ref{prep-trees} illustrates the locations and the four PP types. Example sentences using the trees in Figure \ref{prep-trees} are shown in (\ex{1})-(\ex{4}). There are also more trees with multi-word prepositions as anchors. Examples of these are: {\it ahead of}, {\it contrary to}, {\it at variance with} and {\it as recently as}. \begin{figure}[htb] \centering \begin{tabular}{ccccccc} {\psfig{figure=ps/modifiers-files/betaPss.ps,height=1.5in}} & \hspace{.5in} & {\psfig{figure=ps/modifiers-files/betanxPnx.ps,height=1.5in}} & \hspace{.5in} & {\psfig{figure=ps/modifiers-files/betavxP.ps,height=1.5in}} & \hspace{.5in} & {\psfig{figure=ps/betavxPPnx.ps,height=1.75in}} \\ $\beta$Pss&&$\beta$nxPnx&&$\beta$vxP&&$\beta$vxPPnx\\ \end{tabular}\\ \caption {Selected Prepositional Phrase Modifier trees: $\beta$Pss, $\beta$nxPnx, $\beta$vxP and $\beta$vxPPnx} \label {prep-trees} \end{figure} \enumsentence{[$_{PP}$ with Clove healthy $_{PP}$], the veterinarian's bill will be more affordable . ($\beta$Pss\footnote{{\it Clove healthy} is an adjective small clause})} \enumsentence{The frisbee [$_{PP}$ in the brambles $_{PP}$] was hidden . ($\beta$nxPnx)} \enumsentence{Clove played frisbee [$_{PP}$ outside $_{PP}$] . ($\beta$vxP)} \enumsentence{Clove played frisbee [$_{PP}$ outside of the house $_{PP}$] . 
($\beta$vxPPnx)} Prepositions that take NP complements assign accusative case to those complements (see section~\ref{prep-case} for details). Most prepositions take NP complements. Subordinating conjunctions are analyzed in XTAG as Preps (see Section~\ref{adjunct-cls} for details). Additionally, a few non-conjunction prepositions take S complements (see Section~\ref{NPA} for details). \section{Adverbs} \label{adv-modifier} In the English XTAG grammar, VP and S-modifying adverbs anchor the auxiliary trees $\beta$ARBs, $\beta$sARB, $\beta$vxARB and $\beta$ARBvx,\footnote{In the naming conventions for the XTAG trees, ARB is used for {\underline a}dve{\underline {rb}}s. Because the letters in A, Ad, and Adv are all used for other parts of speech ({\underline a}djective, {\underline d}eterminer and {\underline v}erb), ARB was chosen to eliminate ambiguity. Appendix~\ref{tree-naming} contains a full explanation of naming conventions.} allowing pre and post modification of S's and VP's. Besides the VP and S-modifying adverbs, the grammar includes adverbs that modify other categories. Examples of adverbs modifying an adjective, an adverb, a PP, an NP, and a determiner are shown in (\ex{1})-(\ex{8}). (See Sections \ref{par-adverb} and \ref{post-adverb} for discussion of the $\beta$puARBpuvx and $\beta$spuARB, which handle pre-verbal parenthetical adverbs and post-sentential comma-separated adverbs.) \begin{itemize} \item{Modifying an adjective} \enumsentence{{\bf extremely} good} \enumsentence{{\bf rather} tall} \enumsentence{rich {\bf enough}} \item{Modifying an adverb} \enumsentence{oddly {\bf enough}} \enumsentence{{\bf very} well} \item{Modifying a PP} \enumsentence{{\bf right} through the wall} \item{Modifying a NP} \enumsentence{{\bf quite} some time} \item{Modifying a determiner} \enumsentence{{\bf exactly} five men} \end{itemize} XTAG has separate trees for each of the modified categories and for pre and post modification where needed. 
The kind of treatment given to adverbs here is very much in line with the base-generation approach proposed by \cite{Ernst84}, which assumes that all positions where an adverb can occur are base-generated, and that the semantics of each adverb specifies the range of possible positions it may occupy. While the relevant semantic features of the adverbs are not currently implemented, their implementation is scheduled for future work. The trees for adverb anchored modifiers are very similar in form to the adjective anchored modifier trees. Examples of two of the basic adverb modifier trees are shown in Figure~\ref{adv-trees}. \begin{figure}[hb] \centering \begin{tabular}{ccc} {\psfig{figure=ps/modifiers-files/betaARBs.ps,height=5in}}& \hspace*{1.0in}& {\psfig{figure=ps/modifiers-files/betavxARB.ps,height=4.5in}}\\ (a)&&(b)\\ \end{tabular} \caption {Adverb Trees for pre-modification of S: $\beta$ARBs (a) and post-modification of a VP: $\beta$vxARB (b)} \label{adv-trees} \end{figure} \newpage Like the adjective anchored trees, these trees also have the NA constraint on the foot node, to restrict the number of derivations produced for a sequence of adverbs. Features of the modified category are passed from the foot node to the root node, correctly reflecting that these properties are unaffected by the adjunction of an adverb. A summary of the categories modified and the position of adverbs is given in Table \ref{adv-summary}.
\begin{table}[h] \centering \begin{tabular}{|c||c|c|} \hline &\multicolumn{2}{c|}{Position with respect to item modified}\\ \cline{2-3} Category Modified&Pre&Post\\ \hline \hline S&$\beta$ARBs&$\beta$sARB\\ \hline VP&$\beta$ARBvx,$\beta$puARBpuvx&$\beta$vxARB\\ \hline A&$\beta$ARBa&$\beta$aARB\\ \hline PP&$\beta$ARBpx&$\beta$pxARB\\ \hline ADV&$\beta$ARBarb&$\beta$arbARB\\ \hline NP&$\beta$ARBnx&\\ \hline Det&$\beta$ARBd&\\ \hline \end{tabular} \caption{Simple Adverb Anchored Modifiers} \label{adv-summary} \end{table} In the English XTAG grammar, no traces are posited for wh-adverbs, in line with the base-generation approach (\cite{Ernst84}) for the various positions of adverbs. Since convincing arguments have been made against traces for adjuncts of other types (e.g. \cite{Baltin}), and since the reasons for wanting traces do not seem to apply to adjuncts, we make the general assumption in our grammar that adjuncts do not leave traces. Sentence initial wh-adverbs select the same auxiliary tree used for other sentence initial adverbs ($\beta$ARBs) with the feature {\bf $<$wh$>$=+}. Under this treatment, the derived tree for the sentence {\it How did you fall?} is as in Figure~\ref{how-did-you-fall}, with no trace for the adverb. \begin{figure}[htbp] \centering \begin{tabular}{c} {\psfig{figure=ps/modifiers-files/how-did-you-fall.ps,height=3.5in}} \end{tabular} \caption {Derived tree for {\it How did you fall?}} \label {how-did-you-fall} \end{figure} \begin{figure}[htbp] \centering \begin{tabular}{c} {\psfig{figure=ps/modifiers-files/betaARBarbs.ps,height=6.0in}} \end{tabular} \caption {Complex adverb phrase modifier: $\beta$ARBarbs} \label{weird-adv-tree} \end{figure} There is one more adverb modifier tree in the grammar which is not included in Table \ref{adv-summary}. This tree, shown in Figure~\ref{weird-adv-tree}, has a complex adverb phrase and is used for wh+ two-adverb phrases that occur sentence initially, such as in sentence (\ex{1}).
Since {\it how} is the only wh+ adverb, it is the only adverb that can anchor this tree. \enumsentence{how quickly did Srini fix the problem ?} Focus adverbs such as {\it only}, {\it even}, {\it just} and {\it at least} are also handled by the system. Since the syntax allows focus adverbs to appear in practically any position, these adverbs select most of the trees listed in Table \ref{adv-summary}. It is left up to the semantics or pragmatics to decide the correct scope for the focus adverb for a given instance. With respect to modification at different levels of a noun phrase, focus adverbs can modify either cardinal determiners or noun-cardinal noun phrases, but cannot modify at the level of the noun. The tree for adverbial modification of noun phrases is shown in Figure~\ref{other-adv-trees}(a). In addition to {\it at least}, the system handles the other two-word adverbs, {\it at most} and {\it up to}, and the three-word {\it as-as} adverb constructions, where an adjective substitutes between the two occurrences of {\it as}. An example of a three-word {\it as-as} adverb is {\it as little as}. Except for the ability of {\it at least} to modify many different types of constituents as noted above, the multi-word adverbs are restricted to modifying cardinal determiners. Example sentences using the trees in Figure~\ref{other-adv-trees} are shown in (\ex{1})-(\ex{5}).
\begin{itemize} \item{Focus Adverb modifying an NP} \enumsentence{{\bf only} a member of our crazy family could pull off that kind of a stunt .} \enumsentence{{\bf even} a flying saucer sighting would seem interesting in comparison \\ with your story .} \enumsentence{The report includes a proposal for {\bf at least} a partial impasse in negotiations .} \item{Multi-word adverbs modifying cardinal determiners} \enumsentence{{\bf at most} ten people came to the party .} \enumsentence{They gave monetary gifts of {\bf as little as} five dollars .} \end{itemize} \begin{figure}[htb] \centering \begin{tabular}{ccccccc} {\psfig{figure=ps/modifiers-files/betaARBnx.ps,height=1.1in}} & \hspace{.5in} & {\psfig{figure=ps/modifiers-files/betaPaPd.ps,height=1.75in}} & \hspace{.5in} & {\psfig{figure=ps/modifiers-files/betaPARBd.ps,height=1.75in}}\\ $\beta$ARBnx&&$\beta$PaPd&&$\beta$PARBd&&\\ (a)&&(b)&&(c)\\ \end{tabular}\\ \caption {Selected Focus and Multi-word Adverb Modifier trees: $\beta$ARBnx (a), $\beta$PaPd (b) and $\beta$PARBd (c)} \label {other-adv-trees} \end{figure} \input{multiword-advs} \section{Locative Adverbial Phrases} \label{locatives} Locative adverbial phrases are multi-word adverbial modifiers whose meanings relate to spatial location. Locatives consist of a locative adverb (such as {\it ahead} or {\it downstream}) preceded by an NP, an adverb, or nothing, as in Examples (\ex{1})--(\ex{3}) respectively. The modifier as a whole describes a position relative to one previously specified in the discourse. The nature of the relation, which is usually a direction, is specified by the anchoring locative adverb ({\em behind, east}). If an NP or a second adverb is present in the phrase, it specifies the degree of the relation (for example: {\it three city blocks, many meters,} and {\it far}).
\enumsentence{The accident {\it three blocks ahead} stopped traffic} \enumsentence{The ship sank {\it far offshore}} \enumsentence{The trouble {\it ahead} distresses me} Locatives can modify NPs, VPs and Ss. They modify NPs only by right-adjoining post-positively, as in Example (\ex{-2}). Post-positive is also the more common position when a locative modifies either of the other categories. Locatives pre-modify VPs only when separated by balanced punctuation (commas or dashes). The trees locatives select when modifying NPs are shown in Figure~\ref{loc-np-trees}. \begin{figure}[htb] \centering \begin{tabular}{ccc} \psfig{figure=ps/modifiers-files/betanxnxARB.ps,height=4.0cm} & \hspace{0.5in} & \psfig{figure=ps/modifiers-files/betanxARB.ps,height=4.0cm} \end{tabular} \caption{Locative Modifier Trees: $\beta$nxnxARB, $\beta$nxARB} \label{loc-np-trees} \end{figure} When the locative phrase consists of only the anchoring locative adverb, as in Example (\ex{-1}), it uses the $\beta$nxARB tree, shown in Figure~\ref{loc-np-trees}, and its VP analogue, $\beta$vxARB. In addition, these are the trees selected when the locative anchor is modified by an adverb expressing degree, as in Example \ex{-1}. The degree adverb adjoins on to the anchor using the $\beta$ARBarb tree, which is described in Section~\ref{adv-modifier}. Figure~\ref{toupees} shows an example of these trees in action. Though there is a tree for a pre-sentential locative phrase, $\beta$nxARBs, there is no corresponding post-sentential tree, as it is highly debatable whether the post-sentential version actually has the entire sentence or just the preceding verb phrase as its scope. Thus, in accordance with XTAG practice, which considers ambiguous post-sentential modifiers to be VP-modifiers rather than S-modifiers, there is only a $\beta$vxnxARB tree, as shown in Figure~\ref{toupees}. 
\begin{figure}[htb] \centering \begin{tabular}{ccc} \psfig{figure=ps/modifiers-files/toupee_np.ps,height=7.0cm} & \hspace{0.5in} & \psfig{figure=ps/modifiers-files/toupee_ad.ps,height=7.0cm} \end{tabular} \caption{Locative Phrases featuring NP and Adverb Degree Specifications} \label{toupees} \end{figure} One possible analysis of locative phrases with NPs might maintain that the NP is the head, with the locative adverb modifying the NP. This is initially attractive because of the similarity to time NPs, which also feature NPs that can modify clauses. This analysis seems insufficient, however, in light of the fact that virtually any NP can occur in locative phrases, as in Example (\ex{1}). Therefore, in the XTAG analysis the locative adverb anchors the locative phrase trees. A complete summary of all trees selected by locatives is contained in Table~\ref{loc-summary}. Twenty-six adverbs\footnote{Though nearly all of these adverbs are spatial in nature, this number also includes a few temporal adverbs, such as {\it ago}, that also select these trees.} select the locative trees. \enumsentence{I left my toupee and putter {\it three holes back}} \begin{table}[htb] \centering \begin{tabular}{|l||c|c|} \hline \multicolumn{1}{|c||}{}& \multicolumn{2}{|c|}{Degree Phrase Type}\\ \cline{2-3} \multicolumn{1}{|c||}{Category Modified}&NP&Ad/None\\ \hline \hline NP&$\beta$nxnxARB&$\beta$nxARB\\ \hline VP (post)&$\beta$vxnxARB&$\beta$vxARB\\ \hline VP (pre, punct-separated)&$\beta$punxARBpuvx&$\beta$puARBpuvx\\ \hline S&$\beta$nxARBs&$\beta$ARBs\\ \hline \end{tabular} \caption{Locative Modifiers} \label{loc-summary} \end{table} \chapter{Overview of the XTAG System} \label{overview} This chapter focuses on the various components that comprise the parser and English grammar in the XTAG system.
Persons interested only in the linguistic analyses in the grammar may skip this chapter without loss of continuity, although a quick glance at the tagset used in XTAG and the set of non-terminal labels used will be useful. We may occasionally refer back to the various components mentioned in this chapter. \section{System Description} Figure~{\ref{flowchart}} shows the overall flow of the system when parsing a sentence; a summary of each component is presented in Table~\ref{sys-table}. At the heart of the system is a parser for lexicalized TAGs (\cite{schabesjoshi88,schabes90}) which produces all legitimate parses for the sentence. The parser has two phases: {\bf Tree Selection} and {\bf Tree Grafting}. \begin{figure}[t] \hspace{0.35in} \centering {\psfig{figure=ps/flowchart.eps,height=3.0in,angle=270}} \caption[XTAG system diagram]{Overview of XTAG system } \label{flowchart} \end{figure} \begin{table}[ht] \small \centering \begin{tabular}{|l|l|} \hline Component & Details \\ \hline Morphological & Consists of approximately 317,000 inflected items \\ Analyzer and & derived from over 90,000 stems. \\ Morph Database & Entries are indexed on the inflected form and return \\ & the root form, POS, and inflectional information.\\ \hline POS Tagger & Wall Street Journal-trained trigram tagger (\cite{kwc88}) \\ and Lex Prob & extended to output N-best POS sequences \\ Database & (\cite{soong90}). Decreases the time to parse \\ &a sentence by an average of 93\%. \\\hline Syntactic & More than 30,000 entries. \\ Database & Each entry consists of: the uninflected form of the word, \\ & its POS, the list of trees or tree-families associated with \\ & the word, and a list of feature equations that capture \\ &lexical idiosyncrasies. \\ \hline Tree Database & 1094 trees, divided into 52 tree families and 218 individual \\ & trees.
Tree families represent subcategorization frames; \\ & the trees in a tree family would be related to each other \\ & transformationally in a movement-based approach. \\ \hline X-Interface & Menu-based facility for creating and modifying tree files. \\ & User controlled parser parameters: parser's start category, \\ & enable/disable/retry on failure for POS tagger. \\ & Storage/retrieval facilities for elementary and parsed trees.\\ & Graphical displays of tree and feature data structures. \\ & Hand combination of trees by adjunction or substitution \\ & for grammar development. \\ & Ability to manually assign POS tag \\ & and/or Supertag before parsing \\ \hline \end{tabular} \caption{System Summary} \label{sys-table} \end{table} \subsection{Tree Selection} Since we are working with lexicalized TAGs, each word in the sentence selects at least one tree. The advantage of a lexicalized formalism like LTAGs is that rather than parsing with all the trees in the grammar, we can parse with only the trees selected by the words in the input sentence. In the XTAG system, the selection of trees by the words is done in several steps. Each step attempts to reduce ambiguity, i.e. reduce the number of trees selected by the words in the sentence. \begin{description} \item[Morphological Analysis and POS Tagging] The input sentence is first submitted to the {\bf Morphological Analyzer} and the {\bf Tagger}. The morphological analyzer~(\cite{karp92}) consists of a disk-based database (a compiled version of the derivational rules) which is used to map an inflected word into its stem, part of speech and feature equations corresponding to inflectional information. These features are inserted at the anchor node of the tree eventually selected by the stem. The POS Tagger can be disabled in which case only information from the morphological analyzer is used. 
The morphology data was originally extracted from the Collins English Dictionary (\cite{ced79}) and Oxford Advanced Learner's Dictionary (\cite{oald74}) available through ACL-DCI (\cite{liberman89}), and then cleaned up and augmented by hand (\cite{karp92}). \item[POS Blender] The outputs from the morphological analyzer and the POS tagger go into the {\bf POS Blender}, which uses the output of the POS tagger as a filter on the output of the morphological analyzer. Any words that are not found in the morphological database are assigned the POS given by the tagger. \item[Syntactic Database] The syntactic database contains the mapping between particular stem(s) and the tree templates or tree-families stored in the {\bf Tree Database} (see Table~\ref{sys-table}). The syntactic database also contains a list of feature equations that capture lexical idiosyncrasies. The output of the POS Blender is used to search the {\bf Syntactic Database}, producing a set of lexicalized trees in which the feature equations associated with the word(s) in the syntactic database are unified with the feature equations associated with the trees. Note that the features in the syntactic database can be assigned to any node in the tree and not just to the anchor node. The syntactic database entries were originally extracted from the Oxford Advanced Learner's Dictionary (\cite{oald74}) and Oxford Dictionary for Contemporary Idiomatic English (\cite{cie75}) available through ACL-DCI (\cite{liberman89}), and then modified and augmented by hand (\cite{EgediMartin94}). There are more than 31,000 syntactic database entries.\footnote{This number does not include trees assigned by default based on the part-of-speech of the word.} Selected entries from this database are shown in Table~\ref{syn-entries}. \item[Default Assignment] For words that are not found in the syntactic database, default trees and tree-families are assigned based on their POS tag.
\item[Filters] Some of the lexicalized trees chosen in previous stages can be eliminated in order to reduce ambiguity. Two methods are currently used: structural filters, which eliminate trees that have impossible spans over the input sentence, and a statistical filter based on unigram probabilities of non-lexicalized trees (derived from a hand-corrected set of approximately 6000 parsed sentences). These methods speed up parsing by approximately 87\%. \item[Supertagging] Before parsing, one can make use of an optional {\em supertagging} step. This step uses statistical disambiguation to assign a unique elementary tree (or {\em supertag}) to each word in the sentence. These assignments can then be hand-corrected. These supertags are used as a filter on the tree assignments made so far. More information on supertagging can be found in (\cite{srini97diss,srini97iwpt}). \end{description} \begin{table}[htb] \begin{verbatim}
<<INDEX>>porousness<<ENTRY>>porousness<<POS>>N <<TREES>>^BNXN ^BN ^CNn
<<FEATURES>>#N_card- #N_const- #N_decreas- #N_definite- #N_gen-
#N_quan- #N_refl-
<<INDEX>>coo<<ENTRY>>coo<<POS>>V<<FAMILY>>Tnx0V
<<INDEX>>engross<<ENTRY>>engross<<POS>>V<<FAMILY>>Tnx0Vnx1
<<FEATURES>>#TRANS+
<<INDEX>>forbear<<ENTRY>>forbear<<POS>>V<<FAMILY>>Tnx0Vs1
<<FEATURES>>#S1_WH- #S1_inf_for_nil
<<INDEX>>have<<ENTRY>>have<<POS>>V<<ENTRY>>out<<POS>>PL
<<FAMILY>>Tnx0Vplnx1
\end{verbatim} \caption{Example Syntactic Database Entries.} \label{syn-entries} \end{table} \subsection{Tree Database} \label{tree-db} The {\bf Tree Database} contains the tree templates that are lexicalized by following the various steps given above. The lexical items are inserted into distinguished nodes in the tree template called the {\em anchor nodes}. The part of speech of each word in the sentence corresponds to the label of the anchor node of the trees. Hence the tagset used by the POS Tagger corresponds exactly to the labels of the anchor nodes in the trees.
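To make the lexicalization step concrete, the sketch below mimics the syntactic-database lookup, default assignment, and anchor-node insertion in miniature. This is a minimal illustration only: the dictionaries, default table, and function names are hypothetical, not the actual XTAG data structures (the two sample entries are taken from Table~\ref{syn-entries}).

```python
# A minimal sketch of lexicalized tree selection (hypothetical data
# structures; not the actual XTAG implementation).

# Each syntactic-database entry maps a (stem, POS) pair to tree
# families plus feature equations capturing lexical idiosyncrasies.
SYN_DB = {
    ("engross", "V"): {"families": ["Tnx0Vnx1"], "features": {"TRANS": "+"}},
    ("coo", "V"): {"families": ["Tnx0V"], "features": {}},
}

# Default tree assignment by POS tag, for words absent from the
# database (the mapping here is invented for illustration).
DEFAULTS = {"N": ["BNXN"], "V": ["Tnx0V"]}

def select_trees(stem, pos):
    """Return (tree names, feature equations) for a stem/POS pair."""
    entry = SYN_DB.get((stem, pos))
    if entry is not None:
        return entry["families"], entry["features"]
    return DEFAULTS.get(pos, []), {}

def lexicalize(tree_name, stem, features):
    """Insert the lexical item at the anchor node of a tree template."""
    return {"tree": tree_name, "anchor": stem, "features": dict(features)}

trees, feats = select_trees("engross", "V")
lexicalized = [lexicalize(t, "engross", feats) for t in trees]
```

In the real system the unified feature equations may of course target any node of the template, not just the anchor, as noted above.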
The tagset used in the XTAG system is given in Table~\ref{tagset}. The tree templates are subdivided into tree families (for verbs and other predicates), and tree files, which are simply collections of trees for lexical items like prepositions, determiners, etc% \footnote{ The nonterminals in the tree database are {\tt A, AP, Ad, AdvP, Comp, Conj, D, N, NP, P, PP, Punct, S, V, VP}.}% . \subsection{Tree Grafting} Once a particular set of lexicalized trees for the sentence has been selected, XTAG uses an Earley-style predictive left-to-right parsing algorithm for LTAGs (\cite{schabesjoshi88,schabes90}) to find all derivations for the sentence. The derivation trees and the associated derived trees can be viewed using the X-interface (see Table~\ref{sys-table}). The X-interface can also be used to save particular derivations to disk. The output of the parser for the sentence {\it I had a map yesterday} is illustrated in Figure~\ref{sentence}. The parse tree\footnote{The feature structures associated with each node of the parse tree are not shown here.} represents the surface constituent structure, while the derivation tree represents the derivation history of the parse. The nodes of the derivation tree are the tree names anchored by the lexical items\footnote{Appendix \ref{tree-naming} explains the conventions used in naming the trees.}. The composition operation is indicated by the nature of the arcs: a dashed line is used for substitution and a bold line for adjunction. The number beside each tree name is the address of the node at which the operation took place. The derivation tree can also be interpreted as a dependency graph with unlabeled arcs between words of the sentence.
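Reading off those unlabeled dependency arcs is a simple walk over the derivation tree. The sketch below assumes a toy nested-dictionary encoding of the derivation for {\it I had a map yesterday}; the representation and the attachment structure are hypothetical simplifications for illustration, not XTAG's actual output format.

```python
# Reading a derivation tree as an unlabeled dependency graph
# (hypothetical node representation; not XTAG's actual output format).
# Each node carries the anchoring word; children attached by
# substitution or adjunction depend on their parent's anchor.

def dependencies(node):
    """Yield (head_word, dependent_word) arcs from a derivation tree."""
    for child in node.get("children", []):
        yield (node["word"], child["word"])
        yield from dependencies(child)

# Assumed derivation structure for "I had a map yesterday":
# 'had' anchors the initial tree; the other words attach to it or to
# each other by substitution/adjunction.
derivation = {
    "word": "had",
    "children": [
        {"word": "I"},
        {"word": "map", "children": [{"word": "a"}]},
        {"word": "yesterday"},
    ],
}

arcs = list(dependencies(derivation))
# arcs: [('had','I'), ('had','map'), ('map','a'), ('had','yesterday')]
```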
\begin{figure}[htb] \centering \begin{tabular}{cc} {{\psfig{figure=ps/overview-files/derived.ps,height=3.0in}}} & {{\psfig{figure=ps/overview-files/derivation.ps,height=2.0in,width=2.7in}}} \\ Parse Tree & Derivation Tree \\ \end{tabular} \caption{Output Structures from the Parser} \label{sentence} \end{figure} \begin{table*}[ht] \small \centering \begin{tabular}{|l|l|} \hline Part of Speech & Description \\ \hline A & Adjective \\ \hline Ad & Adverb \\ \hline Comp & Complementizer \\ \hline D & Determiner \\ \hline G & Genitive Noun \\ \hline I & Interjection \\ \hline N & Noun \\ \hline P & Preposition \\ \hline PL & Particle \\ \hline Punct & Punctuation \\ \hline V & Verb \\ \hline \end{tabular} \caption{XTAG tagset} \label{tagset} \end{table*} \subsection{The Grammar Development Environment} Working with and developing a large grammar is a challenging process, and the importance of having good visualization tools cannot be over-emphasized. Currently, the XTAG system has X-windows-based tools for viewing and updating the morphological and syntactic databases (\cite{karp92,EgediMartin94}). These are available in both ASCII and binary-encoded database format. The ASCII format is well-suited for various UNIX utilities (awk, sed, grep), while the database format is used for fast access during program execution. However, even the ASCII-formatted representation is not well suited to human readability. An X-windows interface for the databases allows users to easily examine them. Searching for specific information on certain fields of the syntactic database is also available. Also, the interface allows a user to insert, delete and update any information in the databases. Figure~\ref{morphsyn-tool}(a) shows the interface for the morphology database and Figure~\ref{morphsyn-tool}(b) shows the interface for the syntactic database.
\begin{figure}[htb] \begin{tabular}{cc} {\psfig{figure=ps/morph.ps,height=3.0in}}& {\psfig{figure=ps/syn.ps,height=3.0in,width=2.0in}}\\ (a) Morphology database&(b) Syntactic database \end{tabular} \caption[Interfaces database]{Interfaces to the database maintenance tools} \label{morphsyn-tool} \end{figure} XTAG also has a parsing and grammar development interface (\cite{PSJ92}). This interface includes a tree editor and the ability to vary parameters in the parser, work with multiple grammars and/or parsers, and use metarules for more efficient tree editing and construction (\cite{becker94}). The interface is shown in Figure~\ref{xtag-interface}. It has the following features: \begin{itemize} \item Menu-based facility for creating and modifying tree files and loading grammar files. \item User-controlled parser parameters, including the root category (main S, embedded S, NP, etc.), and the use of the tagger (on/off/retry on failure). \item Storage/retrieval facilities for elementary and parsed trees. \item The production of postscript files corresponding to elementary and parsed trees. \item Graphical displays of tree and feature data structures, including a scroll `web' for large tree structures. \item Mouse-based tree editor for creating and modifying trees and feature structures. \item Hand combination of trees by adjunction or substitution for use in diagnosing grammar problems. \item Metarule tool for automatic aid to the generation of trees by using tree-based transformation rules. \end{itemize} \begin{figure}[htb] \centering \mbox{} {\psfig{figure=ps/xtag-interface.ps,height=3.0in}} \caption[XTAG Interface]{Interface to the XTAG system} \label{xtag-interface} \end{figure} \section{Computer Platform} XTAG was developed on the Sun SPARC station series. It has been tested on various Sun platforms, including the Ultra-1 and Ultra-Enterprise. XTAG is freely available from the XTAG web page at {\tt http://www.cis.upenn.edu/\~{}xtag/}.
It requires 75 MB of disk space (once all binaries and databases are created after the install). XTAG requires the following software to run: \begin{itemize} \item A machine running UNIX and X11R4 (or higher). Previous releases of X will not work. X11R4 is free software which usually comes bundled with your OS. It is also freely available for various platforms at {\tt http://www.xfree86.org/}. \item A Common Lisp compiler which supports the latest definition of Common Lisp (Steele's Common Lisp, second edition). XTAG has been tested on Lucid Common Lisp/SPARC Solaris, Version: 4.2.1. Allegro CL is no longer directly supported; however, there have been third-party ports to recent versions of Allegro CL. \item CLX version 4 or higher. CLX is the Lisp equivalent to the Xlib package written in C. \item Mark Kantrowitz's Lisp Utilities from CMU: logical-pathnames and defsystem. \end{itemize} A patched version of CLX (Version 5.02) for SunOS 5.5.1 and the CMU Lisp Utilities are provided in our ftp directory for your convenience. However, we ask that you refer to the appropriate sources for updates. The morphology database component (\cite{karp92}), no longer under licensing restrictions, is available as a separate download from the XTAG web page (see above for URL). The syntactic database component is also available as part of the XTAG system (\cite{EgediMartin94}). More information can be obtained on the XTAG web page at \\ {\tt http://www.cis.upenn.edu/\~{}xtag/}. \chapter{Passives} \label{passives} In passive constructions such as (\ex{1}), the subject NP is interpreted as having the same role as the direct object NP in the corresponding active declarative (\ex{2}). \enumsentence{{\bf An airline buy-out bill} was approved by the House.
(WSJ)} \enumsentence{The House approved {\bf an airline buy-out bill}.} \begin{figure}[hbt] \centering \begin{tabular}{ccccc} \psfig{figure=ps/passives-files/betanx1Vs2-reduced-features.ps,height=6.5cm}& \hspace{1.0in}& \psfig{figure=ps/passives-files/betanx1Vbynx0s2.ps,height=6.5cm}& \hspace{1.0in}& \psfig{figure=ps/passives-files/betanx1Vs2bynx0.ps,height=6.5cm}\\ (a)&&(b)&&(c) \end{tabular} \caption{Passive trees in the Sentential Complement with NP tree family: $\beta$nx1Vs2 (a), $\beta$nx1Vbynx0s2 (b) and $\beta$nx1Vs2bynx0 (c)} \label{passive-trees} \label{2;2,5} \end{figure} In a movement analysis, the direct object is said to have moved to the subject position. The original declarative subject is either absent in the passive or is in a {\it by}-headed PP ({\it by} phrase). In the English XTAG grammar, passive constructions are handled by having separate trees within the appropriate tree families. Passive trees are found in most tree families that have a direct object in the declarative tree (the light verb tree families, for instance, do not contain passive trees). Passive trees occur in pairs: one tree with the {\it by} phrase, and another without it. Variations in the location of the {\it by} phrase are possible if a subcategorization includes other arguments such as a PP or an indirect object. Additional trees are required for these variations. For example, the Sentential Complement with NP tree family has three passive trees, shown in Figure~\ref{passive-trees}: one without the {\it by} phrase (Figure~\ref{passive-trees}(a)), one with the {\it by} phrase before the sentential complement (Figure~\ref{passive-trees}(b)), and one with the {\it by} phrase after the sentential complement (Figure~\ref{passive-trees}(c)). Figure~\ref{passive-trees}(a) also shows the feature restrictions imposed on the anchor\footnote{A reduced set of features is shown for readability.}. Only verbs with {\bf $<$mode$>$=ppart} (i.e.
verbs with passive morphology) can anchor this tree. The {\bf $<$mode$>$} feature is also responsible for requiring that passive {\it be} adjoin into the tree to create a matrix sentence. Since all matrix sentences are required to have {\bf $<$mode$>$=ind/imp}, an auxiliary verb that selects {\bf $<$mode$>$=ppart} and {\bf $<$passive$>$=+} (such as {\it was}) must adjoin (see Chapter~\ref{auxiliaries} for more information on the auxiliary verb system). \chapter{Punctuation Marks} \label{punct-chapt} Many parsers require that punctuation be stripped out of the input. Since punctuation is often optional, this sometimes has no effect. However, there are a number of constructions which must obligatorily contain punctuation, and adding analyses of these to the grammar without the punctuation would lead to severe overgeneration. An especially common example is noun appositives. Without access to punctuation, one would have to allow every combinatorial possibility of NPs in noun sequences, which is clearly undesirable (especially since there is already unavoidable noun-noun compounding ambiguity). Aside from coverage issues, it is also preferable to take input ``as is'' and do as little editing as possible. With the addition of punctuation to the XTAG grammar, we need only convert (or assume the conversion of) certain sequences of punctuation into the ``British'' order (this is discussed in more detail below in Section \ref{bal}). The XTAG POS tagger currently tags every punctuation mark as itself. These tags are all converted to the POS tag {\it Punct} before parsing. This allows us to treat the punctuation marks as a single POS class. They then have features which distinguish amongst them. Wherever possible, we have the punctuation marks as anchors, to facilitate early filtering. The full set of punctuation marks is separated into three classes: balanced, separating and terminal.
The balanced punctuation marks are quotes and parentheses; the separating marks are commas, dashes, semi-colons and colons; and the terminal marks are periods, exclamation points and question marks. Thus, the {\bf $<$punct$>$} feature is complex (like the {\bf $<$agr$>$} feature), yielding feature equations like {\bf $<$Punct bal = paren$>$} or {\bf $<$Punct term = excl$>$}. Separating and terminal punctuation marks do not occur adjacent to other members of the same class, but may occasionally occur adjacent to members of the other class, e.g. a question mark on a clause which is separated by a dash from a second clause. Balanced punctuation marks are sometimes adjacent to one another, e.g. quotes immediately inside of parentheses. The {\bf $<$punct$>$} feature allows us to control these local interactions. We also need to control non-local interaction of punctuation marks. Two cases of this are so-called quote alternation, wherein embedded quotation marks must alternate between single and double, and the impossibility of embedding an item containing a colon inside of another item containing a colon. Thus, we have a fourth value for {\bf $<$punct$>$}, {\bf $<$contains colon/dquote/etc. +/-$>$}, which indicates whether or not a constituent contains a particular punctuation mark. This feature is percolated through all auxiliary trees. Things which may not embed are: colons under colons, semi-colons, dashes or commas; semi-colons under semi-colons or commas. Although it is rare, parentheses may appear inside of parentheses, say with a bibliographic reference inside a parenthesized sentence. \section{Appositives, parentheticals and vocatives} These trees handle constructions where additional lexical material is only licensed in conjunction with particular punctuation marks. Since the lexical material is unconstrained (virtually any noun can occur as an appositive), the punctuation marks are anchors and the other nodes are substitution sites.
There are cases where the lexical material is restricted, as with parenthetical adverbs like {\it however}, and in those cases we have the adverb as the anchor and the punctuation marks as substitution sites. When these constructions can appear inside of clauses (non-peripherally), they must be separated by punctuation marks on both sides. However, when they occur peripherally they have either a preceding or following punctuation mark. We handle this by having both peripheral and non-peripheral trees for the relevant constructions. The alternative is to insert the second (following) punctuation mark in the tokenization process (i.e. insert a comma before the period when an appositive appears on the last NP of a sentence). However, this is very difficult to do accurately. \subsection{$\beta$nxPUnxPU} The symmetric (non-peripheral) tree for NP appositives, anchored by: comma, dash or parentheses. It is shown in Figure \ref{nxPUnxPU} anchored by parentheses. \enumsentence{The music here , Russell Smith's ``Tetrameron '' , sounded good . [Brown:cc09]} \enumsentence{...cost 2 million pounds (3 million dollars)} \enumsentence{Sen. David Boren (D., Okla.)...} \enumsentence{ ...some analysts believe the two recent natural disasters -- Hurricane Hugo and the San Francisco earthquake -- will carry economic ramifications.... [WSJ] } \begin{figure}[hbt] \centering \hspace{0.0in} \psfig{figure=ps/punct-files/nxPUnxPU.ps,height=3.0in} \caption{The $\beta$nxPUnxPU tree, anchored by parentheses} \label{nxPUnxPU} \end{figure} The punctuation marks are the anchors and the appositive NP is substituted. The appositive can be conjoined, but only with a lexical conjunction (not with a comma). Appositives with commas or dashes cannot be pronouns, although they may be conjuncts containing pronouns. When used with parentheses this tree actually presents an alternative rather than an appositive, so a pronoun is possible. 
Finally, the appositive position is restricted to having nominative or accusative case to block PRO from appearing here. Appositives can be embedded, as in (\ex{1}), but do not seem to be able to stack on a single NP. In this they are more like restrictive relatives than appositive relatives, which typically can stack. \enumsentence{...noted Simon Briscoe, UK economist for Midland Montagu, a unit of Midland Bank PLC.} \subsection{$\beta$nPUnxPU} The symmetric (non-peripheral) tree for N-level NP appositives is anchored by a comma. The modifier is typically an address. It is clear from examples such as (\ex{1}) that these are attached at N, rather than NP. {\it Carrier} is not an appositive on {\it Menlo Park}, as it would be if these were simply stacked appositives. Rather, {\it Calif.} modifies {\it Menlo Park}, and that entire complex is compounded with {\it carrier}, as shown in the correct derivation in Figure \ref{nPUnx}. Because this distinction is less clear when the modifier is peripheral (e.g. ends the sentence), and it would be difficult to distinguish between NP and N attachment, we do not currently allow a peripheral N-level attachment. \enumsentence{An official at Consolidated Freightways Inc., a Menlo Park, Calif., less-than-truckload carrier , said...} \enumsentence{Rep. Ronnie Flippo (D., Ala.), of the delegation, says...} \begin{figure}[hbt] \centering \hspace{0.0in} \psfig{figure=ps/punct-files/nPUnx.ps,height=4.0in} \caption{An N-level modifier, using the $\beta$nPUnx tree} \label{nPUnx} \end{figure} \subsection{$\beta$nxPUnx} This tree, which can be anchored by a comma, dash or colon, handles asymmetric (peripheral) NP appositives and NP colon expansions of NPs. Figure \ref{nxPUnx} shows this tree anchored by a dash and a colon. Like the symmetric appositive tree, $\beta$nxPUnxPU, the asymmetric appositive cannot be a pronoun, while the colon expansion can.
Thus, this constraint comes from the syntactic entry in both cases rather than being built into the tree. \begin{figure}[hbt] \centering \hspace{0.0in} \begin{tabular}{cc} \psfig{figure=ps/punct-files/nxPUnx.ps,height=3.0in} & \psfig{figure=ps/punct-files/colon-exp.ps,height=4.5in} \\ (a) & (b) \\ \end{tabular} \caption{The derived trees for an NP with (a) a peripheral, dash-separated appositive and (b) an NP colon expansion (uttered by the Mouse in \protect{\it Alice's Adventures in Wonderland})} \label{nxPUnx} \end{figure} \enumsentence{the bank's 90\% shareholder -- Petroliam Nasional Bhd. [Brown]} \enumsentence{...said Chris Dillow, senior U.K. economist at Nomura Research Institute .} \enumsentence{...qualities that are seldom found in one work: Scrupulous scholarship, a fund of personal experience,... [Brown:cc06]} \enumsentence{I had eyes for only one person : him .} The colon expansion cannot itself contain a colon, so the foot NP has the feature NP.t:$<punct contains colon> = -$. \subsection{$\beta$PUpxPUvx} Tree for pre-VP parenthetical PP, anchored by commas or dashes: \enumsentence{John , in a fit of anger , broke the vase} \enumsentence{Mary , just within the last year , has totalled two cars} \noindent These are clearly not NP modifiers. Figures \ref{betaPUpxPUvx} and \ref{PUpxPUvx-anger} show this tree alone and as part of the parse for (\ex{-1}). \begin{figure}[hbt] \centering \hspace{0.0in} \psfig{figure=ps/punct-files/betaPUpxPUvx.ps,height=2.0in} \caption{The $\beta$PUpxPUvx tree, anchored by commas} \label{betaPUpxPUvx} \end{figure} \begin{figure}[hbt] \centering \hspace{0.0in} \psfig{figure=ps/punct-files/betaPUpxPUvx-anger.ps,height=4.0in} \caption{Tree illustrating the use of $\beta$PUpxPUvx} \label{PUpxPUvx-anger} \end{figure} \subsection{$\beta$puARBpuvx} \label{par-adverb} Parenthetical adverbs: {\it however}, {\it though}, etc.
Since the class of adverbs is highly restricted, this tree is anchored by the adverb and the punctuation marks substitute. The punctuation marks may be either commas or dashes. Like the parenthetical PP above, these are clearly not NP modifiers. \enumsentence{The new argument over the notification guideline , however , could sour any atmosphere of cooperation that existed . \hfill [WSJ]} \subsection{$\beta$sPUnx} Sentence-final vocative, anchored by comma: \enumsentence{You were there , Stanley/my boy .} Also, when anchored by colon, NP expansion on S. These often appear to be extraposed modifiers of some internal NP. The NP must be quite heavy, and is usually a list: \enumsentence{Of the major expansions in 1960, three were financed under the R. I. Industrial Building Authority's 100\% guaranteed mortgage plan: Collyer Wire, Leesona Corporation, and American Tube \& Controls.} A simplified version of this sentence is shown in Figure \ref{sPUnx}. The NP cannot be a pronoun in either of these cases. Both vocatives and colon expansions are restricted to appear on tensed clauses (indicative or imperative). \begin{figure}[hbt] \centering \hspace{0.0in} \psfig{figure=ps/punct-files/sPUnx-colon.ps,height=4.0in} \caption{A tree illustrating the use of sPUnx for a colon expansion attached at S.} \label{sPUnx} \end{figure} \subsection{$\beta$nxPUs} Tree for sentence-initial vocatives, anchored by a comma: \enumsentence{Stanley/my boy , you were there .} The noun phrase may be anything but a pronoun, although it is most commonly a proper noun. The clause adjoined to must be indicative or imperative. \section{Bracketing punctuation} \label{bal} \subsection{Simple bracketing} Trees: $\beta$PUsPU, $\beta$PUnxPU, $\beta$PUnPU, $\beta$PUvxPU, $\beta$PUvPU, $\beta$PUarbPU, $\beta$PUaPU, $\beta$PUdPU, $\beta$PUpxPU, $\beta$PUpPU \noindent These trees are selected by parentheses and quotes and can adjoin onto any node type, whether a head or a phrasal constituent.
This handles things in parentheses or quotes which are syntactically integrated into the surrounding context. Figure \ref{bal-trees} shows the $\beta$PUsPU anchored by parentheses, and this tree along with $\beta$PUnxPU in a derived tree. \enumsentence{Dick Carroll and his accordion (which we now refer to as ``Freida'') held over at Bahia Cabana where ``Sir'' Judson Smith brings in his calypso capers Oct. 13 .\hfill [Brown:ca31]} \enumsentence{...noted that the term ``teacher-employee'' (as opposed to, e.g., ``maintenance employee'') was a not inapt description. \hfill [Brown:ca35]} \begin{figure}[hbt] \centering \hspace{0.0in} \begin{tabular}{cc} \psfig{figure=ps/punct-files/PUsPU-paren.ps,height=2.0in} & \psfig{figure=ps/punct-files/bal-parse.ps,height=4in} \\ (a) & (b) \\ \end{tabular} \caption{$\beta$PUsPU anchored by parentheses, and in a derivation, along with $\beta$PUnxPU} \label{bal-trees} \end{figure} There is a convention in English that quotes embedded in quotes alternate between single and double; in American English the outermost are double quotes, while in British English they are single. The {\bf contains} feature is used to control this alternation. The trees anchored by double quotation marks have the feature {\bf punct contains dquote = -} on the foot node and the feature {\bf punct contains dquote = +} on the root. All adjunction trees are transparent to the {\bf contains} feature, so if any tree below the double quote is itself enclosed in double quotes the derivation will fail. Likewise with the trees anchored by single quotes. The quote trees in effect ``toggle'' the {\bf contains Xquote} feature. Immediate proximity is handled by the {\bf punct balanced} feature, which allows quotes inside of parentheses, but not vice-versa. In addition, American English typically places/moves periods (and commas) inside of quotation marks when they would logically occur outside, as in example \ex{1}. 
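The ``toggling'' behavior of the {\bf contains Xquote} feature can be illustrated procedurally. The following Python sketch is our own illustration for exposition only (the node representation and all names are invented; XTAG itself enforces this through feature unification during the derivation): a quote tree fails if the material below it contains an unshielded quote of the same type, while a quote of the other type shields its contents, which yields exactly the double/single alternation pattern.

```python
# Our own illustrative model (not XTAG code): a constituent is (kind, children).
# A quote tree requires that nothing below it "contains" the same quote type;
# the opposite quote type shields its contents, modeling the feature toggle.

QUOTES = ("dquote", "squote")

def contains(node, quote):
    """True if `node` carries an unshielded quote of type `quote`."""
    kind, children = node
    if kind == quote:
        return True                 # root of a same-type quote tree: contains = +
    if kind in QUOTES:
        return False                # the other quote type shields (the "toggle")
    return any(contains(c, quote) for c in children)

def well_formed(node):
    """Fail whenever a quote tree dominates an unshielded same-type quote."""
    kind, children = node
    if kind in QUOTES and any(contains(c, kind) for c in children):
        return False                # same-type embedding: derivation fails
    return all(well_formed(c) for c in children)

word = ("word", [])
alternating = ("dquote", [("squote", [("dquote", [word])])])  # properly alternating
clashing = ("dquote", [("clause", [("dquote", [word])])])     # dquote inside dquote
```

Here `well_formed(alternating)` succeeds while `well_formed(clashing)` fails, mirroring the feature clash that blocks a derivation with double quotes directly embedded in double quotes.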
The comma in the first part of the quote is not part of the quote, but rather part of the parenthetical quoting clause. However, by convention it is shifted inside the quote, as is the final period. British English does not do this. We assume here that the input has already been tokenized into the ``British'' format. \enumsentence{``You can't do this to us ,'' Diane screamed . ``We are Americans.''} The $\beta$PUsPU tree can handle quotation marks around multiple sentences, since the $\beta$sPUs tree allows us to join two sentences with a period, exclamation point or question mark. Currently, however, we cannot handle the style where only an open quote appears at the beginning of a paragraph when the quotation extends over multiple paragraphs. We could allow a lone open quote to select the $\beta$PUs tree, if this is deemed desirable. Also, the $\beta$PUsPU tree is selected by a pair of commas to handle non-peripheral appositive relative clauses, such as in example (\ex{1}). Restrictive and appositive relative clauses are not syntactically differentiated in the XTAG grammar (cf. Chapter \ref{rel_clauses}). \enumsentence{This news , announced by Jerome Toobin , the orchestra's administrative director , brought applause ... [Brown:cc09]} The trees discussed in this section will only allow balanced punctuation marks to adjoin to constituents. We will not get them around non-constituents, as in (\ex{1}). \enumsentence{Mary asked him to leave (and he left)} \subsection{$\beta$sPUsPU} This tree allows a parenthesized clause to adjoin onto a non-parenthesized clause. \enumsentence{Innumerable motels from Tucson to New York boast swimming pools ( `` swim at your own risk '' is the hospitable sign poised at the brink of most pools ) . \hfill [Brown:ca17]} \section{Punctuation trees containing no lexical material} \subsection{$\alpha$PU} This is the elementary tree for substitution of punctuation marks.
This tree is used in the quoted speech trees, where including the punctuation mark as an anchor along with the verb of saying would require a new entry for every tree selecting the relevant tree families. It is also used in the tree for parenthetical adverbs ($\beta$puARBpuvx), and for S-adjoined PPs and adverbs ($\beta$spuARB and $\beta$spuPnx). \subsection{$\beta$PUs} Anchored by comma: allows comma-separated clause-initial adjuncts, (\ex{1}-\ex{2}). \enumsentence{Here , as in ``Journal'' , Mr. Louis has given himself the lion's share of the dancing... \hfill [Brown:cc09]} \enumsentence{Choreographed by Mr. Nagrin, the work filled the second half of a program} To keep this tree from appearing on root Ss (i.e. {\it , sentence}), we have a root constraint that {\bf $<$punct struct$>$ = nil} (similar to the requirement that root Ss be tensed, i.e. {\bf $<$mode~=~ind/imp$>$}). The {\bf $<$punct struct$>$ = nil} feature on the foot blocks stacking of multiple punctuation marks. This feature is shown in the tree in Figure \ref{PUs}. \begin{figure}[hbt] \centering \hspace{0.0in} \psfig{figure=ps/punct-files/PUs.ps,height=5.5in} \caption{$\beta$PUs, with features displayed} \label{PUs} \end{figure} This tree can also be used for adjuncts on embedded clauses: \enumsentence{One might expect that in a poetic career of seventy-odd years, some changes in style and method would have occurred, some development taken place. \hfill [Brown:cj65]} These adjuncts sometimes have commas on both sides of the adjunct, or, like (\ex{0}), only have them at the end of the adjunct. Finally, this tree is also used for peripheral appositive relative clauses. \enumsentence{Interest may remain limited into tomorrow's U.K.
trade figures, which the market will be watching closely to see if there is any improvement after disappointing numbers in the previous two months.} \subsection{$\beta$sPUs} \label{sPUs} This tree handles clausal ``coordination'' with comma, dash, colon, semi-colon or any of the terminal punctuation marks. The first clause must be either indicative or imperative. The second may also be infinitival with the separating punctuation marks, but must be indicative or imperative with the terminal marks; with a comma, it may only be indicative. The two clauses need not share the same mode. NB: Allowing the terminal punctuation marks to anchor this tree allows us to parse sequences of multiple sentences. This is not the usual mode of parsing; if it were, this sort of sequencing might be better handled by a higher level of processing. \enumsentence{For critics , Hardy has had no poetic periods -- one does not speak of early Hardy or late Hardy , or of the London or Max Gate period....} \enumsentence{Then there was exercise , boating and hiking , which was not only good for you but also made you more virile : the thought of strenuous activity left him exhausted.} This construction is one of the few where two non-bracketing punctuation marks can be adjacent. It is possible (if rare) for the first clause to end with a question mark or exclamation point, when the two clauses are conjoined with a semi-colon, colon or dash. Features on the foot node, as shown in Figure \ref{sPUs-tree}, control this interaction. \begin{figure}[hbt] \centering \hspace{0.0in} \psfig{figure=ps/punct-files/sPUs.ps,height=4.5in} \caption{$\beta$sPUs, with features displayed} \label{sPUs-tree} \end{figure} Complementizers are not permitted on either conjunct. 
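The mode restrictions on the two conjuncts of $\beta$sPUs can be summarized as a small table. The Python sketch below is purely our own illustration (the mode names follow the {\bf $<$mode$>$} feature values; it is not part of the XTAG implementation):

```python
# Our own illustrative summary (not XTAG code) of the beta-sPUs constraints:
# which <mode> the second clause may have, per conjoining punctuation mark.

SEPARATING = {",", "--", ":", ";"}   # separating punctuation marks
TERMINAL = {".", "!", "?"}           # terminal punctuation marks

def first_clause_ok(mode):
    return mode in {"ind", "imp"}    # first conjunct: indicative or imperative

def second_clause_ok(punct, mode):
    if punct == ",":
        return mode == "ind"                   # comma: indicative only
    if punct in SEPARATING:
        return mode in {"ind", "imp", "inf"}   # dash/colon/semi-colon: also infinitival
    if punct in TERMINAL:
        return mode in {"ind", "imp"}          # terminal marks: ind or imp only
    return False
```

For instance, `second_clause_ok(";", "inf")` holds (an infinitival second clause after a semi-colon) while `second_clause_ok(",", "inf")` does not, matching the restrictions stated above.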
Subordinating conjunctions sometimes appear on the right conjunct, but seem to be impossible on the left: \enumsentence{Killpath would just have to go out and drag Gun back by the heels once an hour ; because he'd be damned if he was going to be a mid-watch pencil-pusher . [Brown:cl17]} \enumsentence{The best rule of thumb for detecting corked wine (provided the eye has not already spotted it) is to smell the wet end of the cork after pulling it : if it smells of wine , the bottle is probably all right ; if it smells of cork , one has grounds for suspicion. [Brown:cf27]} \subsection{$\beta$sPU} This tree handles the sentence-final punctuation marks when selected by a question mark, exclamation point or period. One could also require a final punctuation mark for all clauses, but such an approach would not allow non-periods to occur internally, for instance before a semi-colon or dash as noted above in Section \ref{sPUs}. This tree currently only adjoins to indicative or imperative (root) clauses. \enumsentence{He left !} \enumsentence{Get lost .} \enumsentence{Get lost ?} The feature {\bf punct bal = nil} on the foot node ensures that this tree only adjoins inside of parentheses or quotes completely enclosing a sentence (\ex{1}), but does not restrict it from adjoining to a clause which ends with balanced punctuation if only the end of the clause is contained in the parentheses or quotes (\ex{2}). \enumsentence{(John then left .)} \enumsentence{(John then left) .} \enumsentence{Mary asked him to leave (immediately) .} This tree is also selected by the colon to handle a colon expansion after an adjunct clause: \enumsentence{Expressed differently : if the price for becoming a faithful follower... \hfill [Brown:cd02]} \enumsentence{Expressing it differently : if the price for becoming a faithful follower... } \enumsentence{To express it differently : if the price for becoming a faithful follower...
\hfill [Brown:cd02]} This tree is only used after adjunct (untensed) clauses, which adjoin to the tensed clause using the adjunct clause trees (cf.\ Section \ref{adjunct-cls}); the {\bf mode} of the complete clause is that of the matrix rather than the adjunct. Indicative or imperative (i.e. root) clauses separated by a colon use the $\beta$sPUs tree (Section \ref{sPUs}). \subsection{$\beta$vPU} This tree is anchored by a colon or a dash, and occurs between a verb and its complement. These typically are lists. \enumsentence{Printed material Available , on request , from U.S. Department of Agriculture , Washington 25 , D.C. , are : Cooperative Farm Credit Can Assist......\hfill [Brown:ch01]} \subsection{$\beta$pPU} This tree is anchored by a colon or a dash, and occurs between a preposition and its complement. As with the tree above, it typically occurs with a sequence of conjoined complements. \enumsentence{...and utilization such as : (A) the protection of forage...} \enumsentence{...can be represented as : Af.} \section{Other trees} \subsection{$\beta$spuARB} \label{post-adverb} In general, we attach post-clausal modifiers at the VP node, as you typically get scope ambiguity effects with negation ({\it John didn't leave today} -- did he leave or not?). However, with post-sentential, comma-separated adverbs, there is no ambiguity: in {\it John didn't leave, today} he definitely did not leave. Since this tree is only selected by a subset of the adverbs (namely, those which can appear pre-sententially, without a punctuation mark), it is anchored by the adverb. \enumsentence{The names of some of these products don't suggest the risk involved in buying them , either . \hfill [WSJ]} \subsection{$\beta$spuPnx} \label{post-PP} Clause-final PP separated by a comma. Like the adverbs described above, these differ from VP adjoined PPs in taking widest scope.
\enumsentence{...gold for current delivery settled at \$367.30 an ounce , up 20 cents .} \enumsentence{It increases employee commitment to the company , with all that means for efficiency and quality control .} \subsection{$\beta$nxPUa} Anchored by colon or dash, allows for post-modification of NPs by adjectives. \enumsentence{Make no mistake , this Gorky Studio drama is a respectable import -- aptly grave , carefully written , performed and directed . } \chapter{Relative Clauses} \label{rel_clauses} Relative clauses are NP modifiers that involve extraction of an argument or an adjunct. The NP head (the portion of the NP being modified by the relative clause) is not directly related to the extracted element. For example in (\ex{1}), {\it the person} is the head NP and is modified by the relative clause {\it whose mother $\epsilon$ likes Chris}. {\em The person} is not interpreted as the subject of the relative clause which is missing an overt subject. In other cases, such as (\ex{2}), the relationship between the head NP {\em export exhibitions} and the relative clause may seem to be more direct but even there we assume that there are two independent relationships: one between the entire relative clause and the NP it modifies, and another between the extracted element and its trace. The extracted element may be an overt {\em wh}-phrase as in (\ex{1}) or a covert element as in (\ex{2}). \enumsentence{the person whose mother likes Chris} \enumsentence{export exhibitions that included high-tech items} Relative clauses are represented in the English XTAG grammar by auxiliary trees that adjoin to NP's. These trees are anchored by the verb in the clause and appear in the appropriate tree families for the various verb subcategorizations. Within a tree family there will be groups of relative clause trees based on the declarative tree and each passive tree.
Within each of these groups, there is a separate relative clause tree corresponding to each possible argument that can be extracted from the clause. There is no relationship between the extracted position and the head NP. The relationship between the relative clause and the head NP is treated as a semantic relationship which will be provided by any reasonable compositional theory. The relationship between the extracted element (which can be covert) and its trace is captured by co-indexing the {\bf $<$trace$>$} features of the extracted NP and the NP$_{w}$ node in the relative clause tree. If, for example, it is {\bf NP$_{0}$} that is extracted, we have the following feature equations:\\ {\bf NP$_{w}$.t:$\langle$ trace $\rangle =$NP$_{0}$.t:$\langle$ trace $\rangle$}\\ {\bf NP$_{w}$.t:$\langle$ case $\rangle =$NP$_{0}$.t:$\langle$ case $\rangle$}\\ {\bf NP$_{w}$.t:$\langle$ agr $\rangle =$NP$_{0}$.t:$\langle$ agr $\rangle$} \footnote{ No adjunct traces are represented in the XTAG analysis of adjunct extraction. Relative clauses on adjuncts do not have traces and consequently feature equations of the kind shown here are not present.} Representative examples from the transitive tree family are shown with a relevant subset of their features in Figures~\ref{trans-rel-clause-trees}(a) and \ref{trans-rel-clause-trees}(b).
Figure~\ref{trans-rel-clause-trees}(a) involves a relative clause with a covert extracted element, while Figure~\ref{trans-rel-clause-trees}(b) involves a relative clause with an overt {\em wh}-phrase.\footnote{ The convention followed in naming relative clause trees is outlined in Appendix~\ref{tree-naming}.} \begin{figure}[htb] \begin{tabular}{cc} \psfig{figure=ps/rel_clauses-files/NbetaNc1nx0Vnx1.ps,height=10.0cm}& \psfig{figure=ps/rel_clauses-files/NbetaN0nx0Vnx1.ps,height=10.0cm}\\ (a)&(b) \end{tabular} \caption{Relative clause trees in the transitive tree family: $\beta$Nc1nx0Vnx1 (a) and $\beta$N0nx0Vnx1 (b)} \label{trans-rel-clause-trees} \label{2;16,1} \label{2;15,1} \end{figure} The above analysis is essentially identical to the GB analysis of relative clauses. One aspect of its implementation is that a covert {\bf $+<$wh$>$} NP and a covert Comp have to be introduced. See (\ex{1}) and (\ex{2}) for example. \enumsentence{export exhibitions [ [$_{NP_{w}}$$\epsilon$]$_{i}$ [ that [ $\epsilon$$_{i}$ included high-tech items]]]} \enumsentence{the export exhibition [ [$_{NP_{w}}$$\epsilon$]$_{i}$ [ $\epsilon$$_{C}$ [Muriel planned $\epsilon$$_{i}$]]]} The lexicalized nature of XTAG makes it problematic to have trees headed by null strings. Of the two null trees, NP$_{w}$ and Comp, that we could postulate, the former is definitely more undesirable because it would lead to massive overgeneration, as can be seen in (\ex{1}) and (\ex{2}). \enumsentence{* [$_{NP_{w}}$$\epsilon$] did John eat the apple? (as a {\em wh}-question)} \enumsentence{* I wonder [[$_{NP_{w}}$$\epsilon$] Mary likes John](as an indirect question)} The presence of an initial tree headed by a null Comp does not lead to problems of overgeneration because relative clauses are the only environment with a Comp substitution node. \footnote{Complementizers in clausal complementation are introduced by adjunction. See section \ref{comp-distr}.} Consequently,
our treatment of relative clauses has different trees to handle relative clauses with an overt extracted {\em wh}-NP and relative clauses with a covert extracted {\em wh}-NP. Relative clauses with an overt extracted {\em wh}-NP involve substitution of a $+${\bf $<$wh$>$} NP into the NP$_{w}$ node \footnote{The feature equation used is {\bf NP$_{w}$.t:$<$wh$> = +$}. Examples of NPs that could substitute under NP$_{w}$ are {\em whose mother}, {\em who}, {\em whom}, and also {\em which} but not {\em when} and {\em where} which are treated as exhaustive $+${\em wh} PPs. } and have a Comp node headed by $\epsilon$$_{C}$ built in. Relative clauses with a covert extracted {\em wh}-NP have a NP$_{w}$ node headed by $\epsilon$$_{w}$ built in and involve substitution into the Comp node. The Comp node that is introduced by substitution can be the $\epsilon$$_{C}$ (null complementizer), {\em that}, and {\em for}. For example, the tree shown in Figure~\ref{trans-rel-clause-trees}(b) is used for the relative clauses shown in sentences (\ex{1})-(\ex{2}), while the tree shown in Figure~\ref{trans-rel-clause-trees}(a) is used for the relative clauses in sentences (\ex{3})-(\ex{6}). \enumsentence{the man who Muriel likes} \enumsentence{the man whose mother Muriel likes} \enumsentence{the man Muriel likes} \enumsentence{the book for Muriel to read} \enumsentence{the man that Muriel likes} \enumsentence{the book Muriel is reading} Cases of PP pied-piping (cf. \ex{1}) are handled in a similar fashion by building in a PP$_{w}$ node. \enumsentence{the demon by whom Muriel was chased} See the tree in Figure~\ref{trans-rel-clause-trees2}. 
\begin{figure}[htb] \begin{tabular}{cc} \centerline{\psfig{figure=ps/rel_clauses-files/NbetaNpxnx0Vnx1.ps,height=12.0cm}} \end{tabular} \caption{Adjunct relative clause tree with PP-pied-piping in the transitive tree family: $\beta$Npxnx0Vnx1} \label{trans-rel-clause-trees2} \label{2;Npxnx0Vnx1} \end{figure} \section{Complementizers and clauses} The co-occurrence constraints that exist between various Comps and the clause type of the clause they occur with are implemented through combinations of different clause types using the {\bf $<$mode$>$} feature, the {\bf $<$select-mode$>$} feature, and the {\bf $<$rel-pron$>$} feature. Clauses are specified for the {\bf $<$mode$>$} feature which indicates the clause type of that clause. Possible values for the {\bf $<$mode$>$} feature are {\bf ind, inf, ppart, ger} etc. Comps are lexically specified for a feature named {\bf $<$select-mode$>$}. In addition, the {\bf $<$select-mode$>$} feature of the Comp is equated with the {\bf $<$mode$>$} feature of its complement S by the following equation:\\ {\bf S$_{r}$.t:$\langle$mode$\rangle =$ Comp.t:$\langle$select-mode$\rangle$} The lexical specifications of the Comps are shown below: \begin{itemize} \item $\epsilon$$_{C}$, {\bf Comp.t:$\langle$select-mode$\rangle =$ind/inf/ger/ppart} \item {\em that}, {\bf Comp.t:$\langle$select-mode$\rangle =$ind} \item {\em for}, {\bf Comp.t:$\langle$select-mode$\rangle =$inf} \end{itemize} The following examples display the co-occurrence constraints which the {\bf $<$select-mode$>$} specifications assigned above implement.
For $\epsilon$$_{C}$: \enumsentence{the book Muriel likes ({\bf S.t:$<$mode$> =$ ind})} \enumsentence{a book to like ({\bf S.t:$<$mode$> =$ inf})} \enumsentence{the girl reading the book ({\bf S.t:$<$mode$> =$ ger})} \enumsentence{the book read by Muriel ({\bf S.t:$<$mode$> =$ ppart})} For {\em for}: \enumsentence{*the book for Muriel likes ({\bf S.t:$<$mode$> =$ ind})} \enumsentence{a book for Mary to like ({\bf S.t:$<$mode$> =$ inf})} \enumsentence{*the girl for reading the book ({\bf S.t:$<$mode$> =$ ger})} \enumsentence{*the book for read by Muriel ({\bf S.t:$<$mode$> =$ ppart})} For {\em that}: \enumsentence{the book that Muriel likes ({\bf S.t:$<$mode$> =$ ind})} \enumsentence{*a book that (Muriel) to like ({\bf S.t:$<$mode$> =$ inf})} \enumsentence{*the girl that reading the book ({\bf S.t:$<$mode$> =$ ger})} \enumsentence{*the book that read by Muriel ({\bf S.t:$<$mode$> =$ ppart})} Relative clause trees that have substitution of {\bf NP$_{w}$} have the following feature equations:\\ {\bf S$_{r}$.t:$\langle$mode$\rangle =$ NP$_{w}$.t:$\langle$select-mode$\rangle$}\\ {\bf NP$_{w}$.t:$\langle$select-mode$\rangle =$ind} The examples that follow are intended to provide the rationale for the above setting of features. 
\enumsentence{ the boy whose mother chased the cat ({\bf S$_{r}$.t:$\langle$mode$\rangle =$ind})} \enumsentence{ *the boy whose mother to chase the cat ({\bf S$_{r}$.t:$\langle$mode$\rangle = $inf})} \enumsentence{ *the boy whose mother eaten the cake ({\bf S$_{r}$.t:$\langle$mode$\rangle =$ppart})} \enumsentence{ *the boy whose mother chasing the cat ({\bf S$_{r}$.t:$\langle$mode$\rangle =$ ger})} \enumsentence{ the boy [whose mother]$_{i}$ Bill believes $\epsilon$$_{i}$ to chase the cat\\ ({\bf S$_{r}$.t: $\langle$mode$\rangle =$ind})} The feature equations that appear in trees which have substitution of {\bf PP$_{w}$} are:\\ {\bf S$_{r}$.t:$\langle$mode$\rangle =$ PP$_{w}$.t:$\langle$select-mode$\rangle$}\\ {\bf PP$_{w}$.t:$\langle$select-mode$\rangle =$ind/inf} \footnote{As is the case for {\bf NP$_{w}$} substitution, any $+${\bf wh}-PP can substitute under PP$_{w}$. This is implemented by the following equation:\\ {\bf PP$_{w}$.t:$\langle$wh$\rangle = +$}. Not all cases of pied-piping involve substitution of {\bf PP$_{w}$}. In some cases, the P may be built in. In cases where part of the pied-piped PP is part of the anchor, it continues to function as an anchor even after pied-piping, i.e. the P node and the {\bf NP$_{w}$} node are represented separately. } Examples that justify the above feature setting follow. \enumsentence{ the person [by whom] this machine was invented ({\bf S$_{r}$.t:$\langle$mode$\rangle =$ind})} \enumsentence{ a baker [in whom]$_{i}$ PRO to trust $\epsilon$$_{i}$ ({\bf S$_{r}$.t:$\langle$mode$\rangle =$ inf})} \enumsentence{ *the fork [with which] (Geoffrey) eaten the pudding ({\bf S$_{r}$.t:$\langle$ mode$\rangle =$ppart})} \enumsentence{ *the person [by whom] (this machine) inventing ({\bf S$_{r}$.t:$\langle$mode $\rangle =$ger})} \subsection{Further constraints on the null Comp $\epsilon$$_{C}$} There are additional constraints on where the null Comp $\epsilon$$_{C}$ can occur.
The null Comp is not permitted in cases of subject extraction unless there is an intervening clause or the relative clause is a reduced relative ({\bf mode = ppart/ger}). This can be seen in (\ex{1}-\ex{4}). \enumsentence{ *the toy [$\epsilon$$_{i}$ [$\epsilon$$_{C}$ [ $\epsilon$$_{i}$ likes Dafna]]]} \enumsentence{ the toy [$\epsilon$$_{i}$ [$\epsilon$$_{C}$ Fred thinks [ $\epsilon$$_{i}$ likes Dafna]]]} \enumsentence{ the boy [$\epsilon$$_{i}$ [$\epsilon$$_{C}$ [ $\epsilon$$_{i}$ eating the guava]]]} \enumsentence{ the guava [$\epsilon$$_{i}$ [$\epsilon$$_{C}$ [ $\epsilon$$_{i}$ eaten by the boy]]]} To model this paradigm, the feature {\bf $\langle$rel-pron$\rangle$} is used in conjunction with the following equations: \begin{itemize} \item {\bf S$_{r}$.t:$\langle$rel-pron$\rangle =$ Comp.t:$\langle$rel-pron$\rangle$} \item {\bf S$_{r}$.b:$\langle$rel-pron$\rangle =$ S$_{r}$.b:$\langle$mode$\rangle$} \item {\bf Comp.b:$\langle$rel-pron$\rangle =$ppart/ger/adj-clause} (for $\epsilon$$_{C}$) \end{itemize} The full set of the equations shown above is only present in Comp substitution trees involving subject extraction. So (\ex{1}) will not be ruled out. \enumsentence{ the toy [$\epsilon$$_{i}$ [$\epsilon$$_{C}$ [ Dafna likes $\epsilon$$_{i}$ ]]]} The feature mismatch induced by the above equations is not remedied by adjunction of just any S-adjunct, since all other S-adjuncts are transparent to the {\bf $\langle$rel-pron$\rangle$} feature because of the following equation:\\ {\bf S$_{m}$.b:$\langle$rel-pron$\rangle =$ S$_{f}$.t:$\langle$rel-pron$\rangle$} \section{Reduced Relatives} Reduced relatives are permitted only in cases of subject-extraction. Past participial reduced relatives are only permitted on passive clauses. See (\ex{1}-\ex{8}).
\enumsentence{ the toy [$\epsilon$$_{i}$ [$\epsilon$$_{C}$ [ $\epsilon$$_{i}$ playing the banjo]]] } \enumsentence{ *the instrument [$\epsilon$$_{i}$ [$\epsilon$$_{C}$ [ Amis playing $\epsilon$$_{i}$ ]]] } \enumsentence{ *the day [$\epsilon$$_{w}$ [$\epsilon$$_{C}$ [ Amis playing the banjo]]] } \enumsentence{ the apple [$\epsilon$$_{i}$ [$\epsilon$$_{C}$ [ $\epsilon$$_{i}$ eaten by Dafna]]] } \enumsentence{ *the child [$\epsilon$$_{i}$ [$\epsilon$$_{C}$ [ the apple eaten by $\epsilon$$_{i}$ ]]] } \enumsentence{ *the day [$\epsilon$$_{w}$ [$\epsilon$$_{C}$ [ Amis eaten the apple]]] } \enumsentence{ *the apple [$\epsilon$$_{i}$ [$\epsilon$$_{C}$ [ Dafna eaten $\epsilon$$_{i}$ ]]] } \enumsentence{ *the child [$\epsilon$$_{i}$ [$\epsilon$$_{C}$ [ $\epsilon$$_{i}$ eaten the apple ]]] } These restrictions are built into the {\bf $<$mode$>$} specifications of {\bf S.t}. So non-passive cases of subject extraction have the following feature equation:\\ {\bf S$_{r}$.t:$\langle$mode$\rangle =$ ind/ger/inf} Passive cases of subject extraction have the following feature equation:\\ {\bf S$_{r}$.t:$\langle$mode$\rangle =$ ind/ger/ppart/inf} Finally, all cases of non-subject extraction have the following feature equation:\\ {\bf S$_{r}$.t:$\langle$mode$\rangle =$ ind/inf}\\ \subsection{Restrictive vs. Non-restrictive relatives} The English XTAG grammar does not contain any syntactic distinction between restrictive and non-restrictive relatives because we believe this to be a semantic and/or pragmatic difference. 
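Taken together, the lexical {\bf $<$select-mode$>$} values and the {\bf $<$mode$>$} restrictions on the three extraction cases amount to a pair of tables that must both be satisfied. The Python sketch below is our own summary for exposition only (it is not part of the XTAG system, and it omits the additional {\bf $<$rel-pron$>$} restriction on the null Comp with subject extraction):

```python
# Our own illustrative summary (not XTAG code): lexical <select-mode> values
# for the complementizers, and the <mode> values each extraction type
# tolerates on S_r.t (reduced relatives ride on the ger/ppart values).

SELECT_MODE = {
    "eps_C": {"ind", "inf", "ger", "ppart"},  # null complementizer
    "that":  {"ind"},
    "for":   {"inf"},
}

EXTRACTION_MODES = {
    "subject":         {"ind", "ger", "inf"},           # non-passive clauses
    "subject-passive": {"ind", "ger", "ppart", "inf"},  # passives also allow ppart
    "non-subject":     {"ind", "inf"},
}

def relative_ok(comp, extraction, mode):
    # Both the Comp's <select-mode> and the tree's <mode> restriction must
    # be compatible with the clause's <mode> for unification to succeed.
    return mode in SELECT_MODE[comp] and mode in EXTRACTION_MODES[extraction]
```

For example, a gerundive subject relative with the null Comp ({\it the girl reading the book}) is licensed, while a past-participial reduced relative is licensed only under the passive subject-extraction entry ({\it the book read by Muriel}), matching the restrictions above.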
\section{External syntax} A relative clause can combine with the NP it modifies in at least the following two ways: \enumsentence{\ [the [toy [$\epsilon$$_{i}$ [$\epsilon$$_{C}$ [Dafna likes $\epsilon$$_{i}$ ]]]]] } \label{n-attach-ex} \enumsentence{\ [[the toy] [$\epsilon$$_{i}$ [$\epsilon$$_{C}$ [Dafna likes $\epsilon$$_{i}$ ]]]] } \label{np-attach-ex} Based on cases like (\ex{1}) and (\ex{2}), which are problematic for the structure in (\ref{n-attach-ex}), the structure in (\ref{np-attach-ex}) is adopted. \enumsentence{ [[the man and the woman] [who met on the bus]]} \enumsentence{ [[the man and the woman] [who like each other]]} As it stands, the RC analysis sketched so far will combine in two ways with the Determiner tree shown in Figure~\ref{trans-rel-clause-trees3}, \footnote{The determiner tree shown has the {\bf $<$rel-clause$>$} feature built in. The RC analysis would give two parses in the absence of this feature.} giving us both the possibilities shown in (\ref{n-attach-ex}) and (\ref{np-attach-ex}). In order to block the structure exemplified in (\ref{n-attach-ex}), the feature {\bf $\langle$rel-clause$\rangle$} is used in combination with the following equations. \begin{figure}[htb] \begin{tabular}{cc} \centerline{\psfig{figure=ps/rel_clauses-files/NbetaDnx.ps,height=10.0cm}} \end{tabular} \caption{Determiner tree with {\bf $<$rel-clause$>$} feature: $\beta$Dnx} \label{trans-rel-clause-trees3} \end{figure} On the RC:\\ {\bf NP$_{r}$.b:$\langle$rel-clause$\rangle = +$} On the Determiner tree:\\ {\bf NP$_{f}$.t:$\langle$rel-clause$\rangle = -$} Together, these equations block introduction of the determiner above the relative clause. \section{Other Issues} \subsection{Interaction with adjoined Comps} The XTAG analysis now has two different ways of introducing a complementizer like {\em that} or {\em for}, depending upon whether it occurs in a relative clause or in sentential complementation.
Relative clause complementizers substitute in (using the tree $\alpha$Comp), while sentential complementizers adjoin in (using the tree $\beta$COMPs). Cases like (\ex{1}) where both kinds of complementizers illicitly occur together are blocked. \enumsentence{*the book [$\epsilon$$_{w_{i}}$ [that [that [Muriel wrote $\epsilon$$_{i}$]]]]} This is accomplished by setting the {\bf S$_{r}$.t:$<$comp$>$} feature in the relative clause tree to {\bf nil}. The {\bf S$_{r}$.t:$<$comp$>$} feature of the auxiliary tree that introduces (the sentential complementation) {\em that} is set to {\bf that}. This leads to a feature clash ruling out (\ex{0}). On the other hand, if a sentential-complement-taking verb is adjoined in at S$_{r}$, this feature clash goes away (cf. \ex{1}). \enumsentence{the book [$\epsilon$$_{w_{i}}$ [that Beth thinks [that [Muriel wrote $\epsilon$$_{i}$]]]]} \subsection{Adjunction on PRO} Adjunction on PRO, which would yield the ungrammatical (\ex{1}), is blocked. \enumsentence{*I want [[PRO [who Muriel likes] to read a book]].} This is done by specifying the {\bf $<$case$>$} feature of {\bf NP$_{f}$} to be {\bf nom/acc}. The {\bf $<$case$>$} feature of PRO is {\bf null}. This leads to a feature clash and blocks adjunction of relative clauses on to PRO. \subsection{Adjunct relative clauses} Two types of trees to handle adjunct relative clauses exist in the XTAG grammar: one in which there is {\bf PP$_{w}$} substitution with a null {\bf Comp} built in and one in which there is a null {\bf NP$_{w}$} built in and a {\bf Comp} substitutes in. There is no {\bf NP$_{w}$} substitution tree with a null {\bf Comp} built in. This is because of the contrast between (\ex{1}) and (\ex{2}). \enumsentence{the day [[on whose predecessor] [$\epsilon$$_{C}$ [Muriel left]]]} \enumsentence{*the day [[whose predecessor] [$\epsilon$$_{C}$ [Muriel left]]]} In general, adjunct relatives are not possible with an overt {\bf NP$_{w}$}.
We do not consider (\ex{1}) and (\ex{2}) to be counterexamples to the above statements because we consider {\em where} and {\em when} to be exhaustive {\bf PP}s that head a {\bf PP} initial tree. \enumsentence{the place [where [$\epsilon$$_{C}$ [Muriel wrote her first book]]]} \enumsentence{the time [when [$\epsilon$$_{C}$ [Muriel lived in Bryn Mawr]]]} \subsection{ECM} Cases where {\em for} assigns exceptional case (cf. \ex{1}, \ex{2}) are handled. \enumsentence{a book [$\epsilon$$_{w_{i}}$ [for [Muriel to read $\epsilon$$_{i}$]]]} \enumsentence{the time [$\epsilon$$_{w_{i}}$ [for [Muriel to leave Haverford]]]} The assignment of case by {\em for} is implemented by a combination of the following equations.\\ {\bf Comp.t:$\langle$assign-case$\rangle =$acc}\\ {\bf S$_{r}$.t:$\langle$assign-case$\rangle =$Comp.t:$\langle$assign-case$\rangle$}\\ {\bf S$_{r}$.b:$\langle$assign-case$\rangle =$NP$_{0}$.t:$\langle$case$\rangle$} \section{Cases not handled} \subsection{Partial treatment of free-relatives} Free relatives are only partially handled. All free relatives on non-subject positions and some free relatives on subject positions are handled. The structure assigned to free relatives treats the extracted {\em wh}-NP as the head NP of the relative clause. The remaining relative clause modifies this extracted {\em wh}-NP (cf. \ex{1}-\ex{3}). \enumsentence{what(ever) [$\epsilon$$_{w_{i}}$ [$\epsilon$$_{C}$ [Mary likes $\epsilon$$_{i}$]]]} \enumsentence{where(ever) [$\epsilon$$_{w}$ [$\epsilon$$_{C}$ [Mary lives]]]} \enumsentence{who(ever) [$\epsilon$$_{w_{i}}$ [$\epsilon$$_{C}$ [Muriel thinks [$\epsilon$$_{i}$ likes Mary]]]]} However, simple subject extractions without further embedding are not handled (cf. \ex{1}). \enumsentence{who(ever) [$\epsilon$$_{w_{i}}$ [$\epsilon$$_{C}$ [$\epsilon$$_{i}$ likes Bill]]]} This is because (\ex{-1}) is treated exactly like the ungrammatical (\ex{1}). 
\enumsentence{*the person [ $\epsilon$$_{w_{i}}$ [$\epsilon$$_{C}$ [$\epsilon$$_{i}$ likes Bill]]]} \subsection{Adjunct P-stranding} The following cases of adjunct preposition stranding are not handled (cf. \ex{1}, \ex{2}). \enumsentence{the pen Muriel wrote this letter with} \enumsentence{the street Muriel lives on} Adjuncts are not built into elementary trees in XTAG, so there is no clean way to represent adjunct preposition stranding. A better solution is probably available if we make use of multi-component adjunction. \subsection{Overgeneration} The following ungrammatical sentences are currently accepted by the XTAG grammar. This is because no clean and conceptually attractive way of ruling them out is obvious to us. \subsubsection{{\em how} as {\em wh}-NP} In standard American English, {\em how} is not acceptable as a relative pronoun (cf. \ex{1}). \enumsentence{*the way [how [$\epsilon$$_{C}$ [PRO to solve this problem]]]} However, (\ex{0}) is accepted by the current grammar. The only way to rule (\ex{0}) out would be to introduce a special feature devoted to this purpose. This is unappealing. Further, there exist speech registers/dialects of English where (\ex{0}) is acceptable. \subsubsection{{\em for}-trace effects} (\ex{1}) is ungrammatical, being an instance of a violation of the {\em for}-trace filter of early transformational grammar. \enumsentence{the person [$\epsilon$$_{w_{i}}$ [for [$\epsilon$$_{i}$ to read the book]]]} The XTAG grammar currently accepts (\ex{0}).\footnote{It may be of some interest that (\ex{0}) is acceptable in certain dialects of Belfast English.} \subsubsection{Internal head constraint} Relative clauses in English (and in an overwhelming number of languages) obey a `no internal head' constraint. This constraint is exemplified in the contrast between (\ex{1}) and (\ex{2}). 
\enumsentence{the person [who$_{i}$ [$\epsilon$$_{C}$ Muriel likes $\epsilon$$_{i}$]]} \enumsentence{*the person [[which person]$_{i}$ [$\epsilon$$_{C}$ Muriel likes $\epsilon$$_{i}$]]} We know of no good way to rule (\ex{0}) out, while still ruling (\ex{1}) in. \enumsentence{the person [[whose mother]$_{i}$ [$\epsilon$$_{C}$ Muriel likes $\epsilon$$_{i}$]]} Dayal (1996) suggests that `full' NPs such as {\em which person} and {\em whose mother} are R-expressions while {\em who} and {\em whose} are pronouns. R-expressions, unlike pronouns, are subject to Condition C. (\ex{-2}) is, then, ruled out as a violation of Condition C since {\em the person} and {\em which person} are co-indexed and {\em the person} c-commands {\em which person}. If we accept Dayal's argument, we have a principled reason for allowing overgeneration of relative clauses that violate the internal head constraint, the reason being that the XTAG grammar does generate binding theory violations. \subsubsection{Overt Comp constraint on stacked relatives} Stacked relatives of the kind in (\ex{1}) are handled. \enumsentence{ [[the book [that Bill likes]] [which Mary wrote]]} There is a constraint on stacked relatives: all but the relative clause closest to the head-NP must have either an overt {\bf Comp} or an overt {\bf NP$_{w}$}. Thus (\ex{1}) is ungrammatical. \enumsentence{*[[the book [that Bill likes]] [Mary wrote]]} Again, no good way of handling this constraint is known to us currently. \chapter{Adjunct Clauses} \label{adjunct-cls} \label{sub-conj} Adjunct clauses include subordinate clauses (i.e. those with overt subordinating conjunctions), purpose clauses and participial adjuncts. Subordinating conjunctions each select four trees, allowing them to appear in four different positions relative to the matrix clause. 
The positions are (1) before the matrix clause, (2) after the matrix clause, (3) before the VP, surrounded by two punctuation marks, and (4) after the matrix clause, separated by a punctuation mark. Each of these trees is shown in Figure \ref{sub-conj-trees}. \begin{figure}[htb] \centering \begin{tabular}{cccc} \psfig{figure=ps/sent-adjs-files/Pss.ps,height=2.1in}& \psfig{figure=ps/sent-adjs-files/vxPNs.ps,height=2.1in}& \psfig{figure=ps/sent-adjs-files/puPPpuvx.ps,height=2.1in}& \psfig{figure=ps/sent-adjs-files/spuPs.ps,height=2in}\\ (1) $\beta$Pss & (2) $\beta$vxPNs & (3) $\beta$puPPspuvx & (4) $\beta$spuPs \\ \end{tabular} \caption{Auxiliary Trees for Subordinating Conjunctions} \label{sub-conj-trees} \end{figure} Sentence-initial adjuncts adjoin at the root S of the matrix clause, while sentence-final adjuncts adjoin at a VP node. In this, the XTAG analysis follows the findings on the attachment sites of adjunct clauses for conditional clauses (\cite{iatridou91}) and for infinitival clauses (\cite{Browning87}). One compelling argument is based on Binding Condition C effects. As can be seen from examples (\ex{1})-(\ex{3}) below, no Binding Condition violation occurs when the adjunct is sentence initial, but the subject of the matrix clause clearly governs the adjunct clause when it is in sentence final position and co-indexation of the pronoun with the subject of the adjunct clause is impossible. \enumsentence{Unless she$_i$ hurries, Mary$_i$ will be late for the meeting.} \enumsentence{$\ast$She$_i$ will be late for the meeting unless Mary$_i$ hurries.} \enumsentence{Mary$_i$ will be late for the meeting unless she$_i$ hurries.} We had previously treated subordinating conjunctions as a subclass of {\em conjunction}, but are now assigning them the POS {\em preposition}, as there is such clear overlap between words that function as prepositions (taking NP complements) and subordinating conjunctions (taking clausal complements). 
While there are some prepositions which only take NP complements and some which only take clausal complements, many take both, as shown in examples (\ex{1})-(\ex{4}), and it seems artificial to assign them two different parts-of-speech. \enumsentence{Helen left before the party.} \enumsentence{Helen left before the party began.} \enumsentence{Since the election, Bill has been elated.} \enumsentence{Since winning the election, Bill has been elated.} Each subordinating conjunction selects the values of the {\bf $<$mode$>$} and {\bf $<$comp$>$} features of the subordinated S. The {\bf $<$mode$>$} value constrains the types of clauses the subordinating conjunction may appear with and the {\bf $<$comp$>$} value constrains the complementizers which may adjoin to that clause. For instance, indicative subordinate clauses may appear with the complementizer {\it that} as in (\ex{1}), while participial clauses may not have any complementizers (\ex{2}). \enumsentence{Midge left that car so that Sam could drive to work.} \enumsentence{*Since that seeing the new VW, Midge could think of nothing else.} \subsection{Multi-word Subordinating Conjunctions} We extracted a list of multi-word conjunctions, such as {\it as if}, {\it in order}, and {\it for all (that)}, from \cite{quirk85}. For the most part, the components of the complex are all anchors, as shown in Figure~\ref{conjs}(a). In one case, {\it as ADV as}, there is a great deal of latitude in the choice of adverb, so this is a substitution site (Figure~\ref{conjs}(b)). This multi-anchor treatment is very similar to that proposed for idioms in \cite{AS89}, and to the analysis of light verbs in the XTAG grammar (see section~\ref{nx0lVN1-family}). 
\begin{figure}[htb] \centering \begin{tabular}{ccc} \psfig{figure=ps/sent-adjs-files/vxPARBPs.ps,height=2.7in}& \hspace*{0.5in} & \psfig{figure=ps/sent-adjs-files/vxParbPs.ps,height=2.7in}\\ (a)&\hspace*{0.5in} &(b)\\ \end{tabular} \caption{Trees Anchored by Subordinating Conjunctions: $\beta$vxPARBPs and $\beta$vxParbPs} \label{conjs} \end{figure} \section{``Bare'' Adjunct Clauses} ``Bare'' adjunct clauses do not have an overt subordinating conjunction, but are typically parallel in meaning to clauses with subordinating conjunctions. For this reason, we have elected to handle them using the same trees shown above, but with null anchors. They are selected at the same time and in the same way the {\it PRO} tree is, as they all have {\it PRO} subjects. Three values of {\bf $<$mode$>$} are licensed: {\bf inf} (infinitive), {\bf ger} (gerundive) and {\bf ppart} (past participial).\footnote{We considered allowing bare indicative clauses, such as {\it He died that others may live}, but these were considered too archaic to be worth the additional ambiguity they would add to the grammar.} They interact with complementizers as follows: \begin{itemize} \item Participial complements do not license any complementizers:\footnote{While these sound a bit like extraposed relative clauses (see \cite{kj87}), those move only to the right and adjoin to S; as these clauses are equally grammatical both sentence-initially and sentence-finally, we are analyzing them as adjunct clauses.} \enumsentence{[Destroyed by the fire], the building still stood.} \enumsentence{The fire raged for days [destroying the building].} \enumsentence{$\ast$[That destroyed by the fire], the building still stood.} \begin{figure}[htb] \begin{tabular}{cc} \psfig{figure=ps/sent-adjs-files/destroyed-by-fire.ps,height=2.7in}& \psfig{figure=ps/sent-adjs-files/destroying-the-building.ps,height=2.7in}\\ (a)&(b) \end{tabular} \caption{Sample Participial Adjuncts} \label{destroyed} \end{figure} \item Infinitival adjuncts, 
including purpose clauses, are licensed both with and without the complementizer {\it for}. \enumsentence{Harriet bought a Mustang [to impress Eugene].} \enumsentence{[To impress Harriet], Eugene dyed his hair.} \enumsentence{Traffic stopped [for Harriet to cross the street].} \end{itemize} \section{Discourse Conjunction} The CONJs auxiliary tree is used to handle `discourse' conjunction, as in sentence (\ex{1}). Only the coordinating conjunctions ({\it and, or} and {\it but}) are allowed to adjoin to the roots of matrix sentences. Discourse conjunction with {\it and} is shown in the derived tree in Figure~\ref{seuss-sentence}. \enumsentence{And Truffula trees are what everyone needs! \cite{seuss71}} \begin{figure}[htbp] \centering \hspace{0in} \psfig{figure=ps/sent-adjs-files/disc-conj.ps,height=4.5in} \caption{Example of discourse conjunction, from Seuss' {\it The Lorax}\protect\nocite{seuss71}} \label{seuss-sentence} \end{figure} \chapter{Sentential Subjects and Sentential Complements} \label{scomps-section} In the XTAG grammar, arguments of a lexical item, including subjects, appear in the initial tree anchored by that lexical item. A sentential argument appears as an S node in the appropriate position within an elementary tree anchored by the lexical item that selects it. This is the case for sentential complements of verbs, prepositions and nouns and for sentential subjects. The distribution of complementizers in English is intertwined with the distribution of embedded sentences. A successful analysis of complementizers in English must handle both the cooccurrence restrictions between complementizers and various types of clauses, and the distribution of the clauses themselves, in both subject and complement positions. 
\section{S or VP complements?} Two comparable grammatical formalisms, Generalized Phrase Structure Grammar (GPSG) \cite{gazdar85} and Head-driven Phrase Structure Grammar (HPSG) \cite{PollardSag94:BK}, have rather different treatments of sentential complements (S-comps). They both treat embedded sentences as VP's with subjects, which generates the correct structures but misses the generalization that S's behave similarly in both matrix and embedded environments, and VP's behave quite differently. Neither account has PRO\label{PRO} subjects of infinitival clauses -- they have subjectless VP's instead. GPSG has a complete complementizer system, which appears to cover the same range of data as our analysis. It is not clear what sort of complementizer analysis could be implemented in HPSG. Following the standard GB approach, the English XTAG grammar does not allow VP complements but treats verb-anchored structures without overt subjects as having PRO subjects. Thus, indicative clauses, infinitives and gerunds all have a uniform treatment as embedded clauses using the same trees under this approach. Furthermore, our analysis is able to preserve the selectional and distributional distinction between S's and VP's, in the spirit of GB theories, without having to posit `extra' empty categories.\footnote{i.e. empty complementizers. We do have PRO and NP traces in the grammar.} Consider the alternation between {\it that} and the null complementizer\footnote{Although we will continue to refer to `null' complementizers, in our analysis this is actually the absence of a complementizer.}, shown in sentences~(\ex{1}) and (\ex{2}). \enumsentence{He hopes $\emptyset$ Muriel wins.} \enumsentence{He hopes that Muriel wins.} In GB both {\it Muriel wins} in (\ex{-1}) and {\it that Muriel wins} in (\ex{0}) are CPs even though there is no overt complementizer to head the phrase in (\ex{-1}). 
Our grammar does not distinguish by category label between the phrases that would be labeled in GB as IP and CP. We label both of these phrases S. The difference between these two levels is the presence or absence of the complementizer (or extracted WH constituent), and is represented in our system as a difference in feature values (here, of the {\bf $<$comp$>$} feature), and the presence of the additional structure contributed by the complementizer or extracted constituent. This illustrates an important distinction in XTAG, that between features and node labels. Because we have a sophisticated feature system, we are able to make fine-grained distinctions between nodes with the same label which in another system might have to be realized by using distinguishing node labels. \section{Complementizers and Embedded Clauses in English: The Data} \label{data} Verbs selecting sentential complements (or subjects) place restrictions on their complements, in particular, on the form of the embedded verb phrase.\footnote{Other considerations, such as the relationship between the tense/aspect of the matrix clause and the tense/aspect of a complement clause, are also important but are not addressed in the current English XTAG grammar.} Furthermore, complementizers are constrained to appear with certain types of clauses, again, based primarily on the form of the embedded VP. For example, {\it hope\/} selects both indicative and infinitival complements. With an indicative complement, it may only have {\it that\/} or null as possible complementizers; with an infinitival complement, it may only have a null complementizer. Verbs that allow wh+ complementizers, such as {\it ask}, can take {\it whether} and {\it if} as complementizers. The possible combinations of complementizers and clause types are summarized in Table \ref{facts}. 
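The licensing conditions summarized in Table \ref{facts} can also be read as a lookup table. The following Python sketch is a reading aid only; the names and the encoding are ours, not part of the XTAG grammar, and ``null'' simply stands for the absence of a complementizer:

```python
# Licensed complementizer/clause-type combinations, transcribed from the
# table of complementizer and clause combinations in the text.
ALLOWED = {
    ("indicative",   "subject"):    {"that", "whether"},
    ("indicative",   "complement"): {"that", "whether", "if", "null"},
    ("infinitive",   "subject"):    {"whether", "for", "null"},
    ("infinitive",   "complement"): {"whether", "for", "null"},
    ("subjunctive",  "subject"):    {"that"},
    ("subjunctive",  "complement"): {"that", "null"},
    ("gerundive",    "complement"): {"null"},
    ("base",         "complement"): {"null"},
    ("small clause", "complement"): {"null"},
}

def licensed(clause_type, position, complementizer):
    """True if the complementizer may introduce this clause type in this position."""
    return complementizer in ALLOWED.get((clause_type, position), set())

# "Christy hopes (that) Mike wins." -- both variants are licensed:
assert licensed("indicative", "complement", "that")
assert licensed("indicative", "complement", "null")
# "*Helms won so easily annoyed me." -- no bare indicative sentential subjects:
assert not licensed("indicative", "subject", "null")
```

The asymmetry between subject and complement rows in the first two entries is exactly the {\it that\/}/null alternation discussed in the text.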
As can be seen in Table \ref{facts}, sentential subjects differ from sentential complements in requiring the complementizer {\it that\/} for all indicative and subjunctive clauses. In sentential complements, {\it that\/} often varies freely with a null complementizer, as illustrated in (\ex{1})-(\ex{6}). \enumsentence{Christy hopes that Mike wins.} \enumsentence{Christy hopes Mike wins.} \enumsentence{Dania thinks that Newt is a liar.} \enumsentence{Dania thinks Newt is a liar.} \enumsentence{That Helms won so easily annoyed me.} \enumsentence{$\ast$Helms won so easily annoyed me.} \begin{table}[ht] \centering \begin{tabular}{|l|llllll|} \hline Complementizer:&&that&whether&if&for&null\\ \hline Clause type&&&&&&\\ \hline indicative&subject&Yes&Yes&No&No&No\\ &complement&Yes&Yes&Yes&No&Yes\\ \hline infinitive&subject&No&Yes&No&Yes&Yes\\ &complement&No&Yes&No&Yes&Yes\\ \hline subjunctive&subject&Yes&No&No&No&No\\ &complement&Yes&No&No&No&Yes\\ \hline gerundive\footnotemark\ &complement&No&No&No&No&Yes\\ \hline base & complement & No & No & No & No & Yes \\ \hline small clause & complement & No & No & No & No & Yes \\ \hline \end{tabular} \vspace{.2in} \caption{Summary of Complementizer and Clause Combinations} \label{facts} \end{table} \footnotetext{Most gerundive phrases are treated as NP's. In fact, all gerundive subjects are treated as NP's, and the only gerundive complements which receive a sentential parse are those for which there is no corresponding NP parse. This was done to reduce duplication of parses. See Chapter~\ref{gerunds-chapter} for further discussion of gerunds.\label{gerund-footnote}} Another fact which must be accounted for in the analysis is that in infinitival clauses, the complementizer {\it for} must appear with an overt subject NP, whereas a complementizer-less infinitival clause never has an overt subject, as shown in (\ex{1})-(\ex{4}). 
(See section~\ref{for-complementizer} for more discussion of the case assignment issues relating to this construction.) \enumsentence{To lose would be awful.} \enumsentence{For Penn to lose would be awful.} \enumsentence{$\ast$For to lose would be awful.} \enumsentence{$\ast$Penn to lose would be awful.} In addition, some verbs select {\bf $<$wh$>$=+} complements (either questions or clauses with {\it whether} or {\it if}) \cite{grimshaw90}: \enumsentence{Jesse wondered who left.} \enumsentence{Jesse wondered if Barry left.} \enumsentence{Jesse wondered whether to leave.} \enumsentence{Jesse wondered whether Barry left.} \enumsentence{$\ast$Jesse thought who left.} \enumsentence{$\ast$Jesse thought if Barry left.} \enumsentence{$\ast$Jesse thought whether to leave.} \enumsentence{$\ast$Jesse thought whether Barry left.} \section{Features Required} \label{s-features} As we have seen above, clauses may be {\bf $<$wh$>$=+} or {\bf $<$wh$>$=--}, may have one of several complementizers or no complementizer, and can be of various clause types. The XTAG analysis uses three features to capture these possibilities: {\bf $<$comp$>$} for the variation in complementizers, {\bf$<$wh$>$} for the question vs. non-question alternation and {\bf $<$mode$>$}\footnote{{\bf $<$mode$>$} actually conflates several types of information, in particular verb form and mood.} for clause types. In addition to these three features, the {\bf $<$assign-comp$>$} feature represents complementizer requirements of the embedded verb. More detailed discussion of the {\bf $<$assign-comp$>$} feature appears below in the discussions of sentential subjects and of infinitives. The four features and their possible values are shown in Table \ref{feat}. 
\begin{table}[th] \centering \begin{tabular}{|l|c|} \hline Feature&Values\\ \hline {\bf $<$comp$>$}&that, if, whether, for, rel, nil\\ \hline {\bf$<$mode$>$}&ind, inf, subjnt, ger, base, ppart, nom/prep\\ \hline {\bf$<$assign-comp$>$}&that, if, whether, for, rel, ind\underline{~}nil, inf\underline{~}nil\\ \hline {\bf$<$wh$>$}&+,--\\ \hline \end{tabular} \caption{Summary of Relevant Features} \label{feat} \end{table} \section{Distribution of Complementizers} \label{comp-distr} Like other non-arguments, complementizers anchor an auxiliary tree (shown in Figure \ref{comp-tree}) and adjoin to elementary clausal trees. The auxiliary tree for complementizers is the only alternative to having a complementizer position `built into' every sentential tree. The latter choice would mean having an empty complementizer substitute into every matrix sentence and a complementizerless embedded sentence to fill the substitution node. Our choice follows the XTAG principle that initial trees consist only of the arguments of the anchor\footnote{See section~\ref{compl-adj} for a discussion of the difference between complements and adjuncts in the XTAG grammar.} -- the S tree does not contain a slot for a complementizer, and the $\beta$COMP tree has only one argument, an S with particular features determined by the complementizer. Complementizers select the type of clause to which they adjoin through constraints on the {\bf $<$mode$>$} feature of the S foot node in the tree shown in Figure~\ref{comp-tree}. These features also pass up to the root node, so that they are `visible' to the tree where the embedded sentence adjoins/substitutes. 
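As a concrete illustration of this adjunction step, the following Python sketch pairs a few $\beta$COMPs anchors with the clause types they license (following Table \ref{facts}) and the {\bf $<$wh$>$} value they contribute at the root. The encoding is hypothetical, a reading aid rather than the actual XTAG feature machinery:

```python
# Each complementizer anchor licenses certain <mode> values on the S foot
# and contributes a <wh> value at the root (hypothetical encoding).
COMPS = {
    "that":    ({"ind", "subjnt"}, "-"),
    "whether": ({"ind", "inf"},    "+"),
    "if":      ({"ind"},           "+"),
    "for":     ({"inf"},           "-"),
}

def adjoin(comp, clause):
    """Adjoin comp's auxiliary tree to a clausal tree; None on failure."""
    modes, wh = COMPS[comp]
    if clause["comp"] != "nil":      # no stacked complementizers
        return None
    if clause["mode"] not in modes:  # clause-type restriction on the foot
        return None
    return {"comp": comp, "wh": wh, "mode": clause["mode"]}

muriel_wins = {"mode": "ind", "comp": "nil"}
derived = adjoin("that", muriel_wins)           # "that Muriel wins"
assert derived == {"comp": "that", "wh": "-", "mode": "ind"}
# "*that that Muriel wins": the derived S no longer has <comp>=nil:
assert adjoin("that", derived) is None
# "*that Muriel to win": "that" rejects infinitival clauses:
assert adjoin("that", {"mode": "inf", "comp": "nil"}) is None
```

The `no stacked complementizers' check corresponds to the {\bf $<$comp$>$=nil} requirement on the foot of $\beta$COMPs described below.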
\begin{figure}[hbt] \centering \hspace{0.0in} \psfig{figure=ps/sent-comps-subjs-files/betaCOMPs_that_.ps,height=8.2cm} \caption{Tree $\beta$COMPs, anchored by {\it that}} \label{comp-tree} \end{figure} The grammar handles the following complementizers: {\it that\/}, {\it whether\/}, {\it if\/}, {\it for\/}, and no complementizer, and the clause types: indicative, infinitival, gerundive, past participial, subjunctive and small clause ({\bf nom/prep}). The {\bf $<$comp$>$} feature in a clausal tree reflects the value of the complementizer if one has adjoined to the clause. The {\bf $<$comp$>$} and {\bf $<$wh$>$} features receive their root node values from the particular complementizer which anchors the tree. The $\beta$COMPs tree adjoins to an S node with the feature {\bf $<$comp$>$=nil}; this feature indicates that the tree does not already {\bf have} a complementizer adjoined to it.\footnote{ Because root S's cannot have complementizers, the parser checks that the root S has {\bf $<$comp$>$=nil} at the end of the derivation, when the S is also checked for a tensed verb.} We ensure that there are no stacked complementizers by requiring the foot node of $\beta$COMPs to have {\bf $<$comp$>$=nil}. \section{Case assignment, {\it for\/} and the two {\it to\/}'s} \label{for-complementizer} The {\bf $<$assign-comp$>$} feature is used to represent the requirements of particular types of clauses for particular complementizers. So while the {\bf $<$comp$>$} feature represents constraints originating from the VP dominating the clause, the {\bf $<$assign-comp$>$} feature represents constraints originating from the highest VP in the clause. {\bf $<$assign-comp$>$} is used to control the appearance of subjects in infinitival clauses (see discussion of ECM constructions in \ref{ecm-verbs}), to block bare indicative sentential subjects (bare infinitival subjects are allowed), and to block `that-trace' violations. 
Examples (\ex{2}), (\ex{3}) and (\ex{4}) show that an accusative case subject is obligatory in an infinitive clause if the complementizer {\it for\/} is present. The infinitive clause in (\ex{1}) is analyzed in the English XTAG grammar as having a PRO subject. \enumsentence{Christy wants to pass the exam.} \enumsentence{Mike wants for her to pass the exam.} \enumsentence{$\ast$Mike wants for she to pass the exam.} \enumsentence{$\ast$Christy wants for to pass the exam.} The {\it for-to\/} construction is particularly illustrative of the difficulties and benefits of using a lexicalized grammar. It is commonly accepted that {\it for\/} behaves as a case-assigning complementizer in this construction, assigning accusative case to the `subject' of the clause since the infinitival verb does not assign case to its subject position. However, in our featurized grammar, the absence of a feature licenses anything, so we must have overt null case assigned by infinitives to ensure the correct distribution of PRO subjects. (See section~\ref{case-assignment} for more discussion of case assignment.) This null case assignment clashes with accusative case assignment if we simply add {\it for\/} as a standard complementizer, since NP's (including PRO) are drawn from the lexicon already marked for case. Thus, we must use the {\bf $<$assign-comp$>$} feature to pass information about the verb up to the root of the embedded sentence. To capture these facts, two infinitive {\it to}'s are posited. One infinitive {\it to\/} has {\bf $<$assign-case$>$=none}, which forces a PRO subject, and {\bf $<$assign-comp$>$=inf\_nil}, which prevents {\it for\/} from adjoining. The other infinitive {\it to\/} has no value at all for {\bf $<$assign-case$>$} and has {\bf $<$assign-comp$>$=for/ecm}, so that it can only occur either with the complementizer {\it for\/} or with ECM constructions. 
In those instances either {\it for} or the ECM verb supplies the {\bf $<$assign-case$>$} value, assigning accusative case to the overt subject. \section{Sentential Complements of Verbs} \label{sent-complements} {\bf Tree families}: Tnx0Vs1, Tnx0Vnx1s2, TItVnx1s2, TItVpnx1s2, TItVad1s2. Verbs that select sentential complements restrict the {\bf $<$mode$>$} and {\bf $<$comp$>$} values for those complements. Since with very few exceptions\footnote{For example, long distance extraction is not possible from the S complement in it-clefts.} long distance extraction is possible from sentential complements, the S complement nodes are adjunction nodes. Figure \ref{think} shows the declarative tree for sentential complements, anchored by {\it think}. \begin{figure}[hbt] \centering \hspace{0.0in} \psfig{figure=ps/sent-comps-subjs-files/think.ps,height=1.7in} \caption{Sentential complement tree: $\beta$nx0Vs1} \label{think} \label{2;1,10} \end{figure} The need for an adjunction node rather than a substitution node at S$_{1}$ may not be obvious until one considers the derivation of sentences with long distance extractions. For example, the declarative in (\ex{1}) is derived by adjoining the tree in Figure~\ref{aard-emu}(b) to the S$_{1}$ node of the tree in Figure~\ref{aard-emu}(a). Since there are no bottom features on S$_{1}$, the same final result could have been achieved with a substitution node at S$_{1}$. \enumsentence{The emu thinks that the aardvark smells terrible.} \begin{figure}[htb] \centering \begin{tabular}{ccc} \psfig{figure=ps/sent-comps-subjs-files/aard-smells.ps,height=2.1in}& \hspace{0.3in}& \psfig{figure=ps/sent-comps-subjs-files/emu-thinks.ps,height=2.1in}\\ (a)&&(b)\\ \end{tabular} \caption{Trees for {\it The emu thinks that the aardvark smells terrible.}} \label{aard-emu} \label{1;4,4} \end{figure} However, adjunction is crucial in deriving sentences with long distance extraction, as in sentences (\ex{1}) and (\ex{2}). 
\enumsentence{Who does the emu think smells terrible?} \enumsentence{Who did the elephant think the panda heard the emu say smells terrible?} The example in (\ex{-1}) is derived from the trees for {\it who smells terrible?} shown in Figure~\ref{who-smells} and {\it the emu thinks} S shown in Figure~\ref{aard-emu}(b), by adjoining the latter at the S$_r$ node of the former.\footnote{See Chapter~\ref{auxiliaries} for a discussion of do-support.} This process is recursive, allowing sentences like (\ex{0}). Such a representation has been shown by \cite{kj85} to be well-suited for describing unbounded dependencies. \begin{figure}[thb] \centering \hspace{0.0in} \psfig{figure=ps/sent-comps-subjs-files/who-smells.ps,height=2.3in} \caption{Tree for {\it Who smells terrible?}} \label{who-smells} \label{1;4,14} \end{figure} In English, a complementizer may not appear on a complement with an extracted subject (the `that-trace' configuration). This phenomenon is illustrated in (\ex{1})-(\ex{3}): \enumsentence{Which animal did the giraffe say that he likes?} \enumsentence{$\ast$Which animal did the giraffe say that likes him?} \enumsentence{Which animal did the giraffe say likes him?} These sentences are derived in XTAG by adjoining the tree for {\it did the giraffe say} S at the S$_r$ node of the tree for either {\it which animal likes him} (to yield sentence~(\ex{0})) or {\it which animal he likes} (to yield sentence~(\ex{-2})). That-trace violations are blocked by the presence of the {\bf $<$assign-comp$>$=inf\underline{~}nil/ind\underline{~}nil/ecm} feature on the bottom of the S$_r$ node of trees with extracted subjects (W0), i.e. those used in sentences such as (\ex{-1}) and (\ex{0}). If a complementizer tree, $\beta$COMPs, adjoins to a subject extraction tree at $S_r$, its {\bf $<$assign-comp$>$ = that/whether/for/if} feature will clash and the derivation will fail. 
If there is no complementizer, there is no feature clash, and this will permit the derivation of sentences like (\ex{0}), or of ECM constructions, in which case the ECM verb will have {\bf $<$assign-comp$>$=ecm} (see section~\ref{ecm-verbs} for more discussion of the ECM case). Complementizers may adjoin normally to object extraction trees such as those used in sentence~(\ex{-2}), and so object extraction trees have no value for the {\bf $<$assign-comp$>$} feature. In the case of indirect questions, subjacency follows from the principle that a given tree cannot contain more than one wh-element. Extraction out of an indirect question is ruled out because a sentence like: \enumsentence{$\ast$ Who$_{i}$ do you wonder who$_{j}$ e$_{j}$ loves e$_{i}$ ?} \noindent would have to be derived from the adjunction of {\it do you wonder} into {\it who$_{i}$ who$_{j}$ e$_{j}$ loves e$_{i}$}, which is an ill-formed elementary tree.\footnote{This does not mean that elementary trees with more than one gap should be ruled out across the grammar. Such trees might be required for dealing with parasitic gaps or gaps in coordinated structures.} \subsection{Exceptional Case Marking Verbs} \label{ecm-verbs} {\bf Tree family}: TXnx0Vs1 Exceptional Case Marking verbs are those which assign accusative case to the subject of the sentential complement. This is in contrast to verbs in the Tnx0Vnx1s2 family (section~\ref{nx0Vnx1s2-family}), which assign accusative case to an NP which is not part of the sentential complement. The subject of an ECM infinitive complement is assigned accusative case in a manner analogous to that of a subject in a {\it for-to\/} construction, as described in section~\ref{for-complementizer}. As in the {\it for-to\/} case, the ECM verb assigns accusative case into the subject of the lower infinitive, and so the infinitive uses the {\it to} which has no value for {\bf $<$assign-case$>$} and has {\bf $<$assign-comp$>$=for/ecm}. 
The ECM verb has {\bf $<$assign-comp$>$=ecm} and {\bf $<$assign-case$>$=acc} on its foot. The former allows the {\bf $<$assign-comp$>$} features of the ECM verb and the {\it to} tree to unify, and so be used together, and the latter assigns the accusative case to the lower subject. Figure~\ref{expects-decl} shows the declarative tree for the TXnx0Vs1 family, in this case anchored by {\it expects}. Figure~\ref{van-expects} shows a parse for {\it Van expects Bob to talk}. \begin{figure}[hbt] \centering \hspace{0.0in} \psfig{figure=ps/sent-comps-subjs-files/expects.ps,height=3.3in} \caption{ECM tree: $\beta$Xnx0Vs1} \label{expects-decl} \label{3;1,15} \end{figure} \begin{figure}[hbt] \centering \hspace{0.0in} \psfig{figure=ps/sent-comps-subjs-files/van-expects.ps,height=3.3in} \caption{Sample ECM parse} \label{van-expects} \end{figure} The ECM and {\it for-to\/} cases are analogous in how they are used together with the correct infinitival {\it to} to assign accusative case to the subject of the lower infinitive. However, they are different in that {\it for} is blocked along with other complementizers in subject extraction contexts, as discussed in section~\ref{sent-complements}, as in (\ex{1}), while subject extraction is compatible with ECM cases, as in (\ex{2}). \enumsentence{$\ast$What child did the giraffe ask for to leave?} \enumsentence{Who did Bill expect to eat beans?} Sentence (\ex{-1}) is ruled out by the {\bf $<$assign-comp$>$= inf\underline{~}nil/ind\underline{~}nil/ecm} feature on the subject extraction tree for {\it ask}, since the {\bf $<$assign-comp$>$=for} feature from the {\it for} tree will fail to unify. However, (\ex{0}) will be allowed, since the {\bf $<$assign-comp$>$=ecm} feature on the foot of the {\it expect} tree will unify with the subject extraction tree. The use of features allows the ECM and {\it for-to\/} constructions to act the same for exceptional case assignment, while also being distinguished for that-trace violations. 
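The interaction between the two infinitival {\it to\/}'s, {\it for\/}, and ECM verbs can be summarized in a short unification sketch. Everything below is a hypothetical re-encoding (disjunctive feature values as sets of atoms, unification as set intersection); the real grammar states these constraints as feature equations on tree nodes:

```python
def unify(a, b):
    """Unify two feature values: sets of atoms, None = unconstrained.
    Returns 'FAIL' when the intersection is empty (a feature clash)."""
    if a is None:
        return b
    if b is None:
        return a
    meet = a & b
    return meet if meet else "FAIL"

# The two infinitival "to" entries described above:
TO_PLAIN = {"assign-case": {"none"}, "assign-comp": {"inf_nil"}}   # forces PRO
TO_CASED = {"assign-case": None,     "assign-comp": {"for", "ecm"}}

# What "for" and an ECM verb's foot node contribute:
FOR_COMP = {"assign-case": {"acc"}, "assign-comp": {"for"}}
ECM_FOOT = {"assign-case": {"acc"}, "assign-comp": {"ecm"}}

def compatible(to_entry, licenser):
    """True if both shared features of the two entries unify."""
    return all(
        unify(to_entry[f], licenser[f]) != "FAIL"
        for f in ("assign-case", "assign-comp")
    )

# "*Christy wants for to pass the exam": the PRO-forcing "to" rejects "for".
assert not compatible(TO_PLAIN, FOR_COMP)
# "Mike wants for her to pass the exam": the case-assigning "to" accepts
# "for", which supplies accusative case to the overt subject.
assert compatible(TO_CASED, FOR_COMP)
# "Van expects Bob to talk": the same "to" works with an ECM verb.
assert compatible(TO_CASED, ECM_FOOT)
```

The point of the sketch is that one lexical entry for {\it to\/} clashes on {\bf $<$assign-comp$>$} with both licensers, while the other leaves {\bf $<$assign-case$>$} unconstrained so that {\it for\/} or the ECM verb can supply it.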
Verbs that take bare infinitives, as in (\ex{1}), are also treated as ECM verbs, the only difference being that their foot feature has {\bf $<$mode$>$=base} instead of {\bf $<$mode$>$=inf}. Since the complement does not have {\it to}, there is no question of using the {\it to} tree for allowing accusative case to be assigned. Instead, verbs with {\bf $<$mode$>$=base} allow either accusative or nominative case to be assigned to the subject, and the ECM bare infinitive tree forces it to be accusative: the {\bf $<$assign-case$>$=acc} value at its foot node unifies with the {\bf $<$assign-case$>$=nom/acc} value of the bare infinitive clause. \enumsentence{Bob sees the harmonica fall.} The trees in the TXnx0Vs1 family are generally parallel to those in the Tnx0Vs1 family, except for the {\bf $<$assign-case$>$} and {\bf $<$assign-comp$>$} values on the foot nodes. However, the TXnx0Vs1 family also includes a tree for the passive, which of course is not included in the Tnx0Vs1 family. Unlike all the other trees in the TXnx0Vs1 family, the passive tree is not rooted in S, and is instead a VP auxiliary tree. Since the subject of the infinitive is not thematically selected by the ECM verb, it is not part of the ECM verb's tree, and so it cannot be part of the passive tree. Therefore, the passive acts as a raising verb (see section~\ref{sm-clause-xtag-analysis}). For example, to derive (\ex{2}), the tree in Figure~\ref{expects-passive} would adjoin into a derivation for {\it Bob to talk} at the VP node (and the {\bf $<$mode$>$=passive} feature, not shown, forces the auxiliary to adjoin in, as for other passives, as described in chapter~\ref{passives}). 
\enumsentence{Van expects Bob to talk.} \enumsentence{Bob was expected to talk.} \begin{figure}[hbt] \centering \hspace{0.0in} \psfig{figure=ps/sent-comps-subjs-files/expects-passive.ps,height=1.5in} \caption{ECM passive} \label{expects-passive} \label{3;2,15} \end{figure} It has long been noted that passives of both full and bare infinitive ECM constructions are full infinitives, as in (\ex{0}) and (\ex{2}). \enumsentence{Bob sees the harmonica fall.} \enumsentence{The harmonica was seen to fall.} \enumsentence{$\ast$The harmonica was seen fall.} Under the TAG ECM analysis, this fact is easy to implement. The foot node of the ECM passive tree is simply set to have {\bf $<$mode$>$=inf}, which prevents the derivation of (\ex{0}). For all the other trees in the family, the foot nodes are set to have {\bf $<$mode$>$=base} or {\bf $<$mode$>$=inf}, depending on whether the complement is a bare infinitive or not. These foot nodes are all S nodes. The VP foot node of the passive tree, however, has {\bf $<$mode$>$=inf} regardless. \section{Sentential Subjects} \label{sent-subjs} {\bf Tree families}: Ts0Vnx1, Ts0Ax1, Ts0N1, Ts0Pnx1, Ts0ARBPnx1, Ts0PPnx1, Ts0PNaPnx1, Ts0V, Ts0Vtonx1, Ts0NPnx1, Ts0APnx1, Ts0A1s1. Verbs that select sentential subjects anchor trees that have an S node in the subject position rather than an NP node. Since extraction is not possible from sentential subjects, they are implemented as substitution nodes in the English XTAG grammar. Restrictions on sentential subjects, such as the required {\it that} complementizer for indicatives, are enforced by feature values specified on the S substitution node in the elementary tree. Sentential subjects behave essentially like sentential complements, with a few exceptions. In general, all verbs which license sentential subjects license the same set of clause types. 
Thus, unlike sentential complement verbs which select particular complementizers and clause types, the matrix verbs licensing sentential subjects merely license the S argument. Information about the complementizer or embedded verb is located in the tree features, rather than in the features of each verb selecting that tree. Thus, all sentential subject trees have the same {\bf $<$mode$>$}, {\bf $<$comp$>$} and {\bf $<$assign-comp$>$} values shown in Figure~\ref{comparison}(a). \begin{figure}[htb] \centering \begin{tabular}{ccc} \psfig{figure=ps/sent-comps-subjs-files/perplexes-feats.ps,height=2.2in}& \hspace{0.5in}& \psfig{figure=ps/sent-comps-subjs-files/think-feats.ps,height=2.6in}\\ (a)&&(b)\\ \end{tabular} \caption{Comparison of {\bf $<$assign-comp$>$} values for sentential subjects: $\alpha$s0Vnx1 (a) and sentential complements: $\beta$nx0Vs1 (b)} \label{comparison} \label{1;1,16} \end{figure} The major difference in clause types licensed by S-subjs and S-comps is that indicative S-subjs obligatorily have a complementizer (see examples in section~\ref{data}). The {\bf $<$assign-comp$>$} feature is used here to license a null complementizer for infinitival but not indicative clauses. {\bf $<$assign-comp$>$} has the same possible values as {\bf $<$comp$>$}, with the exception that the {\bf nil} value is `split' into {\bf ind\_nil} and {\bf inf\_nil}. This difference in feature values is illustrated in Figure~\ref{comparison}. Another minor difference is that {\it whether\/} but not {\it if\/} is grammatical with S-subjs.\footnote{Some speakers also find {\it if\/} as a complementizer only marginally grammatical in S-comps.} Thus, {\it if} is not among the {\bf $<$comp$>$} values allowed in S-subjs. The final difference from S-comps is that there are no S-subjs with {\bf $<$mode$>$=ger}. As noted in footnote~\ref{gerund-footnote} of this chapter, gerundive complements are only allowed when there is no corresponding NP parse. 
In the case of gerundive S-subjs, there is always an NP parse available. \section{Nouns and Prepositions taking Sentential Complements} \label{NPA} {\bf Trees}: $\alpha$NXNs, $\beta$vxPs, $\beta$Pss, $\beta$nxPs, Tnx0N1s1, Tnx0A1s1. \begin{figure}[thb] \centering \begin{tabular}{ccc} \psfig{figure=ps/sent-comps-subjs-files/betaPss.ps,height=5.6cm}& \hspace{0.3in}& \psfig{figure=ps/sent-comps-subjs-files/alphaNXNs.ps,height=4cm} \\ (a) && (b)\\ \end{tabular} \caption{Sample trees for preposition: $\beta$Pss (a) and noun: $\alpha$NXNs (b) taking sentential complements} \label{nounprep} \end{figure} Prepositions and nouns can also select sentential complements, using the trees listed above. These trees use the {\bf $<$mode$>$} and {\bf $<$comp$>$} features as shown in Figure~\ref{nounprep}. For example, the noun {\it claim} takes only indicative complements with {\it that}, while the preposition {\it with} takes small clause complements, as seen in sentences (\ex{1})-(\ex{4}). \enumsentence{Beth's claim that Clove was a smart dog....} \enumsentence{$\ast$Beth's claim that Clove a smart dog....} \enumsentence{Dania wasn't getting any sleep with Doug sick.} \enumsentence{$\ast$Dania wasn't getting any sleep with Doug was sick.} \section{PRO control} \label{PRO-control} \subsection{Types of control} In the literature on control, two types are often distinguished: obligatory control, as in sentences~(\ex{1}), (\ex{2}), (\ex{3}), and (\ex{4}) and optional control, as in sentence~(\ex{5}). \enumsentence{Srini$_i$ promised Mickey$_{i}$ [PRO$_i$ to leave].} \enumsentence{Srini persuaded Mickey$_{i}$ [PRO$_i$ to leave].} \enumsentence{Srini$_{i}$ wanted [PRO$_i$ to leave].} \enumsentence{Christy$_{i}$ left the party early [PRO$_i$ to go to the airport].} \enumsentence{[PRO$_{arb/i}$ to dance] is important for Bill$_{i}$.} At present, an analysis for obligatory control into complement clauses (as in sentences~(\ex{-4}), (\ex{-3}), and (\ex{-2})) has been implemented. 
An analysis for cases of obligatory control into adjunct clauses and optional control exists and can be found in \cite{bhatt94}. \subsection{A feature-based analysis of PRO control} The analysis for obligatory control involves co-indexation of the control feature of the NP anchored by PRO to the control feature of the controller. A feature equation in the tree anchored by the control verb co-indexes the control feature of the controlling NP with the foot node of the tree. All sentential trees have a co-indexed control feature from the root S to the subject NP. When the tree containing the controller adjoins onto the complement clause tree containing the PRO, the features of the foot node of the auxiliary tree are unified with the bottom features of the root node of the complement clause tree containing the PRO. This leads to the control feature of the controller being co-indexed with the control feature of the PRO. Depending on the choice of the controlling verb, the control propagation paths in the auxiliary trees are different. In the case of subject control (as in sentence~(\ex{-4})), the subject NP and the foot node have co-indexed control features, while for object control (e.g. sentence~(\ex{-3})), the object NP and the foot node are co-indexed for control. Among verbs that belong to the Tnx0Vnx1s2 family, i.e. verbs that take an NP object and a clausal complement, subject-control verbs form a distinct minority, {\em promise} being the only commonly used verb in this class. Consider the derivation of sentence~(\ex{-3}). The auxiliary tree for {\em persuade}, shown in Figure \ref{persuade-tree}, has the following feature equation~(\ex{1}). \enumsentence{ NP$_{1}$:{\bf $<$control$>$} = S$_{2}$.t:{\bf $<$control$>$} } The auxiliary tree adjoins into the tree for {\em leave}, shown in Figure \ref{leave-tree}, which has the following feature equation~(\ex{1}). 
\enumsentence{S$_{r}$.b:{\bf $<$control$>$} = NP$_{0}$.t:{\bf $<$control$>$}} Since the adjunction takes place at the root node (S$_{r}$) of the {\em leave} tree, after unification, NP$_{1}$ of the {\em persuade} tree and NP$_{0}$ of the {\em leave} tree share a control feature. The resulting derived and derivation trees are shown in Figures \ref{derived-tree} and \ref{derivation-tree}. \begin{figure}[hbt] \centering \hspace{0.0in} \psfig{figure=ps/sent-comps-subjs-files/betanx0Vnx1s2_persuaded_.ps,height=5.2cm} \caption{Tree for {\it persuaded}} \label{persuade-tree} \end{figure} \begin{figure}[hbt] \centering \hspace{0.0in} \psfig{figure=ps/sent-comps-subjs-files/alphanx0V_leave_.ps,height=5.2cm} \caption{Tree for {\it leave}} \label{leave-tree} \end{figure} \begin{figure}[hbt] \centering \hspace{0.0in} \psfig{figure=ps/sent-comps-subjs-files/persuaded-derv.ps,height=8.2cm} \caption{Derived tree for {\it Srini persuaded Mickey to leave}} \label{derived-tree} \end{figure} \begin{figure}[hbt] \centering \hspace{0.0in} \psfig{figure=ps/sent-comps-subjs-files/persuaded-derivation.ps,height=4.2cm} \caption{Derivation tree for {\it Srini persuaded Mickey to leave}} \label{derivation-tree} \end{figure} \subsection{The nature of the control feature} The control feature does not have any value and is used only for co-indexing purposes. If two NPs have their control features co-indexed, it means that they are participating in a relationship of control; the c-commanding NP controls the c-commanded NP. \subsection{Long-distance transmission of control features} Cases involving embedded infinitival complements with PRO subjects such as (\ex{1}) can also be handled. \enumsentence{ John$_i$ wants [PRO$_i$ to want [PRO$_i$ to dance]].} The control feature of `John' and the two PRO's all get co-indexed. This treatment might appear to lead to a problem. 
Consider (\ex{1}): \enumsentence{ John$_{*i}$ wants [Mary$_i$ to want [PRO$_i$ to dance]].} If both the `want' trees have the control feature of their subject co-indexed to their foot nodes, we would have a situation where the PRO is co-indexed for the control feature with `John' as well as with `Mary'. Note that the higher `want' in (\ex{-1}) is {\em want$_{ECM}$} (it assigns case to the subject of the lower clause), while the lower `want' in (\ex{-1}) is not. Subject control is restricted to non-ECM (Exceptional Case Marking) verbs that take infinitival complements. Since the two `want's in (\ex{-1}) are different with respect to their control (and other) properties, the control feature of PRO stops at `Mary' and is not transmitted to the higher clause. \subsection{Locality constraints on control} PRO control obeys locality constraints. The controller for PRO has to be in the immediately higher clause. Consider the ungrammatical sentence~(\ex{1}) ((\ex{1}) is ungrammatical only with the co-indexing indicated below). \enumsentence{* John$_i$ wants [PRO$_i$ to persuade Mary$_i$ [PRO$_i$ to dance]]} However, such a derivation is ruled out automatically by the mechanisms of a TAG derivation and feature unification. Suppose it was possible to first compose the {\em want} tree with the {\em dance} tree and then insert the {\em persuade} tree. (This is not possible in the XTAG grammar because of the convention that auxiliary trees have NA (Null Adjunction) constraints on their foot nodes.) Even then, at the end of the derivation the control feature of the subject of {\em want} would end up co-indexed with the PRO subject of {\em persuade} and the control feature of {\em Mary} would be co-indexed with the PRO subject of {\em dance} as desired. There is no way to generate the illegal co-indexing in (\ex{-1}). Thus the locality constraints on PRO control fall out from the mechanics of TAG derivation and feature unification. 
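The way co-indexation falls out of feature unification can be modeled with a union-find structure: each unification of two {\bf $<$control$>$} features merges their equivalence classes, and two NPs are co-indexed exactly when they land in the same class. This is an illustrative sketch with assumed node names, not the XTAG implementation; the three unifications below mirror the {\em persuade}/{\em leave} derivation described earlier.

```python
class UnionFind:
    """Minimal union-find; each class models one shared (co-indexed) feature."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

uf = UnionFind()

# persuade tree: NP_1:<control> = S_2.t:<control>  (object control)
uf.union("NP_Mickey", "persuade_foot")
# leave tree: S_r.b:<control> = NP_0.t:<control>
uf.union("leave_root", "NP_PRO")
# Adjunction at S_r unifies the foot of 'persuade' with the root of 'leave':
uf.union("persuade_foot", "leave_root")

# The object and PRO are now co-indexed; the subject 'Srini' is not.
assert uf.find("NP_Mickey") == uf.find("NP_PRO")
assert uf.find("NP_Srini") != uf.find("NP_PRO")
print("Mickey controls PRO")
```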
\section{Reported speech} Reported speech is handled in the XTAG grammar by having the reporting clause adjoin into the quote. Thus, the reporting clause is an auxiliary tree, anchored by the reporting verb. See \cite{doran-diss} for details of the analysis. There are trees in both the Tnx0Vs1 and Tnx0Vnx1s2 families to handle reporting clauses which precede, follow and come in the middle of the quote. \chapter{The English Copula, Raising Verbs, and Small Clauses} \label{small-clauses} The English copula, raising verbs, and small clauses are all handled in XTAG by a common analysis based on sentential clauses headed by non-verbal elements. Since there are a number of different analyses in the literature of how these phenomena are related (or not), we will present first the data for all three phenomena, then various analyses from the literature, finishing with the analysis used in the English XTAG grammar.\footnote{This chapter is strongly based on \cite{heycock91}. Sections \ref{sm-clause-data} and \ref{sm-clause-other-analyses} are greatly condensed from her paper, while the description of the XTAG analysis in section \ref{sm-clause-xtag-analysis} is an updated and expanded version.} \section{Usages of the copula, raising verbs, and small clauses} \label{sm-clause-data} \subsection{Copula} \label{copula-data} The verb {\it be} as used in sentences ({\ex{1}})-({\ex{3}}) is often referred to as the \xtagdef{copula}. It can be followed by a noun, adjective, or prepositional phrase. \enumsentence{Carl is a jerk .} \enumsentence{Carl is upset .} \enumsentence{Carl is in a foul mood .} Although the copula may look like a main verb at first glance, its syntactic behavior patterns with the auxiliary verbs rather than with main verbs. In particular, \begin{itemize} \item Copula {\it be} inverts with the subject. 
\enumsentence{is Beth writing her dissertation ?\\ is Beth upset ?\\ $\ast$wrote Beth her dissertation ?} \item Copula {\it be} occurs to the left of the negative marker {\it not}. \enumsentence{Beth is not writing her dissertation .\\ Beth is not upset .\\ $\ast$Beth wrote not her dissertation .} \item Copula {\it be} can contract with the negative marker {\it not}. \enumsentence{Beth isn't writing her dissertation .\\ Beth isn't upset .\\ $\ast$Beth wroten't her dissertation .} \item Copula {\it be} can contract with pronominal subjects. \enumsentence{She's writing her dissertation .\\ She's upset .\\ $\ast$She'ote her dissertation .} \item Copula {\it be} occurs to the left of adverbs in the unmarked order. \enumsentence{Beth is often writing her dissertation .\\ Beth is often upset .\\ $\ast$Beth wrote often her dissertation .} \end{itemize} Unlike all the other auxiliaries, however, copula {\it be} is not followed by a verbal category (by definition) and therefore must be the rightmost verb. In this respect, it is like a main verb. The semantic behavior of the copula is also unlike main verbs. In particular, any semantic restrictions or roles placed on the subject come from the complement phrase (NP, AP, PP) rather than from the verb, as illustrated in sentences ({\ex{1}}) and ({\ex{2}}). Because the complement phrases predicate over the subject, these types of sentences are often called \xtagdef{predicative} sentences. \enumsentence{The bartender was garrulous .} \enumsentence{?The cliff was garrulous .} \subsection{Raising Verbs} \label{raising-verbs} Raising verbs are the class of verbs that share with the copula the property that the complement, rather than the verb, places semantic constraints on the subject. 
\enumsentence{Carl seems a jerk .\\ Carl seems upset .\\ Carl seems in a foul mood .} \enumsentence{Carl appears a jerk .\\ Carl appears upset .\\ Carl appears in a foul mood .} The raising verbs are similar to auxiliaries in that they order with other verbs, but they are unique in that they can appear to the left of the infinitive, as seen in the sentences in ({\ex{1}}). They cannot, however, invert or contract like other auxiliaries ({\ex{2}}), and they appear to the right of adverbs ({\ex{3}}). \enumsentence{Carl seems to be a jerk .\\ Carl seems to be upset .\\ Carl seems to be in a foul mood .} \enumsentence{$\ast$seems Carl to be a jerk ?\\ $\ast$Carl seemn't to be upset .\\ $\ast$Carl`ems to be in a foul mood .} \enumsentence{Carl often seems to be upset .\\ $\ast$Carl seems often to be upset .} \subsection{Small Clauses} One way of describing small clauses is as predicative sentences without the copula. Since matrix clauses require tense, these clausal structures appear only as embedded sentences. They occur as complements of certain verbs, each of which may allow certain types of small clauses but not others, depending on its lexical idiosyncrasies. \enumsentence{I consider [Carl a jerk] .\\ I consider [Carl upset] .\\ ?I consider [Carl in a foul mood] .} \enumsentence{I prefer [Carl in a foul mood] .\\ ??I prefer [Carl upset] .} \subsection{Raising Adjectives} \label{raising-adjs} Raising adjectives are the class of adjectives that share with the copula and raising verbs the property that the complement, rather than the adjective itself, places semantic constraints on the subject. They appear with the copula in a matrix clause, as in ({\ex{1}}). However, in other cases, such as that of small clauses ({\ex{2}}), they do not have to appear with the copula. 
\enumsentence{Carl is likely to be a jerk .\\ Carl is likely to be upset .\\ Carl is likely to be in a foul mood .\\ Carl is likely to perjure himself .} \enumsentence{I consider Carl likely to perjure himself .} \section{Various Analyses} \label{sm-clause-other-analyses} \subsection{Main Verb Raising to INFL + Small Clause} In \cite{pollack89} the copula is generated as the head of a VP, like any main verb such as {\it sing} or {\it buy}. Unlike all other main verbs\footnote{with the exception of {\it have} in British English. See footnote~\ref{have-footnote} in Chapter~\ref{auxiliaries}.}, however, {\it be} moves out of the VP and into Infl in a tensed sentence. This analysis aims to account for the behavior of {\it be} as an auxiliary in terms of inversion, negative placement and adverb placement, while retaining a sentential structure in which {\it be} heads the main VP at D-Structure and can thus be the only verb in the clause. Pollock claims that the predicative phrase is not an argument of {\it be}, which instead he assumes to take a small clause complement, consisting of a node dominating an NP and a predicative AP, NP or PP. The subject NP of the small clause then raises to become the subject of the sentence. This accounts for the failure of the copula to impose any selectional restrictions on the subject. Raising verbs such as {\it seem} and {\it appear}, presumably, take the same type of small clause complement. \subsection{Auxiliary + Null Copula} \label{la} In \cite{lapointe80} the copula is treated as an auxiliary verb that takes as its complement a VP headed by a passive verb, a present participle, or a null verb (the true copula). This verb may then take AP, NP or PP complements. The author points out that there are many languages that have been analyzed as having a null copula, but that English has the peculiarity that its null copula requires the co-presence of the auxiliary {\it be}. 
\subsection{Auxiliary + Predicative Phrase} \label{gpsg} In GPSG (\cite{gazdar85}, \cite{sag85}) the copula is treated as an auxiliary verb that takes an X$^{2}$ category with a + value for the head feature [PRD] (predicative). AP, NP, PP and VP can all be [+PRD], but a Feature Co-occurrence Restriction guarantees that a [+PRD] VP will be headed by a verb that is either passive or a present participle. GPSG follows \cite{chomsky70} in adopting the binary valued features [V] and [N] for decomposing the verb, noun, adjective and preposition categories. In that analysis, verbs are [+V,$-$N], nouns are [$-$V,+N], adjectives [+V,+N] and prepositions [$-$V,$-$N]. NP and AP predicative complements generally pattern together, a fact that can be stated economically using this category decomposition. In neither \cite{sag85} nor \cite{chomsky70} is there any discussion of how to handle the complete range of complements to a verb like {\it seem}, which takes AP, NP and PP complements, as well as infinitives. The solution would appear to be to associate the verb with two sets of rules for small clauses, leaving aside the use of the verb with an expletive subject and sentential complement. \subsection{Auxiliary + Small Clause} \label{mo} In \cite{moro90} the copula is treated as a special functional category - a lexicalization of tense, which is considered to head its own projection. It takes as a complement the projection of another functional category, Agr (agreement). This projection corresponds roughly to a small clause, and is considered to be the domain within which predication takes place. An NP must then raise out of this projection to become the subject of the sentence: it may be the subject of the AgrP, or, if the predicate of the AgrP is an NP, this may raise instead. In addition to occurring as the complement of {\it be}, AgrP is selected by certain verbs such as {\it consider}. 
It follows from this analysis that when the complement to {\it consider} is a simple AgrP, it will always consist of a subject followed by a predicate, whereas if the complement contains the verb {\it be}, the predicate of the AgrP may raise to the left of {\it be}, leaving the subject of the AgrP to the right. \enumsentence{John$_{i}$ is [$_{AgrP}$ $t_{i}$ the culprit ] .} \enumsentence{The culprit$_{i}$ is [$_{AgrP}$ John $t_{i}$ ] .} \enumsentence{I consider [$_{AgrP}$ John the culprit] .} \enumsentence{I consider [John$_{i}$ to be [$_{AgrP}$ $t_{i}$ the culprit ]] .} \enumsentence{I consider [the culprit$_{i}$ to be [$_{AgrP}$ John $t_{i}$ ]] .} Moro does not discuss a number of aspects of his analysis, including the nature of Agr and the implied existence of sentences without VP's. \section{XTAG analysis} \label{sm-clause-xtag-analysis} \begin{figure}[htbp] \centering \begin{tabular}{ccccc} {\psfig{figure=ps/sm-clause-files/alphanx0N1.ps,height=2.3in}} & \hspace{0.5in} & {\psfig{figure=ps/sm-clause-files/alphanx0Ax1.ps,height=2.4in}} & \hspace{0.5in} & {\psfig{figure=ps/sm-clause-files/alphanx0Pnx1.ps,height=2.4in}} \\ (a)&&(b)&&(c)\\ \end{tabular} \caption{Predicative trees: $\alpha$nx0N1 (a), $\alpha$nx0Ax1 (b) and $\alpha$nx0Pnx1 (c)} \label{predicative-trees} \label{1;1,7} \label{1;1,9} \end{figure} The XTAG grammar provides a uniform analysis for the copula, raising verbs and small clauses by treating the maximal projections of lexical items that can be predicated as predicative clauses, rather than simply noun, adjective and prepositional phrases. The copula adjoins in for matrix clauses, as do the raising verbs. Certain other verbs (such as {\it consider}) can take the predicative clause as a complement, without the adjunction of the copula, to form the embedded small clause. The structure of a predicative clause, then, is roughly as seen in ({\ex{1}})-({\ex{3}}) for NP's, AP's and PP's. 
The XTAG trees corresponding to these structures\footnote{There are actually two other predicative trees in the XTAG grammar. Another predicative noun phrase tree is needed for noun phrases without determiners, as in the sentence {\it They are firemen}, and another prepositional phrase tree is needed for exhaustive prepositional phrases, such as {\it The workers are below}.} are shown in Figures~\ref{predicative-trees}(a), \ref{predicative-trees}(b), and \ref{predicative-trees}(c), respectively. \enumsentence{[$_{S}$ NP [$_{VP}$ N \ldots ]]} \enumsentence{[$_{S}$ NP [$_{VP}$ A \ldots ]]} \enumsentence{[$_{S}$ NP [$_{VP}$ P \ldots ]]} The copula {\it be} and raising verbs all get the basic auxiliary tree as explained in the section on auxiliary verbs (section \ref{aux-non-inverted}). Unlike the raising verbs, the copula also selects the inverted auxiliary tree set. Figure~\ref{Vvx-with-nomprep} shows the basic auxiliary tree anchored by the copula {\it be}. The {\bf $<$mode$>$} feature is used to distinguish the predicative constructions so that only the copula and raising verbs adjoin onto the predicative trees. \begin{figure}[htb] \centering \begin{tabular}{c} {\psfig{figure=ps/sm-clause-files/betaVvx_is-with-features.ps,height=5.7in}} \\ \end{tabular} \caption{Copula auxiliary tree: $\beta$Vvx} \label{Vvx-with-nomprep} \end{figure} There are two possible values of {\bf $<$mode$>$} that correspond to the predicative trees, {\bf nom} and {\bf prep}. They correspond to a modified version of the four-valued [N,V] feature described in section \ref{gpsg}. The {\bf nom} value corresponds to [+N], selecting the NP and AP predicative clauses. As mentioned earlier, they often pattern together with respect to constructions using predicative clauses. The remaining prepositional phrase predicative clauses, then, correspond to the {\bf prep} mode. 
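The grouping of NP and AP predicative clauses under {\bf nom} can be read directly off the [$\pm$N,$\pm$V] decomposition described in section~\ref{gpsg}. The following sketch uses an assumed encoding of that decomposition; it is an illustration of the mapping, not XTAG code:

```python
# [+/-V, +/-N] category decomposition (Chomsky 1970, as cited above):
CATEGORY_FEATURES = {
    "V": {"V": True,  "N": False},   # verb
    "N": {"V": False, "N": True},    # noun
    "A": {"V": True,  "N": True},    # adjective
    "P": {"V": False, "N": False},   # preposition
}

def predicative_mode(anchor):
    """Map a predicative clause's anchor category to its <mode> value."""
    return "nom" if CATEGORY_FEATURES[anchor]["N"] else "prep"

assert predicative_mode("N") == "nom"   # NP predicative clause
assert predicative_mode("A") == "nom"   # AP patterns with NP under [+N]
assert predicative_mode("P") == "prep"  # PP predicative clause
print("nom covers the [+N] cases; prep covers the PP case")
```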
Figure~\ref{upset-with-features} shows the predicative adjective tree from Figure~\ref{predicative-trees}(b) now anchored by {\it upset} and with the features visible. As mentioned, {\bf $<$mode$>$=nom} on the VP node prevents auxiliaries other than the copula or raising verbs from adjoining into this tree. In addition, it prevents the predicative tree from occurring as a matrix clause. Since all matrix clauses in XTAG must be mode indicative ({\bf ind}) or imperative ({\bf imp}), a tree with {\bf $<$mode$>$=nom} or {\bf $<$mode$>$=prep} must have an auxiliary verb (the copula or a raising verb) adjoin in to make it {\bf $<$mode$>$=ind}. \begin{figure}[htb] \centering \begin{tabular}{c} {\psfig{figure=ps/sm-clause-files/alphanx0Ax1_upset-with-features.ps,height=6.3in}} \\ \end{tabular} \caption{Predicative AP tree with features: $\alpha$nx0Ax1} \label{upset-with-features} \label{1;1,4} \end{figure} The distribution of small clauses as embedded complements to some verbs is also managed through the mode feature. Verbs such as {\it consider} and {\it prefer} select trees that take a sentential complement, and then restrict that complement to be {\bf $<$mode$>$=nom} and/or {\bf $<$mode$>$=prep}, depending on the lexical idiosyncrasies of that particular verb. Many verbs that don't take small clause complements do take sentential complements that are {\bf $<$mode$>$=ind}, which includes small clauses with the copula already adjoined. Hence, as seen in sentence sets ({\ex{1}})-({\ex{3}}), {\it consider} takes only small clause complements, {\it prefer} takes both {\bf prep} (but not {\bf nom}) small clauses and indicative clauses, while {\it feel} takes only indicative clauses. 
\enumsentence{She considers Carl a jerk .\\ ?She considers Carl in a foul mood .\\ $\ast$She considers that Carl is a jerk .} \enumsentence{$\ast$She prefers Carl a jerk .\\ She prefers Carl in a foul mood .\\ She prefers that Carl is a jerk .} \enumsentence{$\ast$She feels Carl a jerk .\\ $\ast$She feels Carl in a foul mood .\\ She feels that Carl is a jerk .} \noindent Figure \ref{consider-with-features} shows the tree anchored by {\it consider} that takes the predicative small clauses. \begin{figure}[htb] \centering \begin{tabular}{c} {\psfig{figure=ps/sm-clause-files/betanx0Vs1_consider-with-features.ps,height=2.3in}} \\ \end{tabular} \caption{{\it Consider} tree for embedded small clauses} \label{consider-with-features} \end{figure} Raising verbs such as {\it seems} work essentially the same as the auxiliaries, in that they also select the basic auxiliary tree, as in Figure~\ref{Vvx-with-nomprep}. The only difference is that the value of {\bf $<$mode$>$} on the VP foot node might be different, depending on what types of complements the raising verb takes. Also, two of the raising verbs take an additional tree, $\beta$Vpxvx, shown in Figure~\ref{Vpxvx}, which allows for an experiencer argument, as in {\it John seems to me to be happy}. \begin{figure}[htb] \centering \begin{tabular}{c} {\psfig{figure=ps/sm-clause-files/betaVpxvx.ps,height=2.0in}} \\ \end{tabular} \caption{Raising verb with experiencer tree: $\beta$Vpxvx} \label{Vpxvx} \end{figure} Raising adjectives, such as {\it likely}, take the tree shown in Figure~\ref{Vvx-adj}. This tree combines aspects of the auxiliary tree $\beta$Vvx and the adjectival predicative tree shown in Figure~\ref{predicative-trees}(b). As with $\beta$Vvx, it adjoins in as a VP auxiliary tree. 
However, since it is anchored by an adjective, not a verb, it is similar to the adjectival predicative tree in that it has an $\epsilon$ at the V node, and a feature value of {\bf $<$mode$>$=nom}, which is passed up to the VP root to indicate that it is an adjectival predication. This serves the same purpose as in the case of the tree in Figure~\ref{upset-with-features}, and forces another auxiliary verb, such as the copula, to adjoin in to make it {\bf $<$mode$>$=ind}. \begin{figure}[htb] \centering \begin{tabular}{c} {\psfig{figure=ps/sm-clause-files/betaVvx-adj.ps,height=2.0in}} \\ \end{tabular} \caption{Raising adjective tree: $\beta$Vvx-adj} \label{Vvx-adj} \end{figure} \section{Non-predicative {\it BE}} \label{equative-be-xtag-analysis} The examples with the copula that we have given seem to indicate that {\it be} is always followed by a predicative phrase of some sort. This is not the case, however, as seen in sentences such as ({\ex{1}})-({\ex{6}}). The noun phrases in these sentences are not predicative. They do not take raising verbs, and they do not occur in embedded small clause constructions. \enumsentence{my teacher is Mrs. Wayman .} \enumsentence{Doug is the man with the glasses .} \enumsentence{$\ast$My teacher seems Mrs. Wayman .} \enumsentence{$\ast$Doug appears the man with the glasses .} \enumsentence{$\ast$I consider [my teacher Mrs. Wayman] .} \enumsentence{$\ast$I prefer [Doug the man with the glasses] .} In addition, the subject and complement can exchange positions in these types of examples but not in sentences with predicative {\it be}. Sentence ({\ex{1}}) has the same interpretation as sentence ({\ex{-4}}) and differs only in the positions of the subject and complement NP's. Similar sentences, with a predicative {\it be}, are shown in ({\ex{2}}) and ({\ex{3}}). In this case, the sentence with the exchanged NP's ({\ex{3}}) is ungrammatical. 
\enumsentence{The man with the glasses is Doug .} \enumsentence{Doug is a programmer .} \enumsentence{$\ast$A programmer is Doug .} The non-predicative {\it be} in ({\ex{-8}}) and ({\ex{-7}}), also called \xtagdef{equative be}, patterns differently, both syntactically and semantically, from the predicative usage of {\it be}. Since these sentences are clearly not predicative, it is not desirable to have a tree structure that is anchored by the NP, AP, or PP, as we have in the predicative sentences. In addition to the conceptual problem, we would also need a mechanism to block raising verbs from adjoining into these sentences (while allowing them for true predicative phrases), and prevent these types of sentences from being embedded (again, while allowing them for true predicative phrases). \begin{figure}[htb] \centering \begin{tabular}{ccc} {\psfig{figure=ps/sm-clause-files/alphanx0BEnx1_is.ps,height=1.9in}} & \hspace{1.0in}& {\psfig{figure=ps/sm-clause-files/alphaInvnx0BEnx1_is.ps,height=2.5in}} \\ (a)&&(b)\\ \end{tabular} \caption{Equative {\it BE} trees: $\alpha$nx0BEnx1 (a) and $\alpha$Invnx0BEnx1 (b)} \label{equative-be} \label{1;1,6} \end{figure} Although non-predicative {\it be} is not a raising verb, it does exhibit the auxiliary verb behavior set out in section \ref{copula-data}. It inverts, contracts, and so forth, as seen in sentences ({\ex{1}}) and ({\ex{2}}), and therefore cannot be associated with any existing tree family for main verbs. It requires a separate tree family that includes the tree for inversion. Figures~\ref{equative-be}(a) and \ref{equative-be}(b) show the declarative and inverted trees, respectively, for equative {\it be}. \enumsentence{is my teacher Mrs. Wayman ?} \enumsentence{Doug isn't the man with the glasses .} \chapter{Where to Find What} \label{table-intro} The two-page table that follows gives an overview of what types of trees occur in various tree families with pointers to discussion in this report. 
An entry in a cell of the table indicates that the tree(s) for the construction named in the row header are included in the tree family named in the column header. Entries are of two types. If the particular tree(s) are displayed and/or discussed in this report, the entry gives a page number reference to the relevant discussion or figure.\footnote{Since Chapter~\ref{verb-classes} has a brief discussion and a declarative tree for every tree family, page references are given only for other sections in which discussion or tree diagrams appear.} Otherwise, a $\surd$ \space indicates inclusion in the tree family but no figure or discussion related specifically to that tree in this report. Blank cells indicate that there are no trees for the construction named in the row header in the tree family named in the column header. Two tables are given below. The first one gives the expansion of the abbreviations used in the table headers. The second table lists the name of each tree family in the actual XTAG grammar. This makes it easier to find the description of each tree family in Chapter~\ref{verb-classes} and to compare the description with the online XTAG grammar. \vspace{0.3in} \small \begin{tabular}{ll} Abbreviation&Full Name\\ \hline Sent. Subj. w. {\it to} & Sentential Subject with {\it to} PP complement \\ Pred. Mult-wd. ARB, P & Predicative Multi-word PP with Adv, Prep anchors\\ Pred. Mult-wd. A, P & Predicative Multi-word PP with Adj, Prep anchors\\ Pred. Mult-wd. N, P & Predicative Multi-word PP with Noun, Prep anchors\\ Pred. Mult-wd. P, P & Predicative Multi-word PP with two Prep anchors\\ Pred. Mult-wd. no int. mod. & Predicative Multi-word PP with no internal modification\\ Pred. Sent. Subj., ARB, P & Predicative PP with Sentential Subject, and Adv, Prep anchors\\ Pred. Sent. Subj., A, P & Predicative PP with Sentential Subject, and Adj, Prep anchors\\ Pred. Sent. Subj., Conj, P & Predicative PP with Sentential Subject, and Conj, Prep anchors\\ Pred. Sent. 
Subj., N, P & Predicative PP with Sentential Subject, and Noun, Prep anchors\\ Pred. Sent. Subj., P, P & Predicative PP with Sentential Subject, and two Prep anchors\\ Pred. Sent. Subj., no int-mod & Predicative PP with Sentential Subject, no internal modification\\ Pred. Locative & Predicative anchored by a Locative Adverb\\ Pred. A Sent. Subj., Comp. & Predicative Adjective with Sentential Subject and Complement\\ Sentential Comp. with NP&Sentential Complement with NP\\ Pred. Mult-wd. V, P & Predicative Multi-word with Verb, Prep anchors \\ Adj. Sm. Cl. w. Sentential Subj.&Adjective Small Clause with Sentential Subject\\ NP Sm. Clause w. Sentential Subj.&NP Small Clause with Sentential Subject\\ PP Sm. Clause w. Sentential Subj.&PP Small Clause with Sentential Subject\\ NP Sm. Cl. w. Sent. Comp.&NP Small Clause with Sentential Complement\\ Adj. Sm. Cl. w. Sent. Comp.&Adjective Small Clause with Sentential Complement\\ Exhaustive PP Sm. Cl.&Exhaustive PP Small Clause\\ Ditrans. Light Verbs w. PP Shift&Ditransitive Light Verbs with PP Shift\\ Ditrans. Light Verbs w/o PP Shift&Ditransitive Light Verbs without PP Shift\\ Y/N question&Yes/No question \\ Wh-mov. NP complement&Wh-moved NP complement \\ Wh-mov. S comp.&Wh-moved S complement \\ Wh-mov. Adj comp.&Wh-moved Adjective complement \\ Wh-mov. object of a P&Wh-moved object of a P \\ Wh-mov. PP&Wh-moved PP \\ Topic. NP complement&Topicalized NP complement \\ Det. gerund&Determiner gerund \\ Rel. cl. on NP comp.&Relative clause on NP complement \\ Rel. cl. on PP comp.& Relative clause on PP complement\\ Rel. cl. on NP object of P& Relative clause on NP object of P\\ Pass. with wh-moved subj.&Passive with wh-moved subject (with and without {\it by} phrase) \\ Pass. w. wh-mov. ind. obj.&Passive with wh-moved indirect object (with and without {\it by} phrase) \\ Pass. w. wh-mov. obj. of the {\it by} phrase&Passive with wh-moved object of the {\it by} phrase \\ Pass. w. wh-mov. 
{\it by} phrase&Passive with wh-moved {\it by} phrase \\ Trans. Idiom with V, D and N & Transitive Idiom with Verb, Det and Noun anchors\\ Idiom with V, D, N & Idiom with V, D, and N anchors \\ Idiom with V, D, A, N & Idiom with V, D, A, and N anchors \\ Idiom with V, N & Idiom with V and N anchors \\ Idiom with V, A, N & Idiom with V, A, and N anchors \\ Idiom with V, D, N, P & Idiom with V, D, N, and Prep anchors \\ Idiom with V, D, A, N, P & Idiom with V, D, A, N, and Prep anchors \\ Idiom with V, N, P & Idiom with V, N, and Prep anchors \\ Idiom with V, A, N, P & Idiom with V, A, N, and Prep anchors \end{tabular} \normalsize \small \begin{tabular}{ll} Full Name&XTAG Name\\ \hline Intransitive Sentential Subject & Ts0V\\ Sentential Subject with `to' complement & Ts0Vtonx1\\ PP Small Clause, with Adv and Prep anchors & Tnx0ARBPnx1\\ PP Small Clause, with Adj and Prep anchors & Tnx0APnx1\\ PP Small Clause, with Noun and Prep anchors & Tnx0NPnx1\\ PP Small Clause, with Prep anchors & Tnx0PPnx1\\ PP Small Clause, with Prep and Noun anchors & Tnx0PNaPnx1\\ PP Small Clause with Sentential Subject, and Adv and Prep anchors & Ts0ARBPnx1\\ PP Small Clause with Sentential Subject, and Adj and Prep anchors & Ts0APnx1\\ PP Small Clause with Sentential Subject, and Noun and Prep anchors & Ts0NPnx1\\ PP Small Clause with Sentential Subject, and Prep anchors & Ts0PPnx1\\ PP Small Clause with Sentential Subject, and Prep and Noun anchors & Ts0PNaPnx1\\ Exceptional Case Marking & TXnx0Vs1\\ Locative Small Clause with Adv anchor & Tnx0nx1ARB\\ Predicative Adjective with Sentential Subject and Complement & Ts0A1s1\\ Transitive & Tnx0Vnx1\\ Ditransitive with PP shift & Tnx0Vnx1tonx2\\ Ditransitive & Tnx0Vnx1nx2\\ Ditransitive with PP & Tnx0Vnx1pnx2\\ Sentential Complement with NP & Tnx0Vnx1s2\\ Intransitive Verb Particle & Tnx0Vpl\\ Transitive Verb Particle & Tnx0Vplnx1\\ Ditransitive Verb Particle & Tnx0Vplnx1nx2\\ Intransitive with PP & Tnx0Vpnx1\\ Sentential Complement & 
Tnx0Vs1\\ Light Verbs & Tnx0lVN1\\ Ditransitive Light Verbs with PP Shift & Tnx0lVN1Pnx2\\ Adjective Small Clause with Sentential Subject & Ts0Ax1\\ NP Small Clause with Sentential Subject & Ts0N1\\ PP Small Clause with Sentential Subject & Ts0Pnx1\\ Predicative Multi-word with Verb, Prep anchors & Tnx0VPnx1\\ Adverb It-Cleft & TItVad1s2\\ NP It-Cleft & TItVnx1s2\\ PP It-Cleft & TItVpnx1s2\\ Adjective Small Clause Tree & Tnx0Ax1\\ Adjective Small Clause with Sentential Complement & Tnx0A1s1\\ Equative {\it BE} & Tnx0BEnx1\\ NP Small Clause & Tnx0N1\\ NP with Sentential Complement Small Clause & Tnx0N1s1\\ PP Small Clause & Tnx0Pnx1\\ Exhaustive PP Small Clause & Tnx0Px1\\ Intransitive & Tnx0V\\ Intransitive with Adjective & Tnx0Vax1\\ Transitive Sentential Subject & Ts0Vnx1\\ Idiom with V, D and N & Tnx0VDN1\\ Idiom with V, D, A, and N anchors & Tnx0VDAN1\\ Idiom with V and N anchors & Tnx0VN1\\ Idiom with V, A, and N anchors & Tnx0VAN1\\ Idiom with V, D, N, and Prep anchors & Tnx0VDN1Pnx2\\ Idiom with V, D, A, N, and Prep anchors & Tnx0VDAN1Pnx2\\ Idiom with V, N, and Prep anchors & Tnx0VN1Pnx2\\ Idiom with V, A, N, and Prep anchors & Tnx0VAN1Pnx2 \end{tabular} \normalsize \clearpage \part{General Information} \include{getting-around} \include{intro} \include{overview} \include{compl-adj} \part{Verb Classes} \include{table-intro} \include{table} \include{verb-class} \include{ergatives} \include{sent-comps-subjs} \include{sm-clause} \include{double-obj} \include{it-clefts} \part{Sentence Types} \include{passives} \include{extraction} \include{rel_clauses} \include{sent-adjs-sub-conjs} \include{imperatives} \include{gerunds} \part{Other Constructions} \include{det} \include{modifiers} \include{auxs} \include{conj} \include{comparatives} \include{punct} \part{Appendices} \include{future-work} \include{metarules} \include{lexorg} \include{tree-naming} \include{features} \include{evaluation} \bibliographystyle{aaai-named} \chapter{Tree Naming conventions} 
\label{tree-naming} The trees within the XTAG grammar are named according to the following tree naming conventions. These conventions are generally followed, although there are occasional trees that deviate from them. \section{Tree Families} Tree families are named according to the basic declarative tree structure in the tree family (see section~\ref{family-trees}), but with a T as the first character instead of an $\alpha$ or $\beta$. \section{Trees within tree families} \label{family-trees} Each tree begins with either an $\alpha$ (alpha) or a $\beta$ (beta) symbol, indicating whether it is an initial or auxiliary tree, respectively. Following the $\alpha$ or $\beta$, the name may additionally contain one of: \begin{description} \item\begin{tabular}{ll} I&imperative\\ E&ergative\\ N\{0,1,2\}&relative clause\{position\}\\ G&NP gerund\\ D&Determiner gerund\\ pW\{0,1,2\}&wh-PP extraction\{position\}\\ W\{0,1,2\}&wh-NP extraction\{position\}\\ X&ECM (eXceptional case marking)\\ \end{tabular} \end{description} \noindent Numbers are assigned according to the position of the argument in the declarative tree, as follows: \begin{description} \item\begin{tabular}{ll} 0&subject position\\ 1&first argument (e.g. direct object)\\ 2&second argument (e.g. indirect object)\\ \end{tabular} \end{description} \noindent The body of the name consists of a string of the following components, which correspond to the leaves of the tree. The anchor(s) of the tree are indicated by capitalizing the part of speech corresponding to the anchor. 
\begin{description} \item\begin{tabular}{ll} s&sentence\\ a&adjective\\ arb&adverb\\ be&{\it be}\\ c&relative complementizer\\ x&phrasal category\\ d&determiner\\ v&verb\\ lv&light verb\\ conj&conjunction\\ comp&complementizer\\ it&{\it it}\\ n&noun\\ p&preposition\\ to&{\it to}\\ pl&particle\\ by&{\it by}\\ neg&negation\\ \end{tabular} \end{description} \noindent As an example, the transitive declarative tree consists of a subject NP, followed by a verb (which is the anchor), followed by the object NP. This translates into $\alpha$nx0Vnx1. If the subject NP had been extracted, then the tree would be $\alpha$W0nx0Vnx1. A passive tree with the {\it by} phrase in the same tree family would be $\alpha$nx1Vbynx0. Note that even though the object NP has moved to the subject position, it retains the object encoding (nx1). \section{Assorted Initial Trees} Trees that are not part of the tree families are generally gathered into several files for convenience. The various initial trees are located in {\tt lex.trees}. All the trees in this file should begin with an $\alpha$, indicating that they are initial trees. This is followed by the root category, which is named according to the conventions in the previous section (e.g. n for noun, x for phrasal category). The root category is in all capital letters. After the root category, the leaf nodes are named, beginning from the left, with the anchor of the tree also being capitalized. As an example, the $\alpha$NXN tree is rooted by an NP node (NX) and anchored by a noun (N). \section{Assorted Auxiliary Trees} The auxiliary trees are mostly located in the files {\tt prepositions.trees}, {\tt conjunctions.trees}, {\tt determiners.trees}, {\tt advs-adjs.trees}, and {\tt modifiers.trees}, although a couple of other files also contain auxiliary trees. The auxiliary trees follow a slightly different naming convention from the initial trees. 
Since the root and foot nodes must be the same for the auxiliary trees, the root nodes are not explicitly mentioned in the names of auxiliary trees. The trees are named according to the leaf nodes, starting from the left, and capitalizing the anchor node. All auxiliary trees begin with a $\beta$, of course. For example, $\beta$ARBs indicates a tree anchored by an adverb (ARB) that adjoins onto the left of an S node (note that S must be the foot node, and therefore also the root node). \subsection{Relative Clause Trees} For relative clause trees, the following naming conventions have been adopted: if the {\em wh}-moved NP is overt, it is not explicitly represented. Instead, the index of the site of movement (0 for subject, 1 for object, 2 for indirect object) is appended to the N. So $\beta$N0nx0Vnx1 is a subject extraction relative clause with {\bf NP$_{w}$} substitution, and $\beta$N1nx0Vnx1 is an object extraction relative clause. If the {\em wh}-moved NP is covert and Comp substitutes in, the Comp node is represented by {\em c} in the tree name and the index of the extraction site follows {\em c}. Thus $\beta$Nc0nx0Vnx1 is a subject extraction relative clause with Comp substitution. Adjunct trees are similar, except that since the extracted material is not co-indexed to a trace, no index is specified (cf. $\beta$Npxnx0Vnx1, which is an adjunct relative clause with PP pied-piping, and $\beta$Ncnx0Vnx1, which is an adjunct relative clause with Comp substitution). Cases of pied-piping, in which the pied-piped material is part of the anchor, have the anchor capitalized or spelled out (cf. $\beta$Nbynx0nx1Vbynx0, which is a relative clause with {\em by}-phrase pied-piping and {\bf NP$_{w}$} substitution). \chapter{Verb Classes} \label{verb-classes} Each main\footnote{Auxiliary verbs are handled under a different mechanism. 
See Chapter~\ref{auxiliaries} for details.} verb in the syntactic lexicon selects at least one tree family\footnote{See section \ref{tree-db} for explanation of tree families.} (subcategorization frame). Since the tree database and syntactic lexicon are already separated for space efficiency (see Chapter~\ref{overview}), each verb can efficiently select a large number of trees by specifying a tree family, as opposed to each of the individual trees. This approach allows for a considerable reduction in the number of trees that must be specified for any given verb or form of a verb. There are currently 52 tree families in the system.\footnote{An explanation of the naming convention used in naming the trees and tree families is available in Appendix~\ref{tree-naming}.} This chapter gives a brief description of each tree family and shows the corresponding declarative tree\footnote{Before lexicalization, the $\diamond$ indicates the anchor of the tree.}, along with any peculiar characteristics or trees. It also indicates which transformations are in each tree family, and gives the number of verbs that select that family.\footnote{Numbers given are as of August 1998 and are subject to some change with further development of the grammar.} A few sample verbs are given, along with example sentences. \section{Intransitive: Tnx0V}\index{verbs, intransitive} \label{nx0V-family} \begin{description} \item[Description:] This tree family is selected by verbs that do not require an object complement of any type. Adverbs, prepositional phrases and other adjuncts may adjoin on, but are not required for the sentences to be grammatical. 1,878 verbs select this family. \item[Examples:] {\it eat}, {\it sleep}, {\it dance} \\ {\it Al ate .} \\ {\it Seth slept .} \\ {\it Hyun danced .} \item[Declarative tree:] See Figure~\ref{nx0V-tree}. 
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0V.ps,height=3.4cm} \end{tabular} \caption{Declarative Intransitive Tree: $\alpha$nx0V} \label{nx0V-tree} \end{figure} \item[Other available trees:] wh-moved subject, subject relative clause with and without comp, adjunct (gap-less) relative clause with comp, adjunct (gap-less) relative clause with PP pied-piping, imperative, determiner gerund, NP gerund, pre-nominal participial. \end{description} \section{Transitive: Tnx0Vnx1}\index{verbs,transitive} \label{nx0Vnx1-family} \begin{description} \item[Description:] This tree family is selected by verbs that require only an NP object complement. The NP's may be complex structures, including gerund NP's and NP's that take sentential complements. This does not include light verb constructions (see sections~\ref{nx0lVN1-family} and \ref{nx0lVN1Pnx2-family}). 4,343 verbs select the transitive tree family. \item[Examples:] {\it eat}, {\it dance}, {\it take}, {\it like}\\ {\it Al ate an apple .} \\ {\it Seth danced the tango .} \\ {\it Hyun is taking an algorithms course .} \\ {\it Anoop likes the fact that the semester is finished .} \item[Declarative tree:] See Figure~\ref{nx0Vnx1-tree}. 
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0Vnx1.ps,height=3.4cm} \end{tabular} \caption{Declarative Transitive Tree: $\alpha$nx0Vnx1} \label{nx0Vnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved object, subject relative clause with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping, object relative clause with and without comp, imperative, determiner gerund, NP gerund, passive with {\it by} phrase, passive without {\it by} phrase, passive with wh-moved subject and {\it by} phrase, passive with wh-moved subject and no {\it by} phrase, passive with wh-moved object out of the {\it by} phrase, passive with wh-moved {\it by} phrase, passive with relative clause on subject and {\it by} phrase with and without comp, passive with relative clause on subject and no {\it by} phrase with and without comp, passive with relative clause on object on the {\it by} phrase with and without comp/with PP pied-piping, gerund passive with {\it by} phrase, gerund passive without {\it by} phrase, ergative, ergative with wh-moved subject, ergative with subject relative clause with and without comp, ergative with adjunct (gap-less) relative clause with comp/with PP pied-piping. In addition, two other trees that allow transitive verbs to function as adjectives (e.g. {\it the stopped truck}) are also in the family. \end{description} \section{Ditransitive: Tnx0Vnx1nx2}\index{verbs,ditransitive} \label{nx0Vnx1nx2-family} \begin{description} \item[Description:] This tree family is selected by verbs that take exactly two NP complements. It does {\bf not} include verbs that undergo the ditransitive verb shift (see section~\ref{nx0Vnx1Pnx2-family}). The apparent ditransitive alternates involving verbs in this class and benefactive PP's (e.g. {\it John baked a cake for Mary}) are analyzed as transitives (see section~\ref{nx0Vnx1-family}) with a PP adjunct. 
Benefactives are taken to be adjunct PP's because they are optional (e.g. {\it John baked a cake} vs. {\it John baked a cake for Mary}). 122 verbs select the ditransitive tree family. \item[Examples:] {\it ask}, {\it cook}, {\it win} \\ {\it Christy asked Mike a question .} \\ {\it Doug cooked his father dinner .} \\ {\it Dania won her sister a stuffed animal .} \item[Declarative tree:] See Figure~\ref{nx0Vnx1nx2-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0Vnx1nx2.ps,height=3.4cm} \end{tabular} \caption{Declarative Ditransitive Tree: $\alpha$nx0Vnx1nx2} \label{nx0Vnx1nx2-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved direct object, wh-moved indirect object, subject relative clause with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping, direct object relative clause with and without comp, indirect object relative clause with and without comp, imperative, determiner gerund, NP gerund, passive with {\it by} phrase, passive without {\it by} phrase, passive with wh-moved subject and {\it by} phrase, passive with wh-moved subject and no {\it by} phrase, passive with wh-moved object out of the {\it by} phrase, passive with wh-moved {\it by} phrase, passive with wh-moved indirect object and {\it by} phrase, passive with wh-moved indirect object and no {\it by} phrase, passive with relative clause on subject and {\it by} phrase with and without comp, passive with relative clause on subject and no {\it by} phrase with and without comp, passive with relative clause on object of the {\it by} phrase with and without comp/with PP pied-piping, passive with relative clause on the indirect object and {\it by} phrase with and without comp, passive with relative clause on the indirect object and no {\it by} phrase with and without comp, passive with/without {\it by}-phrase with adjunct (gap-less) relative clause with comp/with PP pied-piping, gerund passive with {\it 
by} phrase, gerund passive without {\it by} phrase. \end{description} \section{Ditransitive with PP: Tnx0Vnx1pnx2}\index{verbs, NP with VP verbs} \label{nx0Vnx1pnx2-family} \begin{description} \item[Description:] This tree family is selected by ditransitive verbs that take a noun phrase followed by a prepositional phrase. The preposition is not constrained in the syntactic lexicon. The preposition must be required and not optional - that is, the sentence must be ungrammatical with just the noun phrase (e.g. {\it $\ast$John put the table}). No verbs, therefore, should select both this tree family and the transitive tree family (see section~\ref{nx0Vnx1-family}). This tree family is also distinguished from the ditransitive verbs, such as {\it give}, that undergo verb shifting (see section~\ref{nx0Vnx1Pnx2-family}). There are 62 verbs that select this tree family. \item[Examples:] {\it associate}, {\it put}, {\it refer} \\ {\it Rostenkowski associated money with power .} \\ {\it He put his reputation on the line .} \\ {\it He referred all questions to his attorney .} \item[Declarative tree:] See Figure~\ref{nx0Vnx1pnx2-tree}. 
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0Vnx1pnx2.ps,height=3.4cm} \end{tabular} \caption{Declarative Ditransitive with PP Tree: $\alpha$nx0Vnx1pnx2} \label{nx0Vnx1pnx2-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved direct object, wh-moved object of PP, wh-moved PP, subject relative clause with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping, direct object relative clause with and without comp, object of PP relative clause with and without comp/with PP pied-piping, imperative, determiner gerund, NP gerund, passive with {\it by} phrase, passive without {\it by} phrase, passive with wh-moved subject and {\it by} phrase, passive with wh-moved subject and no {\it by} phrase, passive with wh-moved object out of the {\it by} phrase, passive with wh-moved {\it by} phrase, passive with wh-moved object out of the PP and {\it by} phrase, passive with wh-moved object out of the PP and no {\it by} phrase, passive with wh-moved PP and {\it by} phrase, passive with wh-moved PP and no {\it by} phrase, passive with relative clause on subject and {\it by} phrase with and without comp, passive with relative clause on subject and no {\it by} phrase with and without comp, passive with relative clause on object of the {\it by} phrase with and without comp/with PP pied-piping, passive with relative clause on the object of the PP and {\it by} phrase with and without comp/with PP pied-piping, passive with relative clause on the object of the PP and no {\it by} phrase with and without comp/with PP pied-piping, passive with and without {\it by} phrase with adjunct (gap-less) relative clause with comp/with PP pied-piping, gerund passive with {\it by} phrase, gerund passive without {\it by} phrase. 
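Note that the trees above are never listed verb-by-verb in the lexicon: as described in the introduction to this chapter, a verb such as {\it put} simply names the family Tnx0Vnx1pnx2, and the full set of trees follows from that single choice. The sketch below illustrates this indirection; the data structures and the individual tree identifiers are hypothetical illustrations, not XTAG's actual formats.

```python
# Hypothetical sketch of family-based tree selection (not XTAG's actual
# data format).  A verb's lexicon entry names tree families; each family
# expands to its member trees, so a tree added to a family automatically
# becomes available to every verb that selects the family.
FAMILIES = {
    "Tnx0Vnx1pnx2": [            # a small, illustrative subset of members
        "alpha_nx0Vnx1pnx2",     # declarative
        "alpha_W0nx0Vnx1pnx2",   # wh-moved subject
        "alpha_Inx0Vnx1pnx2",    # imperative (illustrative identifier)
        "alpha_nx1Vpnx2",        # passive, no by phrase (illustrative)
    ],
}

LEXICON = {
    "put":   ["Tnx0Vnx1pnx2"],
    "refer": ["Tnx0Vnx1pnx2"],
}

def trees_for(verb):
    """Expand a verb's family selections into individual trees."""
    return [t for fam in LEXICON.get(verb, []) for t in FAMILIES[fam]]

print(trees_for("put"))
```

Because both verbs name the same family, {\it put} and {\it refer} receive identical tree sets without either entry enumerating a single tree.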
\end{description} \section{Ditransitive with PP shift: Tnx0Vnx1tonx2}\index{verbs,ditransitive with PP shift} \label{nx0Vnx1Pnx2-family} \begin{description} \item[Description:] This tree family is selected by ditransitive verbs that undergo a shift to a {\it to} prepositional phrase. These ditransitive verbs are constrained so that when they shift, the prepositional phrase must start with {\it to}. This is in contrast to the Ditransitives with PP in section~\ref{nx0Vnx1pnx2-family}, in which verbs may appear in [NP V NP PP] constructions with a variety of prepositions. Both the dative shifted and non-shifted PP complement trees are included. 56 verbs select this family. \item[Examples:] {\it give}, {\it promise}, {\it tell} \\ {\it Bill gave Hillary flowers .} \\ {\it Bill gave flowers to Hillary .} \\ {\it Whitman promised the voters a tax cut .} \\ {\it Whitman promised a tax cut to the voters .} \\ {\it Pinocchio told Geppetto a lie .} \\ {\it Pinocchio told a lie to Geppetto .} \item[Declarative tree:] See Figure~\ref{nx0Vnx1Pnx2-tree}. 
\begin{figure}[htb] \centering \begin{tabular}{ccc} \psfig{figure=ps/verb-class-files/alphanx0Vnx1Pnx2.ps,height=5.2cm} & \hspace{1.0in}& \psfig{figure=ps/verb-class-files/alphanx0Vnx2nx1.ps,height=3.3cm} \\ (a) && (b) \end{tabular} \caption{Declarative Ditransitive with PP shift Trees: $\alpha$nx0Vnx1Pnx2~(a) and $\alpha$nx0Vnx2nx1~(b)} \label{nx0Vnx1Pnx2-tree} \end{figure} \item[Other available trees:] {\bf Non-shifted:} wh-moved subject, wh-moved direct object, wh-moved indirect object, subject relative clause with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping, direct object relative clause with comp/with PP pied-piping, indirect object relative clause with and without comp/with PP pied-piping, imperative, NP gerund, passive with {\it by} phrase, passive without {\it by} phrase, passive with wh-moved subject and {\it by} phrase, passive with wh-moved subject and no {\it by} phrase, passive with wh-moved object out of the {\it by} phrase, passive with wh-moved {\it by} phrase, passive with wh-moved indirect object and {\it by} phrase, passive with wh-moved indirect object and no {\it by} phrase, passive with relative clause on subject and {\it by} phrase with and without comp, passive with relative clause on subject and no {\it by} phrase with and without comp, passive with relative clause on object of the {\it by} phrase with and without comp/with PP pied-piping, passive with relative clause on the indirect object and {\it by} phrase with and without comp/with PP pied-piping, passive with relative clause on the indirect object and no {\it by} phrase with and without comp/with PP pied-piping, passive with/without {\it by}-phrase with adjunct (gap-less) relative clause with comp/with PP pied-piping, gerund passive with {\it by} phrase, gerund passive without {\it by} phrase;\\ {\bf Shifted:} wh-moved subject, wh-moved direct object, wh-moved object of PP, wh-moved PP, subject relative clause with and without comp, adjunct 
(gap-less) relative clause with comp/with PP pied-piping, direct object relative clause with comp/with PP pied-piping, object of PP relative clause with and without comp/with PP pied-piping, imperative, determiner gerund, NP gerund, passive with {\it by} phrase, passive without {\it by} phrase, passive with wh-moved subject and {\it by} phrase, passive with wh-moved subject and no {\it by} phrase, passive with wh-moved object out of the {\it by} phrase, passive with wh-moved {\it by} phrase, passive with wh-moved object out of the PP and {\it by} phrase, passive with wh-moved object out of the PP and no {\it by} phrase, passive with wh-moved PP and {\it by} phrase, passive with wh-moved PP and no {\it by} phrase, passive with relative clause on subject and {\it by} phrase with and without comp, passive with relative clause on subject and no {\it by} phrase with and without comp, passive with relative clause on object of the {\it by} phrase with and without comp/with PP pied-piping, passive with relative clause on the object of the PP and {\it by} phrase with and without comp/with PP pied-piping, passive with relative clause on the object of the PP and no {\it by} phrase with and without comp/with PP pied-piping, passive with/without {\it by}-phrase with adjunct (gap-less) relative clause with comp/with PP pied-piping, gerund passive with {\it by} phrase, gerund passive without {\it by} phrase. \end{description} \section{Sentential Complement with NP: Tnx0Vnx1s2}\index{verbs,Sentential Complement with NP} \label{nx0Vnx1s2-family} \begin{description} \item[Description:] This tree family is selected by verbs that take both an NP and a sentential complement. The sentential complement may be infinitive or indicative. The type of clause is specified by each individual verb in its syntactic lexicon entry. A given verb may select more than one type of sentential complement. 
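Schematically, a lexicon entry for a verb in this family pairs the tree family with the {\bf $<$mode$>$} values its complement may take. The sketch below is purely illustrative: the field names and the particular mode assignments are invented for exposition and do not reflect XTAG's actual lexicon format.

```python
# Purely illustrative sketch (invented field names, not XTAG's lexicon
# format): a verb selecting Tnx0Vnx1s2 also records which <mode> values
# its sentential complement may have (indicative and/or infinitive).
LEXICON = {
    "beg":    {"family": "Tnx0Vnx1s2", "comp_modes": {"inf"}},
    "tell":   {"family": "Tnx0Vnx1s2", "comp_modes": {"ind", "inf"}},
    "expect": {"family": "Tnx0Vnx1s2", "comp_modes": {"ind", "inf"}},
}

def licenses(verb, mode):
    """Check whether `verb` permits a sentential complement of `mode`."""
    return mode in LEXICON[verb]["comp_modes"]

# "Srini begged Mark to increase his disk quota" -- infinitival complement
print(licenses("beg", "inf"))
# "Beth told Jim that it was his turn" -- indicative complement
print(licenses("tell", "ind"))
```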
The declarative tree, and many other trees in this family, are auxiliary trees, as opposed to the more common initial trees. These auxiliary trees adjoin onto an S node in an existing tree of the type specified by the sentential complement. This is the mechanism by which TAGs are able to maintain long-distance dependencies (see Chapter~\ref{extraction}), even over multiple embeddings (e.g. {\it What did Bill tell Mary that John said?}). 79 verbs select this tree family. \item[Examples:] {\it beg}, {\it expect}, {\it tell} \\ {\it Srini begged Mark to increase his disk quota .} \\ {\it Beth told Jim that it was his turn .} \item[Declarative tree:] See Figure~\ref{nx0Vnx1s2-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/betanx0Vnx1s2.ps,height=3.4cm} \end{tabular} \caption{Declarative Sentential Complement with NP Tree: $\beta$nx0Vnx1s2} \label{nx0Vnx1s2-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved object, wh-moved sentential complement, subject relative clause with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping, object relative clause with and without comp, imperative, determiner gerund, NP gerund, passive with {\it by} phrase before sentential complement, passive with {\it by} phrase after sentential complement, passive without {\it by} phrase, passive with wh-moved subject and {\it by} phrase before sentential complement, passive with wh-moved subject and {\it by} phrase after sentential complement, passive with wh-moved subject and no {\it by} phrase, passive with wh-moved object out of the {\it by} phrase, passive with wh-moved {\it by} phrase, passive with relative clause on subject and {\it by} phrase before sentential complement with and without comp, passive with relative clause on subject and {\it by} phrase after sentential complement with and without comp, passive with relative clause on subject and no {\it by} phrase with and without comp, 
passive with/without {\it by}-phrase with adjunct (gap-less) relative clause with comp/with PP pied-piping, gerund passive with {\it by} phrase before sentential complement, gerund passive with {\it by} phrase after the sentential complement, gerund passive without {\it by} phrase, parenthetical reporting clause. \end{description} \section{Intransitive Verb Particle: Tnx0Vpl}\index{verbs,verb-particle,intransitive} \label{nx0Vpl} \begin{description} \item[Description:] The trees in this tree family are anchored by both the verb and the verb particle. Both appear in the syntactic lexicon and together select this tree family. Intransitive verb particles can be difficult to distinguish from intransitive verbs with adverbs adjoined on. The main diagnostics for including verbs in this class are whether the meaning is compositional or not, and whether there is a transitive version of the verb/verb particle combination with the same or similar meaning. The existence of an alternate compositional meaning is a strong indication for a separate verb particle construction. There are 159 verb/verb particle combinations. \item[Examples:] {\it add up}, {\it come out}, {\it sign off} \\ {\it The numbers never quite added up .} \\ {\it John finally came out (of the closet) .} \\ {\it I think that I will sign off now .} \item[Declarative tree:] See Figure~\ref{nx0Vpl-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0Vpl.ps,height=3.4cm} \end{tabular} \caption{Declarative Intransitive Verb Particle Tree: $\alpha$nx0Vpl} \label{nx0Vpl-tree} \end{figure} \item[Other available trees:] wh-moved subject, subject relative clause with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping, imperative, determiner gerund, NP gerund. 
\end{description} \section{Transitive Verb Particle: Tnx0Vplnx1}\index{verbs,particle,transitive} \label{nx0Vplnx1-family} \begin{description} \item[Description:] Verb/verb particle combinations that take an NP complement select this tree family. Both the verb and the verb particle are anchors of the trees. Particle movement has been taken as the diagnostic to distinguish verb particle constructions from intransitives with adjoined PP's. If the alleged particle is able to undergo particle movement, in other words, appear both before and after the direct object, then it is judged to be a particle. Items that do not undergo particle movement are taken to be prepositions. In many, but not all, of the verb particle cases, there is also an alternate prepositional meaning in which the lexical item does not move. (e.g. {\it He looked up the number (in the phonebook). He looked the number up. Srini looked up the road (for Purnima's car). $\ast$He looked the road up.}) There are 489 verb/verb particle combinations. \item[Examples:] {\it blow off}, {\it make up}, {\it pick out} \\ {\it He blew off his linguistics class for the third time .} \\ {\it He blew his linguistics class off for the third time .} \\ {\it The dyslexic leprechaun made up the syntactic lexicon .} \\ {\it The dyslexic leprechaun made the syntactic lexicon up .} \\ {\it I would like to pick out a new computer .} \\ {\it I would like to pick a new computer out .} \item[Declarative tree:] See Figure~\ref{nx0Vplnx1-tree}.
\begin{figure}[htb] \centering \begin{tabular}{ccc} \psfig{figure=ps/verb-class-files/alphanx0Vplnx1.ps,height=3.4cm} & \hspace{1.0in}& \psfig{figure=ps/verb-class-files/alphanx0Vnx1pl.ps,height=3.4cm} \\ (a)&&(b) \end{tabular} \caption{Declarative Transitive Verb Particle Tree: $\alpha$nx0Vplnx1~(a) and $\alpha$nx0Vnx1pl~(b)} \label{nx0Vplnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject with particle before the NP, wh-moved subject with particle after the NP, wh-moved object, subject relative clause with particle before the NP with and without comp, subject relative clause with particle after the NP with and without comp, object relative clause with and without comp, adjunct (gap-less) relative clause with particle before the NP with comp/with PP pied-piping, adjunct (gap-less) relative clause with particle after the NP with comp/with PP pied-piping, imperative with particle before the NP, imperative with particle after the NP, determiner gerund with particle before the NP, NP gerund with particle before the NP, NP gerund with particle after the NP, passive with {\it by} phrase, passive without {\it by} phrase, passive with wh-moved subject and {\it by} phrase, passive with wh-moved subject and no {\it by} phrase, passive with wh-moved object out of the {\it by} phrase, passive with wh-moved {\it by} phrase, passive with relative clause on subject and {\it by} phrase with and without comp, passive with relative clause on subject and no {\it by} phrase with and without comp, passive with relative clause on object of the {\it by} phrase with and without comp/with PP pied-piping, passive with/without {\it by}-phrase with adjunct (gap-less) relative clause with comp/with PP pied-piping, gerund passive with {\it by} phrase, gerund passive without {\it by} phrase. 
\end{description} \section{Ditransitive Verb Particle: Tnx0Vplnx1nx2}\index{verbs,particle,ditransitive} \label{nx0Vplnx1nx2} \begin{description} \item[Description:] Verb/verb particle combinations that select this tree family take 2 NP complements. Both the verb and the verb particle anchor the trees, and the verb particle can occur before, between, or after the noun phrases. Perhaps because of the complexity of the sentence, these verbs do not seem to have passive alternations ({\it $\ast$A new bank account was opened up Michelle by me}). There are 4 verb/verb particle combinations that select this tree family. The exhaustive list is given in the examples. \item[Examples:] {\it dish out}, {\it open up}, {\it pay off}, {\it rustle up} \\ {\it I opened up Michelle a new bank account .} \\ {\it I opened Michelle up a new bank account .} \\ {\it I opened Michelle a new bank account up .} \item[Declarative tree:] See Figure~\ref{nx0Vplnx1nx2-tree}. \begin{figure}[htb] \centering \begin{tabular}{ccc} \psfig{figure=ps/verb-class-files/alphanx0Vplnx1nx2.ps,height=3.0cm} & \psfig{figure=ps/verb-class-files/alphanx0Vnx1plnx2.ps,height=3.0cm} & \psfig{figure=ps/verb-class-files/alphanx0Vnx1nx2pl.ps,height=3.0cm} \\ (a) & (b) & (c) \end{tabular} \caption{Declarative Ditransitive Verb Particle Tree: $\alpha$nx0Vplnx1nx2~(a), $\alpha$nx0Vnx1plnx2~(b) and $\alpha$nx0Vnx1nx2pl~(c)} \label{nx0Vplnx1nx2-tree} \end{figure} \item[Other available trees:] wh-moved subject with particle before the NP's, wh-moved subject with particle between the NP's, wh-moved subject with particle after the NP's, wh-moved indirect object with particle before the NP's, wh-moved indirect object with particle after the NP's, wh-moved direct object with particle before the NP's, wh-moved direct object with particle between the NP's, subject relative clause with particle before the NP's with and without comp, subject relative clause with particle between the NP's with and without comp, subject relative 
clause with particle after the NP's with and without comp, indirect object relative clause with particle before the NP's with and without comp, indirect object relative clause with particle after the NP's with and without comp, direct object relative clause with particle before the NP's with and without comp, direct object relative clause with particle between the NP's with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping, imperative with particle before the NP's, imperative with particle between the NP's, imperative with particle after the NP's, determiner gerund with particle before the NP's, NP gerund with particle before the NP's, NP gerund with particle between the NP's, NP gerund with particle after the NP's. \end{description} \section{Intransitive with PP: Tnx0Vpnx1}\index{verbs,intransitive with PP} \label{nx0Vpnx1-family} \begin{description} \item[Description:] The verbs that select this tree family are not strictly intransitive, in that they {\bf must} be followed by a prepositional phrase. Verbs that are intransitive and simply {\bf can} be followed by a prepositional phrase do not select this family, but instead have the PP adjoin onto the intransitive sentence. Accordingly, there should be no verbs in both this class and the intransitive tree family (see section~\ref{nx0V-family}). The prepositional phrase is not restricted to being headed by any particular lexical item. Note that these are not transitive verb particles (see section~\ref{nx0Vplnx1-family}), since the head of the PP does not move. 169 verbs select this tree family. \item[Examples:] {\it grab}, {\it impinge}, {\it provide} \\ {\it Seth grabbed for the brass ring .} \\ {\it The noise gradually impinged on Dania's thoughts .} \\ {\it A good host provides for everyone's needs .} \item[Declarative tree:] See Figure~\ref{nx0Vpnx1-tree}. 
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0Vpnx1.ps,height=1.6in} \end{tabular} \caption{Declarative Intransitive with PP Tree: $\alpha$nx0Vpnx1} \label{nx0Vpnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved object of the PP, wh-moved PP, subject relative clause with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping, object of the PP relative clause with and without comp/with PP pied-piping, imperative, determiner gerund, NP gerund, passive with {\it by} phrase, passive without {\it by} phrase, passive with wh-moved subject and {\it by} phrase, passive with wh-moved subject and no {\it by} phrase, passive with wh-moved {\it by} phrase, passive with relative clause on subject and {\it by} phrase with and without comp, passive with relative clause on subject and no {\it by} phrase with and without comp, passive with relative clause on object of the {\it by} phrase with and without comp/with PP pied-piping, passive with/without {\it by}-phrase with adjunct (gap-less) relative clause with comp/with PP pied-piping, gerund passive with {\it by} phrase, gerund passive without {\it by} phrase. \end{description} \section{Predicative Multi-word with Verb, Prep anchors: Tnx0VPnx1}\index{verbs,prepositional complement} \label{nx0VPnx1-family} \begin{description} \item[Description:] This tree family is selected by multiple anchor verb/preposition pairs which together have a non-compositional interpretation. For example, {\it think of} has the non-compositional interpretation involving the inception of a notion or mental entity in addition to the interpretation in which the agent is thinking about someone or something. Anchors for this tree must be able to take both gerunds and regular NP's in the second noun position. To allow adverbs to appear between the verb and the preposition, the trees contain an extra VP level.
Several of the verbs which select the Tnx0Vpnx1 family, but which should not have quite the freedom it allows, will be moved to this family for the next release. 28 verb/preposition pairs select this tree family. \item[Examples:] {\it think of}, {\it believe in}, {\it depend on} \\ {\it Calvin thought of a new idea .}\\ {\it Hobbes believes in sleeping all day .}\\ {\it Bill depends on drinking coffee for stimulation .}\\ \item[Declarative tree:] See Figure~\ref{nx0VPnx1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0VPnx1.ps,height=4.8cm} \end{tabular} \caption{Declarative PP Complement Tree: $\alpha$nx0VPnx1} \label{nx0VPnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved object, subject relative clause with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping, object relative clause with and without comp, imperative, determiner gerund, NP gerund, passive with {\it by} phrase, passive without {\it by} phrase, passive with wh-moved subject and {\it by} phrase, passive with wh-moved subject and no {\it by} phrase, passive with wh-moved object out of the {\it by} phrase, passive with wh-moved {\it by} phrase, passive with relative clause on subject and {\it by} phrase with and without comp, passive with relative clause on subject and no {\it by} phrase with and without comp, passive with relative clause on object of the {\it by} phrase with and without comp/with PP pied-piping, passive with/without {\it by}-phrase with adjunct (gap-less) relative clause with comp/with PP pied-piping, gerund passive with {\it by} phrase, gerund passive without {\it by} phrase. In addition, two other trees that allow transitive verbs to function as adjectives (e.g. {\it the thought-of idea}) are also in the family.
\end{description} \section{Sentential Complement: Tnx0Vs1}\index{verbs,sentential complement} \label{nx0Vs1-family} \begin{description} \item[Description:] This tree family is selected by verbs that take just a sentential complement. The sentential complement may be of type infinitive, indicative, or small clause (see Chapter~\ref{small-clauses}). The type of clause is specified by each individual verb in its syntactic lexicon entry, and a given verb may select more than one type of sentential complement. The declarative tree, and many other trees in this family, are auxiliary trees, as opposed to the more common initial trees. These auxiliary trees adjoin onto an S node in an existing tree of the type specified by the sentential complement. This is the mechanism by which TAGs are able to maintain long-distance dependencies (see Chapter~\ref{extraction}), even over multiple embeddings (e.g. {\it What did Bill think that John said?}). 338 verbs select this tree family. \item[Examples:] {\it consider}, {\it think} \\ {\it Dania considered the algorithm unworkable .}\\ {\it Srini thought that the program was working .} \\ \item[Declarative tree:] See Figure~\ref{nx0Vs1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/betanx0Vs1.ps,height=3.4cm} \end{tabular} \caption{Declarative Sentential Complement Tree: $\beta$nx0Vs1} \label{nx0Vs1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved sentential complement, subject relative clause with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping, imperative, determiner gerund, NP gerund, parenthetical reporting clause. \end{description} \section{Intransitive with Adjective: Tnx0Vax1}\index{verbs,intransitive with adjective} \label{nx0Vax1-family} \begin{description} \item[Description:] The verbs that select this tree family take an adjective as a complement. The adjective may be regular, comparative, or superlative.
It may also be formed from the special class of adjectives derived from the transitive verbs (e.g. {\it agitated, broken}; see section~\ref{nx0Vnx1-family}). Unlike the Intransitive with PP verbs (see section~\ref{nx0Vpnx1-family}), some of these verbs may also occur as bare intransitives. This distinction is drawn because adjectives do not normally adjoin onto sentences, as prepositional phrases do. Other intransitive verbs can only occur with the adjective, and these select only this family. The verb class is also distinguished from the adjective small clauses (see section~\ref{nx0Ax1-family}) because these verbs are not raising verbs. 34 verbs select this tree family. \item[Examples:] {\it become}, {\it grow}, {\it smell} \\ {\it The greenhouse became hotter .} \\ {\it The plants grew tall and strong .} \\ {\it The flowers smelled wonderful .} \item[Declarative tree:] See Figure~\ref{nx0Vax1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0Vax1.ps,height=3.4cm} \end{tabular} \caption{Declarative Intransitive with Adjective Tree: $\alpha$nx0Vax1} \label{nx0Vax1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved adjective ({\it how}), subject relative clause with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping, imperative, NP gerund. \end{description} \section{Transitive Sentential Subject: Ts0Vnx1}\index{verbs,sentential subject} \label{s0Vnx1-family} \begin{description} \item[Description:] The verbs that select this tree family all take sentential subjects, and are often referred to as `psych' verbs, since they all refer to some psychological state of mind. The sentential subject can be indicative (complementizer required) or infinitive (complementizer optional). 100 verbs select this tree family.
\item[Examples:] {\it delight}, {\it impress}, {\it surprise} \\ {\it that the tea had rosehips in it delighted Christy .} \\ {\it to even attempt a marathon impressed Dania .} \\ {\it For Jim to have walked the dogs surprised Beth .} \item[Declarative tree:] See Figure~\ref{s0Vnx1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphas0Vnx1.ps,height=3.4cm} \end{tabular} \caption{Declarative Sentential Subject Tree: $\alpha$s0Vnx1} \label{s0Vnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved object, subject relative clause with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{Light Verbs: Tnx0lVN1}\index{verbs, light} \label{nx0lVN1-family} \begin{description} \item[Description:] The verb/noun pairs that select this tree family are pairs in which the interpretation is non-compositional and the noun contributes argument structure to the predicate (e.g. {\it The man took a walk.} vs. {\it The man took a radio}). The verb and the noun occur together in the syntactic database, and both anchor the trees. The verbs in the light verb constructions are {\it do}, {\it give}, {\it have}, {\it make} and {\it take}. The noun following the light verb is (usually) in a bare infinitive form ({\it have a good cry}) and usually occurs with {\it a(n)}. However, we include deverbal nominals ({\it take a bath}, {\it give a demonstration}) as well. Constructions with nouns that do not contribute an argument structure ({\it have a cigarette}, {\it give} NP {\it a black eye}) are excluded. In addition to semantic considerations of light verbs, they differ syntactically from Transitive verbs (section~\ref{nx0Vnx1-family}) as well in that the noun in the light verb construction does not extract. Some of the verb-noun anchors for this family, like {\it take aim} and {\it take hold} disallow determiners, while others require particular determiners.
For example, {\it have think} must be indefinite and singular, as attested by the ungrammaticality of *{\it John had the think/some thinks}. Another anchor, {\it take leave} can occur either bare or with a possessive pronoun (e.g., {\it John took his leave}, but not *{\it John took the leave}). This is accomplished through feature specification on the lexical entries. There are 259 verb/noun pairs that select the light verb tree. \item[Examples:] {\it give groan}, {\it have discussion}, {\it make comment} \\ {\it The audience gave a collective groan .} \\ {\it We had a big discussion about closing the libraries .} \\ {\it The professors made comments on the paper .} \item[Declarative tree:] See Figure~\ref{nx0lVN1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0lVN1.ps,height=4.0cm}\\ \end{tabular} \caption{Declarative Light Verb Tree: $\alpha$nx0lVN1} \label{nx0lVN1-tree} \end{figure} \item[Other available trees:] wh-moved subject, subject relative clause with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping, imperative, determiner gerund, NP gerund. \end{description} \section{Ditransitive Light Verbs with PP Shift: Tnx0lVN1Pnx2}\index{verbs,ditransitive light verbs with PP shift} \label{nx0lVN1Pnx2-family} \begin{description} \item[Description:] The verb/noun pairs that select this tree family are pairs in which the interpretation is non-compositional and the noun contributes argument structure to the predicate (e.g. {\it Dania made Srini a cake.} vs. {\it Dania made Srini a loan}). The verb and the noun occur together in the syntactic database, and both anchor the trees. The verbs in these light verb constructions are {\it give} and {\it make}. The noun following the light verb is (usually) a bare infinitive form (e.g. {\it make a promise to Anoop}). However, we include deverbal nominals (e.g. {\it make a payment to Anoop}) as well.
Constructions with nouns that do not contribute an argument structure are excluded. In addition to semantic considerations of light verbs, they differ syntactically from the Ditransitive with PP Shift verbs (see section~\ref{nx0Vnx1Pnx2-family}) as well in that the noun in the light verb construction does not extract. Also, passivization is severely restricted. Special determiner requirements and restrictions are handled in the same manner as for the Tnx0lVN1 family. There are 18 verb/noun pairs that select this family. \item[Examples:] {\it give look}, {\it give wave}, {\it make promise} \\ {\it Dania gave Carl a murderous look .} \\ {\it Amanda gave us a little wave as she left .} \\ {\it Dania made Doug a promise .} \item[Declarative tree:] See Figure~\ref{nx0lVN1Pnx2-tree}. \begin{figure}[htb] \centering \mbox{} \begin{tabular}{cc} \psfig{figure=ps/verb-class-files/alphanx0lVN1Pnx2.ps,height=5.1cm} & \psfig{figure=ps/verb-class-files/alphanx0lVnx2N1.ps,height=5.1cm} \\ (a) & (b) \vspace*{1.2cm}\\ \end{tabular} \caption{Declarative Light Verbs with PP Tree: $\alpha$nx0lVN1Pnx2~(a), $\alpha$nx0lVnx2N1~(b)} \label{nx0lVN1Pnx2-tree} \end{figure} \item[Other available trees:] {\bf Non-shifted:} wh-moved subject, wh-moved indirect object, subject relative clause with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping, indirect object relative clause with and without comp/with PP pied-piping, imperative, NP gerund, passive with {\it by} phrase, passive with {\it by}-phrase with adjunct (gap-less) relative clause with comp/with PP pied-piping, gerund passive with {\it by} phrase, gerund passive without {\it by} phrase \\ {\bf Shifted:} wh-moved subject, wh-moved object of PP, wh-moved PP, subject relative clause with and without comp, object of PP relative clause with and without comp/with PP pied-piping, imperative, determiner gerund, NP gerund, passive with {\it by} phrase with adjunct (gap-less) relative clause with comp/with PP
pied-piping, gerund passive with {\it by} phrase, gerund passive without {\it by} phrase. \end{description} \section{NP It-Cleft: TItVnx1s2} \label{ItVnx1s2-family} \begin{description} \item[Description:] This tree family is selected by {\it be} as the main verb and {\it it} as the subject. Together these two items serve as a multi-component anchor for the tree family. This tree family is used for it-clefts in which the clefted element is an NP and there are no gaps in the clause which follows the NP. The NP is interpreted as an adjunct of the following clause. See Chapter~\ref{it-clefts} for additional discussion. \item[Examples:] {\it it be} \\ {\it it was yesterday that we had the meeting .} \item[Declarative tree:] See Figure~\ref{ItVnx1s2-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphaItVnx1s2.ps,height=4.9cm} \end{tabular} \caption{Declarative NP It-Cleft Tree: $\alpha$ItVnx1s2} \label{ItVnx1s2-tree} \end{figure} \item[Other available trees:] inverted question, wh-moved object with {\it be} inverted, wh-moved object with {\it be} not inverted, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{PP It-Cleft: TItVpnx1s2} \label{ItVpnx1s2-family} \begin{description} \item[Description:] This tree family is selected by {\it be} as the main verb and {\it it} as the subject. Together these two items serve as a multi-component anchor for the tree family. This tree family is used for it-clefts in which the clefted element is a PP and there are no gaps in the clause which follows the PP. The PP is interpreted as an adjunct of the following clause. See Chapter~\ref{it-clefts} for additional discussion. \item[Examples:] {\it it be} \\ {\it it was at Kent State that the police shot all those students .} \item[Declarative tree:] See Figure~\ref{ItVpnx1s2-tree}.
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphaItVpnx1s2.ps,height=5.0cm} \end{tabular} \caption{Declarative PP It-Cleft Tree: $\alpha$ItVpnx1s2} \label{ItVpnx1s2-tree} \end{figure} \item[Other available trees:] inverted question, wh-moved prepositional phrase with {\it be} inverted, wh-moved prepositional phrase with {\it be} not inverted, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{Adverb It-Cleft: TItVad1s2} \label{ItVad1s2-family} \begin{description} \item[Description:] This tree family is selected by {\it be} as the main verb and {\it it} as the subject. Together these two items serve as a multi-component anchor for the tree family. This tree family is used for it-clefts in which the clefted element is an adverb and there are no gaps in the clause which follows the adverb. The adverb is interpreted as an adjunct of the following clause. See Chapter~\ref{it-clefts} for additional discussion. \item[Examples:] {\it it be} \\ {\it it was reluctantly that Dania agreed to do the tech report .} \item[Declarative tree:] See Figure~\ref{ItVad1s2-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphaItVad1s2.ps,height=4.9cm} \end{tabular} \caption{Declarative Adverb It-Cleft Tree: $\alpha$ItVad1s2} \label{ItVad1s2-tree} \end{figure} \item[Other available trees:] inverted question, wh-moved adverb {\it how} with {\it be} inverted, wh-moved adverb {\it how} with {\it be} not inverted, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{Adjective Small Clause Tree: Tnx0Ax1}\index{verbs,small-clause} \label{nx0Ax1-family} \begin{description} \item[Description:] These trees are not anchored by verbs, but by adjectives. They are explained in much greater detail in the section on small clauses (see section~\ref{sm-clause-xtag-analysis}). This section is presented here for completeness.
3,244 adjectives select this tree family. \item[Examples:] {\it addictive}, {\it dangerous}, {\it wary}\\ {\it cigarettes are addictive .} \\ {\it smoking cigarettes is dangerous .} \\ {\it John seems wary of the Surgeon General's warnings .} \item[Declarative tree:] See Figure~\ref{nx0Ax1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0Ax1.ps,height=4.0cm} \end{tabular} \caption{Declarative Adjective Small Clause Tree: $\alpha$nx0Ax1} \label{nx0Ax1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved adjective {\it how}, relative clause on subject with and without comp, imperative, NP gerund, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{Adjective Small Clause with Sentential Complement: Tnx0A1s1} \label{nx0A1s1-family} \begin{description} \item[Description:] This tree family is selected by adjectives that take sentential complements. The sentential complements can be indicative or infinitive. Note that these trees are anchored by adjectives, not verbs. Small clauses are explained in much greater detail in section~\ref{sm-clause-xtag-analysis}. This section is presented here for completeness. 669 adjectives select this tree family. \item[Examples:] {\it able}, {\it curious}, {\it disappointed} \\ {\it Christy was able to find the problem .} \\ {\it Christy was curious whether the new analysis was working .} \\ {\it Christy was disappointed that the old analysis failed .} \item[Declarative tree:] See Figure~\ref{nx0A1s1-tree}.
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0A1s1.ps,height=4.7cm} \end{tabular} \caption{Declarative Adjective Small Clause with Sentential Complement Tree: $\alpha$nx0A1s1} \label{nx0A1s1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved adjective {\it how}, relative clause on subject with and without comp, imperative, NP gerund, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{Adjective Small Clause with Sentential Subject: Ts0Ax1} \label{s0Ax1-family} \begin{description} \item[Description:] This tree family is selected by adjectives that take sentential subjects. The sentential subjects can be indicative or infinitive. Note that these trees are anchored by adjectives, not verbs. Most adjectives that take the Adjective Small Clause tree family (see section~\ref{nx0Ax1-family}) take this family as well.\footnote{No great attempt has been made to go through and decide which adjectives should actually take this family and which should not.} Small clauses are explained in much greater detail in section~\ref{sm-clause-xtag-analysis}. This section is presented here for completeness. 3,185 adjectives select this tree family. \item[Examples:] {\it decadent}, {\it incredible}, {\it uncertain} \\ {\it to eat raspberry chocolate truffle ice cream is decadent .} \\ {\it that Carl could eat a large bowl of it is incredible .} \\ {\it whether he will actually survive the experience is uncertain .} \item[Declarative tree:] See Figure~\ref{s0Ax1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphas0Ax1.ps,height=4.0cm} \end{tabular} \caption{Declarative Adjective Small Clause with Sentential Subject Tree: $\alpha$s0Ax1} \label{s0Ax1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved adjective, adjunct (gap-less) relative clause with comp/with PP pied-piping. 
\end{description} \section{Equative {\it BE}: Tnx0BEnx1} \label{nx0BEnx1-family} \begin{description} \item[Description:] This tree family is selected only by the verb {\it be}. It is distinguished from the predicative NP's (see section~\ref{nx0N1-family}) in that two NP's are equated, and hence interchangeable (see Chapter~\ref{small-clauses} for more discussion on the English copula and predicative sentences). The XTAG analysis for equative {\it be} is explained in greater detail in section~\ref{equative-be-xtag-analysis}. \item[Examples:] {\it be} \\ {\it That man is my uncle.} \item[Declarative tree:] See Figure~\ref{nx0BEnx1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0BEnx1.ps,height=5.1cm} \end{tabular} \caption{Declarative Equative {\it BE} Tree: $\alpha$nx0BEnx1} \label{nx0BEnx1-tree} \end{figure} \item[Other available trees:] inverted-question. \end{description} \section{NP Small Clause: Tnx0N1} \label{nx0N1-family} \begin{description} \item[Description:] The trees in this tree family are not anchored by verbs, but by nouns. Small clauses are explained in much greater detail in section~\ref{sm-clause-xtag-analysis}. This section is presented here for completeness. 5,595 nouns select this tree family. \item[Examples:] {\it author}, {\it chair}, {\it dish} \\ {\it Dania is an author .} \\ {\it that blue, warped-looking thing is a chair .} \\ {\it those broken pieces were dishes .} \item[Declarative tree:] See Figure~\ref{nx0N1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0N1.ps,height=4.8cm} \end{tabular} \caption{Declarative NP Small Clause Trees: $\alpha$nx0N1} \label{nx0N1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved object, relative clause on subject with and without comp, imperative, NP gerund, adjunct (gap-less) relative clause with comp/with PP pied-piping. 
\end{description} \section{NP Small Clause with Sentential Complement: Tnx0N1s1} \label{nx0N1s1-family} \begin{description} \item[Description:] This tree family is selected by the small group of nouns that take sentential complements by themselves (see section~\ref{NPA}). The sentential complements can be indicative or infinitive, depending on the noun. Small clauses in general are explained in much greater detail in section~\ref{sm-clause-xtag-analysis}. This section is presented here for completeness. 141 nouns select this family. \item[Examples:] {\it admission}, {\it claim}, {\it vow} \\ {\it The affidavits are admissions that they killed the sheep .} \\ {\it there is always the claim that they were insane .} \\ {\it this is his vow to fight the charges .} \item[Declarative tree:] See Figure~\ref{nx0N1s1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0N1s1.ps,height=4.0cm} \end{tabular} \caption{Declarative NP with Sentential Complement Small Clause Tree: $\alpha$nx0N1s1} \label{nx0N1s1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved object, relative clause on subject with and without comp, imperative, NP gerund, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{NP Small Clause with Sentential Subject: Ts0N1} \label{s0N1-family} \begin{description} \item[Description:] This tree family is selected by nouns that take sentential subjects. The sentential subjects can be indicative or infinitive. Note that these trees are anchored by nouns, not verbs. Most nouns that take the NP Small Clause tree family (see section~\ref{nx0N1-family}) take this family as well.\footnote{No great attempt has been made to go through and decide which nouns should actually take this family and which should not.} Small clauses are explained in much greater detail in section~\ref{sm-clause-xtag-analysis}. This section is presented here for completeness.
5,519 nouns select this tree family. \item[Examples:] {\it dilemma}, {\it insanity}, {\it tragedy} \\ {\it whether to keep the job he hates is a dilemma .} \\ {\it to invest all of your money in worms is insanity .} \\ {\it that the worms died is a tragedy .} \item[Declarative tree:] See Figure~\ref{s0N1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphas0N1.ps,height=4.0cm} \end{tabular} \caption{Declarative NP Small Clause with Sentential Subject Tree: $\alpha$s0N1} \label{s0N1-tree} \end{figure} \item[Other available trees:] wh-moved subject, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{PP Small Clause: Tnx0Pnx1} \label{nx0Pnx1-family} \begin{description} \item[Description:] This family is selected by prepositions that can occur in small clause constructions. For more information on small clause constructions, see section~\ref{sm-clause-xtag-analysis}. This section is presented here for completeness. 39 prepositions select this tree family. \item[Examples:] {\it around}, {\it in}, {\it underneath} \\ {\it Chris is around the corner .} \\ {\it Trisha is in big trouble .} \\ {\it The dog is underneath the table .} \item[Declarative tree:] See Figure~\ref{nx0Pnx1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0Pnx1.ps,height=4.0cm} \end{tabular} \caption{Declarative PP Small Clause Tree: $\alpha$nx0Pnx1} \label{nx0Pnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved object of PP, relative clause on subject with and without comp, relative clause on object of PP with and without comp/with PP pied-piping, imperative, NP gerund, adjunct (gap-less) relative clause with comp/with PP pied-piping. 
\end{description} \section{Exhaustive PP Small Clause: Tnx0Px1} \label{nx0Px1-family} \begin{description} \item[Description:] This family is selected by {\bf exhaustive} prepositions that can occur in small clauses. Exhaustive prepositions are prepositions that function as prepositional phrases by themselves. For more information on small clause constructions, please see section~\ref{sm-clause-xtag-analysis}. The section is included here for completeness. 33 exhaustive prepositions select this tree family. \item[Examples:] {\it abroad}, {\it below}, {\it outside} \\ {\it Dr. Joshi is abroad .} \\ {\it The workers are all below .} \\ {\it Clove is outside .} \item[Declarative tree:] See Figure~\ref{nx0Px1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0Px1.ps,height=4.0cm} \end{tabular} \caption{Declarative Exhaustive PP Small Clause Tree: $\alpha$nx0Px1} \label{nx0Px1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved PP, relative clause on subject with and without comp, imperative, NP gerund, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{PP Small Clause with Sentential Subject: Ts0Pnx1} \label{s0Pnx1-family} \begin{description} \item[Description:] This tree family is selected by prepositions that take sentential subjects. The sentential subject can be indicative or infinitive. Small clauses are explained in much greater detail in section~\ref{sm-clause-xtag-analysis}. This section is presented here for completeness. 39 prepositions select this tree family. \item[Examples:] {\it beyond}, {\it outside}, {\it unlike} \\ {\it that Ken could forget to pay the taxes is beyond belief .} \\ {\it to explain how this happened is outside the scope of this discussion .} \\ {\it for Ken to do something right is unlike him .} \item[Declarative tree:] See Figure~\ref{s0Pnx1-tree}. 
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphas0Pnx1.ps,height=4.0cm} \end{tabular} \caption{Declarative PP Small Clause with Sentential Subject Tree: $\alpha$s0Pnx1} \label{s0Pnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, relative clause on object of the PP with and without comp/with PP pied-piping, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{Intransitive Sentential Subject: Ts0V}\index{verbs,sentential subject} \label{s0V-family} \begin{description} \item[Description:] Only the verb {\it matter} selects this tree family. The sentential subject can be indicative (complementizer required) or infinitive (complementizer optional). \item[Examples:] {\it matter} \\ {\it to arrive on time matters considerably .} \\ {\it that Joshi attends the meetings matters to everyone .} \item[Declarative tree:] See Figure~\ref{s0V-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphas0V.ps,height=3.0cm} \end{tabular} \caption{Declarative Intransitive Sentential Subject Tree: $\alpha$s0V} \label{s0V-tree} \end{figure} \item[Other available trees:] wh-moved subject, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{Sentential Subject with `to' complement: Ts0Vtonx1}\index{verbs,sentential subject, PP complement} \label{s0Vtonx1-family} \begin{description} \item[Description:] The verbs that select this tree family are {\it fall}, {\it occur} and {\it leak}. The sentential subject can be indicative (complementizer required) or infinitive (complementizer optional). \item[Examples:] {\it fall}, {\it occur}, {\it leak}\\ {\it to wash the car fell to the children .} \\ {\it that he should leave occurred to the party crasher .} \\ {\it whether the princess divorced the prince leaked to the press .} \item[Declarative tree:] See Figure~\ref{s0Vtonx1-tree}. 
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphas0Vtonx1.ps,height=5.4cm} \end{tabular} \caption{Sentential Subject Tree with `to' complement: $\alpha$s0Vtonx1} \label{s0Vtonx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{PP Small Clause, with Adv and Prep anchors: Tnx0ARBPnx1} \label{nx0ARBPnx1-family} \begin{description} \item[Description:] This family is selected by multi-word prepositions that can occur in small clause constructions. In particular, this family is selected by two-word prepositions, where the first word is an adverb, the second word a preposition. Both components of the multi-word preposition are anchors. For more information on small clause constructions, see section~\ref{sm-clause-xtag-analysis}. 8 multi-word prepositions select this tree family. \item[Examples:] {\it ahead of}, {\it close to} \\ {\it The little girl is ahead of everyone else in the race .} \\ {\it The project is close to completion .} \\ \item[Declarative tree:] See Figure~\ref{nx0ARBPnx1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0ARBPnx1.ps,height=4.9cm} \end{tabular} \caption{Declarative PP Small Clause tree with two-word preposition, where the first word is an adverb, and the second word is a preposition: $\alpha$nx0ARBPnx1} \label{nx0ARBPnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved object of PP, relative clause on subject with and without comp, relative clause on object of PP with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping, imperative, NP Gerund. \end{description} \section{PP Small Clause, with Adj and Prep anchors: Tnx0APnx1} \label{nx0APnx1-family} \begin{description} \item[Description:] This family is selected by multi-word prepositions that can occur in small clause constructions. 
In particular, this family is selected by two-word prepositions, where the first word is an adjective, the second word a preposition. Both components of the multi-word preposition are anchors. For more information on small clause constructions, see section~\ref{sm-clause-xtag-analysis}. 8 multi-word prepositions select this tree family. \item[Examples:] {\it according to}, {\it void of} \\ {\it The operation we performed was according to standard procedure .} \\ {\it He is void of all feeling .} \\ \item[Declarative tree:] See Figure~\ref{nx0APnx1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0APnx1.ps,height=5.3cm} \end{tabular} \caption{Declarative PP Small Clause tree with two-word preposition, where the first word is an adjective, and the second word is a preposition: $\alpha$nx0APnx1} \label{nx0APnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, relative clause on subject with and without comp, relative clause on object of PP with and without comp, wh-moved object of PP, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{PP Small Clause, with Noun and Prep anchors: Tnx0NPnx1} \label{nx0NPnx1-family} \begin{description} \item[Description:] This family is selected by multi-word prepositions that can occur in small clause constructions. In particular, this family is selected by two-word prepositions, where the first word is a noun, the second word a preposition. Both components of the multi-word preposition are anchors. For more information on small clause constructions, see section~\ref{sm-clause-xtag-analysis}. 1 multi-word preposition selects this tree family. \item[Examples:] {\it thanks to} \\ {\it The fact that we are here tonight is thanks to the valiant efforts of our staff .} \\ \item[Declarative tree:] See Figure~\ref{nx0NPnx1-tree}. 
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0NPnx1.ps,height=5.3cm} \end{tabular} \caption{Declarative PP Small Clause tree with two-word preposition, where the first word is a noun, and the second word is a preposition: $\alpha$nx0NPnx1} \label{nx0NPnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved object of PP, relative clause on subject with and without comp, relative clause on object with comp, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{PP Small Clause, with Prep anchors: Tnx0PPnx1} \label{nx0PPnx1-family} \begin{description} \item[Description:] This family is selected by multi-word prepositions that can occur in small clause constructions. In particular, this family is selected by two-word prepositions, where both words are prepositions. Both components of the multi-word preposition are anchors. For more information on small clause constructions, see section~\ref{sm-clause-xtag-analysis}. 9 multi-word prepositions select this tree family. \item[Examples:] {\it on to}, {\it inside of} \\ {\it that detective is on to you .} \\ {\it The red box is inside of the blue box .} \\ \item[Declarative tree:] See Figure~\ref{nx0PPnx1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0PPnx1.ps,height=5.3cm} \end{tabular} \caption{Declarative PP Small Clause tree with two-word preposition, where both words are prepositions: $\alpha$nx0PPnx1} \label{nx0PPnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved object of PP, relative clause on subject with and without comp, relative clause on object of PP with and without comp/with PP pied-piping, imperative, adjunct (gap-less) relative clause with comp/with PP pied-piping. 
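Because both words of a preposition like {\it inside of} are anchors of a single elementary tree, the tree is usable only when every anchor is matched in the input. A rough sketch of this multi-anchor check (hypothetical Python, illustrative assumptions throughout; this is not XTAG's internal representation):

```python
# Hypothetical sketch of a multiply-anchored elementary tree for a
# two-word preposition such as "inside of": both words are anchors,
# so the tree can be used only when both appear in the input.

from dataclasses import dataclass

@dataclass
class ElementaryTree:
    family: str
    anchors: tuple  # every anchor must be matched before the tree is used

    def matches(self, tokens, i):
        """True if this tree's anchors occur at position i of tokens."""
        n = len(self.anchors)
        return tuple(tokens[i:i + n]) == self.anchors

inside_of = ElementaryTree("Tnx0PPnx1", ("inside", "of"))
tokens = "the red box is inside of the blue box".split()
assert inside_of.matches(tokens, 4)       # "inside of" found at position 4
assert not inside_of.matches(tokens, 3)   # "is inside" does not anchor it
```

The same check covers the three-word prepositions below; only the length of the anchor tuple changes.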
\end{description} \section{PP Small Clause, with Prep and Noun anchors: Tnx0PNaPnx1} \label{nx0PNaPnx1-family} \begin{description} \item[Description:] This family is selected by multi-word prepositions that can occur in small clause constructions. In particular, this family is selected by three-word prepositions. The first and third words are always prepositions, and the middle word is a noun. The noun is marked for null adjunction since it cannot be modified by noun modifiers. All three components of the multi-word preposition are anchors. For more information on small clause constructions, see section~\ref{sm-clause-xtag-analysis}. 24 multi-word prepositions select this tree family. \item[Examples:] {\it in back of}, {\it in line with}, {\it on top of} \\ {\it The red plaid box should be in back of the plain black box .} \\ {\it The evidence is in line with my newly concocted theory .} \\ {\it She is on top of the world .} \\ {\it *She is on direct top of the world .} \\ \item[Declarative tree:] See Figure~\ref{nx0PNaPnx1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0PNaPnx1.ps,height=5.5cm} \end{tabular} \caption{Declarative PP Small Clause tree with three-word preposition, where the middle noun is marked for null adjunction: $\alpha$nx0PNaPnx1} \label{nx0PNaPnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, wh-moved object of PP, relative clause on subject with and without comp, relative clause on object of PP with and without comp/with PP pied-piping, adjunct (gap-less) relative clause with comp/with PP pied-piping, imperative, NP Gerund. \end{description} \section{PP Small Clause with Sentential Subject, and Adv and Prep anchors: Ts0ARBPnx1} \label{s0ARBPnx1-family} \begin{description} \item[Description:] This tree family is selected by multi-word prepositions that take sentential subjects. 
In particular, this family is selected by two-word prepositions, where the first word is an adverb, the second word a preposition. Both components of the multi-word preposition are anchors. The sentential subject can be indicative or infinitive. Small clauses are explained in much greater detail in section~\ref{sm-clause-xtag-analysis}. 2 prepositions select this tree family. \item[Examples:] {\it due to}, {\it contrary to} \\ {\it that David slept until noon is due to the fact that he never sleeps during the week .} \\ {\it that Michael's joke was funny is contrary to the usual status of his comic attempts .} \\ \item[Declarative tree:] See Figure~\ref{s0ARBPnx1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphas0ARBPnx1.ps,height=5.5cm} \end{tabular} \caption{Declarative PP Small Clause with Sentential Subject Tree, with two-word preposition, where the first word is an adverb, and the second word is a preposition: $\alpha$s0ARBPnx1} \label{s0ARBPnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, relative clause on object of the PP with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{PP Small Clause with Sentential Subject, and Adj and Prep anchors: Ts0APnx1} \label{s0APnx1-family} \begin{description} \item[Description:] This tree family is selected by multi-word prepositions that take sentential subjects. In particular, this family is selected by two-word prepositions, where the first word is an adjective, the second word a preposition. Both components of the multi-word preposition are anchors. The sentential subject can be indicative or infinitive. Small clauses are explained in much greater detail in section~\ref{sm-clause-xtag-analysis}. 5 prepositions select this tree family. 
\item[Examples:] {\it devoid of}, {\it according to} \\ {\it that he could walk out on her is devoid of all reason .} \\ {\it that the conversation erupted precisely at that moment was according to my theory .} \\ \item[Declarative tree:] See Figure~\ref{s0APnx1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphas0APnx1.ps,height=5.5cm} \end{tabular} \caption{Declarative PP Small Clause with Sentential Subject Tree, with two-word preposition, where the first word is an adjective, and the second word is a preposition: $\alpha$s0APnx1} \label{s0APnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, relative clause on object of the PP with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{PP Small Clause with Sentential Subject, and Noun and Prep anchors: Ts0NPnx1} \label{s0NPnx1-family} \begin{description} \item[Description:] This tree family is selected by multi-word prepositions that take sentential subjects. In particular, this family is selected by two-word prepositions, where the first word is a noun, the second word a preposition. Both components of the multi-word preposition are anchors. The sentential subject can be indicative or infinitive. Small clauses are explained in much greater detail in section~\ref{sm-clause-xtag-analysis}. 1 preposition selects this tree family. \item[Examples:] {\it thanks to} \\ {\it that she is worn out is thanks to a long day in front of the computer terminal .} \\ \item[Declarative tree:] See Figure~\ref{s0NPnx1-tree}. 
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphas0NPnx1.ps,height=5.5cm} \end{tabular} \caption{Declarative PP Small Clause with Sentential Subject Tree, with two-word preposition, where the first word is a noun, and the second word is a preposition: $\alpha$s0NPnx1} \label{s0NPnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, relative clause on object of the PP with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{PP Small Clause with Sentential Subject, and Prep anchors: Ts0PPnx1} \label{s0PPnx1-family} \begin{description} \item[Description:] This tree family is selected by multi-word prepositions that take sentential subjects. In particular, this family is selected by two-word prepositions, where both words are prepositions. Both components of the multi-word preposition are anchors. The sentential subject can be indicative or infinitive. Small clauses are explained in much greater detail in section~\ref{sm-clause-xtag-analysis}. 3 prepositions select this tree family. \item[Examples:] {\it outside of} \\ {\it that Mary did not complete the task on time is outside of the scope of this discussion .} \\ \item[Declarative tree:] See Figure~\ref{s0PPnx1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphas0PPnx1.ps,height=5.5cm} \end{tabular} \caption{Declarative PP Small Clause with Sentential Subject Tree, with two-word preposition, where both words are prepositions: $\alpha$s0PPnx1} \label{s0PPnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, relative clause on object of the PP with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping. 
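The null-adjunction (NA) marking placed on the middle noun of three-word prepositions like {\it on top of} (see the Tnx0PNaPnx1 family, section~\ref{nx0PNaPnx1-family}) can be pictured as a per-node constraint that the adjunction operation consults. A small sketch of that check (hypothetical Python; node objects and labels are illustrative assumptions, not XTAG's encoding):

```python
# Sketch of a null-adjunction (NA) constraint of the kind used on the
# middle noun of "on top of" (illustrative code only; XTAG marks the
# constraint directly on the tree node).

class Node:
    def __init__(self, label, null_adjunction=False):
        self.label = label
        self.null_adjunction = null_adjunction

def can_adjoin(aux_root_label, node):
    """An auxiliary tree may adjoin only at a matching, non-NA node."""
    return node.label == aux_root_label and not node.null_adjunction

top = Node("N", null_adjunction=True)  # "top" in "on top of" is NA-marked
assert not can_adjoin("N", top)        # blocks *"on direct top of"
assert can_adjoin("N", Node("N"))      # ordinary N nodes accept modifiers
```

Under this picture, the ungrammaticality of the starred examples falls out of the constraint rather than of the lexicon.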
\end{description} \section{PP Small Clause with Sentential Subject, and Prep and Noun anchors: Ts0PNaPnx1} \label{s0PNaPnx1-family} \begin{description} \item[Description:] This tree family is selected by multi-word prepositions that take sentential subjects. In particular, this family is selected by three-word prepositions. The first and third words are always prepositions, and the middle word is a noun. The noun is marked for null adjunction since it cannot be modified by noun modifiers. All three components of the multi-word preposition are anchors. Small clauses are explained in much greater detail in section~\ref{sm-clause-xtag-analysis}. 9 prepositions select this tree family. \item[Examples:] {\it on account of}, {\it in support of} \\ {\it that Joe had to leave the beach was on account of the hurricane .} \\ {\it that Maria could not come is in support of my theory about her .} \\ {\it *that Maria could not come is in direct/strict/desperate support of my theory about her .} \\ \item[Declarative tree:] See Figure~\ref{s0PNaPnx1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphas0PNaPnx1.ps,height=5.5cm} \end{tabular} \caption{Declarative PP Small Clause with Sentential Subject Tree, with three-word preposition, where the middle noun is marked for null adjunction: $\alpha$s0PNaPnx1} \label{s0PNaPnx1-tree} \end{figure} \item[Other available trees:] wh-moved subject, relative clause on object of the PP with and without comp, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{Predicative Adjective with Sentential Subject and Complement: Ts0A1s1} \label{s0A1s1-family} \begin{description} \item[Description:] This tree family is selected by predicative adjectives that take both a sentential subject and a sentential complement; only {\it likely} and {\it certain} select it. 
\item[Examples:] {\it likely}, {\it certain} \\ {\it that Max continues to drive a Jaguar is certain to make Bill jealous .} \\ {\it for the Jaguar to be towed seems likely to make Max very angry .} \\ \item[Declarative tree:] See Figure~\ref{s0A1s1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphas0A1s1.ps,height=4.8cm} \end{tabular} \caption{Predicative Adjective with Sentential Subject and Complement: $\alpha$s0A1s1} \label{s0A1s1-tree} \end{figure} \item[Other available trees:] wh-moved subject, adjunct (gap-less) relative clause with comp/with PP pied-piping. \end{description} \section{Locative Small Clause with Ad anchor: Tnx0nx1ARB} \label{nx0nx1ARB-family} \begin{description} \item[Description:] These trees are not anchored by verbs, but by adverbs that are part of locative adverbial phrases. Locatives are explained in much greater detail in the section on the locative modifier trees (see section~\ref{locatives}). The only remarkable aspect of this tree family is the wh-moved locative tree, $\alpha$W1nx0nx1ARB, shown in Figure~\ref{W1nx0nx1ARB-tree}. This is the only tree family with this type of transformation, in which the entire adverbial phrase is wh-moved but not all elements are replaced by wh items (as in {\it how many city blocks away is the record store?}). Locatives that consist of just the locative adverb or the locative adverb and a degree adverb (see Section \ref{locatives} for details) are treated as exhaustive PPs and therefore select that tree family (Section~\ref{nx0Px1-family}) when used predicatively. For an extensive description of small clauses, see Section~\ref{sm-clause-xtag-analysis}. 26 adverbs select this tree family. \item[Examples:] {\it ahead}, {\it offshore}, {\it behind} \\ {\it the crash is three blocks ahead} \\ {\it the naval battle was many kilometers offshore} \\ {\it how many blocks behind was Max?} \\ \item[Declarative tree:] See Figure~\ref{nx0nx1ARB-tree}. 
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0nx1ARB.ps,height=5.0cm} \end{tabular} \caption{Declarative Locative Adverbial Small Clause Tree: $\alpha$nx0nx1ARB} \label{nx0nx1ARB-tree} \label{3;nx0nx1ARB} \end{figure} \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphaW1nx0nx1ARB.ps,height=6.0cm} \end{tabular} \caption{Wh-moved Locative Small Clause Tree: $\alpha$W1nx0nx1ARB} \label{W1nx0nx1ARB-tree} \label{3;W1nx0nx1ARB} \end{figure} \item[Other available trees:] wh-moved subject, relative clause on subject with and without comp, wh-moved locative, imperative, NP gerund. \end{description} \section{Exceptional Case Marking: TXnx0Vs1}\index{verbs,ecm} \label{Xnx0Vs1-family} \begin{description} \item[Description:] This tree family is selected by verbs that are classified as exceptional case marking, meaning that the verb assigns accusative case to the subject of the sentential complement. This is in contrast to verbs in the Tnx0Vnx1s2 family (section~\ref{nx0Vnx1s2-family}), which assign accusative case to an NP which is not part of the sentential complement. ECM verbs take sentential complements which are either an infinitive or a ``bare'' infinitive. As with the Tnx0Vs1 family (section~\ref{nx0Vs1-family}), the declarative and other trees in the Xnx0Vs1 family are auxiliary trees, as opposed to the more common initial trees. These auxiliary trees adjoin onto an S node in an existing tree of the type specified by the sentential complement. This is the mechanism by which TAGs are able to maintain long-distance dependencies (see Chapter~\ref{extraction}), even over multiple embeddings (e.g. {\it Who did Bill expect to eat beans?} or {\it who did Bill expect Mary to like?}). See section~\ref{ecm-verbs} for details on this family. 20 verbs select this tree family. 
\item[Examples:] {\it expect}, {\it see} \\ {\it Van expects Bob to talk .} \\ {\it Bob sees the harmonica fall .} \item[Declarative tree:] See Figure~\ref{Xnx0Vs1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/betaXnx0Vs1.ps,height=3.4cm} \end{tabular} \caption{ECM Tree: $\beta$Xnx0Vs1} \label{Xnx0Vs1-tree} \end{figure} \item[Other available trees:] wh-moved subject, subject relative clause with and without comp, adjunct (gap-less) relative clause with and without comp/with PP pied-piping, imperative, NP gerund. \end{description} \section{Idiom with V, D, and N anchors: Tnx0VDN1}\index{verbs,idiomatic} \label{nx0VDN1-family} \begin{description} \item[Description:] This tree family is selected by idiomatic phrases in which the verb, determiner, and NP are all frozen (as in {\it He kicked the bucket.}). Only a limited number of transformations are allowed, as compared to the normal transitive tree family (see section~\ref{nx0Vnx1-family}). Other idioms that have the same structure as {\it kick the bucket}, and that are limited to the same transformations, would select this tree, while different tree families are used to handle other idioms. Note that {\it John kicked the bucket} is actually ambiguous, and would result in two parses: an idiomatic one (meaning that John died), and a compositional transitive one (meaning that there is a physical bucket that John hit with his foot). 1 idiom selects this family. \item[Examples:] {\it kick the bucket} \\ {\it Nixon kicked the bucket .} \item[Declarative tree:] See Figure~\ref{nx0VDN1-tree}. 
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0Vdn1.ps,height=5.2cm} \end{tabular} \caption{Declarative Transitive Idiom Tree: $\alpha$nx0VDN1} \label{nx0VDN1-tree} \end{figure} \item[Other available trees:] subject relative clause with and without comp, wh-moved subject, imperative, NP gerund, adjunct (gap-less) relative clause with comp/with PP pied-piping, passive with and without {\it by} phrase, wh-moved object of {\it by} phrase, wh-moved {\it by} phrase, relative (with and without comp) on subject of passive, PP relative. \end{description} \section{Idiom with V, D, A, and N anchors: Tnx0VDAN1}\index{verbs,idiomatic} \label{nx0VDAN1-family} \begin{description} \item[Description:] This tree family is selected by transitive idioms that are anchored by a verb, determiner, adjective, and noun. 19 idioms select this family. \item[Examples:] {\it have a green thumb}, {\it sing a different tune} \\ {\it Martha might have a green thumb, but it's uncertain after the death of all the plants.} \\ {\it After his conversion John sang a different tune.} \\ \item[Declarative tree:] See Figure~\ref{nx0VDAN1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0VDAN1.ps,height=5.0cm} \end{tabular} \caption{Declarative Idiom with V, D, A, and N Anchors Tree: $\alpha$nx0VDAN1} \label{nx0VDAN1-tree} \label{3;nx0VDAN1} \end{figure} \item[Other available trees:] Subject relative clause with and without comp, adjunct relative clause with comp/with PP pied-piping, wh-moved subject, imperative, NP gerund, passive without {\it by} phrase, passive with {\it by} phrase, passive with wh-moved object of {\it by} phrase, passive with wh-moved {\it by} phrase, passive with relative on object of {\it by} phrase with and without comp. 
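The ECM trees above (section~\ref{Xnx0Vs1-family}) are auxiliary trees that adjoin at an S node; this is what lets a wh-filler and its gap stay inside a single elementary tree even across embeddings like {\it who did Bill expect to eat beans?}. A toy sketch of that adjunction step follows (hypothetical Python; the bracketed-list tree encoding and the node labels are illustrative assumptions, not XTAG's data structures):

```python
# Toy sketch of adjunction as used by the ECM family: the auxiliary
# tree for "did Bill expect" adjoins at the S node of a wh-tree, so the
# wh-word and its gap remain in one elementary tree.  Illustrative only.

def adjoin(tree, aux):
    """Replace the first S node of `tree` with `aux`, putting the
    original S subtree at the auxiliary tree's foot node ('S*')."""
    if isinstance(tree, list):
        if tree[0] == "S":
            return [part if part != "S*" else tree for part in aux]
        return [adjoin(part, aux) for part in tree]
    return tree

# wh-tree: "who ... to eat beans" -- filler and gap in one tree
wh_tree = ["Sq", "who", ["S", ["NP", "GAP"], ["VP", "to", "eat", "beans"]]]
# auxiliary tree for the ECM verb: "did Bill expect [S*]"
aux = ["S", "did", "Bill", "expect", "S*"]

result = adjoin(wh_tree, aux)
assert result == ["Sq", "who",
                  ["S", "did", "Bill", "expect",
                   ["S", ["NP", "GAP"], ["VP", "to", "eat", "beans"]]]]
```

Adjoining a second such auxiliary tree would model a further embedding, with the filler-gap relation still untouched.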
\end{description} \section{Idiom with V and N anchors: Tnx0VN1}\index{verbs,idiomatic} \label{nx0VN1-family} \begin{description} \item[Description:] This tree family is selected by transitive idioms that are anchored by a verb and noun. 15 idioms select this family. \item[Examples:] {\it draw blood}, {\it cry wolf} \\ {\it Graham's retort drew blood.} \\ {\it The neglected boy cried wolf.} \\ \item[Declarative tree:] See Figure~\ref{nx0VN1-tree}. \begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0VN1.ps,height=5.0cm} \end{tabular} \caption{Declarative Idiom with V and N Anchors Tree: $\alpha$nx0VN1} \label{nx0VN1-tree} \label{3;nx0VN1} \end{figure} \item[Other available trees:] Subject relative clause with and without comp, adjunct relative clause with comp/with PP pied-piping, wh-moved subject, imperative, NP gerund, passive without {\it by} phrase, passive with {\it by} phrase, passive with wh-moved object of {\it by} phrase, passive with wh-moved {\it by} phrase, passive with relative on object of {\it by} phrase with and without comp. \end{description} \section{Idiom with V, A, and N anchors: Tnx0VAN1}\index{verbs,idiomatic} \label{nx0VAN1-family} \begin{description} \item[Description:] This tree family is selected by transitive idioms that are anchored by a verb, adjective, and noun. 4 idioms select this family. \item[Examples:] {\it break new ground}, {\it cry bloody murder} \\ {\it The avant-garde film breaks new ground.} \\ {\it The investors cried bloody murder after the suspicious takeover.} \\ \item[Declarative tree:] See Figure~\ref{nx0VAN1-tree}. 
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0VAN1.ps,height=5.0cm} \end{tabular} \caption{Declarative Idiom with V, A, and N Anchors Tree: $\alpha$nx0VAN1} \label{nx0VAN1-tree} \label{3;nx0VAN1} \end{figure} \item[Other available trees:] Subject relative clause with and without comp, adjunct relative clause with comp/with PP pied-piping, wh-moved subject, imperative, NP gerund, passive without {\it by} phrase, passive with {\it by} phrase, passive with wh-moved object of {\it by} phrase, passive with wh-moved {\it by} phrase, passive with relative on object of {\it by} phrase with and without comp. \end{description} \section{Idiom with V, D, A, N, and Prep anchors: Tnx0VDAN1Pnx2}\index{verbs,idiomatic} \label{nx0VDAN1Pnx2-family} \begin{description} \item[Description:] This tree family is selected by transitive idioms that are anchored by a verb, determiner, adjective, noun, and preposition. 6 idioms select this family. \item[Examples:] {\it make a big deal about}, {\it make a great show of} \\ {\it John made a big deal about a minuscule dent in his car.} \\ {\it The company made a great show of paying generous dividends.} \\ \item[Declarative tree:] See Figure~\ref{nx0VDAN1Pnx2-tree}. 
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0VDAN1Pnx2.ps,height=5.0cm} \end{tabular} \caption{Declarative Idiom with V, D, A, N, and Prep Anchors Tree: $\alpha$nx0VDAN1Pnx2} \label{nx0VDAN1Pnx2-tree} \label{3;nx0VDAN1Pnx2} \end{figure} \item[Other available trees:] Subject relative clause with and without comp, adjunct relative clause with comp/with PP pied-piping, wh-moved subject, imperative, NP gerund, passive without {\it by} phrase, passive with {\it by} phrase, passive with wh-moved object of {\it by} phrase, passive with wh-moved {\it by} phrase, outer passive without {\it by} phrase, outer passive with {\it by} phrase, outer passive with wh-moved {\it by} phrase, outer passive with wh-moved object of {\it by} phrase, outer passive without {\it by} phrase with relative on the subject with and without comp, outer passive with {\it by} phrase with relative on subject with and without comp. \end{description} \section{Idiom with V, A, N, and Prep anchors: Tnx0VAN1Pnx2}\index{verbs,idiomatic} \label{nx0VAN1Pnx2-family} \begin{description} \item[Description:] This tree family is selected by transitive idioms that are anchored by a verb, adjective, noun, and preposition. 3 idioms select this family. \item[Examples:] {\it make short work of} \\ {\it John made short work of the glazed ham.} \\ \item[Declarative tree:] See Figure~\ref{nx0VAN1Pnx2-tree}. 
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0VAN1Pnx2.ps,height=5.0cm} \end{tabular} \caption{Declarative Idiom with V, A, N, and Prep Anchors Tree: $\alpha$nx0VAN1Pnx2} \label{nx0VAN1Pnx2-tree} \label{3;nx0VAN1Pnx2} \end{figure} \item[Other available trees:] Subject relative clause with and without comp, adjunct relative clause with comp/with PP pied-piping, wh-moved subject, imperative, NP gerund, passive without {\it by} phrase, passive with {\it by} phrase, passive with wh-moved object of {\it by} phrase, passive with wh-moved {\it by} phrase, outer passive without {\it by} phrase, outer passive with {\it by} phrase, outer passive with wh-moved {\it by} phrase, outer passive with wh-moved object of {\it by} phrase, outer passive without {\it by} phrase with relative on the subject with and without comp, outer passive with {\it by} phrase with relative on subject with and without comp. \end{description} \section{Idiom with V, N, and Prep anchors: Tnx0VN1Pnx2}\index{verbs,idiomatic} \label{nx0VN1Pnx2-family} \begin{description} \item[Description:] This tree family is selected by transitive idioms that are anchored by a verb, noun, and preposition. 6 idioms select this family. \item[Examples:] {\it look daggers at}, {\it keep track of} \\ {\it Maria looked daggers at her ex-husband across the courtroom.} \\ {\it The company kept track of its inventory.} \\ \item[Declarative tree:] See Figure~\ref{nx0VN1Pnx2-tree}.
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0VN1Pnx2.ps,height=5.0cm} \end{tabular} \caption{Declarative Idiom with V, N, and Prep Anchors Tree: $\alpha$nx0VN1Pnx2} \label{nx0VN1Pnx2-tree} \label{3;nx0VN1Pnx2} \end{figure} \item[Other available trees:] Subject relative clause with and without comp, adjunct relative clause with comp/with PP pied-piping, wh-moved subject, imperative, NP gerund, passive without {\it by} phrase, passive with {\it by} phrase, passive with wh-moved object of {\it by} phrase, passive with wh-moved {\it by} phrase, outer passive without {\it by} phrase, outer passive with {\it by} phrase, outer passive with wh-moved {\it by} phrase, outer passive with wh-moved object of {\it by} phrase, outer passive without {\it by} phrase with relative on the subject with and without comp, outer passive with {\it by} phrase with relative on subject with and without comp. \end{description} \section{Idiom with V, D, N, and Prep anchors: Tnx0VDN1Pnx2}\index{verbs,idiomatic} \label{nx0VDN1Pnx2-family} \begin{description} \item[Description:] This tree family is selected by transitive idioms that are anchored by a verb, determiner, noun, and preposition. 17 idioms select this family. \item[Examples:] {\it make a mess of}, {\it keep the lid on} \\ {\it John made a mess of his new suit.} \\ {\it The tabloid didn't keep the lid on the imminent celebrity nuptials.} \\ \item[Declarative tree:] See Figure~\ref{nx0VDN1Pnx2-tree}.
\begin{figure}[htb] \centering \begin{tabular}{c} \psfig{figure=ps/verb-class-files/alphanx0VDN1Pnx2.ps,height=5.0cm} \end{tabular} \caption{Declarative Idiom with V, D, N, and Prep Anchors Tree: $\alpha$nx0VDN1Pnx2} \label{nx0VDN1Pnx2-tree} \label{3;nx0VDN1Pnx2} \end{figure} \item[Other available trees:] Subject relative clause with and without comp, adjunct relative clause with comp/with PP pied-piping, wh-moved subject, imperative, NP gerund, passive without {\it by} phrase, passive with {\it by} phrase, passive with wh-moved object of {\it by} phrase, passive with wh-moved {\it by} phrase, outer passive without {\it by} phrase, outer passive with {\it by} phrase, outer passive with wh-moved {\it by} phrase, outer passive with wh-moved object of {\it by} phrase, outer passive without {\it by} phrase with relative on the subject with and without comp, outer passive with {\it by} phrase with relative on subject with and without comp. \end{description}
\section{Introduction} Much attention has been given recently to the study of large uniform triangulations of the sphere. Historically, these triangulations were first considered by physicists as a discrete model for quantum gravity. Before the introduction of more direct tools (bijections with trees or the peeling process), the first simulations \cite{JKP86, KKM85} were made using a Monte-Carlo method based on flips of triangulations. More precisely, for all $n \geq 3$, let $\mathscr{T}_n$ be the set of rooted type-I triangulations of the sphere with $n$ vertices (that is, triangulations that may contain loops and multiple edges, equipped with a distinguished oriented edge). If $t$ is a triangulation, we write $V(t)$ for the set of its vertices and $E(t)$ for the set of its edges. If $t \in \mathscr{T}_n$ and $e\in E(t)$, we write $\mathfrak{flip}(t,e)$ for the triangulation obtained by removing the edge $e$ from $t$ and drawing the other diagonal of the face of degree $4$ that appears. We say that $\mathfrak{flip}(t,e)$ is obtained from $t$ by \textit{flipping} the edge $e$ (cf. Figure \ref{figureflip}). Note that it is possible to flip a loop and to flip the root edge. The only case in which an edge cannot be flipped is if both of its sides are adjacent to the same face, like the edge $e_2$ in Figure \ref{figureflip}. In this case, $\mathfrak{flip}(t,e)=t$. Note that there is a natural bijection between $E(t)$ and $E \left( \mathfrak{flip}(t,e) \right)$. When there is no ambiguity, we shall sometimes treat an element of one of these two sets as if it belonged to the other. \begin{figure} \begin{center} \begin{tikzpicture} \draw[very thick, orange] (2,3)--(2,2.3); \draw (2.4,2.8)[orange] node[texte] {$e_2$}; \draw (2,2.3) node{}; \draw (0,0) node{}--(4,0) node{}; \draw (0,0) node{}--(2,3) node{}; \draw (2,3) node{}--(4,0) node{}; \draw (2,3) node{} to[in=180, out=250] (2,2); \draw (2,3) node{} to[in=0, out=290] (2,2); \draw (2,3) node{} .. controls (1.5,1.8) ..
(2.2,1.5) node{}; \draw (2,3) node{} to[in=60, out=300] (2.2,1.5) node{}; \draw[very thick, blue] (0,0) node{}--(2.5,0.7) node{}; \draw (2,0.3)[blue] node[texte] {$e_1$}; \draw (4,0) node{}--(2.5,0.7) node{}; \draw (4,0) node{}--(2.2,1.5) node{}; \draw (1.5,0.9) node{}--(2.5,0.7) node{}; \draw (2.5,0.7) node{}--(2.2,1.5) node{}; \draw (1.5,0.9) node{}--(0,0) node{}; \draw (1.5,0.9) node{} to[bend left=18] (2,3) node{}; \draw (1.5,0.9) node{}--(2.2,1.5) node{}; \draw (2,-0.5) node[texte]{$t$}; \begin{scope}[shift={(6,0)}] \draw (2,3)--(2,2.3); \draw[very thick, blue] (1.5,0.9) to[bend right=15] (4,0); \draw (2,2.3) node{}; \draw (0,0) node{}--(4,0) node{}; \draw (0,0) node{}--(2,3) node{}; \draw (2,3) node{}--(4,0) node{}; \draw (2,3) node{} to[in=180, out=250] (2,2); \draw (2,3) node{} to[in=0, out=290] (2,2); \draw (2,3) node{} .. controls (1.5,1.8) .. (2.2,1.5) node{}; \draw (2,3) node{} to[in=60, out=300] (2.2,1.5) node{}; \draw (4,0) node{}--(2.5,0.7) node{}; \draw (4,0) node{}--(2.2,1.5) node{}; \draw (1.5,0.9) node{}--(2.5,0.7) node{}; \draw (2.5,0.7) node{}--(2.2,1.5) node{}; \draw (1.5,0.9) node{}--(0,0) node{}; \draw (1.5,0.9) node{} to[bend left=18] (2,3) node{}; \draw (1.5,0.9) node{}--(2.2,1.5) node{}; \draw (2,-0.5) node[texte]{$\mathfrak{flip}(t,e_1)$}; \end{scope} \end{tikzpicture} \end{center} \vspace{-1cm} \caption{An example of flip of an edge. The orange edge $e_2$ is not flippable.} \label{figureflip} \end{figure} The graph of triangulations of the sphere in which two triangulations are related if one can pass from one to the other by flipping an edge has already been studied in the type-III setting (that is, triangulations with neither loops nor multiple edges): it is connected \cite{W36} and its diameter is linear in $n$ \cite{K97}. We extend these results to our setup in Lemma \ref{irreducibility}. 
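The flip operation is straightforward to implement for \emph{simple} triangulations, where a face can be stored as a set of three vertices. The Python sketch below (ours, purely illustrative; all names are our own) follows the definition above but deliberately ignores the loops and multiple edges allowed in the type-I setting, which would require a full combinatorial-map (half-edge) representation.

```python
def flip(faces, edge):
    """Flip `edge` (a frozenset {u, v}) in a simple triangulation.

    `faces` is a set of frozensets, each one a triangle {a, b, c}.
    Returns the new face set; if the flip is impossible here (the edge
    does not have two distinct adjacent faces, or the new diagonal is
    already present, which a simple map cannot accommodate), the
    triangulation is returned unchanged, mimicking flip(t, e) = t.
    """
    adjacent = [f for f in faces if edge <= f]
    if len(adjacent) != 2:
        return faces
    u, v = tuple(edge)
    (a,) = adjacent[0] - edge  # third vertex of the first face
    (b,) = adjacent[1] - edge  # third vertex of the second face
    if any(frozenset({a, b}) <= f for f in faces):
        return faces  # the other diagonal already exists
    new_faces = set(faces) - set(adjacent)
    new_faces |= {frozenset({a, b, u}), frozenset({a, b, v})}
    return new_faces

def edges(faces):
    """All edges of the triangulation, as frozensets of endpoints."""
    return {frozenset(e) for f in faces
            for e in zip(tuple(f), tuple(f)[1:] + tuple(f)[:1])}

# The octahedron: 6 vertices, 12 edges, 8 triangular faces.
octahedron = {frozenset(t) for t in
              [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1),
               (5, 2, 1), (5, 3, 2), (5, 4, 3), (5, 1, 4)]}
```

A step of the Markov chain defined below then simply picks a uniform element of `edges(faces)` and applies `flip`; note that flipping an edge and then flipping the new diagonal back recovers the original triangulation.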
We define a Markov chain $(T_n(k))_{k \geq 0}$ on $\mathscr{T}_n$ as follows: conditionally on $(T_n(0), \dots, T_n(k))$, let $e_k$ be a uniformly chosen edge of $T_n(k)$. We take $T_{n}(k+1)=\mathfrak{flip}(T_n(k),e_k)$. It is easy to see that the uniform measure on $\mathscr{T}_n$ is reversible, and thus stationary, for $\left( T_n(k) \right)_{k \geq 0}$, so this Markov chain converges to the uniform distribution (irreducibility is guaranteed by the connectedness results described above, and aperiodicity by the possible existence of non-flippable edges). It is then natural to estimate the mixing time of $\left( T_n(k) \right)_{k \geq 0}$ (see Section 4.5 of \cite{LPW09} for a proper definition of the mixing time). Our theorem provides a lower bound. \begin{thm} \label{mainthm} There is a constant $c>0$ such that for all $n \geq 3$ the mixing time of the Markov chain $(T_n(k))_{k \geq 0}$ is at least $c n^{5/4}$. \end{thm} Mixing times for other types of flip chains have also been investigated. For triangulations of a convex $n$-gon without inner vertices, it is known that the mixing time is polynomial and at least of order $n^{3/2}$ (see \cite{MT97, MRS98}). In particular, our proof was partly inspired by the proof of the lower bound in \cite{MRS98}. Finally, see \cite{CMSS15} for estimates on the mixing time of the flip walk on \textit{lattice triangulations}, that is, triangulations whose vertices are points on a lattice, with Boltzmann weights depending on the total length of their edges. The strategy of our proof is as follows: we start with two independent uniform triangulations with a boundary of length $1$ and roughly $\frac{n}{2}$ inner vertices each and glue them together along their boundaries. We obtain a triangulation of the sphere with a cycle of length $1$ such that about half of the vertices lie on each side of this cycle. We then start our Markov chain from this triangulation and discover one of the two sides of the cycle gradually by a peeling procedure.
By using the estimates of Curien and Le Gall \cite{CLGpeeling} and a result of Krikun about separating cycles in the UIPT \cite{Kri04}, we show that after $o(n^{5/4})$ flips, with high probability, the triangulation still has a cycle of length $o(n^{1/4})$, on each side of which lie a proportion at least $\frac{1}{4}$ of the vertices. But by a result of Le Gall and Paulin \cite{LGP08}, this is not the case in a uniform triangulation (this is the discrete counterpart of the homeomorphicity of the Brownian map to the sphere), which shows that a time $o(n^{5/4})$ is not enough to approach the uniform distribution. \paragraph{Acknowledgements:} I thank Nicolas Curien for carefully reading earlier versions of this manuscript. I also thank the anonymous referee for his useful comments. I acknowledge the support of ANR Liouville (ANR-15-CE40-0013) and ANR GRAAL (ANR-14-CE25-0014). \section{Combinatorial preliminaries and couplings} For all $n \geq 3$, we recall that $\mathscr{T}_n$ is the set of rooted type-I triangulations of the sphere with $n$ vertices. For $n \geq 0$ and $p \geq 1$ we also write $\mathscr{T}_{n,p}$ for the set of triangulations with a boundary of length $p$ and $n$ inner vertices, that is, planar maps with $n+p$ vertices in which all faces are triangles except one called the \textit{outer face} whose boundary is a simple cycle of length $p$, equipped with a root edge such that the outer face touches the root edge on its right. We will sometimes refer to $n$ and $p$ as the \textit{volume} and the \textit{perimeter} of the triangulation. The number of triangulations with fixed volume and perimeter can be computed by a result of Krikun. Here is a special case of the main theorem of \cite{Kri07} (the full theorem deals with triangulations with $r+1$ boundaries but we only use the case $r=0$): \begin{equation}\label{enumeration} \# \mathscr{T}_{n,p}=\frac{p(2p)!}{(p!)^2} \frac{4^{n-1} (2p+3n-5)!!}{n! 
(2p+n-1)!!} \underset{n \to +\infty}{\sim} C(p) \lambda_c^{-n} n^{-5/2}, \end{equation} where $\lambda_c=\frac{1}{12 \sqrt{3}}$ and $C(p) = \frac{3^{p-2} p (2p)!}{4 \sqrt{2 \pi} (p!)^2}$. In particular, a triangulation of the sphere with $n$ vertices is equivalent, after a root transformation, to a triangulation with a boundary of length $1$ and $n-1$ inner vertices (more precisely, we need to duplicate the root edge, add a loop in between, and root the map at this new loop; see for example Figure 2 in \cite{CLGmodif}), so \begin{equation} \label{enumerationSphere} \# \mathscr{T}_n = \# \mathscr{T}_{n-1,1} = 2 \frac{4^{n-2} \, (3n-6)!!}{(n-1)! \, n!!}. \end{equation} For $n \geq 0$ and $p \geq 1$ we write $T_{n,p}$ for a uniform triangulation with a boundary of length $p$ and $n$ inner vertices, and $T_n$ for a uniform triangulation of the sphere with $n$ vertices. We also recall that the UIPT, which we write $T_{\infty}$, is an infinite rooted planar triangulation whose distribution is characterized by the following equality. For any rooted triangulation $t$ with a hole of perimeter $p$, \begin{equation} \label{UIPT} \P \left( t \subset T_{\infty} \right)=C(p) \lambda_c^{|t|}, \end{equation} where $\lambda_c$ and $C(p)$ are as above, $|t|$ is the total number of vertices of $t$, and by $t \subset T_{\infty}$ we mean that $T_{\infty}$ can be obtained by filling the hole of $t$ with an infinite triangulation with a boundary of length $p$. In what follows we will use peeling explorations of random triangulations several times; see Section 4.1 of \cite{CLGpeeling} for a general definition. Let $t$ be a triangulation and $\mathscr{A}$ be a peeling algorithm, that is, a way to assign to every finite triangulation with one hole an edge on the boundary of the hole. We write $t^{\mathscr{A}}_j(t)$ for the part of $t$ discovered after $j$ steps of filled-in peeling following algorithm $\mathscr{A}$.
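The counting formulas \eqref{enumeration} and \eqref{enumerationSphere} are exact and can be cross-checked by direct computation. The short Python script below is our own sanity check, not part of the argument: it evaluates both formulas in integer arithmetic, verifies $\# \mathscr{T}_n = \# \mathscr{T}_{n-1,1}$, and compares $\# \mathscr{T}_{n,p}$ with the asymptotic estimate $C(p) \lambda_c^{-n} n^{-5/2}$ in log scale.

```python
import math

def double_fact(k):
    """k!! with the convention 0!! = (-1)!! = 1."""
    r = 1
    while k > 1:
        r *= k
        k -= 2
    return r

def count_T(n, p):
    """#T_{n,p}: rooted type-I triangulations with perimeter p and
    n >= 1 inner vertices, following the displayed formula."""
    num = (p * math.factorial(2 * p) * 4 ** (n - 1)
           * double_fact(2 * p + 3 * n - 5))
    den = (math.factorial(p) ** 2 * math.factorial(n)
           * double_fact(2 * p + n - 1))
    return num // den  # the division is exact: the count is an integer

def count_sphere(n):
    """#T_n: rooted type-I triangulations of the sphere, n >= 3 vertices."""
    num = 2 * 4 ** (n - 2) * double_fact(3 * n - 6)
    den = math.factorial(n - 1) * double_fact(n)
    return num // den

def log_asymptotic(n, p):
    """log of the estimate C(p) * lambda_c^{-n} * n^{-5/2}."""
    log_C = ((p - 2) * math.log(3) + math.log(p) + math.lgamma(2 * p + 1)
             - math.log(4 * math.sqrt(2 * math.pi)) - 2 * math.lgamma(p + 1))
    return log_C + n * math.log(12 * math.sqrt(3)) - 2.5 * math.log(n)
```

For instance, `count_sphere(3)` returns $4$, and the gap between `math.log(count_T(n, 1))` and `log_asymptotic(n, 1)` shrinks as $n$ grows, consistent with the stated asymptotics.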
By ``filled-in'' we mean that every time the peeled face separates the unknown part of the map into two connected components, we reveal the one with fewer vertices (if the two components have the same number of vertices, we reveal one component picked deterministically). If the map is infinite and one-ended, we reveal the bounded component. From the enumeration formulas it is possible to deduce precise coupling results between finite and infinite maps. The result we will need is similar to Proposition 12 of \cite{HBP}, but a bit more general, since it deals with triangulations with a boundary. We recall that in a triangulation $t$ of the sphere or the plane, the \textit{ball} of radius $r$, which we write $B_r(t)$, is the triangulation with holes formed by those faces adjacent to at least one vertex lying at distance at most $r-1$ from the root, along with all their edges and vertices. If $t$ is infinite, the \textit{hull} of radius $r$, which we write $B_r^{\bullet}(t)$, is the union of $B_r (t)$ and all the bounded connected components of its complement. If $t$ is finite, it is the union of $B_r (t)$ and all the connected components of its complement except the one that contains the most vertices (if there is a tie, we pick deterministically a component among those which contain the most vertices). If $T$ is a triangulation with a boundary, we adopt the same definitions but we replace the distance to the root by the distance to the boundary. \begin{lem}\label{couplagebord} Let $p_n=o(\sqrt{n})$ and $r_n=o(n^{1/4})$ with $p_n=o(r_n^2)$. Then there are $r'_n=o(r_n)$ and couplings between $T_{n, p_n}$ and $T_{\infty}$ such that \[ \mathbb{P} \left( B^{\bullet}_{r_n} (T_{\infty}) \backslash B^{\bullet}_{r'_n} (T_{\infty}) \subset B^{\bullet}_{r_n} (T_{n,p_n}) \right) \xrightarrow[n \to +\infty]{} 1. \] \end{lem} The above lemma follows from the following claim.
There is a cycle $\gamma'$ of length $p_n$ around the root of $T_{\infty}$ that lies inside of its hull of radius $r'_n$ and a cycle $\gamma$ in $T_{n,p_n}$ that stays at distance at most $r_n$ from its boundary, such that the part of the hull of radius $r_n$ of $T_{\infty}$ that lies outside of $\gamma'$ is isomorphic to the part of $T_{n,p_n}$ that lies between its boundary and $\gamma$ (see Figure \ref{figurecoupling}). \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.9] \fill[green!15] (0.5,4) to[bend right =15] (2,4) to[bend right =15] (0.5,4); \fill[green!30] (-1.5,1) to[out=330, in=195] (-0.5,0.7) to[out=15, in=165] (0.5,0.7) to[out=345, in=210] (1.5,0.8) to[out=30, in=270] (3.5,3) to[out=90, in=0] (3,4.5) to[out=180, in=0] (2,2.5) to[out=180, in=255] (2,4) to[bend left =15] (0.5,4) to[in=0, out=285] (0.5,3) to[in=0, out=180] (-1.5,5) to[in=120, out=180] (-1,3.5) to[in=0, out=300] (-1,2) to[in=0, out=180] (-2.5,4) to[in=135, out=180] (-1.5,1); \draw(0,0) to[out=160, in=315] (-1.5,1); \draw(0,0)node{} to[out=20, in=210] (1.5,0.8); \draw(-1.5,1) to[out=135, in=180] (-2.5,4); \draw(-2.5,4) to[out=0, in=180] (-1,2); \draw(-1,2) to[out=0, in=300] (-1,3.5); \draw(-1,3.5) to[out=120, in=180] (-1.5,5); \draw(-1.5,5) to[out=0, in=180] (0.5,3); \draw(1.5,0.8) to[out=30, in=270] (3.5,3); \draw(3.5,3) to[out=90, in=0] (3,4.5); \draw(3,4.5) to[out=180, in=0] (2,2.5); \draw(0.5,3) to[out=0, in=285] (0.5,4); \draw(2,2.5) to[out=180, in=255] (2,4); \draw(0.5,4) to[out=105, in=300] (-0.5,6.5); \draw(2,4) to[out=75, in=270] (3.5,5.5); \draw(3.5,5.5) to[out=90, in=0] (3,6); \draw(3,6) to[out=180, in=0] (2,5); \draw(2,5) to[out=180, in=240] (2.5,6.5); \draw[violet, thick] (-1.5,1) to[out=330, in=195] (-0.5,0.7); \draw[violet, thick] (-0.5,0.7) to[out=15, in=165] (0.5,0.7); \draw[violet, thick] (0.5,0.7) to[out=345, in=210] (1.5,0.8); \draw[violet, dashed, thick] (-1.5,1) to[bend left =10] (1.5,0.8); \draw[blue, thick] (0.5,4) to[bend right =15] (2,4); \draw[blue, 
dashed, thick] (0.5,4) to[bend left =15] (2,4); \draw[<->, red, very thick] (-2,0)--(-2,1); \draw[red] (-2.5,0.5) node[texte]{$\leq r'_n$}; \draw (0.3,-0.2) node[texte]{$\rho$}; \draw[violet] (1.8,0.6) node[texte]{$\gamma'$}; \draw[violet] (0.1,1.3) node[texte]{$p_n$}; \draw[<->, red, very thick] (-3.5,0)--(-3.5,4); \draw[red] (-3.8,2) node[texte]{$r_n$}; \draw (0,-1) node[texte]{$T_{\infty}$}; \begin{scope}[shift={(8,-1)}] \fill[green!15] (0.5,4) to[bend right =15] (2,4) to[bend right =15] (0.5,4); \fill[green!30] (-1.5,1) to[bend right =15] (1.5,1) to[out=30, in=270] (3.5,3) to[out=90, in=0] (3,4.5) to[out=180, in=0] (2,2.5) to[out=180, in=255] (2,4) to[bend left =15] (0.5,4) to[in=0, out=285] (0.5,3) to[in=0, out=180] (-1.5,5) to[in=120, out=180] (-1,3.5) to[in=0, out=300] (-1,2) to[in=0, out=180] (-2.5,4) to[in=135, out=180] (-1.5,1); \draw(-1.5,1) to[out=135, in=180] (-2.5,4); \draw(-2.5,4) to[out=0, in=180] (-1,2); \draw(-1,2) to[out=0, in=300] (-1,3.5); \draw(-1,3.5) to[out=120, in=180] (-1.5,5); \draw(-1.5,5) to[out=0, in=180] (0.5,3); \draw(1.5,1) to[out=30, in=270] (3.5,3); \draw(3.5,3) to[out=90, in=0] (3,4.5); \draw(3,4.5) to[out=180, in=0] (2,2.5); \draw(0.5,3) to[out=0, in=285] (0.5,4); \draw(2,2.5) to[out=180, in=255] (2,4); \draw(2,4) to[out=75, in=270] (3.5,5.5); \draw(3.5,5.5) to[out=90, in=0] (3,6); \draw(3,6) to[out=180, in=0] (2,5); \draw(0.5,4) to[out=105, in=180] (0,6.5); \draw(0,6.5) to[out=0, in=180] (2,5); \draw[violet, thick] (-1.5,1) to[bend right =15] (1.5,1); \draw[violet, dashed, thick] (-1.5,1) to[bend left =10] (1.5,1); \draw[blue, thick] (0.5,4) to[bend right =15] (2,4); \draw[blue, dashed, thick] (0.5,4) to[bend left =15] (2,4); \draw[<->, red, very thick] (4,1)--(4,4); \draw[red] (4.5,2.5) node[texte]{$\leq r_n$}; \draw[violet] (0,1.4) node[texte]{$p_n$}; \draw[blue] (1.25,4.5) node[texte]{$\gamma$}; \draw (0,0) node[texte]{$T_{n,p_n}$}; \end{scope} \end{tikzpicture} \end{center} \vspace{-5mm} \caption{Illustration of Lemma 
\ref{couplagebord}. With high probability, there are two cycles $\gamma'$ and $\gamma$ such that the two green parts coincide.} \label{figurecoupling} \end{figure} \begin{proof} We start by describing a coupling between the UIPT and the UIPT with a boundary of length $p_n$, that we write $T_{\infty,p_n}$. We consider the peeling by layers $\mathscr{L}$ of the UIPT (see section 4.1 of \cite{CLGpeeling}) and we write $\tau_{p_n}$ for the first time at which the perimeter of the discovered region is equal to $p_n$ (note that this time is always finite since the perimeter can increase by at most $1$ at each peeling step). By the spatial Markov property of the UIPT, the part that is still unknown at time $\tau_{p_n}$ has the distribution of $T_{\infty,p_n}$. Moreover, by the results of Curien and Le Gall (Theorem 1 of \cite{CLGpeeling}), since $p_n=o(r_n^2)$, we have $\tau_{p_n}=o(r_n^3)$. By using Proposition 9 of \cite{CLGpeeling} (more precisely the convergence of $H$), we obtain that the smallest hull of $T_{\infty}$ containing $t_{\tau_{p_n}}^{\mathscr{L}} \left( T_{\infty} \right)$ has radius $o(r_n)$ in probability. Hence, our result holds if we replace $T_{n, p_n}$ by $T_{\infty, p_n}$. Hence, it is enough to prove that there are couplings between $T_{\infty,p_n}$ and $T_{n,p_n}$ such that \[ \mathbb{P} \left( B^{\bullet}_{r_n}(T_{n,p_n})=B^{\bullet}_{r_n}(T_{\infty,p_n}) \right) \xrightarrow[n \to +\infty]{} 1.\] The proof relies on asymptotic enumeration results and is essentially the same as that of Proposition 12 of \cite{HBP}: by using the above coupling of $T_{\infty, p_n}$ and $T_{\infty}$ we can show that \[ \left( \frac{1}{\sqrt{n}} |\partial B^{\bullet}_{r_n}(T_{\infty, p_n})|, \frac{1}{n} |B^{\bullet}_{r_n}(T_{\infty, p_n})| \right) \xrightarrow[n \to +\infty]{(P)} (0,0). 
\] Moreover, if $q_n=o(\sqrt{n})$ and $v_n=o(n)$ and if $t_n$ is a triangulation with two holes of perimeters $p_n$ and $q_n$ (rooted on the boundary of the $p_n$-gon) and $v_n$ vertices that is a possible value of $B^{\bullet}_{r_n}(T_{\infty, p_n})$ for all $n \geq 0$, then \[ \frac{\mathbb{P} \left( B^{\bullet}_{r_n}(T_{n, p_n})=t_n \right)}{\mathbb{P} \left( B^{\bullet}_{r_n}(T_{\infty, p_n})=t_n \right)} \xrightarrow [n \to +\infty]{} 1\] by the enumeration results, and we can conclude as in Proposition 12 of \cite{HBP}. \end{proof} We will also need another coupling lemma where we do not compare hulls of a fixed radius, but rather the parts of triangulations that have been discovered after a fixed number of peeling steps. \begin{lem} Let $j_n=o(n^{3/4})$, and let $\mathscr{A}$ be a peeling algorithm. Then there are couplings between $T_{n}$ and $T_{\infty}$ such that \[\mathbb{P} \left( t_{j_n}^{\mathscr{A}}(T_{n})=t_{j_n}^{\mathscr{A}}(T_{\infty}) \right) \xrightarrow[n \to +\infty]{} 1.\] \end{lem} \begin{proof} We write $P_{\infty}(j)$ and $V_{\infty}(j)$ for, respectively, the perimeter and volume of $t_{j}^{\mathscr{A}}(T_{\infty})$.
By the results of \cite{CLGpeeling} we have the convergences \begin{equation}\label{PVinftysmall} \frac{1}{\sqrt{n}} \sup_{0 \leq j \leq j_n} P_{\infty}(j) \xrightarrow[n \to +\infty]{} 0 \hspace{5mm} \mbox{and} \hspace{5mm} \frac{1}{n} \sup_{0 \leq j \leq j_n} V_{\infty}(j) \xrightarrow[n \to +\infty]{} 0 \end{equation} in probability, so there are $p_n=o(\sqrt{n})$ and $v_n=o(n)$ such that \[\mathbb{P} \left( P_{\infty}(j_n) \leq p_n \mbox{ and } V_{\infty}(j_n) \leq v_n \right) \to 1.\] But by the enumeration results \eqref{enumeration}, \eqref{enumerationSphere} and by \eqref{UIPT}, if $t_n$ is a rooted triangulation with perimeter at most $p_n$ and volume at most $v_n$, we have \[\frac{\P \left(t_{j_n}^{\mathscr{A}} (T_n)=t_n \right)}{\P \left(t_{j_n}^{\mathscr{A}} (T_{\infty})=t_n \right)} = \frac{\mathbb{P} \big( t_n \subset T_{n} \big)}{\mathbb{P} \big( t_n \subset T_{\infty} \big)} \xrightarrow[n \to +\infty]{} 1.\] As in Proposition 12 of \cite{HBP}, this proves that the total variation distance between the distributions of $t_{j_n}^{\mathscr{A}}(T_{n})$ and $t_{j_n}^{\mathscr{A}}(T_{\infty})$ goes to $0$ as $n \to +\infty$, which proves our claim and the lemma. \end{proof} By combining this last lemma and the estimates \eqref{PVinftysmall}, we immediately obtain estimates about the peeling process on finite uniform triangulations. We write $P_n(j)$ and $V_n(j)$ for the perimeter and volume of $t^{\mathscr{A}}_j(T_n)$. \begin{corr} \label{estimatesPV} Let $j_n=o(n^{3/4})$. Then we have the following convergences in probability: \[ \frac{1}{\sqrt{n}} \sup_{0 \leq j \leq j_n} P_n(j) \xrightarrow[n \to +\infty]{} 0 \hspace{5mm} \mbox{and} \hspace{5mm} \frac{1}{n} \sup_{0 \leq j \leq j_n} V_n(j) \xrightarrow[n \to +\infty]{} 0.\] \end{corr} Finally, we show a result about small cycles surrounding the boundary in uniform triangulations with a perimeter small enough compared to their volume. 
\begin{lem}\label{smallcycle} Let $p_n=o(\sqrt{n})$ and $r_n=o(n^{1/4})$ be such that $p_n=o(r_n^2)$. Then for all $\varepsilon>0$, the probability of the event \begin{center} ``there is a cycle $\gamma$ in $T_{n,p_n}$ of length at most $r_n$ such that the part of $T_{n,p_n}$ lying between $\partial T_{n,p_n}$ and $\gamma$ contains at most $\varepsilon n$ vertices'' \end{center} goes to $1$ as $n \to +\infty$. \end{lem} This result is not surprising. In the context of quadrangulations with a non-simple boundary, it is a consequence of the convergence of quadrangulations with boundaries to Brownian disks, see \cite{BM15}. However, no scaling limit result is known yet for triangulations with boundaries. Hence, we will rely on a result of Krikun about small cycles in the UIPT, which we will combine with Lemma \ref{couplagebord}. Here is a restatement of Theorem 6 of \cite{Kri04}. \begin{thm}[Krikun] \label{Krikuncycle} For all $\varepsilon>0$, there is a constant $C$ such that for all $r$, with probability at least $1-\varepsilon$ there is a cycle of length at most $Cr$ surrounding $B_r^{\bullet}(T_{\infty})$ and lying in $B_{2r}^{\bullet}(T_{\infty})$. \end{thm} Note that Krikun deals with type-II triangulations, i.e.\ with multiple edges but no loops, but the decomposition used in \cite{Kri04} is still valid and even a bit simpler in the type-I setting, see \cite{CLGmodif}. The fact that the cycle stays in $B_{2r}^{\bullet}(T_{\infty})$ is not in the statement of the theorem in \cite{Kri04}, but it is immediate from its proof. \begin{proof}[Proof of Lemma \ref{smallcycle}] By Lemma \ref{couplagebord} it is possible to couple $T_{\infty}$ and $T_{n,p_n}$ in such a way that \begin{equation} \label{finalcoupling} \mathbb{P} \left( B^{\bullet}_{r_n} (T_{\infty}) \backslash B^{\bullet}_{r'_n} (T_{\infty}) \subset B^{\bullet}_{r_n} \left( T_{n,p_n} \right) \right) \xrightarrow[n \to +\infty]{} 1, \end{equation} where $r'_n=o(r_n)$.
On the other hand, by Theorem \ref{Krikuncycle}, we have \[ \mathbb{P} \left( \mbox{there is a cycle $\gamma$ of length $\leq r_n$ in $B_{2r'_n}^{\bullet}(T_{\infty})$ that surrounds $B^{\bullet}_{r'_n} \left( T_{\infty} \right)$} \right) \xrightarrow[n \to +\infty]{} 1.\] For $n$ large enough we have $r_n \geq 2r'_n$, so if such a $\gamma$ exists, then it must stay in $B^{\bullet}_{r_n} \left( T_{\infty} \right)$. Since $r_n=o(n^{1/4})$, the probability that the number of vertices lying inside of $\gamma$ is greater than $\varepsilon n$ goes to $0$ by Theorem 2 of \cite{CLGpeeling}. But if the event of \eqref{finalcoupling} holds and if such a cycle exists in $T_{\infty}$, then in $T_{n,p_n}$ there is a cycle $\gamma$ of length at most $r_n$ such that the part of $T_{n,p_n}$ lying between $\partial T_{n,p_n}$ and $\gamma$ contains at most $\varepsilon n$ vertices. \end{proof} \section{Proof of Theorem \ref{mainthm}} Our main task will be to prove the following proposition. \begin{prop} \label{propcycle} Let $k_n=o(n^{5/4})$. Then there are $t_n \in \mathscr{T}_n$ and $\ell_n=o(n^{1/4})$ such that conditionally on $T_n(0)=t_n$, the probability that there is a cycle of length at most $\ell_n$ that separates $T_n(k_n)$ into two parts of volume at least $\frac{n}{4}$ goes to $1$ as $n \to +\infty$. \end{prop} We first define the initial triangulation $T_n(0)$ we will be interested in: let $T^1_n(0)$ and $T^2_n(0)$ be two independent uniform triangulations with a boundary of length $1$ and with respectively $\lfloor \frac{n-1}{2} \rfloor$ and $\lceil \frac{n-1}{2} \rceil$ inner vertices. We write $T_n(0)$ for the triangulation obtained by gluing together the boundaries of $T^1_n(0)$ and $T^2_n(0)$. We will now perform an exploration of the triangulation while it gets flipped: the part $T^1_n$ will be considered as the ``discovered'' part and $T^2_n$ as the ``unknown'' part of the map.
More precisely, we define by induction $T^1_n(k)$ and $T^2_n(k)$ such that $T_n(k)$ is obtained by gluing together the boundaries of $T^1_n(k)$ and $T^2_n(k)$. The two triangulations for $k=0$ are defined above. Now assume we have constructed $T^1_n(k)$ and $T^2_n(k)$. Then: \begin{itemize} \item if $e_k$ lies inside of $T^1_n(k)$ then $T^1_n(k+1)=\mathfrak{flip}(T^1_n(k), e_k)$ and $T^2_n(k+1)=T^2_n(k)$, \item if $e_k$ lies inside of $T^2_n(k)$ then $T^1_n(k+1)=T^1_n(k)$ and $T^2_n(k+1)=\mathfrak{flip}(T^2_n(k), e_k)$, \item if $e_k \in \partial T^1_n(k)$, we write $f_k$ for the face of $T^2_n(k)$ that is adjacent to $e_k$, and we let $T^2_n(k+1)$ be the connected component of $T^2_n(k) \backslash f_k$ with the largest volume and $T^1_n(k+1)=\mathfrak{flip} \big( T_n(k) \backslash T^2_n(k+1), e_k \big)$. \end{itemize} We now set $\widetilde{P}_n(k)=|\partial T^1_n(k)|$ and $\widetilde{V}_n(k)= \left| V \left( T^1_n(k) \right) \right|- \left| V \left( T^1_n(0) \right) \right|+1$. Note that $\widetilde{V}_n(k)$ is nondecreasing in $k$. For $k \geq 0$, we define a random variable $e_k^* \in E(T^1_n(k)) \cup \{ \star \}$, where $\star$ is an additional state corresponding to all the edges not in $E(T^1_n(k))$, as follows: if $e_k$ lies inside or on the boundary of $T^1_n(k)$ then $e^*_k=e_k$, and if not then $e_k^*=\star$. We also define $\mathcal{F}_k$ as the $\sigma$-algebra generated by the variables $\left( T^1_n(i) \right)_{0 \leq i \leq k}$ and $(e_i^*)_{0 \leq i \leq k-1}$. \begin{lem} \label{resteruniforme} For all $k$, conditionally on $\mathcal{F}_k$, the triangulation $T^2_n(k)$ is a uniform triangulation with a boundary of length $\widetilde{P}_n(k)$ and $\lceil \frac{n+1}{2} \rceil -\widetilde{V}_n(k)$ inner vertices. \end{lem} \begin{proof} We prove the lemma by induction on $k$. For $k=0$ it is obvious by the definition of $T^2_n(0)$. Let $k \geq 0$ be such that the lemma holds for $k$. 
\begin{itemize} \item[$\bullet$] If $e_k^*$ lies inside $T^1_n(k)$, the result follows from the fact that $T^2_n(k)=T^2_n(k+1)$ and that conditionally on $\mathcal{F}_k$, the triangulation $T^2_n(k)$ is independent of $e^*_k$. \item[$\bullet$] If $e_k^*=\star$, it follows from the invariance of the uniform measure on $\mathscr{T}_{n,p}$ under flipping of a uniform edge among those which do not lie on the boundary. \item[$\bullet$] If $e_k^* \in \partial T^1_n(k)$, this is a standard peeling step: by invariance under rerooting of a uniform triangulation with fixed perimeter and volume, conditionally on $\mathcal{F}_k$ and $e_k$, the triangulation $T^2_n(k)$ rooted at $e_k$ is uniform. Hence, if the third vertex of the face $f_k$ of $T^2_n(k)$ adjacent to $e_k$ lies inside of $T^2_n(k)$, the remaining part of $T^2_n(k)$ is a uniform triangulation with a boundary of length $\widetilde{P}_n(k)+1$ and $\lceil \frac{n+1}{2} \rceil-\widetilde{V}_n(k)-1$ inner vertices. If the third vertex of $f_k$ lies on $\partial T^2_n(k)$, then the face $f_k$ separates $T^2_n(k)$ into two independent uniform triangulations with fixed perimeters and volumes, and the lemma follows. \end{itemize} \end{proof} We now define the stopping times $\tau_j$ as the times at which the flipped edge lies on the boundary of the unknown part of the map, that is, the times $k$ at which we discover new parts of $T^2_n(k)$: we set $\tau_0=0$ and $\tau_{j+1}=\inf \{ k>\tau_j | e_k \in \partial T^1_n(k)\}$ for $j \geq 0$. We also write $P_n(j)=\widetilde{P}_n(\tau_j+1)$ and $V_n(j)=\widetilde{V}_n(\tau_j+1)$. Then Lemma \ref{resteruniforme} shows that $\left( P_n, V_n \right)$ is a Markov chain with the same transitions as the perimeter and volume processes associated with the peeling process of a uniform triangulation with a boundary of length $1$ and $\lceil \frac{n-1}{2} \rceil$ inner vertices. Hence, Corollary \ref{estimatesPV} provides estimates for this process.
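Before estimating the times $\tau_j$, here is a back-of-the-envelope account of where the exponent $5/4$ comes from (ours, purely heuristic). The scaling results of \cite{CLGpeeling} suggest that the perimeter after $j$ peeling steps is of order $j^{2/3}$, and a boundary edge is flipped at each step with probability of order $\widetilde{P}_n(k)/n$, so the mean gap between $\tau_j$ and $\tau_{j+1}$ is of order $n j^{-2/3}$; summing over $j \leq n^{3/4}$ gives $\sum_j n j^{-2/3} \approx 3 n^{5/4}$. The Python computation below, with the random perimeter replaced by the deterministic proxy $j^{2/3}$, confirms this exponent numerically.

```python
def expected_tau(n):
    """Heuristic E[tau_{j_n}] for j_n = n^(3/4) discovery steps: the
    perimeter P_n(j) is replaced by the deterministic proxy j^(2/3),
    and the mean gap between tau_j and tau_{j+1} by n / P_n(j)."""
    j_n = int(round(n ** 0.75))
    return sum(n / max(1.0, j ** (2.0 / 3.0)) for j in range(1, j_n + 1))

# sum_{j <= n^(3/4)} n * j^(-2/3) ~ 3 n^(5/4): the ratios below approach 3.
ratios = [expected_tau(n) / n ** 1.25 for n in (10 ** 4, 10 ** 5, 10 ** 6)]
```

The ratios increase slowly towards $3$ as $n$ grows, which is consistent with the lower bound of order $n^{5/4}$ in Theorem \ref{mainthm}: after $o(n^{5/4})$ flips, only $o(n^{3/4})$ peeling steps have been performed.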
Our next lemma will allow us to estimate the times $\tau_j$. \begin{lem} \label{estimateTau} Let $k_n=o(n^{5/4})$. Then for all $\varepsilon>0$ we have \[\mathbb{P} \left( \tau_{\varepsilon n^{3/4}} > k_n \right) \xrightarrow[n \to +\infty]{} 1.\] \end{lem} \begin{proof} Conditionally on $P_n$, the variables $\tau_{j+1}-\tau_j$ are independent geometric variables with respective parameters $\frac{P_n(j)}{n}$. Hence, $\tau_{\varepsilon n^{3/4}}$ dominates the sum $S_n$ of $\varepsilon n^{3/4}$ i.i.d. geometric variables with parameter $Q_n=\frac{1}{n} \max_{0 \leq j \leq \varepsilon n^{3/4}} P_n(j)$. We have \[\mathbb E [S_n|P_n]=\varepsilon n^{3/4} Q_n^{-1}=\varepsilon n^{5/4} \times \left( \frac{1}{\sqrt{n}} \max_{0 \leq j \leq \varepsilon n^{3/4}} P_n(j) \right)^{-1}.\] By the results of \cite{CLGpeeling}, the factor $\frac{1}{\sqrt{n}} \max_{0 \leq j \leq \varepsilon n^{3/4}} P_n(j)$ converges in distribution, so $\frac{\mathbb E [S_n|P_n]}{\varepsilon n^{5/4}}$ converges in distribution as well and, since $k_n=o(n^{5/4})$, we get $\frac{ \mathbb{E} \left[ S_n | P_n \right]}{k_n} \to +\infty$ in probability. By the weak law of large numbers we then get $\frac{S_n}{k_n} \to +\infty$ in probability, so $\frac{\tau_{\varepsilon n^{3/4}}}{k_n} \to +\infty$ in probability. \end{proof} By combining Corollary \ref{estimatesPV} and Lemma \ref{estimateTau} we get the following result. \begin{lem} \label{estimateTilde} Let $k_n=o(n^{5/4})$. Then we have the convergences \[ \frac{1}{\sqrt{n}} \widetilde{P}_n(k_n) \xrightarrow[n \to +\infty]{} 0 \hspace{5mm} \mbox{and} \hspace{5mm} \frac{1}{n} \widetilde{V}_n(k_n) \xrightarrow[n \to +\infty]{} 0\] in probability. \end{lem} \begin{proof} By Lemma \ref{estimateTau} there is a deterministic sequence $j_n=o(n^{3/4})$ such that $\mathbb{P} \left( \tau_{j_n} > k_n \right) \to 1$.
This means that with probability going to $1$ as $n \to +\infty$ there is $J \leq j_n$ such that $\tau_J < k_n \leq \tau_{J+1}$ so \[\widetilde{P}_n(k_n)=P_n(J) \leq \sup_{0 \leq j \leq j_n} P_n(j) \hspace{5mm} \mbox{and} \hspace{5mm} \widetilde{V}_n(k_n)=V_n(J) \leq \sup_{0 \leq j \leq j_n} V_n(j).\] But we know from Corollary \ref{estimatesPV} that \[ \left( \frac{1}{\sqrt{n}} \sup_{0 \leq j \leq j_n} P_n(j), \frac{1}{n} \sup_{0 \leq j \leq j_n} V_n(j) \right) \xrightarrow[n \to +\infty]{(P)} 0,\] which proves Lemma \ref{estimateTilde}. \end{proof} So $T_n^2(k_n)$ has the distribution of $T_{n/2-\widetilde{V}_n(k_n),\widetilde{P}_n(k_n)}$ and there is $p_n=o(\sqrt{n})$ such that \[ \P \left( \widetilde{P}_n(k_n)<p_n \mbox{ and } n/2-\widetilde{V}_n(k_n)>\frac{n}{3} \right) \xrightarrow[n \to +\infty]{} 1.\] Let $r_n$ be such that $r_n=o(n^{1/4})$ and $p_n=o(r_n^2)$ (take for example $r_n=n^{1/8} p_n^{1/4}$). By Lemma \ref{smallcycle}, with probability going to $1$ as $n \to +\infty$, there is a cycle $\gamma$ in $T_n^2(k_n)$ of length at most $r_n$ such that the part of $T^2_n(k_n)$ lying between $\partial T^2_n(k_n)$ and $\gamma$ has volume at most $\frac{n}{6}$. Moreover we have $\widetilde{V}_n(k_n)=o(n)$ in probability by Lemma \ref{estimateTilde}, so the two parts of $T_n(k_n)$ separated by $\gamma$ both have volume at least $\frac{n}{4}$, which proves Proposition \ref{propcycle}. The proof of our main theorem is now easy: let $\mathscr{T}_n^{\bowtie}$ be the set of the triangulations $t$ of the sphere with $n$ vertices in which there is a cycle of length at most $\ell_n$ that separates $t$ in two parts of volume at least $\frac{n}{4}$. Let also $k_n=o(n^{5/4})$. 
By Proposition \ref{propcycle} we have \[ \mathbb{P} \left( T_n(k_n) \in \mathscr{T}_n^{\bowtie} \right) \xrightarrow[n \to +\infty]{} 1, \] whereas by Corollary 1.2 of \cite{LGP08}, if $T_n(\infty)$ denotes a uniform variable on $\mathscr{T}_n$ we have \[ \mathbb{P} \left( T_n(\infty) \in \mathscr{T}_n^{\bowtie} \right) \xrightarrow[n \to +\infty]{} 0. \] Hence, the total variation distance between the distributions of $T_n(k_n)$ and $T_n(\infty)$ goes to $1$ as $n \to +\infty$, so the mixing time is greater than $k_n$ for $n$ large enough. Since this is true for any $k_n=o(n^{5/4})$, the mixing time must be at least $c n^{5/4}$ for some constant $c>0$. We end this paper with a few remarks about our lower bound and an open question. \begin{rem} We proved a lower bound on the mixing time in the worst case, but our proof still holds for the mixing time from a typical starting point. We just need to fix $\varepsilon>0$ small, take as initial condition a uniform triangulation $T_n(0)$ conditioned on $\left|\partial B^{\bullet}_{n^{1/4}}(T_n(0)) \right| \leq \varepsilon \sqrt{n}$ and $\frac{n}{3} \leq \left| B^{\bullet}_{n^{1/4}}(T_n(0)) \right| \leq \frac{2n}{3}$, and let $T_n^1(0)=B^{\bullet}_{n^{1/4}}(T_n(0))$. The event on which we condition has probability bounded away from $0$ (by the results of \cite{CLGpeeling} and coupling arguments) and after time $o(n^{5/4})$ there is still a separating cycle of length $O(\varepsilon^{1/2} n^{1/4})$. \end{rem} \begin{rem} Here is a back-of-the-envelope computation that leads us to believe the lower bound we give is sharp if we start from a typical triangulation. The lengths of the geodesics in a uniform triangulation of volume $n$ are of order $n^{1/4}$, so if we fix two vertices $x$ and $y$ the probability that a flip hits the geodesic from $x$ to $y$ is roughly $n^{-3/4}$. Hence, if we do $n^{5/4}$ flips, about $n^{1/2}$ of them will affect the distance between $x$ and $y$.
If we believe that this distance evolves roughly like a random walk, it will vary by about $\sqrt{n^{1/2}}=n^{1/4}$, which shows we are at the right scale. Of course, there are many reasons why it seems hard to make this computation rigorous, but it does not seem to be contradicted by numerical simulations. \end{rem} Finally, note that even in the simpler case of triangulations of a polygon, the lower bound $n^{3/2}$ is believed to be sharp, but the best known upper bound \cite{MT97} is only $n^{5+o(1)}$. In our case we were not even able to prove the following. \begin{conj} The mixing time of $(T_n(k))_{k \geq 0}$ is polynomial in $n$. \end{conj}
\section{INTRODUCTION} The low surface brightness Universe has received renewed attention over the last decade. Several specially designed projects have started to collect very deep data (e.g., van Dokkum et al. 2014; Duc et al. 2015; Rich et al. 2020; B\'ilek et al. 2020). Deep imaging revealed low surface brightness features in the form of stellar streams and shells, and extended optical disks (almost the size of the HI disks) around nearby galaxies. According to the $\Lambda$CDM paradigm, massive galaxies formed hierarchically by consuming surrounding smaller galaxies (Frenk \& White 2012). The accretion of dwarf satellites has left stellar streams in the outskirts of their massive hosts that are long-lived and preserve the memory of past events. Cosmological simulations predict numerous disruption signatures left over from the formation and evolution of massive galaxies (Bullock et al. 2005; Cooper et al. 2010). The detection of such features and, moreover, their subsequent quantitative analysis provide a strong test for this hierarchical model of galaxy formation. However, these features are hidden in the low surface brightness regime well beyond 26 mag/arcsec$^2$. Reaching the depth required to study low surface brightness structures, which lie around 30 mag/arcsec$^2$, requires a special observing technique and careful background subtraction. The method widely used is dithering -- offsetting the telescope randomly to sample different portions of the sky. The dithering pattern has to be large enough to make the target galaxy fall on different parts of the CCD chip. The result is a correction that can be used to remove the prevalent systematics in the image, such as residuals from the flat-fielding or the sky background. The quantification of the observation depth that allows comparison between different surveys is not well defined.
There are in principle two ways to measure the surface brightness limit reached by an observation: (1) finding the limiting magnitude of the surface brightness profile of the object itself; or (2) quantifying background fluctuations. The problem with the former is that extrapolations of the profile can easily lead to misinterpretations of the true depth reached. The latter is independent of the modelling of the target galaxy, allowing for a better comparison between observations, which is why we use this approach. To study the capabilities of the Milankovi\'c telescope mounted at the Astronomical Station Vidojevica near Prokuplje (Serbia) and equipped with the IkonL CCD camera, we have imaged two galaxies for several hours -- namely NGC~474 and NGC~467 -- in four filters ($B$, $V$, $I$ and $L$). In Section 2 of this article, we describe the data acquisition and data reduction. In Section 3 we compare the surface brightness limit reached here to other surveys like the Sloan Digital Sky Survey (SDSS; York et al. 2000), the Beijing Arizona Sky Survey (BASS; Zou et al. 2017) and the Dark Energy Survey (DES; Dey et al. 2019). Finally, in Section 4 we discuss the results. \section{OBSERVATIONS AND DATA REDUCTION} \subsection{NGC~474 galaxy: $L$-band} We have imaged the elliptical galaxy NGC~474 in the $L$-band on October 22$^{\rm nd}$ 2019 with the 1.4m Milankovi\'c telescope using the IkonL CCD camera. We have applied a randomized dithering pattern with offsets of $\sim$300 pixels ($\sim$2 arcmin). The individual exposure times were 300\,s and the airmass ranged from 1.29 to 1.92. We have taken 33 exposures using the $L$-filter, resulting in an on-source integral exposure time of 2.75\,h. The image quality was on average 1.3 arcsec. An individual frame covers a square field of 13.3 arcmin on a side, utilizing the focal reducer. As a result of the dithering, the final co-add has the dimensions of 17.5 $\times$ 17.9 arcmin$^2$ (Figure 1, left).
Previously, NGC~474 was observed by Duc et al. (2015) using the 3.6m Canada–France–Hawaii Telescope (CFHT) equipped with the MegaCam camera (Figure 1, right) in the $g$-band with a total exposure of 0.7h. In both images -- the Milankovi\'c and the CFHT images -- the shell structure is prominent and the stellar stream stretching to the north-east is clearly visible. The astrometric solution was found using the Astrometry code (Lang et al. 2010). The data reduction was done using the Milankovi\'c pipeline (M\"uller et al. 2019) following standard procedures: bias, dark, and flat field corrections. The super sky flat was created through stacking the individual frames at their native (image) scale: they were simply median combined, ignoring their astrometric solution. This super sky flat image was then normalized to the median value of each individual science frame (previously bias, dark, and flat field corrected) and subtracted from it. The last step was the co-addition of the individual frames using IRAF's {\tt imcombine} task.
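The super-sky-flat step can be sketched in a few lines of NumPy; this is a hedged illustration of the procedure described above (median-combining in pixel coordinates, rescaling to each frame's median sky level, subtracting), not the actual Milankovi\'c pipeline code, and the function name is ours.

```python
import numpy as np

def super_sky_flat_correct(frames):
    """Median-combine already bias/dark/flat corrected frames in native
    pixel coordinates (ignoring the astrometric solution), normalize the
    resulting super sky flat, and subtract it rescaled to each frame's
    median sky level."""
    frames = np.asarray(frames, dtype=float)
    ssf = np.median(frames, axis=0)      # super sky flat
    ssf = ssf / np.median(ssf)           # unit-median background template
    return [f - ssf * np.median(f) for f in frames]
```

For dithered exposures the target falls on different pixels in each frame, so the pixelwise median rejects it and retains only the common background pattern.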
\begin{table} \label{phot} \begin{center} \begin{tabular}{|c|c|c||c|c|c|} \hline Time &$B$-limit & $L$-limit & Time & $V$-limit & $I$-limit \\ [Hours] & [mag/arcsec$^2$] &[mag/arcsec$^2$] & [Hours] & [mag/arcsec$^2$] &[mag/arcsec$^2$] \\ \hline 0.25 & 25.15 $\pm$ 0.02 & 27.27 $\pm$ 0.01 & 0.15 & 26.12 $\pm$ 0.01 & 26.54 $\pm$ 0.03 \\ 0.75 & 25.89 $\pm$ 0.03 & 27.86 $\pm$ 0.01 & 0.45 & 26.79 $\pm$ 0.01 & 27.33 $\pm$ 0.02 \\ 1.25 & 26.22 $\pm$ 0.02 & -- & 0.75 & 27.11 $\pm$ 0.02 & 27.65 $\pm$ 0.02 \\ 1.75 & 26.42 $\pm$ 0.02 & 28.27 $\pm$ 0.02 & 1.05 & 27.22 $\pm$ 0.06 & 27.80 $\pm$ 0.03 \\ 2.25 & 26.58 $\pm$ 0.04 & 28.35 $\pm$ 0.03 & 1.35 & 27.39 $\pm$ 0.01 & 27.89 $\pm$ 0.02 \\ 2.75 & 26.68 $\pm$ 0.03 & 28.48 $\pm$ 0.03 & 1.65 & 27.35 $\pm$ 0.02 & 27.91 $\pm$ 0.02 \\ 3.25 & 26.77 $\pm$ 0.05 &-- & 1.95 & 27.40 $\pm$ 0.01 & 27.94 $\pm$ 0.04 \\ 3.75 & 26.85 $\pm$ 0.03 & -- & 2.25 & 27.54 $\pm$ 0.03 & 27.96 $\pm$ 0.04 \\ \hline \end{tabular} \caption{Surface brightness 3-sigma limits with the errors in the $B$-, $V$-, $I$- and $L$-band depending on the integral exposure time (in hours).} \end{center} \end{table} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{ngc474-final.eps} \includegraphics[width=0.45\textwidth]{duc.eps} \caption{NGC 474: ({\it left}) $L$-band image from the Milankovi\'c telescope and ({\it right}) $g$-band image from the CFHT/MegaCam (Duc et al.~2015).} \end{figure} \subsection{NGC~467 galaxy: $B$-, $V$- and $I$-band} We have observed galaxy NGC~467 on September 28$^{\rm th}$ and 29$^{\rm th}$ 2019. The same equipment was used as with NGC~474. We have employed three filters: $B$, $V$ and $I$. The individual exposure times were 300s for the $B$-band, and 180s for $V$- and $I$-band. In total, 47 images were taken in each band equaling to 3.9h in the $B$-band and 2.35h in the $V$- and $I$-bands. Dithering was $\sim$ 200 pix ($\sim$ 1 arcmin) and the seeing 1.3 arcsec. 
The final co-add image has the size of 16.3 $\times$ 16.6 arcmin$^2$. The same procedure as before was applied: (1) the astrometric solution was obtained with the Astrometry code and (2) the Milankovi\'c pipeline was applied to reduce the raw data. \section{SURFACE BRIGHTNESS LIMIT} Magnitudes were measured with the standard IRAF package {\tt daophot} in the final co-add images of both galaxies in all filters. The photometric calibration was done using the linear regression formula: $m_{f} = a_0 + a_1\, m_{\rm cal}$, where the subscript $f$ stands for the filters used ($B$, $V$, $I$, and $L$), $a_0$ and $a_1$ are the intercept and slope, and $m_{\rm cal}$ is the standard star magnitude from the photometric catalog used for the calibration. We have calibrated the magnitudes using the TOPCAT software (Taylor 2005). The magnitude zeropoint $a_0$ and the corresponding slope $a_1$ inside the whole FOV of each galaxy were inferred from the SDSS DR12 photometric catalog using $g$-band magnitudes (Alam et al. 2016). They are: 32.7 $\pm$ 0.2 and 1.01 $\pm$ 0.02 ($L$-band), 29.50 $\pm$ 0.2 and 1.01 $\pm$ 0.02 ($B$-band), 30.80 $\pm$ 0.3 and 1.03 $\pm$ 0.03 ($V$-band), 31.6 $\pm$ 0.9 and 1.1 $\pm$ 0.01 ($I$-band). They are all expressed in the $g$-band. All stars brighter than the 22$^{\rm nd}$ magnitude and fainter than the 14$^{\rm th}$ magnitude in the $g$-band were selected for measurements. Our main interest was to determine, apart from the final surface brightness limit, the limit reached throughout the observing run. To estimate how deep the images were for shorter exposures than the integral one, we have created lists of images with integral exposure times starting from 15 minutes up to 3.75 hours, in steps of 30 minutes. Lists of individual frames were created to correspond to these shorter exposures and each list was combined with IRAF's {\tt imcombine} task to make a stack.
In each stack we measured the standard deviation inside a dozen boxes with 10 arcsec sides placed in empty regions (without objects). These standard deviations were averaged, and their scatter was reported as the error of the surface brightness limit, converted to the $g$-band (Table 1). The resulting surface brightness limiting magnitudes in the various bands were converted to the $g$-band using linear regression and compared to the commonly used sky surveys SDSS, BASS and DES (Figure 2). It appears that in about an hour and a half the depth of the SDSS has been reached with our observations in all the filters. During the same time the BASS and DES limiting magnitudes are reached with all the filters except for the $B$-filter. In the particular case of the $L$-filter, all the limits are exceeded in less than an hour. In our earlier work, using the $L$-filter and 7.2 hours of integral exposure, we reached the 3-sigma surface brightness limit of 28.4 $\pm$ 0.04 mag/arcsec$^2$ in the $g$-band (M\"uller, Vudragovi\'c \& B\'ilek 2019). This appears to be the limit of the $L$-band observations with the given strategy. Here, this limit has been reached in less than 3 hours (Figure 2). \begin{figure} \centering \includegraphics[width=8.cm]{sblim_3sigma_new.eps} \caption{Variation of the surface brightness limit (3-sigma) in the $B$-, $V$-, $I$- and $L$-band with exposure time, compared to the SDSS, BASS and DES surveys (green dashed lines).} \end{figure} \section{DISCUSSION} We have measured the limiting surface brightness in the $B$-, $V$-, $I$- and $L$-band using observations of two nearby galaxies imaged with the Milankovi\'c telescope, equipped with the IkonL CCD camera. The results can be used as a guideline for the exposure time needed to reach a given target surface brightness. Comparing to other sky surveys (SDSS, BASS and DES), we have shown that in $\sim$1.5h we can reach a greater depth. Moreover, our $L$-band observations exceed the limits of these surveys after only 45 minutes of on-target time.
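The 3-sigma depth estimate described above can be sketched as follows. This is a hedged reimplementation with our own function name; in particular, the conversion of the box scatter into a flux per arcsec$^2$ follows one common convention, $\mu_{\rm lim}={\rm ZP}-2.5\log_{10}[3\sigma/(s\,A)]$ for pixel scale $s$ and box side $A$ in arcsec, which may differ in detail from the one used for Table 1.

```python
import numpy as np

def sb_limit_3sigma(image, zeropoint, pix_scale, box_arcsec=10.0,
                    n_boxes=12, seed=0):
    """Average the per-pixel standard deviation measured in n_boxes
    randomly placed empty-sky boxes of side box_arcsec, then convert
    3*sigma to a surface brightness limit in mag/arcsec^2."""
    rng = np.random.default_rng(seed)
    side = int(round(box_arcsec / pix_scale))
    sigmas = []
    for _ in range(n_boxes):
        y = rng.integers(0, image.shape[0] - side)
        x = rng.integers(0, image.shape[1] - side)
        sigmas.append(image[y:y + side, x:x + side].std())
    sigma = float(np.mean(sigmas))
    return zeropoint - 2.5 * np.log10(3.0 * sigma / (pix_scale * box_arcsec))
```

Since stacking $N$ equal frames reduces $\sigma$ by $\sqrt{N}$, the limit is expected to deepen by $1.25\log_{10}N$ mag, roughly the trend seen in Table 1.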
However, with the currently used imaging strategy, even after 7.2h of integral exposure time, we could not go beyond the 28.4 mag/arcsec$^2$ $g$-band 3-$\sigma$ limit. A larger dithering pattern, including the rotation of the telescope, may overcome this issue. Despite this, our observations show that the Milankovi\'c telescope is capable of studying the low-surface brightness Universe and can be used to follow up structures identified in other surveys, as well as to hunt for hitherto undetected features. \smallskip \centerline{\bf Acknowledgements} \smallskip We acknowledge the financial support of the Ministry of Education, Science and Technological Development of the Republic of Serbia (MESTDRS) through the contract No 451-03-68/2020-14/200002 and the financial support by the European Commission through project BELISSIMA (BELgrade Initiative for Space Science, Instrumentation and Modelling in Astrophysics, call FP7-REGPOT-2010-5, contract No. 256772), which was used to procure the Milankovi\'c 1.4 meter telescope with the support from the MESTDRS. O.M. is grateful to the Swiss National Science Foundation for financial support. M.B. acknowledges the financial support by {\it Cercle Gutenberg}. The authors are grateful to Pierre-Alain Duc for providing the image of NGC~474 galaxy shown in Figure 1. We thank the technical operators at the Astronomical Station Vidojevica (ASV), Miodrag Sekuli\'c and Petar Kosti\'c, for their excellent work. \references Alam, S., Albareti, F.~D., Allende Prieto, C. et al. : 2016, \journal{VizieR Online Data Catalog}, V/147. Bertin, E. \& Arnouts, S. : 1996, \journal{Astron. and Astroph. Supplement S.}, \vol{117}, 393. Duc, P.~A., Cuillandre, J.-C., Karabal, E. et al. : 2015, \journal{Mon. Not. R. Astron. Soc.}, \vol{446}, 120. Dey, A., Schlegel, D.~J., Lang, D. et al. : 2019, \journal{Astron. J.}, \vol{157}, 168. B\'ilek, M., Duc, P.-A., Cuillandre, J.-C. et al. : 2020, \journal{Mon. Not. R. Astron. Soc.}, \vol{498}, 2138. Bertin, E., Mellier, Y., Radovich, M. et al. : 2002, \journal{Astronomical Data Analysis Software and Systems}, \vol{281}, 228. Bullock, J.~S. \& Johnston, K.~V. : 2005, \journal{Astrophys. J.}, \vol{635}, 931. Cooper, A.~P., Cole, S., Frenk, C.~S. et al. : 2010, \journal{Mon. Not. R. Astron. Soc.}, \vol{406}, 744. Frenk, C.~S. \& White, S.~D.~M. : 2012, \journal{Annalen der Physik}, \vol{524}, 507. Lang, D., Hogg, D.~W., Jester, S. et al. : 2009, \journal{Astron. J.}, \vol{137}, 4400. Rich, M.~R., Brosch, N., Bullock, J. et al. : 2020, DOI: 10.1093/mnras/staa678. M\"uller, O., Vudragovi\'c, A. \& B\'ilek, M. : 2019, \journal{Astron. \& Astroph.}, \vol{632}, L13. Taylor, M.~B. : 2005, \journal{Astronomical Society of the Pacific Conference Series}, \vol{347}, 29. van Dokkum, P.~G., Abraham, R. \& Merritt, A. : 2014, \journal{Astrophys. J.}, \vol{782}, 2. York, D.~G., Adelman, J., Anderson, J.~E.~J. et al. : 2000, \journal{Astron. J.}, \vol{120}, 1579. Zou, H., Zhou, X., Fan, X. et al. : 2017, \journal{Publications of the Astronomical Society of the Pacific}, \vol{129}, 064101. \endreferences \end{document}
\section{Introduction} The existence of the hot solar corona ($\ge 10^6$ K) is well established by observations of solar ultraviolet and X-ray emissions originating in magnetically confined coronal plasma, and by the acceleration of the fast solar wind that commences from magnetically open coronal regions (Peter and Dwivedi 2014). The identification of the physical processes responsible for coronal heating and solar wind acceleration has been very challenging and still remains an unsolved problem (Priest et al. 1998, Cranmer 2002, Peter and Dwivedi 2014). Recent high-resolution solar observations seem to clearly imply that the energy and mass transport required to heat the corona and accelerate the solar wind must come from the chromosphere through highly localized wave and dynamical plasma processes at small spatio-temporal scales, such as spicules, network jets, swirls, and twisting motions of magnetized plasma (De Pontieu et al. 2004, Shibata et al. 2007, De Pontieu et al. 2014, McIntosh et al. 2011). Solar chromospheric swirls have been detected by recent observations (Wedemeyer-B\"ohm et al. 2012). They are categorized by their various morphological shapes (Tian et al. 2014). {\bf These swirls are suggested to be driven by solar convective motions (Wedemeyer-B\"ohm et al. 2012, Steiner et al. 2010, Shelyag et al. 2011). } Magnetically confined plasma in such swirls couples to the upper solar atmospheric layers, transferring the swirls' rotation into the low corona. At higher altitudes, these swirls may appear like large cyclones/tornadoes due to the expansion of the magnetic flux tubes that support them (Shelyag et al. 2011, Su et al. 2012, Zhang and Liu 2011). The relationship between the chromospheric swirls and coronal tornadoes is not yet understood (Wedemeyer et al. 2013, Shelyag et al. 2013, Wedemeyer and Steiner 2014).
It is also debated whether the swirls are triggered by solar convective motions, or they are associated with torsional Alfv\'en waves at small spatial scales (Shelyag et al. 2013, Wedemeyer and Steiner 2014). The observations, however, suggest that these localized motions of magnetized plasma {\bf might transfer up to } $1.4\times 10^4$  W m$^{-2}$ energy flux to the base of the solar corona for its localized heating and wind acceleration (Wedemeyer-B\"ohm et al. 2012). The main objective of this paper is to describe novel solar phenomena called here multi-shell magnetic twisters, and their driving mechanism and related morphology. It should be noted that recently observed chromospheric swirls can be an integrated appearance of multi-shell magnetic twisters. Our finding sheds new light on the morphological evolution of such twisters, their exact excitation mechanism and their role in the localized heating of the solar corona, and mass transport to nascent solar wind. The novel result is our theoretical prediction of the multi-shell magnetic twisters, whose existence can be verified by observations with upcoming high-resolution instruments. The paper is organized as follows. Our model of solar magnetic arcades and description of our numerical method are introduced in Sects.~2 and~3, respectively. Results of our numerical simulations of the Alfv\'en wave propagation in magnetic arcades are presented and discussed in Sect.~4. Conclusions are given in Sect.~5. 
\section{Numerical model of multi-shells}\label{sec:atm_model} \subsection{MHD equations}\label{sec:equ_model} We consider a gravitationally stratified and magnetically confined plasma in a structure that resembles a flux tube, described by the following set of MHD equations: \beqa \label{eq:MHD_rho} {{\partial \varrho}\over {\partial t}}+\nabla \cdot (\varrho{\bf V})=0\, ,\\ \label{eq:MHD_V} \varrho{{\partial {\bf V}}\over {\partial t}}+ \varrho\left ({\bf V}\cdot \nabla\right ) {\bf V}= -\nabla p+ \frac{1}{\mu} (\nabla\times{\bf B})\times{\bf B} +\varrho{\bf g}\, , \\ \label{eq:MHD_B} {{\partial {\bf B}}\over {\partial t}}= \nabla \times ({\bf V}\times {\bf B})\, , \\ \label{eq:MHD_divB} \nabla\cdot{\bf B} = 0\, , \\ \label{eq:MHD_p} {\partial p\over \partial t} + {\bf V}\cdot\nabla p = -\gamma p \nabla \cdot {\bf V}\, ,\\ \label{eq:MHD_CLAP} p = \frac{k_{\rm B}}{m} \varrho T\, , \eeqa where ${\varrho}$ is the mass density, $p$ the gas pressure, and ${\bf V}$, ${\bf B}$ and ${\bf g}=(0,-g, 0)$ represent the plasma velocity, the magnetic field and the gravitational acceleration, respectively. In addition, $T$ is the temperature, $m$ is the mean particle mass, corresponding to a mean molecular weight of $1.24$ (Oskar Steiner, private communication), $k_{\rm B}$ is the Boltzmann constant, $\gamma=1.4$ is the adiabatic index, and $\mu$ is the magnetic permeability of the plasma. The value of $g$ is $274$ m s$^{-2}$. \begin{figure} \begin{center} \includegraphics[scale=0.25, angle=0]{fig1.eps} \caption{\small Magnetic field lines at $t=0$ s.} \label{fig:mag_e} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.35, angle=0]{fig2.eps} \caption{\small Numerical blocks used in the numerical simulations and the initial pulse, $V_{\rm \theta}(x,y,z=0)$, drawn in the vertical ($x-y$) plane for $z=0$. Note that only a part of the whole simulation region is displayed and $\max(\vert V_{\rm \theta}\vert)=2.56$ km s$^{-1}$.
} \label{fig:blk} \end{center} \end{figure} \subsection{A model of the static solar atmosphere}\label{sec:equil} We consider a model of the static ($\partial/\partial t = 0$) solar atmosphere in which all plasma quantities vary with $x$, $y$, and $z$. We assume that the solar atmosphere is in static equilibrium (${\bf V_{\rm e}}={\bf 0}$), with the Lorentz force balanced by the gravity force and the gas pressure gradient, which means that \begin{equation} \label{eq:force_free} \frac{1}{\mu}(\nabla\times{\bf B_{\rm e}})\times{\bf B_{\rm e}} + \varrho_{\rm e} {\bf g} -\nabla p_{\rm e} = {\bf 0}\ , \end{equation} \noindent where the subscript $'{\rm e}'$ corresponds to the equilibrium configuration. We consider an axisymmetric flux tube whose equilibrium is described by Murawski et al. (2015), based on the analytical theory developed by Solov'ev (2010). This flux tube is initially non-twisted, and its magnetic field is represented by the azimuthal component of the magnetic flux function ($A\,{\bf\hat \theta}$) as \beq\label{eq:equil_B} {\bf B_{\rm e}} = \nabla\times (A\,{\bf\hat \theta})\, , \eeq where ${\bf\hat \theta}$ is a unit vector along the azimuthal direction. In this case, we have \beq\label{eq:B_com_A} B_{\rm er} = -\frac{\partial A}{\partial y}\, ,\hspace{3mm} B_{\rm e\theta} = 0\, , \hspace{3mm} B_{\rm ey} = \frac{1}{r} \frac{\partial (rA)}{\partial r}\, , \eeq where $r=\sqrt{x^2+z^2}$ is the radius. For a flux tube, we can specify $A(r,y)$ as \beqa\label{eq:A-Psi-flux tube} A(r,y) = B_{\rm 0} \exp{(-k_{\rm y}^2 y^2)} \frac{r}{1+k_{\rm r}^4r^4} + \frac{1}{2}B_{\rm y0} r \, , \eeqa where $B_{\rm y0}$ is the magnitude of the external magnetic field, which is chosen to be vertical, and $k_{\rm r}$ and $k_{\rm y}$ are inverse length scales along the radial and vertical directions, respectively.
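Equations (\ref{eq:B_com_A}) and (\ref{eq:A-Psi-flux tube}) are simple enough to evaluate directly. Below is a hedged Python sketch using central finite differences; the value of $B_{\rm 0}$ is our own inference from the on-axis field of about $285$ Gauss quoted below, since on the axis at $y=0$ the flux function gives $B_{\rm ey}(0,0)=2B_{\rm 0}+B_{\rm y0}$.

```python
import math

B0, BY0 = 136.8, 11.4   # Gauss; B0 inferred from B_ey(0,0) = 2*B0 + BY0 = 285 G
KR = KY = 4.0           # inverse length scales, Mm^-1

def A(r, y):
    """Azimuthal flux function of Eq. (\\ref{eq:A-Psi-flux tube}); r, y in Mm."""
    return B0 * math.exp(-(KY * y) ** 2) * r / (1.0 + (KR * r) ** 4) + 0.5 * BY0 * r

def B_field(r, y, h=1e-6):
    """Equilibrium field of Eq. (\\ref{eq:B_com_A}), B_r = -dA/dy and
    B_y = (1/r) d(rA)/dr, via central differences (r > 0 required)."""
    B_r = -(A(r, y + h) - A(r, y - h)) / (2.0 * h)
    B_y = ((r + h) * A(r + h, y) - (r - h) * A(r - h, y)) / (2.0 * h * r)
    return B_r, B_y
```

Near the axis the field is vertical with strength $2B_{\rm 0}+B_{\rm y0}\approx 285$ G, while for $k_{\rm r}r\gg 1$ the tube contribution decays as $r^{-3}$ and only the uniform external field $B_{\rm y0}$ survives, consistent with the field-line picture of Fig.~\ref{fig:mag_e}.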
The magnetic field lines, which follow from Eqs.~(\ref{eq:B_com_A}) and (\ref{eq:A-Psi-flux tube}), are displayed in Fig.~\ref{fig:mag_e} for $k_{\rm r}=k_{\rm y}=4$ Mm$^{-1}$. We choose and hold fixed the magnitude of the reference magnetic field $B_{\rm 0}$ in such a way that the magnetic field within the flux tube, at $r=y=0$ Mm, is about $285$ Gauss, and $B_{\rm y0}\approx 11.4$ Gauss. For these values, the resulting magnetic field lines are predominantly vertical around the tube axis, $r=0$ Mm, while further out they are bent and $B_{\rm e}$ decays with distance. Such magnetic field lines correspond to an isolated axisymmetric magnetic flux tube. It must also be noted that the magnetic field at the top of the simulation region is essentially uniform, with the value $B_{\rm y0}$. Having specified the magnetic field, the equilibrium mass density and gas pressure are evaluated from Eq.~(\ref{eq:force_free}) using the hydrostatic gas pressure $p_{\rm h}$, which is given by \beq \label{eq:pres} p_{\rm h}(y)=p_{\rm 0}~{\rm exp}\left[ -\int_{y_{\rm r}}^{y}\frac{dy^{'}} {\Lambda (y^{'})} \right]\, , \eeq where $y_{\rm r}=10$ Mm is the reference level, $p_{\rm 0}=0.01$ Pa is the gas pressure evaluated at $y=y_{\rm r}$, and \begin{equation} \Lambda(y) = \frac{k_{\rm B} T_{\rm h}(y)} {mg} \end{equation} is the pressure scale-height, with $T_{\rm h}(y)$ a hydrostatic temperature profile. We adopt the temperature profile $T_{\rm h}(y)$ of the solar atmosphere specified by the model of Avrett \& Loeser (2008). This temperature profile is smoothly extended into the corona. It should be further noted that in our model the solar photosphere occupies the region $0 < y < 0.5$ Mm, the chromosphere is sandwiched between $y=0.5$ Mm and the transition region located at $y\simeq 2.1$ Mm, and above this height the atmospheric layers represent the solar corona.
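The hydrostatic background of Eq.~(\ref{eq:pres}), together with the scale-height definition, can be sketched numerically as follows. The isothermal profile used in the example is a crude stand-in for the Avrett \& Loeser (2008) model, and the constants follow the values quoted in the text (mean molecular weight $1.24$, $g=274$ m s$^{-2}$, $p_{\rm 0}=0.01$ Pa at $y_{\rm r}=10$ Mm); the function names are ours.

```python
import math

K_B = 1.380649e-23          # J/K
M   = 1.24 * 1.6726e-27     # kg; mean particle mass for molecular weight 1.24
G   = 274.0                 # m/s^2, solar gravitational acceleration
Y_R, P_0 = 10.0e6, 0.01     # reference level (m) and gas pressure there (Pa)

def scale_height(T):
    """Pressure scale height Lambda = k_B T / (m g), in metres."""
    return K_B * T / (M * G)

def hydrostatic_pressure(T_of_y, y, n=2000):
    """Eq. (\\ref{eq:pres}): p(y) = p0 exp(-int_{y_r}^{y} dy'/Lambda(y')),
    with the integral evaluated by the trapezoidal rule (y in metres)."""
    ys = [Y_R + (y - Y_R) * i / n for i in range(n + 1)]
    f = [1.0 / scale_height(T_of_y(s)) for s in ys]
    h = (y - Y_R) / n
    integral = h * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])
    return P_0 * math.exp(-integral)
```

For a coronal temperature of $10^6$ K this gives a scale height of about $24$ Mm, so the pressure drops by a factor $e$ over that distance.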
\section{Numerical simulations of MHD equations}\label{sec:num_sim_MHD} In order to solve Eqs.~(\ref{eq:MHD_rho})-(\ref{eq:MHD_CLAP}) numerically, we use the FLASH code (Fryxell et al. 2000; Lee \& Deane 2009; Lee 2013), in which a third-order unsplit Godunov-type solver with various slope limiters and Riemann solvers, as well as Adaptive Mesh Refinement (AMR) (MacNeice et al. 1999), are implemented. The minmod slope limiter and the Roe Riemann solver (e.g., T\'oth 2000) are used. We set the simulation box as $(-1.5\, {\rm Mm},1.5\, {\rm Mm}) \times (0\, {\rm Mm},18\, {\rm Mm}) \times (-1.5\, {\rm Mm},1.5\, {\rm Mm})$ and impose fixed-in-time boundary conditions for all plasma quantities in all directions. In the present work, we use a static, non-uniform grid with a minimum (maximum) level of refinement set to $2$ ($5$). Note that the smallest blocks of the numerical grid occupy the region up to $y=4$ Mm (Fig.~\ref{fig:blk}), and every numerical block consists of $8\times 8\times 8$ identical numerical cells. This results in the smallest spatial resolution of $23.4$ km below the level $y=4$ Mm and allows the spatial structures in the low solar corona to be well resolved. \subsection{Initial perturbations} The atmospheric magnetic field is continuously disturbed by large-scale dynamical perturbations that are able to transfer kinetic energy ({\it e.g.} buffeting due to the granular motion in the photosphere, flare-driven blast waves in the upper atmosphere, reconnection-driven shocks, etc.). We assume that the driver of the perturbations considered here is of this nature.
However, its exact properties are not specified, and initially (at $t=0$ s) we perturb the above-described equilibrium impulsively by a Gaussian pulse in the azimuthal component of velocity, $V_{\rm \theta}$, viz., \beq\label{eq:perturb} V_{\theta} = A_{\rm v} \frac{{\tilde r}}{w} \exp\left[ -\frac{{\tilde r}^2 + (y-y_{\rm 0})^2}{w^2} \right]\, , \eeq where ${\tilde r}^2=(x-x_{\rm 0})^2+z^2$ is the squared radius, $A_{\rm v}$ is the amplitude of the pulse, $(x_{\rm 0}, y_{\rm 0},0)$ its position, and $w$ its width. We set $A_{\rm v}=6$ km s$^{-1}$, $w=150$ km, and $y_{\rm 0}=500$ km, and hold them fixed, while we allow $x_{\rm 0}$ to take one of two values: (a) $x_{\rm 0}=0$ km; (b) $x_{\rm 0}=100$ km. This value of $A_{\rm v}$ results in an effective maximum velocity of about $2.56$ km s$^{-1}$ (Fig.~\ref{fig:blk}, color map), and the value of $y_{\rm 0}$ means that the system is perturbed right at the top of the photosphere. Note that in our 3D model, the torsional Alfv\'en waves decouple linearly from the magnetoacoustic waves. They can be described solely by $V_{\rm \theta}(x,y,t)$ (see Murawski et al. 2015). As a result, the initial pulse triggers Alfv\'en waves. \section{Results of numerical simulations}\label{sec:resultsOFnumSIM} We simulate small-amplitude, impulsively excited Alfv\'en waves and investigate their propagation along the magnetic field lines associated with the flux tube (Fig.~\ref{fig:mag_e}). We consider the following two cases: (a) a centrally-launched initial pulse; (b) an off-centrally-launched initial pulse, and describe the corresponding results in the following parts of the paper. \subsection{Centrally launched pulse} The case of a centrally-launched pulse corresponds to the location of the initial pulse at the flux tube axis, which is realized by setting $x_{\rm 0}=0$ Mm in Eq.~(\ref{eq:perturb}).
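As a quick check of Eq.~(\ref{eq:perturb}), note that at $y=y_{\rm 0}$ the pulse peaks on the annulus ${\tilde r}=w/\sqrt{2}$ with value $A_{\rm v}/\sqrt{2e}\approx 2.57$ km s$^{-1}$, consistent with the quoted effective maximum of about $2.56$ km s$^{-1}$. A minimal sketch (the function name is ours):

```python
import math

A_V, W = 6.0, 0.150      # pulse amplitude (km/s) and width (Mm)
X0, Y0 = 0.0, 0.500      # pulse position (Mm), centrally-launched case

def v_theta(x, y, z):
    """Initial azimuthal velocity pulse of Eq. (\\ref{eq:perturb});
    lengths in Mm, velocity in km/s."""
    r2 = (x - X0) ** 2 + z ** 2
    return A_V * (math.sqrt(r2) / W) * math.exp(-(r2 + (y - Y0) ** 2) / W ** 2)
```

The pulse vanishes on the tube axis ($\tilde r=0$) and is largest on the annulus $\tilde r = w/\sqrt{2}$ around it.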
\begin{figure} \centering{ \includegraphics[scale=0.2,angle=0]{fig3.eps} } \caption{\small Spatial profiles of magnetic field lines at $t=200$ s for the case of centrally-launched initial pulse. } \label{fig:B-lines} \end{figure} A 3D view of the magnetic field lines of the flux tube at $t=200$ s is shown in Fig.~\ref{fig:B-lines}. It is clear that as a result of torsional Alfv\'en wave propagation, the magnetic field lines are twisted, and this twist is localized essentially below the transition region. \begin{figure} \centering{ \includegraphics[scale=0.225,angle=0]{fig4a.eps} \includegraphics[scale=0.225,angle=0]{fig4b.eps} } \caption{\small Spatial profiles of streamlines at $t=290$ s (top) and $t=420$ s (bottom) for the case of centrally-launched initial pulse. } \label{fig:V-lines} \end{figure} Figure~\ref{fig:V-lines} presents velocity streamlines which exhibit the twisters' motion. At $t=290$ s we discern a centrally located shell in which the plasma counter-rotates with respect to the rotation of the external shell (the top panel). At the later time, this central shell is more developed (the bottom panel). Note that these concentric magnetic shells have a characteristic size of the order of $120$ km. \begin{figure} \centering{ \includegraphics[scale=0.275,angle=0]{fig5a.eps}\\ \vspace{-0.75cm} \includegraphics[scale=0.275,angle=0]{fig5b.eps} } \caption{\small Spatial profiles of transverse velocity $V_{\rm z}(x,y,z=0,t)$ at $t=290$ s (top) and $t=420$ s (bottom) for the case of centrally-launched pulse, $x_{\rm 0}=0$ km. } \label{fig:Vz} \end{figure} Figure~\ref{fig:Vz} shows the vertical cross-sections of $V_{\rm z}(x,y,z=0)=V_{\rm \theta}(x,y,z=0)$, exhibiting the origin of various magnetic interfaces (multi-shells) at an upper chromospheric height of $y=1.9$ Mm, which is located about $200$ km below the transition region.
Note that the magnetic field perturbations are confined to atmospheric layers located below the solar transition region (Murawski and Musielak 2010; Murawski et al. 2015), while the velocity perturbations easily penetrate into the corona and transfer the available mechanical energy (Fig.~\ref{fig:Vz}). The vertical cut of the velocity at the upper chromospheric height clearly shows the two oppositely oriented shells, in which the plasma counter-streams, forming two twisting plasma shells. As a result, a distinct set of magnetic interfaces is formed, where Alfv\'en wave velocity perturbations are also generated. \begin{figure*} \centering{ \includegraphics[scale=0.275,angle=0]{fig6a.eps} \includegraphics[scale=0.275,angle=0]{fig6b.eps}\\ \includegraphics[scale=0.275,angle=0]{fig6c.eps} \includegraphics[scale=0.275,angle=0]{fig6d.eps} } \caption{\small Spatial profiles of $V_{\rm y}(x,y,z=0,t)$ at $t=50$ s, $t=100$ s, $t=290$ s and $t=420$ s (from top-left to bottom-right) for the case of centrally-launched pulse, $x_{\rm 0}=0$ Mm. } \label{fig:Vy} \end{figure*} Figure~\ref{fig:Vy} illustrates spatial profiles of the vertical component of velocity, $V_{\rm y}(x,y,z=0,t)$, drawn in the vertical plane $z=0$. As follows from Bernoulli's law, the initial pulse in $V_{\rm \theta}$ generates a region of reduced pressure shortly after $t=0$ s, which draws in the ambient plasma and results in a flow converging into the launching region. This flow is clearly seen at $t=50$ s (the top-left panel). Note the down-flowing plasma in the patch located at $(x=0,y\approx 0.75)$ Mm, while the up-flowing gas is located at $(x=0,y\approx 0.3)$ Mm. Later on, at $t=100$ s, these patches move upward and new regions of up-flowing plasma are generated at $(x\approx \pm 0.2,y\approx 1.4)$ Mm (top-right).
Some of the up-flowing plasma is pulled back by gravity, resulting in a down-flowing blob with a velocity of $V_{\rm y}\approx -6$ km s$^{-1}$, which is seen at $t=290$ s above the transition region in the form of the violet patch (the bottom-left panel). This free-falling plasma collides with the first jet, which originates right below the transition region (the red patch in the bottom-left panel). As a result of this collision a shock is generated. The second jet is seen at $t=420$ s (bottom-right). \begin{figure} \centering{ \includegraphics[scale=0.275,angle=0]{fig7a.eps} \vspace{-1.5cm} \includegraphics[scale=0.275,angle=0]{fig7b.eps} } \caption{\small Spatial profiles of $V_{\rm y}(x,y=1.9,z,t)$ at $t=290$ s (top) and $t=420$ s (bottom) for the case of the centrally-launched initial pulse, $x_{\rm 0}=0$ km. } \label{fig:Vy_y=1.9} \end{figure} The vertical component of velocity, $V_{\rm y}$, is displayed in the horizontal plane just below the transition region, at $y=1.9$ Mm, in Fig.~\ref{fig:Vy_y=1.9}. The development of concentric shells is well seen at $t=290$ s (top); the inner shell is represented by the violet ring of down-flowing plasma with a speed of $V_{\rm y}\approx 3$ km s$^{-1}$, while in the outer shell, marked by the red ring, the plasma flows upward with a speed of $V_{\rm y}\approx 1$ km s$^{-1}$. In the center the plasma begins to flow upwards. At $t=420$ s this central flow develops a structure with $4$ discernible arms (bottom). Note that at each instant the vertical flow is out of phase with $V_{\rm z}$ in a particular shell at a given chromospheric height (compare with Fig.~\ref{fig:Vz}). This is a unique correlation: the periodic changes in the vertical flows (i.e., up- and down-flows) are negatively coupled with the Alfv\'en wave velocity perturbations in a chromospheric shell.
\begin{figure} \centering{ \includegraphics[scale=0.275,angle=0]{fig8a.eps} \vspace{-1.5cm} \includegraphics[scale=0.275,angle=0]{fig8b.eps} } \caption{\small Spatial profiles of mass density $\varrho(x,y=1.9,z,t)$ at $t=290$ s (top) and $t=420$ s (bottom) for the case of centrally-launched initial pulse. } \label{fig:rho-x-y} \end{figure} The multi-shells can also be traced in the horizontal profiles of mass density, $\varrho(x,y=1.9,z,t)$, which are displayed in Fig.~\ref{fig:rho-x-y}. The initial profile (not shown) is altered at $t=290$ s with $2$ well-developed dense concentric shells (top). At a later time, $t=420$ s, more shells are seen (bottom). These shells result from the up-flowing and down-flowing plasma and from the magnetoacoustic-gravity waves, which propagate in the horizontal direction. These magnetoacoustic waves are surface-gravity-like waves, driven by the oscillations in $V_y$, and they propagate along the transition region. The upward-propagating plasma brings dense gas from lower atmospheric regions, while the down-falling flow evacuates plasma, which is well seen in the rings. Within the center of the flux tube, the oscillations in $V_y(x,y=1.9,z)$ and $\varrho(x,y=1.9,z)$ are in anti-phase. Compare Figs.~\ref{fig:Vy_y=1.9} and ~\ref{fig:rho-x-y}. \begin{figure} \centering{ \includegraphics[width=6.2cm,angle=0]{fig9a.ps}\\ \includegraphics[width=6.2cm,angle=0]{fig9b.ps}\\ \includegraphics[width=6.2cm,angle=0]{fig9c.ps}\\ } \caption{\small Time-signatures of $V_{\rm z}$ (top, solid line), $V_{\rm y}$ (top, dashed line), vertical component of the mass density flux $\varrho V_{\rm y}$ (middle), and approximate Alfv\'en wave energy flux (bottom), evaluated at the detection point ($x=0.1$ Mm, $y=1.9$ Mm, $z=0.1$ Mm) for the case of $x_{\rm 0}=0$ km.
} \label{fig:ts} \end{figure} Figure~\ref{fig:ts} (top) displays time-signatures of the transversal ($V_{\rm z}$) and vertical ($V_{\rm y}$) components of velocity, represented respectively by solid and dashed lines, collected at the detection point located at ($x=0.1$ Mm, $y=1.9$ Mm, $z=0.1$ Mm). These time-signatures reveal perturbations in which $V_{\rm z}$ and $V_{\rm y}$ are approximately in anti-phase. Hence, we infer that up-flows (which correspond to $V_{\rm y}>0$) lead to clockwise oscillations in $V_{\rm \theta}$ (represented in the vertical plane by $V_{\rm z}$), while down-flows are associated with counter-clockwise oscillations in $V_{\rm \theta}$. The Alfv\'en wave energy flux can be approximated by the following relation (Vigeesh et al. 2012): \beqa\label{eq:E_flux} Ef \approx \varrho_{\rm e} V_{\rm \theta}^2 c_{\rm A}\, , \eeqa where $\varrho_{\rm e}$ is the unperturbed mass density, $V_{\rm \theta}$ is the azimuthal component of the velocity, and $c_{\rm A}$ is the Alfv\'en speed, which is expressed as \beq \label{eq:ca} c_{\rm A}^2(x,y,z) = \frac{ {B}^2_{\rm e}(x,y,z) }{{\mu \varrho_{\rm e}(x,y,z)}}\, . \eeq Note that the vertical flows associated with the magnetic twister carry a mass flux with a maximum of $6\times 10^{-6}$ kg m$^{-2}$ s$^{-1}$ to the inner coronal heights, and thereby also contribute to the mass supply in the region of nascent solar wind formation (Figure~\ref{fig:ts}, middle). The net energy flux carried by these twisters is about $0.2\times 10^4$ W m$^{-2}$ (Fig.~\ref{fig:ts}, bottom), which is almost sufficient for the required coronal heating of the quiet-Sun and the solar wind energy losses. \subsection{Off-centrally-launched pulse} We now consider the case of an off-centrally-launched pulse, which corresponds to the location of the initial pulse off the flux tube axis and is realized by setting $x_{\rm 0}=0.1$ Mm in Eq.~(\ref{eq:perturb}).
\begin{figure} \centering{ \includegraphics[scale=0.275,angle=0]{fig10a.eps} \includegraphics[scale=0.275,angle=0]{fig10b.eps} } \caption{\small Spatial profiles of streamlines at $t=150$ s (top) and $t=300$ s (bottom) for the case of $x_{\rm 0}=100$ km. } \label{fig:V-lines-off} \end{figure} \begin{figure} \centering{ \includegraphics[scale=0.275,angle=0]{fig11a.eps}\\ \includegraphics[scale=0.275,angle=0]{fig11b.eps} } \caption{\small Spatial profiles of transverse velocity $V_{\rm z}(x,y,z=0,t)$ at $t=150$ s (top) and $t=300$ s (bottom) for the case of $x_{\rm 0}=100$ km. } \label{fig:Vz-off} \end{figure} \begin{figure} \centering{ \includegraphics[scale=0.275,angle=0]{fig12a.eps} \vspace{-1.5cm} \includegraphics[scale=0.275,angle=0]{fig12b.eps} } \caption{\small Spatial profiles of mass density $\varrho(x,y=1.9,z,t)$ at $t=150$ s (top) and $t=300$ s (bottom) for $x_{\rm 0}=0.1$ Mm. } \label{fig:rho-x-y-off} \end{figure} The spatial profiles of streamlines are drawn in Fig.~\ref{fig:V-lines-off}. At $t=150$ s, we observe the development of inner and outer eddies, as well as side vortices, seen at the bottom-left and top-right corners (top). At a later time, $t=300$ s, the flow pattern is more complex, with more local eddies generated as a result of an energy cascade to smaller scales (bottom). By comparing this with Fig.~\ref{fig:V-lines}, we conclude that the off-central pulse results in a more complex flow than the central pulse. This conclusion is further supported by Fig.~\ref{fig:Vz-off}, which illustrates vertical profiles of $V_{\rm z}(x,y,z=0)$, and by Fig.~\ref{fig:rho-x-y-off}, which shows the horizontal profiles of $\varrho(x,y=1.9,z)$. The symmetry of the centrally-launched pulse (Fig.~\ref{fig:Vz}) is broken for the case of the off-centrally-launched pulse (Fig.~\ref{fig:Vz-off}), with complex structures developed in the latter case.
\section{Summary and Conclusions}\label{sec:Summary} In this paper, we simulated Alfv\'en waves generated impulsively, either centrally or off-centrally, in a gravitationally stratified and magnetically confined solar magnetic flux tube with the temperature profile of Avrett and Loeser (2008). A novel result is that an outer shell with Alfv\'en wave velocity perturbations is generated within the flux tube at small spatial scales. The plasma associated with this shell rotates, following the magnetic field lines at this interface, with the velocity streamlines forming swirls. The vertical flows associated with this dynamics feed energy back to generate one more inner shell with Alfv\'en wave velocity perturbations within the tube. This process repeats in time, resulting in a third, inner shell. On a time-scale of a few hundred seconds, the multi-shell magnetic twister is fully developed in the tube. When viewed from the top of the tube, the collective appearance shows up as perturbations in the magnetic field lines, with the streamline rotational motion constituting the multi-shell magnetic twister. Each of the multiple plasma shells grows in time, and every shell is demarcated by a transition in density, which results in the emission $I \sim N_{\rm e}^2$. The scenario of multi-shell development is more complex in the case of the off-centrally launched Alfv\'en pulse, with a larger number of asymmetric shells generated in the system. The main new result is that the multi-shell magnetic twister and the associated multiple torsional Alfv\'en waves are generated within a magnetic flux tube embedded in the solar atmosphere. The net energy flux carried by these twisters is about $0.2\times 10^4$ W m$^{-2}$. This is almost sufficient for the required coronal heating of the quiet-Sun and the solar wind energy losses.
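The order of magnitude of this flux can be recovered from Eq.~(\ref{eq:E_flux}) with a back-of-the-envelope estimate. The Python sketch below uses illustrative upper-chromospheric values; the density, field strength, and velocity amplitude are assumptions for illustration, not simulation output, so only the order of magnitude of the result is meaningful:

```python
import numpy as np

# Order-of-magnitude estimate of the Alfven-wave energy flux of Eq. (E_flux),
# Ef ~ rho_e * V_theta^2 * c_A.  The values of rho_e and B_e below are
# assumed, illustrative upper-chromospheric numbers (not simulation output).
mu = 4.0e-7 * np.pi      # magnetic permeability of free space [H/m]
rho_e = 1.0e-8           # assumed mass density [kg m^-3]
B_e = 2.0e-3             # assumed magnetic field strength, ~20 G [T]
V_theta = 2.5e3          # azimuthal velocity perturbation [m/s]

c_A = B_e / np.sqrt(mu * rho_e)   # Alfven speed, Eq. (ca)
Ef = rho_e * V_theta**2 * c_A     # approximate energy flux [W m^-2]
print(f"c_A ~ {c_A/1e3:.0f} km/s, Ef ~ {Ef:.0f} W/m^2")
```

With these assumed numbers $Ef\sim 10^3$ W m$^{-2}$, the same order as the quoted $0.2\times 10^4$ W m$^{-2}$.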
The vertical flows associated with the magnetic twister carry a mass flux with a maximum of $6\times 10^{-6}$ kg m$^{-2}$ s$^{-1}$ to the inner coronal heights, and thereby they also contribute to the mass supply in the region of nascent solar wind formation. The Poynting energy flux carried by the magnetic twisters is high compared to the energy transported by chromospheric swirls. Therefore, we suggest that the magnetic twisters should be considered an important candidate for localized coronal heating and solar wind acceleration. The shell solutions seem to be the natural consequence of centering the initial pulse on the axis of the tube. This ensures that the perturbations have the same symmetry as the tube, and rings/shells are the obvious result. For more realistic perturbations, where the symmetry is not exact, multiple vortices with different orientations are likely to result (see, e.g., the 'realistically' driven vortices in Moll et al. 2012, their Figs. 9 and 11). For the implications of this for the observed heating see van Ballegooijen et al. (2011). \begin{figure} \centering{ \includegraphics[width=8.5cm,angle=0]{fig13.eps} } \caption{\small The line-of-sight view of the spatially averaged mass density map in units of $10^{-9}$ kg m$^{-3}$ (a) at the upper chromospheric height below the transition region is compared to the observational features of the recently discovered ring-type chromospheric swirl (Wedemeyer et al. 2013) (b). The comparison shows that recently observed chromospheric swirls may be an integrated appearance of our multi-shell magnetic twisters; note that the spatial resolution of the presented observations is $540$ km per pixel. } \label{fig:compare} \end{figure} Based on the current observations, the typical morphologies of the swirls are ring and spiral (Wedemeyer et al. 2013). If we fade the resolution of the density map as shown in Fig.
8 by 2-3 times, then the multi-shell chromospheric swirls appear as a ring-type swirl, as recently observed in the solar chromosphere (Wedemeyer-B\"ohm \& Rouppe van der Voort 2009; Wedemeyer-B\"ohm et al. 2012; Wedemeyer et al. 2013). The ring-type swirl and its apparent motion are intimately connected with the localized magnetic flux tube, the plasma filling it, and their collective dynamics under the influence of the driver. The multi-shell magnetic twister in a straight flux tube will appear as a ring-type swirl when observed with an imager of lower spatial resolution. This comparison is shown in Fig.~\ref{fig:compare}, where an observed ring-type chromospheric swirl matches in morphology, on a comparable spatial scale, the faded (density-averaged) model of the multi-shell magnetic twister. However, our model twister will appear as a multi-shell magnetic twister once the spatial resolution is improved to better than $70$ km per pixel. Therefore, our newly proposed model of the multi-shell magnetic twister provides the underlying physics of the drivers and morphology of such swirls, and a forecast for observations. The comparison shows that the recently observed chromospheric swirls may be an integrated appearance of our multi-shell magnetic twisters. Moreover, the present model resolves the question of the association of Alfv\'en waves with such twisters: it is shown that they are present at the multiple shells and collectively carry a large amount of energy to heat the corona locally and accelerate the solar wind. Based on the physical properties of the multi-shell magnetic twisters and the amount of energy and mass carried by them, it is suggested that these multi-shell twisters are responsible for the observed heating of the solar inner corona and for the formation of the nascent solar wind. Moreover, it is likely that the existence of the multi-shell magnetic twisters can be verified by observations at higher resolutions.
\acknowledgments We thank the referee for his/her valuable comments and suggestions that allowed us to significantly improve our paper. This work has been supported by NSF under the grant AGS 1246074 (K.M. \& Z.E.M.), and by the Alexander von Humboldt Foundation (Z.E.M.). A.K.S. thanks Sven Wedemeyer-B\"ohm for providing the observational result of ring-type chromospheric swirl. The software used in this work was in part developed by the DOE-supported ASC/Alliance Center for Astrophysical Thermonuclear Flashes at the University of Chicago. The 2D and 3D visualizations of the simulation variables have been carried out using respectively the IDL (Interactive Data Language) and VAPOR (Visualization and Analysis Platform) software packages.
\section{Introduction} In the last few years the most popular speculative idea in theoretical particle physics has been the possibility that extra spacetime dimensions exist. Much of the interest in this area was stimulated by the realization that constraints on the extra dimensions are relatively mild if only gravity, and not the Standard Model gauge interactions, is able to propagate in the extra dimensions or bulk\cite{Arkani-Hamed:1998rs,Antoniadis:1998ig}. This led to the possibility that the effective Planck scale in the extra dimensions is much lower than the commonly used four-dimensional Planck scale. If the effective Planck scale is of order a few TeV, extra dimensions might help resolve the hierarchy problem, and electroweak-scale effects of the extra dimensions might appear in future collider experiments. Gauss's law links the value of the effective Planck scale in the bulk to the conventional Planck scale via \begin{eqnarray} M_{pl}^2\sim R^nM_S^{n+2}\;. \label{scales} \end{eqnarray} Physical effects can present themselves via graviton exchange at future colliders, and an interesting class of processes is the pair production of gauge bosons at a photon-photon collider. The process $\gamma \gamma \to \gamma \gamma$ has been studied before\cite{Cheung:1999ja,Davoudiasl:1999di,Choudhury:1999gp}. The processes $\gamma \gamma \to W^+W^-$ and $\gamma \gamma \to ZZ$ were studied in Ref.~\cite{Rizzo:1999sy}. In the latter process the Standard Model contribution to $\gamma \gamma \to ZZ$ is known\cite{Jikia:1993di,Berger:1993tr} but was not included.
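To illustrate Eq.~(\ref{scales}): for a TeV-scale $M_S$ the implied compactification radius $R$ can be estimated in a few lines of Python. The sketch below (our illustration) uses the reduced Planck mass; order-one numerical factors are convention-dependent, so only the order of magnitude of $R$ is meaningful:

```python
import math

# Compactification radius implied by M_pl^2 ~ R^n M_S^{n+2} for a TeV-scale
# M_S.  Reduced Planck mass convention; order-one factors are ignored.
hbar_c = 1.973e-16        # conversion factor [GeV * m]
M_pl = 2.4e18             # reduced Planck mass [GeV]
M_S = 1.0e3               # assumed effective gravity scale, 1 TeV [GeV]

radii = {}
for n in (2, 4, 6):
    # R in GeV^-1, converted to meters
    radii[n] = (M_pl**2 / M_S**(n + 2))**(1.0 / n) * hbar_c
    print(f"n = {n}: R ~ {radii[n]:.1e} m")
```

For $n=2$ this gives $R$ of a few tenths of a millimeter, which is why sub-millimeter tests of the inverse-square law are relevant, while larger $n$ corresponds to far smaller radii that are inaccessible to direct gravity experiments.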
The process $\gamma \gamma \to ZZ$ is particularly attractive for the following reasons: (1) it provides another channel with which to assess the universality of the gravitational couplings to the gauge bosons; (2) the angular dependence of $\gamma \gamma \to ZZ$ is different from that of $\gamma \gamma \to \gamma \gamma$ because the former occurs only through the $s$-channel while $\gamma \gamma \to \gamma \gamma$ occurs through the $s$, $t$, and $u$ channels; (3) since only the $s$-channel contributes to the Kaluza-Klein (KK) process of $\gamma \gamma \to ZZ$ and the KK state is spin-two, we find that the only helicity amplitudes which do not vanish have opposite initial photon helicities; (4) the $Z$ boson's transverse and longitudinal polarizations can be exploited by measuring the angular distribution of its decay products. Our emphasis here will be on the particular process $\gamma\ga\to ZZ$, for which the complete calculation including the full Standard Model background has not been performed\footnote{After this work was completed, we became aware of a paper\cite{Choudhury:2002zm} which included an approximate calculation of the Standard Model background and calculated the helicity amplitudes. Apart from some obvious typographical errors, we agree with the angular dependences of their helicity amplitudes and obtain similar numerical results. We have in addition included the photon-photon luminosity and explored the role of polarization in isolating the signal, and have derived bounds on the scale $M_S$.}. We also present the helicity amplitudes for $\gamma\ga\to\gamma\ga$, which provide a basis for comparison and also allow us to make particular points about the properties of these processes that can be exploited in a comprehensive analysis of all the final states. The Standard Model helicity amplitudes for $\gamma \gamma \to ZZ$ were first published in Ref.~\cite{Jikia:1993di} and their analytic form was confirmed shortly thereafter\cite{Berger:1993tr}.
Numerical calculations of the cross sections were also performed in Refs.~\cite{Bajc:1993hp,Dicus:1993ux}. More recently the helicity amplitudes were again derived as a background for a search for a possible virtual supersymmetric particle contribution to the loop diagrams\cite{Gounaris:1999hb}. The three calculations of the analytic expressions for the matrix elements show complete agreement (apart from a typo in Ref.~\cite{Jikia:1993di} explained in Refs.~\cite{Berger:1993tr,Gounaris:1999hb}, and taking into account an unconventional definition of the Mandelstam variables $t$ and $u$ used in Ref.~\cite{Jikia:1993di}). The fermion loop contribution in the Standard Model was first calculated\cite{Glover:1988rg} in the context of the gluon fusion process $gg\to ZZ$. The results for that process are easily adapted to the process $\gamma\ga\to ZZ$ considered here. At high energies, where the low scale gravity signal should be most prominent, the Standard Model cross sections are dominated by the $W$ loop diagrams (as one expects since the $W$ boson is spin-one). Numerically, at energies sufficiently far above threshold, the cross section for the background of $\gamma\ga\to ZZ$ is an order of magnitude larger than the background of $\gamma\ga\to\gamma\ga$. This can be understood simply by comparing the size of the $WWZ$ coupling to the $WW\gamma$ coupling, whose ratio is determined solely by the Weinberg angle. Photon beams can be realized at a future $e^+e^-$ collider by Compton backscattering laser beams off the electron or positron beam\cite{Ginzburg:1981vm,Ginzburg:1982yr,Telnov:1989sd}. By exploiting circular polarization of the lasers and polarizing the electron beams, the contribution to cross sections from various initial state photon helicities can be adjusted.
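The order-of-magnitude enhancement quoted above can be made concrete with a one-line estimate (ours, using an assumed value of the weak mixing angle): replacing the two $WW\gamma$ vertices of the dominant $W$-loop by $WWZ$ vertices rescales the cross section by roughly $(g_{WWZ}^{}/g_{WW\gamma}^{})^4=\cot^4\theta_W$:

```python
import math

# Naive scaling of sigma(gamma gamma -> ZZ) / sigma(gamma gamma -> gamma gamma)
# from the W-loop coupling ratio g_WWZ/g_WWgamma = cot(theta_W): two vertices
# are replaced, so the cross-section ratio is roughly cot^4(theta_W).
sin2_thetaW = 0.2312                    # assumed value of sin^2(theta_W)
cot_thetaW = math.sqrt((1.0 - sin2_thetaW) / sin2_thetaW)
ratio = cot_thetaW**4                   # naive cross-section enhancement
print(f"cot^4(theta_W) ~ {ratio:.1f}")
```

The result, $\cot^4\theta_W\approx 11$, is indeed roughly an order of magnitude, in line with the numerical comparison of the two backgrounds.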
We have obtained the contributions for the graviton exchange signal for both $\gamma \gamma \to \gamma\ga$ and $\gamma \gamma \to ZZ$ at the helicity amplitude level through the use of FORM\cite{Vermaseren:2000nd}. If the photon-photon option at a next generation linear collider becomes a real possibility in the future, this will facilitate detailed investigations of these processes, putting in the full interference with the Standard Model contributions and retaining all information on the polarization of the incident photon beams. Furthermore, for the $ZZ$ final state, more sophisticated cuts on the $Z$ boson decay products via the density matrix formalism can be exploited to improve sensitivity to any signal. Finally, having the helicity amplitudes at our disposal allows us to understand angular distributions that reflect the fact that graviton exchange is spin-two in nature. Other processes have been considered as probes of low scale gravity. For cases where gravitons appear as virtual particles, calculations have been performed for the production of fermions\cite{Mathews:1998kf}, gauge bosons\cite{Atwood:1999cy,Agashe:1999qp}, Higgs bosons\cite{Rizzo:1999qv}, and final states beyond pair production\cite{Dvergsnes:2002nc}. The general helicity formalism for spin-two particles has been developed in Ref.~\cite{Gleisberg:2003ue}. Constraints have also been placed on these theories of extra dimensions by testing the gravitational inverse-square law. The case of $n=1$ is already ruled out by solar system observations, and tests at the sub-millimeter level\cite{Hoyle:2004cw} can provide bounds at the TeV level (and hence comparable to bounds obtained in collider experiments like the one discussed in this paper) for $n=2$. \section{Helicity Amplitudes for $\boldmath{\gamma \gamma \to \gamma \gamma}$} Feynman rules have been developed for the KK compactification of $n$ extra dimensions on a torus $T^n$ with all of the $n$ compactification radii equal\cite{Han:1998sg}.
Using the couplings of the $d=4$ gauge fields to gravity, one can analyze the possible effects of low scale gravity on gauge boson scattering. Since this phenomenology involves the exchange of massive spin-two KK states, there is a possibility of unique angular dependences in cross sections involving the exchange of these quanta. We define momentum and polarization vectors for the initial and final particles as\footnote{This choice of polarization vectors is the same as the one in Ref.~\cite{Jikia:1993di}. Our definitions of the Mandelstam variables require switching $t$ and $u$ when comparing with that paper.} \begin{eqnarray} p_{1} = \frac{\sqrt{s}}{2} (1 ; 0, 0, 1) && \qquad p_{2} = \frac{\sqrt{s}}{2} (1 ; 0, 0, -1)\nonumber\\ \nonumber \\ k_{1} = \frac{\sqrt{s}}{2} (1 ; \beta\sin\theta, 0, \beta\cos\theta) && \qquad k_{2} = \frac{\sqrt{s}}{2} (1 ; -\beta\sin\theta, 0, -\beta\cos\theta)\nonumber\\ \nonumber \\ e^{+}_{1}=e^{-}_{2}=\frac{1}{\sqrt{2}}(0;-1,-i,0) && \qquad e^{-}_{1}=e^{+}_{2}=\frac{1}{\sqrt{2}}(0;1,-i,0)\nonumber \end{eqnarray} \begin{eqnarray} e^{+*}_{3}=e^{-*}_{4}=\frac{1}{\sqrt{2}}(0;-\cos\theta,i,\sin\theta) \nonumber \end{eqnarray} \begin{eqnarray} e^{-*}_{3}=e^{+*}_{4}=\frac{1}{\sqrt{2}}(0;\cos\theta,i,-\sin\theta) \nonumber \end{eqnarray} \begin{eqnarray} e^{0}_{3}=\frac{\sqrt{s}}{2m_{z}}(\beta;\sin\theta,0,\cos\theta) \nonumber \end{eqnarray} \begin{eqnarray} e^{0}_{4}=\frac{\sqrt{s}}{2m_{z}}(\beta;-\sin\theta,0,-\cos\theta) \nonumber \end{eqnarray} \\ where $\beta = 1$ for the $\gamma\gamma\rightarrow\gamma\gamma$ case and $\beta = \sqrt{1-{{4M_Z^2}\over {s}}}$ for the $\gamma\gamma\rightarrow$ $ZZ$ case, $s = (p_{1} + p_{2})^{2}$~, $t = (p_{1} - k_{1})^{2}$, and $u = (p_{1} - k_{2})^{2}$.\\ The process $\gamma\gamma\rightarrow\gamma\gamma$ can be expressed in terms of three independent helicity amplitudes. The other helicity amplitudes are related to these three by virtue of crossing relations and parity considerations. 
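As a quick consistency check of these conventions (a sketch we add here, not part of the original calculation), one can verify numerically that the final-state momenta are on shell, that $s+t+u=2M_Z^2$, and that the longitudinal polarization vector is transverse to its momentum and normalized to $-1$:

```python
import numpy as np

# Sanity checks on the momentum and polarization conventions defined above,
# for gamma gamma -> ZZ kinematics at an arbitrary test point (GeV units).
s, mz, theta = 500.0**2, 91.19, 0.7
beta = np.sqrt(1.0 - 4.0 * mz**2 / s)
rs = np.sqrt(s)

g = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric (+,-,-,-)
dot = lambda a, b: a @ g @ b

p1 = rs / 2 * np.array([1.0, 0.0, 0.0, 1.0])
p2 = rs / 2 * np.array([1.0, 0.0, 0.0, -1.0])
k1 = rs / 2 * np.array([1.0, beta * np.sin(theta), 0.0, beta * np.cos(theta)])
k2 = rs / 2 * np.array([1.0, -beta * np.sin(theta), 0.0, -beta * np.cos(theta)])
e0_3 = rs / (2 * mz) * np.array([beta, np.sin(theta), 0.0, np.cos(theta)])

t = dot(p1 - k1, p1 - k1)
u = dot(p1 - k2, p1 - k2)

print(f"k1^2 - M_Z^2      = {dot(k1, k1) - mz**2:.2e}")
print(f"s + t + u - 2M_Z^2 = {s + t + u - 2 * mz**2:.2e}")
print(f"k1 . e0_3          = {dot(k1, e0_3):.2e}")
```

All three quantities vanish to machine precision, confirming that the polarization vectors above are consistent with the stated Mandelstam conventions.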
For the graviton exchange signal we find that only two of these three are nonvanishing, \begin{eqnarray} i{\mathcal M}_{++++}^{\gamma\ga} &=& -\frac{\kappa^{2}}{2}\Big(D_{E}(t) + D_{E}(u)\Big)s^{2}\nonumber \;, \\ &=&-\frac{\kappa^{2}}{2}\bigg(D_{E}\Big(-\frac{s}{2}(1-\cos\theta)\Big) + D_{E}\Big(-\frac{s}{2}(1+\cos\theta)\Big)\bigg)s^{2}\nonumber \;, \\ i{\mathcal M}_{++--}^{\gamma\ga} &=&-\frac{\kappa^{2}}{4}\Big(D_{E}(t) -D_{E}(u)\Big)\Big(u^{2}-t^{2}\Big)\nonumber\\ &=&-\frac{\kappa^{2}}{4}\bigg(D_{E}\Big(-\frac{s}{2}(1-\cos\theta)\Big)-D_{E} \Big(-\frac{s}{2}(1+\cos\theta)\Big)\bigg)s^{2}\cos\theta\nonumber \;, \\ i\mathcal{M}_{+++-}^{\gamma\ga} &=&0 \end{eqnarray} where $D(x)$ for $x=s$ and $D_{E}(x)$ for $x=t,u$ are the summed propagator functions\footnote{We find the sometimes used approximations \begin{eqnarray} \kappa ^2D(s)\approx -{{16\pi i}\over {M_S^4}}{\mathcal F}\;,\label{aprx} \end{eqnarray} where \begin{eqnarray} {\mathcal F}=\left \{\begin{array}{lr} \log\left ({{M_S^2}\over s}\right )& {\mathrm for} \:\: n=2\\ & \\ {2\over {n-2}} & {\mathrm for} \:\:n>2\;. \end{array} \right . \end{eqnarray} can cause deviations from the exact expressions of tens of percent in the cross section.} derived in Ref.~\cite{Han:1998sg} and $\kappa =\sqrt{16\pi G_N}$. We have therefore used the full expression for $D(s)$ for our analysis of the $\gamma \gamma \to ZZ$ process, which is \begin{eqnarray} D(s) &=& {{s^{{n\over 2}-1}R^n}\over {(4\pi)^{n/2}\Gamma(\frac{n}{2})}} \left (\pi+2iI\left (\frac{M_{S}}{\sqrt{s}} \right )\right )\;, \label{Ds} \end{eqnarray} where \begin{eqnarray} I(x)&=&\left\{ \begin{array}{lr} -\sum_{k=1}^{{n\over 2}-1}{1\over {2k}}x^{2k} -{1\over 2}\log (x^2-1)& \qquad n=\mathrm{even}\\ & \\ -\sum_{k=1}^{{{n-1}\over 2}-1}{{1}\over {2k-1}} x^{2k-1} +{1\over 2}\log\left ({{x+1}\over {x-1}}\right )& \qquad n=\mathrm{odd} \end{array} \right. 
\;, \end{eqnarray} and \begin{eqnarray} D_{E}(t) &=& {{|t|^{{n\over 2}-1}R^n}\over {(4\pi)^{n/2}\Gamma({n\over 2})}} \left (-2iI_E\left ( \frac{M_{S}}{\sqrt{|t|}}\right )\right )\;, \end{eqnarray} where \begin{eqnarray} I_E(x)&=&\left\{ \begin{array}{cc} (-1)^{{n\over 2}+1}\sum_{k=1}^{{n\over 2}-1}{{(-1)^{k}}\over {2k}} x^{2k} +{1\over 2}\log (x^2+1)& \qquad n=\mathrm{even}\\ & \\ (-1)^{{n-1}\over 2}\sum_{k=1}^{{n-1}\over 2}{{(-1)^{k}}\over {2k-1}} x^{2k-1} +\tan^{-1}(x)& \qquad n=\mathrm{odd} \end{array} \right. \;. \end{eqnarray} The scale $M_S$ is defined as \begin{eqnarray} R^n={{(4\pi)^{n/2}\Gamma(n/2)}\over {2M_S^{n+2}G_N}}\;, \end{eqnarray} where $G_N=1/(8\pi\bar{M}_{pl}^2)$ is the 4-dimensional Newton's constant, with $\bar{M}_{pl}=2.4\times 10^{18}$~GeV the reduced Planck mass. This definition for the mass scale $M_S$ is the one of Han, Lykken, and Zhang\cite{Han:1998sg} and makes precise the relationship in Eq.~(\ref{scales}). Other possible conventions for the mass scale were considered in Refs.~\cite{Giudice:1998ck,Hewett:1998sn} and should not be confused with the one chosen here. The amplitudes ${\mathcal M}_{++++}^{\gamma\ga}$ and ${\mathcal M}_{+-+-}^{\gamma\ga}$, and also ${\mathcal M}_{++++}^{\gamma\ga}$ and ${\mathcal M}_{+--+}^{\gamma\ga}$, are related by crossing \begin{eqnarray} {\mathcal M}_{+--+}^{\gamma\ga} (s,t,u)&=& {\mathcal M}_{++++}^{\gamma\ga}(u,t,s) \nonumber \;, \\ {\mathcal M}_{+-+-}^{\gamma\ga} (s,t,u) &=& {\mathcal M}_{++++}^{\gamma\ga}(t,s,u) \;. \end{eqnarray} It is also noteworthy that the matrix element ${\mathcal M}_{++--}$ vanishes in the approximation $D(s)\approx D_E(|t|)\approx D_E(|u|)$.
Representing the initial and final helicity states of the photons as $\lambda_{1}\lambda_{2}$ and $\lambda_{3}\lambda_{4}$, respectively, the other four non-zero helicity amplitudes can be expressed in terms of one of the previous amplitudes through \begin{eqnarray} {\mathcal M}_{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}^{\gamma\ga}(s,t,u)&= &{\mathcal M}_{-\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4}}^{\gamma\ga} (s,t,u)\;, \label{relation1}\\ {\mathcal M}_{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}^{\gamma\ga}(s,t,u)&= &{\mathcal M}_{\lambda_{2}\lambda_{1}\lambda_{4}\lambda_{3}}^{\gamma\ga}(s,t,u) \;, \label{relation2} \end{eqnarray} which are results of Bose symmetry and parity. The angular dependence of these matrix elements is in agreement with the results of Ref.~\cite{Atwood:1999cy}. By squaring and summing these matrix elements and making the approximation of Eq.~(\ref{aprx}), the polarization averaged result for the signal only (without the Standard Model background) can be derived, namely \begin{eqnarray} {1\over 4}\sum|{\mathcal M}^{\gamma\ga}|^2 ={{\kappa ^4}\over 2}|D(s)|^2(s^4+t^4+u^4)\;.\label{2phosig} \end{eqnarray} The factor ${1\over 4}$ is the initial state photon polarization average. This is in agreement with the corresponding result in Ref.~\cite{Cheung:1999ja} if an erroneous factor of one-half in the KK propagator of an earlier version of Ref.~\cite{Han:1998sg} is omitted. Furthermore, our result agrees with the expression of Ref.~\cite{Cheung:1999ja} when written in terms of $M_{S}$. The signal represented by these amplitudes for $\gamma \gamma \to \gamma \gamma$ at photon-photon colliders has been studied before\cite{Cheung:1999ja,Davoudiasl:1999di,Choudhury:1999gp}. Explicit analytic expressions for the helicity amplitudes allow one to understand more fully the optimal strategy for exploiting polarization to optimize the sensitivity.
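The consistency of Eq.~(\ref{2phosig}) with the individual helicity amplitudes can be checked numerically. In the approximation $D(s)\approx D_E(|t|)\approx D_E(|u|)\equiv D$, the six nonvanishing amplitudes reduce to $-\kappa^2 D s^2$, $-\kappa^2 D t^2$, and $-\kappa^2 D u^2$ (each occurring twice), and the following sketch (with arbitrary test values) confirms the polarization-averaged result:

```python
import numpy as np

# Cross-check of Eq. (2phosig): in the approximation D(s) ~ D_E(|t|) ~
# D_E(|u|) = D, the nonvanishing gamma gamma -> gamma gamma helicity
# amplitudes are M ~ -kappa^2 D X^2 for X = s, t, u (each twice, from
# parity), and their average reproduces (kappa^4/2)|D|^2 (s^4 + t^4 + u^4).
kappa, D = 1.0, 0.3 - 0.7j               # arbitrary test values
s, costh = 2.0, 0.4                      # arbitrary massless kinematics
t, u = -s / 2 * (1 - costh), -s / 2 * (1 + costh)

# Six nonvanishing amplitudes: (++++), (----), (+-+-), (-+-+), (+--+), (-++-)
amps = [-kappa**2 * D * X**2 for X in (s, s, t, t, u, u)]
avg = sum(abs(a)**2 for a in amps) / 4.0          # initial-polarization average
target = kappa**4 / 2 * abs(D)**2 * (s**4 + t**4 + u**4)
print(f"averaged |M|^2 = {avg:.6f}, Eq. (2phosig) = {target:.6f}")
```

The two numbers agree identically, as they must, since the sum over the six amplitudes gives $2\kappa^4|D|^2(s^4+t^4+u^4)$ before the factor of ${1\over 4}$.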
In our discussion of the process $\gamma \gamma \to ZZ$ beginning in the next section, we will be able to compare to the simpler case of $\gamma \gamma \to \gamma \gamma$ and highlight some important contrasts. A detailed analysis of $\gamma \gamma \to \gamma \gamma$ as a mode to study exchange of KK states at photon-photon colliders will appear elsewhere\cite{BKZ}. \section{Helicity Amplitudes for $\gamma \gamma \to ZZ$} The graviton exchange Feynman rules for the $\gamma\gamma\rightarrow Z Z$ process are similar to the $\gamma\gamma\rightarrow\gamma\gamma$ case except for the restriction of the process to the $s$-channel. This restriction is due to the fact that there is no interaction vertex between $\gamma$, $Z$, and the KK state. We define \begin{eqnarray} s_{4} &=& s - 4M_Z^2 \nonumber \;, \\ Y &=& tu - M_Z^4 = s\cdot p^{2}_{TZ}\;, \end{eqnarray} where $p_{TZ}^{}$ is the transverse momentum of either $Z$. For the TT polarization modes of the final state $Z$ bosons (the notation T denotes collectively the two transverse polarizations, $+$ and $-$, of the $Z$ boson, and L will denote the longitudinal polarization, $0$), we obtained\footnote{We have chosen to denote the helicity amplitudes for $\gamma\ga\to\gamma\ga$ by ${\mathcal M}^{\gamma\ga}$ and those for our main focus $\gamma\ga\to ZZ$ without any superscript.} \begin{eqnarray} i{\mathcal M}_{+-++}=i{\mathcal M}_{+---}&=&-D(s)2\kappa^{2} {{Y}\over {s_{4}}}M_Z^2 \nonumber \\ &=&-D(s){{\kappa^{2}M_Z^2s}\over {2}}\sin^{2}\theta \;, \\ i{\mathcal M}_{+--+}&=&D(s){{\kappa^{2}}\over {4\beta^3}}\left (2\beta M_Z^4 -2(t-u)M_Z^2-t^2(1+\beta)+u^2(1-\beta)\right ) \nonumber \\ &=&-D(s){{\kappa^{2}s^{2}}\over {8}}(1-\cos\theta)^{2} \;, \\ i{\mathcal M}_{+-+-}&=&D(s){{\kappa^{2}}\over {4\beta^3}}\left (2\beta M_Z^4 -2(u-t)M_Z^2-u^2(1+\beta)+t^2(1-\beta)\right ) \nonumber \\ &=&-D(s){{\kappa^{2}s^{2}}\over {8}}(1+\cos\theta)^{2} \;.
\end{eqnarray} The amplitudes ${\mathcal M}_{+-+-}$ and ${\mathcal M}_{+--+}$ are related either by $t\leftrightarrow u$ or by $\beta \to -\beta$. For the LL final state polarization mode, we obtained \begin{eqnarray} i{\mathcal M}_{+-00}&=&D(s){{\kappa^{2}Y}\over {2s_{4}}}(s+4M_Z^2) \nonumber \\ &=&D(s){{\kappa^{2}s}\over {8}}(s+4M_Z^2)\sin^{2}\theta \;. \end{eqnarray} Finally for the TL final state polarization modes, we obtained \begin{eqnarray} i{\mathcal M}_{+-+0}=-i{\mathcal M}_{+-0-}&=& -D(s){{\kappa^{2}\Delta Y}\over {\beta^2}} \left (\beta+{{t-u}\over {s}}\right ) \nonumber \\ &=&-D(s){{\kappa^{2}M_Zs}\over {2}} \sqrt{{{s}\over {2}}}\sin\theta(1+\cos\theta) \;, \\ i{\mathcal M}_{+--0}=-i{\mathcal M}_{+-0+}&=& -D(s){{\kappa^{2}\Delta Y}\over {\beta^2}} \left (\beta+{{u-t}\over {s}}\right ) \nonumber \\ &=&-D(s){{\kappa^{2}M_Zs}\over {2}} \sqrt{{{s}\over {2}}}\sin\theta(1-\cos\theta) \;, \end{eqnarray} with $\Delta = \sqrt{{sM_Z^2}\over {2Y}}$. Other helicity modes can be obtained from these by using equations analogous to Eqns.~(\ref{relation1})-(\ref{relation2}). The first of these equations must be modified to account for the possibility of the TL final state \begin{eqnarray} {\mathcal M}_{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}(s,t,u,\beta)&= &{\mathcal M}_{-\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4}}(s,t,u,\beta) (-1)^{\lambda _3-\lambda_4}\;, \label{relation3} \\ {\mathcal M}_{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}(s,t,u,\beta)&= &{\mathcal M}_{\lambda_{2}\lambda_{1}\lambda_{4}\lambda_{3}}(s,t,u,\beta) \;. \label{relation4} \end{eqnarray} This amounts to an extra minus sign only. One can also obtain a relation between TL amplitudes that amounts to taking $\beta \to -\beta$, but we have chosen to display these helicity amplitudes separately to emphasize their relationship under the interchange $t\leftrightarrow u$. 
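The equality of the Mandelstam-variable and scattering-angle forms of these amplitudes can be checked numerically: $\kappa^2$ and $D(s)$ cancel in the comparison, so only the kinematic prefactors matter. The sketch below (our own illustrative code) uses $t,u = M_Z^2 - \tfrac{s}{2}(1 \mp \beta\cos\theta)$ with $\beta=\sqrt{1-4M_Z^2/s}$:

```python
import math

MZ = 91.1876  # Z boson mass in GeV

def kinematics(s, cos_th):
    """Mandelstam invariants for gamma gamma -> Z Z at c.o.m. angle theta."""
    beta = math.sqrt(1.0 - 4.0 * MZ**2 / s)
    t = MZ**2 - 0.5 * s * (1.0 - beta * cos_th)
    u = MZ**2 - 0.5 * s * (1.0 + beta * cos_th)
    return beta, t, u

s, cos_th = 800.0**2, 0.3
beta, t, u = kinematics(s, cos_th)
s4 = s - 4.0 * MZ**2          # equals s * beta**2
Y = t * u - MZ**4             # equals s * pTZ**2

# M_{+-00} with kappa^2 D(s) stripped off: Mandelstam vs angular form
lhs = Y / (2.0 * s4) * (s + 4.0 * MZ**2)
rhs = s / 8.0 * (s + 4.0 * MZ**2) * (1.0 - cos_th**2)

# M_{+-++}: 2 Y MZ^2 / s4  vs  (MZ^2 s / 2) sin^2(theta)
lhs_tt = 2.0 * Y * MZ**2 / s4
rhs_tt = 0.5 * MZ**2 * s * (1.0 - cos_th**2)
print(lhs, rhs, lhs_tt, rhs_tt)
```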
Helicity modes ${\mathcal M}_{-+\lambda_3\lambda_4}$ can be obtained from the corresponding amplitudes ${\mathcal M}_{+-\lambda_3\lambda_4}$. All other independent helicity amplitudes vanish; in particular, the signal vanishes if the initial photons have the same helicity. The angular dependence of these helicity amplitudes again agrees with that found in Ref.~\cite{Atwood:1999cy}. At high energies ($\sqrt{s}>>M_Z$) the Standard Model background is dominated by $Z$ bosons in the transverse polarization states. There are contributions from all initial helicity possibilities of the incident photons. The Higgs boson contributes only to channels in which the two initial photons have the same helicity ($\lambda_{1}=\lambda_{2}$) and the final state $Z$ bosons have the same helicities ($\lambda_{3}=\lambda_{4}$). This property reflects the fact that the Higgs boson is spin-zero, and while the Higgs boson does not appreciably affect the results for the low scale gravity signal, we mention it here to contrast it with the spin-two nature of the $s$-channel graviton exchange graphs. The $s$-channel graviton exchange graphs require differing helicities ($\lambda_{1}=-\lambda_{2}$) for the initial photons. The dominant matrix elements at high energies ($\sqrt{s}>>M_Z$) are ${\mathcal M}_{+-+-}$, ${\mathcal M}_{+--+}$ and ${\mathcal M}_{+-00}$, which have the following angular dependences respectively: $t^2={s^2\over 4}(1-\cos\theta)^2$, $u^2={s^2\over 4}(1+\cos\theta)^2$, and $tu={s^2\over 4}\sin^2\theta$. The absence of a signal in channels where the initial photons have the same helicity differs from the $\gamma \gamma \to \gamma \gamma$ case, because in addition to the $s$-channel diagram, the $\gamma \gamma \to \gamma \gamma$ process has additional contributions from the $t$ and $u$ channels.
This impacts the analysis in two ways: (1) For $\gamma \gamma \to ZZ$ one can try to isolate the signal by arranging the initial state helicities of the incoming photons to be opposite. This can be done by appropriately choosing the initial electron and positron polarizations as well as the polarization of the backscattered laser beams. (2) The signal for $\gamma \gamma \to ZZ$ is somewhat smaller than the signal for $\gamma \gamma \to \gamma \gamma$ expressed in Eq.~(\ref{2phosig}). This makes finding a signal harder, and weakens the overall bound one could otherwise place on the scale $M_S$. Since the interference between the signal and the background can be crucial to the detectability of any signal, it is important to examine not only their overall sizes but also their relative phases. At large energies, $s>>M_Z^2$, the Standard Model background is dominated by the $W$ boson loops, and these dominant contributions become predominantly imaginary\footnote{For explicit expressions, see for example Eqn.~(3.26) of Ref.~\cite{Jikia:1993di} or Eqn.~(10) of Ref.~\cite{Jikia:1993tc}.}. The signal involves the propagator function\cite{Han:1998sg} \begin{eqnarray} D(s)&=&\sum_{\vec{n}}{i\over {s-m_{\vec{n}}^2+i\epsilon}}\;. \end{eqnarray} Using \begin{eqnarray} {1\over {s-m^2+i\epsilon}}= P\left ({1\over {s-m^2}}\right )-i\pi\delta(s-m^2)\;, \end{eqnarray} yields the expression in Eq.~(\ref{Ds}), and one recognizes that the imaginary part of $D(s)$ contributes to the real part of the helicity amplitudes, and the real part of $D(s)$ contributes to the imaginary part of the helicity amplitudes. Physically speaking, the imaginary part of $D(s)$ involving $I(M_S/\sqrt{s})$ arises from the (coherent) summation of the large number of nonresonant states and typically dominates for $s<<M_S^2$. So in the physical region where we contemplate looking for a graviton exchange signal, $M_Z^2<<s<<M_S^2$, the background is mostly imaginary and the signal is mostly real.
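The principal-value/delta-function decomposition used above can be illustrated numerically: for a smooth weight $g(m^2)$ mimicking a dense KK spectrum, the small-$\epsilon$ integral reproduces $P\!\int g/(s-m^2)$ in its real part and $-\pi g(s)$ in its imaginary part. The sketch below uses an arbitrary toy weight purely for illustration:

```python
import math

def plemelj_demo(s=1.0, eps=1e-3, a=0.0, b=2.0, n=400_000):
    """Integrate g(m2)/(s - m2 + i*eps) on [a, b] and compare with the
    Sokhotski-Plemelj decomposition P(1/(s-m2)) - i*pi*delta(s-m2)."""
    g = lambda m2: math.exp(-m2)          # toy spectral weight
    h = (b - a) / n
    re_part = im_part = 0.0
    for i in range(n):
        m2 = a + (i + 0.5) * h            # midpoint rule
        x = s - m2
        re_part += g(m2) * x / (x * x + eps * eps) * h
        im_part += -g(m2) * eps / (x * x + eps * eps) * h
    # principal value via symmetric subtraction (interval symmetric about s,
    # so P-integral of 1/(s-m2) alone vanishes)
    pv = sum((g(a + (i + 0.5) * h) - g(s)) / (s - (a + (i + 0.5) * h)) * h
             for i in range(n))
    return re_part, im_part, pv, -math.pi * g(s)

re_val, im_val, pv, delta = plemelj_demo()
print(re_val, pv)      # real part  ->  principal-value integral
print(im_val, delta)   # imag part  -> -pi * g(s)
```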
One point that should not be overlooked is that the approximation for $D(s)$ sometimes employed not only approximates the imaginary part, but also completely drops the real part, which can still have a significant interference with the $W$ loop Standard Model background. It should also be kept in mind that the $W$ loop background approaches its asymptotic behaviour rather slowly, so the interference can remain nonnegligible in practice, especially for the realistic case of $\sqrt{s_{ee}}=1$~TeV. We find the TL polarization modes for the final state $Z$ bosons to be nonzero, but suppressed at high energies relative to the dominant helicity amplitudes identified above by a factor $M_Z/\sqrt{s}$. These polarization modes are of course absent in the case of final state photons in $\gamma \gamma \to \gamma \gamma$. Finally the TT polarization modes ${\mathcal M}_{+-++}$ and ${\mathcal M}_{+---}$ are suppressed by a factor $M_Z^2/s$ because they require the $Z$ bosons to have the same helicity. These amplitudes would vanish in the limit where $M_Z$ is taken to zero. \section{Signal and Background} High energy photons can be obtained by backscattering laser photons with energies of a few electron volts off high energy beams of electrons or positrons. Such colliders have come to be called photon-photon colliders or $\gamma \gamma$ colliders. This technique allows a much harder spectrum of photons than is available in the usual Weizs\"acker-Williams spectrum. In fact, photon-photon collisions with energies of almost the same order as the parent $e^+e^-$ collider can be obtained. Furthermore, polarization of the electron and positron beams together with polarization of the lasers can yield polarized photon beams. Therefore by adjusting these polarizations, one can enhance or suppress matrix elements with differing initial state photon helicities.
In the case of the $ZZ$ (and $W^+W^-$) final states, one can also in principle use the differing decay distributions to study the polarization states of the final state gauge bosons. This technique has not been employed in this analysis; we have instead imposed a simple angular cut on the produced $Z$ bosons. The subprocess cross sections are given by $d\hat{\sigma}_{++}$ and $d\hat{\sigma}_{+-}$ where the final state polarizations have been summed over. Folding in the photon luminosity functions $f(x_i)$ and $\xi(x_i)$ for $i=1,2$, one obtains the differential cross section \begin{eqnarray} d\sigma_{\lambda_3\lambda_4}&=&\int ^{y_m^2}_{M_Z^2/s_{ee}}d\tau \int ^{y_m}_{\tau/y_m}{{dy}\over y} f(y)f(\tau/y)\nonumber \\ &&\times \left [{1\over 2}\left \{1+\xi(y)\xi(\tau/y)\right \} d\hat{\sigma}_{++\lambda_3\lambda _4}(s_{\gamma\gamma}) +{1\over 2}\left \{1-\xi(y)\xi(\tau/y)\right \} d\hat{\sigma}_{+-\lambda _3\lambda_4}(s_{\gamma\gamma})\right ]\;, \end{eqnarray} where $y=E_\gamma/E_e$ and $\tau=s_{\gamma \gamma}/s_{ee}$ are the ratios of photon energies to the parent electron/positron energies. The energy spectrum and helicity of the backscattered photons, $f(y)$ and $\xi(y)$, are given in Refs.~\cite{Ginzburg:1981vm,Ginzburg:1982yr,Telnov:1989sd}. We have taken the usual choice where the laser energy $\omega_0$ is chosen so that $x=4E_e\omega_0/m_e^2=2(1+\sqrt{2})\approx 4.8$ and $y_m=x/(x+1)\approx 0.83$. The Standard Model background for $\gamma \gamma \to ZZ$ (and $\gamma \gamma \to \gamma \gamma$) is dominated by only a few helicity amplitudes at high energies. For equal initial photon helicities the contribution to the cross section from the amplitude ${\mathcal M}_{++++}$ is more than an order of magnitude larger than any other contribution even after a reasonable angular cut on the final state $Z$ bosons.
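The parameter choice quoted above can be reproduced directly. In the sketch below, the spectrum formula is the standard unpolarized Compton backscattering result of Ginzburg et al. (normalization omitted), included only as an illustration; the identification of $x=2(1+\sqrt{2})$ with the $e^+e^-$ pair-creation threshold for the backscattered photon is the usual motivation for this choice:

```python
import math

def compton_x(E_e_GeV, omega0_eV):
    """x = 4 E_e omega0 / m_e^2 for beam energy E_e and laser photon energy omega0."""
    m_e = 0.511e-3  # electron mass in GeV
    return 4.0 * E_e_GeV * (omega0_eV * 1e-9) / m_e**2

x = 2.0 * (1.0 + math.sqrt(2.0))   # threshold choice, ~4.8
y_m = x / (x + 1.0)                # spectrum endpoint, ~0.83

def f_unpolarized(y, x=x):
    """Unnormalized backscattered-photon spectrum for zero polarization product."""
    r = y / (x * (1.0 - y))
    return 1.0 / (1.0 - y) + 1.0 - y - 4.0 * r * (1.0 - r)

print(x, y_m)
```

The spectrum hardens toward the endpoint $y_m$, which is why a sizeable fraction of the $\gamma\gamma$ luminosity sits near the top of the accessible energy range.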
Similarly in the unequal initial photon helicity case the contribution to the cross section is dominated by the two amplitudes ${\mathcal M}_{+-+-}$ and ${\mathcal M}_{+--+}$. The cross section for longitudinally polarized $Z$ bosons arising from ${\mathcal M}_{+-00}$ is at least an order of magnitude smaller for $\sqrt{s_{\gamma \gamma}}>500$~GeV. These amplitudes are dominated at high energies by the $W$ loop contributions (as opposed to the fermion loop diagrams), so the relative size of the cross sections for $\gamma \gamma \to \gamma \gamma$ and $\gamma \gamma \to ZZ$ is easily estimated in this limit. One simply substitutes the relative sizes of the $\gamma WW$ and $ZWW$ couplings: $\sigma(\gamma \gamma \to \gamma \gamma)=\tan ^4\theta_W \sigma(\gamma \gamma \to ZZ)$, and the $ZZ$ final state is enhanced by a factor of about twelve. The fact that the signal for graviton exchange contributes only to helicity amplitudes with unequal initial photon helicities can be exploited experimentally. By selecting the electron, positron, and laser polarizations to give the desired initial photon helicities, one can suppress the large background arising from ${\mathcal M}_{++++}$ while enhancing the signal. In contrast the process $\gamma \gamma\to \gamma \gamma$ has signal contributions in both same and opposite initial photon helicity channels. We have assumed a Higgs boson mass of $M_H=150$~GeV to make the plots. A higher Higgs mass would appear as a resonance in some of the cross sections ($\sigma _{++00}$, $\sigma _{++++}$, and $\sigma _{++--}$), but since the resonance is a small fraction of the background for any $M_H>400$~GeV, the exact value of the Higgs mass is completely irrelevant for determining the size of the graviton signal plus background considered here. Similarly in the region where $\sqrt{s_{\gamma\ga}}$ is several hundred GeV, the Standard Model $W$ loop background completely dominates over the fermion loops.
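The quoted enhancement factor follows immediately from the weak mixing angle. The sketch below assumes an on-shell value $\sin^2\theta_W\approx 0.223$; the precise number depends on the renormalization scheme:

```python
# sigma(gamma gamma -> gamma gamma) = tan^4(theta_W) * sigma(gamma gamma -> ZZ)
# in the W-loop-dominated high-energy limit, so the ZZ channel is larger by
# 1/tan^4(theta_W).
sin2_thw = 0.223                        # assumed on-shell sin^2(theta_W)
tan4_thw = (sin2_thw / (1.0 - sin2_thw)) ** 2
enhancement = 1.0 / tan4_thw
print(enhancement)                      # roughly a factor of twelve
```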
Nevertheless we mention that we have used a top quark mass of $m_t=175$~GeV, and occasionally one can notice a change in behaviour in the Standard Model background at the threshold $\sqrt{s_{\gamma\ga}}=2m_t$. The cross sections for various helicity combinations of the initial state photons and final state $Z$ bosons are shown in Figs. (1)-(5) for the Standard Model background and for the graviton exchange signal plus background for $n=4$ and for $M_S=3,4,5,6$~TeV. We have employed an angular cut on the c.o.m. scattering angle of $|\cos \theta | < \cos(\pi/6)$. The signal is dominated by the cross sections $\sigma_{+-+-}$, $\sigma_{+--+}$, and $\sigma_{+-00}$ shown in Figs. (1) and (2). For large $\sqrt{s_{\gamma\ga}}$ ($\sqrt{s_{\gamma\ga}}>>M_{Z}$), the cross sections grow like $s_{\gamma\ga}^3/M_S^8$. Moreover, for such large energies the $TL$ final state signals shown in Fig. (3) grow like $s_{\gamma\ga}^2M_Z^2/M_S^8$ while the remaining $TT$ amplitudes shown in Fig. (4) grow like $s_{\gamma\ga}M_Z^4/M_S^8$ for large $s_{\gamma\ga}$. When the signal and background are of comparable size ($\sqrt{s_{\gamma\ga}}\alt 1$~TeV), the contribution from the transverse states shown in Fig. (1) will dominate the signal since the interference with the underlying Standard Model background determines its overall size. Therefore it is important to include the interference between the signal and background when estimating the reach of possible future experimental searches. The background consists of the Standard Model contributions from the opposite photon helicity ($\lambda_{1}=-\lambda_{2}$) modes shown in Figs. (1)-(4) as well as from the same photon helicity ($\lambda_{1}=\lambda_{2}$) modes shown in Fig. (5), which, as previously mentioned, do not receive contributions from the spin-two graviton exchange. The background is dominated by the cross section $\sigma _{++++}$ which can exceed 100~femtobarns.
Unlike the process $\gamma\ga\to\gamma\ga$ there is no signal contribution in this mode for $\gamma\ga\to ZZ$ because the latter only proceeds via the $s$-channel. Furthermore the overall size of $\gamma\ga\to ZZ$ is larger than $\gamma\ga\to\gamma\ga$ due to the enhanced $WWZ$ coupling. For most practical purposes the overall level of the signal and background can be estimated by concentrating attention on the contributions in Figs. (1) and (5), which dominate in most cases. We do not present a figure summing these contributions since the optimal strategy for uncovering the signal will be to use polarization to isolate the helicity amplitudes containing the signal, as outlined in detail below. \vskip\bigskipamount} \centerline{\hbox{\epsfxsize=4.0in\epsffile{pmpm.eps}}} \parbox{5.5in}{\small \noindent Fig. 1: The cross section is shown for $\sigma_{+-+-}=\sigma_{+--+}$ for the Standard Model background (solid) and for signal plus background (dashed) for $n=4$ and $M_S=3$~TeV, $4$~TeV, $5$~TeV, and $6$~TeV from top to bottom. The signal cross sections grow like $s^3/M_S^8$ in the region $M_Z^2<<s<<M_S^2$. A cut has been placed on the c.o.m. scattering angle $|\cos \theta | < \cos(\pi/6)$.} \vskip\bigskipamount} \vskip\bigskipamount} \centerline{\hbox{\epsfxsize=4.0in\epsffile{pm00.eps}}} \parbox{5.5in}{\small \noindent Fig. 2: The cross section is shown for $\sigma_{+-00}$ for the Standard Model background (solid) and for signal plus background (dashed) for $n=4$ and $M_S=3$~TeV, $4$~TeV, $5$~TeV, and $6$~TeV from top to bottom. The signal cross section grows like $s^3/M_S^8$ in the region $M_Z^2<<s<<M_S^2$. A cut has been placed on the c.o.m. scattering angle $|\cos \theta| < \cos(\pi/6)$.} \vskip\bigskipamount} \vskip\bigskipamount} \centerline{\hbox{\epsfxsize=4.0in\epsffile{pmp0.eps}}} \parbox{5.5in}{\small \noindent Fig.
3: The cross section is shown for $\sigma_{+-+0}=\sigma_{+-0-} (\approx \sigma_{+-0+}=\sigma_{+--0})$ for the Standard Model background (solid) and for signal plus background (dashed) for $n=4$ and $M_S=3$~TeV, $4$~TeV, $5$~TeV, and $6$~TeV from top to bottom. The signal cross sections grow like $s^2M_Z^2/M_S^8$ in the region $M_Z^2<<s<<M_S^2$. A cut has been placed on the c.o.m. scattering angle $|\cos \theta| < \cos(\pi/6)$.} \vskip\bigskipamount} \vskip\bigskipamount} \centerline{\hbox{\epsfxsize=4.0in\epsffile{pmpp.eps}}} \parbox{5.5in}{\small \noindent Fig. 4: The cross section is shown for $\sigma_{+-++}=\sigma_{+---}$ for the Standard Model background (solid) and for signal plus background (dashed) for $n=4$ and $M_S=3$~TeV, $4$~TeV, and $5$~TeV from top to bottom. The signal cross section grows like $sM_Z^4/M_S^8$ in the region $M_Z^2<<s<<M_S^2$. A cut has been placed on the c.o.m. scattering angle $|\cos \theta| < \cos(\pi/6)$.} \vskip\bigskipamount} \vskip\bigskipamount} \centerline{\hbox{\epsfxsize=4.0in\epsffile{pp.eps}}} \parbox{5.5in}{\small \noindent Fig. 5: The cross sections for the case of equal photon helicities are shown. Since the graviton signal does not contribute to these modes, what is shown arises from the Standard Model alone and contributes only as background. Notice the wide range of scales and the dominance of $\sigma_{++++}$ for the larger $\sqrt{s_{\gamma\ga}}$ of interest. At the lower left the unlabeled curves correspond to $\sigma_{++--}$ (the larger one) and $\sigma_{++-0}$ (the smaller one). A cut has been placed on the c.o.m. scattering angle $|\cos \theta| < \cos(\pi/6)$.} \vskip\bigskipamount} \newpage \vskip\bigskipamount} \centerline{\hbox{\epsfxsize=4.0in\epsffile{ndimpm.eps}}} \centerline{\hbox{\epsfxsize=4.0in\epsffile{ndim00.eps}}} \parbox{5.5in}{\small \noindent Fig.
6: The cross sections are shown for (a) $\sigma_{+-+-}=\sigma_{+--+}$ and (b) $\sigma_{+-00}$ for the Standard Model background (solid) and for signal plus background (dashed) for $M_S=4$~TeV and the number of extra dimensions $n=2$, $4$, and $6$.} \vskip\bigskipamount} \vskip\bigskipamount} In Fig. (6) the effect of varying the number of extra dimensions $n$ is shown keeping the scale $M_S$ fixed at 4~TeV. We show only the most important modes, namely $\sigma_{+-+-}=\sigma_{+--+}$ in Fig. (6a) and $\sigma_{+-00}$ in Fig. (6b). The conclusion is that stronger bounds can be placed when $n$ is smaller. The strategy of choosing polarizations to optimize the signal over background is particularly simple for the process $\gamma\ga\to ZZ$. The graviton exchange signal requires opposite helicities for the initial state photons, so one should choose polarizations for the electron and positron beams as well as the laser beams to isolate this combination and to eliminate as much as possible the large background from $\sigma_{++++}$. We denote the polarizations of the electron ($e_1$), positron ($e_2$) and laser beams ($\gamma_1$ and $\gamma_2$) by $(P_{e_1},P_{\gamma _1},P_{e_2},P_{\gamma_2})$. At a photon-photon collider the luminosity is rather flat for the unpolarized case, and one achieves a peak in the luminosity just below the maximum energy by choosing opposite polarizations for the electron and laser photon, e.g. in the ideal case $P_{e_1}P_{\gamma _1}=-1$ and $P_{e_2}P_{\gamma _2}=-1$ (see for example Fig. (11) of Ref.~\cite{Telnov:1989sd}). Since one wants to look for a rapidly growing signal on top of a Standard Model background, clearly the optimal situation occurs when the luminosity is concentrated at the highest energies possible. In addition, to isolate the opposite photon helicity amplitudes, one wants to choose the polarizations such that $P_{e_1}=-P_{e_2}$ and $P_{\gamma _1}=-P_{\gamma _2}$.
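The effect of this choice follows from the $\frac{1}{2}(1\pm\xi_1\xi_2)$ weights in the folded cross section given earlier. The sketch below uses illustrative mean photon helicities $\xi_1,\xi_2$ near the spectrum peak (not computed from the full Compton formulas) just to show how the weights select the opposite-helicity channel:

```python
def helicity_weights(xi1, xi2):
    """Fractions of the photon-photon luminosity in the same- and
    opposite-helicity channels for mean photon helicities xi1, xi2."""
    same = 0.5 * (1.0 + xi1 * xi2)
    opposite = 0.5 * (1.0 - xi1 * xi2)
    return same, opposite

# With opposite-sign mean helicities near the peak, e.g. xi1 = +0.9 and
# xi2 = -0.9 (illustrative values), most of the luminosity feeds the
# M_{+-} channels that carry the graviton signal.
same, opp = helicity_weights(0.9, -0.9)
print(same, opp)   # 0.095 in the same-helicity background channel, 0.905 in M_{+-}
```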
Therefore we have assumed in the following analysis that the electron/positron beams can be polarized to 90\%, and that the photon-photon collider has the following polarization combinations \begin{eqnarray} P_{e_1}&=&-P_{e_2}=0.9\;, \nonumber \\ P_{\gamma _1}&=&-P_{\gamma _2}=-1\;. \end{eqnarray} This polarization setting will be denoted by the shorthand $(P_{e_1},P_{\gamma _1},P_{e_2},P_{\gamma_2})=(+,-,-,+)$. It was noticed in Ref.~\cite{Rizzo:1999sy} that this kind of polarization enhances the signal for the process $\gamma\ga\to W^+W^-$. This can be understood on the basis of our helicity amplitudes for $\gamma\ga\to ZZ$, which can be converted into helicity amplitudes for $\gamma\ga\to W^+W^-$ with minor modifications since both processes occur via only the $s$ channel. The Standard Model background for $\gamma\ga \to W^+W^-$ occurs at tree level rather than at one loop as it does for $\gamma\ga\to ZZ$, so the reach is expected to be higher in $W$ production since the interference of the signal with the background is crucial. The polarization setting that has the photon-photon luminosity peaking at the highest energy but gives predominantly backscattered photons with the same helicity is $(P_{e_1},P_{\gamma _1},P_{e_2},P_{\gamma_2})=(+,-,+,-)$. This polarization setting would be optimal for a case where a signal contributed to the helicity amplitudes ${\mathcal M}_{++\lambda_3\lambda_4}$. Thus this setting would be preferable for the $\gamma\ga\to\gamma\ga$ process, which is consistent with the results of the calculations in Ref.~\cite{Davoudiasl:1999di}. In Fig. (7) a comparison is made between the two polarization settings. One observes a noticeable improvement with the second polarization choice. For this choice we have determined the integrated luminosity required to observe at the 95\% confidence level a signal over the Standard Model background for three choices of $M_S$. This is shown in Fig. (8) for the case of $n=4$.
In particular, with an integrated luminosity of $100$~fb$^{-1}$, a linear collider with c.o.m. energy of $1$~TeV has a reach almost up to $M_S=4$~TeV. This determination of the experimental reach for the case of $\gamma\ga\to ZZ$ invites us to compare with the other diboson processes that have been considered previously. The reach is higher as expected for $\gamma\ga\to W^+W^-$ where the signal interferes with the much larger tree-level background\cite{Rizzo:1999sy}. While a strategy of exploiting the decay products might favor the $ZZ$ final state with respect to the $W^+W^-$ final state, it will not be enough to overcome the different level of background. Of course, for high enough energies the signals become comparable in size and the size of the backgrounds becomes irrelevant. The reach in $M_S$ is also slightly higher in $\gamma\ga\to\gamma\ga$ where contributions to the signal occur in the $t$ and $u$ channels as well as the $s$ channel. This larger signal in $\gamma\ga\to\gamma\ga$ wins out against the larger level of Standard Model background in $\gamma\ga\to ZZ$. In any event all of these channels should be studied to determine the universality of the graviton couplings and to test whether the signal behaves as one expects from the exchange of a spin-two particle. \vskip\bigskipamount} \centerline{\hbox{\epsfxsize=4.0in\epsffile{ee+-+-.eps}}} \centerline{\hbox{\epsfxsize=4.0in\epsffile{ee+--+.eps}}} \parbox{5.5in}{\small \noindent Fig. 7: The cross sections are shown for a photon-photon collider whose parent $e^+e^-$ collider has energy $\sqrt{s_{ee}}$ for the choice of polarizations (a) $(P_{e_1},P_{\gamma _1},P_{e_2},P_{\gamma_2})=(+,-,+,-)$ and (b) $(P_{e_1},P_{\gamma _1},P_{e_2},P_{\gamma_2})=(+,-,-,+)$, and for $M_S=3,4,5$~TeV. The number of extra dimensions is $n=4$.
The polarization in (a) favors backscattered photons with the same helicity while (b) favors backscattered photons with opposite helicities.} \vskip\bigskipamount} \vskip\bigskipamount} \centerline{\hbox{\epsfxsize=4.0in\epsffile{luminosity.eps}}} \parbox{5.5in}{\small \noindent Fig. 8: The luminosity required to detect a signal at the 95\% confidence level for $M_S=3,4,5$ TeV as a function of $\sqrt{s_{ee}}$ with the polarization choice $(P_{e_1},P_{\gamma _1},P_{e_2},P_{\gamma_2})=(+,-,-,+)$ as in Fig. 7(b). The number of extra dimensions is $n=4$.} \vskip\bigskipamount} {\section{Conclusions} The processes $\gamma \gamma \to VV$ where $VV=ZZ$ or $W^+W^-$ are interesting reactions in which to look for effects of low scale gravity. Unlike photon-photon scattering, $\gamma \gamma \to \gamma \gamma$, these cross sections occur only via $s$-channel exchange of gravitons. Due to the spin-two nature of the exchanged quanta, this results in nonzero matrix elements only when the initial photons have opposite helicities. Exploiting the ability of Compton backscattering to provide a hard spectrum of polarized photons, one can hope to isolate a signal. We can suggest an overall strategy for analyzing all of the modes $\gamma\ga\to VV$. Signals should be seen in all of the modes $\gamma\ga\to ZZ$, $\gamma\ga\to\gamma\ga$, and $\gamma\ga\to W^+W^-$ but should be absent in $\gamma\ga\to\gamma Z$. The modes that occur only in the $s$ channel, namely $\gamma\ga\to ZZ$ and $\gamma\ga\to W^+W^-$, should show a strong dependence on the polarization settings of the photon-photon collider since only the opposite helicity photons contribute to the signal. In particular the polarization setting $(P_{e_1},P_{\gamma _1},P_{e_2},P_{\gamma_2})=(+,-,-,+)$ will enhance the signal by simultaneously resulting in opposite sign backscattered photon helicities and a peak in the photon-photon luminosity at the highest energies.
The signal-to-background ratio $S/B$ for the photon-photon scattering process $\gamma\ga\to\gamma\ga$ should be less sensitive to the polarization setting. For this process, setting the polarizations to $(P_{e_1},P_{\gamma _1},P_{e_2},P_{\gamma_2})=(+,-,+,-)$ will enhance the sensitivity since the same photon helicity cross sections are larger than the opposite helicity cross sections. If a graviton exchange signal is ever seen, then the angular dependences can be studied in detail. The rapid rise in the signal cross section means that even modest enhancements in the photon-photon collider energy can yield dramatic improvements in the rates. \vspace{0.5cm} \section*{Acknowledgments} This work was supported in part by the U.S. Department of Energy under Grant No.~DE-FG02-91ER40661.
\section{Related Work}\label{sec:related} \subsection{Assumptions} \subsection{Choice of Utility Function for Resource Allocation Problem} \subsection{Value of Coalition} Table \ref{tab:util1} shows the utilities of different players in a $3$ player-$3$ application setting. Player one was assigned a linear utility while players two and three had sigmoidal utilities. It is evident from the table that the utility of the coalition improves when more players are added. The grand coalition has the highest utility, verifying the superadditive nature of the game. The last row shows the Shapley Values (S.V.) for the grand coalition. Our $\mathcal{O}(N)$ algorithm (alg2 in Figure \ref{fig:3playerm01}) provides the same value of coalition as the Shapley value (due to Pareto optimality); however, players are assigned different utilities in the coalition. Figure \ref{fig:3playerm01} shows the value of coalition for $3$ players, and $3$, $20$ and $100$ applications with $\mu$ set to $0.01$ and $10$. The grand coalition achieves the highest coalition utility for all three cases. As a higher value of $\mu$ (i.e., a steeper slope of the sigmoidal function) puts a stringent requirement on the edge clouds to satisfy requests of the applications if they are to gain any utility, we see that the overall value of coalition is smaller for $\mu=10$ when compared with $\mu=0.01$. Figure \ref{fig:all} shows the utility of a player without resource sharing and with resource sharing in the grand coalition in varying experimental settings ($\mu=0.01$, $\mu=10$, $M_n=20$, and $M_n=100$). We show the utility of the player in the grand coalition both using the Shapley value (SV) and our $\mathcal{O}(N)$ algorithm (alg2 in the figure). Similar trends are observed in Figure \ref{fig:all2}. However, we do not calculate the Shapley value for $N=10$ and $M_n=20$ due to its computational complexity; the utility of the players in the grand coalition is instead obtained using Algorithm $2$.
It is evident that all the players' utilities improve by sharing resources and taking part in the cooperative game. \begin{table}[] \centering \caption{Utility (Pay-off) for different coalitions with $\mu=0.01, N=3, M_n=3$} \begin{tabular}{|l|l|l|l|l|} \hline \multirow{2}{*}{\textbf{Coalition}} & \multicolumn{3}{c|}{\textbf{Player Utilities}} & \multirow{2}{*}{\textbf{Coalition Utility}} \\ \cline{2-4} & \textbf{$P_1$} & \textbf{$P_2$} & \textbf{$P_3$} & \\ \hline \{1\} & 36 & 0 & 0 & 36 \\ \hline \{2\} & 0 & 4.37 & 0 & 4.37 \\ \hline \{3\} & 0 & 0 & 4.31 & 4.31 \\ \hline \{12\} & 40.17 & 4.375 & 0 & 44.545 \\ \hline \{13\} & 40.31 & 0 & 4.313 & 44.623 \\ \hline \{23\} & 0 & 8.68 & 8.68 & 17.37 \\ \hline \{123\} $\mathcal{O}(N)$ & 44.68 & 8.68 & 8.68 & 62.06 \\ \hline \{123\} (S.V.) & 40.34 & 10.90 & 10.81 & 62.06 \\ \hline \end{tabular} \label{tab:util1} \end{table} \begin{figure} \includegraphics[width=0.55\textwidth]{figures/3players_m} \caption{Value of Coalition for $3$ players, and $3,20$ and $100$ applications with $\mu=0.01$ and $\mu=10$} \protect\label{fig:3playerm01} \end{figure} \subsection{Time complexity} The computational complexity of the Shapley value is high, which is why it cannot be used for a large number of players. We compared the performance of our $\mathcal{O}(N)$ algorithm with the Shapley value based allocation (given in Algorithm \ref{algo:alg1}) in a 3-player game with different numbers of applications. Experimental results showed that Algorithm \ref{algo:alg2} reduces the computation time by as much as 71.67\% and as little as 26.6\%, with an average improvement of about 49.75\%. Figure \ref{fig:time} shows the calculation time for different user-application settings with varying $\mu$. We see that in all the settings, our proposed algorithm outperforms the Shapley value based allocation.
\begin{figure} \includegraphics[width=0.55\textwidth]{figures/all} \caption{Utilities in different settings without and with resource sharing in grand coalition for $N=3$ } \protect\label{fig:all} \end{figure} \begin{figure} \includegraphics[width=0.55\textwidth]{figures/util_10_20_both} \caption{Player utilities with and without resource sharing in grand coalition for $N=10$ and $M=20$ } \protect\label{fig:all2} \end{figure} \begin{figure} \includegraphics[width=0.55\textwidth]{figures/time_all} \caption{Comparison of time Complexity} \protect\label{fig:time} \end{figure} \subsection{Game Theoretic Solution} The characteristic function $v$ for our game that solves the problem in~\eqref{eq:opt_higher} is given in~\eqref{eq:payoff_function}. We model the resource allocation and sharing problem (with multiple objectives) in the aforementioned settings as a canonical cooperative game with transferable utility. We choose a \emph{monotone non-decreasing utility function} for our resource allocation and sharing framework. This is because in edge computing, the more resources provided, the higher is the gain or utility for the edge cloud. It is highly unlikely that the utility of any edge cloud will decrease with an increase in the amount of resources allocated. Since the utility function used is monotone non-decreasing, the game is convex. The core of any convex game is large and contains the Shapley value \cite{han2012game}. Our goal is to obtain an allocation from the core, as all allocations in the core guarantee Pareto optimality and stability of the grand coalition, i.e., the allocation decision obtained is Pareto optimal and no player (edge cloud) will have the incentive to leave the grand coalition.
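The allocation reported in Table \ref{tab:util1} can be reproduced from the coalition utilities alone. The sketch below (our own illustrative code) computes the Shapley value by averaging marginal contributions over all $N!$ player orderings and then checks the core conditions $\sum_{i\in S}\phi_i \ge v(S)$; small differences from the quoted S.V. row reflect rounding in the table entries:

```python
from itertools import permutations

# Coalition utilities v(S) from Table 1 (mu = 0.01, N = 3, M_n = 3)
v = {frozenset(): 0.0,
     frozenset({1}): 36.0, frozenset({2}): 4.37, frozenset({3}): 4.31,
     frozenset({1, 2}): 44.545, frozenset({1, 3}): 44.623,
     frozenset({2, 3}): 17.37, frozenset({1, 2, 3}): 62.06}

def shapley(players, v):
    """Average each player's marginal contribution over all orderings."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

phi = shapley([1, 2, 3], v)
print(phi)  # approximately {1: 40.31, 2: 10.87, 3: 10.88}

# Core check: no coalition can improve upon the Shapley allocation
for S, val in v.items():
    assert sum(phi[p] for p in S) >= val - 1e-9
```

Efficiency ($\sum_i \phi_i = v(\mathcal{N})$) holds by construction, and the core check passing for every coalition illustrates the stability claim for this convex game.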
We first use the Shapley value, which requires solving $2^{N}-1$ optimization problems, to obtain an allocation from the core, and then propose a computationally efficient algorithm that also yields an allocation from the core, albeit without the fairness properties of the Shapley value. \begin{align} \label{eq:payoff_function} v(\mathcal{N}) = \sum_{n \in \mathcal{N}}\bigg(w_n u_n(\mathcal{X})+ \zeta_{n}\sum_{j \in \mathcal{N},j\neq n} u^{n}_j(\mathcal{X})\bigg) \end{align} \begin{comment} \begin{figure*}[!t] \normalsize \begin{align}\label{eq:convexityProof} & \sum_{\substack{\text{$n \in {S_2}$},\; \text{$C_{k}= \sum_{n \in {S_2}}C_{k}^{(n)} $} \\ \text{$\sum_{i} x_{ik}^{(n)}\leq C_{k}^{(n)} \; \forall k \in \mathcal{K},\; \forall n \in {S_2}$}\\ \text{$\sum_{j \in {S_2}}x_{ik}^{(j)} \leq R^{(n)}_{ik} \;\forall\; i\in \mathcal{M}, k \in \mathcal{K}, n \in {S_2}$}}}\bigg(w_n u_n(\mathcal{X})+ w'_{n}\sum_{k \in {S_2}, k\neq n} u^{j}_k(\mathcal{X})\bigg) + w_j u_j ^{(S_2)}(\mathcal{X}) + \zeta_j\sum_{l \in S_2, l \neq j} u^{j}_{l}(\mathcal{X}) - \nonumber \\ & \Bigg(\sum_{\substack{\text{$n \in {S_1}$},\; \text{$C_{k}= \sum_{n \in {S_1}}C_{k}^{(n)} $} \\ \text{$\sum_{i} x_{ik}^{(n)}\leq C_{k}^{(n)} \; \forall k \in \mathcal{K}, \; \forall n \in {S_1}$} \\ \text{$\sum_{j \in {S_1}}x_{ik}^{(j)} \leq R^{(n)}_{ik} \quad \forall\; i\in \mathcal{M}, k \in \mathcal{K}, n \in {S_1}$}}}\bigg(w_n u_n(\mathcal{X})+ w'_{n}\sum_{k \in {S_1}, k \neq n} u^{j}_k(\mathcal{X})\bigg) + w_j u_j ^{(S_1)}(\mathcal{X}) + \zeta_j\sum_{l \in S_1, l \neq j} u^{j}_{l}(\mathcal{X})\nonumber \Bigg)\\ &= \sum_{\substack{\text{$n \in {S_2}\backslash S_1$}, \text{$C_{k}= \sum_{n \in {S_2}\backslash S_1}C_{k}^{(n)} $} \\ \text{$\sum_{i} x_{ik}^{(n)}\leq C_{k}^{(n)} \; \forall k \in \mathcal{K},\; \forall n \in {S_2}\backslash S_1$} \\ \text{$\sum_{j \in {S_2 \backslash S_1}}x_{ik}^{(j)} \leq R^{(n)}_{ik} \quad \forall\; i\in \mathcal{M}, k \in \mathcal{K}, n \in {S_2 \backslash S_1}$}}}\bigg(w_n
u_n(\mathcal{X})+ w'_{n}\sum_{k \in {\{S_2 \backslash S_1\}}, k \neq n} u^{j}_k(\mathcal{X})\bigg) + w_j u_j ^{(S_2)}(\mathcal{X}) \nonumber \\ & + \zeta_j\sum_{l \in S_2, l \neq j} u^{j}_{l}(\mathcal{X}) - w_j u_j ^{(S_1)}(\mathcal{X}) - \zeta_j\sum_{l \in S_1, l \neq j} u^{j}_{l}(\mathcal{X}) \nonumber \\ & = v(S_2)- v(S_1)+ w_j u_j ^{(S_2)}(\mathcal{X}) + \zeta_j\sum_{l \in S_2, l \neq j} u^{j}_{l}(\mathcal{X}) - w_j u_j ^{(S_1)}(\mathcal{X}) - \zeta_j\sum_{l \in S_1, l \neq j} u^{j}_{l}(\mathcal{X}) \nonumber \\ & \geq v(S_2)- v(S_1) \end{align} \hrulefill \end{figure*} \end{comment} \begin{comment} \begin{align}\label{eq:convexityProof} & \sum_{\substack{\text{$n \in {S_2}$} \\ \text{$R_{av,k}= \sum_{n \in {S_2}}R_{av,k}^{(n)} $} \\ \text{$\sum_{i} a_{ik}^{(n)}\leq R_{av,k}^{(n)} \; \forall k \in \mathcal{K},\; \forall n \in {S_2}$}\\ \text{$\sum_{j \in {S_2}}a_{ik}^{(j)} \leq R^{(n)}_{req,ik} \;\forall\; i\in \mathcal{M}, k \in \mathcal{K}, n \in {S_2}$}}}\bigg(w_n u_n(\mathcal{A})+ w'_{n}\sum_{k \in {S_2}, k\neq n} u^{j}_k(\mathcal{A})\bigg) + w_j u_j ^{(S_2)}(\mathcal{A}) + \zeta_j\sum_{l \in S_2, l \neq j} u^{j}_{l}(\mathcal{A}) - \nonumber \\ & \Bigg(\sum_{\substack{\text{$n \in {S_1}$} \\ \text{$R_{av,k}= \sum_{n \in {S_1}}R_{av,k}^{(n)} $} \\ \text{$\sum_{i} a_{ik}^{(n)}\leq R_{av,k}^{(n)} \; \forall k \in \mathcal{K}, \; \forall n \in {S_1}$} \\ \text{$\sum_{j \in {S_1}}a_{ik}^{(j)} \leq R^{(n)}_{req,ik} \quad \forall\; i\in \mathcal{M}, k \in \mathcal{K}, n \in {S_1}$}}}\bigg(w_n u_n(\mathcal{A})+ w'_{n}\sum_{k \in {S_1}, k \neq n} u^{j}_k(\mathcal{A})\bigg) + w_j u_j ^{(S_1)}(\mathcal{A}) + \zeta_j\sum_{l \in S_1, l \neq j} u^{j}_{l}(\mathcal{A})\nonumber \Bigg)\\ &= \sum_{\substack{\text{$n \in {S_2}\backslash S_1$} \\ \text{$R_{av,k}= \sum_{n \in {S_2}\backslash S_1}R_{av,k}^{(n)} $} \\ \text{$\sum_{i} a_{ik}^{(n)}\leq R_{av,k}^{(n)} \; \forall k \in \mathcal{K},\; \forall n \in {S_2}\backslash S_1$} \\ \text{$\sum_{j \in {S_2 \backslash 
S_1}}a_{ik}^{(j)} \leq R^{(n)}_{req,ik} \quad \forall\; i\in \mathcal{M}, k \in \mathcal{K}, n \in {S_2 \backslash S_1}$}}}\bigg(w_n u_n(\mathcal{A})+ w'_{n}\sum_{k \in {\{S_2 \backslash S_1\}}, k \neq n} u^{j}_k(\mathcal{A})\bigg) + w_j u_j ^{(S_2)}(\mathcal{A}) \nonumber \\ & + \zeta_j\sum_{l \in S_2, l \neq j} u^{j}_{l}(\mathcal{A}) - w_j u_j ^{(S_1)}(\mathcal{A}) - \zeta_j\sum_{l \in S_1, l \neq j} u^{j}_{l}(\mathcal{A}) \nonumber \\ & = v(S_2)- v(S_1)+ w_j u_j ^{(S_2)}(\mathcal{A}) + \zeta_j\sum_{l \in S_2, l \neq j} u^{j}_{l}(\mathcal{A}) - w_j u_j ^{(S_1)}(\mathcal{A}) - \zeta_j\sum_{l \in S_1, l \neq j} u^{j}_{l}(\mathcal{A}) \nonumber \\ & \geq v(S_2)- v(S_1) \end{align} \end{comment} \begin{comment} \begin{enumerate} \item Is the core non-empty? \item If the core is non-empty and consists of different pay-off allocations, how do we choose any particular pay-off allocation vector? \end{enumerate} Below we attempt to answer these questions \begin{definition}[Balanced Collection] Let $\Omega$ be a collection of different coalitions. 
$\Omega$ is considered to be a \emph{balanced collection} if we can find non-negative weights $w_S$ for each $S \in \Omega$ such that for every player $n$ \begin{align}\label{eq:balancedcollection} \sum_{S \in \Omega, S \ni n}w_S =1 \end{align} \end{definition} \begin{definition}[Balanced Game] A game is called balanced if for every balanced collection $\Omega$, the payoff vector $\mathbf{x}$ will be in $V_{\mathcal{N}}$ if $\mathbf{x}_S \in V_S\; \forall S \in \Omega$ \end{definition} \begin{theorem}[Bondareva-Shapley Theorem]\label{thm:BST} The core of a game is non-empty if and only if the game is balanced \end{theorem} \begin{theorem} The core for canonical game with our characteristic function(in Equation~\eqref{eq:payoff_function}) is non-empty \end{theorem} \begin{proof} From the definition of characteristic functions for TU games (see \cite{han2012game} for details), $v: 2^{\mathcal{N}}$ \textrightarrow $\mathbb{R}$ with $v(\emptyset)=\emptyset$. This means that the total number of coalitions possible for $\mathcal{N}$ players is $2^{|\mathcal{N}|}-1$ (we ignore the empty set). For $\Omega$, the non-negative weights $w_S$ which satisfy the condition for balanced collection in Equation~\eqref{eq:balancedcollection} are given by $\frac{1}{2^{|\mathcal{N}|-1}}$. Hence $\Omega$ is a balanced collection. We also know that $v(S)\subseteq v(\mathcal{N})$ as $v(\mathcal{N})$ is defined over a larger feasible set due to increased amount of available resources. Leveraging the monotonicity property of the utility functions used, we know that any payoff vector $\mathbf{x}$ will be in $V_{\mathcal{N}}$ if $\mathbf{x}_S$\footnote{$\mathbf{x}_S$ is the projection of $\mathbf{x}$ onto the vector space $\mathbb{R}^{|S|}$} $\in V_S\; \forall S \in \Omega$. 
Hence our canonical game with characteristic function $v$ in Equation~\eqref{eq:payoff_function} is balanced and its core is non-empty (from Theorem \ref{thm:BST}) \end{proof} \end{comment} \begin{comment} So for our coalition game $(\mathcal{N},v)$, the Shapley value is \begin{align} \label{eq:ourshapleyconvex} \phi_n&=\frac{1}{|\mathcal{N}|!}\sum_{\pi \in \prod} \bigg( \sum_{\substack{\text{$n \in {O_{\pi, \pi(n)}}$} \\ \text{$R_{av,k}= \sum_{n \in {O_{\pi, \pi(n)}}}R_{av,k}^{(n)} $} \\ \text{$\sum_{i} a_{ik}^{(n)}\leq R_{av,k}^{(n)} \;\forall k \in \mathcal{K}, \; \forall n \in {O_{\pi, \pi(n)}}$} \\ \text{$$}\text{$\sum_{j \in {O_{\pi, \pi(n)}}}a_{ik}^{(j)} \leq R^{(n)}_{req,ik} \;\forall\; i\in \mathcal{M}, k \in \mathcal{K}, n \in O_{\pi, \pi(n)}$}}}\bigg(w_n u_n^{(\pi)}(\mathcal{A})+ w'_{n}\sum_{j \in {O_{\pi, \pi(n)}}, j \neq n} u^{n,(\pi)}_j(\mathcal{A})\bigg)- \nonumber \\ &\Bigg(\sum_{\substack{\text{$n \in {O_{\pi, \pi(n)-1}}$} \\ \text{$R_{av,k}= \sum_{n \in {O_{\pi, \pi(n)-1}}}R_{av,k}^{(n)} $} \\ \text{$\sum_{i} a_{ik}^{(n)}\leq R_{av,k}^{(n)} \; \forall k \in \mathcal{K}, \; \forall n \in {O_{\pi, \pi(n)-1}}$}}}\bigg(w_n u^{(\pi)}_n(\mathcal{A})+ w'_{n}\sum_{j \in {O_{\pi, \pi(n)-1}}, j \neq n} u^{n,(\pi)}_j(\mathcal{A})\bigg)\Bigg) \nonumber \\ &=\frac{1}{|\mathcal{N}|!}\sum_{\pi \in \prod} \bigg( \sum_{\substack{\text{$n \in {O_{\pi, \pi(n)}\backslash O_{\pi, \pi(n)-1} }$} \\ \text{$R_{av,k}= \sum_{n \in {O_{\pi, \pi(n)}\backslash O_{\pi, \pi(n)-1}}}R_{av,k}^{(n)} $} \\ \text{$\sum_{i} a_{ik}^{(n)}\leq R_{av,k}^{(n)} \;\forall k \in \mathcal{K}, \; \forall n \in {O_{\pi, \pi(n)}\backslash O_{\pi, \pi(n)-1}}$} \\ \text{$\sum_{j \in {O_{\pi, \pi(n)}\backslash O_{\pi, \pi(n)-1}}}a_{ik}^{(j)} \leq R^{(n)}_{req,ik} \;\forall\; i\in \mathcal{M}, k \in \mathcal{K}, n \in O_{\pi, \pi(n)}\backslash O_{\pi, \pi(n)-1}$} }}\bigg(w_n u^{(\pi)}_n(\mathcal{A})+ \nonumber \\ & w'_{n}\sum_{j \in \{{O_{\pi, \pi(n)}} \}, j \neq n} u^{n,(\pi)}_j(\mathcal{A}) + 
w'_{n}\sum_{j \in \{{O_{\pi, \pi(n)}} \}, j \neq n} u^{j,(\pi)}_n(\mathcal{A}) \bigg)\bigg) \nonumber \\ &=\frac{1}{|\mathcal{N}|!}\sum_{\pi \in \prod}\bigg(w_nu^{(\pi)}_n(\mathcal{A})+ w'_{n}\sum_{j \in \{{O_{\pi, \pi(n)}} \}, j \neq n} u^{n,(\pi)}_j(\mathcal{A}) +w'_{n}\sum_{j \in \{{O_{\pi, \pi(n)}} \}, j \neq n} u^{j,(\pi)}_n(\mathcal{A}) \bigg) \end{align} where Equation~\eqref{eq:ourshapleyconvex} provides the Shapley value for our convex game. It is evident that our Shapley value depends on the utility of the player $n$, the utility player $n$ achieves by allocating resources to players in $j \in O_{\pi, \pi(n)}, j \neq n $ and the utility player $j \neq n, j \in O_{\pi, \pi(n)}$ achieves by allocating resources to player $n$ in every possible permutation. \begin{comment} \begin{algorithm} \begin{algorithmic}[] \State \textbf{Input}: $tt$ \State \textbf{Output}: The optimal hit probabilities \State \textbf{Step $1$:} $t \leftarrow$0, $\boldsymbol{\lambda^{(t)}\leftarrow\lambda_0}$, $\boldsymbol{\mu^{(t)}\leftarrow\mu_0}$ \While{Equation \eqref{eq:gradient1} $\neq$ 0 \State \textbf{Step $2$}:$\boldsymbol{h_{il}^{(p)}[t]} \leftarrow \frac{w_i\beta^{|p|-l}}{\lambda_{l}[t-1]\Bigg(\prod_{\substack{q \neq p,\\ q:l\in\{1,\cdots,|q|\}}}(1-h_{il}^{(q)}[t-1])\Bigg)+\mu_{ip}[t-1]}. $ \State \textbf{Step $3$:} Update $\lambda$ and $\mu$ using \eqref{eq:update}, t $\leftarrow t+1$ \EndWhile \end{algorithmic} \caption{Iterative Distributed Algorithm} \label{algo:alg} \end{algorithm} \end{comment} Algorithm \ref{algo:alg1} provides an overview of our proposed approach. We calculate the Shapley value for the players and assign it to $\mathbf{u}(R,\mathcal{X})$. Finally to obtain $\mathcal{X}$, we take the inverse function of $\mathbf{u}$. As we are using monotonic utilities, we know that the inverses of the utilities exist. A fundamental issue with the Shapley value is its complexity. 
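The final step of Algorithm \ref{algo:alg1}, inverting the utilities, is straightforward whenever each $u_n$ is strictly increasing. A minimal sketch, assuming a hypothetical logarithmic utility (our illustration, not necessarily the utility used in the experiments):

```python
import math

# Hypothetical monotone utility u(x) = w * log(1 + x), w > 0, and its
# closed-form inverse. Any strictly increasing utility can be inverted
# the same way (numerically, e.g. by bisection, if no closed form exists).
def u(x, w):
    """Utility of receiving x >= 0 units of resource with weight w."""
    return w * math.log1p(x)

def u_inv(payoff, w):
    """Resource amount that yields the given utility value."""
    return math.expm1(payoff / w)

x = u_inv(u(5.0, 2.0), 2.0)  # round trip recovers the allocation x = 5.0
```

The inversion itself is cheap; the expense of the Shapley-based approach lies in the exponentially many coalition problems that must be solved beforehand.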
This motivates developing a more efficient algorithm to obtain an allocation from the core. \subsection{Reducing the Computational Complexity} To reduce computational complexity, we propose an algorithm (Algorithm \ref{algo:alg2}) that requires solving only $2N$ optimization problems rather than $2^{N}-1$. \begin{algorithm} \begin{algorithmic}[] \State \textbf{Input}: $R, C,$ and vector of utility functions of all players $\mathbf{u}$ \State \textbf{Output}: $\mathcal{X}$ \State \textbf{Step $1$:} $ \mathbf{u}(R,\mathcal{X}) \leftarrow$0, $ \mathcal{X} \leftarrow$0, $ \boldsymbol{\phi} \leftarrow$0 \State \textbf{Step $2$:} Calculate Shapley Value $\phi_n\;$ $\forall n \in \mathcal{N}$ \State \textbf{Step $3$:} $ \mathbf{u}(R,\mathcal{X}) \leftarrow \boldsymbol{\phi}$ \State \textbf{Step $4$:} $\mathcal{X}\leftarrow \mathbf{u}^{-1}$ \end{algorithmic} \caption{Shapley Value based Resource Allocation} \label{algo:alg1} \end{algorithm} \begin{algorithm} \begin{algorithmic}[] \State \textbf{Input}: $R, C,$ and vector of utility functions of all players $\mathbf{u}$ \State \textbf{Output}: $\mathcal{X}$ \State \textbf{Step $1$:} $ \mathbf{u}(R,\mathcal{X}) \leftarrow$0, $ \mathcal{X} \leftarrow$0, $ \boldsymbol{O_1} \leftarrow$0, $ \boldsymbol{O_2} \leftarrow$0 \State \textbf{Step $2$:} \For{\texttt{$n \in \mathcal{N}$}} \State $ {O_1^n} \leftarrow$ \texttt{Solution of the optimization problem in Equation \eqref{eq:opt_single}} \EndFor \State \textbf{Step $3$:} Update $R, C$ based on Step 2 \State \textbf{Step $4$:} \For{\texttt{$n \in \mathcal{N}$}} \State $ {O_2^n} \leftarrow$ \texttt{Solution of the optimization problem in Equation \eqref{eq:opt_single_j} with updated $R$ and $C$} \State Update $R$ and $C$ \EndFor \State \textbf{Step $5$:} $u_n(R,\mathcal{X})\leftarrow O_1^n+O_2^n$ $\forall n \in \mathcal{N}$ \State \textbf{Step $6$:} $\mathcal{X}\leftarrow \mathbf{u}^{-1}$ \end{algorithmic} \caption{$\mathcal{O}(N)$ algorithm for obtaining an allocation from the core}
\label{algo:alg2} \end{algorithm} \begin{subequations}\label{eq:opt_single_j} \begin{align} \max_{\mathcal{X}}\quad& \sum_{j\neq n}u_j^n(\mathcal{X}) \quad \forall n \in \mathcal{N} \label{eq:objsinglej}\\ \text{s.t.}\quad & \sum_{i \in \mathcal{M}\backslash\mathcal{M}_n} x_{ik}^{(n)}\leq C_{k}^{(n)}, \quad \forall k \in \mathcal{K}, \label{eq:singlefirstj} \displaybreak[0]\\ & x_{ik}^{(n)} \leq r^{(n)}_{ik}, \quad \forall\; i\in \mathcal{M}\backslash\mathcal{M}_n, k \in \mathcal{K}, \label{eq:singlesecondj} \displaybreak[1]\\ & x_{ik}^{(n)} \geq 0, \quad \forall\; i\in \mathcal{M}\backslash\mathcal{M}_n, k \in \mathcal{K}. \label{eq:singlethirdj} \displaybreak[2] \end{align} \end{subequations} \begin{theorem} The solution obtained from Algorithm \ref{algo:alg2} lies in the \emph{core}. \end{theorem} \begin{proof} We need to show that the utilities obtained in Step $5$ of Algorithm \ref{algo:alg2} are (a) individually rational and (b) group rational, and that (c) no player has an incentive to leave the grand coalition and form another coalition $S \subset \mathcal{N}$. \noindent\textit{Individual Rationality:} For each player $n \in \mathcal{N}$, the solution $O_1^n$ obtained by solving the optimization problem in \eqref{eq:opt_single} is the utility the player obtains by working alone, i.e., it is $v(\{n\})$. The value $O_2^n$ in Step 4 is either zero or positive, but cannot be negative due to the nature of the utility functions used, i.e., \begin{align} u_n(R,\mathcal{X})&=O_1^n+O_2^n \geq O_1^n, \quad \forall n \in \mathcal{N}. \nonumber \end{align} \noindent \textit{Group Rationality:} The value of the grand coalition $v(\mathcal{N})$ as per \eqref{eq:payoff_function} is the sum of the utilities $u_n$ and $u_j^n$. Steps 2, 4 and 5 of Algorithm \ref{algo:alg2} obtain the sum of these utilities as well. Hence the solution obtained from Algorithm \ref{algo:alg2} is group rational.
Furthermore, due to the superadditivity of the game and the monotone non-decreasing nature of the utilities, no player has an incentive to form a smaller coalition. Hence Algorithm \ref{algo:alg2} provides a solution from the core. \end{proof} \section{Introduction}\label{sec:intro} \input{01-intro} \section{Preliminaries}\label{sec:prelim} \input{preli} \section{System Model}\label{sec:sysmodel} \input{02-sysmodel} \subsection{Optimization Problem}\label{sec:opt_problem} \input{03-opt-problem} \subsection{Game Theoretic Solution}\label{sec:gametheory} \input{game-theory} \section{Experimental Results}\label{sec:exp_results} \input{04-exp-results} \section{Conclusions}\label{sec:conclusion} \input{05-conclusions} \bibliographystyle{unsrt} \subsection{Multi-Objective Optimization} For $m$ inequality constraints and $p$ equality constraints, MOO identifies a vector $\boldsymbol{x}^*=[x_1^*,x_2^*,\cdots ,x_t^*]^T$ that optimizes a vector function \begin{equation}\label{eq:vectorobjective} \centering \bar{f}(\boldsymbol{x})=[f_1(\boldsymbol{x}),f_2(\boldsymbol{x}),\cdots,f_N(\boldsymbol{x})]^T \end{equation} such that \begin{align}\label{eq:constraints} \centering & g_i(\boldsymbol{x})\geq 0, \quad i=1,2,\cdots,m, \\ &h_i(\boldsymbol{x})=0, \quad i=1,2,\cdots,p, \nonumber \end{align} where $\boldsymbol{x}=[x_1,x_2,\cdots ,x_t]^T$ is a vector of $t$ decision variables and the feasible set is denoted by $F$. The fundamental difference between single objective optimization (SOO) and MOO is that MOO involves a vector of objective functions rather than a single objective function. This means that we are no longer dealing with a single solution point but with a \emph{frontier} of solutions known as the \emph{Pareto frontier} or \emph{Pareto boundary} (see \cite{cho2017survey} for details).
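For a finite set of candidate solutions, the strongly non-dominated points can be extracted with a direct pairwise dominance check. A minimal sketch, assuming every objective is to be minimized:

```python
def pareto_front(points):
    """Return the strongly non-dominated points of a finite set,
    where each point is a tuple of objective values to be minimized."""
    def dominates(q, p):
        # q dominates p: q is no worse in every objective and differs
        # from p (hence strictly better in at least one objective).
        return q != p and all(qk <= pk for qk, pk in zip(q, p))
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

front = pareto_front([(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)])
```

Here $(3,4)$ is dominated by $(2,3)$ and $(5,5)$ by $(1,5)$, so only three points survive. For continuous feasible sets, as in~\eqref{eq:vectorobjective}, the frontier must instead be characterized analytically or approximated by sampling.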
Some basic definitions related to MOO are given below. \begin{definition}{Strongly Pareto non-dominated solution:} A feasible solution $x$ is strongly Pareto non-dominated if there is no $y\in F$ such that $f_n(y)\leq f_n(x)$ $\forall n=1,2,\cdots,N$ and $f_n(y)<f_n(x)$ for at least one $n$\footnote{This means that there is no other feasible solution that can improve some objectives without worsening at least one other objective.}. \end{definition} \begin{definition}{Weakly Pareto non-dominated solution:} A feasible solution $x$ is weakly Pareto non-dominated if there is no $y\in F$ such that $f_n(y)<f_n(x)$ $\forall n=1,2,\cdots,N.$ \end{definition} \begin{definition}{Pareto Improvement/Pareto Dominated Solution:} Given an initial allocation, if we can achieve a different allocation that makes at least one objective function better without hurting any other, then the change to the new allocation is called a \emph{Pareto improvement}, and the initial allocation is said to be \emph{Pareto dominated}. \end{definition} \begin{definition}{Pareto Optimality:} For any maximization problem, $\boldsymbol{x}^*$ is \emph{Pareto optimal} if the following holds for every $\boldsymbol{x} \in F$: \begin{align}\label{eq:paretoptimality} \bar{f}(\boldsymbol{x}^*)\geq\bar{f}(\boldsymbol{x}), \end{align} where $\bar{f}(\boldsymbol{x})=[f_1(\boldsymbol{x}),f_2(\boldsymbol{x}),\cdots,f_k(\boldsymbol{x})]^T$ and $\bar{f}(\boldsymbol{x}^*)=[f_1(\boldsymbol{x}^*),f_2(\boldsymbol{x}^*),\cdots,f_k(\boldsymbol{x}^*)]^T$. \end{definition} \begin{definition}{Strong Pareto Optimality:} A feasible solution $x$ is strongly Pareto optimal if it is strongly Pareto non-dominated. \end{definition} \begin{definition}{Weak Pareto Optimality:} A feasible solution $x$ is weakly Pareto optimal if it is weakly Pareto non-dominated.
\end{definition} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/opt.jpg} \vspace{-36pt} \caption{MOO with two objective functions} \protect\label{fig:pareto} \vspace{-12pt} \end{figure} Figure~\ref{fig:pareto} shows a minimization MOO problem with two objective functions $f_1$ and $f_2$; $\overleftrightarrow{ab}$ is the Pareto frontier, as it consists of all the Pareto-optimal solutions. \subsection{Cooperative Game Theory} Cooperative game theory helps us understand the behavior of rational players in a cooperative setting \cite{han2012game}. Players can make agreements among themselves that affect their strategies as well as their obtained payoffs or utilities. Below we provide some basic definitions related to cooperative game theory. \noindent\textit{\bf {Coalition Game \cite{han2012game}:}} Any coalition game can be represented by the pair $(\mathcal{N},v)$, where $\mathcal{N}$ is the set of players that play the game and $v$ is the mapping function that determines the utilities or payoffs received by the players in $\mathcal{N}$. \noindent\textit{\bf {Transferable Utility (TU):}} If the total utility of any coalition can be divided in any manner among the game players, then the game has a \emph{transferable} utility. \noindent\textit{\bf {Characteristic function:}} The characteristic function for a coalitional game with TU is a mapping $v: 2^{\mathcal{N}} \mapsto \mathbb{R}$ with $v(\emptyset)=0$. \noindent\textit{\bf {Superadditivity of TU games:}} A game with TU is said to be superadditive if the formation of larger coalitions is always desirable. Mathematically, \begin{equation}\label{eq:superadditivity} v(S_1 \cup S_2 )\geq v(S_1)+v(S_2) \quad \forall S_1, S_2 \subset \mathcal{N} \text{ s.t. } S_1\cap S_2 =\emptyset. \end{equation} \noindent\textit{\bf {Canonical Game:}} A coalition game is canonical if it is in characteristic form and is superadditive.
\noindent\textit{\bf {Group Rational:}} A payoff vector $\textbf{x}\in \mathbb{R}^\mathcal{N}$ for dividing $v(\mathcal{N})$ is group-rational if $\sum_{n \in \mathcal{N}}x_n=v(\mathcal{N})$. \noindent\textit{\bf {Individually Rational:}} A payoff vector $\textbf{x}\in \mathbb{R}^\mathcal{N}$ is individually rational if every player obtains a benefit at least as large as it would obtain acting alone, i.e., $x_n \geq v(\{n\}), \forall n\in \mathcal{N}$. \noindent\textit{\bf {Imputation:}} A payoff vector that is both individually and group rational is known as an imputation. \noindent\textit{\bf {Core:}} For any TU canonical game $(\mathcal{N},v)$, the core is the set of imputations in which no coalition $S\subset\mathcal{N}$ has an incentive to reject the proposed payoff allocation and deviate from the \emph{grand coalition}\footnote{Grand coalition means that all the players in the game form a single coalition.} to form a coalition $S$ instead. \par Any payoff allocation from the core is Pareto optimal, as is evident from the definition of the core. Furthermore, the grand coalition formed is stable. However, a core is not always guaranteed to exist. Even if a core exists, it might be too large, so finding a suitable allocation from it may not be easy. Furthermore, as seen from the definition, an allocation from the core may not always be fair to all the players. The \emph{Shapley value} can be used to address the aforementioned shortcomings of the core. Details about the Shapley value can be found in \cite{han2012game}.
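As an illustration, the definitions above can be checked numerically. The sketch below uses the rounded coalition values of Table~\ref{tab:util1} to verify that the Shapley-value allocation is an imputation in the core (the tolerance absorbs the rounding in the table):

```python
from itertools import combinations

# Coalition values taken from Table 1 (rounded), N = 3 players.
v = {frozenset({1}): 36.0, frozenset({2}): 4.37, frozenset({3}): 4.31,
     frozenset({1, 2}): 44.545, frozenset({1, 3}): 44.623,
     frozenset({2, 3}): 17.37, frozenset({1, 2, 3}): 62.06}

def in_core(x, v, players, tol=0.05):
    """x is in the core iff it is group rational (efficient) and no
    coalition S can secure more on its own than it receives under x."""
    if abs(sum(x[n] for n in players) - v[frozenset(players)]) > tol:
        return False  # not group rational
    return all(sum(x[n] for n in S) >= v[frozenset(S)] - tol
               for r in range(1, len(players))
               for S in combinations(players, r))

shapley_alloc = {1: 40.34, 2: 10.90, 3: 10.81}  # S.V. row of Table 1
```

For instance, any efficient allocation that gives player 1 less than $v(\{1\})=36$ violates individual rationality and is rejected by the check.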